Show simple item record

Alucinaciones de la inteligencia artificial: impacto en tecnología, política y sociedad

dc.creatorBarria Huidobro, Cristian
dc.date2024-06-30
dc.date.accessioned2025-03-18T16:53:18Z
dc.date.available2025-03-18T16:53:18Z
dc.identifierhttps://esdegrevistas.edu.co/index.php/rpod/article/view/4847
dc.identifier10.25062/2955-0289.4847
dc.identifier.urihttps://hdl.handle.net/20.500.14205/11520
dc.descriptionArtificial intelligence has become the leading force in technological development in recent years, undergoing rapid advancements that have allowed it to integrate into various aspects of human life. An AI’s ability to provide users with exactly what they need is a fundamental factor for solutions employing these technologies to become widespread and part of human daily life. However, the content generated by an AI can incorporate alterations that produce inaccurate or even false results. This phenomenon, known as AI hallucinations, presents significant challenges for the development of these technologies and generates political, economic, reputational, and personal impacts, among others.en-US
dc.descriptionLa inteligencia artificial se ha convertido en la principal protagonista del desarrollo tecnológico de los últimos años, experimentando avances vertiginosos que le han permitido integrarse en diversos aspectos de la vida humana. La capacidad de una IA para entregar al usuario exactamente lo que necesita es un factor fundamental para que soluciones que emplean estas tecnologías puedan masificarse y formar parte del día a día humano. No obstante, el contenido generado por una IA puede incorporar alteraciones que produzcan resultados imprecisos o incluso falsos. Este fenómeno conocido como alucinaciones de la IA representa importantes desafíos para el desarrollo de estas tecnologías y genera impactos políticos, económicos, reputacionales y personales, entre otros.es-ES
dc.formatapplication/pdf
dc.languagespa
dc.publisherSello Editorial ESDEGes-ES
dc.relationhttps://esdegrevistas.edu.co/index.php/rpod/article/view/4847/5286
dc.relation/*ref*/Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4), 53-53.
dc.relation/*ref*/Chen, Y., Fu, Q., Yuan, Y., Wen, Z., Fan, G., Liu, D., ... & Xiao, Y. (2023, October). Hallucination detection: Robustly discerning reliable answers in large language models. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (pp. 245-255).
dc.relation/*ref*/Cipolla, C. M., & Altan. (2015). Le leggi fondamentali della stupidità umana. Il Mulino.
dc.relation/*ref*/Dechter, R. (1986). Learning while searching in constraint-satisfaction problems. https://n9.cl/f8jd2
dc.relation/*ref*/Dziri, N., Milton, S., Yu, M., Zaiane, O., & Reddy, S. (2022). On the origin of hallucinations in conversational models: Is it the datasets or the models? arXiv preprint arXiv:2204.07931
dc.relation/*ref*/Floridi, L. (2020). AI and its new winter: From myths to realities. Philosophy & Technology, 33, 1-3.
dc.relation/*ref*/Fradkov, A. L. (2020). Early history of machine learning. IFAC-PapersOnLine, 53(2), 1385-1390.
dc.relation/*ref*/Gugerty, L. (2006). Newell and Simon's logic theorist: Historical background and impact on cognitive modeling. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 9, pp. 880-884). SAGE Publications.
dc.relation/*ref*/Hatem, R., Simmons, B., & Thornton, J. E. (2023). A Call to Address AI “Hallucinations” and How Healthcare Professionals Can Mitigate Their Risks. Cureus, 15(9).
dc.relation/*ref*/Haugeland, J. (1989). Artificial intelligence: The very idea. MIT Press.
dc.relation/*ref*/IBM. (2024, April 11). What are AI hallucinations? https://n9.cl/3tvd4
dc.relation/*ref*/Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., ... & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1-38.
dc.relation/*ref*/Lee, K., Firat, O., Agarwal, A., Fannjiang, C., & Sussillo, D. (2018). Hallucinations in neural machine translation. https://n9.cl/0s0vb
dc.relation/*ref*/Li, Z. (2023). The dark side of ChatGPT: legal and ethical challenges from stochastic parrots and hallucination. arXiv preprint arXiv:2304.14347
dc.relation/*ref*/Luo, J., Li, T., Wu, D., Jenkin, M., Liu, S., & Dudek, G. (2024). Hallucination Detection and Hallucination Mitigation: An Investigation. arXiv preprint arXiv:2401.08358
dc.relation/*ref*/Merriam-Webster. (n.d.). Hallucination. In Merriam-Webster.com dictionary. Retrieved May 3, 2024, from https://n9.cl/ay325n
dc.relation/*ref*/Muthukrishnan, N., Maleki, F., Ovens, K., Reinhold, C., Forghani, B., & Forghani, R. (2020). Brief history of artificial intelligence. Neuroimaging Clinics of North America, 30(4), 393-399.
dc.relation/*ref*/Newell, A., & Shaw, J. C. (1959). A variety of intelligent learning in a general problem solver. RAND Report P-1742. https://n9.cl/wy401
dc.relation/*ref*/Olivetti, E. (n.d.). Online Latin dictionary - Latin - English. Latin Dictionary. Olivetti Media Communication. https://n9.cl/pb4ft
dc.relation/*ref*/Rawte, V., Sheth, A., & Das, A. (2023). A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922
dc.relation/*ref*/Sah, S. (2020). Machine learning: a review of learning types. https://doi.org/10.20944/preprints202007.0230.v1
dc.relation/*ref*/Schmidhuber, J. (2022). Annotated history of modern AI and deep learning. arXiv preprint arXiv:2212.11279
dc.relation/*ref*/Siegel, R. K. (1977). Hallucinations. Scientific American, 237(4), 132-141.
dc.relation/*ref*/Su, W., Wang, C., Ai, Q., Hu, Y., Wu, Z., Zhou, Y., & Liu, Y. (2024). Unsupervised real-time hallucination detection based on the internal states of large language models. arXiv preprint arXiv:2403.06448
dc.relation/*ref*/Venkit, P. N., Chakravorti, T., Gupta, V., Biggs, H., Srinath, M., Goswami, K., ... & Wilson, S. (2024). "Confidently Nonsensical?": A Critical Survey on the Perspectives and Challenges of 'Hallucinations' in NLP. arXiv preprint arXiv:2404.07461
dc.relation/*ref*/West, D. M., & Allen, J. R. (2018). How artificial intelligence is transforming the world. https://n9.cl/ju4u5
dc.relation/*ref*/Xiao, W., Huang, Z., Gan, L., He, W., Li, H., Yu, Z., ... & Zhu, L. (2024). Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback. arXiv preprint arXiv:2404.14233
dc.rightsDerechos de autor 2024 Revista Estrategia, Poder y Desarrolloes-ES
dc.rightshttps://creativecommons.org/licenses/by-nc-nd/4.0es-ES
dc.sourceEstrategia, Poder y Desarrollo; Vol. 3 No. 5 (2024): Proyección y disrupción del poder en la estrategia y la geopolítica; 47-64en-US
dc.sourceEstrategia, Poder y Desarrollo; Vol. 3 Núm. 5 (2024): Proyección y disrupción del poder en la estrategia y la geopolítica; 47-64es-ES
dc.source2981-4863
dc.source2955-0289
dc.subjectalucinaciones de la inteligencia artificiales-ES
dc.subjectalucinaciones humanases-ES
dc.subjectdesarrollo tecnológicoes-ES
dc.subjectinteligencia artificiales-ES
dc.subjecttecnologíaes-ES
dc.subjectAI hallucinationsen-US
dc.subjectartificial intelligenceen-US
dc.subjecthuman hallucinationsen-US
dc.subjecttechnological developmenten-US
dc.subjecttechnologyen-US
dc.titleHallucinations of artificial intelligence: impact on technology, politics, and societyen-US
dc.titleAlucinaciones de la inteligencia artificial: impacto en tecnología, política y sociedades-ES
dc.typeinfo:eu-repo/semantics/article
dc.typeinfo:eu-repo/semantics/publishedVersion

