248 | Yejin Choi on AI and Common Sense
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Mon, August 28, 2023
Podchat Summary

Exploring the Capabilities and Limitations of Large Language Models in AI

Welcome to another episode of the Mindscape Podcast. In this episode, host Sean Carroll sits down with Yejin Choi, a computer science researcher, to delve into the fascinating world of large language models (LLMs) in artificial intelligence (AI). LLMs, such as ChatGPT, have garnered significant attention for their ability to generate human-like text. However, as Choi explains, what these models can and cannot actually do deserves a closer look.

The Creative Remixers

Choi highlights that LLMs are trained to predict the next word in a sentence based on vast amounts of text data. While they can generate fluent, coherent text that appears human-like, they lack a deep understanding of the meaning behind the words and concepts they produce. Their apparent creativity comes from remixing patterns and ideas found in the training data rather than from genuine comprehension.
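To make the training objective concrete, here is a minimal sketch of next-word prediction using the open-source Hugging Face transformers library and the small GPT-2 model. The prompt and model choice are illustrative assumptions, not anything specified in the episode:

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library and the small GPT-2 model. The prompt is invented
# for illustration; nothing here is from the episode itself.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_tokens, vocab_size)

# Turn the scores at the final position into a probability distribution
# over the whole vocabulary, then list the five most likely next tokens.
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = probs.topk(5)
for p, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item())!r}: {p.item():.3f}")
```

Text generation is just this step in a loop: pick a token from the distribution, append it to the prompt, and score again. Fluency falls out of the statistics of the training data, with no requirement that the model understand what the words mean.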

The Common Sense Conundrum

One of the major challenges with LLMs is their lack of common-sense reasoning. Choi explains that they struggle to extrapolate to unfamiliar contexts and often give incorrect or nonsensical answers, particularly to questions that require common-sense knowledge. Teaching common sense to LLMs is an active area of research, one that calls for new algorithms and architectures incorporating symbolic reasoning and a deeper model of the world.
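One simple way to probe for common sense is to check whether a model finds a sensible statement more likely than a nonsensical one. The sketch below, again assuming the Hugging Face transformers library and GPT-2, compares the average log-likelihood the model assigns to an invented pair of sentences; it illustrates the general idea, and is not a method from the episode or from Choi's benchmarks:

```python
# A loose sketch of a common-sense probe, assuming the Hugging Face
# `transformers` library and GPT-2. The two sentences are invented for
# illustration; they are not taken from Choi's work or the episode.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(sentence: str) -> float:
    """Average per-token log-probability the model assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return its cross-entropy loss,
        # which is the negative mean log-likelihood of the tokens.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

sensible = "If I leave ice cream in the sun, it will melt."
nonsense = "If I leave ice cream in the sun, it will freeze."
print("sensible:", avg_log_likelihood(sensible))
print("nonsense:", avg_log_likelihood(nonsense))
```

A model with a firm grasp of everyday physics would reliably score the sensible sentence higher; in practice, probes like this expose surprising failures, which is precisely the gap this line of research aims to close.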

Misinformation and Ethical Concerns

While LLMs have made significant advances, there are concerns about their potential misuse. Choi discusses how LLMs can be used to generate misinformation and deepfakes, raising questions about the spread of false information. Detecting and combating misinformation will require a combination of AI-based tools and platform-level interventions to ensure AI technologies are used responsibly and ethically.

Aligning AI with Human Values

The alignment of AI with human values and the potential risks of AI development are ongoing topics of debate. Increasing AI literacy and building safeguards are crucial steps in that direction. Choi emphasizes the importance of understanding the fundamental differences between AI and human intelligence: humans possess capabilities such as curiosity, creativity, the capacity to forget, and the drive to ask questions, none of which are easily replicated in AI systems.

The Exciting Interdisciplinary Journey

Choi concludes by highlighting the interdisciplinary nature of AI research, which draws on philosophy, psychology, art, journalism, and politics, fields that both challenge and excite researchers. This breadth allows for a deeper understanding of human intelligence and helps shape AI technologies that align with human values and needs.

Tune in to this thought-provoking episode as Carroll and Choi explore the capabilities and limitations of large language models in AI, shedding light on the challenges and potential of this rapidly evolving field.

Original Show Notes

Over the last year, AI large-language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different contexts. But the way these LLMs "learn" is very different from how human beings learn, and the same can be said for how they "reason." It's reasonable to ask, do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they're still shockingly stupid.

Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/08/28/248-yejin-choi-on-ai-and-common-sense/

Support Mindscape on Patreon.

Yejin Choi received a Ph.D. in computer science from Cornell University. She is currently the Wissner-Slivka Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2 overseeing the project Mosaic. Her honors include a MacArthur Fellowship and election as a fellow of the Association for Computational Linguistics.


