The Defeat of the Winograd Schema Challenge
Data Skeptic
Mon, September 11, 2023
Podchat Summary

Episode Description: The Winograd Schema Challenge and Its Defeat by Large Language Models

In this episode, our guest delves into the Winograd Schema Challenge and its recent defeat by large language models. The challenge is built around pairs of sentences that differ by only a word or two, yet that small change flips which noun an ambiguous pronoun refers to, so resolving the pronoun requires common-sense knowledge rather than surface patterns. For example, in "The trophy doesn't fit in the suitcase because it is too big," the word "it" refers to the trophy, but change "big" to "small" and it refers to the suitcase. Our guest sheds light on the significant progress these language models have made in resolving such pronouns.

While the triumph over the Winograd Schema Challenge is noteworthy, our guest also highlights the limitations of language models on broader common-sense reasoning problems. They emphasize the need for new metrics and benchmarks to properly measure and drive further progress in these models' capabilities.

However, it is crucial to note that the defeat of the Winograd Schema Challenge does not mark a major milestone in the field of artificial general intelligence (AGI). Our guest emphasizes that the path to AGI remains uncertain and complex.

As the conversation wraps up, our guest notes that he prefers not to be followed online, adding a personal touch to the episode.

Original Show Notes

Our guest today is Vid Kocijan, a Machine Learning Engineer at Kumo AI. Vid has a Ph.D. in Computer Science from the University of Oxford. His research focused on common-sense reasoning, pre-training of large language models, pre-training for knowledge base completion, and how such pre-training impacts societal bias. He joins us to discuss how he built a BERT model that solved the Winograd Schema Challenge.
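To make the BERT angle concrete, below is a minimal sketch, not Vid's actual pipeline, of how a masked language model can be asked to resolve a Winograd schema: the ambiguous pronoun is replaced with a [MASK] token and the model's probabilities for the two candidate referents are compared. The model name, helper function, and example sentence are illustrative assumptions, not details taken from the episode.

```python
# Sketch: scoring a Winograd schema with a masked language model.
# Assumes the Hugging Face `transformers` library and PyTorch are installed;
# "bert-base-uncased" is an illustrative model choice.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def score_candidates(masked_sentence, candidates):
    """Return the probability of each single-token candidate at the [MASK] position."""
    inputs = tokenizer(masked_sentence, return_tensors="pt")
    # Locate the [MASK] token in the encoded input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos], dim=-1)
    return {c: probs[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}

# The pronoun "it" is replaced by "the [MASK]"; swapping "big" for "small"
# should flip which candidate the model prefers.
print(score_candidates(
    "The trophy does not fit in the suitcase because the [MASK] is too big.",
    ["trophy", "suitcase"],
))
```

Comparing masked-token probabilities needs no task-specific classification head, which is what makes a plain pre-trained model usable as a starting point for this kind of evaluation.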
