Should AGI Really Be the Goal of Artificial Intelligence Research?
The Sunday Show - A podcast by Tech Policy Press

The goal of achieving “artificial general intelligence,” or AGI, is shared by many in the AI field. OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work,” and last summer, the company announced its plan to achieve AGI within five years. While experts at other companies, such as Meta and Anthropic, quibble with the term, many AI researchers treat AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out “Levels of AGI,” identifying key principles and definitions of the term.

Today’s guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include:

- Eryk Salvaggio, a visiting professor in the Humanities, Computing, and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;
- Borhane Blili-Hamelin, an independent AI researcher and currently a data scientist at the Canadian bank TD; and
- Margaret Mitchell, chief ethics scientist at Hugging Face.