Google engineers criticize OpenAI for slowing down AGI research



The development of Artificial General Intelligence (AGI) has been a topic of significant interest and concern in recent years. As the field advances, so do the risks and potential consequences of its misuse. In a recent podcast, François Chollet, a software engineer at Google, expressed his concerns about the current status of AGI research, specifically criticizing OpenAI for slowing down progress in the field.


OpenAI’s Influence on AGI Research

Chollet attributes the shift away from open research and collaboration to the influence of OpenAI. He believes that OpenAI’s focus on large language models has diverted resources and attention from other potential AGI research areas, effectively setting back progress by several years. This shift has led to a homogenization of research, where everyone is working on variations of the same thing, rather than exploring different directions.

The Impact on AGI Research

The impact of this shift is evident in the lack of progress in AGI research. In 2019, Chollet and Mike Knoop created a competition called ARC-AGI with a $1 million prize. The competition measures an AI system's ability to acquire new skills and efficiently solve novel, open-ended problems. Knoop said that despite 300 teams attempting ARC-AGI last year, state-of-the-art (SOTA) scores have only risen from an initial 20% to 34%, while humans score between 85% and 100%. This stagnation highlights the need for a broader approach to AGI research, rather than a narrow focus on large language models.

The Importance of Diversified Research

Chollet emphasizes the importance of diversified research in AGI. He believes that the early days of AI research were more productive because of the variety of different directions being explored. This diversity allowed for faster progress and a more comprehensive understanding of the field.


OpenAI’s Role in Slowing Down AGI Research

Chollet believes OpenAI's current focus on large language models has slowed progress toward AGI while also creating hype around these models. He further argues that OpenAI's approach has led to a “complete closing down of frontier research publishing,” making it difficult for researchers to explore new and innovative ideas.

OpenAI’s role in slowing down AGI research is complex and multifaceted. The company was founded as a nonprofit dedicated to developing AGI in a way that prioritized safety and avoided the risks associated with unaligned superintelligence. However, recent events have raised concerns about OpenAI’s priorities. The company’s CEO has announced plans to slow down the development of AI, which some see as a positive step toward ensuring the safety of AGI. Others, however, are skeptical about the feasibility and effectiveness of this approach, arguing that it may allow competitors to catch up and cost OpenAI its competitive advantage.



The debate around slowing down AGI research is contentious, with some experts arguing that it is necessary to ensure safety and others contending that it is neither feasible nor desirable. The challenges involved in aligning AGI with human values and preventing its misuse are significant, and the role of OpenAI in addressing these challenges remains uncertain.

The Need for Change

Chollet’s criticism of OpenAI highlights the need for a change in approach to AGI research. He believes that the current focus on large language models is not only slowing down progress but also limiting the potential of AGI. The goal of the ARC-AGI competition, which offers a $1 million prize, is to increase the number of researchers focusing on cutting-edge AGI research rather than fiddling with large language models.


OpenAI’s Response to the Accusations

OpenAI’s response to accusations of slowing down AGI research is that they are actively working on developing safe and beneficial artificial general intelligence (AGI). They emphasize the importance of caution and coordination in the development of AGI, particularly in ensuring safety and preventing misuse. OpenAI’s co-founder, John Schulman, has explicitly stated that AGI is coming fast and that companies should prepare for safe development and coordinated efforts.


Likewise, OpenAI’s co-founder and president, Greg Brockman, has acknowledged the importance of safety concerns and emphasized the balance between delivering on AGI's potential benefits and managing its risks. This suggests that the company is aware of the potential dangers and is actively working to address them. However, the disbandment of the superalignment team and the folding of its work into other research divisions have raised concerns about how safety is prioritized. The team’s co-leaders, Ilya Sutskever and Jan Leike, have criticized OpenAI for insufficient investment in understanding how to steer and control AI systems much smarter than humans, implying that the company may be prioritizing product development over safety.

Overall, OpenAI’s response to accusations of slowing down AGI research is complex and open to interpretation. While the company has acknowledged the importance of safety considerations, its actions have also raised concerns. The future direction of the company and its commitment to long-term safety remain uncertain.

Conclusion

The development of AGI is a complex and multifaceted issue that requires a comprehensive approach. François Chollet’s criticism of OpenAI serves as a reminder of the importance of diversified research and the need to move beyond the current focus on large language models. For its part, OpenAI says it is actively working on safe and beneficial artificial general intelligence. As the field continues to evolve, researchers and developers must pursue progress toward AGI while also ensuring the safety and responsible development of this technology. What do you think about the current path of OpenAI? Is its stated commitment to safety enough to answer the criticisms? Let us know your thoughts in the comment section below.

