OPENAI'S SENIOR ADVISER WARNS NO ONE IS READY FOR HUMAN-LEVEL AI, EXITS AMID SAFETY SHAKEUP
As artificial general intelligence (AGI), an AI system with human-level intellectual capability, draws closer, remarks from Miles Brundage, OpenAI's senior adviser for AGI readiness, suggest that neither the world nor AI companies, including OpenAI itself, are adequately prepared.
Having spent more than half a decade at the company shaping its AI safety initiatives, Brundage announced his departure with a blunt warning: his exit letter underscores how unprepared the world is for the advent of AGI.
Brundage's farewell follows a series of high-profile exits from OpenAI's safety teams. Prominent figures, including Jan Leike and co-founder Ilya Sutskever, have left OpenAI, the latter to launch his own AI start-up.
Following a substantial $6.6 billion investment round, OpenAI is reportedly under mounting pressure to convert from a non-profit into a for-profit public benefit corporation within just two years. Observers anticipate radical changes to the organization's value structure, and not necessarily for the better.
The company's shift toward commercialization, a trend Brundage dates to 2019, has long been a cause for concern for him. He cited intensifying constraints on his research freedom as a core reason for his departure, a sentiment echoed by several of his peers and indicative of deeper, persistent cultural issues within the organization.
According to Brundage, what is urgently needed is a stronger set of independent voices in AI policy discussions, separated from industry biases and conflicts of interest, so that safety and ethical guidelines are not subordinated to profits. The rise of a product-focused culture at OpenAI has instead bred dissent, leaving many researchers caught between commercial priorities and their safety work.
Despite these tensions, OpenAI has offered to support Brundage's future work, providing funding, API access, and early model access, with no contractual obligations attached.
The episode is a potent reminder of the growing tension between the pursuit of commercially viable AI products and the urgent need to develop safe and ethical AI frameworks. As the AI landscape continues to shift, the world must grapple with the systemic changes needed to foster a safe yet innovative technological future; failing to do so risks being caught unprepared for AGI, a prospect whose implications are not yet fully understood.