A Mysterious Letter And A Dangerous AI Breakthrough That Poses Risks To Humanity. What's Behind The Recent OpenAI Events?
Today we find ourselves at a surprising new chapter of the OpenAI saga, one that sits on the borderline between reality and science fiction. Before OpenAI CEO Sam Altman's recent ouster, a group of researchers within the organization reportedly penned a letter to the board of directors. The letter, according to sources familiar with the matter, raised concerns about a groundbreaking artificial intelligence discovery that could pose risks to humanity.
The undisclosed letter, coupled with the emergence of a powerful AI algorithm known as Q* (pronounced "Q-Star", a name undoubtedly reminiscent of a well-known series of weird conspiracy theories that have spread in recent years), reportedly played a central role in the decision to oust Altman, a prominent figure in generative AI. Before Altman's return after a four-day hiatus, more than 700 employees had reportedly threatened to leave OpenAI and join backer Microsoft in solidarity with their terminated leader.
Among the board's reasons for Altman's dismissal, the letter and concerns over the hasty commercialization of technological advances without a full grasp of their potential consequences were reportedly key factors. Unfortunately, Reuters (which exclusively reported these facts) could not independently review the letter, and the researchers who composed it did not respond to requests for comment (another typical element of conspiracy stories, just saying...).
Incidentally, following inquiries, OpenAI, while declining to comment publicly, internally acknowledged both a project referred to as Q* and a letter to the board preceding the recent events. An internal message, sent by long-time executive Mira Murati, alerted staff to the media stories without confirming their accuracy.
Some within OpenAI see Q* as a potential breakthrough in the pursuit of artificial general intelligence (AGI), which OpenAI has defined as autonomous systems that surpass humans in most economically valuable tasks. Such systems hold the promise of advanced reasoning capabilities akin to human intelligence.
Q* demonstrated its computational prowess by solving specific mathematical problems, albeit at the level of grade-school students. While currently limited to mathematical tasks, researchers view this as a significant step, as mastering math – an area with unequivocal right answers – suggests broader reasoning capabilities for AI. This could have applications in novel scientific research, according to AI researchers.
The letter to the board reportedly emphasized both the prowess and potential dangers of AI, though specific safety concerns were not disclosed. The longstanding debate within the computer science community about the risks associated with highly intelligent machines, particularly their potential to act against humanity's interests, remains a focal point of discussions.
Additionally, researchers highlighted the work of an "AI scientist" team, formed by merging the earlier "Code Gen" and "Math Gen" teams. This group explored optimizing existing AI models to enhance their reasoning and eventually perform scientific tasks. Sam Altman, recognized for propelling ChatGPT's rapid growth, attracted significant investments and computing resources from Microsoft to advance AGI (Artificial General Intelligence). Despite Altman's recent announcements of new tools and his optimism about major AI advances, the board decided to terminate his leadership.
The sudden change in leadership at OpenAI, and the subsequent reinstatement of Altman and Brockman, underscores the complex intersection of AI breakthroughs, ethical considerations, and potential societal impact, leaving the tech community and the public eagerly anticipating further developments in this unfolding story.
SOURCE: REUTERS | COVER IMAGE BY FABRIKASIMF ON FREEPIK / REVIEW SPACE