



OpenAI Establishes AI Risk Assessment Team to Safeguard Against Catastrophic Risks

A Strategic Move to Ensure Responsible AI Development at OpenAI

NEWS · AI · December 20, 2023 · Reading time: 2 minutes

Max (RS editor)


After Sam Altman's return to OpenAI, the renowned American artificial intelligence company has unveiled plans to establish a dedicated team tasked with assessing the "catastrophic risks" associated with its evolving AI models. The decision comes as part of OpenAI's commitment to responsible AI development and reflects a proactive approach to addressing potential threats arising from advanced artificial intelligence systems.

In its announcement, OpenAI articulated its mission to advance the scientific understanding of catastrophic risks linked to artificial intelligence. The company emphasized that the current state of research on this critical subject falls short of what is necessary. Consequently, OpenAI plans not only to evaluate risks but also to formulate comprehensive guidelines to mitigate potentially catastrophic uses of its AI technology.

The newly formed monitoring and evaluation team will be at the forefront of this initiative, focusing on assessing the risk levels associated with AI models under development. Each model will be assigned a risk score of "low", "medium", "high", or "critical" in each of four distinct categories, and only models rated "medium" or lower in every category are deemed suitable for deployment (a minimal sketch of this gating rule follows the list below).

The four categories to be examined by the risk assessment team are:

  1. Cybersecurity and Large-Scale Cyber Attacks: Evaluating the model's susceptibility to cyber threats and its potential to carry out large-scale cyber attacks.
  2. Propensity for Weaponization: Analyzing the model's capacity to assist in creating chemical or nuclear weapons, thereby addressing concerns related to the weaponization of AI.
  3. Persuasive Power and Behavioral Influence: Assessing the model's persuasive power and its ability to influence human behavior, acknowledging the ethical implications of AI influence.
  4. Autonomy and Developer Control: Examining the potential autonomy of the AI model and its capability to escape the control of developers, highlighting the importance of maintaining oversight and control.
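
To make the deployment gate concrete, here is a minimal, hypothetical sketch in Python of how a per-category scoring rule like the one described above might be expressed. The `RiskLevel` ordering, the category names, and the `deployable` helper are illustrative assumptions for this article, not OpenAI's actual implementation.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    # Ordered so that a higher value means higher risk.
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four categories tracked by the risk assessment team
# (names are shorthand for this sketch).
CATEGORIES = ("cybersecurity", "weaponization", "persuasion", "autonomy")

def deployable(scores: dict[str, RiskLevel]) -> bool:
    """A model is deployable only if every category scores 'medium' or lower."""
    return all(scores[c] <= RiskLevel.MEDIUM for c in CATEGORIES)

# Example: a single 'high' score in any one category blocks deployment.
example = {
    "cybersecurity": RiskLevel.LOW,
    "weaponization": RiskLevel.MEDIUM,
    "persuasion": RiskLevel.HIGH,
    "autonomy": RiskLevel.LOW,
}
print(deployable(example))  # False
```

Using an ordered enum makes the "medium or lower" threshold a simple comparison, and requiring the condition in all four categories means one high-risk dimension is enough to block a model, which matches the gating rule described above.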

To further bolster this initiative, the Safety Advisory Group will play a crucial role in reviewing the identified risks. This group will then provide recommendations to OpenAI's CEO, Sam Altman, or a designated representative. The establishment of this thorough risk assessment process underscores OpenAI's commitment to the responsible and ethical development of artificial intelligence, demonstrating a clear intent to stay ahead of potential challenges and to ensure that AI technology is harnessed for the betterment of society.

As OpenAI takes these proactive steps, the broader AI community will undoubtedly be watching closely, and this move is expected to set a precedent for responsible AI development practices across the industry.

 COVER IMAGE BY FREEPIK 
