
OpenAI Collaborates with US Department of Defense on Cybersecurity Solutions

OpenAI Expands Its Reach to Contribute to National Security Initiatives

NEWS  AI  January 18, 2024  Reading time: 2 minutes

Max (RS editor)


OpenAI, the renowned artificial intelligence (AI) company responsible for creations like ChatGPT, has entered into a collaboration with the US Department of Defense (DOD) and the Defense Advanced Research Projects Agency (DARPA). This partnership focuses on the development of open-source cybersecurity tools tailored for government applications. The announcement comes on the heels of OpenAI's recent revision of its Terms of Service, eliminating the prohibition on AI involvement in "military and warfare" pursuits.

Bloomberg's recent report sheds light on OpenAI's participation in the AI Cyber Challenge (AIxCC), a DARPA initiative launched in 2023. Under AIxCC, leading AI companies, including Anthropic, Google, Microsoft, and OpenAI, will work with DARPA to contribute their cutting-edge technologies and expertise. The primary objective is to provide a platform for competitors to innovate and develop state-of-the-art cybersecurity systems.

According to the report, OpenAI's collaboration extends beyond cybersecurity, as discussions are underway with the US government to address critical domestic issues, notably the prevention of veteran suicide. Anna Makanju, Vice President of Global Affairs at OpenAI, emphasized that the company remains committed to ethical AI usage, maintaining a prohibition on the development of AI for weapons, property destruction, or harm to individuals.

Makanju commented:

"Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world."

OpenAI's recent endeavors align with a broader industry trend focusing on ethical AI application, particularly in sensitive areas like military and elections. The company has recently taken steps to prevent the misuse of its technology, addressing concerns related to the spread of misinformation and interference with democratic processes, such as the US Presidential election.

Sam Altman, CEO of OpenAI, emphasized the significance of safeguarding elections, acknowledging the anxiety surrounding them. This development follows Microsoft's recent challenges with its Bing AI, which faced accusations of providing false answers about the 2023 elections. Microsoft has responded by introducing a deepfake detection tool aimed at ensuring that political content created by parties, such as ads and videos, remains unaltered by AI technologies.

OpenAI continues to navigate the delicate balance between technological advancement and ethical responsibility. Its collaboration with the US Department of Defense (the Pentagon), even though officially aimed at contributing to national security, is an alarming signal that sheds light on the possible reasons behind the turmoil of recent months, which saw the ouster and subsequent reinstatement of Altman, with voices speaking of "catastrophic risks".
