OpenAI outlines national security approach

OpenAI doubled down on the importance of the U.S. maintaining leadership in artificial intelligence development in a new outline of the company’s approach to national security.

The ChatGPT maker, in a blog post Thursday, laid out how the company sees its role in national security following the Biden administration’s national security memorandum on AI.

The memo, signed by President Biden, marked the first-ever national security memorandum on AI. It encouraged government agencies to harness AI systems to maintain an edge over foreign adversaries and boost national security, while still stressing the importance of safe deployment.

OpenAI said it views the memo as an “important step forward” in ensuring AI benefits the “most people possible” in a way that “upholds democratic values.”

“AI is a transformational technology that can be used to strengthen democratic values or to undermine them. That’s why we believe democracies should continue to take the lead in AI development, guided by values like freedom, fairness, and respect for human rights,” the post said.

“And it’s why we think countries that share these values should understand how, with the proper safeguards, AI can help protect people, deter adversaries, and even prevent future conflict,” the company continued.

The AI developer noted there are national security use cases that already align with its mission, pointing to its collaborations with the Defense Advanced Research Projects Agency (DARPA) and the U.S. National Laboratories.

The company said it evaluates potential national security partnerships with a framework prioritizing “democratic values, safety, responsibility and accountability.”

These uses also come with the need for guardrails, OpenAI said, referencing the company’s usage policies, which ban the use of its technology “to harm people, destroy property, or develop weapons.”

Earlier this month, the tech giant said it continues to see cybercriminals attempting to use its AI models to generate fake content aimed at interfering with this year’s elections.

“We believe the U.S. government and U.S. companies like ours have an opportunity to take the lead on setting norms around how AI is safely and responsibly used in the national security context, just like we’re leading the development of the technology itself,” the company wrote. “As we explore potential partnerships with the U.S. government and allies, we want to help set those norms with transparency and care.”

The conversation over guardrails on AI has spread across government in recent months, including in Congress, where lawmakers have held various hearings and meetings with experts to understand the risks and benefits of the technology. 

While OpenAI touts its commitment to safety, recent leadership changes and the company’s plans to restructure as a for-profit business have some experts questioning whether it will depart from those values.
