Key Points:
- OpenAI forms Safety and Security Committee after high-profile resignations over safety concerns.
- The committee, led by CEO Sam Altman, will review current safeguards and recommend improvements within 90 days.
- The move follows the resignations of OpenAI's Chief Scientist and a senior safety lead over disagreements about prioritizing long-term AI safety research.
After recent controversy and high-profile resignations, OpenAI, the leading artificial intelligence research company, has announced the formation of a new Safety and Security Committee. The committee, to be led by OpenAI CEO Sam Altman, comes amid concerns about the organization's commitment to the responsible development of AI.
This move follows the resignation of Ilya Sutskever, OpenAI's former Chief Scientist and a leading figure in the field of AI safety. Sutskever's resignation, along with that of Jan Leike, who co-led the company's long-term safety efforts with him, was reportedly driven by disagreements over the allocation of resources towards long-term safety research.
The controversy deepened after actress Scarlett Johansson publicly clashed with the company, saying that a voice featured in its powerful new language model sounded strikingly similar to her own after she had declined to lend her voice to it.
The newly formed Safety and Security Committee will consist of six OpenAI employees along with three members of the board of directors, including Altman himself. The committee's primary function will be to conduct a thorough 90-day review of the safeguards currently in place for OpenAI's AI models. Following the review, the committee will deliver a report with recommendations for improvement.
“OpenAI remains committed to the safe and ethical development of artificial intelligence,” said Altman in a statement. “The formation of this committee reflects our commitment to ensuring that our work is conducted in a responsible manner, with safety always at the forefront of our considerations,” he added.
Whether this committee can effectively address the concerns raised by Sutskever, Leike, and Johansson remains to be seen, as does whether OpenAI can regain the trust of the public and the AI research community.