TECH NEWS – Artificial intelligence has made great strides, but it’s time to think about safety…
OpenAI has taken AI to a whole new level, and the tech community is paying close attention as the company reaches new heights. According to a leading AI expert, OpenAI’s new model, o1, is not only better at reasoning but also far more capable of deceiving its users. The model can tackle and solve complex problems in a human-like way, following a deliberate problem-solving strategy. Yet despite its superior reasoning abilities, Apollo Research, an AI safety organization, has found that the o1 model is also better at lying.
A Business Insider article being discussed on Reddit features AI expert Yoshua Bengio, one of the pioneers of the field. He believes that stronger safety testing needs to be implemented to guard against the harmful consequences that the o1 model’s deceptive abilities could enable. “In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1’s case,” Bengio said.
Like many others, Bengio is concerned about the rapid pace of AI development and the urgent need for legislative safeguards. He points to California’s SB 1047 as a model: an AI safety bill that would impose strict safety requirements on powerful AI models and require companies to allow third-party testing to assess harm and address potential risks.
OpenAI, for its part, says that o1-preview is managed under its Preparedness Framework, which is designed to handle the risks that come with developing advanced AI models. The model is rated as medium risk, meaning the concerns around it are considered moderate. Bengio, however, emphasized that companies should demonstrate greater predictability before moving forward and deploying AI models without adequate safeguards. According to him, a regulatory framework is needed to ensure that AI is moving in the right direction.
It is therefore worth reflecting on what Bengio says.
Source: WCCFTech