Microsoft And Meta Are Secretive: Is Artificial Intelligence Dangerous Under Them?

TECH NEWS – A representative of each of the two US tech companies has gone on record before a British committee…


Unfortunately, it is all too common to see people dodge a legitimate question: instead of answering what was actually asked, they evade it or reach for the accusation card. A similar performance was given by Rob Sherman (Meta's vice president of policy and deputy chief privacy officer) and Owen Larter (Microsoft's director of global responsible AI public policy) before the House of Lords Communications and Digital Committee. They discussed LLMs (large language models) and some of the broader implications of artificial intelligence.

The chair of the committee, Baroness Stowell of Beeston, asked whether a company could recall an AI model, or stop its further use, if it was deemed dangerous. Sherman's answer was not quite a response to what was asked:

“I think it depends on what the technology is and how it’s being used…one of the things that’s very important is to think about these things up front before they’re released…there are a number of other things that we can do, for example, once a model is released, there’s a lot of work that what we call a deployer of the model has to do, so there’s not just one actor that’s responsible for deploying this technology… When we released Llama, [we] put out a responsible use guide that talks about the steps that a deployer of the technology can take to make sure that it’s being used safely, and that includes things like what we call fine-tuning, which is taking the model and making sure that it’s being used appropriately…and then also filtering on the output to make sure that when somebody uses it in a final capacity, that the model is being used responsibly and thoughtfully,” Sherman said.

Larter didn’t respond; the discussion covered a number of topics and was under time constraints, but Meta’s lack of a direct answer is typical of the company. As the session went on, neither of them could clearly explain what the discussion was about. Sherman had earlier raised the issue of the potential irresponsible use of artificial intelligence: “What are the risks that we’re thinking about, what are the tools that we have to assess whether those risks exist, and then what are the things we need to do to mitigate them?”

Both thought the discussion should continue, but neither was enthusiastic. Much of the time their response was simply that they were working on something, which was not very informative.

Source: PCGamer
