TECH NEWS – A Republican U.S. senator has written to Mark Zuckerberg after learning that Meta's AI chatbots could strike a romantic and emotional tone with minors.
This week, Meta landed in hot water after Reuters revealed that its internal guidelines allowed romantic and sensual chats with children. A Meta spokesperson claimed these guidelines did not reflect the company's AI policy and said they were removed once Reuters raised the issue. That response, however, was not enough to satisfy Josh Hawley, a Republican senator from Missouri.
In his letter to Zuckerberg, Hawley demanded that Meta preserve all relevant records and submit documents so that Congress can investigate these troubling practices. Hawley cited concerning passages from Meta's previous AI guidelines, calling them reprehensible and outrageous. He said that, without strong restrictions in place, the guidelines reflect an arrogant attitude toward the real risks generative AI poses to children's development.
Hawley announced that the Senate Judiciary Committee's Subcommittee on Crime and Terrorism, which he chairs, will open an investigation into Meta. The subcommittee will examine whether Meta misled the public or regulators about its safety measures, and whether its AI products have led to crimes or other harm affecting minors. To assist the investigation, Meta must provide all drafts and versions of its "GenAI: Content Risk Standards" policy.
Additionally, the company must provide information on all AI products covered by the policy, as well as the safeguards designed to protect children. Meta must submit documentation on how it prevents, detects, and blocks romantic or sensual conversations with users under 18, and what steps it takes when a user's age is unknown. The company must also hand over its correspondence regarding child safety with advertisers, Congress, the FTC, and other stakeholders. The subcommittee will also review documents outlining who at Meta is responsible for drafting AI safety standards and for changing or removing them.
Chatbot-related harm is not limited to Meta's policies toward minors. Other cases have surfaced, including that of a 60-year-old man who poisoned himself after following dietary advice from OpenAI's ChatGPT, and that of a woman who filed for divorce after ChatGPT convinced her that her husband was cheating.
Source: WCCFTech