According to new internal documents reviewed by NPR, Meta is allegedly planning to replace human risk assessors with AI as the company edges closer to complete automation.
Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including algorithm updates and safety features, as part of a process known as privacy and integrity reviews.
But in the near future, these essential assessments may be taken over by bots, as the company looks to automate 90 percent of this work using artificial intelligence.
Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now rolling out the technology for decisions on AI safety, youth risk, and integrity, which includes misinformation and violent content moderation, NPR reported. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, with engineers taking on greater decision-making powers.
While the automation may speed up app updates and developer releases in line with Meta's efficiency goals, insiders say it may also pose a greater risk to billions of users, including unnecessary threats to data privacy.
In April, Meta's oversight board published a series of decisions that simultaneously validated the company's stance on allowing "controversial" speech and rebuked the tech giant for its content moderation policies.
"As these changes are being rolled out globally, the Board emphasizes it is now essential that Meta identifies and addresses adverse impacts on human rights that may result from them," the decision reads. "This should include assessing whether reducing its reliance on automated detection of policy violations could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts."
Earlier that month, Meta shuttered its human fact-checking program, replacing it with crowd-sourced Community Notes and relying more heavily on its content-moderating algorithm — internal tech that is known to miss and incorrectly flag misinformation and other posts that violate the company's recently overhauled content policies.