Facebook actually stopped harmful content from spreading.
The company said in a press release Monday that it removed or added a content warning to 1.9 million pieces of "ISIS and al-Qaeda" content in January through March — twice the amount it removed in the previous three months.
Supposedly, 99 percent of that content was removed because Facebook's technology and employees found it, not because users reported it.
"In most cases, we found this material due to advances in our technology, but this also includes detection by our internal reviewers," wrote Monika Bickert, Facebook's vice president of global policy management, and Brian Fishman, its global head of counterterrorism policy.
Facebook's counterterrorism team has grown from 150 people last June to 200, the company said. Overall, terrorist material posted to Facebook was typically removed within a minute, according to the press release.
Facebook also made the unusual (dare we say editorial?) decision to define terrorism on its platform: “Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”
No doubt cognizant that critics are sensitive about the social network's political leanings — in a congressional hearing with Mark Zuckerberg earlier this month, Sen. Ted Cruz (R-Texas) trotted out familiar, paranoid concerns about its supposed anti-conservative bias — Facebook also went out of its way to say its "definition is agnostic to the ideology or political goals of a group."
Terrorist organizations have used Facebook in the past to recruit new members, boast about attacks, and even share gruesome images of acts of violence, such as beheadings. The U.S. Department of Justice has claimed that ISIS uses Facebook, Twitter, and YouTube to target isolated young people in Europe, the United States, and Canada with recruitment messages.
Meanwhile, Facebook is still under attack for allowing the spread of propaganda and misinformation on its platform. It makes sense that it would want to show that technology and minor tweaks to staffing — as opposed to government regulation and changes to its business model — can prevent harmful messages, photos, and videos from going viral. Perhaps not coincidentally, Facebook will report its latest earnings this Wednesday.
"We’re under no illusion that the job is done or that the progress we have made is enough," the company wrote. "Terrorist groups are always trying to circumvent our systems, so we must constantly improve. Researchers and our own teams of reviewers regularly find material that our technology misses. But we learn from every misstep, experiment with new detection methods and work to expand what terrorist groups we target."