In recent years, governments have pressured tech companies like Facebook, Twitter, and Google to do more to combat online radicalization on social-media platforms, which terrorist groups have used to recruit and spread propaganda.
A joint anti-terror campaign launched by the U.K. and France earlier this year considered imposing fines on social-media companies that “fail to take action” against violent content and terrorist propaganda.
Partially in response to those pressures, Facebook, Microsoft, Twitter, and YouTube launched the Global Internet Forum to Counter Terrorism this summer, an extension of existing anti-terror measures intended to improve technology for detecting terrorist material online and provide counter-narratives for potential terrorist recruits.
It’s unclear how effective the group has been so far, but at least one company involved in the partnership says it has had success in seeking out and removing terrorist activity on its platform. In its biannual transparency report, Twitter announced on Tuesday that it had removed 299,649 accounts for the promotion of terrorism in the first half of 2017, and 935,897 accounts between August 2015 and June 2017.
Notably, Twitter said its own internal controls have allowed it to weed out accounts without requests from the government, increasing its efficiency: 75 percent of the accounts Twitter has removed were suspended before even posting their first tweet, Twitter says, and 95 percent of account suspensions “were the result of our internal efforts to combat this content with proprietary tools.”
Automated tools are a necessity for a platform like Twitter, with its 328 million users. Facebook and YouTube have likewise adapted algorithms to combat extremist content rather than removing it manually, which would likely prove an insurmountable task at their scale.
“In the last six months we have seen our internal, spam-fighting tools play an increasingly valuable role in helping us get terrorist content off of Twitter,” a Twitter spokesperson told TechCrunch. “Our anti-spam tools are getting faster, more efficient, and smarter in how we take down accounts that violate our T.O.S.”
Of course, terror-related accounts make up only a tiny fraction of the abusive content reported to Twitter. The company said that 98 percent of government takedown requests were related to “abusive behavior,” and that it took action 13 percent of the time.
Those takedowns, in addition to other new tools to combat online harassment, have helped to detoxify Twitter in recent months, creating a somewhat better user experience.
Investors are waiting to see those changes reflected in Twitter’s stock price. The company didn’t add any net users last quarter, and shares have traded sideways for much of the last two years. Still, cleaning up its image could help Twitter down the road.
Last year, when Twitter was hemorrhaging talent and looking like an acquisition target, the company’s well-documented harassment issues reportedly worried potential buyers like Disney and Salesforce.
Cracking down on terrorism would be good for society, but also, perhaps cynically, good for Twitter’s bottom line. Even if removing extremist content doesn’t directly help Twitter make more money, it might, at a minimum, make the company a more palatable asset if and when suitors come calling again.