Shaimaa Hafezy
Counterterrorism experts around the world are calling for the adoption of technological methods to combat terrorism and extremism, moving away from purely military solutions and paying more attention to confronting extremist ideas.
In August, Chatham House published a research paper by Kathleen McKendrick, “Artificial Intelligence Prediction and Counterterrorism”, which argues that artificial intelligence (AI) can be used to make predictions about terrorism.
By analyzing communications metadata, information on financial transactions, travel patterns, internet browsing activity, and publicly available information such as social media activity, potential terrorists can be identified by distinguishing the activity of a particular subgroup from that of the wider population.
Among the methods that can be used for this analysis are image and audio recognition, as machine learning makes it possible to interpret and analyze patterns in large volumes of data that would otherwise be inaccessible.
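To make the idea concrete, the following is a minimal, purely illustrative sketch of the kind of supervised pattern analysis the paper describes. It is not taken from the Chatham House study: the data, the features (calls per day, transfers per week, trips per month) and the choice of a random-forest classifier are all assumptions made for the example.

```python
# Illustrative sketch only: a classifier trained on synthetic "metadata" features
# to separate a small subgroup's activity pattern from the general population.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 10_000

# Synthetic feature matrix: each row is a person, each column a behavioural signal
# (e.g. calls per day, transfers per week, trips per month) -- all invented here.
X = rng.normal(size=(n, 5))
y = np.zeros(n, dtype=int)

# Mark a small subgroup whose activity pattern is shifted away from the rest.
subgroup = rng.choice(n, size=100, replace=False)
X[subgroup] += 1.5
y[subgroup] = 1

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Even in this toy setting, the scarcity of positive examples is what makes the problem hard, a point the article returns to below when discussing population-wide screening.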
In the paper, the author points out that artificial intelligence makes it possible to predict the timing and location of terrorist attacks, and that such models have already been developed.
Predicting suicide attacks
In 2015, tech startup PredictifyMe claimed that its model, which drew on more than 170 data points, was able to predict suicide attacks with 72% accuracy. Other models have relied on open-source data about individuals’ use of social media and mobile phone apps. Among them is EMBERS, which integrates the results of several discrete predictive models to forecast events such as disease outbreaks and civil unrest, according to the report.
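The fusion idea attributed to EMBERS can be sketched very simply: several independent models each emit a probability that an event will occur, and a combiner weights them by reliability. The sketch below is not EMBERS itself; the model outputs, the reliability weights, and the alert threshold are hypothetical values chosen only to illustrate the mechanism.

```python
# Illustrative sketch of fusing several discrete predictive models into one forecast.
def fuse(predictions, weights):
    """Weighted average of per-model event probabilities."""
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Hypothetical per-model outputs for one location and day, with reliability weights
# (e.g. a social-media model, a news model, an economic-indicators model).
model_probs = [0.20, 0.55, 0.35]
reliability = [0.5, 1.0, 0.8]

fused = fuse(model_probs, reliability)
print(f"Fused event probability: {fused:.2f}")

# Raise an alert for human analysts if the fused probability crosses a threshold.
if fused > 0.4:
    print("Raise an alert for analysts to review.")
```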
In addition to predicting the location of terrorist operations, some technology companies have developed tools to assess vulnerability to violent extremist ideologies. Jigsaw, a subsidiary of Alphabet Inc. formerly known as Google Ideas, announced its “Redirect Method” project, which targets video-sharing users who may be exposed to propaganda from terrorist groups such as ISIS and redirects them to videos presenting a credible counter-narrative to the organization.
One of the most important things AI applications can help with is identifying terrorists. Leaked details of the US National Security Agency (NSA) SKYNET program suggest that an AI-based algorithm was used to analyze metadata from 55 million Pakistani mobile phone users, dating back to 2007, and that only 0.008% of cases were flagged as potential terrorists, or about 15,000 people out of Pakistan’s population of roughly 200 million.
Although the model itself was not effective, it illustrates the predictive value of data in identifying close links to terrorism.
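The arithmetic behind those figures is worth spelling out, because it shows why population-wide screening is so fraught. The calculation below reuses the numbers quoted in the article (0.008% flagged, a population of about 200 million); the number of genuine threats among those flagged is a purely hypothetical assumption added to illustrate how quickly false positives come to dominate.

```python
# Worked arithmetic: how many people a tiny flag rate sweeps up at population scale.
population = 200_000_000
flag_rate = 0.008 / 100                  # 0.008% of people flagged by the model

flagged = population * flag_rate
print(f"People flagged: {flagged:,.0f}")  # about 16,000

# Hypothetical assumption: suppose only a few hundred flagged people are genuine threats.
true_positives = 300
precision = true_positives / flagged
print(f"Share of flags that are correct: {precision:.1%}")  # roughly 2%
```

Under these assumed numbers, the overwhelming majority of flagged individuals would be false positives, which is precisely the proportionality concern raised in the next part of the study.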
On the other hand, the use of these technologies requires access to user data, which raises serious human rights questions about the confidentiality of information and data and the risk of privacy violations.
The dilemma of international law
The author notes a lack of established standards for the use of artificial intelligence: there is no agreed international position on the limits of its use in different societies, which puts citizens’ rights and freedoms to the test.
McKendrick points out that, as political systems range from authoritarian to democratic, there is a growing need for adequate safeguards on the use of artificial intelligence by governments and security agencies, and for a review of measures designed to protect not only privacy but also citizens’ fundamental freedoms, such as freedom of expression.
Another challenge in applying technology to the fight against terrorism, she adds, is the indiscriminate collection of data across an entire population. This makes the collection untargeted, and therefore disproportionate in nature, as well as a violation of the public’s privacy.
Beyond fears of government exploitation, private companies play a growing role in determining how customer data is used. This is less of a problem in democracies, where the authorities are bound by law, but it becomes a serious challenge in authoritarian systems, where private companies can be forced to hand information to law enforcement and security agencies.
The fight against terrorism requires transparency in how information is collected and used, so the use of artificial intelligence may become a problem if it is seen as a threat to that approach, according to the study.