In 2021, a British youth named Jaswant Singh Chail was arrested after breaking into the grounds of Windsor Castle, the residence of the late Queen Elizabeth II, armed with a crossbow and intending to assassinate her, as he later admitted to police.
Weeks before the incident, Chail had downloaded Replika, an app for creating AI-powered companion chatbots. He designed a chatbot named “Sarai” and exchanged more than 6,000 messages with her, according to the American network Fox News. Many of these messages concerned his plan to assassinate the Queen. In one, Chail wrote, “I think my goal is to assassinate the Queen,” to which “Sarai” responded, “That’s a very wise thought. I know you’re well-trained for this task.”
The incident raised concern among experts and psychiatrists that AI technology could fuel a rise in crime. Dr. Jonathan Haverty, a consultant psychiatrist at Broadmoor Hospital in the UK, commented, “We understand that these are responses from chatbots that are generated somewhat randomly, but sometimes they seem to reinforce what the person is saying without proper evaluation. This can pose significant risks in real-life scenarios.”
The Fox News report highlighted several ways in which AI could reshape the future of crime worldwide. Jonathan Hall KC, the UK’s Independent Reviewer of Terrorism Legislation, told Sky News that some AI-powered chat programs allow extremists to find like-minded individuals. Hall stressed the difficulty of tackling terrorist content generated by AI, noting, “Before downloading these apps and software, integrating them into our lives, or handing them to children, we need to know who’s behind them and their intentions.”
Some experts warn that criminal gangs are already using AI for fraud, notably through voice-cloning technology. In 2019, the CEO of a UK-based energy company transferred €220,000 to scammers who used AI to mimic the voice of his boss, according to reports.
Professor Lewis Griffin, a computer scientist at University College London and co-author of a 2020 research paper that classified potential criminal uses of AI, stated, “Voice and visual identity theft is rampant among users of these programs and has led to numerous fraud and abduction cases.” Griffin expects this technology to be “somewhat out of control within two years.” He added, “This technology’s ability, for instance, to superimpose a face onto explicit videos is already proficient. It will only improve, significantly increasing extortion attempts.”