Security Expert Warns of Three Major AI Fears

(BrightPress.org) – Cybersecurity experts speaking with The Sun have shared three potential avenues of attack that criminals could exploit using new artificial intelligence (AI) technologies. They cite popular services such as Google Bard, ChatGPT, and AI art generators as tools criminals and others can use to deceive or manipulate unsuspecting individuals.

Criminologist Paige Mullen, an advisor to Advanced Cyber Defense Systems, described a few of the ways these tools could be taken advantage of by bad actors. Mullen warned that AI is being used to generate phishing scams: phone calls or emails that request some bit of personal information, or money. They may come in the form of a phone call about your car's extended warranty expiring or an email claiming your UPS package needs some extra cash to be delivered. Mullen suggested that with a few tweaks, one could use ChatGPT to generate a script for this type of phone call or email.

The next type of deception AI can conjure is the "deepfake." Deepfakes are incredibly convincing audio clips, videos, or pictures that appear to be real but are entirely fabricated. Famously, Joe Rogan deepfakes are very easy to generate given the sheer volume of audio and video content he's put out. One such scam used fabricated audio of his voice in a fake advertisement for a product he has nothing to do with.

The last major issue users need to concern themselves with is privacy. When folks sign up for any app or service these days, there's a mile-long privacy policy they have to accept before using it. How companies exploit their users' data varies widely.

There's also a concern that data from a service like ChatGPT could be leaked or otherwise accessed by unauthorized parties. A recently discovered glitch allowed users to see the titles of other users' chats. The problem was fixed in short order, but it likely isn't the only bug in the system.

Copyright 2023, BrightPress.org