Stay Safe: AI Use In Cybercrime

When AI became mainstream, many people started to believe we were living in a Terminator movie and that ChatGPT was the new Skynet. Well, a young and buff version of Arnold Schwarzenegger didn’t travel back in time for us, so we’re probably safe.

However, that doesn’t mean putting the power of AI in the hands of the general public has been an unqualified good. Every society has a small subset of bad actors eager to turn new technology to malicious ends.

Cybercriminals use every tool at their disposal to do as much harm and extract as much money as they can. AI now plays a leading role in today’s cybersecurity landscape, and here are some of the ways it’s being used.

WormGPT

WormGPT is ChatGPT’s evil brother. It has no limitations or ethical boundaries, and it was widely marketed on dark web cybercrime forums. WormGPT can produce copy for use in hacking campaigns.

Users can only access it behind a paywall on the dark web, and it bypasses the security guardrails built into ChatGPT.

When security firms analyzed this malicious AI tool, they discovered it was mainly used for business email compromise attacks. The model was reportedly trained on malware-related data, though its creator keeps the exact training sources confidential.

FraudGPT

Another evil AI tool enters the GPT arena, and it’s called FraudGPT (or FraudBot). It circulates in Telegram groups and on the dark web, and cybercriminals market it as an all-in-one solution for cybercrime.

FraudBot costs $200 a month or $1,700 a year, and it has already racked up over 3,000 sales. Bad actors can use it to find vulnerabilities, build phishing and scam pages, write fraudulent messages, and generate hard-to-detect malware and malicious code.

Some experts believe FraudGPT was created by the same group that runs WormGPT, with one tool focused on high-volume, short-term attacks while the other deals with ransomware and malware.

Upgraded Scams

The AI apocalypse isn’t Skynet taking over the planet. It’s a bit dumber than that, and it exploits our emotions. Generative AI helps scam artists elevate their game, and it’s taking a toll on our wallets. Here’s how:

Pig Butchering Scams

Despite what the name suggests, these scams have nothing to do with animal cruelty or pork. The name comes from the practice of fattening a pig before slaughter: scammers build up a victim’s trust before draining their accounts. Pig butchering scams are essentially sophisticated romance-and-investment scams.

The scammer will contact you on dating apps or social media and build a relationship with you. Eventually, they’ll try to convince you to invest in a crypto scheme. If you’re in love and trust that person, you might hand over the money (don’t fall for it).

Humanitarian Scams

This is the most immoral type of scam that exists. Scammers take advantage of events in Ukraine, Palestine, Yemen, or anywhere a human-made or natural catastrophe occurs. They run social media campaigns soliciting donations to cryptocurrency wallets.

Of course, they create hundreds of fake accounts to make it seem more real and use emotional words or heartbreaking images to extract money from you. If you want to help, only donate through trusted organizations.

Inheritance Scams

Inheritance scams have been around since the dawn of the internet, and they’re back again. The way this scam works is simple: you receive a physical letter or an email claiming a relative has died and that you’re eligible to inherit a large sum of money.

To claim it, you need to send personal information (like your ID, SSN, or passport details) and pay money so the “organization” can start the process.

The easiest way to spot this scam is the sense of urgency, or a request to keep the inheritance process a secret. It’s scary how convincing these letters can get with the help of generative AI and a scan of your social media accounts.
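
To make these red flags concrete, here’s a minimal Python sketch that scans a message for the urgency, secrecy, and payment cues described above. The phrase lists are hypothetical examples, not a real spam filter, and a match is a prompt to slow down, not proof of fraud.

    # Hypothetical red-flag phrases; real scams vary their wording constantly.
    RED_FLAGS = {
        "urgency": ["act now", "within 48 hours", "final notice", "respond immediately"],
        "secrecy": ["keep this confidential", "tell no one", "do not discuss this"],
        "payment": ["processing fee", "transfer fee", "send your ssn"],
    }

    def scan_message(text):
        """Return the red-flag phrases found in each category."""
        lowered = text.lower()
        return {
            category: [phrase for phrase in phrases if phrase in lowered]
            for category, phrases in RED_FLAGS.items()
        }

    hits = scan_message(
        "Final notice: keep this confidential and wire the processing fee today."
    )
    print({category: found for category, found in hits.items() if found})
    # {'urgency': ['final notice'], 'secrecy': ['keep this confidential'],
    #  'payment': ['processing fee']}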

How can you avoid being scammed by AI?

For now, AI content is primarily text-based. It’s close to how a human would write, but a few signs give it away. Here are some of them, with a rough detection sketch after the list:

  • The text is too perfect. AI follows a strict rulebook, and its writing has a uniform style. A real chat mixes styles and feels natural. If something departs from the general tone of the conversation, there’s a chance it’s AI.
  • See if you can find a pattern. Tools like ChatGPT often use alliteration, start sentences the same way (“In the era of…”), and may overuse emojis. Combine that with the first tip, and it’s probably AI.
  • Weird slang and idioms. You know the feeling when you try to use an idiom in a second language: it just sounds off. Common sayings are often universal, but translated literally they feel wrong. That’s another red flag to look for.
  • No human experience. Beyond slang, people use shortened words (gm – good morning, jk – just kidding, gn – good night). AI lacks that human element and rarely adds a personal touch; it’s too generic.
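
For illustration, here’s the sketch promised above: a minimal Python script that automates two of these heuristics, stock AI phrases and repeated sentence openers. The phrase list and thresholds are made-up examples; none of this is a reliable detector, just a way to see the patterns.

    import re
    from collections import Counter

    # Hypothetical stock phrases often associated with AI-generated copy.
    STOCK_PHRASES = ["in the era of", "in today's fast-paced world", "delve into"]

    def ai_red_flags(text):
        """Return human-readable warnings for suspicious patterns in text."""
        flags = []
        lowered = text.lower()

        # Heuristic 1: stock AI phrasing.
        for phrase in STOCK_PHRASES:
            if phrase in lowered:
                flags.append(f"stock phrase: '{phrase}'")

        # Heuristic 2: several sentences starting with the same word.
        sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
        openers = Counter(s.split()[0].lower() for s in sentences)
        for word, count in openers.items():
            if count >= 3:
                flags.append(f"{count} sentences open with '{word}'")

        return flags

    sample = "In the era of AI, trust matters. In short, verify. In fact, check twice."
    print(ai_red_flags(sample))
    # ["stock phrase: 'in the era of'", "3 sentences open with 'in'"]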

Final Words

Finally, get your hands on cybersecurity tools. An antivirus, a VPN, and a firewall will block most malicious activity before it reaches you.

Even if you start falling for a phishing email and go to download an attachment, these tools can throw up a warning and give you a chance to reconsider.
