AI-aided cybercrime may be on the rise

Unless you live under a rock, you’ve noticed that AI (artificial intelligence) chatbots are a hot topic right now. It’s pretty fun to ask ChatGPT to write new Seinfeld episodes or streamline your workflow, right? 

Guess who else is experimenting with AI for more ominous reasons. That’s right, Prime Minister Justin Trudeau. Just kidding; I mean cybercriminals. 

AI can support our work, but it can support cybercrime too, and AI-assisted scams will only become harder to identify as the technology improves.

It used to be easier to bust a no-goodnik because of their poor grammar or spelling. But AI is changing that.

Naturally, you and your staff are careful with emails. You read unexpected messages closely and don’t click on any links that look even remotely suspicious.

No probs there.

However, because AI can fabricate phishing emails that read more convincingly human than anything the scammers could write themselves, scams are getting harder to spot. There are so many new variations of old swindles that it's tough to keep up. Nowadays, these crooks use AI to polish their lures and make them more enticing. We've even seen fake email threads designed to make the sting look more authentic.

Software that can reliably detect these AI-crafted messages is in the works, but it's still a ways off. In the meantime, the same rules apply.

Be cautious when you get an unexpected email. Is the sender's address from a company or person you recognize? An unfamiliar or slightly-off address is usually a dead giveaway. When in doubt, call, text, or carrier-pigeon the sender (anything but email!) to confirm. It only takes a minute. Unless it's the carrier pigeon, of course.
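If you're curious what that "check the address" habit looks like under the hood, here's a minimal, hypothetical sketch of the kind of naive sender-domain check a mail filter might run. The trusted-domain list and sample addresses are invented for illustration; real filters weigh far more signals than this.

```python
# Minimal sketch of a naive sender-domain check (illustrative only).
# The trusted-domain list and sample addresses below are made-up examples.

TRUSTED_DOMAINS = {
    "yourbank.com",      # a bank you actually deal with (hypothetical)
    "yourcompany.com",   # your own organization (hypothetical)
}

def sender_domain(address: str) -> str:
    """Pull the domain out of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].strip().lower()

def looks_suspicious(address: str) -> bool:
    """Flag senders whose domain isn't on the trusted list,
    including look-alikes such as 'yourbank-support.com'."""
    return sender_domain(address) not in TRUSTED_DOMAINS

if __name__ == "__main__":
    samples = [
        "alerts@yourbank.com",          # legitimate domain
        "alerts@yourbank-support.com",  # look-alike domain
        "ceo@yourcompany.co",           # one letter off
    ]
    for addr in samples:
        verdict = "suspicious" if looks_suspicious(addr) else "looks OK"
        print(f"{addr}: {verdict}")
```

A simple check like this is exactly why look-alike domains fool people: one character off and it sails past a casual glance, which is why the pick-up-the-phone rule remains your best defence.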

We’re experts in this area, so if you need a hand with phishing scams, drop us a line.