We've always been taught to be wary of common red flags in scam emails—typos, unusual grammar, and suspicious links. But what happens when these signs disappear? CNMO has noticed that with the rise of AI services, these warning signs are disappearing, and phishing attacks are becoming more personalized and dangerous than ever.
A new report released by cybersecurity experts at Kaspersky warns that AI is being used to create sophisticated scams that bypass natural human defenses. Clicks on malicious links increased by over 3% in just one quarter, a trend that suggests these new attacks are not only more realistic but also easier to pull off.
AI's ability to mimic human language has revolutionized cybercrime. Criminals are no longer limited to poorly written mass emails; they can now create perfectly crafted, personalized messages tailored to the target's identity or interests. This customization has become a powerful new tool for deceiving victims.
AI is also being used to generate highly convincing "deepfake" audio and video. For example, you might receive an urgent voicemail in your boss's voice, or a video call from someone who looks like a family member, requesting a money transfer. These forgeries can be used to bypass multi-factor authentication, steal passwords, or promote fake investments, making AI-powered phishing attacks more successful, especially against people with limited technical experience.
Foreign media analysis suggests that when the old rules fail, the best defense is to remain vigilant and trust your intuition. Even when a scam message's grammar is impeccable, there are still discernible patterns:
The most critical warning sign is a sense of urgency. Phishing attacks almost always demand immediate action and threaten consequences for non-compliance. For any unsolicited message pressuring you to click a link, share a password, or transfer money, always verify the authenticity of the request through an independent channel, such as a phone number you already know or direct in-person contact.