Why It Will Take Sophisticated AI Solutions to Fight AI Security Attacks

By Rom Hendler, CEO and Co-Founder of Trustifi

We live in amazing times, when advanced AI-based text-generating engines like ChatGPT, trained on vast amounts of internet text, can produce an intelligible, relevant narrative on virtually any topic. That's terrific news for companies looking to compose quick and easy blog posts or website copy. But unfortunately, it has opened a Pandora's box of potential security threats. An engine like ChatGPT puts dangerous tools into the hands of the malicious actors who orchestrate phishing and malware attacks on business email databases.

Carefully composed and effortlessly fluent, imposter emails created by ChatGPT can easily lure users into revealing critical log-in credentials that compromise a company's security. And the more AI-sourced details these fraudulent emails contain, the harder they are to distinguish from a legitimate message from one of the user's actual, favored vendors. Phishing attempts are one of the leading causes of network breaches, and some of the most devastating attacks endured by major corporations have started with a single compromised password.

How Does ChatGPT Do Damage?

ChatGPT is built on AI-based "transformer" algorithms, neural networks loosely inspired by the way human brain cells connect, and is trained on enormous volumes of internet text. Given keywords, prompts, or other parameters, it generates what's referred to as "natural language text." This same capability can be leveraged by malicious hackers to create phishing emails and malware designed to infiltrate a corporate network with far more speed, efficiency, and stealth than ever before.

Not only does this type of technology create natural language, it can also aid programmers in creating code, and not all code is leveraged for good. Sophisticated AI allows hackers to create blocks of nefarious code almost instantaneously, or to automate portions of a cyberattack launch, such as the initial infection code.

In the case of phishing and imposter attacks, the ability of AI-based bots to scrape the internet for accurate details about a victim's identity makes it nearly impossible for users to recognize the attack. Gone are the days when phony emails were glaringly marked by spelling mistakes and generic requests for information. Today's attacks include the name of the user's actual bank or healthcare provider, refer to their home city or local pharmacy ("geo-phishing"), or appear to come from a vendor with whom the user regularly transacts.

AI bots gather this data with lightning speed, pulling from existing references that populate the internet. The email incorporating these details then typically supplies a link that leads the recipient to a brand-imposter site, where they volunteer their usernames and passwords.

The New Wave of AI-generated Security Threats

Just as CAD-based design programs now allow anyone with basic computer knowledge to function as a designer, advanced AI content generators deliver programming capabilities and automated coding shortcuts to even the least accomplished of would-be hackers. This is exponentially expanding the base of criminals who can crank out malware.

The rise of AI technologies like ChatGPT, Replika, and YouChat stands to produce a frightening escalation of malicious activity targeting business networks, since it hands over powerful tools to the ill-intentioned. The increase in cyberattacks will likely include a wave of activity that targets email data, since email systems are typically a point of entry for malware, viruses, and other threats.

Malicious software only needs to compromise a single account to infiltrate an entire network. Many viruses sit dormant on the system for a period of time so the breach cannot easily be traced back to the date of infection, which makes it harder to mitigate. The viral code is later activated and works its way through the network, scraping data for sale, collecting private credentials, or conducting denial-of-service activities.

Fighting Bad AI with Good AI

So what can organizations—or the individuals who use email systems from both their offices and their homes—do to protect themselves from these sophisticated AI-powered threats? The only valid strategy is to utilize the same level of advanced AI technology in the cybersecurity solutions that protect their email data.

The biggest challenge for companies looking to secure their networks is determining whether their security solutions actually rely on sophisticated AI. The assumption is that all solutions leverage these methods, yet that's often not the case. Many traditional security solutions, including some of the most entrenched brands in the marketplace, rely on the blacklisting of known IP addresses as their main line of defense against malicious activity. Many established solutions were designed before this escalation of AI-based cybercrime and are still catching up, unlike newer solutions that have been built from the start around AI-powered scanning tools.

It's true that a large percentage of spam attacks come from IP addresses that have previously been established as malicious. But blacklist/whitelist filtering doesn't account for AI-based bots that read through millions of internet references and interpret context to create ultra-convincing, natural-language spoof emails.
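
To illustrate the limitation, here is a minimal, hypothetical sketch (in Python) of blocklist-style filtering. The IP addresses, function name, and sample input are invented for the example; real gateways consult far larger, continuously updated threat feeds. The point is that a fluent, AI-generated spoof sent from a fresh, never-before-seen address passes this kind of check untouched.

```python
import ipaddress

# Hypothetical sketch of traditional blocklist filtering: the sending IP is
# checked against a static set of known-bad addresses. The entries below are
# placeholder addresses from the IETF documentation ranges, not real threats.
KNOWN_BAD_IPS = {"203.0.113.45", "198.51.100.7"}

def passes_ip_filter(sender_ip: str) -> bool:
    """Return True if the sending IP is not on the static blocklist."""
    ip = ipaddress.ip_address(sender_ip)  # raises ValueError on malformed input
    return str(ip) not in KNOWN_BAD_IPS

# A convincing, AI-written spoof from an unlisted address sails straight through:
print(passes_ip_filter("192.0.2.10"))  # True -- the filter has nothing to say
```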

Security solutions must employ the same level of AI technology, leveraging it to scan inbound emails for red-flag keywords and phrases, just as nefarious code scans for data on a victim's hometown and business partners. These AI-powered solutions flag, block, or quarantine questionable emails based on keywords such as "credit card," "invoice," or "wire transfer."
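
As a rough illustration of that idea (not any vendor's actual implementation), the sketch below shows tiered triage driven by red-flag phrases. The phrase list, the extra "verify your password" entry, and the triage_email function are all assumptions made for the example; production AI filters weigh context, sender reputation, and behavioral signals rather than simple keyword hits.

```python
# Hypothetical sketch of content-aware inbound triage. Each red-flag phrase
# maps to an action; the most severe action triggered wins.
RED_FLAGS = {
    "credit card": "flag",
    "invoice": "flag",
    "wire transfer": "quarantine",
    "verify your password": "block",   # assumed extra phrase for the example
}
SEVERITY = {"deliver": 0, "flag": 1, "quarantine": 2, "block": 3}

def triage_email(subject: str, body: str) -> str:
    """Return deliver, flag, quarantine, or block for an inbound message."""
    text = f"{subject} {body}".lower()
    action = "deliver"
    for phrase, verdict in RED_FLAGS.items():
        if phrase in text and SEVERITY[verdict] > SEVERITY[action]:
            action = verdict
    return action

print(triage_email("Overdue invoice",
                   "Please verify your password to release the wire transfer."))
# -> "block": the most severe matching rule wins
```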

This is a true case of fighting fire with fire. Only AI-based algorithms will be able to detect the cleverly worded, accurately targeted, and naturally composed imposter emails that are predicted to soon flood business environments. If an organization's cybersecurity solution does not employ advanced AI tools, it will be underprepared for the influx of well-crafted phishing attacks and malware.

Depend on Encryption

Another way to fight powerful AI-related breaches is to implement email encryption throughout an organization, which keeps sensitive material from being accessible to hackers and AI bots. Advances in encryption technologies have produced solutions that are simple to use, where encryption can be automated to comply with regulations such as HIPAA or the GDPR, taking the burden of deciding what content is sensitive out of the hands of the individual user. Effective encryption allows protected messages to be opened and decrypted with a single click, just as easily as a non-encrypted email. Powerful solutions will auto-encrypt the recipient's reply as well.
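
A minimal sketch of how policy-driven auto-encryption might look, assuming Python's cryptography package and a toy sensitivity rule (an SSN-shaped pattern standing in for HIPAA- or GDPR-regulated data). Key management, recipient key exchange, and the real policy engine are deliberately omitted; the send function, the pattern, and the sample message are hypothetical.

```python
import re
from cryptography.fernet import Fernet  # pip install cryptography

# Assumed policy: anything matching a sensitivity pattern is encrypted before
# leaving the gateway, so the individual user never has to make the call.
SENSITIVE_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # toy SSN-shaped rule

key = Fernet.generate_key()   # in practice, keys live in a managed key store
cipher = Fernet(key)

def send(body: str) -> bytes:
    """Auto-encrypt when sensitive content is detected; pass the rest through."""
    if any(p.search(body) for p in SENSITIVE_PATTERNS):
        return cipher.encrypt(body.encode())  # the policy decides, not the user
    return body.encode()

ciphertext = send("Patient record 123-45-6789 attached.")
print(cipher.decrypt(ciphertext).decode())    # one step back to readable text
```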

Effective encryption will not impact the user's ability to breeze through their inbox. Organizations should look for an encryption solution that incorporates tokenization, a breakthrough capability that allows the non-sensitive portions of a message to be read just as easily as any other email. The tokenized portions (e.g., a credit card number or healthcare identification information) can be retrieved at the recipient's convenience. This can significantly contribute to an organization's productivity while providing superior protection.
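
The sketch below illustrates the tokenization concept under the same caveats: the card-number pattern, the in-memory "vault," and the token format are all invented for the example. A real product would keep the vault in hardened storage and gate detokenization behind authentication.

```python
import re
import secrets

# Hypothetical tokenization sketch: card-shaped numbers are swapped for opaque
# tokens so the rest of the message stays readable; originals are held in a
# vault (here, just a dict) and retrieved on demand.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
vault = {}  # token -> original value, for illustration only

def tokenize(body: str) -> str:
    def _swap(match: re.Match) -> str:
        token = f"<tok:{secrets.token_hex(4)}>"
        vault[token] = match.group(0)
        return token
    return CARD_PATTERN.sub(_swap, body)

def detokenize(token: str) -> str:
    return vault[token]

masked = tokenize("Card 4111 1111 1111 1111 is on file; meeting is at 3pm.")
print(masked)  # the message reads normally, with the card number tokenized
```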

So although AI-based platforms continue to hand over vital capabilities to bad actors, at least there is hope for organizations to protect themselves with the same level of AI-powered technology in their cybersecurity solutions. Companies must make sure their solutions utilize these AI-based tools if they expect to keep pace with the risks of this new frontier in natural language generation.

About the Author

Rom Hendler is the CEO and Co-Founder of Trustifi, a cybersecurity firm offering email encryption solutions delivered on a software-as-a-service platform. Rom is a member of the Forbes Technology Council and has extensive C-level executive experience at Fortune 500 companies. He was a key player in opening and operating integrated resorts around the world with a total investment exceeding $15B. Trustifi leads the market with the easiest-to-use and easiest-to-deploy email security products, providing both inbound and outbound email security from a single vendor, including encryption with tokenization, data protection, anti-virus, and anti-malware. Its unique, cloud-based storage model is helping companies in the SMB and enterprise markets rethink their approach to cybersecurity. See more information about Rom at www.trustifi.com.
