Securing Election Integrity – Reflecting on the Complex Landscape of 2024’s Threats

As we reflect on the 2024 election year, safeguarding the integrity of our democratic process remains more critical than ever. While much attention has been focused on securing ballot machines, the real threats extended far beyond the physical infrastructure. Misinformation, cyberattacks, and the rise of generative AI technologies like deepfakes presented significant challenges.

Between June 18 and July 12, the Trustwave SpiderLabs team received and analyzed more than 5,000 emails containing political subject matter, collected from secure email gateway cloud submissions and spam traps. The samples referenced both Democrats and Republicans and ranged from supportive to scathing. Topics included the introduction and promotion of candidates, campaign updates, derogatory remarks about the opposition, and conspiracy theories.

Despite the differing points of view of these email senders, two things remained constant: calls for monetary donations and the use of propaganda techniques. Clearly, threat actors leveraged any and all vectors to steer public opinion toward or away from their desired target. Understanding these tactics, mitigating their associated risks, and implementing proactive measures were all essential for voters, campaign workers, and media professionals alike.

Misinformation: The Invisible Enemy

Misinformation is a pervasive threat in our digital age. With the organic nature of social media, biased algorithms, and the rapid spread of fake news, misinformation has easily influenced public opinion. Social media platforms, despite their efforts to combat false information, remain a primary vehicle for the spread of misleading content. As we headed into election season, the potential for misinformation to shape voter perceptions and decisions was at an all-time high.

Key issues like healthcare, the economy, and education were particularly vulnerable to manipulation. Misleading narratives were crafted to exploit voter fears and biases, swaying public opinion and potentially altering the outcome of the election. For example, a headline or video featuring a candidate proclaiming their intent to put an end to a widely endorsed healthcare policy could have been believed and left unchallenged by many viewers. Without proper verification, such content spread rapidly across myriad platforms; its simultaneous presence on Facebook, Instagram, and X served to further validate it, regardless of its authenticity.

It remained more crucial than ever for voters to critically evaluate the information they encountered and to rely on reputable sources for their news. Training voters to cross-check information against multiple credible sources can greatly reduce the spread of false information moving forward, and it will continue to be important to vet anything seen on social media against reports from legitimate media outlets.

The Digital Battlefield

In addition to misinformation, cyberattacks pose a significant threat to election security, and the introduction of generative AI has only inflamed this threat.

State-sponsored actors and independent hackers alike have demonstrated their ability to disrupt electoral processes through various means. From hacking into voter databases to launching denial-of-service (DoS) attacks on critical infrastructure, the tactics used in cyber warfare are diverse and constantly evolving.

Recent years have seen a rise in ransomware attacks targeting local government systems, including those responsible for managing elections. These attacks can lead to the theft of sensitive voter information, disruptions in the voting process, and a general erosion of public trust in the electoral system. Not only do these attacks have the potential to spread fake news, but they also enable blackmail and fuel advanced phishing campaigns. For example, a campaign-themed email could urge voters to click a malicious link to view a candidate's recent speech. If the threat actor leverages AI, the message and any accompanying imagery could well be convincing enough for the average citizen to click, exposing them to malware.
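One classic tell of such phishing lures is a mismatch between the domain a link displays and the domain its href actually points to. As a purely illustrative sketch (not a description of Trustwave's tooling, and far simpler than production filters), a minimal check for that mismatch might look like:

```python
import re
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, display_text) pairs from anchor tags in an email body."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (href, visible text)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    """Flag links whose visible text shows one domain while the href
    actually points somewhere else -- a classic phishing tell."""
    parser = LinkExtractor()
    parser.feed(html)
    flagged = []
    for href, text in parser.links:
        href_domain = re.sub(r"^https?://", "", href).split("/")[0].lower()
        # If the visible text looks like a URL, compare its domain to the real one
        m = re.search(r"(?:https?://)?([\w.-]+\.\w{2,})", text)
        if m and m.group(1).lower() != href_domain:
            flagged.append((href, text))
    return flagged
```

For instance, `suspicious_links('<a href="http://evil.example/speech">campaign2024.org/speech</a>')` flags the link, while a link whose text and href agree passes. Real gateways combine many such signals (sender reputation, lookalike domains, attachment analysis); this shows only one.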

Mitigating phishing or malware threats may sometimes be left up to the individual on the receiving end, but strengthening cybersecurity measures at all levels of government is also essential to mitigating these risks—especially given the proliferation of AI. To combat the misuse of AI and the threat of automated cyberattacks, several nations developed or rolled out protective legislation this past year. In the US, the Federal Artificial Intelligence Risk Management Act of 2023 directed federal agencies to follow guidelines for managing AI-related risks. States like California and New York also enacted laws to regulate AI systems and ensure ethical conduct.

Deepfakes and the New Frontier of Deception

Among the many threats to election security, deepfakes represent a particularly concerning development. These AI-generated videos depict individuals saying or doing things they never did, creating highly realistic but entirely false narratives. As technology advances, deepfakes become increasingly difficult to detect, posing a significant challenge for both the public and media professionals.

The ease of creating deepfakes has lowered the barriers for malicious actors. Freely available apps and user-friendly software mean that virtually anyone can generate a convincing deepfake. This democratization of the technology made widespread misinformation campaigns easier to mount than ever before. Last year, malicious actors produced and disseminated deepfakes quickly and in large volumes, flooding social media with fake content designed to influence voter decisions on key issues.

Deepfakes can even be tailored to exploit the fears and biases of specific demographic groups, potentially swaying public opinion against a candidate. Because deepfakes are so difficult to spot and often play on voters’ deepest fears, it remains essential for everyone to stay vigilant. The news media plays a crucial role in verifying information, and campaign organizations should also work to raise awareness by urging the public and tech companies to review and filter unverified videos moving forward.

The average person also bears a certain amount of responsibility for vetting campaign ads, videos, and other media they encounter. Just as, in traditional cybersecurity, everyone shares responsibility for spotting phishing scams, every voter must question the authenticity of the photo and video media they see in future elections.

Detection and Prevention

Despite the sophisticated nature of these threats, there are measures that can be taken to combat them. For misinformation and fake news, media literacy campaigns and public awareness initiatives are crucial. Voters need to be educated on how to identify false information and encouraged to verify the credibility of their news sources. Social media platforms must also continue to improve their algorithms to detect and remove misleading content more effectively.

In the realm of cybersecurity, government agencies and private organizations should continue to collaborate to enhance the security of election infrastructure. Regular security audits, robust encryption methods, and comprehensive incident response plans are vital components of a resilient electoral system. Additionally, investing in advanced threat detection technologies can help identify and mitigate cyber threats before they cause significant damage.

For deepfakes, the development of sophisticated detection tools is paramount. AI-driven solutions analyze videos for signs of manipulation, such as inconsistencies in lighting, shadows, and facial movements. Public awareness campaigns should also be launched to inform voters about the existence of deepfakes and provide guidance on how to recognize them. Practical tools such as Intel's FakeCatcher, Microsoft's Video Authenticator, and Deepware already apply machine learning to this task.
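At their core, many of these detectors look for statistical inconsistencies across frames. As a deliberately toy illustration (not how FakeCatcher or any production tool actually works), a temporal-consistency check might flag frames whose brightness changes are outliers relative to the video's typical frame-to-frame variation:

```python
from statistics import mean, pstdev

def frame_brightness(frame):
    """Mean pixel intensity of a frame given as a 2D list of 0-255 values."""
    return mean(v for row in frame for v in row)

def flag_temporal_anomalies(frames, z_threshold=3.0):
    """Flag frame indices where the brightness change from the previous
    frame is an outlier (|z-score| above z_threshold) compared with the
    video's normal frame-to-frame variation. Real detectors use far richer
    cues (facial landmarks, blood-flow signals, codec artifacts); this
    only illustrates the idea of a temporal-consistency check."""
    deltas = [frame_brightness(b) - frame_brightness(a)
              for a, b in zip(frames, frames[1:])]
    mu, sigma = mean(deltas), pstdev(deltas)
    if sigma == 0:  # perfectly static video: nothing to flag
        return []
    return [i + 1 for i, d in enumerate(deltas)
            if abs(d - mu) / sigma > z_threshold]
```

Feeding in a sequence of frames where one frame's brightness jumps sharply returns the indices around the jump, while steady footage returns an empty list. The point is only that manipulation tends to leave measurable discontinuities that software can hunt for at scale.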

This Election’s Legacy

With the lessons of 2024's election security threats in mind, leaders must continue to advocate for and support legislation that regulates the use of AI and imposes penalties for the creation and distribution of malicious deepfakes and misinformation. Encouraging international cooperation on AI regulation and on targeted, politicized cyber threats can also help create a unified approach and general rules of thumb for shoring up election security.

It is imperative that the next elections' voters, campaign workers, and media professionals stay vigilant and informed about these threats. By doing so, we can collectively work towards maximizing the security and transparency of our electoral process, ensuring that the voice of the people is accurately represented in elections to come.

About the Author

Karl Sigler is a Security Research Manager at Trustwave SpiderLabs, where he is responsible for research and analysis of current vulnerabilities, malware, and threat trends. Karl and his team run the Trustwave SpiderLabs Threat Intelligence database, maintaining security feeds from internal research departments and third-party threat exchange programs. His team also serves as liaison for the Microsoft MAPP program, coordinates the Trustwave SpiderLabs responsible vulnerability disclosure process, and maintains the IDS/IPS signature set for Trustwave's MSS customers. With more than 20 years of experience in information security, Karl has presented on topics like Intrusion Analysis, Pen Testing, and Computer Forensics to audiences in over 30 countries. Karl can be reached online at https://www.linkedin.com/in/ksigler/ and at our company website www.trustwave.com.
