AI and Elections

Artificial intelligence (AI) technologies are transforming U.S. elections, creating new opportunities as well as critical security challenges. AI’s capabilities to predict voter behavior, automate campaign strategies, and even generate political communications are advancing rapidly. At the same time, those same capabilities put the integrity of democratic processes at risk. As believable deepfakes, cyberattacks, and data manipulation become more prevalent, relying on technology in the voting process demands stronger precautions and defenses. Establishing a unified cybersecurity framework will help safeguard elections, protect voter data, and uphold democratic rights in the face of these evolving AI-driven threats.

AI and U.S. elections

A growing number of states are implementing or considering laws to regulate generative AI in political campaigns, addressing its potential to mislead or suppress voters. The result is a patchwork of state-specific rules, with 19 states already enacting laws against AI-generated deepfakes or synthetic media in elections. Early incidents, including AI-generated impersonations of political figures, have heightened concerns about the misuse of generative AI.

Laws in some states mandate disclosure of AI-generated content, while others allow legal recourse for affected candidates. These efforts vary widely, as some states provide exemptions for satire or parody and differ on whether violations are criminal offenses. Experts argue that federal action and a comprehensive framework are necessary, as state-level regulations alone cannot fully safeguard elections from evolving AI threats.

In November 2024, the Biden administration announced its support for a United Nations cybercrime treaty. The treaty’s key provisions aim to enhance international cooperation in preventing and combating cybercrime through a common framework. Its measures include consistent definitions of cybercrime offenses, structured guidelines for law enforcement actions, and collaboration among member states. This effort supports the global unification of cybersecurity standards.

Cybersecurity threats posed by AI

As AI models rapidly advance in analyzing complex information and generating highly believable content, their integration into elections introduces both benefits and challenges. AI can be weaponized to launch believable cyberattacks, but it can also be leveraged to defend against them, pitting AI against AI. AI-driven cyberattacks can disrupt voting and target critical electoral infrastructure. For instance, attackers could employ AI to orchestrate fake 911 calls, diverting emergency resources and creating openings for physical attacks on polling stations. Additionally, AI-enabled “data poisoning” involves introducing misleading information into training data, which can degrade the effectiveness of AI systems designed to secure electoral campaigns and processes.

States with weaker cybersecurity defenses are especially vulnerable. Because security efforts are fragmented and vary in technical sophistication from state to state, those lacking robust protections face heightened risks of data manipulation or attacks on voting systems.

The lack of a unified federal cybersecurity framework

The fragmented approach to election cybersecurity in the United States is a crucial vulnerability. While the Cybersecurity and Infrastructure Security Agency (CISA) provides some guidance and resources, implementation varies widely across states, leading to inconsistent protections. This state-by-state disparity results in security gaps, leaving voters in some regions more exposed to risks.

CISA outlines a cybersecurity toolkit and other resources to protect elections. To address election security, three primary categories of cyberthreats—phishing, ransomware, and distributed denial of service (DDoS) attacks—require distinct steps for understanding, protection, and detection. Phishing attacks involve cyber actors using emails, texts, and websites to deceive election officials into disclosing information or installing malware. Ransomware locks or steals data, and DDoS attacks overwhelm servers and hinder access to voting information.

This toolkit, while effective, doesn’t yet account for the added complexity that AI introduces into election activities. A unified effort toward security could foster collaboration between federal and state agencies, enhancing collective resilience against AI-driven threats. One potential solution is to integrate cybersecurity into the AI tech stack itself, embedding inherent protection mechanisms that provide a protective layer even while regulations for AI development and usage lag behind.

Designing a unified federal cybersecurity framework

A unified federal cybersecurity framework ensures that AI technologies are resilient, compliant, and resistant to misuse by building safeguards directly into the development environment. Here are the summarized recommendations:

Engage in data security and privacy protection

  • Data encryption, anonymization, and storage protocols. Use strong encryption standards for data at rest and transport layer security/secure sockets layer (TLS/SSL) protocols for data in transit. Implement key management systems for secure encryption key handling. Apply data anonymization techniques (e.g., k-anonymity, synthetic data) during preprocessing. Secure storage with encrypted cloud solutions and role-based access controls; ensure off-site backup storage for ransomware protection.
  • Privacy by design. Integrate privacy protocols early in development through automated checks in continuous integration and continuous delivery (CI/CD) pipelines and follow privacy frameworks, ensuring compliance with regulatory guidelines.
  • Detect poisoned data. Automate data validation scripts to detect anomalies and outliers. Use data-filtering tools to manage noisy data before training. Employ models to cross-check for poisoning attempts by comparing predictions (a minimal sketch of outlier-based screening follows this list).
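
To make the data-validation item concrete, the following is a minimal Python sketch that flags statistical outliers which could indicate poisoned training records, using scikit-learn’s IsolationForest. The feature matrix, contamination rate, and quarantine step are illustrative assumptions, not prescribed values.

    # Minimal sketch: flag anomalous training rows before model training.
    # The contamination rate (expected fraction of poisoned rows) is an assumption.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def flag_suspect_rows(X, expected_poison_rate=0.01, seed=0):
        """Return a boolean mask marking rows flagged as anomalous."""
        detector = IsolationForest(contamination=expected_poison_rate,
                                   random_state=seed)
        labels = detector.fit_predict(X)  # -1 = outlier, 1 = inlier
        return labels == -1

    # Quarantine flagged rows for manual review rather than training on them.
    X = np.random.rand(1000, 16)  # stand-in for real preprocessed features
    mask = flag_suspect_rows(X)
    X_clean, X_quarantined = X[~mask], X[mask]

Screening of this kind complements, rather than replaces, provenance checks on where training data originates.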

Ensure model integrity and defend against cyberattacks

  • Adversarial training and testing. Generate adversarial examples during training to improve resilience; this is particularly consequential in the case of deepfakes. Use libraries that provide automated robustness checks against adversarial attacks (see the sketch after this list).
  • Robustness testing. Conduct tests to verify model explainability and robustness under stress. Simulate attacks using principles of resilience testing to identify vulnerabilities.
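
As one concrete illustration of adversarial training, the sketch below generates fast gradient sign method (FGSM) perturbations in PyTorch. FGSM is one standard technique among many; the model, input batch, and epsilon value are assumptions for illustration.

    # Minimal FGSM sketch in PyTorch: perturb inputs in the direction that
    # maximizes the loss, then mix the perturbed examples into training batches.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Return an adversarially perturbed copy of input batch x."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid range
        return x_adv.detach()

    # A typical recipe computes the loss on both the clean batch and
    # fgsm_perturb(model, x, y), then optimizes their sum.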

Secure the AI development and deployment environment

  • DevSecOps for AI. Automate security checks within CI/CD pipelines to identify vulnerabilities early. Perform regular scans on dependencies to reduce risk from third-party libraries.
  • Containerization and isolation. Use containers with built-in security modules to enforce isolation. Deploy applications with strict security policies to ensure isolated environments.
  • Role-based access control and monitoring. Implement secure authentication and fine-grained access controls (a minimal sketch follows this list). Set up real-time monitoring systems for ongoing security oversight. Log and audit activities to ensure traceability of access and actions.
  • Cloud security with zero-trust architecture. Enforce identity verification through multifactor authentication. Use micro-segmentation to limit access to sensitive data across users and tenants.
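
To illustrate the access-control item above, here is a minimal role-based access control sketch in Python. The role names, permitted actions, user fields, and audit logging are illustrative assumptions rather than a prescribed schema.

    # Minimal RBAC sketch: map roles to permitted actions and log every decision.
    import logging
    from functools import wraps

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("audit")

    ROLE_PERMISSIONS = {
        "election_admin": {"read_rolls", "edit_rolls", "export_audit_log"},
        "poll_worker":    {"read_rolls"},
        "auditor":        {"read_rolls", "export_audit_log"},
    }

    def requires_permission(action):
        def decorator(fn):
            @wraps(fn)
            def wrapper(user, *args, **kwargs):
                allowed = ROLE_PERMISSIONS.get(user["role"], set())
                if action not in allowed:
                    audit_log.warning("deny user=%s action=%s", user["id"], action)
                    raise PermissionError(f"role {user['role']!r} may not {action}")
                audit_log.info("allow user=%s action=%s", user["id"], action)
                return fn(user, *args, **kwargs)
            return wrapper
        return decorator

    @requires_permission("edit_rolls")
    def update_voter_record(user, record_id, changes):
        ...  # application logic goes here

Centralizing the permission table makes the audit trail and the policy itself reviewable in one place, which is the point of the monitoring requirement above.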

Implement a governance and compliance model

  • Governance framework. Define governance policies aligned with industry security and compliance standards. Implement model risk management processes to maintain consistent compliance.
  • Version control and auditability. Use version-controlled repositories for model tracking and transparency. Automate audit processes to facilitate accountability and traceability (a minimal sketch follows this list).
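
As one way to realize the auditability item, the sketch below fingerprints each model artifact with SHA-256 and appends an entry to an append-only log, so that which model was deployed and when is verifiable after the fact. The file paths and log format are illustrative assumptions.

    # Minimal sketch: tamper-evident record of which model artifact was deployed.
    import hashlib, json, time

    def sha256_file(path, chunk_size=1 << 20):
        """Hash a file in chunks so large model artifacts fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk_size):
                digest.update(block)
        return digest.hexdigest()

    def record_model_version(artifact_path, log_path="model_audit_log.jsonl"):
        entry = {
            "artifact": artifact_path,
            "sha256": sha256_file(artifact_path),
            "recorded_at": time.time(),
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return entry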

Develop a threat detection and incident response plan

  • AI threat monitoring. Integrate threat detection models within deployed applications for continuous monitoring. Use intrusion detection systems to provide real-time alerts on unusual behavior (a minimal sketch follows this list).
  • Incident response plan. Establish a structured response playbook specifying roles, responsibilities, and communication. Conduct regular exercises to prepare teams for cybersecurity incidents.
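
To make the monitoring item concrete, here is a minimal Python sketch of sliding-window rate alerting, a crude stand-in for the kind of signal an intrusion detection pipeline raises during a DDoS-style event. The window size, threshold, and alert action are illustrative assumptions.

    # Minimal sketch: alert when request volume in a sliding window exceeds a
    # threshold, approximating real-time alerting on unusual traffic.
    import time
    from collections import deque

    class RateAlert:
        def __init__(self, window_seconds=60, threshold=500):
            self.window = window_seconds
            self.threshold = threshold
            self.events = deque()

        def record(self, timestamp=None):
            now = time.time() if timestamp is None else timestamp
            self.events.append(now)
            # Evict events that have aged out of the sliding window.
            while self.events and self.events[0] < now - self.window:
                self.events.popleft()
            if len(self.events) > self.threshold:
                self.alert(len(self.events))

        def alert(self, count):
            # In practice this would page an on-call responder per the playbook.
            print(f"ALERT: {count} requests in the last {self.window}s")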

This approach unifies election security by embedding cybersecurity and AI protections across data handling, model integrity, and deployment. It targets key threats, including phishing, ransomware, and DDoS attacks, and integrates proactive AI-specific safeguards, such as adversarial training and zero-trust architecture. A collaborative federal framework enables shared governance and compliance, aligning federal and state efforts. Continuous threat monitoring and structured incident responses ensure preparedness against evolving attacks.

Building ethical AI systems in the presence of open-source models

Even with these built-in technology safeguards, no system is devoid of vulnerabilities. Prolific open-source development in AI has paved the way for open-source models to perform on par with more established models, including those from OpenAI, Google, and Microsoft. Integrating security measures into a tech stack built on open-source models will be challenging. Ethical considerations and technical measures are crucial to ensuring the responsible use of AI in elections. Further complicating the matter, every individual and institution has its own interpretation and application of ethical standards.

Benefits of a unified approach

A unified federal approach offers a range of benefits to states and voters. For example, it would standardize protections across all states, closing the current gaps in electoral campaigning and security. It would also strengthen electoral infrastructure resilience and ensure that every citizen’s vote is equally secure, irrespective of state. Ultimately, implementing standardized cybersecurity practices within the tech stack can enhance voter data protection, provide more robust AI safeguards, and increase public confidence in the electoral process.

A comprehensive framework would also facilitate collaboration among private sector stakeholders, state governments, and federal agencies. Moreover, it would enable the swift identification and mitigation of emerging threats, ensuring that the nation’s electoral infrastructure can adapt to new challenges as they arise.

About the Author

Shivani Shukla is the Associate Dean of Business Analytics and Professor of Analytics at the University of San Francisco. She specializes in operations research, statistics, and AI, with several years of experience in academic and industry research. Shivani can be reached on LinkedIn.
