
The Good & the Nasty: Influence of AI in Politics & 2024 U.S. Elections

Technology has always shaped political campaigns. At the turn of the 21st century, the Internet opened the door to broader reach and cost-effective campaigning on social media and digital platforms. These new-age tools of information sharing give voters and audiences an independent voice and a wider range of political viewpoints to choose from, for better or worse. More recently, AI- and GenAI-led mishaps and propaganda have made the dangers of misinformation and disinformation harder to ignore. As we gear up for the upcoming presidential elections, let’s look at the gripping impacts of AI in politics in the U.S., especially on the transparency, integrity, and authenticity of electoral processes.

AI in Politics: So, Here’s the Good Part 

  1. Wider Campaign Reach: AI regulation in the U.S. is evolving slowly, but AI-assisted campaigns are already on a roll. Marketing AI that automates repetitive tasks and supports decision-making brings the kind of technical sophistication the political arena needs. AI-driven tools analyze data at scale to understand voting patterns and behaviors and to craft relevant, well-targeted campaigns that reach a wider voting audience.
  2. Hyper-Personalization for Voter Engagement: Personalized messaging in political marketing is not new. Findings from the Pew Research Center on social media’s role in the 2016 presidential elections underscore the effect of personalized messaging on voter behavior, and campaigns use it to tailor their messages and influence voters. GenAI tools built on Large Language Models (LLMs) are extensively used to draft political speeches and write fundraising texts and emails. AI-powered chatbots and conversational voice agents that respond to voter queries in multiple languages and deliver contextually accurate replies enable hyper-personalized interactions with voters across communities.
  3. Voter Turnout Predictions for Resource Planning: Predictive analytics can forecast voter turnout across locations based on factors such as voter demographics, socioeconomic events, recent trends, and transport connectivity. Comparing these signals against historical data helps predict the areas with the highest turnout and allocate voter-mobilization resources accordingly (see the sketch following this list).
  4. Flagging of Misinformation: Recently, at the World Economic Forum’s Building Trust through Transparency event in Davos, Chair François Valérian spoke about AI’s potential to identify corruption patterns and conflicts of interest that undermine democracy. Governments and regulators can leverage AI to detect nefarious AI-generated content on social media, and platforms like Facebook and X have invested in AI-led tools to flag fake accounts spreading misleading information.
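To make point 3 concrete, here is a minimal, hypothetical sketch of turnout forecasting with off-the-shelf predictive analytics. The precinct-level table, column names, and figures are purely illustrative assumptions; a real campaign would train on far richer census, registration, and historical election data.

```python
# Hypothetical sketch: forecasting precinct-level turnout from demographic
# and historical features. All column names and numbers are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Toy precinct-level table (in practice: census, voter-registration, and
# past-election data joined on a precinct identifier).
df = pd.DataFrame({
    "median_age":      [34, 52, 41, 29, 61, 45],
    "median_income_k": [48, 72, 55, 39, 66, 58],
    "pct_registered":  [0.71, 0.88, 0.80, 0.64, 0.91, 0.77],
    "turnout_prior":   [0.55, 0.74, 0.63, 0.49, 0.78, 0.60],
    "turnout_next":    [0.58, 0.76, 0.66, 0.47, 0.80, 0.62],  # target
})

X = df.drop(columns="turnout_next")
y = df["turnout_next"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Mean absolute error:", mean_absolute_error(y_test, predictions))
# A campaign could rank precincts by predicted turnout to decide where to
# send canvassers, mailers, and transport support.
```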

5. Preventing Cybersecurity Threats: Machine learning (ML) algorithms that analyze network traffic can flag unusual patterns and anomalies before they become cybersecurity incidents, helping safeguard election infrastructure from hacking and infiltration. To this end, the Cybersecurity and Infrastructure Security Agency (CISA) is applying AI to cybersecurity in ways that respect privacy and civil liberties, including the real-time exchange of machine-readable cyber threat indicators and preventive measures to protect electronic voting systems and voter databases and reduce the frequency of cyber incidents. A minimal anomaly-detection sketch follows this paragraph.
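As a rough illustration of the anomaly-detection idea above, the sketch below fits scikit-learn’s IsolationForest on made-up baseline traffic features for an election network and flags events that deviate from that baseline. The feature names and values are assumptions for illustration, not a description of CISA’s actual tooling.

```python
# Hypothetical sketch: unsupervised anomaly detection over simple,
# made-up network-traffic features. Not a depiction of any real system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [requests_per_minute, failed_logins, outbound_mb]
baseline_traffic = np.array([
    [120, 1, 4.2],
    [110, 0, 3.9],
    [130, 2, 4.5],
    [125, 1, 4.1],
    [118, 0, 4.0],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_traffic)

new_events = np.array([
    [122, 1, 4.3],     # looks like normal traffic
    [900, 45, 60.0],   # login-failure burst plus unusual outbound volume
])
for event, label in zip(new_events, detector.predict(new_events)):
    # predict() returns -1 for anomalies and 1 for inliers
    print(event, "ANOMALY" if label == -1 else "ok")
```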

6. Personal Data Protection: Social engineering tactics mislead users into sharing personally identifiable information (PII), which attackers exploit for identity theft or to gain access to victims’ networks and endpoint devices, and such incidents are common during election campaigns. Cybersecurity incidents like phishing, manipulation of electronic voting systems, misallocation of reporting resources, and inaccurate computation of results put the integrity of voting systems at risk. Tech providers and lawmakers have been working on robust cybersecurity strategies that use AI itself to prevent the misuse of personal data.

7. Prevention of Biases & Discrimination in AI Systems: Discriminatory AI systems built on faulty models lead to unfair and biased decisions in public and political contexts. These risks can be reduced with AI model governance tools. IBM OpenPages with Watson’s watsonx.governance capability, launched last year, monitors AI models for accuracy, drift, bias, and GenAI quality in support of fair and equitable election campaigns. A generic sketch of the kinds of drift and fairness checks such tools automate follows this paragraph.
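The sketch below illustrates, in generic terms, two checks that model-governance tools of this kind automate: a population stability index (PSI) to detect drift in a model’s score distribution, and a demographic parity difference as a simple fairness signal. It uses synthetic data and plain NumPy; it is not the OpenPages or watsonx.governance API, and the 0.2 PSI threshold is a common rule of thumb rather than a standard.

```python
# Generic, self-contained illustration of drift and fairness checks.
# Synthetic data only; not tied to any vendor's API.
import numpy as np

def psi(reference, current, bins=10):
    """Population stability index between two score distributions."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 5, size=1000)  # scores at deployment time
current_scores = rng.beta(3, 4, size=1000)    # scores observed this week

# PSI above roughly 0.2 is often treated as meaningful drift.
print("PSI:", round(psi(reference_scores, current_scores), 3))

# Demographic parity difference: the gap in positive-decision rates
# between two (synthetic) groups scored by the same model.
group = rng.integers(0, 2, size=1000)
decision = (current_scores > 0.5).astype(int)
gap = abs(decision[group == 0].mean() - decision[group == 1].mean())
print("Demographic parity difference:", round(gap, 3))
```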

Now, to the Nasty Influence of AI in Politics & the 2024 Election Cycle

In 2024, 40 nations, together contributing more than 50% of global GDP and including some of the world’s largest democracies, will hold elections. Technological interference in geopolitics is a given in the era of free speech and freedom of expression. In the U.S., with the dangers of AI and GenAI misuse lurking, partisan typology takes a backseat. The Pizzagate conspiracy from the 2016 election cycle did not use AI, but it stands as a notable example of emerging technologies’ invisible hand in political campaigns.

For the 2024 election cycle, GenAI is amplifying existing risk factors in election systems, security, and administration. We have listed some of the negative influences of AI in politics below. You get to decide which of them qualify as the bad and which as the ugly side of the nasty!

Also, explore AI use cases in governance, risk, and compliance (GRC) with IBM OpenPages with Watson.  

  • AI-generated Robocalls: Robocalls and deepfake calls have been reported all over the world. Recently, a robocall impersonating President Biden, with misleading content, circulated in New Hampshire, targeting voters and advising them not to cast their vote. The call was linked to two companies in Texas. Soon after, the Federal Communications Commission announced a ban on robocalls that use AI voice cloning, though the Commission has yet to establish rules on the use of AI tools in political advertisements and promotional campaigns. On February 20, 2024, the White House announced a task force to research and report on the use of AI in politics, with clear safeguards, and its impact on the elections.
  • False Information: AI technologies give bad actors the opportunity to fabricate content, build fake images, and push false narratives that erode public trust in election systems. These technologies can also be misused to incite fear and threats of physical harm against election administrators. AI-generated fake content has already tampered with elections in countries like Argentina and Slovakia. The LLMs fueling AI tools and systems have been found to carry 23 risks in their architectures and foundation models that can negatively influence companies and election systems.

Additionally, cybercriminals can leverage AI-generated audio clips to impersonate employees, gain access to election-related information, and persuade voter communities and agencies to change their views and actions.

  • Algorithmic Manipulation of Social Media: Nearly 64% of U.S. adults say disinformation and fake news are common on social media feeds, according to a 2023 UNESCO survey. Misinformation spread over social media platforms and other sources alters public perception of elections and election results.
  • Rise of Soft Fakes: GenAI’s deepfakes are digitally altered visual content, cheap fakes are poor-quality manipulated content, and soft fakes are synthetic images, audio, and videos created to promote political candidates. Recently, a fake image of Donald Trump seated among smiling Black people surfaced online. It was created and posted on the Facebook profile of a Trump supporter and garnered 1 million views. This soft fake, aimed at influencing voters from that community, sparked further debate about the growing use of AI-generated imagery in campaigns. Poor digital content moderation and AI content that misleads voters with divisive messaging are serious threats to democracy.
  • Malware and Phishing Attacks: Bad actors can use AI coding tools to build or enhance malware to evade detection systems. They also aid in spear phishing attacks against election officials and staff to hijack sensitive information.  

The protection of democracy against the nefarious impacts of AI in politics is the end goal! 

AI is just a tool for pushing out content at speed and scale. What compounds AI risks is the fluidity of public perception combined with the inherent vulnerability of political systems and the ‘self-sanctioned power’ of threat actors. 

Regulatory organizations like CISA have shared a Cybersecurity Toolkit to Protect Elections, with guidelines for securing voter information, websites, email, and networks. The European Commission also offers guidance for tackling online disinformation and misinformation.

Whether you side with conservative Republicans or liberal Democrats, the end goal is protecting democracy and election systems.

Who do you think has a better game for 2024’s election cycle? 

Also, if you’d like to keep up with the ever-changing industry regulations, trust our experts to unlock a single-pane view of risks across your organization using IBM OpenPages with Watson. 

Contact our experts today to harness infused AI capabilities to meet your GRC needs.