Last Tuesday, President Trump announced plans for billions of dollars in private-sector investment to strengthen artificial intelligence infrastructure in the United States. The initiative underscores his commitment to maintaining American leadership in AI research and industrial innovation. The Trump administration still faces many pressing questions about how to navigate the expanding influence of artificial intelligence. Chief among them: Are large language models (LLMs)—from OpenAI’s ChatGPT to Google’s Gemini—politically biased? A growing body of research suggests that they lean left. In my own studies, I have found that LLMs are more likely to use terminology favored by Democratic lawmakers, propose left-leaning policy solutions, and use more favorable language when discussing left-leaning public figures compared with their counterparts on the right.
This bias isn’t necessarily deliberate. LLMs are trained on troves of Internet content—ranging from news articles and Wikipedia entries to social media feeds, blog posts, and academic papers. Because these sources often reflect the cultural and political perspectives of their authors, the models inevitably absorb certain biases. These biases can become more pronounced during a model’s “fine-tuning” stage, when human contractors instruct the model on conversational norms. Even well-meaning trainers can shape the model by introducing their own frames of reference or assumptions about their employer’s expectations.
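To make the fine-tuning stage concrete, here is a minimal sketch of the kind of preference record human raters produce when ranking model outputs. The schema is entirely illustrative, not any lab’s actual format; the point is that the rater’s subjective choice becomes the training signal.

```python
# A simplified illustration of a human-feedback preference record.
# Field names are hypothetical; real labs use their own schemas.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    completion_a: str
    completion_b: str
    preferred: str  # "a" or "b", chosen by a human rater

# The rater's choice, not any objective standard, is what the model
# is trained toward. If raters consistently prefer one framing of a
# political topic, the fine-tuned model learns to reproduce it.
example = PreferencePair(
    prompt="Summarize the debate over a minimum-wage increase.",
    completion_a="Supporters say it lifts workers out of poverty...",
    completion_b="Critics say it prices low-skill workers out of jobs...",
    preferred="a",  # a rater's subjective judgment
)

print(f"Reward signal favors completion {example.preferred!r}")
```

Multiply that subjective choice across millions of records, and the raters’ shared assumptions quietly become the model’s defaults.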
Political biases in AI can have profound societal repercussions. If mainstream AI systems exhibit a uniform ideological slant, public discourse may narrow. If users sense political bias in AI-generated content, they may start to perceive these tools as manipulative rather than neutral, corroding trust in a technology meant to be broadly useful. Finally, more conservative organizations might feel tempted to develop their own politically skewed models. Users gravitating toward AI systems that align with their political beliefs could further entrench ideological echo chambers and reinforce polarization.
So, what can the Trump administration do about political bias in AI systems? There’s no easy solution. Trying to mandate political neutrality in AI is inherently fraught because “neutrality” itself has no universal definition—especially when different groups can’t agree on core values or what constitutes a fair perspective. Any data or human input that shapes these systems will often reflect particular cultural assumptions.
Prominent tech leaders who supported Trump’s presidential campaign, including Elon Musk, Marc Andreessen, and David Sacks, have previously expressed concerns about AI political bias. But Republicans have traditionally opposed expansive government regulation. Advocating now for strict federal oversight of AI’s political predispositions would represent a significant departure from their longstanding stance.
Moreover, any attempt by the Trump administration to regulate AI’s political biases would likely be met with skepticism by at least half the country, and much of the world. Adding to the complexity, Musk’s own AI company, xAI, has developed Grok, a flagship language model integrated into the X platform (formerly Twitter). Given Musk’s close ties to the administration, any White House effort to address AI bias would inevitably face scrutiny over potential conflicts of interest.
Ironically, AI labs at large companies may instinctively self-correct if they anticipate that the new administration will take a dim view of perceived AI bias. For instance, Meta’s recent decision to suspend fact-checking initiatives on social media could be seen as a strategic move to align with the Trump administration rather than a genuine push for neutrality. Whether such moves are driven by self-preservation or sincere concern, it remains uncertain whether similar adjustments will extend to AI systems, and, if they do, whether they will bring meaningful change or merely cosmetic fixes.
In an ideal world, AI systems would be purely truth-seeking—grounded in facts and devoid of bias. But achieving complete neutrality may be unrealistic. Instead, society may need to adapt to AI systems that, much like news media and algorithm-driven social media feeds, might exhibit political favoritism. Nevertheless, policymakers and industry leaders can implement measures to minimize ideological distortion in AI models:
Prioritize Accuracy and Neutrality. More rigorous data vetting could help curb subtle ideological tilt, and it begins with carefully curating training data: merely scraping the Internet risks importing widespread biases, or outright falsehoods, into a model’s answers. (The first sketch following these recommendations illustrates one crude curation heuristic.)
Advance Research in Interpretability. AI systems often operate as “black boxes,” making it hard to understand why a model responds in a particular way. Interpretability tools that pinpoint which parts of the training data influence specific responses would aid in identifying and correcting biases; the second sketch below gestures at the question such tools try to answer.
Adopt Transparency Standards. Legislation requiring AI providers to disclose specifics about training methods, data sources, and known biases could earn bipartisan support, as users deserve to know if a system has built-in preferences on sensitive political topics.
Establish Independent Oversight. Instead of relying solely on academic studies or internal audits conducted by AI labs, we should arrange for independent, nonpartisan organizations to evaluate AI models routinely for political bias. Their assessments would provide transparency, helping both users and policymakers make informed decisions about managing and utilizing AI responsibly; the third sketch below shows, in toy form, what such an audit could measure.
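On the first recommendation, data vetting: the sketch below shows one crude curation heuristic, downsampling a corpus so that no political leaning dominates. The leaning labels and the balancing rule are my own illustrative assumptions; production pipelines are far more sophisticated.

```python
# A minimal sketch of source-level vetting for a training corpus.
# The leaning tags and the downsampling heuristic are illustrative
# assumptions, not any lab's actual pipeline.

from collections import Counter

# Hypothetical documents, each tagged with a coarse outlet leaning.
corpus = [
    {"text": "...", "source_leaning": "left"},
    {"text": "...", "source_leaning": "left"},
    {"text": "...", "source_leaning": "right"},
    {"text": "...", "source_leaning": "center"},
]

def balance_by_leaning(docs):
    """Downsample so no political leaning dominates the corpus."""
    counts = Counter(d["source_leaning"] for d in docs)
    cap = min(counts.values())  # size of the smallest group
    kept, seen = [], Counter()
    for d in docs:
        if seen[d["source_leaning"]] < cap:
            kept.append(d)
            seen[d["source_leaning"]] += 1
    return kept

balanced = balance_by_leaning(corpus)
print(Counter(d["source_leaning"] for d in balanced))
```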
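On interpretability: genuine attribution methods, such as influence functions, are mathematically involved, but the question they ask can be illustrated with a crude proxy that ranks training documents by textual similarity to a model’s response. The documents and the TF-IDF proxy below are illustrative stand-ins, not a real interpretability tool.

```python
# A crude proxy for training-data attribution: rank training documents
# by TF-IDF similarity to a model's response. Real methods (influence
# functions, TracIn) are far more involved; this sketch only
# illustrates the question they try to answer.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

training_docs = [
    "The tax cut spurred investment and wage growth.",
    "The tax cut widened inequality and starved public services.",
    "Minimum-wage research shows mixed employment effects.",
]
model_response = "Economists warn the tax cut mainly widened inequality."

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(training_docs + [model_response])
scores = cosine_similarity(doc_matrix[-1], doc_matrix[:-1])[0]

# Documents most similar to the response are candidate "influences."
for doc, score in sorted(zip(training_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```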
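And on independent oversight: a toy audit might pose symmetric prompts about paired public figures and compare the sentiment of the answers. In the sketch below, query_model is a hypothetical stand-in for a call to the system under test, and the four-word sentiment lexicon is a deliberately simplistic placeholder for a validated sentiment model applied across many prompt pairs.

```python
# A minimal sketch of an external bias audit: pose symmetric prompts
# about paired public figures and compare sentiment of the answers.
# query_model and the tiny lexicon are illustrative placeholders.

POSITIVE = {"visionary", "effective", "principled", "inspiring"}
NEGATIVE = {"divisive", "ineffective", "corrupt", "reckless"}

def sentiment(text: str) -> int:
    """Toy lexicon score: positive hits minus negative hits."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def query_model(prompt: str) -> str:
    # Placeholder: a real audit would call the model under test here.
    canned = {
        "left": "A visionary and inspiring leader.",
        "right": "A divisive and at times reckless figure.",
    }
    return canned["left" if "left" in prompt else "right"]

pairs = [("Describe this left-leaning politician.",
          "Describe this right-leaning politician.")]

gap = sum(sentiment(query_model(l)) - sentiment(query_model(r))
          for l, r in pairs) / len(pairs)
print(f"Mean sentiment gap (left minus right): {gap:+.2f}")
```

A persistent nonzero gap across a large, balanced battery of prompts is the kind of evidence an independent auditor could publish and track over time.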
In the early 2010s, many praised social media for connecting people and democratizing communication. The technology soon became a polarizing force, however, influencing elections, spreading misinformation, and reinforcing echo chambers. AI now stands at a similar crossroads: Will it serve as a trusted source of factual, balanced information, or will it become yet another battleground for partisan conflict? While perfect neutrality may be unattainable, acknowledging and addressing AI’s political bias is important. The Trump administration should carefully foster fair-minded AI without stifling innovation or free expression. Striking this balance won’t be easy, but the credibility of AI tools—and the broader political landscape—may depend on it.