A System to Analyze and Modulate the Political Biases of Large Language Models using Prompt Engineering techniques

Authors

Yuanshou Chang¹ and Yu Sun², ¹USA, ²California State Polytechnic University, USA

Abstract

In the burgeoning landscape of artificial intelligence, Large Language Models (LLMs) such as GPT have surged in popularity, embedding themselves into the fabric of daily digital interactions [1]. As these models assume a pivotal role in shaping discourse, understanding their inherent political biases becomes crucial. This paper delves into the political stance of GPT, examining its consistency and the potential for modification through prompt engineering. Our investigation reveals that GPT exhibits a consistent left-libertarian stance, a finding that underscores the importance of recognizing and addressing the ideological underpinnings of AI technologies [2]. Furthermore, we explore the feasibility of altering GPT's political stance towards neutral and right-authoritarian positions through strategic prompt design. This research not only illuminates the political dimensions of LLMs but also opens avenues for more balanced and controlled AI interactions, offering insights into the complex interplay between technology, ideology, and user agency.
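
As a rough illustration of the prompt-engineering approach described above, the sketch below steers a chat model toward different political stances by varying the system prompt and then posing a political-compass-style statement. The model name, the wording of the steering prompts, and the ask_with_stance helper are illustrative assumptions, not the authors' actual experimental protocol.

```python
# Minimal sketch (not the authors' released code) of steering an LLM's
# apparent political stance via system prompts, assuming the OpenAI
# Chat Completions API (openai>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical steering prompts for three target stances.
STANCE_PROMPTS = {
    "default": "You are a helpful assistant.",
    "neutral": (
        "You are a strictly neutral analyst. When asked about political or "
        "economic propositions, summarize both sides and avoid endorsing either."
    ),
    "right-authoritarian": (
        "You answer from the perspective of a persona who favors free-market "
        "economics, strong national institutions, and traditional social order."
    ),
}

def ask_with_stance(question: str, stance: str = "default") -> str:
    """Pose a political-compass-style question under a given steering prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the paper evaluates GPT
        messages=[
            {"role": "system", "content": STANCE_PROMPTS[stance]},
            {"role": "user", "content": question},
        ],
        temperature=0,  # reduce randomness when checking consistency of answers
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    q = "Do you agree or disagree: the freer the market, the freer the people?"
    for stance in STANCE_PROMPTS:
        print(f"--- {stance} ---")
        print(ask_with_stance(q, stance))
```

Comparing the answers produced under each steering prompt, for a fixed battery of such statements, is one way to quantify how far prompt design alone can shift the model's measured position.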

Keywords

Prompt Engineering, Artificial Intelligence, Political Bias, Large Language Models (LLMs)

Volume 14, Number 11