Evaluating Persona Prompting for Question Answering Tasks

Authors

Carlos Olea, Holly Tucker, Jessica Phelan, Cameron Pattison, Shen Zhang, Maxwell Lieb, Doug Schmidt and Jules White, Vanderbilt University, United States

Abstract

Using large language models (LLMs) effectively by applying prompt engineering is a timely research topic due to the advent of highly performant LLMs, such as ChatGPT-4. Various patterns of prompting have proven effective, including chain-of-thought, self-consistency, and personas. This paper makes two contributions to research on prompting patterns. First, we measure the effect of single- and multi-agent personas in various knowledge-testing, multiple choice, and short answer environments, varying question answering tasks along a dimension known as "openness". Second, we empirically evaluate several persona-based prompting styles on 4,000+ questions. Our results indicate that single-agent expert personas perform better on high-openness tasks and that effective prompt engineering becomes more important for complex multi-agent methods.
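For readers unfamiliar with the pattern, the sketch below illustrates a single-agent expert persona prompt of the kind evaluated here; the persona wording, example question, and helper function are hypothetical placeholders rather than the prompts used in our experiments.

```python
# Minimal sketch of a single-agent expert persona prompt (illustrative only;
# the persona text and example question are hypothetical, not from the paper).

def build_persona_prompt(persona: str, question: str, choices: list[str]) -> str:
    """Wrap a multiple-choice question in an expert-persona framing."""
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return (
        f"You are {persona}.\n"
        "Answer the following question and give the letter of the best choice.\n\n"
        f"Question: {question}\n{options}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_persona_prompt(
        persona="a professor of early modern European history",
        question="Which treaty ended the Thirty Years' War?",
        choices=["Treaty of Versailles", "Peace of Westphalia",
                 "Treaty of Utrecht", "Congress of Vienna"],
    )
    print(prompt)  # Send this string to an LLM of choice to elicit an answer.
```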

Keywords

Prompt Engineering, Large Language Models, Question Answering.

Volume 14, Number 11