Authors
Tejul Pandit (Palo Alto Networks, USA), Meet Raval (University of Southern California, USA), and Dhvani Upadhyay (Dhirubhai Ambani University, India)
Abstract
Aspect-Based Sentiment Analysis (ABSA) offers granular insights into opinions but often suffers from the scarcity of diverse, labeled datasets that reflect real-world conversational nuances. This paper presents an approach for generating synthetic ABSA data using Large Language Models (LLMs) to address this gap. We detail a generation process, built on GPT-4o, aimed at producing data with consistent topic and sentiment distributions across multiple domains. The quality and utility of the generated data were evaluated by assessing the performance of three state-of-the-art LLMs (Gemini 1.5 Pro, Claude 3.5 Sonnet, and DeepSeek-R1) on topic and sentiment classification tasks. Our results demonstrate the effectiveness of the synthetic data and reveal distinct performance trade-offs among the models: DeepSeek-R1 showed higher precision, Gemini 1.5 Pro and Claude 3.5 Sonnet exhibited strong recall, and Gemini 1.5 Pro offered significantly faster inference. We conclude that LLM-based synthetic data generation is a viable and flexible method for creating valuable ABSA resources, facilitating research and model evaluation without reliance on limited or inaccessible real-world labeled data.
Keywords
Aspect-Based Sentiment Analysis (ABSA), Synthetic Data Generation, Large Language Models, GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, DeepSeek-R1, Comparative Analysis of LLMs