Shifting adolescent perceptions of Artificial Intelligence: a pilot study and open challenges
Abstract
Large Language Models (LLMs) and chatbots like ChatGPT have revolutionized artificial intelligence (AI) systems, offering unprecedented capabilities for processing unstructured data with remarkable accuracy and scope. However, there is a widespread misconception that AI with LLMs can effortlessly solve every problem, one that overlooks limitations such as bias, hallucinations, and inaccuracies in reasoning. Crafting effective prompts and verifying results are essential for using LLMs effectively and for avoiding blind trust in their responses. Recognizing the fallibility of AI can also help alleviate concerns about the devaluation of one's own abilities. In this article, we present a psychological intervention conducted in a secondary school with 21 students, covering theoretical and practical concepts of AI and LLMs, followed by exercises with ChatGPT in which students created educational conversations and applied prompting strategies. The preliminary results are encouraging: high student satisfaction, improved interaction with the LLM, a reduced perception of AI as a threat, and a better understanding of its limitations, such as unreliability and limited flexibility. Additionally, an analysis of open-ended responses and a comparison with standardized questionnaires suggest the need for new methods to assess the quality of interaction with LLMs, given how much these interactions differ from those with humans and with traditional systems. Our findings can guide future studies on AI training, which should explore in depth aspects such as bias, impact on creativity, ethical use, impact on work, and the rights of content creators.
Keywords
- ChatGPT
- large language models
- Human-Computer Interaction
- prompting
- AI perception
- AI limitations
- AI literacy