Publication

AI and decision-making under risk : a behavioural study exploring how large language models may affect our risk preferences

datacite.subject.fosCiências Sociais::Psicologia
dc.contributor.advisorAlmeida, Ana Filipa Martinho de
dc.contributor.authorSeabra, Lucas Bagnari de
dc.date.accessioned2025-04-11T10:21:43Z
dc.date.available2025-04-11T10:21:43Z
dc.date.issued2024-11-15
dc.date.submitted2024-09
dc.description.abstractThis dissertation investigates the role of AI, particularly Large Language Models, in influencing risk-taking behaviours in a decision-making context, hypothesizing a diffusion of responsibility in human-AI interactions. A Randomized Controlled Trial was employed, with participants completing a risk elicitation task – the Bomb Risk Elicitation Task – across two sequential rounds. Participants were either assisted by an AI-powered chatbot during the task or placed in a control group without AI assistance. Measures of trust in and attitudes towards AI, as well as general risk aversion, were collected to serve as control variables. Participants' locus of control was also measured to test the diffusion of responsibility hypothesis. A total of 138 participants completed an online experiment. Results indicate that AI assistance had a significant effect on participants' risk preferences, particularly in the second round of the task. Notably, the outcome of the first round proved to be an important factor in this dynamic. Among those who did not have a successful outcome in the first round, participants in the control group exhibited greater risk aversion in the subsequent round, a pattern that was not observed in the AI-assisted group. Further analyses indicated that trust in AI and an external locus of control marginally moderated this effect, pointing to a diffusion of responsibility with the AI. Additional findings suggest that AI assistance had a rationalizing effect on participants: the proportion of risk-neutral participants increased from 6% in the control group to 28% in the treatment group, indicating an approximation of rational decision-making with AI assistance. The findings suggest that AI assistance can alter risk preferences, potentially through mechanisms of increased confidence or diffusion of responsibility.
This dissertation contributes to our understanding of human-AI interaction and highlights the need for further studies to disentangle these effects and explore their implications for decision-making in high-stakes environments.eng
dc.identifier.tid203881176
dc.identifier.urihttp://hdl.handle.net/10400.14/52963
dc.language.isoeng
dc.rights.uriN/A
dc.subjectArtificial intelligence
dc.subjectHuman-AI interaction
dc.subjectRisk-taking behaviour
dc.subjectDiffusion of responsibility
dc.subjectAI-assisted decision-making
dc.subjectInteligência artificial
dc.subjectInteração humano-IA
dc.subjectComportamento de tomada de risco
dc.subjectDifusão de responsabilidade
dc.subjectTomada de decisão assistida pela IA
dc.titleAI and decision-making under risk : a behavioural study exploring how large language models may affect our risk preferenceseng
dc.typemaster thesis
dspace.entity.typePublication
thesis.degree.nameMestrado em Psicologia na Gestão e Economia

Files

Original bundle
Name:
203881176.pdf
Size:
1.81 MB
Format:
Adobe Portable Document Format
License bundle
Name:
license.txt
Size:
3.44 KB
Format:
Item-specific license agreed upon to submission