Triage Against the Machine: Can Artificial Intelligence Reason Deliberatively?
This project investigates the ability of artificial intelligence (AI), specifically large language models (LLMs), to emulate human deliberative reasoning—a cognitive process fundamental to democratic and sustainable decision-making.
Unlike AI’s typical reliance on pattern recognition and statistical learning, human deliberative reasoning is inherently social and context-dependent, having evolved to assess arguments within group settings through reciprocity of reasons (the collective identification of relevant reasons) and reflectivity of reasons (the integration of those reasons into decision-making). Using the Deliberative Reason Index (DRI), this study evaluates the extent to which LLMs can replicate such intersubjective reasoning by simulating real-world deliberative processes across a range of scenarios, including non-dialogical, role-playing, and dialogical chain-of-thought simulations.

We expect that LLMs will struggle to match human deliberative reasoning because they lack social and contextual adaptability, a shortfall that could undermine the efficacy and sustainability of AI in democratic decision-making processes. By delineating AI’s potential and limitations in democratic governance, the investigation aims to provide insights that guide future AI development and its responsible integration into decision-making contexts.
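To make the study design concrete, the sketch below illustrates how the three simulation conditions and a DRI-style score might be orchestrated. Everything in it is an assumption for illustration: the query_llm callable, the prompt templates, and dri_proxy are hypothetical placeholders, and the scoring function is only a simplified proxy for the published DRI calculation, not the project's actual instrument.

```python
# Illustrative sketch only. `query_llm`, the prompt wording, and `dri_proxy`
# are hypothetical stand-ins, not the project's actual pipeline or the
# published DRI formula.
from itertools import combinations
from typing import Callable, Dict, List

import numpy as np
from scipy.stats import spearmanr


def dri_proxy(considerations: np.ndarray, preferences: np.ndarray) -> float:
    """Simplified stand-in for the Deliberative Reason Index.

    considerations: participants x statements matrix of agreement ratings.
    preferences:    participants x options matrix of preference rankings.
    For every pair of simulated participants we compute rank agreement on
    considerations and on preferences, then correlate the two sets of pairwise
    agreements: high values mean shared reasons track shared positions.
    """
    c_agree, p_agree = [], []
    for i, j in combinations(range(considerations.shape[0]), 2):
        c_agree.append(spearmanr(considerations[i], considerations[j]).correlation)
        p_agree.append(spearmanr(preferences[i], preferences[j]).correlation)
    rho, _ = spearmanr(c_agree, p_agree)  # needs at least 3 participant pairs
    return float(rho)


def run_condition(
    query_llm: Callable[[str], str],
    condition: str,
    personas: List[str],
    topic: str,
    rounds: int = 2,
) -> Dict[str, str]:
    """Collect raw LLM transcripts for one simulation condition."""
    transcripts: Dict[str, str] = {}
    if condition == "non_dialogical":
        # Each simulated participant answers in isolation, with no exchange of reasons.
        for p in personas:
            transcripts[p] = query_llm(
                f"As {p}, rate the considerations and rank the options on: {topic}"
            )
    elif condition == "role_playing":
        # A single model instance voices every persona within one prompt.
        transcripts["all"] = query_llm(
            f"Adopt each of these personas in turn ({', '.join(personas)}) and, for each, "
            f"rate the considerations and rank the options on: {topic}"
        )
    elif condition == "dialogical_cot":
        # Multi-round exchange: personas see the running discussion before answering.
        discussion = ""
        for _ in range(rounds):
            for p in personas:
                turn = query_llm(
                    f"Discussion so far:\n{discussion}\n"
                    f"As {p}, think step by step and respond on: {topic}"
                )
                discussion += f"\n{p}: {turn}"
        for p in personas:
            transcripts[p] = query_llm(
                f"Discussion:\n{discussion}\n"
                f"As {p}, now rate the considerations and rank the options."
            )
    return transcripts
```

In use, the transcripts from each condition would be parsed into the ratings and rankings matrices and passed to the scoring function, allowing the same DRI-style comparison to be made against human deliberation data under identical topics and personas.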