2025-12-12, Room 02
The increasing deployment of artificial intelligence (AI) in public and private decision-making has transformed the legal landscape, particularly in areas such as predictive policing, credit scoring, and automated judicial assistance. These algorithmic systems pose significant challenges to traditional legal doctrines of responsibility and accountability: when an AI-generated decision results in harm, discrimination, or the violation of individual rights, identifying a legally responsible party becomes increasingly difficult. This raises a fundamental legal question: how can responsibility be meaningfully attributed in the age of machine-led decisions? This paper critically examines the concept of algorithmic responsibility, understood as the allocation of legal and ethical accountability for actions or decisions made by AI systems. The study explores the limitations of existing legal frameworks in addressing harms caused by autonomous systems whose operation lacks intent, foreseeability, or control in the human sense. Drawing on comparative perspectives, particularly the European Union's Artificial Intelligence Act and the OECD AI Principles, the paper argues that there is an urgent need to reconceptualize legal responsibility in ways that reflect the distributed, data-driven, and often inscrutable nature of algorithmic operations. Beyond the legal analysis, the paper engages with normative and philosophical debates concerning the moral agency of algorithms, the erosion of human oversight, and the risk of dehumanized decision-making. It contends that while AI cannot be considered a legal person or moral agent, regulatory systems must ensure that human actors (developers, deployers, and public authorities) are held accountable through principles of transparency, explainability, and due process. Ultimately, the paper proposes a multi-level framework for algorithmic responsibility that combines legal liability, ethical oversight, and institutional governance. Such a framework is essential to safeguard the values of justice, fairness, and the rule of law in an era increasingly shaped by autonomous technologies.