Factors Influencing the Evaluation of AI-Based Legal Advice: The Effects of Punishment Severity and Personal Involvement
Generative AI is increasingly used in legal services, particularly in public-facing legal consultations intended to reduce costs and improve access to justice. Despite these benefits, concerns have been raised about the unauthorized practice of law by non-lawyers, unclear legal accountability, and the public's limited ability to assess the quality of AI-generated legal advice. These concerns underscore the importance of examining both public acceptance of and psychological resistance to AI-based legal services.
This study aims to examine empirically how people evaluate AI-generated legal advice by presenting identical legal consultation content attributed to different sources: either an AI lawyer or a human lawyer. A total of 160 adult participants will read legal consultation scenarios in a 2 × 2 × 2 design that manipulates three variables: (1) the source of the advice (AI vs. human), (2) the personal relevance of the case (self vs. other), and (3) the severity of punishment (low vs. high). Participants will then rate the usefulness, trustworthiness, and accuracy of the legal advice.
The expected findings are as follows. First, when the severity of punishment is high, participants are expected to rate identical legal advice as less useful, trustworthy, and accurate when it is attributed to an AI lawyer rather than a human lawyer. Second, when the legal scenario is personally relevant to the participant, advice attributed to an AI lawyer is expected to be evaluated less positively than the same advice attributed to a human lawyer. Third, when punishment severity is high and the scenario is personally relevant, advice from an AI lawyer is expected to receive significantly lower evaluations than advice from a human lawyer, reflecting the combined effect of the two factors.
The findings will provide a realistic assessment of public acceptance of AI-generated legal advice and offer practical implications for the design, regulation, and transparent implementation of future AI legal systems.