LLMs as intellectual partners: strengthening validity in psychological text classification
This lightning talk will pitch how large language models (LLMs) can serve as intellectual partners in the classification of psychological phenomena in text (“psychological text classification”). Drawing on empirical work (Bunt et al., in press) in which we developed and tested the validity of LLM-driven classifiers for phenomena such as reported speech and conversational repairs, I will argue that prompt-based interactions with LLMs can quickly generate insights into how to refine conceptualisations and operationalisations for text classification. Rather than replacing human coders, LLMs act as “collaborators” in an iterative cycle of classification and feedback, helping researchers spot ambiguities in definitions, catch errors, and challenge assumptions. This synergy can strengthen validity in psychological measurement, enabling both more robust conceptualisation of psychological phenomena in text and the efficient scaling of text-based research. By embracing LLMs as intellectual partners, we can advance methodological rigour and improve psychological science.