Accelerating Public Consultations with Large Language Models: A Case Study from the UK Planning Inspectorate
2023-04-18, A1

Local Planning Authorities (LPAs) in the UK rely on written representations from the community to inform their Local Plans, which outline the development needs of their areas. With an average of 2,000 representations per consultation and four rounds of consultation per Local Plan, the volume of information can be overwhelming both for LPAs and for the Planning Inspectorate, which is tasked with examining the legality and soundness of plans. In this study, we investigate the potential for Large Language Models (LLMs) to streamline representation analysis.

We find that LLMs have the potential to significantly reduce the time and effort required to analyse representations: simulations on historical Local Plans project a reduction in processing time of over 30%, and experiments show classification accuracy of up to 90%.

In this presentation, we discuss our experimental process, which used a distributed experimentation environment built on Jupyter Lab and cloud resources to evaluate the performance of the BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss the design and prototyping of web applications that support the assisted processing of representations, built with Voilà, FastAPI, and React. Finally, we highlight the successes and challenges we encountered and suggest areas for future improvement.


In the United Kingdom, Local Planning Authorities (LPAs) are responsible for creating Local Plans that outline the development needs of their areas, including land allocation, infrastructure requirements, housing needs, and environmental protection measures. This process involves consulting the local community and interested parties multiple times, which often results in hundreds or thousands of written representations that must be organised and analysed. On average, LPAs receive approximately 2,000 written representations per consultation, and each Local Plan requires four rounds of consultation. Analysing the representations from each round takes approximately 3.5 months.

The Planning Inspectorate is tasked with examining Local Plans to ensure they comply with national policy and legislation. The Inspectorate examines approximately 60 Local Plans a year, with each examination lasting around a year. The volume of information included in each Local Plan far exceeds the capacity of the Planning Inspectorate to read and analyse in detail. This can lead to important issues being overlooked, creating potential problems with the review process or exposure to legal challenges. Conducting a thorough and meticulous analysis of representations demands considerable time and effort from both LPAs and the Planning Inspectorate.

Together with the Planning Inspectorate, we conducted an AI discovery project to explore how Large Language Models (LLMs) can help reduce the time taken to analyse representations, improve resource planning, increase consistency in decision-making, and mitigate the risk of a key issue of material concern being missed.

We assessed the performance of competing models and demonstrated their effectiveness with proof-of-concept apps, for both LPAs and the Planning Inspectorate, that unify and streamline the assisted processing of representations. Our simulations on historical Local Plans projected a reduction of more than 30% in the time taken to analyse representations, and our experiments show that we can classify representations against the relevant policy in a Local Plan with up to 90% accuracy.
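To make the classification step concrete, the sketch below shows how a DistilBERT model might be fine-tuned to map representations to policy labels using the Hugging Face libraries. It is a minimal sketch, not the project's actual code: the policy labels, the CSV file name, and the hyperparameters are illustrative assumptions.

```python
# A minimal sketch, not the project's actual code: fine-tuning DistilBERT to
# classify representations against Local Plan policies. Policy labels, the
# CSV file name, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

POLICIES = ["housing", "transport", "environment"]  # placeholder labels

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(POLICIES)
)

# Expects a CSV with a 'text' column (the representation) and an integer
# 'label' column (the index of the policy it relates to).
dataset = load_dataset("csv", data_files="representations.csv")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)
splits = dataset.train_test_split(test_size=0.2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
)
trainer.train()
print(trainer.evaluate())  # loss on the held-out split
```

The same scaffold applies to the other models we evaluated (BERT, RoBERTa, XLNet) by swapping the checkpoint name passed to the `from_pretrained` calls.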

In this talk, we share our Python-based experimental process and its results. We delve into how we approached the problem, sourced and cleaned the data, and used a distributed experimentation environment with Jupyter Lab and cloud resources to evaluate the performance of the BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss our strategies for dealing with limited training data. Finally, we present the design and prototyping of two web applications using Voilà, and demonstrate how we iterated on them with FastAPI and React, as sketched below. Throughout the presentation, we highlight the successes and challenges we encountered and suggest areas for future improvement.
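As an illustration of that iteration step, here is a minimal sketch of what a FastAPI backend serving the classifier to a React front end could look like. The endpoint name, model checkpoint, and response shape are assumptions made for the example, not the prototype's actual API.

```python
# A minimal sketch of a FastAPI backend exposing the classifier; the
# checkpoint and endpoint are illustrative, not the prototype's actual API.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Representation triage (sketch)")

# Placeholder checkpoint: a fine-tuned model would be loaded here.
classifier = pipeline("text-classification", model="distilbert-base-uncased")

class Representation(BaseModel):
    text: str

@app.post("/classify")
def classify(rep: Representation) -> dict:
    # Return the top predicted policy label and its confidence score.
    return classifier(rep.text)[0]
```

Saved as `main.py`, this runs locally with `uvicorn main:app --reload`; the React front end then only needs to POST the representation text to `/classify`.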


Expected audience expertise: Python

Intermediate

Expected audience expertise: Domain

Intermediate

Abstract as a tweet

New study shows Large Language Models can accelerate public consultations by streamlining the analysis of representations for Local Plans. Results show the potential for 30% faster analysis and up to 90% classification accuracy #AI #NLP #DataScience #pyconde @PINSgov

Michele is a freelance data scientist based in Munich. He has implemented solutions for Contact Center Forecasting, Marketing Attribution, Out-of-Home Advertising, Natural Language Processing, Forecasting and Classification Models, Autonomous Robot Charging, Urban Traffic Optimisation, and other AI services for the governments of the United Kingdom and Hong Kong, and for private clients including Google, NASA, Stanford University, Huawei, Taxfix, Wayfair, and Telefónica. He holds a Ph.D. in computer science, earned for his research with the University of Trento, the IBM T.J. Watson Research Center, and the Qatar Computing Research Institute on querying, mining, and storing uncertain data, with a particular interest in data series. He has co-authored ten papers in top-tier data-management venues, including SIGMOD, VLDB, EDBT, KAIS, and DKE.

As Head of Data Science at Oxford Global Projects, I have dedicated my career to improving project planning and decision-making through data-driven methods. In my role, I lead technical projects involving advanced techniques such as natural language processing and machine learning, and I am responsible for managing a database of performance data from over 17,000 projects across all industries. This data is used to inform future projects and improve our understanding of project performance. In addition to my work at Oxford Global Projects, I serve as an external examiner for quantitative methods and data science courses at universities in Denmark.

My passion is to apply data science to the field of project management and help our clients achieve their objectives. I am constantly seeking new and innovative ways to do so and am excited to continue pushing the boundaries of what is possible. I have a strong track record of success, including leading external risk analysis on some of Europe's largest capital projects, contributing to project appraisal methodology for the UK Department for Transport, and presenting statistical analysis and results to senior management and high-level figures globally. I have also led work on a diverse range of projects, including the feasibility assessment of the first road between settlements in Greenland, the risk modeling of large scale nuclear new builds and decommissioning programs, and the development of an AI-based Early Warning System for the Development Bureau in Hong Kong. My expertise in data-driven project planning and risk analysis, as well as my ability to effectively communicate technical information to diverse audiences, have been key to my achievements in this field.