Andreas Leed

As Head of Data Science at Oxford Global Projects, I have dedicated my career to improving project planning and decision-making through data-driven methods. In my role, I lead technical projects involving advanced techniques such as natural language processing and machine learning, and I am responsible for managing a database of performance data from over 17,000 projects across all industries. This data is used to inform future projects and improve our understanding of project performance. In addition to my work at Oxford Global Projects, I serve as an external examiner for quantitative methods and data science courses at universities in Denmark.

My passion is to apply data science to the field of project management and help our clients achieve their objectives. I am constantly seeking new and innovative ways to do so and am excited to continue pushing the boundaries of what is possible. I have a strong track record of success, including leading external risk analysis on some of Europe's largest capital projects, contributing to project appraisal methodology for the UK Department for Transport, and presenting statistical analysis and results to senior management and high-level figures globally. I have also led work on a diverse range of projects, including the feasibility assessment of the first road between settlements in Greenland, the risk modeling of large-scale nuclear new builds and decommissioning programs, and the development of an AI-based Early Warning System for the Development Bureau in Hong Kong. My expertise in data-driven project planning and risk analysis, together with my ability to communicate technical information effectively to diverse audiences, has been key to my achievements in this field.


LinkedIn

https://www.linkedin.com/in/andreasleed/


Session

04-18
14:45
45min
Accelerating Public Consultations with Large Language Models: A Case Study from the UK Planning Inspectorate
Michele Dallachiesa, Andreas Leed

Local Planning Authorities (LPAs) in the UK rely on written representations from the community to inform their Local Plans, which outline development needs for their area. With an average of 2,000 representations per consultation and four rounds of consultation per Local Plan, the volume of information can be overwhelming for both LPAs and the Planning Inspectorate, which is tasked with examining the legality and soundness of plans. In this study, we investigate the potential for Large Language Models (LLMs) to streamline representation analysis.

We find that LLMs have the potential to significantly reduce the time and effort required to analyse representations, with simulations on historical Local Plans projecting a reduction in processing time by over 30%, and experiments showing classification accuracy of up to 90%.

In this presentation, we discuss our experimental process, which used a distributed experimentation environment with JupyterLab and cloud resources to evaluate the performance of the BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss the design and prototyping of web applications to support the assisted processing of representations using Voilà, FastAPI, and React. Finally, we highlight successes and challenges encountered and suggest areas for future improvement.
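
As a rough illustration of the kind of model evaluation described above (a minimal sketch, not the authors' actual pipeline), the snippet below fine-tunes a DistilBERT sequence classifier on a few hypothetical representation snippets using Hugging Face Transformers; the example texts, labels, and output directory are placeholders.

```python
# Minimal sketch: fine-tune a BERT-style classifier on consultation
# representations. Texts, labels, and paths below are hypothetical.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # one of several models one might compare

# Toy labelled examples standing in for real representation text.
texts = [
    "The proposed housing allocation exceeds local infrastructure capacity.",
    "We support the plan's emphasis on affordable housing provision.",
    "The site assessment ignores flood risk along the river corridor.",
    "The consultation process has been transparent and well communicated.",
]
labels = [1, 0, 1, 0]  # hypothetical scheme: 1 = objection, 0 = support

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Pad/truncate so the default data collator can batch the examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = Dataset.from_dict({"text": texts, "label": labels}).map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

training_args = TrainingArguments(
    output_dir="rep-classifier",      # placeholder output directory
    num_train_epochs=1,
    per_device_train_batch_size=2,
    logging_steps=1,
)

trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()

# Classify a new (hypothetical) representation with the fine-tuned model.
new_example = Dataset.from_dict(
    {"text": ["The plan fails to address traffic impacts."], "label": [0]}
).map(tokenize, batched=True)
print(trainer.predict(new_example).predictions.argmax(axis=-1))
```

In practice the same script could be parameterised over MODEL_NAME (e.g. bert-base-uncased, roberta-base, xlnet-base-cased) and dispatched to cloud workers to compare models, which is the spirit of the evaluation the talk describes.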

PyData: Natural Language Processing
A1