Michele Dallachiesa
Michele is a freelance data scientist based in Munich. He has implemented solutions for Contact Center Forecasting, Marketing Attribution, Out-Of-Home Advertising, Natural Language Processing, Forecasting and Classification Models, Robots Autonomous Charging, Urban Traffic Optimisation, and other AI services for the governments of the United Kingdom and Hong Kong, and for private clients including Google, NASA, Stanford University, Huawei, Taxfix, Wayfair, Telefónica, and others. He holds a Ph.D. in computer science, earned for research conducted with the University of Trento, the IBM T.J. Watson Research Center, and the Qatar Computing Research Institute on querying, mining, and storing uncertain data, with a particular interest in data series. He co-authored ten papers in top-tier data management venues, including SIGMOD, VLDB, EDBT, KAIS, and DKE.
@elehcimd
Session
Local Planning Authorities (LPAs) in the UK rely on written representations from the community to inform their Local Plans, which outline development needs for their area. With an average of 2,000 representations per consultation and 4 rounds of consultation per Local Plan, the volume of information can be overwhelming for both LPAs and the Planning Inspectorate tasked with examining the legality and soundness of plans. In this study, we investigate the potential for Large Language Models (LLMs) to streamline representation analysis.
We find that LLMs have the potential to significantly reduce the time and effort required to analyse representations, with simulations on historical Local Plans projecting a reduction in processing time of over 30%, and experiments showing classification accuracy of up to 90%.
In this presentation, we discuss our experimental process, which used a distributed experimentation environment with JupyterLab and cloud resources to evaluate the performance of the BERT, RoBERTa, DistilBERT, and XLNet models. We also discuss the design and prototyping of web applications to support the aided processing of representations using Voilà, FastAPI, and React. Finally, we highlight successes and challenges encountered and suggest areas for future improvement.
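To give a flavour of the kind of experiment discussed in the talk, the sketch below fine-tunes one of the BERT-family models on a representation classification task using the Hugging Face transformers and datasets libraries. The label set, the toy training examples, and the choice of checkpoint are illustrative assumptions, not the project's actual data or configuration.

```python
# Minimal sketch: fine-tuning a BERT-family model to classify representations.
# Labels, texts, and hyperparameters are hypothetical placeholders.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

labels = ["support", "objection"]  # assumed categories for illustration

# Toy examples standing in for real consultation representations.
train = Dataset.from_dict({
    "text": [
        "The proposed housing allocation is welcome and well evidenced.",
        "The plan fails to address flood risk on the allocated site.",
    ],
    "label": [0, 1],
})

checkpoint = "distilbert-base-uncased"  # one of the model families compared
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(labels)
)

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
)
trainer.train()
```

In the actual experiments, the same loop would be repeated per model and evaluated against held-out, human-labelled representations to obtain the accuracy figures reported above.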
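Similarly, the FastAPI prototype mentioned above could expose the classifier to a React front end through a small JSON endpoint. The following is a minimal sketch under assumed names; the endpoint path, response fields, and the stand-in sentiment checkpoint are illustrative only.

```python
# Minimal sketch of a FastAPI service for aided processing of representations.
# Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Representation triage (sketch)")

# Placeholder checkpoint standing in for the fine-tuned representation classifier.
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

class Representation(BaseModel):
    text: str

@app.post("/classify")
def classify(rep: Representation):
    # Return a suggested label and confidence for a single representation.
    result = classifier(rep.text, truncation=True)[0]
    return {"label": result["label"], "score": result["score"]}
```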