2025-10-11, Track A (LT-14)
Language: English
Large Language Models (LLMs) often struggle to provide current and comprehensive answers from vast, interconnected knowledge bases, a common challenge in business, legal, and administrative work. While traditional RAG improves LLM context, it can falter with complex, relationship-heavy information. GraphRAG offers a powerful solution: it leverages graph databases to enhance retrieval with structured relationships, leading to deeper contextual understanding.
This talk provides a practical introduction to implementing GraphRAG using Neo4j. We will explore how Neo4j can be used to construct knowledge graphs from unstructured data and enable advanced, relationship-aware retrieval. Attendees will learn the core concepts of GraphRAG and gain practical insights to build smarter RAG systems, capable of delivering more accurate and contextually rich LLM responses for complex real-world applications.
Large Language Models (LLMs) have revolutionized access to information, yet their inherent reliance on static training data often limits their ability to provide the most current, comprehensive, and contextually nuanced answers. This limitation is particularly evident when interacting with the vast, dynamically evolving, and highly interconnected knowledge bases found in many professional domains.
While Retrieval-Augmented Generation (RAG) offers a significant step forward by allowing LLMs to access external information, it frequently struggles with the intricate relationships embedded within complex datasets. This challenge is acutely felt in sectors such as legal research, business intelligence, and corporate knowledge management, where the ability to precisely navigate extensive documents and extract deep, interconnected insights is critical for informed decision-making. Traditional RAG, in these scenarios, often fails to select truly relevant context, resulting in fragmented or incomplete answers.
GraphRAG directly addresses these limitations by recognizing that information often derives its true meaning from its connections. This powerful paradigm enhances RAG by leveraging the structural richness of graph databases to represent knowledge as a network of entities and their relationships. By doing so, GraphRAG empowers LLMs to not just retrieve isolated text snippets, but to intelligently traverse and query the underlying relationships, leading to a far more comprehensive and contextual understanding of information.
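The contrast between retrieving isolated snippets and traversing relationships can be sketched with a toy in-memory graph (the entities and relations below are hypothetical examples; a real system would persist them in Neo4j and traverse them with Cypher):

```python
from collections import deque

# A tiny in-memory knowledge graph as (subject, relation, object) triples.
# All entities here are made-up illustrations.
TRIPLES = [
    ("Acme Corp", "ACQUIRED", "Beta Ltd"),
    ("Beta Ltd", "PARTY_TO", "Supply Agreement 2023"),
    ("Supply Agreement 2023", "GOVERNED_BY", "HK Contract Law"),
    ("Acme Corp", "HEADQUARTERED_IN", "Hong Kong"),
]

def neighborhood(start, max_hops=2):
    """Collect all triples reachable from `start` within `max_hops`,
    treating relations as undirected for retrieval purposes."""
    seen_entities = {start}
    collected = []
    frontier = deque([(start, 0)])
    while frontier:
        entity, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for s, r, o in TRIPLES:
            if entity in (s, o):
                if (s, r, o) not in collected:
                    collected.append((s, r, o))
                nxt = o if entity == s else s
                if nxt not in seen_entities:
                    seen_entities.add(nxt)
                    frontier.append((nxt, depth + 1))
    return collected

# A flat keyword match on "Acme Corp" would never surface the supply
# agreement two hops away; the relationship traversal does.
context = neighborhood("Acme Corp", max_hops=2)
```

The hop limit plays the role of a query-time relevance boundary: widening it pulls in more distant context, exactly the knob a graph-backed retriever exposes that a flat vector search does not.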
Attendees will discover:
- The fundamental principles of GraphRAG and its significant advantages over conventional RAG methods in handling complex, interconnected datasets—a critical capability for applications in business analysis, legal document interpretation, and comprehensive knowledge base exploration.
- How Neo4j structures and queries interconnected data through its property graph model and the Cypher query language, enabling efficient, relationship-aware retrieval.
- An overview of Python tools for GraphRAG and how they simplify building GraphRAG applications, including automated knowledge graph construction (extracting entities and relationships from text) and support for a range of LLM providers and embedding models.
- Practical patterns for ingesting diverse data into Neo4j and leveraging its graph intelligence to enrich LLM prompts, thereby significantly improving the relevance, coherence, and depth of generated responses.
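The last pattern above, using graph-derived facts to enrich an LLM prompt, might look like the following minimal sketch. The Cypher query string and entity names are illustrative assumptions; in a real system the query would run against Neo4j via the official `neo4j` driver, and the triples would come from its results rather than a hard-coded list:

```python
# Illustrative Cypher a Neo4j-backed retriever might run (not executed here):
CYPHER = """
MATCH (e:Entity {name: $name})-[r*1..2]-(related)
RETURN e, r, related
"""

def triples_to_context(triples):
    """Verbalize (subject, relation, object) triples into prompt-ready lines."""
    return "\n".join(
        f"- {s} {r.replace('_', ' ').lower()} {o}" for s, r, o in triples
    )

def build_prompt(question, triples):
    """Enrich an LLM prompt with graph-derived facts as grounding context."""
    return (
        "Answer using only the facts below.\n\n"
        f"Facts:\n{triples_to_context(triples)}\n\n"
        f"Question: {question}\n"
    )

# Hypothetical facts retrieved from the graph for a user question.
facts = [
    ("Acme Corp", "ACQUIRED", "Beta Ltd"),
    ("Beta Ltd", "PARTY_TO", "Supply Agreement 2023"),
]
prompt = build_prompt("Which agreements could bind Acme Corp?", facts)
```

The enriched prompt is then passed to any LLM provider; because the facts arrive as explicit relationships rather than loose text snippets, the model's answer can follow the chain from Acme Corp to the agreement it inherited.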
Whether you're a Python developer eager to build more sophisticated LLM applications, a data scientist exploring advanced retrieval techniques, or a professional in business, law, or administration, this session will equip you with the foundational knowledge and practical insights to begin your journey with Neo4j-powered GraphRAG in a Python environment.
Hon Kwan Shun Quinson is an MPhil student in Computer Science at The Chinese University of Hong Kong. His academic pursuits and research focus on advancements in artificial intelligence. Throughout his academic journey, he has developed strong skills in natural language processing and machine learning. Notable experiences include developing graph-based Retrieval Augmented Generation (RAG) systems, training convolutional and Transformer models with PyTorch, and fine-tuning LLMs for applications such as Cantonese translation. He is also keenly expanding his research into the intersection of LLMs with reinforcement learning and formal methods for program synthesis, exploring novel pathways for enhancing AI capabilities and robustness. Committed to continuous growth in this rapidly evolving field, he is passionate about contributing to the future of AI.