RoboCon 2026

AI-Powered Bug Classification and Creation from Robot Framework Test Reports
2026-03-04, RoboCon Online

Discover how AI and Large Language Models (LLMs) can revolutionize software quality assurance by transforming Robot Framework test reports into actionable bug insights. This talk introduces an automated pipeline that classifies, summarizes, and creates bug tickets directly from Robot Framework results — integrating seamlessly with tools like TFS and Jira. Attendees will learn how to bridge testing and defect management intelligently.


Modern QA teams generate thousands of Robot Framework test logs and reports, but extracting meaningful insights from them — especially identifying and documenting bugs — remains a manual and time-consuming process.

This session presents a novel AI-driven Bug Classification and Creation framework, leveraging Large Language Models (LLMs) to automatically interpret Robot Framework outputs and turn them into structured bug reports.

Key topics covered (minimal code sketches for the parsing, classification, and bug-creation steps follow this list):

Parsing and enriching Robot Framework test results with metadata (suite, test, logs, screenshots).

Using LLMs to analyze failure patterns and generate human-readable bug summaries.

Intelligent bug classification: functional vs. performance vs. environment issues.

Automated bug creation: seamlessly pushing reports to TFS, Jira, or any modern ALM tool via APIs.

Integration patterns and architecture design for hybrid setups (on-prem or cloud).

Real-world demo: converting a Robot Framework test log into a detailed, ready-to-triage bug ticket.
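To make the parsing step concrete, here is a minimal sketch using Robot Framework's public result API (robot.api). It assumes a standard output.xml and collects only suite name, test name, tags, failure message, and elapsed time; log and screenshot enrichment would be layered on top of this.

```python
from robot.api import ExecutionResult, ResultVisitor


class FailureCollector(ResultVisitor):
    """Collect failed tests with enough context for later enrichment."""

    def __init__(self):
        self.failures = []

    def visit_test(self, test):
        if test.status == "FAIL":
            self.failures.append({
                "suite": test.parent.longname,    # full suite path
                "test": test.name,
                "tags": list(test.tags),
                "message": test.message,          # failure message from output.xml
                "elapsed_ms": test.elapsedtime,   # elapsed time in milliseconds
            })


result = ExecutionResult("output.xml")  # path to the RF output file (assumption)
collector = FailureCollector()
result.visit(collector)
print(f"Collected {len(collector.failures)} failing tests")
```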
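For the analysis and classification step, a sketch of the prompt-and-parse pattern is shown next. The llm_complete helper is a hypothetical placeholder for whichever LLM client is in use (cloud or on-prem, not a specific library), and the category names mirror the functional / performance / environment split described above.

```python
import json


# Hypothetical helper: wrap whichever LLM client you use (hosted or on-prem)
# so that llm_complete(prompt) returns the model's text response.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


CLASSIFY_PROMPT = """You are a QA assistant. Given a Robot Framework failure,
classify it as one of: functional, performance, environment.
Then write a one-paragraph, human-readable bug summary.
Respond as JSON with keys "category" and "summary".

Suite: {suite}
Test: {test}
Tags: {tags}
Failure message: {message}
"""


def classify_failure(failure: dict) -> dict:
    """Turn one collected failure record into a category plus summary."""
    prompt = CLASSIFY_PROMPT.format(
        suite=failure["suite"],
        test=failure["test"],
        tags=", ".join(failure["tags"]),
        message=failure["message"],
    )
    # Real code should validate the JSON and retry on malformed responses.
    return json.loads(llm_complete(prompt))
```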
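Finally, a sketch of the bug-creation step against Jira's standard REST API. The base URL, project key, and credentials are placeholders; a TFS / Azure DevOps target would use its work-item REST API in the same way.

```python
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder base URL
PROJECT_KEY = "QA"                                # placeholder project key
AUTH = ("bot@example.com", "api-token")           # placeholder credentials


def create_bug(failure: dict, classification: dict) -> str:
    """Create a Jira bug from a classified Robot Framework failure, return its key."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "issuetype": {"name": "Bug"},
            "summary": f"[{classification['category']}] {failure['test']} failed",
            "description": (
                f"{classification['summary']}\n\n"
                f"Suite: {failure['suite']}\n"
                f"Failure message: {failure['message']}"
            ),
            "labels": ["robotframework", "auto-triaged"],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]   # e.g. "QA-123"
```

In practice the same payload-building logic can target either tracker, keeping the pipeline tool-agnostic.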

Takeaways:

Learn how to connect Robot Framework’s structured outputs with LLM reasoning.

See practical steps to automate defect triage and documentation.

Understand how this approach reduces human effort, increases accuracy, and accelerates release cycles.

This talk is ideal for QA engineers, automation leads, and AI enthusiasts seeking to bridge the gap between test automation and intelligent defect management.


Categorize / Tags:

AI, Robot Framework, Automation Framework, Bug Classification, LLM, TFS, Jira, Quality Engineering, Test Results Analysis, DevOps, Intelligent Automation, Framework Architecture

Is this suitable for..?: Intermediate RF User, Advanced RF User

Describe your intended audience:

This talk is designed for automation architects, QA framework engineers, AI enthusiasts, and technical leads who focus on building or maintaining large-scale automation infrastructures. It’s especially relevant for teams using Robot Framework or similar tools who want to add AI-driven analysis, bug creation, and integration capabilities to their testing pipelines.

While the core ideas are technical, the concepts are explained in a way that both engineers and technical managers can understand and apply.

Full-Stack Lead Software Engineer in Test at SDAIA with 13 years of experience helping software engineers in test promote automation as a culture. Specializing in Robot Framework's AppiumLibrary and RequestsLibrary and in AI solutions built with the RF language, Mohamed uses that experience to enhance and spread the use of Robot Framework and to make its libraries easier to use.

AI and Backend Engineer specializing in designing intelligent automation ecosystems that merge backend architecture with AI and large language models (LLMs). Experienced in building self-learning QA platforms capable of automated test generation, root-cause analysis, and dynamic reporting pipelines.