GOOD 2025
What is the Contributor Jam? | The Contributor Jam will offer attendees an immersive opportunity to work closely with Open OnDemand (OOD) developers. Participants will gain an understanding of OOD's components and leave with the knowledge and tools to contribute to its development.
Room for optional breakout sessions
Room for optional breakout sessions
This tutorial provides an introduction to installing and configuring Open OnDemand, Slurm, and Keycloak. Open OnDemand offers a user-friendly web interface for managing HPC resources, allowing users to submit jobs, access files, and utilize interactive applications easily. Slurm, a robust workload manager, is introduced for efficient job scheduling and resource allocation in HPC clusters. Keycloak, an open-source identity and access management solution, is integrated to enhance security through authentication and authorization. By following this tutorial, users will gain practical knowledge and skills to deploy a seamless and secure HPC environment on their own PCs.
This tutorial will demonstrate how to integrate XDMoD job statistics graphics on the OnDemand dashboard and how to configure XDMoD to aggregate OnDemand usage logs.
A tutorial for setting up a Next.js node-status application with OOD.
OOD can leverage initializers to customize or extend options when generating forms and when using other OOD features. This talk covers what initializers are, how to set them up, and example initializers for different functions.
The conference will kick off with a welcome to everyone attending as well as a brief overview of recent major accomplishments related to Open OnDemand. Open OnDemand PI Alan Chalker will provide a vision for the future of Open OnDemand, particularly as it relates to the rapid adoption of AI technologies.
This talk gives a high-level overview of our user-centered, collaborative approach to infrastructure operations and software development for Open OnDemand at Harvard IQSS. We highlight past, present, and future projects for Open OnDemand feature development. This talk can serve as a springboard for a number of other presentations as needed to share more in-depth experiences with Open OnDemand.
The growing demands for accelerated computing in batch HPC and AI training call for innovative strategies to enhance infrastructure utilization. GPU fractionalization enables dynamic allocation and sharing of GPU resources, resulting in cost savings and improved efficiency. This talk will discuss key approaches, including NVIDIA Multi-Instance GPU (MIG) and JuiceLabs' dynamic GPU sharing software, highlighting their features, impact on system design, and interaction with the Open OnDemand ecosystem. We will also present a new initiative between Cambridge Computer and JuiceLabs to develop the integration of their GPU-sharing technology with Open OnDemand and discuss how institutions can start using the product today and contribute to the design and vision of the product.
Integrating AI-powered developer tools within VS Code and Jupyter Notebooks significantly enhances coding efficiency and productivity. This presentation will feature select coding assistant tools applicable to Open OnDemand users engaged in coding, encompassing beginner developers, data analysts, and experienced developers. Participants will receive feature overviews and installation guidance to facilitate the seamless adoption of AI-powered coding tools.
Adopting a new software platform that has the power to change the way teaching and research is done is rarely easy. In this talk, we explain how Princeton University Research Computing started with Open OnDemand (OOD), what we contributed, and what we learned. Hear about our problems, solutions and continued pain points.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
Room for optional breakout sessions
Room for optional breakout sessions
Room for optional breakout sessions
Open OnDemand (OOD) is ideally placed to bridge the divide between traditional shell-based batch HPC and emerging large language model (LLM) workflows. Our team deployed OOD internally at MERLIN, a small HPC research cluster located inside Perth Children’s Hospital, to provide HPC resources to a range of technical and non-technical users working with sensitive healthcare data. We adapted existing LLM web applications for OOD to provide a low-code playground for non-technical researchers such as medical doctors and research assistants to engage with LLM resources in a healthcare environment. This presentation will discuss the rationale and implementation of OOD LLM web applications in a highly restricted environment and motivate future improved support for these types of workflows within OOD.
Some organizations, such as CSC - IT Center for Science, provide the users with access to multiple supercomputers, where each of the supercomputers may have completely separate instances of Open OnDemand. This leads to a fragmented user experience, where the user is required to log in to another instance to access another supercomputer, as well as increased time spent on maintaining multiple instances. This talk targets system administrators, service owners, and other persons responsible for maintaining and developing Open OnDemand instances, and discusses the benefits and challenges of providing a single instance of Open OnDemand, which is connected to all of the organization's supercomputers and potentially even partner organizations' supercomputers.
This talk will describe the process of creating and deploying Singularity-based interactive applications in the Open OnDemand environment. Singularity containers offer a secure and portable way to package applications. When combined with Open OnDemand, they enable easy scaling of single-user applications to multi-user environments, while offloading user management to the Open OnDemand environment. The talk will also cover the use of Streamlit for creating Python-based web applications that integrate with Open OnDemand’s job submission system and shared file systems. Attendees will gain hands-on experience in deploying reproducible and easily accessible applications in a high-throughput computing environment.
A walk-through of concrete issues and performance improvements which CSC has identified when approaching a scale of hundreds of concurrent users on Open OnDemand. The aim is to inform system administrators and developers who are operating an OOD instance about potential pitfalls, as well as quirks which only become visible at a larger scale. This technical talk will consider site-specific issues, code-specific issues in the OOD upstream, as well as architectural impacts of using Passenger. A general understanding of the OOD architecture, Passenger's role in it, and Linux systems programming is beneficial.
Members of the leadership team of GOODLUCK, the most recent NSF-funded Open OnDemand project, will introduce the key elements of the four major thrusts of the project: building an apverse, gathering classroom solutions, developing cross-cutting solutions, and growing the community through affinity groups, with time for Q&A and feedback.
This session will be followed by the Idea Marketplace, where audience members will have an opportunity to have informal one-on-one discussions with the GOODLUCK team to provide feedback and join the effort.
Join us for an evening of posters, GOODLUCK conversations, and some fun and games.
Chat with our sponsors, meet fellow GOOD attendees and the Open OnDemand core team, and gear up for Day 3!
Drinks and refreshments will be served!
Integrating interdisciplinary collaboration in AI courses enables students to apply AI to real-world problems across diverse fields. In my AI classes, students partner with faculty from various departments, using Open OnDemand for GPU-based computations to develop AI solutions. This hands-on approach includes projects like rainfall prediction, wildlife imagery classification, and healthcare trend analysis. Open OnDemand supports these projects, enabling students to tackle data-intensive challenges. This framework builds technical skills and highlights AI’s impact across disciplines, preparing students to innovate beyond traditional boundaries. This poster showcases the work done with Open OnDemand across 7 different disciplines to show the interdisciplinary power of Open OnDemand.
As computational methods in research evolve, many researchers face challenges using traditional High-Performance Computing (HPC) systems. At the University of Virginia Research Computing, we address this by leveraging Open OnDemand (OOD) as a user-friendly, browser-based platform for Interactive HPC (IHPC) and Slurm job management. OOD simplifies HPC access with popular applications, virtual desktops for GUI-based tools, and seamless connectivity without the need for a VPN or command-line interaction. To enhance usability, we’ve added custom utilities for monitoring account status, managing scratch filesystem files, and generating Slurm scripts with Service Unit (SU) estimates.
These innovations reduce barriers to HPC use, enabling researchers to focus on their work while accessing powerful computing resources.
There are thousands of scientific and HPC applications that users need to carry out their work. OOD offers an incredible resource for users to access these applications, but how best to understand the existing OOD Apps, to learn from them, and to create your own? In this poster, we propose a simple, centralized repository for OOD Apps and some basic guidelines for creating and sharing new ones.
How we worked through Kerberized NFS storage challenges.
Learn how the Open OnDemand Dev team organizes the GitHub issues that drive the development and release process and how to contribute to the project.
This presentation will provide an overview of the EESSI project and its objectives, as well as our plans to integrate it into the OOD platform. This will allow for combined easy access to both scientific software and HPC resources in a single platform.
Join us to learn how HPC Centers and other sites are integrating MATLAB to work with cluster hardware and data portals. This session will cover tools and best practices to make MATLAB available to users on OOD as a browser-based app and as a Jupyter Notebook language plug-in, as well as available GPU and other parallel computing capabilities.
The Ohio Supercomputer Center (OSC) saw increased use of its OSC OnDemand web interface for classroom applications during the COVID-19 pandemic, particularly for R and Python. To address challenges in creating shareable and reproducible environments, OSC customized RStudio and Jupyter Notebooks for a dedicated classroom OOD instance. This setup allows instructors to manage and configure software environments for students, ensuring consistency and reducing setup time. Instructors can also access student workspaces to review and manage homework.
Learn how to build on-demand Slurm clusters using the Azure CycleCloud workspace for Slurm and extend it with an Open OnDemand portal connected to Slurm. This talk will provide you with technical know-how and practical insights to efficiently leverage these tools, ensuring your Azure computational resources are both scalable and flexible.
Not all commercially available software will run on Linux or Wine, compelling researchers to request Windows in HPC environments. Traditional solutions demand dedicated servers, Active Directory infrastructure, and specialized IT staff. 7lbd (7-layer bean dip) is an open-source project that eliminates this overhead by treating Windows as "just another Open OnDemand application," allowing users to launch secure Windows desktops in an isolated environment anywhere on their cluster while maintaining access to all of the user’s files. This solution simplifies Windows to a level that even Linux systems administrators will find easy to maintain.
Discover how to securely connect desktop applications directly to Open OnDemand jobs using mutual TLS authentication, all the way from the desktop client to the compute node. This technical presentation demonstrates a new proxy architecture enabling RDP, VNC, and other protocols to connect securely from desktop client programs to applications on compute nodes, with enhanced security controls missing from the current proxy implementation. It’s perfect for sites wanting to offer desktop client access while maintaining browser viewer capabilities, all with improved security.
Research computing organizations facilitate scientific investigations by providing access to computational resources, advanced networking, and ample storage to support the demands of scientific workflows. Traditional HPC systems run as self-contained environments with a head node that defines access to all resources and orchestrates operation of the cluster. Managing access to these services over the various lifetimes of hardware, software, clusters, and facilities presents challenges in maintaining access for users to different systems as they evolve. At UAB we are building a software defined HPC environment to manage evolution of our systems by implementing an A/B testing framework that leverages Open OnDemand as the web interface to different generations of hardware.
A comprehensive overview of taking a base container setup, building the image, and standing up OOD in a container.
Ever heard of coloring books for adults? Take a brain break, hang out with new friends and colleagues and color the "What's So Super About Super Computing?" OSC coloring book. Colored pencils provided!
This BoF will bring together administrators from different institutions who use OOD in a teaching context. We will have a panel of presenters talk about how OOD is implemented for teaching at their institutions, and then open up the floor for discussion with the panel. This BoF is intended to be useful to developers and administrators who are responsible for OOD in teaching use cases, as well as instructors who want a better understanding of what's going on behind the scenes with the platforms they use. It's our hope that this sharing of ideas will result in learning from the different solutions to similar problems unique to deploying OOD for teaching.
The Universidad de Sonora (UNISON) has successfully integrated Open OnDemand to address the growing demand for high-performance computing (HPC) resources among researchers and students, particularly those in data science with limited technical expertise in HPC. This presentation will showcase how Open OnDemand has empowered non-specialized users by providing an intuitive interface to access advanced computational resources, enabling breakthroughs in data-driven research. By detailing our implementation strategy, user-focused approach, and the resulting benefits, we aim to highlight the transformative potential of Open OnDemand for institutions facing similar challenges.
Tufts University hosts a vibrant bioinformatics community, many of whom are new to using the Linux command line for high-performance computing (HPC). Open OnDemand (OOD) simplifies access to HPC resources with its user-friendly web interface. At Tufts, we have deployed over 30 bioinformatics applications and nf-core pipelines on OOD, including custom RStudio servers tailored for bioinformatics. The nf-core pipelines enable users to run complex workflows with ease. Here, we share our experiences in building a custom RStudio server container for bioinformatics, deploying containerized applications as OOD apps, and transforming the complex command-line interfaces of nf-core pipelines into user-friendly OOD web applications.
The Ecosystem for Research Networking (ERN) CryoEM Remote Instrument Access Pilot Project aims to simplify wide-area internet access to scientific instruments and data sets through multi-institutional collaboration, with emphasis on under-represented and under-resourced institutions. The goal is a secure, web-based portal, built upon containerized Open OnDemand, providing federated access to scientific instruments and associated large data sets, and generating workflows paired with AI microservices, edge computing, and advanced computing for real-time experimental parameter adjustments and decisions. This talk will present an overview of the design and development efforts of this active project, concluding with a short video and a link to the open-source GitHub repository for community participation.
We present our experience of leveraging several publicly available container projects for RStudio Server and PostGIS/PostgreSQL Open OnDemand interactive apps. These containers have pre-configured software stacks that facilitate easier application-specific package installation by users, reducing user support requests on computing center staff. This talk describes the user-friendly OOD interfaces to these complex apps and how the containers are launched on an HPC cluster with Singularity.
The AlphaFold AI system won the 2024 Chemistry Nobel Prize because of its predictive achievements poised to revolutionize disease understanding and drug discovery. Although it was initially released as open source (and is now proprietary), researchers are working to improve the code to require fewer resources and maintain open-source accessibility. We present an open-source implementation of AlphaFold 2 & 3 that optimizes computational resource allocation by intelligently separating CPU and GPU phases within a single OOD instance. This addresses a critical challenge in making AlphaFold more accessible by minimizing idle GPU cycles. Benchmarking across three major clusters (NCSA Delta, Jetstream2, and ROAR), we developed a user-friendly OOD application that operates with maximum resource efficiency.
IFOM is a cancer biomedical research center, with the ultimate goal of translating discoveries into treatments and prevention strategies. Democratizing access to computational resources is the key step for our biomedical researchers to be independent in analyzing and exploring data; for bioinformaticians to deliver novel computational approaches with ease; and for the organization to have an organic, scalable, and sustainable platform. This talk walks through IFOM’s adoption of Open OnDemand: the challenges, the solutions, and the cultural implications of this technological integration.
Open OnDemand (OOD) is a transformative tool for Research and High Performance Computing Centers -- but getting it up-and-running at your institution can feel daunting. We recently took on this challenge, and trust me, if one guy from a small team at a small school can do it, then so can you! I will talk about excellent resources I had, lessons I learned, and some key takeaways from the experience. If you are considering OOD for your institution, then this is your chance to hear some positive first-hand experience.
Open OnDemand has evolved to offer powerful customization features, enabling institutions to tailor their instances like never before. This talk will explore how these features allow administrators and developers to create and deploy customizations easily, fostering a community-driven ecosystem of shared enhancements. Attendees will learn how to extend Open OnDemand using plugins and benefit from community contributions through practical examples, such as the metrics widget and the session card metrics developed by IQSS.
Whether you're managing an HPC environment or developing for Open OnDemand, you'll leave with practical knowledge on how to try, create, and share customizations that simplify administration and improve user experiences.
Rolling out Open OnDemand is one thing - ensuring it actually works for users is another. Without a structured testing strategy, unexpected issues can slip through, leading to frustrated researchers and overloaded support teams.
The OnDemand Template is a framework that simplifies and standardizes the app development process, aiming to reduce the learning curve for new developers. It offers a documented, generic application that provides most of the configuration required to get applications running in the OnDemand system. Users can specify parameters via the form file, which the template will utilize to self-configure. In most cases, users only need to specify the app metadata, modules to load, the command to execute, and whether it's VNC enabled. The template also utilizes a work-in-progress plugin system that allows developers to easily extend existing apps by dynamically adding attributes to the form file and evaluating scripts before the app starts.
The Advanced Research Computing department at the University of British Columbia (UBC) has been supporting a local HPC cluster for use by the entire UBC research community for nearly a decade. With demand for more interactive computing options coming from our researchers, we have begun implementing Open OnDemand as a portal for accessing our resources, and with that have come challenges around our existing architecture. This talk will provide a high-level overview of the challenges we faced and the solutions we explored. By sharing our journey, we hope to give other system administrators a view of both the ease of modifying Open OnDemand for current systems, as well as potential challenges to keep in mind when adopting Open OnDemand.
Secure and efficient access to High-Performance Computing (HPC) resources is critical for enabling scientific and technical innovation. Open OnDemand (OOD), a widely used web-based HPC access portal, simplifies user interactions with cluster resources. However, traditional authentication methods often present challenges, including limited scalability, complex configurations, and security vulnerabilities. Integrating Security Assertion Markup Language (SAML)-based authentication with OOD addresses these challenges by leveraging federated identity providers for seamless and secure single sign-on (SSO). This approach enables researchers and institutions to utilize existing identity management systems, ensuring compliance with organizational policies while streamlining user access.
Given the ever increasing cost of compute (especially GPUs) it is imperative that these resources are used efficiently. How can this be achieved in a simple-to-use way on a high-performance computing cluster that supports a large number of diverse researchers? Our solution is Jobstats, a job monitoring platform which integrates with Open OnDemand (OOD). This talk will provide an overview of the platform and its various components while concentrating on its links with OOD. Our planned extensions for the OOD integration will be presented with the hope of receiving feedback and new ideas from attendees.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
📍 Location: Fenway Park
🕕 Time: Wednesday, 6:00 PM – 9:30 PM
Learn about the Open OnDemand Governance and Sustainability models that we are rolling out in 2025 and how to get involved.
Join us to learn what Dell and Intel are doing in the evolving world of AI and HPC. We will introduce the room to new platforms, new technology and the focused insights of Dell and Intel.
Open OnDemand and Globus are natural partners: Open OnDemand lowers the barrier to using advanced computing resources while Globus removes the friction from data management. Using Globus to move and share data makes your Open OnDemand system even more valuable to researchers. In this talk we will demonstrate how the two systems integrate to help researchers reach data management and computation nirvana.
High-Performance Computing (HPC) workforce development initiatives aimed to train diverse stakeholders in essential tools and methods. The "blinking cursor" barrier in terminal interfaces was addressed by integrating Open OnDemand within the projectEUREKA platform, facilitating 18 groups of MSI undergraduates and faculty in cyberinfrastructure-targeted hackathons. These time-bounded events, based on the HackHPC Model, addressed the HPC skills gap through intensive applied training. This presentation will compare traditional terminal/CLI-based and web-GUI-based training approaches, examining their evolution and effectiveness. Additionally, modified outcomes and artifacts produced by participants will provide insights into the impact of these training methodologies.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
Meet with Open OnDemand Dev team members to ask questions about the platform and the docs that don’t easily lend themselves to Discourse or email. Community members are welcome to chime in on topics outside the scope of what is deployed at the Ohio Supercomputer Center.
This session discusses using CILogon and Open OnDemand to enable institution-specific authentication and branding for Swarthmore College's and Lafayette College's merged HPC cluster.
An up-to-the-minute status report of the configuration and availability of HPC resources can be useful for selecting resource parameters in interactive apps. GPU node resources can be low at times leaving few or no GPUs available for immediate use when launching interactive apps. By providing the status of readily available GPU node resources such as the type and number of GPUs, CPU cores and GB of memory, users can more efficiently utilize cluster resources and minimize job queue wait times.
Administrators and Research Consultants helping researchers scale their work can use Open OnDemand applications to make HPC resources more accessible to disciplines that are not traditionally identified as HPC users. A brief discussion of how we help make high performance computing (HPC) more approachable to new users, with an emphasis on the applications presented, support process, and using Open OnDemand as a launch pad to fully integrate new users into the HPC workflow.
The Anvil supercomputer, funded by the NSF and maintained by Purdue University, powers research across the country. Anvil’s web dashboard is an Open OnDemand portal with just the base features, including creating jobs and viewing the job queue. However, a lot of information is locked behind Slurm terminal commands and other scripts, making it difficult for researchers without terminal knowledge to access the information provided by these commands. The goal of this project is to create a detailed, user-friendly dashboard built on Open OnDemand to provide useful information about the cluster and researchers' own jobs. This improved dashboard enables researchers to conduct their research more efficiently and access commonly needed cluster information without learning complex terminal commands.
Texas A&M has a long history of creating apps for Open OnDemand to improve the user experience. In this presentation, we will present two of our latest projects. We will briefly show our new experimental customizable dashboard, where users can manage their resources and interact with the HPRC helpdesk. Next, we will show the Drona workflow engine, a framework for composing and generating custom workflows. Drona abstracts the researcher as much as possible from the HPC specifics normally required to run their jobs on our clusters so they can focus on their research. This includes setting up custom environments and automatically selecting resources. A major application is to assist researchers in creating and running their AI workloads on a variety of accelerators.
The Stack Science team will share use cases to demonstrate how the Open Science Operating System (OS2) is driving research and scientific outcomes through a suite of capabilities, including science gateways as a service, research software development, secure cloud enclaves, and regulatory data management.
A brief summary of the GOOD conference will be provided, along with a call to action to continue the engagement within the Open OnDemand community in the months and years to come.
TBA
TBA
TBA
TBA