What is Open OnDemand?
Open OnDemand is a browser-based web interface that provides access to SDSU's HPC clusters without needing an SSH client or command line knowledge. It is the recommended option for users who prefer a graphical interface.
Using Open OnDemand you can:
- Access a web-based terminal (shell) directly in your browser
- Browse, upload, download, and manage your files
- Submit and monitor Slurm jobs
- Launch interactive applications such as Jupyter Notebook, JupyterLab, and RStudio
- View and manage your active interactive sessions
Both Innovator and Discovery have their own separate OnDemand portals with different URLs and different available applications.
Innovator Open OnDemand
Accessing Innovator OnDemand
Open your browser and navigate to:
https://hpcportal.sdstate.edu
Log in with your email as first.lastname@jacks.sdstate.edu and your SDSU password.
Innovator OnDemand Menu Overview
Files
- Home Directory — browse and manage files in your home directory (/home/jacks.local/username)
- Scratch Directory — browse and manage files in your scratch directory (/mmfs1/scratch/jacks.local/username)
Jobs
- Active Jobs — view all currently running and queued jobs on Innovator
- Job Composer — create and submit Slurm job scripts through a graphical interface
Clusters
- Innovator Shell Access — opens a web-based terminal for full command line access to Innovator in your browser
Common Configuration Fields for All Innovator Interactive Apps
When launching any interactive app on Innovator, you will see these configuration fields:
- Node Type — select the partition to run on. Options vary per app but include Compute, Big Memory, GPU, and Quickq
- Number of cores — how many CPUs to request. Maximum is 48 for all node types
- Memory Allocation (in GB) — how much RAM to request. Maximum: 250 GB for compute/quickq, 2014 GB for bigmem, 502 GB for GPU
- Number of hours — how long your session will run before automatically closing
- Email address for Slurm notifications (optional but recommended) — you will receive an email when your session starts and ends
- Additional modules (optional) — space-separated list of extra modules to load, for example mpich/3.2.1
Available node types on Innovator:
- Compute — 46 nodes, 1-48 cores, 256 GB RAM, 14-day walltime
- Big Memory — 4 nodes, 1-48 cores, 2 TB RAM, 14-day walltime
- GPU — 14 nodes, 1-48 cores, 512 GB RAM, 2x NVIDIA A100 80GB per node, 14-day walltime
- Quickq — 46 nodes, 1-48 cores, 256 GB RAM, 12-hour walltime. Best for short jobs that do not need GPU or large memory
Innovator Interactive Desktop
Launches a full graphical desktop environment running on an Innovator node directly in your browser. Useful for applications that require a graphical user interface.
Cellpose
Cellpose is a deep learning-based software tool designed for the segmentation of cells in microscopy images.
- Available node types: GPU (recommended), Quickq
- Number of cores: 1-48 for both queues
- Memory Allocation: maximum 250 GB for quickq, 502 GB for GPU
- Maximum hours: 4 hours
- GPUs: request 1x NVIDIA A100 80GB. Cellpose GUI does not support multi-GPU processing so only 1 GPU is available per session
- Launch Flags: optional flags to customize how Cellpose launches, for example --Zstack
- Note: Quickq does not offer GPU acceleration. Use the GPU partition for faster image processing and segmentation
ChimeraX
ChimeraX is a molecular visualization tool for structural biology used to visualize and analyze molecular structures.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours
- Additional modules: optional
Code Server (VS Code)
Code Server allows you to run Visual Studio Code (VS Code) on an Innovator node and access it from your browser. You can write and run code directly on the cluster using a familiar VS Code interface without needing to install VS Code locally.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48
- Memory: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours
- CUDA Version: select a CUDA version if using a GPU node. A GPU node is required to use CUDA
- Code Server Version: Code Server 4.92.2
- Number of GPUs: enter none for no GPU, 1 for one GPU, or 2 for both GPUs on the node. If not using a GPU node, enter none
Deepchem
DeepChem is an open-source Python library that applies deep learning to chemistry, biology, and drug discovery. It provides tools for molecular featurization, predictive modeling, and access to benchmark datasets. Ideal for building models to predict molecular properties or perform virtual screening.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours for compute/bigmem/GPU, 12 hours for quickq
HEC-HMS
HEC-HMS (Hydrologic Engineering Center — Hydrologic Modeling System) is software for hydrological modeling and simulation.
- Available node types: Compute only
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB
- Maximum hours: 168 hours (7 days)
- Additional modules: optional
Haploview
Haploview is a tool for the analysis and visualization of linkage disequilibrium and haplotype data in genetic studies.
- Available node types: Compute, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for both queues
- Maximum hours: 4 hours
- Additional modules: optional
Jupyter Notebook
Jupyter Notebook launches an interactive Python notebook server on an Innovator node. Useful for data analysis, machine learning, and scientific computing.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours for compute/bigmem/GPU, 12 hours for quickq
- GPUs (GPU node only): request 1 or 2 NVIDIA A100 80GB GPUs
- Additional modules: optional
- Important: Only one Jupyter Notebook session is allowed at a time. Use Jupyter primarily for debugging and testing. Once scripts are ready, submit them via sbatch for efficient processing
- Conda environments: Personal Conda environments in your /home can be accessed. Ensure the ipykernel package is installed in your Conda environment
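As a sketch of the Conda setup described above, the commands below show one way to prepare a personal environment for Jupyter. They are meant to be run in a shell session on the cluster; the environment name myenv and the Python version are examples, not requirements.

```shell
# Create a personal Conda environment in your /home ("myenv" is an example name)
conda create -n myenv python=3.11 -y

# Activate it and install ipykernel so Jupyter can use the environment
conda activate myenv
conda install ipykernel -y

# Optionally register the environment as a named Jupyter kernel
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
```

After this, the environment should appear in the Jupyter kernel list the next time you launch a session.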
JupyterLab
JupyterLab is the next-generation Jupyter interface providing a file browser, multiple notebooks, terminals, and text editors in a single interface.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48 (default is 10)
- Memory Allocation: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours for compute/bigmem/GPU, 12 hours for quickq
- GPUs (GPU node only): request 1 or 2 NVIDIA A100 80GB GPUs
- Additional modules: optional
- Important: Only one Jupyter session is allowed at a time. Additional sessions will not start until your first session completes
- Conda environments: Personal Conda environments in your /home can be used. Ensure ipykernel is installed
Napari
Napari is a multi-dimensional image viewer for scientific data, launched on Innovator for use with MicroSam.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours
- Additional modules: optional
ParaView
ParaView is a powerful open-source software package for the visualization of scientific, engineering, and analytical data.
- Available node types: GPU (recommended), Quickq
- Number of cores: 1-48 for both queues
- Memory Allocation: maximum 250 GB for quickq, 502 GB for GPU
- Maximum hours: 4 hours
- GPUs: request 1 NVIDIA A100 80GB GPU
- Note: Quickq does not offer GPU acceleration but can be used for small or infrequent tasks, or when the GPU queue is full
RStudio Server
RStudio Server launches an RStudio IDE session on an Innovator node for statistical computing and graphics using R.
- Available node types: Compute, Big Memory, GPU, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for quickq/compute, 2014 GB for bigmem, 502 GB for GPU
- Maximum hours: 24 hours for compute/bigmem/GPU, 12 hours for quickq
- R Version: R 4.3.2 mkl
TASSEL
TASSEL is software for evaluating trait associations, evolutionary patterns, and linkage disequilibrium in genetic data.
- Available node types: Compute, Quickq
- Number of cores: 1-48
- Memory Allocation: maximum 250 GB for both queues
TensorBoard
TensorBoard is a visualization toolkit to help you understand and monitor the performance of your machine learning models. It provides insights to help refine models, troubleshoot issues, and improve your machine learning projects.
- Available node types: Compute, Quickq
- Number of cores: 1-48
- Memory: maximum 250 GB for both queues
- Maximum hours: 24 hours
- TensorBoard Log Directory: enter the path on Innovator that contains your data to visualize, for example /home/jacks.local/username
VisIt
VisIt is an open source, interactive, scalable visualization, animation, and analysis tool. Users can quickly generate visualizations, animate them through time, manipulate them with a variety of operators and mathematical expressions, and save the resulting images and animations for presentations.
- Available node types: GPU (recommended), Quickq
- Number of cores: 1-48 for both queues
- Memory Allocation: maximum 250 GB for quickq, 502 GB for GPU
- Maximum hours: 4 hours
- GPUs: request 1 NVIDIA A100 80GB GPU
- Note: Quickq does not offer GPU acceleration but can be used for small or infrequent tasks, or when the GPU queue is full
Discovery Open OnDemand
Accessing Discovery OnDemand
Open your browser and navigate to:
https://mydiscovery.sdstate.edu
Log in with your email as first.lastname@jacks.sdstate.edu and your SDSU password.
Discovery OnDemand Menu Overview
Files
- Home Directory — browse and manage files in your home directory (/home/jacks.local/username)
Jobs
- Active Jobs — view all currently running and queued jobs on Discovery
- Job Composer — create and submit Slurm job scripts through a graphical interface
Clusters
- Discovery Shell Access — opens a web-based terminal for full command line access to Discovery in your browser
Common Configuration Fields for All Discovery Interactive Apps
When launching any interactive app on Discovery, you will see these configuration fields:
- Node Type — Compute or GPU depending on the app
- Number of cores — maximum is 48 for all queues
- Memory Allocation (in GB) — maximum 502 GB for GPU. For compute, memory requests over 250 GB will be automatically routed to a high-memory node. There are only 2 high-memory nodes on Discovery so please request only the memory you need as requesting more than necessary may increase your wait time. For jobs requiring 250 GB or less, 10 standard compute nodes are available
- Number of hours — maximum 24 hours for all apps
- Email address for Slurm notifications (optional but recommended) — you will receive an email when your session starts and ends
- Additional modules (optional) — space-separated list of extra modules to load
Available node types on Discovery interactive apps:
- Compute — general compute nodes. Memory requests over 250 GB are automatically routed to high-memory nodes (2 TB RAM). 10 standard compute nodes available for jobs requiring 250 GB or less
- GPU — Discovery GPU nodes with up to 2x NVIDIA H100 GPU cards per node, 512 GB RAM
Haploview (Discovery)
Haploview is a tool for the analysis and visualization of linkage disequilibrium and haplotype data in genetic studies.
- Available node types: Compute only
- Number of cores: 1-48
- Memory Allocation: requests over 250 GB automatically routed to high-memory node (2 TB). Only 2 high-memory nodes available — request only what you need
- Maximum hours: 24 hours
- Additional modules: optional
Jupyter Notebook (Discovery)
Jupyter Notebook launches an interactive Python notebook server on a Discovery node.
- Available node types: Compute, GPU
- Number of cores: 1-48
- Memory Allocation: maximum 502 GB for GPU. For compute, requests over 250 GB routed to high-memory node. Only 2 high-memory nodes on Discovery — request only what you need
- Maximum hours: 24 hours
- Additional modules: optional
- Important: Only one Jupyter session is allowed at a time. Additional sessions will not start until your first session completes. Use Jupyter primarily for debugging — submit tested scripts via sbatch for efficient processing
- Conda environments: Personal Conda environments in your /home can be accessed. Ensure ipykernel is installed in your Conda environment
JupyterLab (Discovery)
JupyterLab is the next-generation Jupyter interface providing a file browser, multiple notebooks, and terminals in a single interface.
- Available node types: Compute, GPU
- Number of cores: 1-48
- Memory Allocation: maximum 502 GB for GPU. For compute, requests over 250 GB routed to high-memory node. Only 2 high-memory nodes on Discovery — request only what you need
- Maximum hours: 24 hours
- Additional modules: optional
- Important: Only one Jupyter session is allowed at a time. Additional sessions will not start until your first session completes
- Conda environments: Personal Conda environments in your /home can be used. Ensure ipykernel is installed
RStudio Server (Discovery)
RStudio Server launches an RStudio IDE session on a Discovery node for statistical computing and graphics using R.
- Available node types: Compute, GPU
- Number of cores: 1-48
- Memory Allocation: maximum 502 GB for GPU. For compute, requests over 250 GB routed to high-memory node. Only 2 high-memory nodes on Discovery — request only what you need
- Maximum hours: 24 hours
- R Version: R 4.4.3 mkl
- Additional modules: optional
Comparison: Innovator vs Discovery OnDemand
| Feature | Innovator OnDemand | Discovery OnDemand |
| --- | --- | --- |
| URL | https://hpcportal.sdstate.edu | https://mydiscovery.sdstate.edu |
| Shell Access | Innovator Shell Access | Discovery Shell Access |
| Jupyter Notebook | Yes — GPU: NVIDIA A100 80GB | Yes — GPU: NVIDIA H100 80GB |
| JupyterLab | Yes — GPU: NVIDIA A100 80GB | Yes — GPU: NVIDIA H100 80GB |
| RStudio Server | Yes — R 4.3.2 mkl | Yes — R 4.4.3 mkl |
| Interactive Desktop | Yes | No |
| Code Server (VS Code) | Yes | No |
| TensorBoard | Yes | No |
| Scratch in Files menu | Yes | No |
| High-memory routing | Manual — select Big Memory node type | Automatic — requests over 250 GB routed to high-memory node |
| Specialized Science Apps | Cellpose, ChimeraX, Deepchem, HEC-HMS, Napari, ParaView, TASSEL, VisIt | Haploview only |
General Tips
- Always enter your email for Slurm notifications — it is optional but recommended so you know when your session starts and ends
- Use the Quickq partition on Innovator for short testing sessions as jobs start faster. Quickq has a 12-hour time limit
- Only use the GPU partition if your application actually requires GPU. If not, choose Compute to help optimize cluster resource allocation
- Only one Jupyter or JupyterLab session is allowed at a time on both Innovator and Discovery
- Once scripts are tested in Jupyter, submit them as batch jobs via sbatch for more efficient processing
- To use a personal Conda environment in Jupyter or JupyterLab, ensure the ipykernel package is installed in that environment
- On Discovery, if you need more than 250 GB of memory, your job will be automatically routed to a high-memory node. There are only 2 of these nodes so request only what you need to avoid long wait times
- View and manage all running sessions by clicking My Interactive Sessions in the top menu
How to Launch an Interactive Application
- Log in to the OnDemand portal
- Click Interactive Apps in the top menu
- Select the application you want to launch
- Fill in the resource request form — node type, number of cores, memory, hours, and optionally your email and additional modules
- Click Launch
- Wait for your session to start — this may take a few minutes depending on cluster availability
- Click Connect when the session is ready
- The application opens in a new browser tab
How to Access the Web Terminal
- Log in to the OnDemand portal
- Click Clusters in the top menu
- Select Innovator Shell Access or Discovery Shell Access
- A terminal opens in your browser — you can run any command line instructions from here
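Once the terminal is open, ordinary Slurm and module commands work just as they would over SSH. A few examples (a sketch, not an exhaustive list):

```shell
squeue --me               # list your own running and queued jobs
sinfo                     # show partition and node status
module avail              # list available software modules
module load mpich/3.2.1   # load a module, e.g. the one named in this guide
```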
How to Submit a Job through Job Composer
- Log in to the OnDemand portal
- Click Jobs then select Job Composer
- Click New Job to create a new job script
- Edit the job script with your #SBATCH parameters and commands
- Click Submit to submit the job to the Slurm scheduler
- Monitor your job under Jobs → Active Jobs
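To illustrate step 4, a minimal job script might look like the following. The partition name, resource values, program name, and email are placeholders to adapt to your own job; check sinfo in the web terminal for the exact partition names on your cluster.

```shell
#!/bin/bash
#SBATCH --job-name=example         # job name shown under Active Jobs
#SBATCH --partition=compute        # placeholder partition name; verify with sinfo
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4          # up to 48 cores per node on Innovator
#SBATCH --mem=16G                  # request only the memory you need
#SBATCH --time=02:00:00            # walltime in HH:MM:SS
#SBATCH --mail-type=BEGIN,END      # optional Slurm email notifications
#SBATCH --mail-user=first.lastname@jacks.sdstate.edu

# Load any modules your job needs, then run your program
module load mpich/3.2.1
srun ./my_program                  # "my_program" is a placeholder
```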
Questions or Problems
If you have any questions or need assistance with Open OnDemand, contact the SDSU RCi team: