
How to Use

How to access

You can access our cluster through Open OnDemand or via the JupyterLab (K8S) terminal. In both cases, you need a valid account in LIneA's computational environment. If you don't have an account, contact the Service Desk by email (helpdesk@linea.org.br) for more information.

Attention

Even with an active LIneA account, access to the HPC processing environment is not automatic. For more information, contact the Service Desk at helpdesk@linea.org.br.

Accessing via JupyterLab terminal

On the home screen of your JupyterLab, in the "Other" section, you will find the Terminal button. Clicking it opens a Linux terminal located in your home directory. To access the Apollo Cluster, simply run the following command:

  ssh loginapl01
The loginapl01 machine is where you allocate compute nodes and submit your jobs.

$HOME and $SCRATCH

Compute nodes don't have access to your user (home) directory. Move or copy all files needed for job submission to your SCRATCH directory.

How to use the SCRATCH area

Your SCRATCH directory is where you place the files needed for job submission and where you check results after your code runs. Note that all results and generated files must be transferred back to your home directory; otherwise, you risk losing files left in SCRATCH. A short example workflow is sketched after the commands below.

  • To access your SCRATCH directory:
  cd $SCRATCH
  • To send files to your SCRATCH directory:
  cp <FILE> $SCRATCH
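
As an illustration, a minimal SCRATCH workflow might look like the sketch below; the directory and file names (my-job, data.csv, results.out) are hypothetical and only show the pattern of staging inputs and copying results back.

  cd $SCRATCH
  mkdir -p my-job && cd my-job        # hypothetical working directory in SCRATCH
  cp $HOME/data.csv .                 # hypothetical input file copied from home
  # ... submit the job and wait for it to finish ...
  cp results.out $HOME/               # copy results back to home so they are not lost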

EUPS Package Manager

EUPS is an alternative package manager (and the official LSST one) that loads environment variables and adds paths to programs and libraries in a modular way.

  • To load EUPS:
  . /mnt/eups/linea_eups_setup.sh
  • To list all available packages:
  eups list
  • To list a specific package:
  eups list | grep <PACKAGE>
  • To load a package in the current session:
  setup <PACKAGE NAME> <PACKAGE VERSION>
  • To remove a loaded package:
  unsetup <PACKAGE NAME> <PACKAGE VERSION>
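
For illustration, a typical EUPS session might look like the following; the package name and version (gcc, 11.1.0) are hypothetical and should be replaced with an entry shown by eups list.

  . /mnt/eups/linea_eups_setup.sh     # load EUPS itself
  eups list | grep gcc                # find available versions of a package (hypothetical name)
  setup gcc 11.1.0                    # load it in the current session (hypothetical version)
  unsetup gcc 11.1.0                  # unload it when no longer needed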

How to Submit a Job

A job requests computing resources and specifies the applications to be launched on those resources, along with any input data, options, and output directives. Task and resource management on the cluster is handled by Slurm, so to submit a job you need a script like the one below:

  #!/bin/bash
  #SBATCH -p PARTITION                       #Name of the Partition to use
  #SBATCH --nodelist=NODE                    #Name of the Node to be allocated
  #SBATCH -J simple-job                      #Job name
  #----------------------------------------------------------------------------#
  ##path to executable code
  EXEC=/lustre/t0/scratch/users/YOUR.USER/EXECUTABLE.CODE
  srun $EXEC

In this script you need to specify: the queue name (Partition) to be used; the node name to be allocated for Job execution; and the path to the code/program to be executed.
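
For reference, a filled-in version of the script might look like the sketch below; the partition name (cpu), node name (apl01), and executable path are hypothetical and must be replaced with values that exist on the cluster (see sinfo, described further down).

  #!/bin/bash
  #SBATCH -p cpu                             #Hypothetical partition name
  #SBATCH --nodelist=apl01                   #Hypothetical node name
  #SBATCH -J simple-job                      #Job name
  #----------------------------------------------------------------------------#
  ##hypothetical executable script in your SCRATCH area (must have execute permission)
  EXEC=/lustre/t0/scratch/users/your.user/run_analysis.sh
  srun $EXEC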

WARNING

It is strictly prohibited to submit jobs directly to the loginapl01 machine. Any code running on this machine will be immediately interrupted without prior notice.

  • To submit the Job:
  sbatch script-submit-job.sh

If the script is correct, sbatch prints a message with the ID of the submitted job.
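
On success, the output looks like the line below (the job ID shown is just an example):

  Submitted batch job 12345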

  • To check job progress and information:
  scontrol show job <ID> 
  • To cancel the Job:
  scancel <ID> 

Internet access

Compute nodes do not have internet access. Packages and libraries must be installed from loginapl01 in your scratch area.
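
As a hedged illustration, one way to handle Python dependencies is to create a virtual environment in your SCRATCH area from loginapl01 and install packages into it there; this sketch assumes a python3 interpreter is available, and the environment path and package name (myenv, numpy) are hypothetical.

  # run these commands on loginapl01, which has internet access
  python3 -m venv $SCRATCH/myenv      # hypothetical environment location in SCRATCH
  source $SCRATCH/myenv/bin/activate
  pip install numpy                   # hypothetical package; jobs then use the files already in SCRATCH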

Useful Slurm Commands

To learn about all available options for each command, enter man <command> while connected to the Cluster environment.

Command    Definition
sbatch     Submits a job script to the execution queue
squeue     Displays job status
scontrol   Displays Slurm state information (several options available only to root)
sinfo      Displays partition and node status
salloc     Allocates resources for a job in real time (typically for interactive use)
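
A few common invocations (PARTITION and NODE are placeholders for names valid on the cluster):

  squeue -u $USER                     # list only your own jobs
  sinfo -p PARTITION                  # show the state of a specific partition
  scontrol show node NODE             # show details for a specific node
  salloc -p PARTITION -N 1            # allocate one node for interactive use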

Last update: April 30, 2025