Tintin user guide

This is the user guide to Tintin, a high-performance computer cluster at UPPMAX. Guides for the other UPPMAX systems can be found on the UPPMAX web pages.

Latest news

Please read this User Guide for up-to-date information.
All heavy usage of the cluster must go through the batch system. The login nodes only allow up to 30 minutes of CPU time per process.

System configuration

The login node for Tintin is reached via a single login name. (In fact, there may be three login nodes hidden behind this name; you will be automatically redirected to one of them.)

Tintin consists of 160 dual-socket Bulldozer compute servers, each with two 8-core Opteron 6220 processors running at 3 GHz. We provide 144 nodes with 64 GB memory (ti1-ti144) and 16 nodes with 128 GB (ti145-ti160). All nodes are interconnected with a 2:1 oversubscribed QDR InfiniBand fabric.
In total, Tintin provides 2560 CPU cores in compute nodes.

The login nodes (tintin1-3) are identical to the compute nodes but have only 32 GB memory.

Important information about computer architecture

Tintin has a significantly different architecture compared with Milou, so you may need to recompile your applications to get them to run as fast as (or faster than) on Milou. For good performance, UPPMAX recommends comparing the speed of binaries built with more than one compiler.
Note that, due to CPU features available only on Tintin, code compiled on Tintin will normally *not* run at all on Milou. So even for non-performance-critical code, take care to run code compiled on Tintin only on Tintin, and code compiled on Milou only on Milou.

Apart from missing libraries, a typical symptom of mixing machines when compiling and running is the program terminating due to illegal instructions.

OS and software

There are several compilers available on Tintin. This gives you flexibility to obtain programs that run optimally on Tintin.

  • gcc - the new version (6.1.0) generates good code if you tell it to use the new instructions. You can use this compiler by doing
    module load gcc/6.1.0
    The compiler executable is named gcc for C, g++ for C++, and gfortran for Fortran.
    To use the new instructions available on Tintin (AVX and FMA4), give the additional options "-mavx -mfma4" to gcc. For good performance with this compiler, you should also specify optimization at least at level -O2, preferably -O3.
  • intel+mkl - usually generates good code, even on the AMD CPUs of Tintin. You can use this compiler by doing
    module load intel/16.3
    The compiler executable is named icc for C, icpc for C++, and ifort for Fortran.
    You should give optimization options at least -O2, preferably -O3 or -fast. You can also try to use the -mavx option to the compiler to output AVX instructions, but please verify the results you obtain, as we found some additional problems with this option for some codes.
  • pgi - often generates somewhat slower code, but it is stable so often it is easier to obtain working code, even with quite advanced optimizations. You can use this compiler by doing
    module load pgi/16.4
    The compiler executable is named pgcc for C, pgCC for C++, and pgfortran, pgf77, pgf90, or pgf95 for Fortran.
    For this compiler, you can generate code for Tintin using the options "-Mvect=simd:128 -tp bulldozer-64". Also give optimization options at least -O2, preferably -Ofast; even though the compile times are much longer, the result is often worth the wait. It is possible to generate 256-bit vector instructions using "-Mvect=simd:256" instead of "-Mvect=simd:128", but our tests show the compiler often generates suboptimal code with this option, and 256-bit vector instructions are not very beneficial compared to 128-bit vector instructions on the Bulldozer CPUs anyway.
  • open64 - This compiler has special optimizations for the Bulldozer CPU, and can give good results, but it tends to break code at high optimization levels. You can use this compiler by doing
    module load open64/amd-
    The compiler executable is named opencc for C, openCC for C++, and openf90 or openf95 for Fortran.
    The options "-mavx -mfma4 -mcpu=bdver1 -mtune=bdver1" generate code for Tintin. Also use at least the -O2 optimization level, preferably -Ofast.
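As a concrete illustration of the flags above, a Tintin-targeted build with the gcc toolchain might look like the following sketch ("myprog.c" is a stand-in for your own source file):

```shell
# Load the compiler module, then build with Tintin's AVX and FMA4
# instructions at a high optimization level.
module load gcc/6.1.0
gcc -O3 -mavx -mfma4 -o myprog myprog.c
```

Remember that a binary built this way will normally not run on Milou.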

See our software pages for more details about the OS, compilers, and installed software.

You will probably have good use of the following commands:

  • uquota - telling you about your file system usage.
  • projinfo - telling you about the CPU hour usage of your projects.
  • jobinfo - telling you about running and waiting jobs on Tintin.
  • finishedjobinfo - telling you about finished jobs on Tintin.
  • projmembers - telling you about project memberships.
  • projsummary [project id] - summarizes some useful information about projects (the script needs some updates though)
For SLURM commands, and for commands like projinfo, jobinfo, and finishedjobinfo, you may use the "-M" flag to ask for the answer for a system that you are not logged in to. E.g., when logged in to Tintin, you can ask about current core hour usage on Milou with the command
projinfo -M Milou

This works for UPPMAX systems where SLURM is installed.

Accounts and log in

All access to this system is via secure shell (a.k.a. SSH) interactive login to the login node:
ssh -AX
To get a user account you must register at the UPPMAX user account application page.

For questions concerning accounts and access to Tintin, please contact UPPMAX support.

Note that the machine you arrive at when logged in is only a so-called login node, where you can do various smaller tasks. We have a couple of limits in place that restrict your usage. For larger tasks you should use our batch system, which pushes your jobs onto other machines within the cluster.

Using the batch system

To allow a fair and efficient usage of the system we use a resource manager to coordinate user demands. On Tintin we use the SLURM software.

Some Limits

  • There is a job walltime limit of ten days (240 hours).
  • We restrict each user to at most 5000 running and waiting jobs in total.
  • Each project has a 30-day running allocation of CPU hours. We do not forbid running jobs after the allocation is overdrawn; instead, such jobs are given a very low queue priority, so you may still be able to run them if a sufficient number of nodes happens to be free on the system.
  • Very wide jobs will only be started within a maintenance window (just before the maintenance window or at the end of the maintenance window). These are planned for the first Wednesday of each month. On Tintin a "very wide" job asks for 54 nodes or more.

Convenience Variables

  • $SNIC_TMP - Path to node-local temporary disk space

    The $SNIC_TMP variable contains the path to a node-local temporary file directory that you can use when running your jobs, in order to get maximum disk performance (since the disks are local to the current compute node). This directory is automatically created on your (first) compute node before the job starts and automatically deleted when the job has finished.

    The path specified in $SNIC_TMP is equal to the path: /scratch/$SLURM_JOB_ID, where the job variable $SLURM_JOB_ID contains the unique job identifier of your job.

    Please note that in "core" jobs (see below), if you write data in the /scratch directory but outside of the /scratch/$SLURM_JOB_ID directory, your data may be automatically deleted during your job run.
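A minimal sketch of a batch script using $SNIC_TMP (the project name, file names, and "my_program" are placeholders for your own):

```shell
#!/bin/bash -l
#SBATCH -A p2010999
#SBATCH -p core -n 1
#SBATCH -t 1:00:00
# Copy the input to fast node-local scratch, run there, then copy the
# results back to network storage before the job (and $SNIC_TMP) ends.
cp ~/indata/input.dat $SNIC_TMP/
cd $SNIC_TMP
my_program input.dat > output.dat
cp output.dat ~/results/
```

Copying the results back at the end is essential, since $SNIC_TMP is deleted when the job finishes.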

Comparison between SGE and SLURM

Users familiar with the Grid Engine batch system should look at our guide:

Introduction to SLURM

Nearly all of the compute power of Tintin is found in the compute nodes and SLURM is the tool to utilize that power.

You can use Tintin interactively, e.g. to quickly test your algorithms or explore the Tintin computing environment, but the grand potential is to give Tintin bigger chunks of work packaged into a batch script.

SLURM Commands

The SLURM system is accessed using the following commands:

  • interactive - Start an interactive session
  • salloc - Run a single command on the allocated cores/nodes
  • sbatch - Submit and run a batch job script
  • srun - Typically used inside batch job scripts for running parallel jobs (See examples further down)
  • scancel - Cancel one or more of your jobs.

Specifying job parameters

Whether you use Tintin interactively or in batch mode, you always have to specify a few things, like the number of cores needed, the running time, etc. These things can be specified in two ways:

  • Either as flags sent to the different SLURM commands (sbatch, srun, the interactive command, etc.), like so:
    sbatch -A p2012999 -p core -n 1 -t 12:00:00 -J some_job_name my_job_script_file.s
  • ... or, when using the sbatch command, it can be specified inside the job script file itself, by using special "SBATCH" comments, for example like so:
    #!/bin/bash -l
    #SBATCH -A p2012999
    #SBATCH -p core
    #SBATCH -n 1
    #SBATCH -t 12:00:00
    #SBATCH -J some_job_name
    ... the actual job script code ...
    If you do this, you only need to start the script without any flags, like so:
    sbatch my_job_script_file.s

Required job parameters

These are the things you typically need to specify for each job (required parameters might differ slightly depending on which other parameters are set):
  • Which project should be accounted for the running time (Format: -A [project name])
    • For example, if your project is named p2010999, you specify -A p2010999
    • You can find your current projects (and other projects that you have run jobs in) with the program projinfo.
  • What partition to choose (Format: -p [partition])
    • Partitions are a way to tell what type of job you are submitting, e.g. if it needs to reserve a whole node, or only one core.
    • If you need only part of a node, i.e. between one and fifteen cores and at most four GB of RAM per core, you specify "-p core"; otherwise, you specify "-p node".
      (More about this later).
  • If you specified the "node" partition above and want to run on fewer than 16 cores per node (for example, running only one process per node), you have to give the number of nodes (Format: -N [no of nodes]) in addition to the number of cores.
  • How many cores you will need (Format: -n [no_of_cores]).
    • The smallest unit you can specify is -n 1, i.e. one core.
    • When using the "node" partition, remember that on Tintin there are 16 cores per node, so you need to multiply the number of nodes you have specified to get the correct number of cores. An example: specifying 2 nodes, and thus 32 (2 * 16) cores, would be -n 32
  • How long you want to reserve those nodes/cores (Format: -t d-hh:mm:ss).
    • Specification is in days, hours, minutes and (not very useful) seconds. A three day timelimit is given as -t 3-00:00:00 Twenty minutes is written as -t 20:00 and three hours as -t 3:00:00
    • A longer time limit increases the chance that your computation finishes in time, while a shorter time limit typically makes your job start sooner.
    • Your project will be accounted for the time the job runs, which is not necessarily as long as your timelimit. If your job goes over the timelimit, it will be automatically cancelled.

The "--qos=short" option for test runs shorter than 15 min

For test runs shorter than 15 minutes, add the --qos=short specification, which gives you a high priority. You are limited to 15 minutes, a maximum of four nodes, and a maximum of two such jobs simultaneously.
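For example, a hypothetical short two-node test submission (the project name and script name are placeholders) could look like:

```shell
# Short, high-priority test: at most 15 minutes and at most four nodes.
sbatch -A p2010999 -p node -N 2 -n 32 -t 15:00 --qos=short my_job_script_file.s
```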

Example 1: Interactive job on one core

interactive -p devcore -n 1 -t 2:00:00 -A p2010999

You do not actually need to say "-p devcore", because devcore is the default partition here, which implies that only one core will be allocated. You will automatically get an interactive session with the command interpreter "bash" on one compute node.

Example 2: Interactive job on four nodes

[lka@tintin1 ~]$ salloc -n 64 -p node -t 15:00 -A p2010999 --qos=short
[lka@ti83 ~]$ # How to see on what nodes I am running
[lka@ti83 ~]$ srun hostname -s |sort -u
[lka@ti83 ~]$ # Create the same local directory on all four nodes
[lka@ti83 ~]$ srun -N 4 -n 4 mkdir /scratch/$SLURM_JOB_ID/indata
[lka@ti83 ~]$ # Copy indata for my_program to the local directories
[lka@ti83 ~]$ srun -N 4 -n 4 cp -r ~/glob/indata/* /scratch/$SLURM_JOB_ID/indata
[lka@ti83 ~]$ #
[lka@ti83 ~]$ cd ~/glob/testprogram
[lka@ti83 testprogram]$ module load intel openmpi
Loaded openMPI 1.4, compiled with intel11.1 (found in /opt/openmpi/1.4intel11.1/)
[lka@ti83 testprogram]$ mpirun -v my_program

We needed to add -p node, because we use more than 15 cores. If you use at most four nodes and fifteen minutes, you may specify --qos=short, which gives you a tremendously higher queue priority.

The srun -N 4 -n 4 construction is very useful, when you want to run a command once on each of your nodes. You need to know how many nodes you have asked for; e.g., for eight nodes you will need an srun -N 8 -n 8 construction.

Example 3: Using interactive command, to let you run X applications

salloc does not allow you to run X applications. If you need to do that, please use the interactive command. The easiest usage example is

interactive -A p2010999

which sets some default values to give you the highest queue priority allowed, using e.g. "--qos=short". If you do not like the default values, you can add most options that the salloc command allows. You will get a shell prompt from the screen command (the command "man screen" gives more information); if you have tried screen and do not like it, you can escape from it with an "exec xterm" command. To get more information about the interactive command, please try

interactive -h

You may run one interactive command with a high queue priority at a time, up to a time limit of 12 hours, regardless of your simultaneous use of "--qos=short" jobs.

Example 4: A small batch script, with a Job name

An example of a small batch script, where we also show how to give your job a name (the "-J" flag):

#!/bin/bash -l
#SBATCH -A p2010999
#SBATCH -p node -n 64
#SBATCH -t 1-20:00:00
#SBATCH -J test42
module load intel openmpi
cd ~/glob/testprogram
mpirun my_program

If you name the batch script file "script-v4", you submit the script with a bash command like

sbatch script-v4

Monitoring jobs

To see the status of your program, you can run commands like:

  • jobinfo
  • squeue
  • jobinfo -u your_account_name

Please also see our page about how the job priority and queue works to understand more about when and why your job will start, or perhaps why your job isn't starting at all.

Cancelling jobs

For various reasons, you might want to terminate your running jobs or remove your waiting jobs from the queue. The command is "scancel" and you can read its documentation with the command "man scancel". The most straightforward use is to run

scancel 123456 123457

to kill two of your jobs, by giving their job numbers. The command

scancel -i -u your_account_name

kills all your jobs, but asks for each job if you really want to terminate that job.

scancel -u your_account_name --state=pending

terminates all your waiting jobs, while

scancel -u your_account_name -n firsttest -t running

kills all your running jobs that are named "firsttest".

Details about the "core" and "node" partitions

A normal Tintin node contains 64 GB of RAM and sixteen compute cores. An equal share of RAM for each core would mean that each core gets at most four GB of RAM. This simple calculation gives one of the limits mentioned below for a "core" job.

You need to choose between running a "core" job or a "node" job. A "core" job must keep within certain limits, to be able to run together with up to fifteen other "core" jobs on a shared node. A job that cannot keep within those limits must run as a "node" job.

Some serial jobs must run as "node" jobs. You tell Slurm that you need a "node" job with the flag "-p node". (If you forget to tell Slurm, you are by default choosing to run a "core" job.)

A "core" job:

  • Will use a part of the resources on a node, from a 1/16 share to a 15/16 share of a node.

  • Must request fewer than 16 cores, i.e. between "-n 1" and "-n 15".

  • Must not demand "-N", "--nodes", or "--exclusive".

  • Is recommended not to demand "--mem".

  • Must not demand to run on a fat node (see below, for an explanation of "fat") or a devel node.

  • Must not use more than four GB of RAM for each core it demands. If a job needs half of the RAM, i.e. 32 GB, you need to reserve also at least half of the cores on the node, i.e. eight cores, with the "-n 8" flag.

A "core" job is accounted on your project as one "core hour" (sometimes called a "CPU hour") per allocated core, for each wallclock hour that it runs. A "node" job, on the other hand, is accounted on your project as sixteen core hours for each wallclock hour that it runs, multiplied by the number of nodes that you have asked for.
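The accounting arithmetic can be sketched with two small examples (assuming Tintin's sixteen cores per node):

```shell
# A "core" job on 8 cores running for 3 wallclock hours:
echo $(( 8 * 3 ))        # 24 core hours
# A "node" job on 2 full nodes running for 3 wallclock hours:
echo $(( 2 * 16 * 3 ))   # 96 core hours
```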

To run a non-parallel job as a "node" job might mean that you "pay" for more than you get out of the arrangement. In that case, you are welcome to get in touch with the UPPMAX staff for a discussion on the best way to run your application. A common solution is to pack 2-16 tasks into one "node" job, writing something like this in your job script:

your_application --infile infile1 --outfile outfile1 &
your_application --infile infile2 --outfile outfile2 &
your_application --infile infile3 --outfile outfile3 &
your_application --infile infile4 --outfile outfile4 &
your_application --infile infile5 --outfile outfile5 &
your_application --infile infile6 --outfile outfile6 &
your_application --infile infile7 --outfile outfile7 &
your_application --infile infile8 --outfile outfile8 &
wait

This example pinpoints a few details, needed for task packing:

  • Normally, each task needs individual input and output files.

  • The expected run time of each task should be fairly similar. The whole job runs for as long as the slowest task needs, which leads to inefficient usage if the spread in run times is too big. In such cases it is better to submit the tasks as individual single-core jobs instead.

  • Each application call needs an "&" written at the end of the line, to make it start at the same time as the other application calls.

  • The "wait" command at the bottom tells the job script to wait until all the tasks have run to their normal finish. Otherwise the job and thus also the tasks will terminate prematurely.
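You can try the packing pattern on any machine with a harmless stand-in; here "sleep" plays the role of your_application, and each task writes its own output file:

```shell
# Four background tasks, then "wait" so the script does not exit early.
for i in 1 2 3 4; do
  ( sleep 1; echo "task $i finished" > outfile$i ) &
done
wait
cat outfile1 outfile2 outfile3 outfile4
```

Because the tasks run concurrently, the whole script takes about one second rather than four.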

You may ask for a node with more RAM by adding the flag "-C mem128GB" to your job submission line, thus making sure that you get 128 GB of RAM on each node in your job. Please note that there are only sixteen nodes with this amount (or more) of RAM.

From the squeue command, you can get a lot of information, using different command options. Some of these options are used within the

jobinfo

command, which tells you about running jobs, gives you some statistics about the Tintin node status, and lists all waiting jobs, sorted by job priority. The jobinfo command has many option flags, most of them the same as for the squeue command. One of the most useful is "-u your_user_account_name", to specify that you want to look only at your own jobs.

The squeue command has a "--start" option, that is meant to give you a good estimate on when your waiting jobs will start. You can also see this information with the "jobinfo" command.

Node types

Typically, at UPPMAX there exist at least two node types: thin, the typical cluster node, and fat nodes with more memory. On Tintin, the fat nodes have double the normal amount of memory (128 GB).

To request a fat node, use

-C fat

in your sbatch command.
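For example, a hypothetical submission asking for a whole fat node (the project and script names are placeholders) could be written:

```shell
# "-C fat" is a node feature constraint selecting a 128 GB node.
sbatch -A p2010999 -p node -N 1 -t 2:00:00 -C fat my_job_script_file.s
```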

Specifications for a job on a single, full node

If you want to run a single application on your node, you specify:

#SBATCH -p node -n 1

This application can use all the memory of the node all by itself. If you have a threaded application or an OpenMP application, you normally use the same specification.

If you want to run e.g. four copies of the same program in parallel, you specify

#SBATCH -p node -n 4

to inform Slurm about this. Slurm then will know that you want to run four tasks on the node. Some tools, like mpirun and srun, ask Slurm for this information and behave differently depending on the specified number of tasks. Most programs and tools do not ask Slurm for this information and thus behave the same, regardless of how many tasks you specify.

By default, mpirun and srun start as many copies of your specified command or program as the number of specified tasks. If you do not want them to go for the default behaviour, you can give them flags to specify, among other things, how many copies you want them to start.
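For instance, with OpenMPI you can override the default count using the "-np" flag ("my_program" is a placeholder):

```shell
# Start exactly four copies, regardless of how many tasks Slurm allocated.
mpirun -np 4 my_program
```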

To specify more than sixteen tasks is in most cases a bad idea, because a Tintin node has only sixteen compute elements (cores). For the same reason, if you run a threaded application or an OpenMP application, you would normally not want it to start so many parallel threads that you run more than sixteen parallel threads in total on the node.

Specifications for a multi-node job

If you want to run computations that takes more than one node, but can be run in parts as core jobs and/or single-node jobs, you should probably split them up into several core jobs and/or single-node jobs, and not read any further about multi-node jobs.

If you want to run e.g. 32 copies of your program, e.g. for an OpenMPI program, you normally specify

#SBATCH -p node -n 32

making your job run with sixteen copies of your program on each of two nodes. OpenMPI interacts with Slurm to get your program copies distributed over the allocated nodes when the mpirun tool is called within your job script. The script would look something like

#! /bin/bash -l
#SBATCH -p node -n 32 -t 7-00:00:00
#SBATCH -A p2010999 -J elixir_B
module load intel openmpi
mpirun elixir B_gamma.txt

if your application is named "elixir" and is compiled with an intel compiler. mpirun will read from the SLURM environment that it must start the "elixir" program 32 times, i.e. sixteen times on each of two nodes.

It is often advantageous to bind processes to cores, especially for very wide jobs. You can see more information about process binding by typing "mpirun -help". To bind each process to a core, in system order, do:

mpirun -bind-to-core elixir B_gamma.txt

If you want to be sure to use only nodes with 64 GB of memory, you specify

#SBATCH -p node -n 32 -C mem64GB

The main reason for not wanting to use a fat node within your job is that there is a shortage of fat nodes, and someone might need one for a job that cannot run on a 64 GB node.

If your memory requirements are high, you may want to run your 32 copies distributed over more nodes like in

#SBATCH -p node -N 8 -n 32 -C mem64GB

making your job run with four copies of your program on each of eight nodes. If you use OpenMPI, mpirun will automatically distribute the 32 instances of your program over the eight nodes.

We now show an example where you want to run a program "smart_aleck" that communicates over OpenMP within a node, but over MPI (OpenMPI) between nodes. We want to use ten nodes, i.e. 160 cores. Because a single copy of the program knows how to utilize all sixteen cores within a node, we start one copy (one task) of the program on each node, meaning that we specify the SLURM flags "-N 10 -n 10" to get the wanted distribution of tasks. Our OpenMP implementation reads the number of threads per node from the environment variable OMP_NUM_THREADS, so we set the variable to sixteen and let mpirun distribute it to all nodes. The job script might look like this:

#! /bin/bash -l
#SBATCH -p node -N 10 -n 10
#SBATCH -t 7-00:00:00
#SBATCH -A p2010999 -J another_test
module load intel openmpi
export OMP_NUM_THREADS=16
mpirun -x OMP_NUM_THREADS smart_aleck


We get all 160 cores to work, with one OpenMP thread on each. If we later find that sixteen threads do not fit into a node, we need to find a fatter node, or we might try lowering the value of OMP_NUM_THREADS.

Short test runs in the "devel" partition

"devel" is an abbreviation for "development".

The "--qos=short" flag allows short jobs, up to 15 minutes in length and up to four nodes in width, to be submitted with a high priority, as described earlier.

If you need to run a half-long test job, up to one hour in length and up to two nodes in width, you may use the "devel" partition. A small number of 64 GB nodes are removed from the "node" partition and can be used in this way.

This partition of nodes is meant only for small experiments and test runs, not for production jobs. Like "--qos=short", the "devel" partition makes it easier to develop programs and do small tests on a crowded system.

Here is a compilation of facts about the development partition:

  • Jobs are submitted like "node" jobs, but with a "-p devel" instead of a "-p node".
  • No core jobs can be run in this partition.
  • The maximum timelimit for the job is 60 minutes and the maximum node count is two.
  • You must not have more than one "devel" job in the batch system simultaneously, regardless of whether they are running or queued. If you by mistake submit more, they will probably all be automatically cancelled.
  • To get information about the current status of the "devel" partition, you can use the command
sinfo -p devel
  • The interactive command is allowed to use the "devel" partition to start short jobs. You normally do not need to tell the interactive command what partition to use, as it can make the choice automatically.

Difference between devel partition and devcore partition

Sometimes it is too expensive to pay for a full node if you only need one core or a few. So we have configured a new partition, named "devcore". It covers the same physical nodes as the "devel" partition, but you can ask for single cores or multiple cores, as in the "core" partition.

Some examples:

- "-p devcore -n 8" asks for eight cores and the proportional amount of RAM
- "-p devcore -n 1" on Tintin gives you one core and 4 GB of RAM
- "-p devcore -n 10" on Tintin gives you ten cores and 40 GB of RAM
- "-p devel -n 16" on Tintin gives you all cores and 64 GB of RAM
- "-p devcore -n 1" on Milou gives you one core and 8 GB of RAM
- "-p devcore -n 8" on Milou gives you eight cores and 64 GB of RAM
- "-p devel -n 16" on Milou gives you all cores and 128 GB of RAM

So, what is the difference on Tintin between
-p devcore -n 16
-p devel -n 16

None at all! In both cases, you ask for all cores on the node and all RAM on the node.

Project accounting

When you specify "-p node", you allocate full nodes, each containing sixteen cores, so you are accounted a number of "core hours" (sometimes called "CPU hours") equal to the number of hours your job ran, multiplied by sixteen times the number of nodes you allocated. On the other hand, if you do not specify "-p node" and keep within the limits mentioned above, you are accounted only the number of hours your job ran, multiplied by the number of cores you allocated.

To get an overview of how much of your project allocation has been used, please use the projinfo command. Please use the command

projinfo -h

to get details on usage. With no flags given,

projinfo

will tell you your usage in all your projects during the last 30 days.

To get your usage in project p2010999 during the current year, please use one of the commands

projinfo -y p2010999


projinfo -s january p2010999

The projinfo command extracts information from a system log of all finished jobs, and also includes information from the batch system on currently running jobs.

Finished jobs

In order to see information about finished jobs, use the command

finishedjobinfo
The command gives you, apart from the timings of the job, the amount of memory your job used. If your job was cancelled, it might be because it used more memory than it was allowed to.

Details about memory usage

Historical information can first of all be found by issuing the command "finishedjobinfo -j [job id]", which prints the maximum memory used by your job. If you want more details, we also save some memory information at five-minute intervals during the job, in the file /sw/share/slurm/[cluster_name]/uppmax_jobstats/[node_name]/[job_id]. Note that this information is stored for only 30 days.

You can also ask for an e-mail containing the log, when you submit your job with sbatch or start an "interactive" session, by adding a "-C usage_mail" flag to your command. Two examples:

sbatch -A testproj -p core -n 5 -C usage_mail batchscript1
interactive -A testproj -p node -n 1 -C "fat&usage_mail"

As you see, you have to be careful with the syntax when asking for two features, like "fat" and "usage_mail", at the same time. If you use more RAM than you asked for, you will probably get an automatic e-mail anyway.

Discovering job resource usage with jobstats

If you want to be able to see even more details about how your jobs have used the requested resources, then please check out our guide about how to use our jobstats scripts.

File storage and disk space

At UPPMAX we have a few different kinds of storage areas for files, see Disk Storage User Guide for more information and recommended use.

Message Passing using MPI

There is currently only one MPI implementation installed: OpenMPI (version 1.4 is the default). The module to use is called openmpi. For more information about this implementation of MPI, see the OpenMPI documentation.

Advanced Topic 1: Running a detachable screen process in a job

When you run the interactive command, you get a command prompt in the screen program.

When running the screen program in other environments, you can detach from your screen and later reattach to it. Within the environment of the interactive command, you lose this ability: Your job is terminated when you detach. (This is a design decision and not a bug.)

In case you want the best of both worlds, i.e. to be able to detach and reattach to your screen program within a job, you need to start a job in some other way and start your screen session from a separate ssh login. Here is an example of how you can do this:

$ salloc -A p2010999 -t 15:00 -p node -n 1 --qos=short --bell --no-shell
salloc: Pending job allocation 204322
salloc: job 204322 queued and waiting for resources
salloc: job 204322 has been allocated resources
salloc: Granted job allocation 204322
$ squeue -j 204322
 204322      node   (null)      lka   R       1:35      1 ti4
$ xterm -e ssh -AX ti4&
$ xterm -e ssh -AX ti4&

This salloc command gives you a job allocation of one node for 15 minutes (the "--no-shell" option is important here). Alternatively, you can log in to any node of one of your already running jobs, started e.g. with the sbatch command.

You get a job number, and from that you can find out the node name, in this example ti4.

When you log in to the node with the ssh command, you can start the screen program:

$ screen

When you detach from the screen program (e.g. with Ctrl-a d), you can later, in the same ssh session or in another ssh session, reattach to your screen session:

$ screen -r

When your job has terminated, you can neither reattach to your screen session nor log in to the node.

The screen session of the interactive command is integrated into your job, so e.g. all environment variables for the job are correctly assigned. For a separate ssh session, as in this example, that is not the case.

Please note that it is the job allocation that determines your core hour usage and not your ssh or screen sessions.