
Singularity in the GCS Environment

This is a minimal introduction; for the full details about Singularity, please refer to the official documentation at: https://www.sylabs.io/docs/ .

For an excellent introduction, we also recommend Compute Canada's documentation.

Singularity is “container” software, similar to the better-known Docker.

Getting started

The easiest way to get your hands dirty with Singularity is to use a pre-existing container that someone else has prepared. Your professor may supply one, or you can find one on Singularity Hub. Simply download the image file to your home directory, for example:

% singularity pull shub://vsoch/hello-world

Then run its “main program” with:

% singularity run vsoch-hello-world-master-latest.simg

or get a shell in your container with:

% singularity shell vsoch-hello-world-master-latest.simg

Singularity can run Docker images too, for example:

% singularity pull docker://singularities/spark

% singularity shell .singularity/spark.simg

Please use the “computation” hosts (orwell, awareness, willpower), or, if your class has been assigned access to Apini, the hosts apini, apini-02, and so on. You can use lab desktops as well.

Do not try to run computations (including containers) on the public-access “login” systems (grace and poise).

Singularity uses a lot of disk space. In particular, when downloading the layers of a Docker image, it puts several hundred MB in “$HOME/.singularity”, and when setting up to run an ephemeral image, it puts a lot of material in a temporary directory. Because “/tmp” on most Gina Cody School machines is small, we've set the environment variables SINGULARITY_TMPDIR and SINGULARITY_LOCALCACHEDIR to “/scratch”, where there's more room.

Please note these environment variables, in case you want to change where Singularity stores its files:

  • SINGULARITY_CACHEDIR: base folder for caching layers and singularity hub images. (Default: $HOME/.singularity)

  • SINGULARITY_PULLFOLDER: location for pulled images. (Default: current working directory)

  • SINGULARITY_TMPDIR: base folder for squashfs image temporary building. (Default: $TMPDIR (/tmp); changed on our systems to /scratch)

  • SINGULARITY_LOCALCACHEDIR: temporary folder to generate runtime folders (containers “on the fly”, typically run, exec, shell, or docker://). (Default: /tmp; changed on our systems to /scratch)

  • SINGULARITY_DISABLE_CACHE: disables the cache under $HOME/.singularity, but forces everything into /tmp instead; therefore, do not use it on our systems.
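If you do want to relocate these yourself, the variables can simply be exported before invoking Singularity. Here is a minimal sketch; the base directory below is a placeholder, and on a GCS machine you would point it at “/scratch” or at disk space assigned for your course:

```shell
# Placeholder base directory: substitute somewhere with plenty of room,
# e.g. a directory of your own under /scratch on a GCS machine.
base="$HOME/singularity-space"

export SINGULARITY_CACHEDIR="$base/cache"
export SINGULARITY_LOCALCACHEDIR="$base/localcache"
export SINGULARITY_TMPDIR="$base/tmp"

# Create the directories so Singularity can use them immediately.
mkdir -p "$SINGULARITY_CACHEDIR" "$SINGULARITY_LOCALCACHEDIR" "$SINGULARITY_TMPDIR"
```

These settings last only for the current shell session; put them in your shell start-up file if you want them to persist.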

If your home directory is too small to store the images you'll need, your professor will have requested extra disk space for you to use while you're taking a course that requires Singularity. In that case, it is probably easiest to leave the above defaults alone, but to symlink your “.singularity” directory to your temporary disk space, for example like this:

% mkdir /groups/y/yz_comp123_2/singularity
% ln -s /groups/y/yz_comp123_2/singularity ~/.singularity
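You can verify that the symlink behaves as intended before trusting it with real images. The following self-contained sketch of the same mechanism uses throwaway mktemp directories in place of the real group space and home directory:

```shell
# Stand-ins for the real locations (both paths here are hypothetical):
#   groupspace ~ /groups/y/yz_comp123_2/singularity
#   fakehome   ~ your home directory
groupspace="$(mktemp -d)/singularity"
fakehome="$(mktemp -d)"
mkdir -p "$groupspace"

# The symlink from the example above:
ln -s "$groupspace" "$fakehome/.singularity"

# Anything written "into" .singularity actually lands in the group space.
touch "$fakehome/.singularity/testfile"
ls "$groupspace"    # lists: testfile
```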

There are three formats that can be used for Singularity images:

  • dir: Sandbox containers (chroot-style directories). These are writable; however, they cannot be used in the Gina Cody School, since installing into them requires “sudo” (“root”) access.

  • extfs: Raw filesystem writable container images; these are built in an ext3 filesystem image, and cannot be resized. However, changes you make in the container while running it are written into the filesystem image (as long as you've made it large enough!) and persist after you exit the container.

  • squashfs: Compressed immutable (read-only) container images; filenames usually end in “.simg”. This type will also allow you to write into the “live” container, however, changes are lost when you exit your run. This type also cannot be resized after it is created.

Most methods of building containers require that you be “root”, which is not possible on ENCS-managed hosts. If you have your own computer, you can follow the instructions below, otherwise, you can have Singularity Hub connect to your GitHub repos with build specification files, and build the containers automatically for you.

Because the build methods below require “root”, for this section, we assume that you have successfully installed Singularity on a user-managed or personal system.

Singularity containers are either built from an existing container, or are built from scratch using a recipe file.

To build a container from scratch, a recipe file is needed. Here's a basic recipe file for a CentOS container, which we can save as “basic.txt” on our personal or user-managed machine.

# This is a basic CentOS Singularity container.
BootStrap: yum
MirrorURL: http://mirror.centos.org/centos-7/7/os/$basearch/
Include: yum

The “Bootstrap” line determines which Linux build system will be used by the new container, for example:

  • “yum” for CentOS
  • “debootstrap” for Debian and Ubuntu

The container will be built with the O/S downloaded from “MirrorURL”; Singularity will replace the variable “$basearch” with the appropriate value for the host you're running on. “Include” lists any packages that are to be included in the initial build; here we've included “yum” to allow for future package additions.
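For comparison, a recipe that bootstraps a Debian-family container uses “debootstrap” and an extra “OSVersion” line. Here's a hypothetical minimal Debian recipe; the release name and mirror are just examples, so substitute whatever suite you actually need:

# This is a basic Debian Singularity container.
BootStrap: debootstrap
OSVersion: stretch
MirrorURL: http://ftp.debian.org/debian/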

Here's how to build each of the three container types, that is “FS” (filesystem aka extfs), “SB” (sandbox aka dir), and “SQU” (squashfs). Remember to do this as root:

% singularity build --writable FS.img basic.txt
% singularity build --sandbox SB basic.txt
% singularity build SQU.simg basic.txt

Once that's done, we see two new files, “FS.img” and “SQU.simg”, as well as a new directory, “SB”.

Note that the default build is a squashfs one: if you specify neither “--writable” nor “--sandbox”, a squashfs container will be built.

Also note what Singularity's authors have to say about the builds: “A common workflow is to use the 'sandbox' mode for development of the container, and then build it as a default (squashfs) Singularity image when done”. “Raw filesystem” containers are considered to be “legacy” at this point in time; feel free to play around with them, but this document will follow Singularity's modus operandi, and will focus on sandbox and squashfs containers.

A container can also be built from an existing container; the command below shows how to “convert” a container to another type. The command syntax is almost identical to that used for building a container from scratch:

% singularity build [--writable|--sandbox] TARGET SOURCE

The default, again, is a squashfs container, so, for example, to build the squashfs container “SQUnew” from the “SB” sandbox container, the command would be:

% singularity build SQUnew.simg SB

(You can do the above as non-root, but you will see a lot of warnings, and depending on which files were not accessible to you in the “SB” container, your new container may or may not work as expected.)

In reality, any of the three container types can be used to build any of the other types. Note that, with the exception of building a squashfs container from a sandbox container, root-level permissions are needed.

Once you have a container image, you can run it on the system where you built it, or you can copy it to an ENCS-managed host and run it there. In the latter case, you'll get best results with a squashfs or raw filesystem image; if you copy a sandbox container, you'll lose the “root” file ownerships, and the special device files will not be created. However, it may still work.

So, let's enter the container “SB” (for example). Poking around, we quickly realise that, with this very minimal install, we cannot really do much:

% singularity shell SB
Singularity: Invoking an interactive shell within container...

Singularity SB:~> more /etc/redhat-release
bash: more: command not found

Let's install the package that provides “more”:

Singularity SB:~> yum -y install util-linux

Singularity SB:~> more /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

Better! Unfortunately, the container was not invoked in “write mode”, so after we exit, our software installation does not persist:

Singularity SB:~> exit
exit

% singularity shell SB
Singularity: Invoking an interactive shell within container...

Singularity SB:~> more /etc/redhat-release
bash: more: command not found

To make changes persist, enter the container using “--writable” (which of course works only for extfs and sandbox containers):

Singularity SB:~> exit
exit
% singularity shell --writable SB
Singularity: Invoking an interactive shell within container...

Singularity SB:~> yum -y install util-linux
Singularity SB:~> exit
exit

% singularity shell SB
Singularity: Invoking an interactive shell within container...

Singularity SB:~> more /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
Singularity SB:~> exit
exit

Let's say that we are happy with having “more”, and want to make it part of any future containers we build. We could add “more” manually (by installing “util-linux”) every time we build a new container, or, better, we could make it a part of our recipe file:

# This is a basic CentOS Singularity container.
BootStrap: yum
MirrorURL: http://mirror.centos.org/centos-7/7/os/$basearch/
Include: yum

%post
yum -y install util-linux

Commands placed in the “%post” section of the recipe file are run after the base container is built. As a general rule, “Include” should list framework items (i.e., items that are almost certainly going to be part of any container), while the “%post” section should hold container customisations.
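A recipe can also define the container's “main program” via a “%runscript” section, which is what “singularity run” executes. Here's a hypothetical example combining the two sections:

# CentOS container with extra tools and a default run command.
BootStrap: yum
MirrorURL: http://mirror.centos.org/centos-7/7/os/$basearch/
Include: yum

%post
yum -y install util-linux

%runscript
echo "This container says hello."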

Let's build three new containers with this updated recipe file:

% singularity build --writable FS2.img basic.txt
% singularity build --sandbox SB2 basic.txt
% singularity build SQU2.simg basic.txt

Let's confirm that “more” is present and works as expected:

% singularity exec FS2.img more /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
% singularity exec SB2 more /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
% singularity exec SQU2.simg more /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

Notice what we did here: instead of starting up an interactive shell, Singularity can execute (using the “exec” argument) a command, and we have asked it to execute “more /etc/redhat-release” within each container. If what we are asking it to execute is not possible, Singularity informs us:

% singularity exec SB2 hostname
/.singularity.d/actions/exec: line 9: exec: hostname: not found

The above behaviour is not entirely unexpected, since “hostname” has not been installed in “SB2”.

We have now built the three container types. Recall the differences between the three: a filesystem container is an ext3 “file” that can hold persistent changes (i.e., it can be written to), but it cannot be resized after it has been built. A sandbox container, created as a directory in your existing structure, also can hold persistent changes, and can be resized post-build (as long as you have the available space on your system), but you cannot use it reliably on ENCS-managed hosts. Lastly, a squashfs container is a read-only “file”, so you cannot make persistent changes inside it, but it is very portable from one system to another.


© Concordia University