Batch analysis is pre-configured to run on a local Slurm compute cluster with a single worker node, slurmd1. By default, the cluster is configured with nominal CPU and memory allocations, as defined by the SLURM_NODE_OPTIONS parameter in the default.env file (shown below):
SLURM_NODE_OPTIONS=CPUs=2 Boards=1 SocketsPerBoard=1 CoresPerSocket=1 ThreadsPerCore=2 RealMemory=10240
You can override this parameter by adding the SLURM_NODE_OPTIONS variable to the custom.env file. If the custom.env file does not contain a SLURM_NODE_OPTIONS line, SImA uses the default resource allocation for batch analysis.
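As a quick check, you can see which value (if any) is currently in effect; the command below assumes both files are located in the SImA installation directory:
$ grep SLURM_NODE_OPTIONS default.env custom.env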
To optimise the resource allocation for batch analysis on the local Slurm cluster, first determine the resources available to the system as a whole. To do that, run the following commands on the SImA server:
$ lscpu
e.g.
bash-4.4$ lscpu
...
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
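If you only need the total number of logical CPUs, the nproc command prints it directly (on the example system above it would return 16):
$ nproc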
$ free -h
e.g.
bash-4.4$ free -h
               total        used        free      shared  buff/cache   available
Mem:            62Gi       402Mi        53Gi        18Mi       9.2Gi        61Gi
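Since RealMemory is specified in the same unit used in the examples below (megabytes), it can be convenient to read the total memory directly in that unit, for example:
$ free -m | awk '/^Mem:/ {print $2}'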
To prevent resource exhaustion from impacting the general running of the system, it is recommended to reserve at least 4 CPUs and 16 GB of memory for the host. The appropriate reservation varies by use case, so in some situations it may be necessary to reserve more.
Using the example output above (16 CPUs and 62 GiB of RAM), reserving 4 CPUs and 16 GB for the host leaves 12 CPUs and 46 GB of RAM for local batch analysis. Note that RealMemory is given in megabytes (46 x 1024 = 47104) and that the CPU count matches the declared topology (1 socket x 6 cores per socket x 2 threads per core = 12). Allocate these resources by adding the SLURM_NODE_OPTIONS variable to the custom.env file as shown below:
SLURM_NODE_OPTIONS=CPUs=12 Boards=1 SocketsPerBoard=1 CoresPerSocket=6 ThreadsPerCore=2 RealMemory=47104
After making the change, it is necessary to redeploy the SImA stack using the undeploy.sh and deploy.sh scripts, which can be found inside the SImA installation directory, as described here.
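For example, assuming the SImA installation directory is /opt/sima (a placeholder path, adjust it to your installation):
$ cd /opt/sima
$ ./undeploy.sh
$ ./deploy.sh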
Once the stack has been redeployed, assess whether the new allocation has an impact on batch analysis performance.
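If the Slurm client tools are available on the SImA server, you can first confirm that the controller has registered the new allocation, for example:
$ scontrol show node slurmd1
The output includes CPUTot and RealMemory fields, which should match the values configured above.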