Since at least 2019, the JLab Compute Farm's batch system has been configured to allow allocation of a generic resource (gres) named disk, representing space available on a compute node's /scratch filesystem. If, for example, a set of four jobs each requested 100 gigabytes, the batch system would know not to run all four simultaneously on a node with only 315 gigabytes in /scratch. Such allocations, however, were purely advisory: unlike the hard limits on CPU and memory usage, the batch system would not actually prevent a job from using more than its requested 100 gigabytes of scratch space.
On February 17th, 2026, Farm jobs began to be held strictly to their disk request. This is intended to improve the reliability of the Farm by preventing /scratch filesystems from filling up and causing ENOSPC "no space left on device" errors for other jobs. Conversely, some jobs request no disk, or too little, and have been relying on being allowed to use whatever space other jobs left unused; those jobs may now stop working.
If you are using Swif, then a disk request is automatically calculated based on the size of the input files. If your jobs started failing because they need space beyond the size of their input files, you may need to request more using Swif's -disk-scratch parameter.
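As a sketch of what that looks like (the workflow name, input file, job script, and sizes below are all hypothetical; consult the Swif documentation for authoritative syntax), a job that reads a 2-gigabyte input but writes several gigabytes of intermediate files might be submitted with an explicit scratch request:

```shell
# Hypothetical example: the input file is only ~2 GB, but the job
# writes large intermediates, so request 10 GB of /scratch explicitly
# rather than relying on Swif's input-size-based default.
swif2 add-job -workflow my_analysis \
    -disk-scratch 10GB \
    ./reconstruct.sh run042.evio
```

The key point is that -disk-scratch overrides the automatically calculated request, which only accounts for input file sizes.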
If you are using Slurm directly, then by default you will receive a minimal /scratch allocation. If your job works entirely in memory and on shared filesystems, that may not be a problem! However, many applications assume some amount of temporary scratch space is available even when not told so explicitly, so you may need to add something like #SBATCH --gres=disk:1G if you are running into ENOSPC errors. If you know your job has a /scratch space requirement, then hopefully you were already declaring it, but if not, it is now mandatory.
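Putting that together, a minimal batch script with an explicit disk request might look like the following sketch (the job name, sizes, working-directory convention, and destination path are illustrative assumptions, not site policy; only the --gres=disk syntax comes from the text above):

```shell
#!/bin/bash
#SBATCH --job-name=scratch-demo
#SBATCH --mem=2G
#SBATCH --gres=disk:5G    # hard-enforced /scratch allocation; exceeding it yields ENOSPC

# Assumption: work in a job-specific subdirectory to keep cleanup simple.
WORKDIR=/scratch/${SLURM_JOB_ID}
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# ... generate temporary files here, staying under the 5G request ...

# Copy results to a shared filesystem before exiting, since /scratch
# is node-local and the path here is a hypothetical destination.
cp results.out /path/to/shared/output/
```

Sizing the --gres=disk request slightly above the job's observed peak /scratch usage leaves headroom without hoarding space other jobs could use.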