Available Resources

When you use a graphics card, you consume both its computing power and its working memory (vRAM). Compute capacity is shared between users via time-slicing: if several users work on the same graphics card, all calculations are still carried out, but they may take longer depending on the overall workload.

No such guarantee applies to the GPU's memory: a total of 48 GB of vRAM is available per graphics card. When calculations are performed on the GPU (e.g., matrix multiplications, LLM inference, model training), the required data is written to the card's vRAM. If the vRAM is fully utilized, out-of-memory errors can occur; how these are reported depends on the framework used.
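For illustration, here is a minimal sketch of how GPU allocations consume vRAM and how an out-of-memory error surfaces. It assumes PyTorch as the framework; the tensor sizes are arbitrary examples, and other frameworks report out-of-memory conditions with their own error types:

```python
import torch

# Minimal sketch, assuming PyTorch and a CUDA-capable GPU.
device = torch.device("cuda")

# Moving data to the GPU writes it into the card's vRAM.
x = torch.randn(4096, 4096, device=device)  # ~67 MB in float32
y = x @ x                                   # matrix multiplication on the GPU

print(f"vRAM currently allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")

try:
    # Deliberately over-allocate (~4 TB) to show how an OOM error surfaces.
    too_big = torch.empty(1_000_000, 1_000_000, device=device)
except torch.cuda.OutOfMemoryError as err:
    print(f"Out of memory: {err}")
```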

Therefore, in line with our fair use policy, please find out how much vRAM your calculations or AI models require. For AI models in particular, benchmarks published on the Internet usually provide a rough estimate. For teaching purposes, no more than 6 GB of vRAM should be required. If you need more resources, consider using an HPC system instead. To view the current utilization of the graphics card, you can run the command "nvidia-smi" within your JupyterLab environment. We are already working on a visual overview of the available resources.
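As a rough rule of thumb for AI models, the weights alone need about parameter count × bytes per parameter; activations, batch size, and framework overhead come on top. The sketch below illustrates this estimate; the model size is a hypothetical example:

```python
def weight_vram_gb(num_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Rough lower bound for the vRAM taken up by model weights alone
    (2 bytes per parameter corresponds to float16/bfloat16)."""
    return num_parameters * bytes_per_parameter / 1e9

# A hypothetical 3-billion-parameter model loaded in float16:
print(f"{weight_vram_gb(3e9):.1f} GB")  # -> 6.0 GB, the teaching limit
```

Within a JupyterLab notebook cell, the utilization check can be run directly by prefixing the command with an exclamation mark ("!nvidia-smi").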

Note
Even after your calculations have finished, the vRAM remains reserved. We therefore ask you to stop your server manually after use ("File >> Hub Control Panel"), to log out, or to shut down the Jupyter kernel you used within JupyterLab ("Kernel >> Shut Down Kernel").
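If you want to continue working in the same kernel, cached memory can often be released by hand; the following is a framework-dependent sketch assuming PyTorch. Note that the CUDA context of a running kernel keeps some vRAM reserved regardless, which is why shutting down the kernel or server is the reliable option:

```python
import torch

# Sketch assuming PyTorch; the model here is only an example allocation.
model = torch.nn.Linear(4096, 4096).to("cuda")

del model                    # drop all Python references to GPU objects
torch.cuda.empty_cache()     # return cached vRAM blocks to the driver
print(torch.cuda.memory_allocated())  # -> 0 once nothing is allocated
```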