I believe that in Linux it is possible to limit the memory and the amount of CPU used by a user, so I can limit resources for a user group; and if I put the OSS in a VM, I suppose I can limit its CPU and memory usage as well.

My scenario: I have 34 compute nodes with 512 GB RAM each and 34 HDs of 16 TB each that I can arrange across 9 nodes. I also have a management node that can be used as the Lustre metadata server, and the InfiniBand fabric is 200 Gb/s. We run MHD simulations.

What Lustre configuration do you suggest?
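For reference, per-user memory and CPU caps of the kind mentioned above can be applied with systemd's cgroup (v2) resource controls on the user's slice. A minimal sketch, assuming UID 1000 and illustrative limit values (requires root and a cgroup-v2 systemd host):

```
# Hypothetical example: cap all processes of user 1000 at
# 32 GiB of RAM and 8 cores' worth of CPU time (800%).
systemctl set-property user-1000.slice MemoryMax=32G CPUQuota=800%

# Confirm the memory limit is now set on the slice:
systemctl show -p MemoryMax user-1000.slice
```

`set-property` persists the settings as a drop-in, so they survive the user logging out and back in.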
________________________________
From: Andreas Dilger <[email protected]>
Sent: Friday, October 13, 2023 7:19:11 PM
To: Fedele Stabile <[email protected]>
Cc: [email protected]
Subject: Re: [lustre-discuss] OSS on compute node

On Oct 13, 2023, at 20:58, Fedele Stabile <[email protected]> wrote:

Hello everyone,
We are in the process of integrating Lustre into our small HPC cluster, and we would like to know whether the same node can act as an OSS with disks and also be used as a compute node running a Lustre client. I know that the OSS requires a modified kernel, so I suppose it could be installed in a virtual machine using KVM on a compute node.

There isn't really a problem with running a client + OSS on the same node anymore, nor is there a problem with an OSS running inside a VM (if you have SR-IOV and enough CPU+RAM to run the server).

*HOWEVER*, I don't think it would be good to have the client mounted on the *VM host* and then run the OSS in a *VM guest*. That could lead to deadlocks and priority inversion if the client becomes busy but depends on the local OSS to flush dirty data from RAM, while the OSS cannot run in the VM because it doesn't have any RAM...

If the client and OSS are BOTH run in VMs, or neither is run in a VM, or only the client runs in a VM, then that should be OK, but performance may be reduced because the server contends with the client application.

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud
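For a layout like the one described (a management node as metadata server, nine OSS nodes, clients over InfiniBand), the basic Lustre formatting and mount steps would look roughly like the sketch below. The filesystem name `mhdfs`, the device-mapper paths, and the LNET network name `o2ib` are all assumptions, not a recommendation from the thread:

```
# Hypothetical sketch: combined MGS+MDT on the management node,
# OSTs on the OSS nodes, clients mounting over InfiniBand (o2ib).

# On the management node (device path is illustrative):
mkfs.lustre --fsname=mhdfs --mgs --mdt --index=0 /dev/mapper/mdt0
mount -t lustre /dev/mapper/mdt0 /mnt/mdt0

# On each OSS node, once per OST (the --index must be unique
# across all OSTs in the filesystem):
mkfs.lustre --fsname=mhdfs --ost --index=0 --mgsnode=mgs@o2ib /dev/mapper/ost0
mount -t lustre /dev/mapper/ost0 /mnt/ost0

# On each client:
mount -t lustre mgs@o2ib:/mhdfs /lustre/mhdfs
```

Whether the OSTs should be individual 16 TB disks or RAID volumes, and how many OSTs per OSS, depends on the redundancy and bandwidth requirements of the MHD workload.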
_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
