Hello Gabor, thank you for the answer. You mention mixing of storage pools, but I use only ZFS (thin and non-thin provisioned). Here is an example:

+-------------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool          | Node    | Driver   | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName                   |
|=============================================================================================================================================|
| DfltDisklessStorPool | drbd-03 | DISKLESS |          |              |               | False        | Ok    | drbd-03;DfltDisklessStorPool |
| DfltDisklessStorPool | drbd-04 | DISKLESS |          |              |               | False        | Ok    | drbd-04;DfltDisklessStorPool |
| DfltDisklessStorPool | drbd-05 | DISKLESS |          |              |               | False        | Ok    | drbd-05;DfltDisklessStorPool |
| DfltDisklessStorPool | drbd-06 | DISKLESS |          |              |               | False        | Ok    | drbd-06;DfltDisklessStorPool |
| HDDPool              | drbd-03 | ZFS_THIN | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-03;HDDPool              |
| HDDPool              | drbd-04 | ZFS_THIN | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-04;HDDPool              |
| HDDPool              | drbd-05 | ZFS      | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-05;HDDPool              |
| HDDPool              | drbd-06 | ZFS      | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-06;HDDPool              |
| SSDPool              | drbd-03 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-03;SSDPool              |
| SSDPool              | drbd-04 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-04;SSDPool              |
| SSDPool              | drbd-05 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-05;SSDPool              |
| SSDPool              | drbd-06 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-06;SSDPool              |
+-------------------------------------------------------------------------------------------------------------------------------------------+
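The listing above is the output of `linstor storage-pool list`; I masked the capacity values. If I remember the client options correctly, it can also be filtered to the nodes in question:

    linstor storage-pool list --nodes drbd-05 drbd-06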
Is this kind of mixing not allowed either? (I also put a small sketch of the new resource-group idea below your quoted mail.)

On Tue, Jan 30, 2024 at 4:15 PM Gábor Hernádi <gabor.hern...@linbit.com> wrote:

> Hello!
>
>> I have added a new node to the cluster today (fresh Ubuntu installation).
>> After setting it up successfully I want to share a resource on it (extend
>> the linstor_db resource with one more node), but when I start it, I get the
>> error message:
>> Node: 'drbd-05' has DRBD version 9.2.5, but version 9.2.7 (or higher)
>> is required
>
> LINSTOR is usually fine with a DRBD version >= 9.0.0. The only feature
> that requires the most current version (9.2.7 or 9.1.18) is when you want
> to mix storage pools. Most commonly that is achieved by mixing ZFS and LVM
> storage pools.
> If you want to do that, you will need to upgrade DRBD.
>
>> But after some time the linstor_db resource is removed from one node (I
>> don't delete it, the controller does it). I try to add the resource to this
>> node again, but I always get the identical error.
>
> There is a "BalanceTask" where the controller tries to adjust the
> deployed-resource-count to the configured place-count from the
> corresponding resource-group.
> That means if your resource-group (RG) has a place-count of 2 configured,
> and you have deployed your 3rd linstor_db resource, LINSTOR will
> automatically (after a while) delete one "unnecessary" resource.
> If you do want to have your linstor_db resource replicated 3 (or more)
> times, you could create a new RG, move the linstor_db resource-definition
> into the new RG and configure the new RG with --place-count 3, for example.
> I suggest using a new RG, since simply modifying the --place-count of an
> existing RG will trigger lots of resource creations, because a lot of
> resource-definitions will "lack" a resource.
>
>> The problem is that I don't know what will happen if I upgrade the DRBD
>> kernel module to the new version and restart the node (all nodes). Does
>> LINSTOR remove other resources from the nodes? The problem is that not
>> only the resource config is removed, but the ZFS volume too.
>
> Although LINSTOR will indeed delete the .res files (briefly before they
> get regenerated), LINSTOR will NOT `drbdadm down` the DRBD devices.
> So restarting LINSTOR will not affect any running DRBD devices.
>
> Hope that helps
>
> Best regards,
> Gabor
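If I understand the resource-group suggestion correctly, it would be roughly this (untested sketch; "linstor_db_rg" is just a name I picked, and I am assuming linstor_db currently lives in SSDPool):

    # create a dedicated resource-group with place-count 3
    linstor resource-group create linstor_db_rg --place-count 3 --storage-pool SSDPool

    # move the existing resource-definition into the new resource-group
    # (assuming the client supports changing the RG this way)
    linstor resource-definition modify linstor_db --resource-group linstor_db_rg

    # then re-create the third replica, e.g. on drbd-05 once DRBD is upgraded there
    linstor resource create drbd-05 linstor_db --storage-pool SSDPool

Does that look right?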