On Mon, Feb 25, 2019 at 1:09 PM Roman Szabados wrote:
> Dear All,
>
> Updated question: Is it possible to configure the "N" parameter supplied
> by linstor for --max-peers "N"?
>
> As far as I understand it now, linstor uses "--max-peers 7" as a fixed
> argument, which substantially limits the D
On Thu, Mar 14, 2019 at 7:18 AM wrote:
> Hello,
>
> I installed DRBD/Linstor on proxmox with redundancy 1. After adding a second
> node to the cluster I would like to expand the resources to another node =
> redundancy 2.
>
> The new resources seem to be OK, but the older one, created with redundancy
On Wed, Mar 27, 2019 at 12:00 PM Massimo De Nadal
wrote:
> Hi,
> I'm having some problems with Linstor on a Proxmox ZFS pool.
> The first strange thing is this:
>
> LINSTOR ==> storage-pool list
>
> +-+
>
On Thu, Mar 28, 2019 at 7:07 AM Nicholas Morton wrote:
> I am using Linstor to manage DRBD volumes on a 4 node Proxmox cluster and I
> am having an issue during node reboots.
> After a reboot the node is usually missing some of the .res files in
> /var/lib/linstor.d
> For whatever reason they are
See comments inline
On Thu, Mar 28, 2019 at 9:08 AM Massimo De Nadal
wrote:
> Hi Gabor,
>
> thank you very much for your answer.
> You are right about the "zfs list" output.
>
> My zpool is "shared" with non-drbd datasets, so many datasets don't have
> a volsize:
>
> root@pve-DEV1 ~ # zfs list
Hi there,
On Tue, Apr 9, 2019 at 7:22 AM Doug Dimick wrote:
> I failed to upgrade this morning from 0.7.5
> (d74305b420fdc878182afa162378a317e6a4a3b9) to 0.9.5
> (8f1f42f256776eb30161ee29945268c3faf71db9). Are there intermediary steps
> I
> should be taking manually to migrate the database?
>
N
Hello again,
thanks for sending me the database, I was able to find the bug in our
migration as well as a workaround for you until we release a fix for this.
The problem is that you had resources with no volumes in them. The
migration basically iterates over all existing volumes using database-joins
Whenever an error occurs in linstor, and you want to dig a bit deeper, you
should always check the generated ErrorReport (find the error-id with
'linstor error-reports list' and afterwards 'linstor error-reports show
<error-id>')
If an external command that linstor tried to execute does not return the
expected
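For reference, a minimal sketch of that workflow (the error-id below is
just a placeholder):

  # list all error reports together with their ids
  linstor error-reports list
  # show the full report for a given id
  linstor error-reports show 5CB064D8-00000-000001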
Hello,
there are a few things that confuse me quite a lot:
first:
> for n in alpha bravo charlie delta; do \
> linstor storage-pool create $n drbdpool lvmthin pve/drbdpool; \
> done
This is a _very_ old linstor syntax for creating a storage pool. For the
last several months (at least!) you have
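For comparison, a sketch of the newer syntax (the provider type moves in
front of the node name; node, pool and thin-LV names are taken from the
quoted command):

  for n in alpha bravo charlie delta; do
      linstor storage-pool create lvmthin $n drbdpool pve/drbdpool
  done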
Hi Daniel,
I'm afraid that the next few lines of the error report are more interesting
as those should be the standard out and standard error messages from the
failed external command (in your case the 'drbdadm ... create-md ...').
Regards, Gabor
Hi Daniel,
I am still a bit confused. You said that you created the thin pool called
"pve/drbdpool" (copied from your first mail from the 'storage-pool create
...' command), but that thin pool does not exist in the shown "lvs" output.
I believe that the thin pool once existed (at least when the "
Hi Daniel,
The external command sent the following error information:
> no resources defined!
>
> But the resource definition file is there:
>
> /var/lib/linstor.d/vm-100-disk-1.res
>
The error message basically says that DRBD could not find the resource file.
So you verified that the .
Hello
> Have you tried managing this resource by other means, like by using
> drbdtop or drbdadm commands ?
>
> Yes, drbdadm always says:
>
> no resources defined!
>
> despite the resource definition file is there:
>
> /var/lib/linstor.d/vm-100-disk-1.res
>
That still means that you have
Hi,
I am quite sure that you actually don't want Linstor / DRBD to migrate a
volume from an LVM pool into a volume from an LVM_THIN pool. If you did
that, DRBD would (have to) trigger a full resync, replicating the whole
content of the LVM volume to the LVM_THIN volume, making the usage of the
t
Hi,
Well, yes, you have that mixed up. If you have a dedicated machine for the
linstor-controller, that is the only one which does not even need to have
drbd installed.
I guess on your satellite, you are still missing the
"linstor-resources.res" file with its content that includes
"/var/lib/l
Hello,
On Mon, May 13, 2019 at 3:56 PM Daniel Ulman wrote:
> Hi Gabor,
>
> Thank you for your answer.
>
> My linstor-controller is not in a dedicated machine.
>
Fine. That does not really change anything.
> It is a combined node, and it is on this node where I have the problems of
> my
>
> checkDeleted          N    com.linbit.linstor.VolumeDefinitionData:353
> getVolumeSize         N    com.linbit.linstor.VolumeDefinitionData:206
> getReservedCapacity   N    com.linbit.linstor.FreeSpaceMgr:200
>
Hello,
thank
Hello,
> I wonder if it's safe to mount lvm snapshot after `linstor snapshot
create` command ...
Linstor can restore a snapshot into a new resource. If you now have a DRBD
on top of LVM, create a snapshot of that, modify the data without DRBD
knowing (as there is no DRBD running on top of a snapshot
Hello,
regarding the suggested check of the linstor cluster's health: I am pretty
sure the cluster is healthy; the "missing" IP is most likely just redacted
from the screenshots manually (otherwise why would there be spaces
before the ":port" and also in "root@ :path"?)
Which also leads us t
Hi,
linstor-server gets a new feature, namely support for ETCD
There are some limitations regarding the ETCD support:
* There is no migration from SQL -> ETCD
* Linstor requires the ETCD server to increase its limit for requests per
transaction [1]
Our tests were running well with 1024, but
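As a sketch, assuming a plain etcd invocation, that limit corresponds to
etcd's --max-txn-ops flag (default 128):

  etcd --max-txn-ops 1024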
Hi.
Yes, the default for new installations changed with 0.7.2 from "MAC" to
"NO_SECURITY".
Upgrading from <0.7.2 to 0.7.2 or something higher will NOT change that
setting for you (more or less on purpose).
Currently there are some scenarios where you simply want to change that
setting to "NO_SECURITY"
Apparently we are a bit behind in properly documenting this feature, but you
might want to take a look at the following commands:
$ linstor resource-connection path create testhost01 testhost04
path1 data data
# the above command alone has the same effect as setting the PrefNic on
both sides, i.e. forcing
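For comparison, a sketch of the PrefNic variant mentioned in the comment
(node and nic names taken from the quoted command; this assumes the
property is set on node level):

  linstor node set-property testhost01 PrefNic data
  linstor node set-property testhost04 PrefNic data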
Hi,
... . Satellite and Controller are
> deployed using docker and are version 1.2.0. The Satellites are started
> using the "--net=host --privileged" options for docker.
>
That sounds like the first problem here. Multiple docker containers still
share the same kernel. As DRBD is a kernel module,
Regarding the ErrorReport: that bug should be fixed in 1.4.1, which will be
released today if not already done.
> Should we worry about warnings?
> " *WARNING: An illegal reflective access operation has occurred* ..."
Not this one :) Besides this being only a warning, it is out of linstor's
scope
In that case you might want to increase the log level on the satellite to
TRACE by adding
[logging]
level="TRACE"
to the linstor_satellite.toml file
With trace logging you should see every external command (lvs, vgs,
lvcreate, ...) linstor executes as well as the time linstor spent waiting
for
x-gnu/perl5/5.24/AnyEvent/Handle.pm line 1522
> AnyEvent::Handle::__ANON__(AnyEvent::Handle=HASH(0x559ccda3b460)) called at
> /usr/lib/x86_64-linux-gnu/perl5/5.24/AnyEvent/Handle.pm line 1332
> AnyEvent::Handle::_drain_rbuf(AnyEvent::Handle=HASH(0x559ccda3b460)) called
> at /usr/lib/
Hello,
in the logs you can see that `lvs` needs ~150ms, `vgs` ~15-20ms, `lvcreate`
~200ms, `drbdadm ... --create-md` about 3 seconds (that is fine)... so no
external command executed by linstor is exceptionally slow.
However, the fact that a `linstor err show` takes 20 seconds does not
sound g
Hello,
unfortunately there is no straightforward way to correct that "lvmthin
instead of lvm" mistake.
LINSTOR does not allow you to delete the storage pool while it has
resources in it.
That means, I see only 2 ways to fix this issue. As you found out, the lost
storage-pool is not fully implemented
Hello,
see answers inline
On Thu, Apr 2, 2020 at 12:09 PM Volodymyr Litovka wrote:
> Dear colleagues,
> is it possible to use DB backend for keeping state of controller in order
> to provide HA, instead of drbd volume like described here? -
> https://www.linbit.com/linstor-controller-pacemaker/
Hello,
> I tried the lost command even when the satellites are stopped, but the
> command isn't available.
>
What do you mean by "the command isn't available"?
If you have already added a node to LINSTOR's database (you can check with
`linstor node list`), and the satellite is currently OFFLINE,
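A minimal sketch of that check-then-remove flow (the node name is
hypothetical):

  # verify the node exists and is shown as OFFLINE
  linstor node list
  # then force-remove it from the database
  linstor node lost node3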
Hi All,
This release only contains a small bugfix as described below.
linstor-server 1.6.1
* Fix possible deadlock on startup when DB is unavailable
https://www.linbit.com/downloads/linstor/linstor-server-1.6.1.tar.gz
Linstor PPA (soon):
https://launchpad.net/~linbit/+arc
Hello,
> Any ideas what this is and why?
>
not yet :)

> and whether this can impact something?
>
this definitely impacts something - usually when the controller thinks it
has no connection to a satellite, it refuses some operations on that
satellite, operations like creating new resources, new storage
That the controller warns about a satellite being offline (and thus its
storage pools are currently not available / usable) is absolutely expected
while restarting the satellite.
If I remember correctly, the controller retries every 10 seconds or so to
see if the satellite is online again.
I assume that
Hi,
apparently something is quite broken... maybe it's somehow your setup or
environment, I am not sure...
linstor resource list
>
> ┊ ResourceName ┊ Node ┊ Port ┊ Usage ┊ Conns ┊ State ┊
>
Hi!
This release-candidate contains the following new features:
* SOS-report: using "linstor sos-report create" and afterwards the
"download" subcommand, Linstor can now generate a tar.gz containing
machine- and Linstor-specific information, log files, error reports
and such.
* Snapshot-shipping[
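For reference, the two SOS-report commands mentioned above run in
sequence:

  linstor sos-report create
  linstor sos-report download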
Hi,
I tried to recreate this issue, but without success.
4 Node setup, all LVM
First create a resource with --auto-place 3,
Create 9 other resources with --auto-place 4
Create the first resource on the 4th (missing) node
Check "linstor volume list"
That means, there has to be something else in y
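For reference, a sketch of the creation steps described above (resource
name and size are hypothetical):

  linstor resource-definition create res0
  linstor volume-definition create res0 1G
  linstor resource create res0 --auto-place 3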
Hi,
> But all the volumes are now and remain in diskless state on Hydra1?
when you added the third node, the command `node create Hydra1` triggered
Linstor's auto-tiebreaker feature. DRBD can use quorum[1] (useful if your
cluster gets split), but needs 3 peers for that (so there can be a majority
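If you ever want to inspect or override that behaviour, quorum is a
regular DRBD option; a sketch (resource name hypothetical):

  linstor resource-definition drbd-options --quorum majority vm-100-disk-1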
Hi,
Sure, you might want to take a look at
linstor controller drbd-options -h
especially the "--protocol B" option :)
This option can also be set on resource-group / -definition level if you
want to override the used protocol for some resources.
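A short sketch of both levels (resource name hypothetical):

  # cluster-wide default
  linstor controller drbd-options --protocol B
  # override for a single resource-definition
  linstor resource-definition drbd-options --protocol C vm-100-disk-1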
--
Best regards,
Gabor Hernadi
Hello,
when you executed `linstor node create C ...` Linstor should have given you
a list of supported layers + providers or reasons why layers / providers
are not supported. You can recheck that list with:
linstor node info
That command should tell you why Linstor thinks that C does not support
Hello,
Can you please give us more details? For example the version of the linstor
controller:
linstor controller version
Also, what exactly happened before you tried to delete ... what exactly? You
are talking about snapshots, but you are showing us a list of resources and
resource-definitions,
Hello!
Since last week's release candidate 1.13.0rc1 had only a few minor issues,
it's now time for the actual release of 1.13.0
linstor-server 1.13.0
* Allowed new BCache layer to be used as topmost layer
* Fixed API-Version reported by Controller
* Removed invalid descript
Hello!
As we received a few questions regarding the recently discovered security
issues with log4j, we want to clarify that Linstor is *not* using log4j but
logback as a logging framework.
Linstor is therefore NOT vulnerable to security issues in the log4j
library.
Linstor is open source and is
Hello,
On Mon, Apr 18, 2022 at 3:08 AM Eric Robinson
wrote:
> We noticed section 2.7 of the Linstor User Guide ends with the following
> statement, apparently intended for internal use:
>
> “FIXME describe how to route the controller <-> client communication
> through a specific netif.”
>
> # vgcreate vg0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
> /dev/nvme4n1 /dev/nvme5n1
>
> It's all one big volume group and we let Linstor provision resources
> wherever it wants. What is the effect if one NVMe drive fails? Do all
> resources on the node go diskless, or just resources t
Hello,
As mentioned in the other drbd-user thread (Adding a Physical Disk to a
Storage Pool), this behaviour is controlled by LVM, not by Linstor. During
the `linstor sp c ...` command you only gave Linstor the name of the
volume-group, which is also what Linstor will use for the `lvcreate ...`
command
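A sketch of what that means in practice (the LV name is hypothetical;
Linstor derives it from the resource and volume number):

  # the VG spans all PVs given to vgcreate; lvcreate then lets LVM
  # pick the physical extents:
  lvcreate --size 10G --name vm-100-disk-1_00000 vg0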
Hello,
This second release candidate is almost the same as the first, but with a
small bugfix regarding creating backups to S3, as well as whitelisting new
DRBD options available since drbd-utils 9.21.1
For completeness' sake, here are the release notes from rc.1:
Starting with this version, users
Hello,
What Linstor version are you running?
I tried to reproduce this, and let a resource be auto-snapshotted for a
week with your 1440 setting. It was not perfect, since the 24h timer gets
restarted every time we restart the controller (for example we had a
snapshot at 2022-05-11 11:50:03, rest
Hi,
> linstor 1.6.1; GIT-hash: df9121a4e86502351bd7373446dc0f9233ab3bb5
>
This is the version of the client. Use `linstor controller version` to get
the version of the linstor-controller.
> Active: active (running) since Thu 2021-05-13 16:14:37 CEST; 1 years 0
> months ago
>
Whatever the exact
On Fri, Nov 24, 2023 at 8:25 AM Michał Szamocki wrote:
> Hello,
>
> my DRBD cluster configuration has some leftovers from deleted resources
> and drbdadm returns lots of such warnings:
> # drbdadm role vm-233-disk-1
> /var/lib/linstor.d/vm-111-disk-1.res:6: in resource vm-111-disk-1:
> re
Hello!
> I have added a new node to the cluster today (fresh ubuntu installation).
> After successful setup I want to share a resource on it (extend the
> linstor_db resource with one more node), but when I start it, I get the
> error message:
> Node: 'drbd-05' has DRBD version 9.2.5, but version 9.2.7 (or h
Hello Zsolt,
LINSTOR's storage pool mixing has 2 criteria. If one is fulfilled, you have
a mixed storage pool setup:
1) Different extent sizes (this is what I mentioned in the previous email
with "mixing LVM with ZFS", since LVM by default has 4M extent size and ZFS
has 8k).
2) Mixing thin and thick
olumes
> between these servers. Can we turn off this feature temporarily for one or
> more volumes/resources?
>
> On Wed, Jan 31, 2024, 5:16 PM Gábor Hernádi
> wrote:
>
>> Hello Zsolt,
>>
>> LINSTOR's storage pool mixing has 2 criteria. If one is fulfilled, you
>
Hello,
However, after a short amount of time, the linstor controller just stops
> responding again, with basic commands like "linstor node list" timing
> out after 5 minutes.
>
We fixed a few deadlock-related issues in the recent LINSTOR release,
but have also found another one (will be inc