On Mon, Jan 27, 2020 at 03:23:55PM -0500, Dave Hall wrote:
>All,
>
>I've just spent a significant amount of time unsuccessfully chasing
>the _read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6.
>Since this is a brand new cluster, last night I gave up and moved back
>to Debian 9 / Lu
Yes, data that is not synced is not guaranteed to be written to disk;
this is consistent with POSIX semantics.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon,
Hi Eric,
With regards to "From the output of “ceph osd pool ls detail” you can see
min_size=4, the crush rule says min_size=3 however the pool does NOT
survive 2 hosts failing. Am I missing something?"
For your EC profile you need to set the pool min_size=3 to still read/write
to the pool with t
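For reference, a rough sketch of the change being suggested here; the pool and
profile names are placeholders, not taken from the original cluster:

    # inspect what the pool and EC profile currently say
    ceph osd pool ls detail | grep <ec-pool>
    ceph osd erasure-code-profile get <profile>

    # allow client I/O as long as k shards are available
    ceph osd pool set <ec-pool> min_size 3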
Hi all,
I would like to rename the logical volumes / volume groups used by my OSDs. Do
I need to change anything other than the block and block.db links under
/var/lib/ceph/osd/?
IT-Services
Phone: 02461 61-9243
E-Mail: f.sto...@fz-juelich.de
Hi,
Keep in mind that /var/lib/ceph/osd/ is a tmpfs which is created by
'ceph-bluestore-tool' on OSD startup.
All the data in there comes from the lvtags set on the LVs.
So I *think* you can just rename the Volume Group and rescan with
ceph-volume.
Wido
On 1/28/20 10:25 AM, Stolte, Felix wrote:
Hi,
In my experience it is also wise to make sure the lvtags reflect the new vg/lv names!
Kaspar
On 28 January 2020 at 10:38, Wido den Hollander wrote:
Hi,
Keep in mind that /var/lib/ceph/osd/ is a tmpfs which is created by
'ceph-bluestore-tool' on OSD startup.
All the data in there come
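To make that concrete, a rough sketch of the rename, assuming the OSD is stopped
first; device, VG/LV and tag values below are illustrative, so check the actual
tag keys with lvs before changing anything:

    systemctl stop ceph-osd@<id>
    vgrename <old_vg> <new_vg>
    lvrename <new_vg>/<old_lv> <new_lv>

    # the lvtags ceph-volume reads embed the old device path, so review them
    lvs -o lv_name,vg_name,lv_tags
    lvchange --deltag "ceph.block_device=/dev/<old_vg>/<old_lv>" \
             --addtag "ceph.block_device=/dev/<new_vg>/<new_lv>" <new_vg>/<new_lv>

    # rescan and bring the OSD back up
    ceph-volume lvm activate --all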
Say one is forced to move a production cluster (4 nodes) to a different
datacenter. What options do I have, other than just turning it off at
the old location and on at the new location?
Maybe buying some extra nodes, and moving one node at a time?
Hello Igor,
i updated all servers to latest 4.19.97 kernel but this doesn't fix the
situation.
I can provide you with all those logs - any idea where to upload / how
to sent them to you?
Greets,
Stefan
On 20.01.20 at 13:12, Igor Fedotov wrote:
> Hi Stefan,
>
> these lines are the result of transa
On 1/28/20 11:19 AM, Marc Roos wrote:
>
> Say one is forced to move a production cluster (4 nodes) to a different
> datacenter. What options do I have, other than just turning it off at
> the old location and on at the new location?
>
> Maybe buying some extra nodes, and moving one node at a t
We did this as well, pretty much the same as Wido.
We had a fiber connection with good latency between the locations.
We installed a virtual monitor in the destination datacenter to always
keep quorum, then we simply moved one node at a time after setting noout.
When we took a node up on the de
And us too, exactly as below. One at a time, then wait for things to
recover before moving the next host. We didn't have any issues with this
approach either.
Regards,
Simon.
On 28/01/2020 13:03, Tobias Urdin wrote:
We did this as well, pretty much the same as Wido.
We had a fiber connection w
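For anyone collecting this thread into a runbook, the per-node loop described
above looks roughly like this (a sketch, not the posters' exact commands):

    ceph osd set noout
    # on the node being moved:
    systemctl stop ceph-osd.target
    # ...relocate the node, cable it up, power it on...
    systemctl start ceph-osd.target
    # wait for recovery before touching the next node
    ceph -s
    # once every node has been moved:
    ceph osd unset noout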
I have a query about https://docs.ceph.com/docs/master/cephfs/createfs/:
"The data pool used to create the file system is the "default" data pool and
the location for storing all inode backtrace information, used for hard link
management and disaster recovery. For this reason, all inodes created
All;
I haven't had a single email come in from the ceph-users list at ceph.io since
01/22/2020.
Is there just that little traffic right now?
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com
On Tue, 28 Jan 2020, dhils...@performair.com wrote:
> All;
>
> I haven't had a single email come in from the ceph-users list at ceph.io
> since 01/22/2020.
>
> Is there just that little traffic right now?
I'm seeing 10-20 messages per day. Confirm your registration and/or check
your filters?
Jan,
Unfortunately I'm under immense pressure right now to get some form of
Ceph into production, so it's going to be Luminous for now, or maybe a
live upgrade to Nautilus without recreating the OSDs (if that's possible).
The good news is that in the next couple months I expect to add more
h
Hi.
Before I descend into what happened and why it happened: I'm talking about a
test cluster, so I don't really care about the data in this case.
We've recently started upgrading from luminous to nautilus, and for us that
means we're retiring ceph-disk in favour of ceph-volume with lvm and
dmcryp
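A minimal sketch of the ceph-volume path with dmcrypt, since the message is cut
off here; device paths are placeholders and this is not the poster's exact
invocation:

    # single OSD, LVM-backed BlueStore with dmcrypt
    ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdX

    # or several devices at once
    ceph-volume lvm batch --bluestore --dmcrypt /dev/sdX /dev/sdY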
Hi,
we are planning to use EC
I have 3 questions about it
1 / What is the advantage of having more machines than (k + m)? We are
planning to have 11 nodes and use k=8 and m=3. Does it improve
performance to have more nodes than k+m? By how many? What ratio?
2 / What behavior should we exp
On Tue, 28 Jan 2020 at 17:34, Zorg wrote:
> Hi,
>
> we are planning to use EC
>
> I have 3 questions about it
>
> 1 / What is the advantage of having more machines than (k + m)? We are
> planning to have 11 nodes and use k=8 and m=3. Does it improve
> performance to have more nodes than k+m? By
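For context, a hedged sketch of creating such a pool; names and PG counts are
illustrative only:

    ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec-8-3

With crush-failure-domain=host and 11 hosts for k+m=11 shards, every host holds
a shard of every PG; hosts beyond k+m give CRUSH somewhere to rebuild shards
after a host failure, so the pool can return to full redundancy without waiting
for the failed host to come back.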
On Tue, Jan 28, 2020 at 4:26 PM CASS Philip wrote:
> I have a query about https://docs.ceph.com/docs/master/cephfs/createfs/:
>
>
>
> “The data pool used to create the file system is the “default” data pool
> and the location for storing all inode backtrace information, used for hard
> link manag
Hi Greg,
Thanks – if I understand https://ceph.io/geen-categorie/get-omap-keyvalue-size/
correctly, “rados -p cephfs.fs1-replicated.data ls” should show any such
objects? It’s also returning blank (and correctly returns a lot for the EC
pool).
That being said – if it’s only written to by the
I did this, but with the benefit of taking the network with me, just a forklift
from one datacenter to the next.
Shut down the clients, then OSDs, then MDS/MON/MGRs, then switches.
Reverse order back up,
> On Jan 28, 2020, at 4:19 AM, Marc Roos wrote:
>
>
> Say one is forced to move a produc
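As a sketch of what that usually involves on the Ceph side (the post above only
lists the shutdown order; the flags here are my assumption, not something the
poster stated):

    ceph osd set noout
    ceph osd set norebalance
    ceph osd set norecover
    ceph osd set nobackfill
    # shut down clients, then OSDs, then MDS/MON/MGRs, then switches;
    # after the move, power everything up in reverse order, then:
    ceph osd unset nobackfill
    ceph osd unset norecover
    ceph osd unset norebalance
    ceph osd unset noout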
Hi,
I had a problem with one application (Seafile) which uses a Ceph backend with
librados.
The corresponding pools are defined with size=3 and each object copy is on a
different host.
The cluster health is OK: all the monitors see all the hosts.
Now, a network problem just happened between my
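For reference, the settings being described can be confirmed like this (the pool
name is a placeholder):

    ceph osd pool get <pool> size
    ceph osd pool get <pool> min_size
    ceph osd pool ls detail
    # and the rule that spreads copies across hosts:
    ceph osd crush rule dump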
On Tue, Jan 28, 2020 at 6:55 PM CASS Philip wrote:
> Hi Greg,
>
>
> Thanks – if I understand
> https://ceph.io/geen-categorie/get-omap-keyvalue-size/ correctly, “rados
> -p cephfs.fs1-replicated.data ls” should show any such objects? It’s also
> returning blank (and correctly returns a lot for t
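For anyone reading along, a hedged sketch of the layout under discussion: a
small replicated pool as the default data pool (which is where the backtraces
land) plus an EC pool added for the bulk file data. Only
cephfs.fs1-replicated.data comes from the thread; the other names are
illustrative:

    ceph fs new fs1 cephfs.fs1.meta cephfs.fs1-replicated.data
    ceph osd pool set cephfs.fs1-ec.data allow_ec_overwrites true
    ceph fs add_data_pool fs1 cephfs.fs1-ec.data
    # point file data at the EC pool via a directory layout
    setfattr -n ceph.dir.layout.pool -v cephfs.fs1-ec.data /mnt/fs1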
Hi,
I've run into the same issue while testing:
ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus
(stable)
Debian bullseye
Ceph was installed using ceph-ansible on a VM from the repo
http://download.ceph.com/debian-nautilus
The output of `sudo sh -c 'CEPH_VOLUME_DEBU
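The command above is cut off by the archive; the general shape of a ceph-volume
debug run looks something like this (an assumption on my part, not the poster's
exact command):

    sudo sh -c 'CEPH_VOLUME_DEBUG=1 ceph-volume lvm list' 2>&1 | tee ceph-volume-debug.log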
I have a server with 12 OSDs on it. Five of them are unable to start, and give
the following error message in their logs:
2020-01-28 13:00:41.760 7f61fb490c80 0 monclient: wait_auth_rotating timed out
after 30
2020-01-28 13:00:41.760 7f61fb490c80 -1 osd.178 411005 unable to obtain
rotating
On 1/28/20 6:58 PM, Anthony D'Atri wrote:
>
>
>> I did this once. This cluster was running IPv6-only (still is) and thus
>> I had the flexibility of new IPs.
>
> Dumb question — how was IPv6 a factor in that flexibility? Was it just that
> you had unused addresses within an existing block?
>
On 1/28/20 7:03 PM, David DELON wrote:
> Hi,
>
> I had a problem with one application (Seafile) which uses a Ceph backend with
> librados.
> The corresponding pools are defined with size=3 and each object copy is on a
> different host.
> The cluster health is OK: all the monitors see all the
https://www.mail-archive.com/ceph-users@ceph.io/
https://www.mail-archive.com/ceph-users@lists.ceph.com/
-Original Message-
Sent: 28 January 2020 16:32
To: ceph-users@ceph.io
Subject: [ceph-users] No Activity?
All;
I haven't had a single email come in from the ceph-users list at cep
After upgrading one of our clusters from Luminous 12.2.12 to Nautilus 14.2.6, I
am seeing 100% CPU usage by a single ceph-mgr thread (found using 'top -H').
We noticed this because Prometheus was unable to report certain pieces of
data, specifically OSD Usage, OSD Apply and Commi
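For reference, the kind of inspection described above; the commands are generic,
not copied from the original post, and the port assumes the default prometheus
module setting:

    top -H -p "$(pidof ceph-mgr)"
    ceph mgr module ls
    curl -s http://<mgr-host>:9283/metrics | head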
Quoting Stefan Kooman (ste...@bit.nl):
> Hi,
>
> The command "ceph daemon mds.$mds perf dump" does not give the
> collection with MDS specific data anymore. In Mimic I get the following
> MDS specific collections:
>
> - mds
> - mds_cache
> - mds_log
> - mds_mem
> - mds_server
> - mds_sessions
>
On Wed, Jan 29, 2020 at 7:33 AM Stefan Kooman wrote:
>
> Quoting Stefan Kooman (ste...@bit.nl):
> > Hi,
> >
> > The command "ceph daemon mds.$mds perf dump" does not give the
> > collection with MDS specific data anymore. In Mimic I get the following
> > MDS specific collections:
> >
> > - mds
> >
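A quick way to see which perf collections a given MDS actually exposes (the mds
name and the use of jq are illustrative):

    ceph daemon mds.<name> perf schema | jq 'keys'
    ceph daemon mds.<name> perf dump mds_cache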