On 04.08.20 14:19, Jason Dillaman wrote:
On Tue, Aug 4, 2020 at 2:12 AM Georg Schönberger
wrote:
On 03.08.20 14:56, Jason Dillaman wrote:
On Mon, Aug 3, 2020 at 4:11 AM Georg Schönberger
wrote:
Hey Ceph users,
we are currently facing some serious problems on our Ceph Cluster with
libvirt (KVM), RBD device [...]
Please help me enable the Ceph iSCSI gateway on Ceph Octopus. When I finish
installing Ceph, I see that the iSCSI gateway is not enabled. Please help me
configure it.
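The gateway is not something that is simply switched on; it is deployed as its
own service. On an Octopus cluster managed by cephadm, a sketch along these
lines should bring it up (pool name, credentials and placement hosts below are
placeholders, and the exact argument order of "ceph orch apply iscsi" should be
double-checked against your release):
# ceph osd pool create iscsi-pool
# rbd pool init iscsi-pool
# ceph orch apply iscsi iscsi-pool admin secretpassword --placement="gw1 gw2"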
Hi all,
Can somebody point me to the timeout parameter for the rados_connect
function? When the monitors are not available it hangs indefinitely.
Daniel Mezentsev, founder
(+1) 604 313 8592.
Soleks Data Group.
Shaping the clouds.
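For what it's worth, the option that usually governs this is
client_mount_timeout (how long librados waits for a monitor before giving up).
As a rough sketch it can be set in ceph.conf as below, or programmatically with
rados_conf_set(cluster, "client_mount_timeout", "10") before calling
rados_connect(); the 10-second value is only an example:
[client]
    client_mount_timeout = 10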
Hi:
I am using ceph nautilus with CentOS 7.6 and working on adding a pair of
iscsi gateways in our cluster, following the documentation here:
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
I was in the "Configuring" section, step #3, "Create the iSCSI gateways",
and ran into problems. [...]
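For context, that step boils down to creating the target IQN and the gateway
entries inside gwcli. A rough sketch using the placeholder names from the
documentation (your IQN, hostnames and IPs will differ):
# gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
/iscsi-target...-igw/gateways> create ceph-gw-1 10.172.19.21
/iscsi-target...-igw/gateways> create ceph-gw-2 10.172.19.22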
Long-running cluster, currently running 14.2.6.
I have a certain user whose buckets have become corrupted, in that the
following commands:
radosgw-admin bucket check --bucket=<bucket-name>
radosgw-admin bucket list --bucket=<bucket-name>
return with the following:
ERROR: could not init bucket: (2) No such file or directory
Hi
I am trying to delete a bucket using the following command:
# radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects
However, in the console I get the following messages, about 100+ of them per
second:
2020-08-04T17:11:06.411+0100 7fe64cacf080 1 RGWRados::Bucket::List::list_objects_or[...]
If by "monitor log" you mean the cluster log /var/log/ceph/ceph.log, I should
have all of it. Please find a tgz-file here:
https://files.dtu.dk/u/tFCEZJzQhH2mUIRk/logs.tgz?l (valid 100 days).
Contents:
logs/ceph-2020-08-03.log - cluster log for the day of restart
logs/ceph-osd.145.2020-08-03.
Hi Eric,
thanks for the clarification, I did misunderstand you.
> You should not have to move OSDs in and out of the CRUSH tree however
> in order to solve any data placement problems (This is the baffling part).
Exactly. Should I create a tracker issue? I think this is not hard to reproduce
with [...]
Hi Erik,
I added the disks and started the rebalancing. When I ran into the issue, ca. 3
days after the start of rebalancing, it was about 25% done. The cluster does not go
to HEALTH_OK before the rebalancing is finished, it shows the "xxx objects
misplaced" warning. The OSD crush locations for the
Hi Vladimir,
What Kingston SSD model?
On 4/8/20 at 12:22, Vladimir Prokofev wrote:
Here's some more insight into the issue.
Looks like the load is triggered by a snaptrim operation. We have a
backup pool that serves as OpenStack cinder-backup storage, performing
snapshot backups every night. [...]
Hi Eric,
> Have you adjusted the min_size for pool sr-rbd-data-one-hdd
Yes. For all EC pools located in datacenter ServerRoom, we currently set
min_size=k=6, because we lack physical servers. Hosts ceph-21 and ceph-22 are
logical but not physical; disks in these buckets are co-located such that
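For reference, a minimal sketch of how that setting is inspected and applied
(the pool name is taken from this thread; the value simply mirrors the
min_size=k=6 described above):
# ceph osd pool get sr-rbd-data-one-hdd min_size
# ceph osd pool set sr-rbd-data-one-hdd min_size 6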
Ah, really good question :)
I believe it is stored locally on the monitor host. Saving the cluster map into
RADOS would result in a chicken-and-egg problem.
This is supported by the following two sections in the docs:
1.
https://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/#bac
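If you want to see the copy the monitors are actually serving, it can be
pulled from the cluster and decompiled; a quick sketch (file names are
arbitrary):
# ceph osd getcrushmap -o crush.bin
# crushtool -d crush.bin -o crush.txt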
Hi,
I've been tasked with moving Jewel clusters to Nautilus. After the final
upgrade, Ceph health warns about legacy tunables. On clusters running SSDs
I enabled the optimal flag, which took weeks to chug through remappings. My
remaining clusters run HDDs. Does anyone have experience with using [...]
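For reference, the step in question is a single command, but the same caveat
applies on HDD clusters: the resulting remapping can take a long time. A
sketch; run it only when you are prepared for the data movement:
# ceph osd crush tunables optimal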
Thanks Michael. I will try it. Cheers
Andrei
- Original Message -
> From: "Michael Fladischer"
> To: "ceph-users"
> Sent: Tuesday, 4 August, 2020 08:51:52
> Subject: [ceph-users] Re: Module crash has failed (Octopus)
> Hi Andrei,
>
> On 03.08.2020 at 16:26, Andrei Mikhailovsky wrote
On Tue, Aug 4, 2020 at 2:12 AM Georg Schönberger
wrote:
>
> On 03.08.20 14:56, Jason Dillaman wrote:
> > On Mon, Aug 3, 2020 at 4:11 AM Georg Schönberger
> > wrote:
> >> Hey Ceph users,
> >>
> >> we are currently facing some serious problems on our Ceph Cluster with
> >> libvirt (KVM), RBD device
Do you have any monitor / OSD logs from the maintenance when the issues
occurred?
Original message
From: Frank Schilder
Date: 8/4/20 8:07 AM (GMT-05:00)
To: Eric Smith , ceph-users
Subject: Re: Ceph does not recover from OSD restart
Hi Eric,
thanks for the clarification, I
All seems in order in terms of your CRUSH layout. You can speed up the
rebalancing / scale-out operations by increasing osd_max_backfills on each
OSD (especially during off-hours). The unnecessary degradation is not expected
behavior for a cluster in HEALTH_OK status, but with backfill / re[...]
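For reference, a sketch of how that is usually bumped at runtime (the value 4
is just an example; the default is 1 and it can be lowered again once backfill
is done):
# ceph config set osd osd_max_backfills 4
# ceph tell 'osd.*' injectargs '--osd_max_backfills 4'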
I really would not focus that much on a particular device model.
Yes, Kingston SSDs are slower for reads; we have known that since we tested them.
But that was before they were used as block.db devices; they were first
intended purely as block.wal devices. This was even before BlueStore,
actually, so their [...]
Thank you Gregor for the reply. I have read that page. It does say what a CRUSH
map is and how it's used by monitors and OSDs, but it does not say how or where
the map is stored in the system. Is it replicated on all OSDs, via a distributed
hidden pool? Is it stored on the local Linux disk of the host [...]
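For what it's worth, on a default package-based installation the monitor keeps
its copy of the cluster maps in its own local key/value store rather than in a
RADOS pool; the stock layout puts it under a path like the one below (cluster
name and hostname vary, and containerized deployments differ):
/var/lib/ceph/mon/ceph-<hostname>/store.db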
> What Kingston SSD model?
=== START OF INFORMATION SECTION ===
Model Family: SandForce Driven SSDs
Device Model: KINGSTON SE50S3100G
Serial Number:
LU WWN Device Id:
Firmware Version: 611ABBF0
User Capacity:    100,030,242,816 bytes [100 GB]
Sector Size: [...]
All seems in order then - when you ran into your maintenance issue, how long
was it after you added the new OSDs, and did Ceph ever get to HEALTH_OK so it
could trim PG history? Also did the OSDs just start back up in the wrong place
in the CRUSH tree?
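In that situation, two quick things to check are whether the OSDs came back
under the intended host buckets and whether they are allowed to move
themselves on startup; a sketch (osd.145 is just the OSD id from the log file
named earlier in this thread, substitute any affected OSD, and run the second
command on that OSD's host):
# ceph osd tree
# ceph daemon osd.145 config get osd_crush_update_on_start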
-Original Message-
From: Frank Schilder
Have you adjusted the min_size for pool sr-rbd-data-one-hdd at all? Also can
you send the output of "ceph osd erasure-code-profile ls" and for each EC
profile, "ceph osd erasure-code-profile get <profile>"?
-Original Message-
From: Frank Schilder
Sent: Monday, August 3, 2020 11:05 AM
To: Eric Smith
Here's some more insight into the issue.
Looks like the load is triggered by a snaptrim operation. We have a
backup pool that serves as OpenStack cinder-backup storage, performing
snapshot backups every night. Old backups are also deleted every night, so
snaptrim is initiated.
This snaptrim [...]
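Not a fix, but for the record the usual knobs for throttling snaptrim are the
ones below; whether they help in this case is an assumption, and the values are
examples only (defaults differ between releases):
# ceph config set osd osd_snap_trim_sleep 1
# ceph config set osd osd_pg_max_concurrent_snap_trims 1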
Good day, cephers!
We've recently upgraded our cluster from the 14.2.8 to the 14.2.10 release, also
performing a full system package upgrade (Ubuntu 18.04 LTS).
After that, performance dropped significantly, the main reason being that the
journal SSDs now have no merges, huge queues, and increased latency.
There [...]
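Since "no merges" is a block-layer symptom rather than a Ceph one, one
low-level thing worth comparing before and after the OS/kernel upgrade is the
I/O scheduler and merge settings on the journal devices; a sketch (sdX is a
placeholder for a journal SSD):
# cat /sys/block/sdX/queue/scheduler
# cat /sys/block/sdX/queue/nomerges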
Is it already possible to save a description when creating an RBD
snapshot?
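Not for the snapshot itself, as far as I know. As a rough workaround sketch
(this attaches metadata to the image, not to the snapshot, and the key/value
below are made up):
# rbd snap create mypool/myimage@backup-20200804
# rbd image-meta set mypool/myimage snap-desc.backup-20200804 "state before upgrade"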
Hi Andrei,
On 03.08.2020 at 16:26, Andrei Mikhailovsky wrote:
Module 'crash' has failed: dictionary changed size during iteration
I had the same error after upgrading to Octopus and I fixed it by
stopping all MGRs, removing /var/lib/ceph/crash/posted on all MGR nodes
(make a backup copy on [...]
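Roughly, the sequence described above in command form (a sketch; keep the
backup copy, run the stop/start on every MGR node, and note that recreating an
empty directory with ceph ownership is my own addition, not from the quoted
message):
# systemctl stop ceph-mgr.target
# mv /var/lib/ceph/crash/posted /var/lib/ceph/crash/posted.bak
# install -d -o ceph -g ceph /var/lib/ceph/crash/posted
# systemctl start ceph-mgr.target
# ceph crash ls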