Hi,
You should crush reweight this OSD (sde) to zero, and Ceph will remap all PGs to
other OSDs; once it has drained you can replace the drive.
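For example, a minimal sketch (assuming the failing sde device maps to osd.15, as in the original message):

  # gradually drain the OSD by setting its CRUSH weight to zero
  ceph osd crush reweight osd.15 0
  # watch recovery/backfill until the OSD no longer holds any PGs
  ceph -s
  ceph osd df tree | grep osd.15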
k
Sent from my iPhone
> On 29 Apr 2021, at 06:00, Lomayani S. Laizer wrote:
>
> Any advice on this? I am stuck because one VM is not working now. Looks t
On Tue, Apr 27, 2021 at 11:55:15AM -0400, Gavin Chen wrote:
> Hello,
>
> We’ve got some issues when uploading S3 objects with a double slash //
> in the name, and we were wondering if anyone else has observed this issue
> when uploading objects to the radosgw?
>
> When connecting to the cluster to up
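(To reproduce, a minimal sketch using the awscli S3 client; the endpoint, bucket and key names are placeholders, not taken from the report above.)

  # upload an object whose key contains a double slash to the RGW S3 endpoint
  aws --endpoint-url http://rgw.example.com:8080 \
      s3api put-object --bucket testbucket --key 'prefix//object.txt' --body ./object.txt
  # list the bucket to see how the key was stored
  aws --endpoint-url http://rgw.example.com:8080 \
      s3api list-objects --bucket testbucket --prefix 'prefix/'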
Hello,
Any advice on this? I am stuck because one VM is not working now. It looks like
there is a read error in the primary OSD (15) for this PG. Should I mark osd.15
down or out? Is there any risk in doing this?
Apr 28 20:22:31 ceph-node3 kernel: [369172.974734] sd 0:2:4:0: [sde]
tag#358 CDB: Read(16) 88 00 00
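(A few read-only checks that may help narrow this down; a sketch, with the device and OSD id taken from the messages above and the PG id left as a placeholder.)

  # check the physical health of the device behind osd.15
  smartctl -a /dev/sde
  # list the PGs for which osd.15 is primary and inspect the affected one
  ceph pg ls-by-primary osd.15
  ceph pg <pgid> query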
Hello everyone,
I am running ceph version 15.2.8 on Ubuntu servers. I am using BlueStore OSDs
with data on HDDs and DB and WAL on SSD drives. Each SSD has been partitioned
such that it holds 5 DBs and 5 WALs. The SSDs were prepared a while back,
probably when I was running ceph 13.x. I have
Hello,
Last week I upgraded my production cluster to Pacific. The cluster was
healthy until a few hours ago.
A scrub that ran 4 hours ago left the cluster in an inconsistent state. I then
issued the command ceph pg repair 7.182 to try to repair the cluster, but it
ended with active+recovery_unfound+degraded
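(For reference, a minimal sketch of the commands usually involved with unfound objects; the PG id is taken from the message above, and mark_unfound_lost discards or rolls back data, so it is a last resort.)

  # show which objects are unfound and which OSDs might still hold them
  ceph health detail
  ceph pg 7.182 list_unfound
  ceph pg 7.182 query
  # last resort only: give up on the unfound objects
  ceph pg 7.182 mark_unfound_lost revert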
Hi,
when specifying the db device you should use --block.db VG/LV not /dev/VG/LV
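For example, a sketch only (the data device and the VG/LV names are placeholders):

  ceph-volume lvm create --bluestore \
      --data /dev/sdd \
      --block.db ceph-db-vg/db-lv-0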
Quoting Andrei Mikhailovsky:
Hello everyone,
I am running ceph version 15.2.8 on Ubuntu servers. I am using
bluestore osds with data on hdd and db and wal on ssd drives. Each
ssd has been partitioned such
hello all,
I faced an incident with one of my very important RBD volumes, holding 5 TB of
data, which is managed by OpenStack.
I was about to increase the volume size live, but I unintentionally shrank the
volume by running a wrong "virsh qemu-monitor-command" command.
Then I realized it and expand
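(For reference, a sketch of inspecting and re-growing the image with rbd; the pool and image names are placeholders, and growing the image back only restores its size, not any data removed by the shrink.)

  # check the current size and object count of the image
  rbd info volumes/volume-<uuid>
  # grow it back to the intended size (5 TB here)
  rbd resize --size 5T volumes/volume-<uuid>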
Hello Anthony,
it was introduced in octopus 15.2.10
See: https://docs.ceph.com/en/latest/releases/octopus/
Do you know how you would set it in pacific? :)
I guess there shouldn't be much difference...
Thank you
Mehmet
On 28 April 2021 19:21:19 CEST, Anthony D'Atri wrote:
>I think that’s new
To complement the information: I'm using Mimic (13.2) on the cluster. I
noticed that during the PG repair process the entire cluster was extremely
slow; however, there was no overhead on the OSD nodes. The load of these
nodes, which in normal production is between 10.00 and 20.00, was less than
5. W
On Sun, Apr 25, 2021 at 11:42 AM Ilya Dryomov wrote:
>
> On Sun, Apr 25, 2021 at 12:37 AM Markus Kienast wrote:
> >
> > I am seeing these messages when booting from RBD and booting hangs there.
> >
> > libceph: get_reply osd2 tid 1459933 data 3248128 > preallocated
> > 131072, skipping
> >
> > Ho
Hey all!
rbd export images/ec8a7ff8-6609-4b7d-8bdd-fadcf3b7973e /root/foo.img
DOES NOT produce the target file.
No matter whether I use the --pool/--image form or the one above, the target
file is not there.
The progress bar shows up and prints percentages, and it ends with exit 0.
[root@controller-0 mnt]# rbd
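(A quick cross-check, as a sketch, using the pool and image from the command above: stream the export to stdout and count the bytes, then retry the export to a file and look for it.)

  # stream the export to stdout and count the bytes actually produced
  rbd export --pool images --image ec8a7ff8-6609-4b7d-8bdd-fadcf3b7973e - | wc -c
  # retry the export to a file and verify it exists
  rbd export --pool images --image ec8a7ff8-6609-4b7d-8bdd-fadcf3b7973e /root/foo.img
  ls -lh /root/foo.img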
Hi,
On a daily basis, one of my monitors goes down
[root@cube ~]# ceph health detail
HEALTH_WARN 1 failed cephadm daemon(s); 1/3 mons down, quorum
rhel1.robeckert.us,story
[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
daemon mon.cube on cube.robeckert.us is in error state
[WRN] MON_
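(For reference, a sketch of restarting the failed daemon and pulling its logs with cephadm; the daemon name is taken from the health output above.)

  # ask the orchestrator to restart the failed monitor
  ceph orch daemon restart mon.cube
  # on the host running it, inspect the daemon's journal
  cephadm logs --name mon.cube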
Hi,
What we have found seems to be a blocking issue when I terminate HTTPS on a
loadbalancer and use plain HTTP between the loadbalancer and the RGW. So it
seems like the SSL termination has to be done on the RGW and can't be done on
the loadbalancer? Or does anyone have an idea how we can work around it?
Here
I don't think there is a way around that; the RGW code does not allow
user/password on a non-SSL transport.
What is the issue with SSL between the balancer and the RGW?
If you have issues with self-signed certificates, maybe there is a way on
the balancer to not verify them?
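For example, if the balancer is HAProxy (an assumption, the original message does not name it), re-encrypting to the RGW without verifying a self-signed certificate could look roughly like this (addresses and port are placeholders, and the RGW frontend must be listening with SSL):

  backend rgw_backend
      balance roundrobin
      # talk TLS to the RGW but skip certificate verification (self-signed cert)
      server rgw1 192.168.10.11:443 ssl verify none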
On Wed, Apr 28, 2021 at
Hi,
Recently, protection against infinite BlueFS log growth was added [1], and I hit
an assert on 14.2.19:
/build/ceph-14.2.19/src/os/bluestore/BlueFS.cc: 2404: FAILED
ceph_assert(bl.length() <= runway)
Then the OSD died. Should I open a tracker issue (or maybe one already exists?),
and are logs of interest for this case?
[1] https://
Hi,
Yes, if I set "show_image_direct_url" to false, creation of volumes
from images works fine.
But creation takes much more time, because the data is moved out of and
back into the ceph cluster instead of using the snapshot and copy-on-write
approach.
All documentation recommends that "show_image_direct_url" be set to true.
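(For reference, the setting lives in glance-api.conf; a minimal sketch, with glance-api restarted afterwards.)

  [DEFAULT]
  show_image_direct_url = True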
Thanks, Eugen, for your quick answer!
Yes, if I set "show_image_direct_url" to false, creation of volumes from
images works fine.
But creation takes much more time, because the data is moved out of and back
into the ceph cluster instead of using the snapshot and copy-on-write approach.
All documentation recommends "show_i
Dear Cephers,
I encountered a strange issue when using rbd map (Luminous 12.2.13): rbd map
does not always fail, but it does occasionally, with the following dmesg:
[16818.70] module libceph: Relocation (type 6) overflow vs section 4
[16857.46] module libceph: Relocation (type 6) overflow vs
Hello,
I have an Octopus cluster and want to change some values, but I cannot find
any documentation on how to set multiple values with
bluestore_rocksdb_options_annex.
Could someone give me some examples?
I would like to do this with ceph config set ...
Thanks in advance
Mehmet
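(A sketch of the general form, not an authoritative answer: the value is a single comma-separated string of key=value pairs appended to bluestore_rocksdb_options; the specific RocksDB keys below are placeholders, and the OSDs need a restart for them to take effect.)

  # set several options at once as one comma-separated string
  ceph config set osd bluestore_rocksdb_options_annex "max_background_jobs=4,compaction_readahead_size=2097152"
  # verify what was stored
  ceph config get osd bluestore_rocksdb_options_annex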