Thanks Paul,
Yes, listing v2 is not supported yet. I checked the metadata OSDs and all of
them are 600 GB 10k HDDs, so I don't think this was the issue.
I will test the --allow-unordered option.
Regards
From: Paul Emmerich
Sent: Thursday, October 17, 2019 10:00 AM
To: Arash Shams
C
Hi Matthew,
that's normal because the session is not authenticated on the failover
manager/dashboard.
Regards
Volker
On 01.10.19 at 19:53, Matthew Stroud wrote:
>
> For some reason the active MGR process just resets the connection
> after failover. Nothing really sticks out in the logs to expla
I am trying to install Ceph on Ubuntu 16.04 following this link:
https://www.supportsages.com/ceph-part-5-ceph-configuration-on-ubuntu/
but when I run this command:
#ceph-deploy install ceph-deploy monnode1 osd0 osd1
I am facing this error:
[ceph-deploy][WARNIN] E: Sub-process /usr/bin/dpkg returned
On Mon, 21 Oct 2019 at 13:15, masud parvez wrote:
> I am trying to install Ceph on Ubuntu 16.04 following this link:
> https://www.supportsages.com/ceph-part-5-ceph-configuration-on-ubuntu/
>
>
It's kind of hard to support someone else's documentation; you should really
have started by contacting them.
On Mon, Oct 21, 2019 at 11:20 AM Arash Shams wrote:
> Yes, listing v2 is not supported yet. I checked the metadata OSDs and all of
> them are 600 GB 10k HDDs, so I don't think this was the issue.
> I will test the --allow-unordered option.
5 million objects in a single bucket and metadata on HDD is a disaster
waiting to happen.
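For context, two read-only commands (bucket name below is just a placeholder)
show how many objects and index shards a bucket currently has, which is what
makes HDD-backed index omaps painful at this scale:

  # object count and index shard count for the bucket
  radosgw-admin bucket stats --bucket=mybucket
  # warns when any bucket exceeds the configured objects-per-shard threshold
  radosgw-admin bucket limit check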
Hi,
I have a working RBD Mirror Setup using ceph version 14.2.4 on both sides.
I want to have a clone of a non-primary image.
I do it this way (a command-level sketch of these steps follows the list):
1. create snapshot of primary image
2. wait for the snapshot to appear on the backup cluster
3. create a clone in backup cluster (using Simplified RBD Im
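A minimal sketch of the above, assuming made-up pool/image/snapshot names and
clone v2 ("simplified" image cloning, so no snap protect step is needed):

  # 1. on the primary cluster: create the snapshot
  rbd snap create rbd/vm-disk@clone-src
  # 2. on the backup cluster: poll until rbd-mirror has replayed the snapshot
  rbd --cluster backup snap ls rbd/vm-disk
  # 3. on the backup cluster: clone from the mirrored (non-primary) snapshot
  rbd --cluster backup clone rbd/vm-disk@clone-src rbd/vm-disk-clone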
Hi Cephers,
I'm pleased to announce that we're starting Ceph Tech Talks back up
(unfortunately probably the last one for this year due to holidays).
On October 24th at 17:00 UTC, Kevin Hrpcek will be presenting on Ceph at
NASA, and why they use librados instead of higher-level features.
For infor
I have an RGW index pool that is alerting as "large" on 2 of the 3 OSDs in
the PG. The primary has a large omap. The index is definitely in use by the
bucket. Any opinions on the best way to solve this? (A few diagnostic
commands are sketched after the options below.)
1. Remove the 2 OSDs with the large index from the cluster and rebalance?
2. Delete 2 of the 3 and deep
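A few read-only checks (the PG id below is a placeholder) that confirm
exactly which object tripped the warning before removing anything:

  # which pool/PGs currently carry the large-omap warning
  ceph health detail
  # the cluster log names the exact omap object that exceeded the threshold
  ceph log last 1000 | grep -i 'large omap'
  # recount omap keys/bytes on the affected PG
  ceph pg deep-scrub 5.1f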
Hello Everyone,
My OSD broke recently; the first 8 MB of the device has been wiped clean.
Since I used ceph-volume to create the BlueStore OSD with the WAL, DB, and
slow data all on one disk, I lost the superblock.
Thanks to the LVM backup, I saved the BlueStore superblock, but I
can't get the BlueFS superblock
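(For anyone in a similar spot, what is left on the device can be inspected
with ceph-bluestore-tool; the device and OSD paths below are placeholders:)

  # print the BlueStore label stored at the start of the device
  ceph-bluestore-tool show-label --dev /dev/ceph-vg/osd-block-0
  # consistency check of the OSD, including BlueFS metadata
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0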
Thanks for responding.
It isn’t a session issue, because the port is closed. It wouldn’t bother me if
I had to log in again.
Thanks,
Matthew Stroud
On Oct 21, 2019, at 3:25 AM, Volker Theile wrote:
Hi Matthew,
that's normal because the session is not authenticated on the failover
manager/
On Mon, Oct 21, 2019 at 8:03 AM wrote:
>
> Hi,
>
> I have a working RBD Mirror Setup using ceph version 14.2.4 on both sides.
> I want to have a clone of a non-primary image.
>
> I do it this way:
>
> 1. create snapshot of primary image
> 2. wait for the snapshot to appear on the backup cluster
> 3
Hi again
I've managed to simplify this. I think it only affects empty directories. It
is still non-deterministic: ceph.dir.rctime will be set correctly between 30%
and 80% of the time; the rest of the time it will be the same as the
directory's original mtime.
#!/bin/bash
source="/teraraid4/toby/
Hi all,
it seems Ceph on Ubuntu Disco (19.04) with the most recent kernel
5.0.0-32 is unstable. It crashes sometimes after a few hours, sometimes
even after a few minutes. I found this bug here on CoreOS:
https://github.com/coreos/bugs/issues/2616
That is exactly the error message I also get ("
On Mon, Oct 21, 2019 at 5:09 PM Ranjan Ghosh wrote:
>
> Hi all,
>
> it seems Ceph on Ubuntu Disco (19.04) with the most recent kernel
> 5.0.0-32 is unstable. It crashes sometimes after a few hours, sometimes
> even after a few minutes. I found this bug here on CoreOS:
>
> https://github.com/coreos
Hi Ilya,
thanks for your answer - really helpful! We were so desperate today due
to this bug that we downgraded to -23. But it's very good to know that
-31 doesn't contain this bug and we could safely update back to this release.
If a new version (say -33) is released: how/where can I find out if
On Mon, Oct 21, 2019 at 6:12 PM Ranjan Ghosh wrote:
>
> Hi Ilya,
>
> thanks for your answer - really helpful! We were so desperate today due
> to this bug that we downgraded to -23. But it's very good to know that
> -31 doesn't contain this bug and we could safely update back to this release.
>
> I
Understood. Perfect. Thanks again for all the information!
BR
Ranjan
On 21.10.19 at 19:07, Ilya Dryomov wrote:
> On Mon, Oct 21, 2019 at 6:12 PM Ranjan Ghosh wrote:
>> Hi Ilya,
>>
>> thanks for your answer - really helpful! We were so desperate today due
>> to this bug that we downgraded to -2
Hi all,
Has anyone successfully created multiple partitions on an NVMe device
using ceph-disk?
If so, which commands were used?
Hi Frank
We use such a setup on our Nautilus cluster. I manually partitioned the NVMe
drive into 8 equally sized partitions with fdisk (and saved the partition
layout to a file for later reference). You can then create OSDs with
> ceph-volume lvm create --bluestore --data /dev/sd --block.db
> /dev
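(The quoted command is cut off; written out with placeholder device names,
one such OSD would be created roughly like this:)

  # data on a whole HDD, RocksDB + WAL on one of the NVMe partitions
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1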