Sure, I have created one at https://tracker.ceph.com/issues/38284
Thanks
On Wed, Feb 13, 2019 at 12:02 AM Lenz Grimmer wrote:
> Hi Ashley,
>
> On 2/9/19 4:43 PM, Ashley Merrick wrote:
>
> > Any further suggestions, should I just ignore the error "Failed to load
> > ceph-mgr modules: telemetry" or
Hi cephers, I am building a Ceph EC cluster. When a disk fails, I mark its OSD out, but all of its PGs remap to other OSDs in the same host, whereas I think they should remap to other hosts in the same rack. The test process is: ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2 site1_sata_
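A minimal sketch of forcing the failure domain to rack level in the erasure-code profile (the profile name here is an assumption, not the one from the original post):

  # EC profile whose generated CRUSH rule picks one OSD per rack (k=4, m=2, ISA plugin)
  ceph osd erasure-code-profile set isa-4-2-rack k=4 m=2 plugin=isa crush-failure-domain=rack
  ceph osd pool create .rgw.buckets.data 8192 8192 erasure isa-4-2-rack
  # check that the rule the pool picked up really chooses buckets of type "rack"
  ceph osd crush rule ls
  ceph osd crush rule dump <rule-name>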
On Tue, Feb 12, 2019 at 5:10 AM Hector Martin wrote:
> On 12/02/2019 06:01, Gregory Farnum wrote:
> > Right. Truncates and renames require sending messages to the MDS, and
> > the MDS committing to RADOS (aka its disk) the change in status, before
> > they can be completed. Creating new files wil
Hi,
I've got a situation where I need to split a Ceph cluster into two.
This cluster is currently running a mix of RBD and RGW and in this case
I am splitting it into two different clusters.
It's a difficult thing to do, but possible.
One problem that remains, though, is that after the split both C
Hi Ashley,
On 2/9/19 4:43 PM, Ashley Merrick wrote:
> Any further suggestions, should I just ignore the error "Failed to load
> ceph-mgr modules: telemetry" or is this my root cause for no realtime
> I/O readings in the Dashboard?
I don't think this is related. If you don't plan to enable the t
Hello, Michel Raabe!
On that day you wrote...
> Have you changed/add the journal_uuid from the old partition?
> https://ceph.com/geen-categorie/ceph-recover-osds-after-ssd-journal-failure/
root@blackpanther:~# ls -la /var/lib/ceph/osd/ceph-15
total 56
drwxr-xr-x 3 root root 199 nov 21 2
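A rough sketch of the procedure that article describes, assuming OSD 15's journal sat on a failed SSD partition (device and partition numbers are placeholders):

  # the journal symlink and the recorded journal_uuid should point at the same partition
  ls -l /var/lib/ceph/osd/ceph-15/journal
  cat /var/lib/ceph/osd/ceph-15/journal_uuid
  # after recreating the partition on the replacement SSD, give it the old GUID so the symlink resolves
  sgdisk --partition-guid=1:<old-journal-uuid> /dev/sdX
  # rebuild an empty journal and bring the OSD back up
  ceph-osd -i 15 --mkjournal
  systemctl start ceph-osd@15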
On 2/9/19 5:40 PM, Brad Hubbard wrote:
> On Sun, Feb 10, 2019 at 1:56 AM Ruben Rodriguez wrote:
>>
>> Hi there,
>>
>> Running 12.2.11-1xenial on a machine with 6 SSD OSDs with BlueStore.
>>
>> Today we had two disks fail out of the controller, and after a reboot
>> they both seemed to come back f
On 11/02/2019 18:52, Yan, Zheng wrote:
how about directly reading the backtrace, something equivalent to:
rados -p cephfs1_data getxattr xxx. parent >/tmp/parent
ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json
Where xxx is just the hex inode from stat(), ri
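Putting the two commands together, a sketch of the whole lookup (the file path is a placeholder; <hex-inode>.00000000 is the usual naming of a CephFS file's first data object):

  ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/path/to/file)")   # hex inode of the file
  rados -p cephfs1_data getxattr "${ino_hex}.00000000" parent > /tmp/parent
  ceph-dencoder import /tmp/parent type inode_backtrace_t decode dump_json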
On 12/02/2019 06:01, Gregory Farnum wrote:
Right. Truncates and renames require sending messages to the MDS, and
the MDS committing to RADOS (aka its disk) the change in status, before
they can be completed. Creating new files will generally use a
preallocated inode so it's just a network round
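A quick way to see that difference on an existing CephFS mount (the mount point is a placeholder; only the relative timings matter):

  cd /mnt/cephfs/testdir
  time for i in $(seq 100); do touch new_$i; done           # creates: preallocated inodes, roughly one network round trip each
  time for i in $(seq 100); do mv new_$i renamed_$i; done   # renames: each one waits for the MDS to commit the change to RADOS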
Will there be much difference in performance between EC and
replicated? Thanks.
Hope we can do more testing on EC before the deadline of our first
production Ceph...
In general, yes, there will be a difference in performance. Of course
it depends on the actual configuration, but if you rely on p
Hi,
Thanks. As the power supply to one of our server rooms is not so stable, we will
probably use size=4, min_size=2 to prevent data loss.
> If the overhead is too high could EC be an option for your setup?
Will there be much difference in performance between EC and replicated? Thanks.
Hope can do
Hello - I have a couple of questions on Ceph cluster stability, even though
we follow all the recommendations below:
- Having separate replication n/w and data n/w (a minimal ceph.conf sketch follows after this message)
- RACK is the failure domain
- Using SSDs for journals (1:4 ratio)
Q1 - If one OSD goes down, cluster I/O drops drastically and customer apps are impacted
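For reference, the network split from the first bullet is normally just two ceph.conf options (the subnets are placeholders):

  [global]
      public network  = 192.168.10.0/24   # client / data traffic
      cluster network = 192.168.20.0/24   # replication and recovery traffic between OSDs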
Hi,
I came to the same conclusion after doing various tests with rooms and
failure domains. I agree with Maged and suggest to use size=4,
min_size=2 for replicated pools. It's more overhead but you can
survive the loss of one room and even one more OSD (of the affected
PG) without losing
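A minimal sketch of that suggestion, assuming a replicated pool named rbd and a CRUSH hierarchy with two room buckets (all names are placeholders):

  ceph osd pool set rbd size 4        # four replicas in total
  ceph osd pool set rbd min_size 2    # keep serving I/O while at least two replicas remain

  # CRUSH rule placing two of the four copies in each room (added to a decompiled CRUSH map)
  rule replicated_two_rooms {
      id 10
      type replicated
      min_size 4
      max_size 4
      step take default
      step choose firstn 2 type room
      step chooseleaf firstn 2 type host
      step emit
  }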
> Am 12.02.2019 um 00:03 schrieb Patrick Donnelly :
>
> On Mon, Feb 11, 2019 at 12:10 PM Götz Reinicke
> wrote:
>> as 12.2.11 has been out for some days and no panic mails showed up on the list, I
>> was planning to update too.
>>
>> I know there are recommended orders in which to update/upgrade the
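For what it's worth, the documented order for a Luminous point release is monitors first, then managers, OSDs, and finally MDS/RGW; roughly, per host, after upgrading the packages:

  systemctl restart ceph-mon.target      # each monitor host, one at a time
  systemctl restart ceph-mgr.target      # each manager host
  ceph osd set noout                     # avoid rebalancing while OSD hosts restart
  systemctl restart ceph-osd.target      # each OSD host, waiting for HEALTH_OK in between
  ceph osd unset noout
  systemctl restart ceph-mds.target      # MDS (and ceph-radosgw.target for RGW) last
  ceph versions                          # confirm every daemon reports 12.2.11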