Hi Klimenko,
I did a migration from filestore to BlueStore on CentOS 7 with Ceph version
12.2.5.
As it's the production environment, I removed and recreated the OSDs one
server at a time, online.
Although I migrated on CentOS, I created the OSDs manually, so you can give
it a try.
Except one RAID 1 disk for s
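For reference, manually recreating one OSD as BlueStore might look like the
sketch below (the device /dev/sdb and OSD id 5 are hypothetical; the exact
sequence is not from the original message):

  # Take the old filestore OSD out and let the cluster rebalance:
  ceph osd out 5
  # ... wait until all PGs are active+clean again ...
  systemctl stop ceph-osd@5
  ceph osd purge 5 --yes-i-really-mean-it
  # Recreate the OSD on the same device as BlueStore:
  ceph-volume lvm create --bluestore --data /dev/sdb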
On 11/16/18 11:57 AM, Vlad Kopylov wrote:
Exactly. But write operations should go to all nodes.
This can be set via primary affinity [1]: when a Ceph client reads or
writes data, it always contacts the primary OSD in the acting set.
If you want to totally segregate I/O, you can use device classes
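For example (a sketch; osd.2 and the rule name are hypothetical, with the
commands as documented for primary affinity and CRUSH device classes):

  # Never pick osd.2 as the primary for a PG:
  ceph osd primary-affinity osd.2 0
  # Verify in the PRI-AFF column of the tree output:
  ceph osd tree
  # Segregate I/O by device class with a dedicated CRUSH rule:
  ceph osd crush rule create-replicated fast-rule default host ssd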
Thank you very much, Jason. Our cluster's target workload is something like
a data center for a monitoring system; we need to save a lot of video
streams into the cluster. I have to reconsider the test case. Besides, there
are a lot of tests to do on the config parameters you mentioned. This helps
me a lot, thanks.
On Nov 16, 2018, at 12 PM
I am not sure that is going to work, because I have had this error for
quite some time, from before I added the 4th node. On the 3-node cluster
it was:
osdmap e18970 pg 17.36 (17.36) -> up [9,0,12] acting [9,0,12]
If I understand correctly what you intend to do, it moves the data around.
This wa
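For reference, a mapping line like the one quoted above can be reproduced
with the pg map command (pg id taken from that output):

  # Show the up and acting OSD sets for a placement group:
  ceph pg map 17.36
  # -> osdmap e18970 pg 17.36 (17.36) -> up [9,0,12] acting [9,0,12]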
Hi,
Does anyone know if slides/recordings will be available online?
Thanks,
Serkan
Hi Serkan,
On 11/16/18 11:29 AM, Serkan Çoban wrote:
> Does anyone know if slides/recordings will be available online?
Unfortunately, the presentations were not recorded. However, the slides
are usually made available on the corresponding event page,
https://ceph.com/cephdays/ceph-day-berlin/ in
How do you confirm that CephFS files and RADOS objects are being compressed?
I don't see how in the docs.
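One way to check (a sketch, not from the docs; assumes BlueStore OSDs and
admin-socket access on the OSD host, and osd.0 is a hypothetical id):

  # Dump the OSD perf counters and look at the BlueStore compression stats:
  ceph daemon osd.0 perf dump | grep compress
  # bluestore_compressed_original larger than bluestore_compressed_allocated
  # indicates objects are actually being stored compressed.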
Looks similar to a problem I had after several OSDs crashed while
trimming snapshots. In my case, the primary OSD thought the snapshot was
gone, but some of the replicas still had it, so scrubbing flagged it.
First I purged all snapshots and then ran ceph pg repair on the
problematic placement groups.
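The repair step might look like this (a sketch; the pool name and pg id
are hypothetical):

  # List the PGs that scrubbing has flagged as inconsistent:
  rados list-inconsistent-pg <pool>
  # Repair one of the reported PGs:
  ceph pg repair 17.36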
This is what Jean suggested. I understand it, and it works with the primary.
*But what I need is for all clients to access the same files, not separate
sets (like red blue green)*
Thanks Konstantin.
On Fri, Nov 16, 2018 at 3:43 AM Konstantin Shalygin wrote:
> On 11/16/18 11:57 AM, Vlad Kopylov wrote:
>
The difference for 2+2 vs 2x replication isn't in the amount of space being
used or saved, but in the number of OSDs you can safely lose without any
data loss or outages. 2x replication is generally considered very unsafe
for data integrity, but 2+2 is as resilient as 3x replication while
on
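For context, a 2+2 pool can be created like this (a sketch; the profile and
pool names and the PG count are hypothetical):

  # Erasure-code profile with k=2 data and m=2 coding chunks per object,
  # placing each chunk on a different host:
  ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host
  # Pool using that profile:
  ceph osd pool create ecpool 64 64 erasure ec-2-2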
Hi all,
I'm running the ceph tool in interactive mode. However, there's no output.
Does anyone know how to solve it?
jerry@nstcloud:~$ ls -l /etc/ceph/
total 12
-rw------- 1 root root 151 Nov 13 16:50 ceph.client.admin.keyring
-rw-r--r-- 1 root root 232 Nov 13 16:50 ceph.conf
-rw-r--r-- 1 roo
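As a possible workaround (a sketch, not from the thread), the same commands
can be run non-interactively until the fix lands:

  # Instead of starting `ceph` with no arguments and typing commands at the
  # prompt, pass each command directly:
  ceph status
  ceph osd tree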
It's a bug that will be fixed in the next point release.
On Sat, 17 Nov 2018 at 3:38 PM, Liu, Changcheng
wrote:
> Hi all,
>
> I’m running the ceph tool in interactive mode. However, there’s no output.
>
> Does anyone know how to solve it?
>
> *jerry@nstcloud:~$ ls -l /etc/ceph/*
>
> *total 1
Thanks, Ashley Merrick. Does this problem have a bug tracker id?
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Saturday, November 17, 2018 3:41 PM
To: Liu, Changcheng
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph tool in interactive mode: not work
It's a bug that will be f
http://tracker.ceph.com/issues/36358
On Sat, 17 Nov 2018 at 3:43 PM, Liu, Changcheng
wrote:
> Thanks, Ashley Merrick. Does this problem have a bug tracker id?
>
>
>
> *From:* Ashley Merrick [mailto:singap...@amerrick.co.uk]
> *Sent:* Saturday, November 17, 2018 3:41 PM
> *To:* Liu, Changcheng
> *Cc
Thanks. I'm watching this issue.
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Saturday, November 17, 2018 3:47 PM
To: Liu, Changcheng
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph tool in interactive mode: not work
http://tracker.ceph.com/issues/36358
On Sat, 17 No