Re: [ceph-users] OSDs are crashing during PG replication

2016-03-11 Thread Shinobu Kinjo
On Mar 11, 2016 3:12 PM, "Alexander Gubanov" wrote:
>
> Sorry, I didn't have time to answer.
>
> > 1st you said, 2 osds were crashed every time. From the log you pasted,
> > it makes sense to do something for osd.3.
>
> The problem is one PG 3.2. This PG is on osd.3 and osd.16, and these OSDs are both
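
Before deciding what to do with osd.3 or osd.16, it usually helps to look at the PG itself. The commands below are a generic sketch, not something quoted from the thread; PG 3.2 and the OSD ids are simply the ones mentioned above.

# Generic inspection steps; adjust PG and OSD ids for your cluster.
$ ceph pg 3.2 query              # acting/up sets and recovery state of the suspect PG
$ ceph pg dump_stuck unclean     # list any PGs stuck in non-clean states
$ ceph -s                        # overall cluster health while the OSDs flap
$ ceph osd out 3                 # optionally drain osd.3 before deeper surgery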

[ceph-users] Disk usage

2016-03-11 Thread Maxence Sartiaux
Hello, I have a little problem, or I don't understand something. ceph df reports a total of ~5 TB used, but rbd ls lists images totalling only ~1.1 TB. Where are the other ~4 TB used?

$ rbd ls -l
NAME             SIZE PARENT FMT PROT LOCK
vm-105-disk-1  51200M         2
vm-105-disk-2 102400M         2
volume
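
One common contributor to such a gap (a hedged guess, not stated in the mail itself) is that the ceph df totals count raw space across all replicas, while rbd ls -l only shows provisioned image sizes. The commands below are a generic sketch for narrowing it down; the pool name "rbd" is an assumption.

# Generic checks, not taken from the thread; adjust the pool name.
$ ceph df detail                 # logical vs. raw usage per pool
$ ceph osd pool get rbd size     # replication factor: raw use ~= logical use x size
$ rados df                       # object counts and space used per pool
$ rbd du                         # actual vs. provisioned size per image, if your release has it

With a size=3 replicated pool, for example, 1.1 TB of image data already accounts for roughly 3.3 TB of the raw total.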

[ceph-users] CephFS question

2016-03-11 Thread Sándor Szombat
Hi guys! We use Ceph and we need a distributed storage cluster for our files. I checked CephFS, but the documentation says we can only use 1 MDS at this time. Because of HA we need 3 MDS on three master nodes. What is your experience with Ceph
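
The usual setup in this situation (a hedged sketch, not from the thread: hostnames node1..node3 and the config values are placeholders) is one active MDS plus standbys, which gives HA without multiple active ranks:

$ ceph-deploy mds create node1 node2 node3   # one daemon becomes active, the rest standby
$ ceph mds stat                              # should report one active rank and two standbys

# Optional ceph.conf snippet to keep a warm standby following rank 0:
[mds.node2]
    mds standby replay = true
    mds standby for rank = 0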

Re: [ceph-users] User Interface

2016-03-11 Thread Josef Johansson
Proxmox handles the block storage at least, and I know that ownCloud handles object storage through rgw nowadays :)

Regards,
Josef

> On 02 Mar 2016, at 20:51, Michał Chybowski wrote:
>
> Unfortunately, VSM can manage only pools / clusters created by itself.
> Regards
> Michał Chybowski
> Tik

Re: [ceph-users] CephFS question

2016-03-11 Thread Gregory Farnum
On Friday, March 11, 2016, Sándor Szombat wrote:
> Hi guys!
>
> We use Ceph and we need a distributed storage cluster for our files. I
> checked CephFS, but the documentation says we can only use 1 MDS at this
> time.
>
This is referring to the
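
The distinction being drawn here is presumably between active MDS ranks and MDS daemons (filled in as a hedged illustration, not Greg's words): extra daemons simply register as standbys. On a release from that era this shows up roughly as:

$ ceph mds stat
e12: 1/1/1 up {0=node1=up:active}, 2 up:standby   # example output; names are placeholders
$ ceph mds dump | grep max_mds                     # max_mds stays at 1 for a single active rank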