Re: [ceph-users] OSDs are crashing during PG replication

2016-03-11 Thread Shinobu Kinjo
On Mar 11, 2016 3:12 PM, "Alexander Gubanov"  wrote:
>
> Sorry, I didn't have time to answer.
>
> >First you said 2 OSDs crashed every time. From the log you pasted,
> >it makes sense to do something about osd.3.
>
> The problem is a single PG, 3.2. This PG is on osd.3 and osd.16, and both
> of these OSDs crashed every time.
>
> >> rm -rf
> >>
/var/lib/ceph/osd/ceph-4/current/3.2_head/rb.0.19f2e.238e1f29.0728__head_813E90A3__3
>
> >What confuses me now is this.
> >Did osd.4 also crash, like osd.3?
>
> I thought that the problem was osd.13 or osd.16. I tried to disable these
> OSDs:
> # ceph osd crush reweight osd.3 0
> # ceph osd crush reweight osd.16 0
> but when I did it, two other OSDs crashed; one of them was osd.4, and
> PG 3.2 was on osd.4.
>
> After this I decided to remove the cache pool.
> Now I'm moving all the data to a new, large SSD, and so far everything is
> fine.
>

Thanks for letting me know.
That is good to know.

I hope you are playing with Ceph again!
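
By the way, if you ever want to double-check which OSDs a PG such as 3.2
maps to (and which one is primary), the standard commands are enough; the
PG and OSD ids below are just the ones from this thread:

  ceph pg map 3.2      # up/acting set for the PG, e.g. [3,16]
  ceph pg 3.2 query    # detailed peering and recovery state for that PG
  ceph osd find 3      # host and CRUSH location of osd.3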

> On Fri, Mar 4, 2016 at 10:44 AM, Shinobu Kinjo 
wrote:
>>
>> Thank you for your explanation.
>>
>> > Every time, 2 of 18 OSDs crash. I think it happens during PG
>> > replication, because only 2 OSDs crash and they are the same ones
>> > every time.
>>
>> First you said 2 OSDs crashed every time. From the log you pasted,
>> it makes sense to do something about osd.3.
>>
>> > rm -rf
>> >
/var/lib/ceph/osd/ceph-4/current/3.2_head/rb.0.19f2e.238e1f29.0728__head_813E90A3__3
>>
>> What confuses me now is this.
>> Did osd.4 also crash, like osd.3?
>>
>> >-1> 2016-02-24 04:51:45.904673 7fd995026700  5 -- op tracker -- ,
seq: 19231, time: 2016-02-24 04:51:45.904673, event: started, request:
osd_op(osd.13.12097:806247 rb.0.218d6.238e1f29.00010db3 [copy-get max
8388608] 3.94c2bed2 ack+read+ignore_cache+ignore_overlay+map_snap_clone
e13252) v4
>>
>> The crash seems to happen during this process; what I really want to
>> know is what this message implies.
>> Did you check osd.13?
>>
>> Anyhow, your cluster is fine now... no?
>> That's good news.
>>
>> Cheers,
>> Shinobu
>>
>> On Fri, Mar 4, 2016 at 11:05 AM, Alexander Gubanov 
wrote:
>> > I decided to stop using the SSD cache pool and to create just 2 pools:
>> > the 1st only of SSDs for fast storage, the 2nd only of HDDs for slow
>> > storage.
>> > As for this file, honestly, I don't know why it is created. As I said,
>> > I flush the journal for the fallen OSD, remove this file, and then
>> > start the OSD daemon:
>> >
>> > ceph-osd --flush-journal osd.3
>> > rm -rf
>> >
/var/lib/ceph/osd/ceph-4/current/3.2_head/rb.0.19f2e.238e1f29.0728__head_813E90A3__3
>> > service ceph start osd.3
>> >
>> > But if I turn the cache pool off, the file isn't created:
>> >
>> > ceph osd tier cache-mode ${cache_pool} forward
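
For what it's worth, the usual sequence to fully drain a cache tier before
removing it looks roughly like this (a sketch; ${cache_pool} and
${storage_pool} are placeholders for the real pool names):

  ceph osd tier cache-mode ${cache_pool} forward     # stop caching new writes
  rados -p ${cache_pool} cache-flush-evict-all       # flush/evict everything back to the base pool
  ceph osd tier remove-overlay ${storage_pool}       # stop redirecting client IO through the cache
  ceph osd tier remove ${storage_pool} ${cache_pool}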
>> >
>>
>>
>>
>> --
>> Email:
>> shin...@linux.com
>> GitHub:
>> shinobu-x
>> Blog:
>> Life with Distributed Computational System based on OpenSource
>
>
>
>
> --
> Alexander Gubanov
>


[ceph-users] Disk usage

2016-03-11 Thread Maxence Sartiaux
Hello, 

I have a little problem, or I don't understand something.

ceph df reports a used total of ~5 TB, but rbd ls lists images with a total
of ~1.1 TB. Where are the other ~4 TB used?

$ rbd ls -l 

NAME SIZE PARENT FMT PROT LOCK
vm-105-disk-1 51200M 2
vm-105-disk-2 102400M 2
volume-45cde9d2-3a14-4138-b51f-2f3077ebcbb2 51200M 2
volume-45cde9d2-3a14-4138-b51f-2f3077ebcbb2@volume-ddedca3f-82aa-4cae-ae10-91c726bc2f65.clone_snap 51200M 2 yes
volume-75dc06e1-aa27-49aa-9101-333d92e9d291 120G 2
volume-bf5cc58c-4674-4ecc-b434-c814d3ce1f5c 500G 2
volume-ddedca3f-82aa-4cae-ae10-91c726bc2f65 51200M rbd/volume-45cde9d2-3a14-4138-b51f-2f3077ebcbb2@volume-ddedca3f-82aa-4cae-ae10-91c726bc2f65.clone_snap 2
volume-f4fe4023-f80a-4b4d-a232-49787a0e2101 120G 2

TOTAL: 1047.2 GB

$ ceph df 

GLOBAL: 
SIZE AVAIL RAW USED %RAW USED 
37867G 27457G 10409G 27.49 
POOLS: 
NAME ID USED %USED MAX AVAIL OBJECTS 
rbd 1 4630G 12.23 11653G 1196797 
ssd 2 546G 1.44 1599G 3212442 

$ ceph pg stat 

v364130: 3600 pgs: 3599 active+clean, 1 active+clean+scrubbing+deep; 5176 GB 
data, 10409 GB used, 27457 GB / 37867 GB avail; 68 B/s wr, 0 op/s 
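
I realise the GLOBAL "RAW USED" figure counts every replica (with a pool
size of 2, 5176 GB of data would come to roughly 10352 GB raw, which is
close to the 10409 GB reported), but that still doesn't explain the gap
between the ~1 TB of images and the 4630 GB used by the rbd pool. In case
it helps, these are the commands I can run to break the usage down further
(rbd du needs a reasonably recent client):

$ ceph osd pool get rbd size   # replication factor of the pool
$ rados df                     # per-pool object counts and raw space
$ rbd du -p rbd                # provisioned vs. actual usage per image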



[ceph-users] CephFS question

2016-03-11 Thread Sándor Szombat
Hi guys!

We use Ceph, and we need a distributed storage cluster for our files. I
checked CephFS, but the documentation says we can only use 1 MDS at this
time. So, because of HA, we need 3 MDSes on three master nodes.
What is your experience with CephFS? Is it buggy? Does anybody have an idea
when it will be "production ready"?

Thanks for your help!
Have a nice day!


Re: [ceph-users] User Interface

2016-03-11 Thread Josef Johansson
Proxmox handles the block storage side at least, and I know that ownCloud
handles object storage through RGW nowadays :)

Regards,
Josef

> On 02 Mar 2016, at 20:51, Michał Chybowski  
> wrote:
> 
> Unfortunately, VSM can manage only pools / clusters created by itself.
> Regards,
> Michał Chybowski
> Tiktalik.com
> On 02.03.2016 at 20:23, Василий Ангапов wrote:
>> You may also look at Intel Virtual Storage Manager:
>> https://github.com/01org/virtual-storage-manager 
>> 
>> 2016-03-02 13:57 GMT+03:00 John Spray:
>> On Tue, Mar 1, 2016 at 2:42 AM, Vlad Blando <vbla...@morphlabs.com>
>> wrote:
>> Hi,
>> 
>> We already have user interfaces that are admin-facing (e.g. Calamari,
>> Kraken, ceph-dash); how about a client-facing interface that can cater for
>> both block and object storage? For object storage I can use Swift via the
>> Horizon dashboard, but for block storage I'm not sure how.
>> 
>> So you're thinking of something that would be a UI equivalent of the rbd
>> command line, right? In an OpenStack environment I guess you'd be doing
>> that via the Cinder integration with Horizon. Outside of OpenStack, I think
>> that the people working on https://github.com/skyrings/skyring have
>> ambitions along these lines too.
>> 
>> John
>>  
>> 
>> Thanks.
>> 
>> 
>> /Vlad


Re: [ceph-users] CephFS question

2016-03-11 Thread Gregory Farnum
On Friday, March 11, 2016, Sándor Szombat  wrote:

> Hi guys!
>
> We use Ceph, and we need a distributed storage cluster for our files. I
> checked CephFS, but the documentation says we can only use 1 MDS at this
> time.
>

This is referring to the number of active MDS servers. You can have an
arbitrary number of standby servers which will take over in case of failure
on the master.
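
In practice that just means running a ceph-mds daemon on each of the three
nodes: only one of them will be active, and the rest will register as
standbys. A minimal sketch of what that looks like (the host names here are
made up, and the standby-replay settings are optional):

  ceph mds stat    # something like "1/1/1 up {0=node1=up:active}, 2 up:standby"

  # optional, in ceph.conf: have one standby tail the active MDS journal
  [mds.node2]
      mds standby replay = true
      mds standby for rank = 0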


> So, because of HA, we need 3 MDSes on three master nodes. What is your
> experience with CephFS? Is it buggy? Does anybody have an idea when it
> will be "production ready"?
>

We use CephFS internally to store test logs and back the ceph-post-file
infrastructure and it's been well-behaved, but that's a pretty limited set
of workloads.

That said, we're declaring a basic set of functionality to be stable in
Jewel now that we have repair tools ready to use.
-Greg


>
> Thanks for your help!
> Have a nice day!
>