[ceph-users] Radosgw keeps writing to specific OSDs while there are other free OSDs

2015-02-21 Thread B L
Hi Ceph community, I’m trying to upload a 5GB file through radosgw. I have 9 OSDs deployed on 3 machines, and my cluster is healthy. The problem is: the 5GB file is being uploaded to osd.0 and osd.1, which are near full, while the other OSDs have more free space that could hold this file
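
A minimal sketch of the usual first response to this situation, assuming the goal is to push new data away from the two near-full OSDs (osd.0 and osd.1 come from the report above; the 0.05 weight is only a placeholder, not a value from the thread):

$ ceph health detail                      # lists the near-full OSDs and their fill level
$ ceph osd tree                           # shows the current CRUSH weight of each OSD
$ ceph osd crush reweight osd.0 0.05      # lower the weight so fewer PGs map to it
$ ceph osd crush reweight osd.1 0.05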

[ceph-users] My PG is UP and Acting, yet it is unclean

2015-02-17 Thread B L
Hi All, I have a group of PGs that are up and acting, yet they are not clean, which puts the cluster in a warning state, i.e. non-healthy. This is my cluster status: $ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 203 pgs stuck unclean; recovery 6/132 obje
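
A short sketch of the diagnostics usually run for stuck-unclean PGs before digging further (the PG id in the last command is a placeholder, not one taken from this cluster):

$ ceph health detail              # names the stuck PGs and the reason they are flagged
$ ceph pg dump_stuck unclean      # lists unclean PGs with their up/acting OSD sets
$ ceph pg 0.1f query              # detailed state of one PG, e.g. why recovery stalls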

Re: [ceph-users] Having problem to start Radosgw

2015-02-16 Thread B L
so I had to suffer a little, since it was my first experience installing RGW and adding it to the cluster. Now we can run it like: sudo service radosgw start — or — sudo /etc/init.d/radosgw start And everything should work .. Thanks Yehuda for your support .. Beanos! > On Feb 15, 2015,
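
A quick way to confirm the gateway really came up after the init script, assuming the default setup where radosgw answers anonymous S3 requests on port 80 (the port and the check itself are assumptions, not details from the thread):

$ sudo service radosgw start
$ ps aux | grep [r]adosgw         # a running radosgw process should appear
$ curl -s http://localhost/       # an S3 XML listing response means the gateway answers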

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
. > > Yehuda > > - Original Message - >> From: "B L" >> To: "Yehuda Sadeh-Weinraub" >> Cc: ceph-users@lists.ceph.com >> Sent: Saturday, February 14, 2015 2:56:54 PM >> Subject: Re: [ceph-users] Having problem to start Radosgw

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
ithout using RGW) Best! > On Feb 15, 2015, at 12:39 AM, B L wrote: > > That’s what I usually do to check if rgw is running with no problems: sudo > radosgw -c ceph.conf -d > > I already pumped up the log level, but I can’t see any change or verbosity > level increase o
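
If raising the log level in ceph.conf appears to have no effect, one common cause is that the debug options sit under a section name that does not match the client name the daemon starts with. A hedged alternative is to force verbosity on the command line for a single foreground run (the levels 20 and 1 are just common debugging choices):

$ radosgw -c ceph.conf -d --debug-rgw=20 --debug-ms=1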

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
w if I can do something more .. Now I have 2 questions: 1- which RADOS user do you refer to? 2- How would I know that I’m using the wrong cephx keys unless I see an authentication error or a relevant warning? Thanks! Beanos > On Feb 14, 2015, at 11:29 PM, Yehuda Sadeh-Weinraub wrote: > > > > F
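
On the first question: radosgw talks to the cluster as an ordinary cephx client, named client.radosgw.gateway elsewhere in these threads. A sketch of how to compare the key the monitors hold with the key the gateway will present (the keyring path is the conventional one, an assumption here):

$ ceph auth get client.radosgw.gateway              # key and caps stored in the cluster
$ sudo cat /etc/ceph/ceph.client.radosgw.keyring    # key the gateway reads locally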

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
Hello Yehuda, The strace command you referred me to shows this: https://gist.github.com/anonymous/8e9f1ced485996a263bb Additionally, I traced this log file: /var/log/radosgw/ceph-client.radosgw.gateway It has the following: 2015-02-12

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
Shall I run it like this: sudo radosgw -c ceph.conf -d strace -F -T -tt -o/tmp/strace.out radosgw -f > On Feb 14, 2015, at 6:55 PM, Yehuda Sadeh-Weinraub wrote: > > strace -F -T -tt -o/tmp/strace.out radosgw -f
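
For clarity, the two commands are not meant to be concatenated: strace runs as the parent process and radosgw as its child. A sketch of the intended invocation, combining the quoted suggestion with the -c option used earlier in the thread:

$ sudo strace -F -T -tt -o/tmp/strace.out radosgw -c ceph.conf -f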

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
<https://gist.github.com/anonymous/90b77c168ed0606db03d> Please let me know if you need anything else. Best! > On Feb 14, 2015, at 6:22 PM, Yehuda Sadeh-Weinraub wrote: > > > > - Original Message - >> From: "B L" >> To: ceph-users@lists.ceph.c

[ceph-users] Having problem to start Radosgw

2015-02-13 Thread B L
Hi all, I’m having a problem starting radosgw; it gives me an error that I can’t diagnose: $ radosgw -c ceph.conf -d 2015-02-14 07:46:58.435802 7f9d739557c0 0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 27609 2015-02-14 07:46:58.437284 7f9d739557c0 -1 asok(0
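
The preview cuts off at the asok (admin socket) line, so the exact failure is not visible here. A common first step, sketched with conventional names and paths that are assumptions rather than details from this thread, is to make sure the run directories exist and to start the daemon under its own client name:

$ sudo mkdir -p /var/run/ceph /var/log/radosgw
$ sudo radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway -d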

[ceph-users] Can't add RadosGW keyring to the cluster

2015-02-12 Thread B L
Hi all, Trying to do this: ceph -k ceph.client.admin.keyring auth add client.radosgw.gateway -i ceph.client.radosgw.keyring Getting this error: Error EINVAL: entity client.radosgw.gateway exists but key does not match What can this be? Thanks! Beanos
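
The error means the cluster already holds a client.radosgw.gateway entry whose key differs from the one in the local keyring file. A sketch of the two usual ways to reconcile them (pick one; the first discards the key stored in the cluster, the second discards the local one):

$ ceph auth del client.radosgw.gateway
$ ceph auth add client.radosgw.gateway -i ceph.client.radosgw.keyring

$ ceph auth get client.radosgw.gateway -o ceph.client.radosgw.keyring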

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-11 Thread B L
). > > > Best wishes, > Vickie > > 2015-02-10 22:25 GMT+08:00 B L <mailto:super.itera...@gmail.com>>: > Thanks to everyone!! > > After applying the re-weighting command (ceph osd crush reweight osd.0 0.0095), my cluster is getting healthy now :))
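
Since the cluster in this thread has three OSD servers with two small disks each, the same weight can be applied across all of them so placement stays balanced; a convenience sketch using the value from the thread (osd IDs 0-5 match the six OSDs described earlier):

$ for i in $(seq 0 5); do ceph osd crush reweight osd.$i 0.0095; done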

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
> Regards, > Vikhyat > > On 02/10/2015 07:31 PM, B L wrote: >> Thanks Vikhyat, >> >> As suggested .. >> >> ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 >> >> Invalid command: osd.0 doesn't represent a float >> osd

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn't represent a float osd crush reweight <name> <weight>: change <name>'s weight to <weight> in crush map Error EINVAL: invalid command What do you think? > On Feb 10, 2015, at 3:18 PM, Vikhy
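
The EINVAL here is only about argument order: the OSD name comes first and the weight second. The corrected call, which is the one that later resolved the issue in this thread:

$ ceph osd crush reweight osd.0 0.0095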

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
D. This means your OSD must be 10GB > or greater! > > > Udo > > On 10.02.2015 12:22, B L wrote: >> Hi Vickie, >> >> My OSD tree looks like this: >> >> ceph@ceph-node3:/home/ubuntu$ ceph osd tree >> # id  weight  type name  up/down  reweight >>
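
The quoted rule follows from how default CRUSH weights are chosen: the weight roughly equals the device size expressed in TB. A worked example under that assumption:

  10 GB = 10 / 1024 TB ≈ 0.0098

which is close to the 0.0095 weight used elsewhere in this thread, while much smaller devices round down to a CRUSH weight of 0.00 and therefore receive no data.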

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
ose changes mean 2- How can changing the replication size cause the cluster to be unhealthy? Thanks Vickie! Beanos > On Feb 10, 2015, at 1:28 PM, B L wrote: > > I changed the size and min_size as you suggested while keeping ceph -w open in a different window, and I got this:

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
11:23:40.769794 mon.0 [INF] pgmap v94: 256 pgs: 256 active+degraded; 0 bytes data, 200 MB used, 18165 MB / 18365 MB avail 2015-02-10 11:23:45.530713 mon.0 [INF] pgmap v95: 256 pgs: 256 active+degraded; 0 bytes data, 200 MB used, 18165 MB / 18365 MB avail > On Feb 10, 2015, at 1:24 PM, B L

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
I will try to change the replication size now as you suggested .. but how is that related to the non-healthy cluster? > On Feb 10, 2015, at 1:22 PM, B L wrote: > > Hi Vickie, > > My OSD tree looks like this: > > ceph@ceph-node3:/home/ubuntu$ ceph osd tree > # id we
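
For reference, a sketch of the replication-size change being discussed, applied to the three default Firefly pools; the pool names are the defaults and the values 2 and 1 are placeholders, since the exact numbers suggested are cut off in the preview:

$ for p in data metadata rbd; do ceph osd pool set $p size 2; ceph osd pool set $p min_size 1; done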

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
..@gmail.com>>: > Hi Beanos: > So you have 3 OSD servers and each of them has 2 disks. > I have a question: what is the result of "ceph osd tree"? Looks like the osd status > is "down". > > > Best wishes, > Vickie > > 2015-02-10 19:00 GMT+08:00

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
08a-8022-6397c78032be osd.5 up in weight 1 up_from 22 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.3.56:6805/7019 172.31.3.56:6806/7019 172.31.3.56:6807/7019 172.31.3.56:6808/7019 exists,up da67b604-b32a-44a0-9920-df0774ad2ef3 > On Feb 10, 2015, at 12:55 PM, B L wrote: > >

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
> On Feb 10, 2015, at 12:37 PM, B L wrote: > > Hi Vickie, > > Thanks for your reply! > > You can find the dump in this link: > > https://gist.github.com/anonymous/706d4a1ec81c93fd1eca > >

[ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Having a problem with my fresh, non-healthy cluster; my cluster status summary shows this: ceph@ceph-node1:~$ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_num 128 > pgp_num 64 m
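
One of the warnings in that status is self-contained: pg_num on the data pool was raised to 128 without raising pgp_num, so placement still uses 64 groups. A sketch of the usual remedy, with the pool name and count taken from the warning itself:

$ ceph osd pool set data pgp_num 128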