[ceph-users] RGW performance test: put 30 thousand objects to one bucket, average latency 3 seconds

2014-07-02 Thread baijia...@126.com
Hi everyone, when I use rest-bench to test RGW with the command: rest-bench --access-key=ak --secret=sk --bucket=bucket --seconds=360 -t 200 -b 524288 --no-cleanup write, I found that the RGW call to "bucket_prepare_op" is very slow. So I looked at 'dump_historic_ops' and saw: { "descript
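A minimal sketch of how such slow requests can be inspected from the OSD admin socket (the socket path and OSD id are illustrative, assuming the default /var/run/ceph layout):
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight   # requests currently being processed
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops    # slowest recent requests with per-stage timings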

[ceph-users] Teuthology: Need some input on how to add osd after cluster setup is done using Teuthology

2014-07-02 Thread Shambhu Rajak
Hi Teuthology Users, Can someone help me with adding an OSD to a cluster already set up by Teuthology via a yaml file? Apart from the OSDs mentioned in the roles of the yaml file, I want to add a few additional OSDs to the cluster as part of my scenario. So far I haven't seen any task or any method

Re: [ceph-users] Bypass Cache-Tiering for special reads (Backups)

2014-07-02 Thread Kyle Bader
> I was wondering, having a cache pool in front of an RBD pool is all fine > and dandy, but imagine you want to pull backups of all your VMs (or one > of them, or multiple...). Going to the cache for all those reads isn't > only pointless, it'll also potentially fill up the cache and possibly > evi

[ceph-users] Ceph RBD and Backup.

2014-07-02 Thread Irek Fasikhov
Hi all. Dear community, how do you back up Ceph RBD? Thanks -- Fasihov Irek (aka Kataklysm). Best regards, Irek Fasikhov Nurgayazovich. Mob.: +79229045757

Re: [ceph-users] Performance is really bad when I run from vstart.sh

2014-07-02 Thread David Zafman
By default the vstart.sh setup would put all data below a directory called “dev” in the source tree. In that case you’re using a single spindle. The vstart script isn’t intended for performance testing. David Zafman Senior Developer http://www.inktank.com http://www.redhat.com On Jul 2, 2014

[ceph-users] Performance is really bad when I run from vstart.sh

2014-07-02 Thread Zhe Zhang
Hi folks, I run ceph on a single node which contains 25 hard drives, each @7200 RPM. When I write raw data to the array it achieves 2 GB/s, so I presumed the performance of ceph could go beyond 1 GB/s. But when I compile the ceph code and run development mode with vstart.sh, the average throughpu

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Yes, thanks. -Sam On Wed, Jul 2, 2014 at 4:21 PM, Pierre BLONDEAU wrote: > Like that ? > > # ceph --admin-daemon /var/run/ceph/ceph-mon.william.asok version > {"version":"0.82"} > # ceph --admin-daemon /var/run/ceph/ceph-mon.jack.asok version > {"version":"0.82"} > # ceph --admin-daemon /var/run/

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Pierre BLONDEAU
Like that? # ceph --admin-daemon /var/run/ceph/ceph-mon.william.asok version {"version":"0.82"} # ceph --admin-daemon /var/run/ceph/ceph-mon.jack.asok version {"version":"0.82"} # ceph --admin-daemon /var/run/ceph/ceph-mon.joe.asok version {"version":"0.82"} Pierre On 03/07/2014 01:17, Samuel

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Can you confirm from the admin socket that all monitors are running the same version? -Sam On Wed, Jul 2, 2014 at 4:15 PM, Pierre BLONDEAU wrote: > On 03/07/2014 00:55, Samuel Just wrote: > >> Ah, >> >> ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush >> /tmp/crush$i osd-$i*;

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Pierre BLONDEAU
On 03/07/2014 00:55, Samuel Just wrote: Ah, ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush /tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i > /tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d ../ceph/src/osdmaptool: osdmap file 'osd-20_osdmap.13258__0_4E62

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Ah, ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush /tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i > /tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d ../ceph/src/osdmaptool: osdmap file 'osd-20_osdmap.13258__0_4E62BB79__none' ../ceph/src/osdmaptool: exported
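Reflowed for readability, the loop from this message (paths are relative to Sam's source tree) extracts the crush map from each OSD's copy of the osdmap, decompiles it and diffs the results:
    for i in 20 23; do
        ../ceph/src/osdmaptool --export-crush /tmp/crush$i osd-$i*   # pull the crush map out of the osdmap file
        ../ceph/src/crushtool -d /tmp/crush$i > /tmp/crush$i.d       # decompile it to readable text
    done
    diff /tmp/crush20.d /tmp/crush23.d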

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Yeah, divergent osdmaps: 555ed048e73024687fc8b106a570db4f osd-20_osdmap.13258__0_4E62BB79__none 6037911f31dc3c18b05499d24dcdbe5c osd-23_osdmap.13258__0_4E62BB79__none Joao: thoughts? -Sam On Wed, Jul 2, 2014 at 3:39 PM, Pierre BLONDEAU wrote: > The files > > When I upgrade : > ceph-deploy ins

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Joao: this looks like divergent osdmaps, osd 20 and osd 23 have differing ideas of the acting set for pg 2.11. Did we add hashes to the incremental maps? What would you want to know from the mons? -Sam On Wed, Jul 2, 2014 at 3:10 PM, Samuel Just wrote: > Also, what version did you upgrade from,

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Also, what version did you upgrade from, and how did you upgrade? -Sam On Wed, Jul 2, 2014 at 3:09 PM, Samuel Just wrote: > Ok, in current/meta on osd 20 and osd 23, please attach all files matching > > ^osdmap.13258.* > > There should be one such file on each osd. (should look something like > o

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
Ok, in current/meta on osd 20 and osd 23, please attach all files matching ^osdmap.13258.* There should be one such file on each osd. (should look something like osdmap.6__0_FD6E4C01__none, probably hashed into a subdirectory, you'll want to use find). What version of ceph is running on your mon
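A sketch of how to locate those files, assuming the default OSD data path /var/lib/ceph/osd/ceph-<id> (adjust to your layout):
    find /var/lib/ceph/osd/ceph-20/current/meta -name 'osdmap.13258*'
    find /var/lib/ceph/osd/ceph-23/current/meta -name 'osdmap.13258*'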

[ceph-users] Bypass Cache-Tiering for special reads (Backups)

2014-07-02 Thread Marc
Hi, I was wondering, having a cache pool in front of an RBD pool is all fine and dandy, but imagine you want to pull backups of all your VMs (or one of them, or multiple...). Going to the cache for all those reads isn't only pointless, it'll also potentially fill up the cache and possibly evict ac

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Pierre BLONDEAU
Hi, I did it; the log files are available here: https://blondeau.users.greyc.fr/cephlog/debug20/ The OSD log files are really big, +/- 80M. After starting osd.20 some other OSDs crashed; I went from 31 OSDs up to 16 up. I noticed that after this the number of down+peering PGs decreased from 367 to

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Alright, I was finally able to get this resolved without adding another node. As pointed out, even though I had a config variable that defined the default replicated size as 2, ceph for some reason created the default pools (data and metadata) with a value of 3. After digging through documentat
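For reference, a sketch of the two knobs involved, assuming the legacy default pools (data, metadata): the config option only applies to pools created after it is set, so pools that already exist have to be changed explicitly.
    [global]
        osd pool default size = 2      # replicas for newly created pools
        osd pool default min size = 1  # replicas required to keep serving I/O

    ceph osd pool set data size 2      # adjust pools that were already created with size 3
    ceph osd pool set metadata size 2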

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Gregory Farnum
On Wed, Jul 2, 2014 at 12:44 PM, Stefan Priebe wrote: > Hi Greg, > > On 02.07.2014 21:36, Gregory Farnum wrote: >> >> On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe >> wrote: >>> >>> >>> On 02.07.2014 16:00, Gregory Farnum wrote: >>> Yeah, it's fighting for attention with a lot of other

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Stefan Priebe
Hi Greg, On 02.07.2014 21:36, Gregory Farnum wrote: On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe wrote: On 02.07.2014 16:00, Gregory Farnum wrote: Yeah, it's fighting for attention with a lot of other urgent stuff. :( Anyway, even if you can't look up any details or reproduce at this

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Gregory Farnum
On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe wrote: > > On 02.07.2014 16:00, Gregory Farnum wrote: > >> Yeah, it's fighting for attention with a lot of other urgent stuff. :( >> >> Anyway, even if you can't look up any details or reproduce at this >> time, I'm sure you know what shape the clus

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Stefan Priebe
On 02.07.2014 16:00, Gregory Farnum wrote: Yeah, it's fighting for attention with a lot of other urgent stuff. :( Anyway, even if you can't look up any details or reproduce at this time, I'm sure you know what shape the cluster was (number of OSDs, running on SSDs or hard drives, etc), and th

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Samuel Just
You should add debug osd = 20, debug filestore = 20 and debug ms = 1 to the [osd] section of ceph.conf and restart the OSDs. I'd like all three logs if possible. Thanks -Sam On Wed, Jul 2, 2014 at 5:03 AM, Pierre BLONDEAU wrote: > Yes, but how i do that ? > > With a command like that ? > > cep
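In ceph.conf form, this is simply (per Sam's instructions, on each OSD host, followed by an OSD restart):
    [osd]
        debug osd = 20
        debug filestore = 20
        debug ms = 1
Alternatively the same values can be injected at runtime with ceph tell osd.N injectargs, as discussed elsewhere in this thread.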

[ceph-users] [ANN] ceph-deploy 1.5.7 released

2014-07-02 Thread Alfredo Deza
Hi All, There is a new bug-fix release of ceph-deploy, the easy deployment tool for Ceph. The full list of fixes for this release can be found in the changelog: http://ceph.com/ceph-deploy/docs/changelog.html#id1 Make sure you update! -Alfredo

Re: [ceph-users] RBD layering

2014-07-02 Thread NEVEU Stephane
Ok thanks: mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kvm1, allow rx pool=templates' seems to be enough. One more question about RBD layering: I've made a clone (child) in my pool 'kvm' from my protected snapshot in my pool 'template', and after launching m
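For reference, a sketch of how such caps are typically applied (the client name client.kvm1 is illustrative; the caps string is the one quoted above):
    ceph auth caps client.kvm1 \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=kvm1, allow rx pool=templates'
    ceph auth get client.kvm1   # verify the resulting caps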

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer writes: > Read EVERYTHING you can find about crushmap rules. > > The quickstart (I think) talks about 3 storage nodes, not OSDs. > > Ceph is quite good when it comes to defining failure domains, the default > is to segregate at the storage node level. > What good is a replicati

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Christian Balzer
On Wed, 2 Jul 2014 14:25:49 + (UTC) Brian Lovett wrote: > Christian Balzer writes: > > > > So either make sure these pools really have a replication of 2 by > > deleting and re-creating them or add a third storage node. > > > > I just executed "ceph osd pool set {POOL} size 2" for both p

Re: [ceph-users] Issues upgrading from 0.72.x (emperor) to 0.81.x (firefly)

2014-07-02 Thread Sylvain Munaut
Hi, > I can't help you with packaging issues, but i can tell you that the > rbdmap executable got moved to a different package at some point, but > I believe the official ones handle it properly. I'll see tonight when doing the other nodes. Maybe it's a result of using dist-upgrade rather than j

Re: [ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-02 Thread Wido den Hollander
On 07/02/2014 04:08 PM, Andrija Panic wrote: Hi, I have existing CEPH cluster of 3 nodes, versions 0.72.2 I'm in a process of installing CEPH on 4th node, but now CEPH version is 0.80.1 Will this make problems running mixed CEPH versions ? No, but the recommendation is not to have this runn

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Gregory Farnum writes: > > On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett > wrote: > > "profile": "bobtail", > > Okay. That's unusual. What's the oldest client you need to support, > and what Ceph version are you using? You probably want to set the > crush tunables to "optimal"; the "bobta
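For reference, a minimal sketch of the tunables change being suggested (note that switching profiles can trigger significant data movement and requires clients new enough to understand the newer tunables):
    ceph osd crush show-tunables       # inspect the current tunables profile
    ceph osd crush tunables optimal    # switch to the optimal profile for the running release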

Re: [ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

2014-07-02 Thread Brian Lovett
Christian Balzer writes: > So either make sure these pools really have a replication of 2 by deleting > and re-creating them or add a third storage node. I just executed "ceph osd pool set {POOL} size 2" for both pools. Anything else I need to do? I still don't see any changes to the status
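A sketch of how the change can be verified, assuming the legacy default pool names; min_size may also need lowering so a two-replica pool keeps serving I/O with one copy down:
    ceph osd dump | grep size          # each pool line shows its size and min_size
    ceph osd pool set data min_size 1  # only if min_size is still at the old value
    ceph -s                            # watch whether the degraded PGs recover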

Re: [ceph-users] Issues upgrading from 0.72.x (emperor) to 0.81.x (firefly)

2014-07-02 Thread Gregory Farnum
On Wed, Jul 2, 2014 at 6:18 AM, Sylvain Munaut wrote: > Hi, > > > I'm having a couple of issues during this update. On the test cluster > it went fine, but when running it on production I have a few issues. > (I guess there is some subtle difference I missed, I updated the test > one back when emp

[ceph-users] Mixing CEPH versions on new ceph nodes...

2014-07-02 Thread Andrija Panic
Hi, I have an existing CEPH cluster of 3 nodes, version 0.72.2. I'm in the process of installing CEPH on a 4th node, but now the CEPH version is 0.80.1. Will running mixed CEPH versions cause problems? I intend to upgrade CEPH on the existing 3 nodes anyway. Recommended steps? Thanks -- Andrija Pani
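A quick way to see which versions are actually running while the nodes are mixed (daemon names are illustrative; the admin-socket form mirrors the other threads on this page):
    ceph tell osd.* version                                        # version reported by every OSD
    ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok version  # per-monitor check via the admin socket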

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Gregory Farnum
Yeah, it's fighting for attention with a lot of other urgent stuff. :( Anyway, even if you can't look up any details or reproduce at this time, I'm sure you know what shape the cluster was (number of OSDs, running on SSDs or hard drives, etc), and that would be useful guidance. :) -Greg Software E

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Sylvain Munaut
Hi, > Did you also recreate the journal?! It was a journal file and got re-created automatically. Cheers, Sylvain

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Smart Weblications GmbH
On 01.07.2014 17:48, Sylvain Munaut wrote: > Hi, > > > As an exercise, I killed an OSD today, just killed the process and > removed its data directory. > > To recreate it, I recreated an empty data dir, then > > ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs > > (I tried wi
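A sketch of the recreate sequence being discussed, assuming a freshly fetched monmap and the default config path (this mirrors Sylvain's command; as the later messages show, the old fsid still has to be dealt with):
    ceph mon getmap -o /tmp/monmap                                    # grab the current monitor map
    ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs  # re-initialise the empty data dir for osd.3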

[ceph-users] Issues upgrading from 0.72.x (emperor) to 0.81.x (firefly)

2014-07-02 Thread Sylvain Munaut
Hi, I'm having a couple of issues during this update. On the test cluster it went fine, but when running it on production I have a few issues. (I guess there is some subtle difference I missed, I updated the test one back when emperor came out). For reference, I'm on ubuntu precise, I use self-b

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Stefan Priebe - Profihost AG
On 02.07.2014 15:07, Haomai Wang wrote: > Could you give some perf counters from the rbd client side? Such as op latency? Sorry, I don't have any counters. As this mail went unseen for some days, I thought nobody had an idea or could help. Stefan > On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost A

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Haomai Wang
Could you give some perf counters from the rbd client side? Such as op latency? On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG wrote: > On 02.07.2014 00:51, Gregory Farnum wrote: >> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG >> wrote: >>> Hi Greg, >>> >>> On 26.06.

Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?

2014-07-02 Thread Stefan Priebe - Profihost AG
On 02.07.2014 00:51, Gregory Farnum wrote: > On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG > wrote: >> Hi Greg, >> >> On 26.06.2014 02:17, Gregory Farnum wrote: >>> Sorry we let this drop; we've all been busy traveling and things. >>> >>> There have been a lot of changes to li

Re: [ceph-users] Some OSD and MDS crash

2014-07-02 Thread Pierre BLONDEAU
Yes, but how do I do that? With a command like this? ceph tell osd.20 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1' Or by modifying /etc/ceph/ceph.conf? This file is nearly empty because I use udev detection. Once I have made these changes, do you want all three log files or only

Re: [ceph-users] RBD layering

2014-07-02 Thread NEVEU Stephane
>Subject: Re: [ceph-users] RBD layering On 07/02/2014 10:08 AM, NEVEU Stephane wrote: >> Hi all, >> >> I'm messing around with "rbd layering" to store some ready-to-use >> templates (format 2) in a template pool: >> >> rbd -p templates ls >> >> Ubuntu1404 >> >> Centos6 >> >> ... >>

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Sylvain Munaut
Hi Loic, > By restoring the fsid file from the backup, presumably. I did not think of that > when you showed the ceph-osd mkfs line, but it makes sense. This is not the > ceph fsid. Yeah, I thought about that and I saw fsid and ceph_fsid, but I wasn't sure that just replacing the file would be en
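For clarity, a sketch of where the two ids live and how to compare them with what the cluster expects (default data path assumed; osd.3 is the OSD from this thread):
    cat /var/lib/ceph/osd/ceph-3/ceph_fsid   # cluster fsid, identical on every OSD
    cat /var/lib/ceph/osd/ceph-3/fsid        # per-OSD uuid, the one the monitor complained about
    ceph osd dump | grep '^osd.3 '           # the uuid the cluster has registered for osd.3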

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Loic Dachary
Hi Sylvain, On 02/07/2014 11:13, Sylvain Munaut wrote: > Ah, I finally fond something that looks like an error message : > > 2014-07-02 11:07:57.817269 7f0692e3a700 7 mon.a@0(leader).osd e1147 > preprocess_boot from osd.3 10.192.2.70:6807/9702 clashes with existing > osd: different fsid (ours: e

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Sylvain Munaut
Just for future reference, you actually do need to remove the OSD even if you're going to re-add it like 10 sec later ... $ ceph osd rm 3 removed osd.3 $ ceph osd create 3 Then it works fine. No need to remove it from the crush map or remove the auth key (you can re-use both), but you need to remove/add

Re: [ceph-users] RBD layering

2014-07-02 Thread Wido den Hollander
On 07/02/2014 10:08 AM, NEVEU Stephane wrote: Hi all, I'm messing around with "rbd layering" to store some ready-to-use templates (format 2) in a template pool: rbd -p templates ls Ubuntu1404 Centos6 ... rbd snap create templates/Ubuntu1404@Ubuntu1404-snap-protected rbd snap

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Sylvain Munaut
Ah, I finally found something that looks like an error message: 2014-07-02 11:07:57.817269 7f0692e3a700 7 mon.a@0(leader).osd e1147 preprocess_boot from osd.3 10.192.2.70:6807/9702 clashes with existing osd: different fsid (ours: e44c914a-23e9-4756-9713-166de401dec6 ; theirs: c1cfff2f-4f2e-4c1d-a

Re: [ceph-users] [Solved] Init scripts in Debian not working

2014-07-02 Thread Dieter Scholz
Hello, I tried the ceph packages from jessie, too. After some time searching Google I think I found the solution. This will probably work for all package sources. You have to create an empty marker file named 'sysvinit' in the directories below /var/lib/ceph/XXX. Then everything works fine.
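A sketch of what that looks like, assuming the standard /var/lib/ceph layout (the daemon ids and hostname are illustrative):
    touch /var/lib/ceph/mon/ceph-$(hostname -s)/sysvinit
    touch /var/lib/ceph/osd/ceph-0/sysvinit
    /etc/init.d/ceph restart   # the sysvinit script now recognises and manages these daemons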

[ceph-users] RBD layering

2014-07-02 Thread NEVEU Stephane
Hi all, I'm messing around with "rbd layering" to store some ready-to-use templates (format 2) in a template pool: rbd -p templates ls Ubuntu1404 Centos6 ... rbd snap create templates/Ubuntu1404@Ubuntu1404-snap-protected rbd snap protect templates/Ubuntu1404@Ubuntu1404-snap-protected rbd clone
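A sketch of the full layering workflow being described (pool and image names are the ones from the message; the clone target kvm1/vm01-disk is illustrative):
    rbd -p templates ls
    rbd snap create templates/Ubuntu1404@Ubuntu1404-snap-protected
    rbd snap protect templates/Ubuntu1404@Ubuntu1404-snap-protected       # a parent snapshot must be protected before cloning
    rbd clone templates/Ubuntu1404@Ubuntu1404-snap-protected kvm1/vm01-disk
    rbd flatten kvm1/vm01-disk                                            # optional: copy parent data so the clone no longer depends on the snapshot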

Re: [ceph-users] Replacing an OSD

2014-07-02 Thread Sylvain Munaut
Hi, > Does OSD 3 show up when you run ceph pg dump? If so I would look in the logs of an > OSD which is participating in the same PG. It appears at the end but not in any PG; it has now been marked out and everything was redistributed. osdstat kbused kbavail kb hb in hb out 0 15602352158