Hi, everyone,
When I use rest-bench to test RGW with the command: rest-bench --access-key=ak
--secret=sk --bucket=bucket --seconds=360 -t 200 -b 524288 --no-cleanup
write
I found that the "bucket_prepare_op" method called by RGW is very slow, so I
looked at it in 'dump_historic_ops' and saw:
{ "descript
Hi Teuthology users,
Can someone help me with how to add OSDs to a cluster already set up by Teuthology via a
yaml file?
Other than the OSDs mentioned in the roles of the yaml file, I want to add
a few additional OSDs to the cluster as part of my scenario. So far I haven't
seen any task or any method
> I was wondering, having a cache pool in front of an RBD pool is all fine
> and dandy, but imagine you want to pull backups of all your VMs (or one
> of them, or multiple...). Going to the cache for all those reads isn't
> only pointless, it'll also potentially fill up the cache and possibly
> evi
Hi, all.
Dear community, how do you make backups of Ceph RBD?
Thanks
--
Fasihov Irek (aka Kataklysm).
Best regards, Fasihov Irek Nurgayazovich
Mobile: +79229045757
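A minimal sketch of one common approach, using snapshots plus rbd export / export-diff; the pool, image and snapshot names below are placeholders, not taken from this thread:

  # full backup: snapshot the image, then export the snapshot
  rbd snap create kvm1/vm-disk@backup-20140702
  rbd export kvm1/vm-disk@backup-20140702 /backups/vm-disk-20140702.img

  # incremental backup: export only the changes since a previous snapshot
  rbd export-diff --from-snap backup-20140701 kvm1/vm-disk@backup-20140702 /backups/vm-disk-0701-0702.diff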
By default the vstart.sh setup would put all data below a directory called
“dev” in the source tree. In that case you’re using a single spindle. The
vstart script isn’t intended for performance testing.
David Zafman
Senior Developer
http://www.inktank.com
http://www.redhat.com
On Jul 2, 2014
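For reference, a typical vstart.sh invocation looks roughly like the sketch below (assuming a built source tree; everything lands under src/dev on a single spindle, which is why it is unsuitable for benchmarks):

  cd src
  MON=1 OSD=3 MDS=0 ./vstart.sh -n -x -d   # -n: new cluster, -x: cephx on, -d: debug
  ./ceph -c ./ceph.conf -s                 # check the toy cluster's status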
Hi folks,
I run Ceph on a single node which contains 25 hard drives, each at 7200 RPM. When I
write raw data to the array it achieves 2 GB/s, so I presumed the performance
of Ceph could go beyond 1 GB/s. But when I compile the Ceph code and run it in
development mode with vstart.sh, the average throughpu
Yes, thanks.
-Sam
On Wed, Jul 2, 2014 at 4:21 PM, Pierre BLONDEAU
wrote:
> Like that?
>
> # ceph --admin-daemon /var/run/ceph/ceph-mon.william.asok version
> {"version":"0.82"}
> # ceph --admin-daemon /var/run/ceph/ceph-mon.jack.asok version
> {"version":"0.82"}
> # ceph --admin-daemon /var/run/
Like that?
# ceph --admin-daemon /var/run/ceph/ceph-mon.william.asok version
{"version":"0.82"}
# ceph --admin-daemon /var/run/ceph/ceph-mon.jack.asok version
{"version":"0.82"}
# ceph --admin-daemon /var/run/ceph/ceph-mon.joe.asok version
{"version":"0.82"}
Pierre
On 03/07/2014 01:17, Samuel
Can you confirm from the admin socket that all monitors are running
the same version?
-Sam
On Wed, Jul 2, 2014 at 4:15 PM, Pierre BLONDEAU
wrote:
> On 03/07/2014 00:55, Samuel Just wrote:
>
>> Ah,
>>
>> ~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
>> /tmp/crush$i osd-$i*;
On 03/07/2014 00:55, Samuel Just wrote:
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i >
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
../ceph/src/osdmaptool: osdmap file 'osd-20_osdmap.13258__0_4E62
Ah,
~/logs » for i in 20 23; do ../ceph/src/osdmaptool --export-crush
/tmp/crush$i osd-$i*; ../ceph/src/crushtool -d /tmp/crush$i >
/tmp/crush$i.d; done; diff /tmp/crush20.d /tmp/crush23.d
../ceph/src/osdmaptool: osdmap file 'osd-20_osdmap.13258__0_4E62BB79__none'
../ceph/src/osdmaptool: exported
Yeah, divergent osdmaps:
555ed048e73024687fc8b106a570db4f osd-20_osdmap.13258__0_4E62BB79__none
6037911f31dc3c18b05499d24dcdbe5c osd-23_osdmap.13258__0_4E62BB79__none
Joao: thoughts?
-Sam
On Wed, Jul 2, 2014 at 3:39 PM, Pierre BLONDEAU
wrote:
> The files
>
> When I upgrade :
> ceph-deploy ins
Joao: this looks like divergent osdmaps, osd 20 and osd 23 have
differing ideas of the acting set for pg 2.11. Did we add hashes to
the incremental maps? What would you want to know from the mons?
-Sam
On Wed, Jul 2, 2014 at 3:10 PM, Samuel Just wrote:
> Also, what version did you upgrade from,
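For reference, a hedged way to see which acting set the monitors themselves report for that PG (pg id taken from the message above):

  ceph pg map 2.11     # up and acting OSD sets according to the current osdmap
  ceph pg 2.11 query   # detailed peering state as reported by the primary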
Also, what version did you upgrade from, and how did you upgrade?
-Sam
On Wed, Jul 2, 2014 at 3:09 PM, Samuel Just wrote:
> Ok, in current/meta on osd 20 and osd 23, please attach all files matching
>
> ^osdmap.13258.*
>
> There should be one such file on each osd. (should look something like
> o
Ok, in current/meta on osd 20 and osd 23, please attach all files matching
^osdmap.13258.*
There should be one such file on each osd. (should look something like
osdmap.6__0_FD6E4C01__none, probably hashed into a subdirectory,
you'll want to use find).
What version of ceph is running on your mon
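A sketch of the find invocation, assuming the default data path /var/lib/ceph/osd/ceph-$id (adjust if your OSDs live elsewhere):

  find /var/lib/ceph/osd/ceph-20/current/meta -name 'osdmap.13258*'
  find /var/lib/ceph/osd/ceph-23/current/meta -name 'osdmap.13258*'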
Hi,
I was wondering, having a cache pool in front of an RBD pool is all fine
and dandy, but imagine you want to pull backups of all your VMs (or one
of them, or multiple...). Going to the cache for all those reads isn't
only pointless, it'll also potentially fill up the cache and possibly
evict ac
Hi,
I did it; the log files are available here:
https://blondeau.users.greyc.fr/cephlog/debug20/
The OSDs' log files are really big, +/- 80 MB.
After starting osd.20, some other OSDs crashed. I went from 31 OSDs up to
only 16 up. I noticed that after this the number of down+peering PGs decreased from
367 to
Alright, I was finally able to get this resolved without adding another node.
As pointed out, even though I had a config variable that defined the default
replicated size as 2, Ceph for some reason created the default pools (data
and metadata) with a value of 3. After digging through documentat
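For reference, the config option in question is normally set like this (a sketch; it only applies to pools created after the setting is in place, so existing pools still need 'ceph osd pool set <pool> size 2'):

  [global]
      osd pool default size = 2
      osd pool default min_size = 1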
On Wed, Jul 2, 2014 at 12:44 PM, Stefan Priebe wrote:
> Hi Greg,
>
> On 02.07.2014 21:36, Gregory Farnum wrote:
>>
>> On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe
>> wrote:
>>>
>>>
>>> On 02.07.2014 16:00, Gregory Farnum wrote:
>>>
Yeah, it's fighting for attention with a lot of other
Hi Greg,
On 02.07.2014 21:36, Gregory Farnum wrote:
On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe wrote:
On 02.07.2014 16:00, Gregory Farnum wrote:
Yeah, it's fighting for attention with a lot of other urgent stuff. :(
Anyway, even if you can't look up any details or reproduce at this
On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe wrote:
>
> On 02.07.2014 16:00, Gregory Farnum wrote:
>
>> Yeah, it's fighting for attention with a lot of other urgent stuff. :(
>>
>> Anyway, even if you can't look up any details or reproduce at this
>> time, I'm sure you know what shape the clus
On 02.07.2014 16:00, Gregory Farnum wrote:
Yeah, it's fighting for attention with a lot of other urgent stuff. :(
Anyway, even if you can't look up any details or reproduce at this
time, I'm sure you know what shape the cluster was (number of OSDs,
running on SSDs or hard drives, etc), and th
You should add
debug osd = 20
debug filestore = 20
debug ms = 1
to the [osd] section of the ceph.conf and restart the osds. I'd like
all three logs if possible.
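Spelled out as a ceph.conf stanza, that is:

  [osd]
      debug osd = 20
      debug filestore = 20
      debug ms = 1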
Thanks
-Sam
On Wed, Jul 2, 2014 at 5:03 AM, Pierre BLONDEAU
wrote:
> Yes, but how do I do that?
>
> With a command like this?
>
> cep
Hi All,
There is a new bug-fix release of ceph-deploy, the easy deployment tool
for Ceph.
The full list of fixes for this release can be found in the changelog:
http://ceph.com/ceph-deploy/docs/changelog.html#id1
Make sure you update!
-Alfredo
Ok, thanks:
mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx
pool=kvm1, allow rx pool=templates'
seems to be enough.
One more question about RBD layering:
I've made a clone (child) in my pool 'kvm' from my protected snapshot in my
pool 'template' and after launching m
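For the record, caps like these are applied with 'ceph auth caps'; a sketch, where the client name is a placeholder:

  ceph auth caps client.kvm1 mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=kvm1, allow rx pool=templates'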
Christian Balzer writes:
> Read EVERYTHING you can find about crushmap rules.
>
> The quickstart (I think) talks about 3 storage nodes, not OSDs.
>
> Ceph is quite good when it comes to defining failure domains; the default
> is to segregate at the storage node level.
> What good is a replicati
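For illustration, the stock replicated rule in a decompiled crushmap usually looks roughly like this; the 'chooseleaf ... type host' step is what segregates replicas across storage nodes rather than individual OSDs:

  rule replicated_ruleset {
          ruleset 0
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type host
          step emit
  }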
On Wed, 2 Jul 2014 14:25:49 +0000 (UTC) Brian Lovett wrote:
> Christian Balzer writes:
>
>
> > So either make sure these pools really have a replication of 2 by
> > deleting and re-creating them or add a third storage node.
>
>
>
> I just executed "ceph osd pool set {POOL} size 2" for both p
Hi,
> I can't help you with packaging issues, but I can tell you that the
> rbdmap executable got moved to a different package at some point, but
> I believe the official ones handle it properly.
I'll see tonight when doing the other nodes. Maybe it's a result of
using dist-upgrade rather than j
On 07/02/2014 04:08 PM, Andrija Panic wrote:
Hi,
I have an existing Ceph cluster of 3 nodes, version 0.72.2.
I'm in the process of installing Ceph on a 4th node, but now the Ceph version is
0.80.1.
Will running mixed Ceph versions cause problems?
No, but the recommendation is not to have this runn
Gregory Farnum writes:
>
> On Tue, Jul 1, 2014 at 1:26 PM, Brian Lovett
> wrote:
> > "profile": "bobtail",
>
> Okay. That's unusual. What's the oldest client you need to support,
> and what Ceph version are you using? You probably want to set the
> crush tunables to "optimal"; the "bobta
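The tunables are switched with a single command, sketched below; note that changing tunables can trigger significant data movement and requires reasonably recent kernels/clients:

  ceph osd crush tunables optimal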
Christian Balzer writes:
> So either make sure these pools really have a replication of 2 by deleting
> and re-creating them or add a third storage node.
I just executed "ceph osd pool set {POOL} size 2" for both pools. Anything
else I need to do? I still don't see any changes to the status
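To verify the change took effect, something like the following should now report size 2 for both pools (a sketch, assuming the default pool names):

  ceph osd dump | grep ^pool
  ceph osd pool get data size
  ceph osd pool get metadata size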
On Wed, Jul 2, 2014 at 6:18 AM, Sylvain Munaut
wrote:
> Hi,
>
>
> I'm having a couple of issues during this update. On the test cluster
> it went fine, but when running it on production I have a few issues.
> (I guess there is some subtle difference I missed, I updated the test
> one back when emp
Hi,
I have an existing Ceph cluster of 3 nodes, version 0.72.2.
I'm in the process of installing Ceph on a 4th node, but now the Ceph version is
0.80.1.
Will running mixed Ceph versions cause problems?
I intend to upgrade Ceph on the existing 3 nodes anyway.
What are the recommended steps?
Thanks
--
Andrija Pani
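A hedged sketch of the usual order (exact service commands vary by distro and init system): upgrade the packages and restart the monitors first, then the OSDs, then any MDS/radosgw, then clients, confirming the running version of each daemon via its admin socket as you go:

  ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok version
  ceph --admin-daemon /var/run/ceph/ceph-osd.<n>.asok version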
Yeah, it's fighting for attention with a lot of other urgent stuff. :(
Anyway, even if you can't look up any details or reproduce at this
time, I'm sure you know what shape the cluster was (number of OSDs,
running on SSDs or hard drives, etc), and that would be useful
guidance. :)
-Greg
Software E
Hi,
> Did you also recreate the journal?!
It was a journal file and got re-created automatically.
Cheers,
Sylvain
On 01.07.2014 17:48, Sylvain Munaut wrote:
> Hi,
>
>
> As an exercise, I killed an OSD today, just killed the process and
> removed its data directory.
>
> To recreate it, I recreated an empty data dir, then
>
> ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs
>
> (I tried wi
Hi,
I'm having a couple of issues during this update. On the test cluster
it went fine, but when running it on production I'm hitting a few problems.
(I guess there is some subtle difference I missed; I updated the test
cluster back when Emperor came out.)
For reference, I'm on Ubuntu precise, and I use self-b
On 02.07.2014 15:07, Haomai Wang wrote:
> Could you give some perf counters from the rbd client side, such as op latency?
Sorry, I don't have any counters. As this mail went unseen for some days, I
thought nobody had an idea or could help.
Stefan
> On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost A
Could you give some perf counters from the rbd client side, such as op latency?
On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
wrote:
> On 02.07.2014 00:51, Gregory Farnum wrote:
>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
>> wrote:
>>> Hi Greg,
>>>
>>> Am 26.06.
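For anyone looking for those counters later: if the client has an admin socket enabled, op latencies can be read from it. A sketch, assuming a [client] stanza like the one below and the socket name it produces:

  [client]
      admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok

  ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump   # 12345 is an example pid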
On 02.07.2014 00:51, Gregory Farnum wrote:
> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
> wrote:
>> Hi Greg,
>>
>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>> Sorry we let this drop; we've all been busy traveling and things.
>>>
>>> There have been a lot of changes to li
Yes, but how do I do that?
With a command like this?
ceph tell osd.20 injectargs '--debug-osd 20 --debug-filestore 20
--debug-ms 1'
Or by modifying /etc/ceph/ceph.conf? This file is very minimal because I
use udev detection.
Once I have made these changes, do you want the three log files or only
>Subject: Re: [ceph-users] RBD layering
On 07/02/2014 10:08 AM, NEVEU Stephane wrote:
>> Hi all,
>>
>> I'm messing around with "rbd layering" to store some ready-to-use
>> templates (format 2) in a template pool:
>>
>> rbd -p templates ls
>> Ubuntu1404
>> Centos6
>> ...
>>
Hi Loic,
> By restoring the fsid file from the back, presumably. I did not think of that
> when you showed the ceph-osd mkfs line, but it makes sense. This is not the
> ceph fsid.
Yeah, I thought about that and I saw fsid and ceph_fsid, but I wasn't
sure that just replacing the file would be en
Hi Sylvain,
On 02/07/2014 11:13, Sylvain Munaut wrote:
> Ah, I finally found something that looks like an error message:
>
> 2014-07-02 11:07:57.817269 7f0692e3a700 7 mon.a@0(leader).osd e1147
> preprocess_boot from osd.3 10.192.2.70:6807/9702 clashes with existing
> osd: different fsid (ours: e
Just for future reference, you actually do need to remove the OSD even
if you're going to re-add it like 10 sec later ...
$ ceph osd rm 3
removed osd.3
$ ceph osd create
3
Then it works fine.
No need to remove from the crushmap or remove the auth key (you can re-use
both), but you need to remove/add
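Putting it together, the sequence that worked here looks roughly like this (osd id 3 and the monmap path are from this thread; the data dir is the default path):

  ceph osd rm 3
  ceph osd create                                   # hands the freed id (3) back
  mkdir -p /var/lib/ceph/osd/ceph-3
  ceph-osd -c /etc/ceph/ceph.conf -i 3 --monmap /tmp/monmap --mkfs
  # the existing auth key and crush entry can be reused, then start the osd as usual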
On 07/02/2014 10:08 AM, NEVEU Stephane wrote:
Hi all,
I'm messing around with "rbd layering" to store some ready-to-use
templates (format 2) in a template pool:
rbd -p templates ls
Ubuntu1404
Centos6
...
rbd snap create templates/Ubuntu1404@Ubuntu1404-snap-protected
rbd snap
Ah, I finally found something that looks like an error message:
2014-07-02 11:07:57.817269 7f0692e3a700 7 mon.a@0(leader).osd e1147
preprocess_boot from osd.3 10.192.2.70:6807/9702 clashes with existing
osd: different fsid (ours: e44c914a-23e9-4756-9713-166de401dec6 ;
theirs: c1cfff2f-4f2e-4c1d-a
Hello,
I tried the ceph packages from jessie, too. After spending some time searching
Google, I think I found the solution. This will probably work for all package
sources.
You have to create an empty marker file named 'sysvinit' in the directories
below /var/lib/ceph/XXX. Then everything works fine.
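Concretely, that is just an empty file per daemon directory, for example (osd id 0 and mon id 'a' are placeholders):

  touch /var/lib/ceph/osd/ceph-0/sysvinit
  touch /var/lib/ceph/mon/ceph-a/sysvinit
  service ceph start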
Hi all,
I'm messing around with "rbd layering" to store some ready-to-use templates
(format 2) in a template pool:
rbd -p templates ls
Ubuntu1404
Centos6
...
rbd snap create templates/Ubuntu1404@Ubuntu1404-snap-protected
rbd snap protect templates/Ubuntu1404@Ubuntu1404-snap-protected
rbd clone
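For completeness, the clone step cut off above takes a destination pool/image; a sketch with a placeholder target (the 'kvm1' pool name is borrowed from the caps discussion earlier in this digest):

  rbd clone templates/Ubuntu1404@Ubuntu1404-snap-protected kvm1/Ubuntu1404-vm1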
Hi,
> Does OSD 3 show up when you run 'ceph pg dump'? If so, I would look in the logs
> of an OSD participating in the same PG.
It appears at the end but not in any PG; it's now been marked out and
everything was redistributed.
osdstat  kbused  kbavail  kb  hb in  hb out
0  15602352158