[cc'ing ceph-users, fishing for experienced users ;-]
On 03/02/2014 11:55, Federico Simoncelli wrote:
> Hi, do you have any news about the /dev/mapper device for ceph?
> Is it there? What's the output of:
>
> # multipath -ll
root@bm0014:~# rbd --pool ovh create --size 1 foobar
root@bm0014:~# rbd
I have the same behaviour here.
I believe this is somewhat expected since you’re calling “copy”; “clone” is what
does the copy-on-write.
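(For what it's worth, a minimal sketch of the layering workflow that does give you copy-on-write, reusing the "ovh" pool and "foobar" image from the commands above; the snapshot and child names are made up, and the parent has to be a format 2 image:
rbd --pool ovh snap create foobar@base
rbd --pool ovh snap protect foobar@base
rbd clone ovh/foobar@base ovh/foobar-child)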
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
On Sun, Feb 2, 2014 at 12:18 AM, Kei.masumoto wrote:
> Hi,
>
> I am a newbie to ceph; now I am trying to deploy following
> "http://ceph.com/docs/master/start/quick-ceph-deploy/".
> ceph1, ceph2 and ceph3 exist according to the above tutorial. I got a
> WARNING message when I executed ceph-deploy "mon
+ceph-users.
Does anybody have a similar experience with scrubbing / deep-scrubbing?
Thanks,
Guang
On Jan 29, 2014, at 10:35 AM, Guang wrote:
> Glad to see there is some discussion around scrubbing / deep-scrubbing.
>
> We are experiencing the same: scrubbing can affect latency quite a
Hi,
After the command:
"ceph osd reweight-by-utilization 105"
the cluster got stuck in the "249 active+remapped" state.
I have 'crush tunables optimal'.
head -n 6 /tmp/crush.txt
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooselea
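(Not from the original mail, but a few commands that are commonly used to see which PGs are stuck and why; <pgid> is a placeholder:
ceph health detail
ceph pg dump_stuck unclean
ceph pg <pgid> query)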
Hi Alfredo,
Thanks for the reply! I pasted the logs below.
2014-02-01 14:06:33,350 [ceph_deploy.cli][INFO ] Invoked (1.3.4):
/usr/bin/ceph-deploy mon create-initial
2014-02-01 14:06:33,353 [ceph_deploy.mon][DEBUG
In other words,
1. we've got 3 racks (1 replica per rack)
2. in every rack we have 3 hosts
3. every host has 22 OSDs
4. all pg_nums are 2^n for every pool
5. we enabled "crush tunables optimal"
6. on every machine we disabled 4 unused disks (osd out, osd reweight
0 and osd rm; see the sketch below)
Pool ".rgw.buc
On Mon, Feb 3, 2014 at 10:07 AM, Kei.masumoto wrote:
> Hi Alfredo,
>
> Thanks for the reply! I pasted the logs below.
>
>
> 2014-02-01 14:06:33,350 [ceph_deploy.cli][INFO ] Invoked (1.3.4):
> /usr/bin/ceph-deploy mo
Hi,
We use an rbd pool for
and I wonder how I can get
the real size used by my rbd image.
I can get the virtual size with rbd info,
but how can I get the real size used by my rbd image?
--
probeSys - GNU/Linux specialist
website: http://www.probesys.com
_
Hi Alfredo,
Thanks for your reply!
I think I pasted all the logs from ceph.log, but anyway, I re-executed
"ceph-deploy mon create-initial" again.
Does that make sense? It seems like stack traces were added...
-
Hi Dominik,
Can you send a copy of your osdmap?
ceph osd getmap -o /tmp/osdmap
(Can send it off list if the IP addresses are sensitive.) I'm tweaking
osdmaptool to have a --test-map-pgs option to look at this offline.
Thanks!
sage
On Mon, 3 Feb 2014, Dominik Mostowiec wrote:
> In other wo
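(For anyone following along: the map grabbed as above can already be inspected offline with osdmaptool; --print exists today, while --test-map-pgs is the option Sage mentions adding, so treat that line as work in progress:
ceph osd getmap -o /tmp/osdmap
osdmaptool /tmp/osdmap --print
osdmaptool /tmp/osdmap --test-map-pgs --pool 3)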
Sorry, I forgot to tell you.
It can be important.
We did:
ceph osd reweight-by-utilization 105 (as I wrote in my second mail),
and after the cluster got stuck on 'active+remapped' PGs we had to reweight it
back to 1.0 (all reweighted OSDs).
This osdmap is not from an active+clean cluster; rebalancing is in pr
On Mon, 3 Feb 2014, Dominik Mostowiec wrote:
> Sorry, I forgot to tell you.
> It can be important.
> We did:
> ceph osd reweight-by-utilization 105 (as I wrote in my second mail),
> and after the cluster got stuck on 'active+remapped' PGs we had to reweight it
> back to 1.0 (all reweighted OSDs).
> This os
I've been noticing something strange with my RGW federation. I added
some statistics to radosgw-agent to try and get some insight
(https://github.com/ceph/radosgw-agent/pull/7), but that just showed me
that I don't understand how replication works.
When PUT traffic was relatively slow to the
On Mon, Feb 3, 2014 at 10:43 AM, Craig Lewis wrote:
> I've been noticing something strange with my RGW federation. I added some
> statistics to radosgw-agent to try and get some insight
> (https://github.com/ceph/radosgw-agent/pull/7), but that just showed me that
> I don't understand how replic
The Chef recipes support the ceph daemons, but not things that live
inside ceph. You can't manage pools or users (yet). Github has a few
open tickets for managing things that live inside Ceph.
You'll want to browse through the open pull requests. There are a bunch
of minor fixes waiting to
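(Until the cookbooks grow that support, pools and users are usually managed by hand with the CLI; the pool name, pg count and capabilities below are only placeholders:
ceph osd pool create mypool 128 128
ceph auth get-or-create client.myuser mon 'allow r' osd 'allow rwx pool=mypool')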
Hi,
$ rbd diff rbd/toto | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
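(The same idea wrapped into a small shell function so it can be reused for any image; rbd/toto is the image from the question and the function name is made up:
rbd_used() { rbd diff "$1" | awk '{ sum += $2 } END { printf "%.2f MB\n", sum/1024/1024 }'; }
rbd_used rbd/toto
This works because "rbd diff" prints one extent per line with the length in the second column, so summing column 2 gives the allocated bytes.)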
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance
Hey folks,
It's time for our favorite gameshow again, the Ceph Developer
Summit...where everyone is a winner! This quarter the grand prize is a
google hangout date with Sage and the gang to talk about Giant.
Hooray!
http://ceph.com/community/ceph-developer-summit-giant/
As you can see from the f
Hello all,
I noticed ceph has an interactive mode.
I did a quick search and I don't see that tab completion is in there,
but there are some mentions of readline in the source, so I'm
wondering if it is on the horizon.
--ben
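(For context: running the ceph CLI with no subcommand drops you into the interactive prompt, so a session looks roughly like this; the output shown is only illustrative:
$ ceph
ceph> health
HEALTH_OK
ceph> quit)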
On 2/3/14 10:51 , Gregory Farnum wrote:
On Mon, Feb 3, 2014 at 10:43 AM, Craig Lewis wrote:
I've been noticing something strange with my RGW federation. I added some
statistics to radosgw-agent to try and get some insight
(https://github.com/ceph/radosgw-agent/pull/7), but that just showed
Hi,
I spent a couple of hours looking at your map because it did look like there
was something wrong. After some experimentation and adding a bunch of
improvements to osdmaptool to test the distribution, though, I think
everything is working as expected. For pool 3, your map has a standard
devi
Hi folks-
I'm having trouble demonstrating reasonable performance of RBDs. I'm running
Ceph 0.72.2 on Ubuntu 13.04 with the 3.12 kernel. I have four dual-Xeon
servers, each with 24GB RAM, and an Intel 320 SSD for journals and four WD 10K
RPM SAS drives for OSDs, all connected with an LSI 1078
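(Not part of the original mail, but a quick way to sanity-check raw cluster throughput before blaming RBD is rados bench; the pool name is a placeholder, and the objects written with --no-cleanup need to be deleted afterwards:
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq)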
Hello,
On Tue, 4 Feb 2014 01:29:18 + Gruher, Joseph R wrote:
[snip, nice enough test setup]
> I notice in the FIO output that despite the iodepth setting it seems to be
> reporting an IO depth of only 1, which would certainly help explain the poor
> performance, but I'm at a loss as to why. I wonder
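(One common reason for fio reporting an IO depth of 1 is a synchronous ioengine; iodepth only takes effect with an asynchronous engine such as libaio together with direct=1. A sketch along those lines, with the device path and sizes made up:
fio --name=rbd-test --filename=/dev/rbd0 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=60)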
On 02/03/2014 07:29 PM, Gruher, Joseph R wrote:
Hi folks-
I’m having trouble demonstrating reasonable performance of RBDs. I’m
running Ceph 0.72.2 on Ubuntu 13.04 with the 3.12 kernel. I have four
dual-Xeon servers, each with 24GB RAM, and an Intel 320 SSD for journals
and four WD 10K RPM SAS
This release includes another batch of updates for firefly
functionality. Most notably, the cache pool infrastructure now
supports snapshots, the OSD backfill functionality has been generalized
to include multiple targets (necessary for the coming erasure pools),
and there were performance improvem
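(For anyone who wants to poke at the cache pool bits mentioned above, the basic tiering setup looks roughly like this; pool names are placeholders and the commands may still change in the development releases:
ceph osd tier add coldpool hotpool
ceph osd tier cache-mode hotpool writeback
ceph osd tier set-overlay coldpool hotpool)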