Hey Loic,
On 01/30/2014 07:48 PM, Loic Dachary wrote:
Hi Constantinos,
Count me in https://fosdem.org/2014/schedule/event/virtiaas02/ :-)
Great!
If you're in Brussels tomorrow (Friday), you're welcome to join the Ceph meetup
http://www.meetup.com/Ceph-Brussels/ !
Unfortunately, I'm not
Hi,
We observe that we can easily create slow requests with a simple dd on CephFS:
-->
[root@p05153026953834 dd]# dd if=/dev/zero of=xxx bs=4M count=1000
1000+0 records in
1000+0 records out
4194304000 bytes (4.2 GB) copied, 4.27824 s, 980 MB/s
ceph -w:
2014-01-31 14:28:44.009543 osd.450 [WRN] 1
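A rough way to dig into those warnings (just a sketch; osd.450 is simply the OSD named in the warning above, and the default admin socket setup is assumed) is:
ceph health detail | grep -i 'slow request'
# on the node hosting the flagged OSD:
ceph daemon osd.450 dump_ops_in_flight
That should at least show which stage the slow ops are stuck in.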
I have all this information in my ceph.conf file.
My question was more about the executable itself and the fact that it differs
from what I can see documented. It is not in the location mentioned in the
docs (/usr/bin instead of /etc/init.d) and seems to want some parameters which
is also not
Found the problem, I think.
The executable IS in /etc/init.d, but it is called 'ceph-radosgw', not 'radosgw'
as per the documentation.
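For anyone else hitting this: the binary in /usr/bin is the radosgw daemon itself, while the init script is /etc/init.d/ceph-radosgw, so (a sketch, assuming an RHEL-style layout) the service is managed as:
sudo /etc/init.d/ceph-radosgw start
# or equivalently
sudo service ceph-radosgw status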
-----Original Message-----
From: Whittle, Alistair: Investment Bank (LDN)
Sent: Friday, January 31, 2014 3:01 PM
To: 'Derek Yarnell'; ceph-users@lists.ceph.co
Greetings cephalopods!
This year's Ceph Day program is really starting to take shape. While
we already have Ceph Day Frankfurt at the end of Feb and US events
coming together for March and April, we now have an event
that is starting to solidify for our APAC friends as well.
Ceph Da
Once OSDs are created by ceph-deploy, the Ceph udev rules recognise OSDs
by the GPT labels of their partitions. You can see how this happens in
/lib/udev/rules.d/95-ceph-osd.rules (that's the path on an Ubuntu
server).
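If you want to double-check those labels on a given disk, something like this works (a sketch; /dev/sdb and partition 1 are just examples):
# GPT partition type GUID the rule matches on
sgdisk --info=1 /dev/sdb
# the rule file itself
cat /lib/udev/rules.d/95-ceph-osd.rules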
Cheers,
John
On Thu, Jan 30, 2014 at 1:07 PM, Daniel Schwager
wrote:
> Hi,
>
> just
Hi, does anyone know when the v0.75 packages will be available for Ubuntu Raring?
Hi list,
I'm trying to play with Ceph, but I can't get the cluster to reach a
clean state.
How I did the installation:
ceph-deploy new ceph-a1
ceph-deploy install ceph-a1
ceph-deploy mon create-initial
ceph-deploy mon create ceph-a1
ceph-deploy gatherkeys ceph-a1
ceph-deploy disk zap ceph-a1:vd
Actually, it is. We took the single-host getting-started guide out, because
nobody would really deploy a distributed system like Ceph for production on a
single host. The problem is that the default CRUSH rule is set to the host
level, not the osd level.
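For anyone who really does want a single-node test setup, the usual workaround (a sketch, one of several ways) is to tell CRUSH to place replicas across OSDs instead of hosts before any OSDs are created, e.g. in ceph.conf:
[global]
    osd crush chooseleaf type = 0
(0 picks the osd level; the default of 1 picks the host level.)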
Note, I think ceph-deploy mon create-initial will d
Thank you,
That did it.
On Sat, Feb 1, 2014 at 1:15 AM, John Wilkins wrote:
> Actually, it is. We took the single-host getting-started guide out, because nobody
> would really deploy a distributed system like Ceph for production on a single
> host. The problem is that the default CRUSH rule is set to t
Hi,
Changing pg_num for .rgw.buckets to a power of 2 and 'crush tunables
optimal' didn't help :(
Graph: http://dysk.onet.pl/link/BZ968
What can I do with this?
Something is broken, because before the pg_num increase the cluster
reported 10T of data; now it is:
18751 GB data, 34612 GB used, 20497 GB / 55110 G
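It would help to see the raw pool numbers next to that graph (a sketch; the pool name is taken from above):
ceph df
rados df
ceph osd pool get .rgw.buckets pg_num
ceph osd pool get .rgw.buckets pgp_num
One thing worth checking: pgp_num usually needs to be raised along with pg_num, otherwise the new placement groups are created but data isn't actually rebalanced; and while any backfilling is still in progress the reported stats can be temporarily off.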