[ceph-users] Create file bigger than osd

2015-01-19 Thread Fabian Zimmermann
Hi,

if I understand the PG system correctly, it's impossible to create a file/volume that is bigger than the smallest OSD of a PG, isn't it? What could I do to get rid of this limitation?

Thanks,
Fabian

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Luis Periquito
On Mon, Jan 19, 2015 at 11:38 AM, Fabian Zimmermann wrote:
> Hi,
>
> On 19.01.15 at 12:24, Luis Periquito wrote:
> > AFAIK there is no such limitation.
> >
> > When you create a file, that file is split into several objects (4MB IIRC
> > each by default), and those objects will get mapped to a PG

Re: [ceph-users] radosgw-agent failed to parse

2015-01-19 Thread ghislain.chevalier
Hi all,

Context: Ubuntu 14.04 LTS, firefly 0.80.7.

I recently encountered the same issue as described below. Maybe I missed something between July and January… I found that the http request was malformed by /usr/lib/python2.7/dist-packages/radosgw_agent/client.py. I did the changes below:

#    ur

[ceph-users] RBD backup and snapshot

2015-01-19 Thread Luis Periquito
Hi,

I'm currently creating a business case around Ceph RBD, and one of the issues revolves around backup. After having a look at http://ceph.com/dev-notes/incremental-snapshots-with-rbd/ I was thinking of creating hourly snapshots (corporate policy) on the original cluster (replicated pool), and
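
For reference, the incremental scheme from that post boils down to export-diff/import-diff between consecutive snapshots. A minimal sketch - pool, image and snapshot names below are made up:

$ rbd snap create rbd/vol01@hourly-12          # take the new hourly snapshot on the source cluster
$ rbd export-diff --from-snap hourly-11 rbd/vol01@hourly-12 vol01-12.diff   # export only the delta since the previous snapshot
$ rbd import-diff vol01-12.diff backup/vol01   # replay the delta onto the copy in the backup cluster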

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Luis Periquito
> I'm just trying to debug a situation which filled my cluster/osds tonight.
>
> We are currently running a small testcluster:
>
> 3 mon's
> 2 mds (active + standby)
> 2 nodes = 2x12x410G HDD/OSDs
>
> A user created a 500G rbd-volume. First I thought the 500G rbd may have
> caused the osd to fill

Re: [ceph-users] Cache data consistency among multiple RGW instances

2015-01-19 Thread Gregory Farnum
On Sun, Jan 18, 2015 at 6:40 PM, ZHOU Yuan wrote:
> Hi list,
>
> I'm trying to understand the RGW cache consistency model. My Ceph
> cluster has multiple RGW instances with HAProxy as the load balancer.
> HAProxy would choose one RGW instance to serve the request (with
> round-robin).
> The questio

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Fabian Zimmermann
Hi,

On 19.01.15 at 12:47, Luis Periquito wrote:
> Each object will get mapped to a different PG. The size of an OSD will
> affect its weight and the number of PGs assigned to it, so a smaller OSD
> will get less PGs.

Great! Good to know, thanks a lot!

> And BTW, with a replica of 3, a 2TB will ne
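
For context, the CRUSH weight is what ties OSD size to PG count, and it can be inspected or adjusted directly - the OSD id and weight below are placeholders:

$ ceph osd tree                       # the weight column roughly tracks each OSD's capacity in TB
$ ceph osd crush reweight osd.3 0.41  # e.g. weight an OSD in proportion to a 410 GB disk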

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Luis Periquito
AFAIK there is no such limitation.

When you create a file, that file is split into several objects (4MB IIRC each by default), and those objects will get mapped to a PG - http://ceph.com/docs/master/rados/operations/placement-groups/

On Mon, Jan 19, 2015 at 11:15 AM, Fabian Zimmermann wrote:
>
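
A quick way to see that striping in practice - image and object names below are placeholders:

$ rbd create rbd/test --size 1024          # 1 GB image, striped into 4 MB objects by default
$ rbd info rbd/test                        # shows the object size (order 22 = 4 MB) and the block_name_prefix
$ ceph osd map rbd rb.0.1234.000000000005  # maps one of those objects to its PG and the OSDs serving it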

Re: [ceph-users] Is it possible to compile and use ceph with Raspberry Pi single-board computers?

2015-01-19 Thread Joao Eduardo Luis
On 01/19/2015 02:54 PM, Gregory Farnum wrote:
> Joao has done it in the past so it's definitely possible, but I confess I
> don't know what if anything he had to hack up to make it work or what's
> changed since then. ARMv6 is definitely not something we worry about when
> adding dependencies. :/
> -Greg

Re: [ceph-users] Create file bigger than osd

2015-01-19 Thread Fabian Zimmermann
Hi,

On 19.01.15 at 13:08, Luis Periquito wrote:
> What is the current issue? Cluster near-full? Cluster too-full? Can you
> send the output of ceph -s?

    cluster 0d75b6f9-83fb-4287-aa01-59962bbff4ad
     health HEALTH_ERR 1 full osd(s); 1 near full osd(s)
     monmap e1: 3 mons at {ceph0=10.
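
For a full/near-full OSD like this, the usual first steps are roughly the following - the OSD id is a placeholder:

$ ceph health detail        # names the full and near-full OSDs
$ ceph df                   # per-pool usage, to see what is actually eating the space
$ ceph osd reweight 11 0.8  # temporarily push data off the full OSD until real capacity is added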

Re: [ceph-users] Unexplainable slow request

2015-01-19 Thread Christian Balzer
Hello,

Sorry for the thread necromancy, but this just happened again.

Still the exact same cluster as in the original thread (0.80.7). Same OSD, same behavior. Slow requests that never returned and any new requests to that OSD also went into that state until the OSD was restarted. Causing of co
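
For what it's worth, when a single OSD gets wedged like this the admin socket is usually the quickest way to see where the ops are stuck - osd.12 below is a placeholder:

$ ceph daemon osd.12 dump_ops_in_flight  # the stuck requests and which stage they are waiting in
$ ceph daemon osd.12 dump_historic_ops   # recently completed slow ops with per-step timings
$ ceph osd perf                          # commit/apply latency per OSD, to spot a sick disk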

[ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-19 Thread Jiri Kanicky
Hi,

I just would like to clarify whether I should expect degraded PGs with 11 OSDs in one node. I am not sure if a setup with 3 MON nodes and 1 OSD node (11 disks) allows me to have a healthy cluster.

$ sudo ceph osd pool create test 512
pool 'test' created

$ sudo ceph status
    cluster 4e77327a-118d-45

Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-19 Thread Lindsay Mathieson
On 20 January 2015 at 14:10, Jiri Kanicky wrote:
> Hi,
>
> BTW, is there a way to achieve redundancy over multiple OSDs in one
> box by changing the CRUSH map?

I asked that same question myself a few weeks back :) The answer was yes - but fiddly, and why would you do that? It's kinda breakin
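
For completeness, the "fiddly" part is just telling CRUSH to pick replicas per OSD instead of per host. A rough sketch, with file names made up:

# Either set this in ceph.conf before the OSDs are created:
#   [global]
#   osd crush chooseleaf type = 0   # 0 = osd, instead of the default host
# or edit the existing CRUSH rule so it reads "step chooseleaf firstn 0 type osd":
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt   # edit crush.txt, then recompile and inject it
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new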

Re: [ceph-users] Cache data consistency among multiple RGW instances

2015-01-19 Thread Gregory Farnum
You don't need to list them anywhere for this to work. They set up the necessary communication on their own by making use of watch-notify.

On Mon, Jan 19, 2015 at 6:55 PM, ZHOU Yuan wrote:
> Thanks Greg, that's an awesome feature I missed. I find some
> explanation on the watch-notify thing:
> http
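
One way to see that machinery from the outside - the pool and object names below are the usual RGW control-pool defaults, so treat them as assumptions:

$ rados -p .rgw.control ls                     # the control objects the gateways notify on (notify.0 .. notify.7)
$ rados -p .rgw.control listwatchers notify.0  # lists one watcher per running radosgw instance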

Re: [ceph-users] PGs degraded with 3 MONs and 1 OSD node

2015-01-19 Thread Jiri Kanicky
Hi,

I am just curious. This is just a lab environment and we are short on hardware :). We will have more hardware later, but right now this is all I have. Monitors are VMs. Anyway, we will have to survive with this somehow :).

Thanks,
Jiri

On 20/01/2015 15:33, Lindsay Mathieson wrote:
> On 20