Hi,
If I understand the PG system correctly, it's impossible to create a
file/volume that is bigger than the smallest OSD in a PG, isn't it?
What could I do to get around this limitation?
Thanks,
Fabian
On Mon, Jan 19, 2015 at 11:38 AM, Fabian Zimmermann wrote:
> Hi,
>
> On 19.01.15 at 12:24, Luis Periquito wrote:
> > AFAIK there is no such limitation.
> >
> > When you create a file, that file is split into several objects (4MB IIRC
> > each by default), and those objects will get mapped to a PG
Hi all,
Context: Ubuntu 14.04 LTS, firefly 0.80.7
I recently encountered the same issue as described below.
Maybe I missed something between July and January…
I found that the HTTP request was being malformed by
/usr/lib/python2.7/dist-packages/radosgw_agent/client.py
I made the changes below
# ur
Hi,
I'm currently creating a business case around Ceph RBD, and one of the
issues revolves around backup.
After having a look at
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/ I was thinking of
creating hourly snapshots (corporate policy) on the original cluster
(replicated pool), and
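For reference, a minimal sketch of what that incremental export/import flow
can look like (the pool, image and snapshot names below are only placeholders):

$ rbd snap create rbd/vm-disk@hourly-01
$ rbd export-diff rbd/vm-disk@hourly-01 /backup/vm-disk-hourly-01.diff
# an hour later, only the delta since the previous snapshot is exported
$ rbd snap create rbd/vm-disk@hourly-02
$ rbd export-diff --from-snap hourly-01 rbd/vm-disk@hourly-02 /backup/vm-disk-hourly-02.diff
# the diffs are then replayed, in order, into an image on the backup cluster
$ rbd import-diff /backup/vm-disk-hourly-01.diff rbd/vm-disk

The diffs can also be piped straight into "rbd import-diff -" over ssh instead
of going through intermediate files.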
>
> I'm just trying to debug a situation which filled my cluster/osds tonight.
>
> We are currently running a small test cluster:
>
> 3 mons
> 2 MDSs (active + standby)
> 2 nodes = 2 x 12 x 410 GB HDDs/OSDs
>
> A user created a 500 GB RBD volume. At first I thought the 500 GB RBD may have
> caused the OSD to fill
On Sun, Jan 18, 2015 at 6:40 PM, ZHOU Yuan wrote:
> Hi list,
>
> I'm trying to understand the RGW cache consistency model. My Ceph
> cluster has multiple RGW instances with HAProxy as the load balancer.
> HAProxy would choose one RGW instance to serve the request (with
> round-robin).
> The questio
Hi,
On 19.01.15 at 12:47, Luis Periquito wrote:
> Each object will get mapped to a different PG. The size of an OSD will
> affect its weight and the number of PGs assigned to it, so a smaller OSD
> will get fewer PGs.
Great! Good to know, thanks a lot!
> And BTW, with a replica of 3, a 2TB will ne
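If you want to see how that plays out on a given cluster, the CRUSH weights
(which normally track raw capacity, roughly in TB) are visible with a plain
command, and a disproportionately full OSD can be nudged down; the OSD id and
weight below are only illustrative values:

$ ceph osd tree
# give a small or overly full OSD a lower CRUSH weight so it receives fewer PGs
$ ceph osd crush reweight osd.12 0.35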
AFAIK there is no such limitation.
When you create a file, that file is split into several objects (4MB IIRC
each by default), and those objects will get mapped to a PG -
http://ceph.com/docs/master/rados/operations/placement-groups/
On Mon, Jan 19, 2015 at 11:15 AM, Fabian Zimmermann wrote:
>
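Two commands make this easy to see for yourself (the image, pool and object
names below are placeholders):

# "order 22" in the output means the image is striped over 2^22-byte (4 MB) objects
$ rbd info rbd/myimage
# show which PG, and therefore which OSDs, a given RADOS object maps to
$ ceph osd map rbd rbd_data.1234.0000000000000000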
On 01/19/2015 02:54 PM, Gregory Farnum wrote:
Joao has done it in the past so it's definitely possible, but I
confess I don't know what if anything he had to hack up to make it
work or what's changed since then. ARMv6 is definitely not something
we worry about when adding dependencies. :/
-Greg
Hi,
On 19.01.15 at 13:08, Luis Periquito wrote:
> What is the current issue? Cluster near-full? Cluster too-full? Can you
> send the output of ceph -s?
cluster 0d75b6f9-83fb-4287-aa01-59962bbff4ad
health HEALTH_ERR 1 full osd(s); 1 near full osd(s)
monmap e1: 3 mons at
{ceph0=10.
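When this happens, the first step is usually to find out which OSD tripped the
threshold and to shift some data off it; the OSD id and weight below are just
placeholders:

$ ceph health detail     # names the full / near-full OSDs
$ ceph df                # overall and per-pool utilisation
# temporarily lower the reweight of the full OSD so some PGs backfill elsewhere
$ ceph osd reweight 5 0.85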
Hello,
Sorry for the thread necromancy, but this just happened again.
Still the exact same cluster as in the original thread (0.80.7).
Same OSD, same behavior.
Slow requests that never returned and any new requests to that OSD also
went into that state until the OSD was restarted.
Causing of co
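If it happens again, the admin socket on the node hosting that OSD may show
what the stuck requests are actually waiting on (osd.3 is a placeholder id):

$ ceph daemon osd.3 dump_ops_in_flight     # requests currently blocked in the OSD
$ ceph daemon osd.3 dump_historic_ops      # recently completed slow requests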
Hi,
I would just like to clarify whether I should expect degraded PGs with 11 OSDs
in one node. I am not sure if a setup with 3 MON nodes and 1 OSD node (11
disks) allows me to have a healthy cluster.
$ sudo ceph osd pool create test 512
pool 'test' created
$ sudo ceph status
cluster 4e77327a-118d-45
On 20 January 2015 at 14:10, Jiri Kanicky wrote:
> Hi,
>
> BTW, is there a way to achieve redundancy over multiple OSDs in one
> box by changing the CRUSH map?
>
I asked that same question myself a few weeks back :)
The answer was yes - but it's fiddly, and why would you do that?
It's kinda breakin
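For completeness, one way to do it without hand-editing the CRUSH map is to
create a rule whose failure domain is "osd" instead of "host" and point the
pool at it (the rule and pool names here are placeholders):

$ ceph osd crush rule create-simple replicate-by-osd default osd
$ ceph osd crush rule dump              # note the ruleset id of the new rule
$ ceph osd pool set test crush_ruleset 1

Just keep in mind that all replicas can then land on the same host, so losing
that one box loses every copy.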
You don't need to list them anywhere for this to work. They set up the
necessary communication on their own by making use of watch-notify.
On Mon, Jan 19, 2015 at 6:55 PM ZHOU Yuan wrote:
> Thanks Greg, that's an awesome feature I missed. I found some
> explanation of the watch-notify mechanism:
> http
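The mechanism can also be poked at directly with the rados tool, which is a
handy way to see notifications flow (pool and object names are placeholders,
the object is assumed to already exist, and the watch/notify subcommands may
not be present in very old rados builds):

# terminal 1: register a watch on the object
$ rados -p rbd watch my-object
# terminal 2: send a notification; the watcher in terminal 1 receives it
$ rados -p rbd notify my-object "invalidate"
# list the clients currently watching the object
$ rados -p rbd listwatchers my-object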
Hi.
I am just curious. This is just a lab environment and we are short on
hardware :). We will have more hardware later, but right now this is all
I have. Monitors are VMs.
Anyway, we will have to survive with this somehow :).
Thanks
Jiri
On 20/01/2015 15:33, Lindsay Mathieson wrote:
On 20