Hi
Correct me if I am wrong: when uploading a file to RGW, it is split
into stripe units, and these stripe units are mapped to RADOS objects. Those
RADOS objects are files on the OSD filestore.
What's going on under the hood when I delete an RGW object? If a RADOS object
consists of multiple stripe units belon
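For anyone who wants to see that mapping on their own cluster, here is a rough sketch using radosgw-admin and rados; the bucket, object and marker names are placeholders, and the data pool name is only the Jewel-era default (it depends on your zone configuration):

  # Show the RGW object's manifest: head object, stripe size, tail objects, marker prefix
  radosgw-admin object stat --bucket=mybucket --object=bigfile.bin
  # List the RADOS objects backing that RGW object
  rados -p default.rgw.buckets.data ls | grep <marker_prefix>
  # On delete, the head is removed but the tail objects are handed to the RGW
  # garbage collector and disappear later:
  radosgw-admin gc list
  radosgw-admin gc process   # run a GC pass by hand instead of waiting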
On Tue, May 23, 2017 at 6:28 AM wrote:
> Hi Wido,
>
> I see your point. I would expect OMAPs to grow with the number of objects
> but multiple OSDs getting to multiple tens of GBs for their omaps seems
> excessive. I find it difficult to believe that not sharding the index for a
> bucket of 500k
Hi, thanks for the explanation:-)
On the other hand, I wonder if the following scenario could happen:
A program in a virtual machine that uses "libaio" to access a file
continuously submits "write" requests to the underlying file system, which
translates them into RBD requests. Say,
On 05/22/2017 07:36 AM, Patrick McGarry wrote:
> I'm writing to you today to share that my time in the Ceph community
> is coming to an end this year.
You'll leave a big hole, Patrick. It's been great having you along for
the ride.
--
Dan Mick
Red Hat, Inc.
Ceph docs: http://ceph.com/docs
I have a 20 OSD cluster - "my first ceph cluster" - that has another 400 OSDs
en route.
I was "beating up" on the cluster, and had been writing to a 6TB file in
CephFS for several hours, during which I changed the CRUSH map to better
match my environment, generating a bunch of recovery IO. After about 5.
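(If the recovery traffic itself turns out to be the problem here, one common way to damp its impact on client IO on a Jewel-era cluster is to throttle backfill/recovery at runtime; the values below are only illustrative, not a recommendation for this specific cluster.)

  # Watch recovery/backfill progress alongside client IO
  ceph -s
  # Temporarily lower recovery concurrency and priority (runtime-only change)
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'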
Hi,
Another newbie question. Do people using radosgw mirror their buckets
to AWS S3 or compatible services as a backup? We're setting up a
small cluster and are thinking of ways to mitigate total disaster.
What do people recommend?
Thanks,
Sean Purdy
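One generic approach (not necessarily what the list recommends, and the remote names, endpoint and buckets below are made up) is a periodic one-way sync with an S3-capable tool such as rclone, configured with one remote for the local RGW endpoint and one for AWS:

  # rclone.conf (path depends on rclone version) -- two S3 remotes
  # [rgw]
  # type = s3
  # endpoint = http://rgw.example.com:7480
  # access_key_id = <local access key>
  # secret_access_key = <local secret>
  #
  # [aws]
  # type = s3
  # region = us-east-1
  # access_key_id = <aws access key>
  # secret_access_key = <aws secret>

  # One-way copy of a bucket to AWS, e.g. run from cron
  rclone sync rgw:mybucket aws:mybucket-backup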
On Tue, May 23, 2017 at 4:27 PM, James Wilkins
wrote:
> Thanks :-)
>
> If we are seeing this rise unnaturally high (e.g. >140K, which corresponds
> with slow access to CephFS), do you have any recommendations of where we
> should be looking? Is this related to the messenger service and its
> d
On 05/23/2017 04:04 PM, Sean Purdy wrote:
> Hi,
> This is my first ceph installation. It seems to tick our boxes. Will be
> using it as an object store with radosgw.
> I notice that ceph-mon uses zookeeper behind the scenes. Is there a way to
> point ceph-mon at an existing zookeeper cluster, using a
Thanks :-)
If we are seeing this rise unnaturally high (e.g. >140K, which corresponds with
slow access to CephFS), do you have any recommendations of where we should be
looking? Is this related to the messenger service and its dispatch/throttle
bytes?
-Original Message-
From: John S
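For reference, the counter and the related messenger throttle can both be read off the admin socket; the daemon name below is a placeholder for whatever your MDS is called:

  # Dispatch queue length as reported by the MDS perf counters
  ceph daemon mds.<name> perf dump mds | grep '"q"'
  # Current messenger dispatch throttle, in bytes, for the same daemon
  ceph daemon mds.<name> config get ms_dispatch_throttle_bytes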
On Tue, May 23, 2017 at 4:04 PM, Sean Purdy wrote:
> Hi,
>
>
> This is my first ceph installation. It seems to tick our boxes. Will be
> using it as an object store with radosgw.
>
> I notice that ceph-mon uses zookeeper behind the scenes. Is there a way to
> point ceph-mon at an existing zooke
Hi,
This is my first ceph installation. It seems to tick our boxes. Will be
using it as an object store with radosgw.
I notice that ceph-mon uses zookeeper behind the scenes. Is there a way to
point ceph-mon at an existing zookeeper cluster, using a zookeeper chroot?
Alternatively, might cep
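For what it's worth, a quick sanity check on that assumption is to look at the monitor binary's shared-library dependencies (this assumes ceph-mon is on $PATH):

  # A ZooKeeper dependency would show up as a libzookeeper entry here
  ldd "$(command -v ceph-mon)" | grep -i zookeeper || echo "no zookeeper library linked"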
Sorry to see you go, Patrick. You've been at this as long as I have. Best of
luck to you!
On Tue, May 23, 2017 at 6:01 AM, Wido den Hollander wrote:
> Hey Patrick,
>
> Thanks for all your work in the last 5 years! Sad to see you leave, but
> again, your effort is very much appreciated!
>
> Wido
>
Hi Wido,
I see your point. I would expect OMAPs to grow with the number of objects but
multiple OSDs getting to multiple tens of GBs for their omaps seems excessive.
I find it difficult to believe that not sharding the index for a bucket of 500k
objects in RGW causes the 10 largest OSD omaps to
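In case it helps the investigation, here is a rough way to tie omap size back to bucket indexes; the pool name is the Jewel default, the bucket/marker names are placeholders, and the last command assumes filestore OSDs:

  # Object count for the bucket
  radosgw-admin bucket stats --bucket=mybucket
  # Index entries in one (unsharded) bucket index object -- one omap key per RGW object
  rados -p default.rgw.buckets.index listomapkeys .dir.<bucket_marker> | wc -l
  # On an OSD host: size of the omap (leveldb) directory itself
  du -sh /var/lib/ceph/osd/ceph-*/current/omap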
Hey Patrick,
Thanks for all your work in the last 5 years! Sad to see you leave, but again,
your effort is very much appreciated!
Wido
> On 22 May 2017 at 16:36, Patrick McGarry wrote:
>
>
> Hey cephers,
>
> I'm writing to you today to share that my time in the Ceph community
> is coming t
> On 23 May 2017 at 13:01, george.vasilaka...@stfc.ac.uk wrote:
>
>
> > Your RGW buckets, how many objects in them, and do they have the index
> > sharded?
>
> > I know we have some very large & old buckets (10M+ RGW objects in a
> > single bucket), with correspondingly large OMAPs wherever th
On Tue, May 23, 2017 at 1:42 PM, James Wilkins
wrote:
> Quick question on CephFS/MDS but I can’t find this documented (apologies if
> it is)
>
>
>
> What does the q: in a ceph daemon perf dump mds
> represent?
mds]$ git grep "\"q\""
MDSRank.cc:mds_plb.add_u64(l_mds_dispatch_queue_l
Quick question on CephFS/MDS but I can't find this documented (apologies if it
is)
What does the q: in a ceph daemon perf dump mds represent?
[root@hp3-ceph-mds2 ~]# ceph daemon
/var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok perf dump mds
{
"mds": {
"requ
> Your RGW buckets, how many objects in them, and do they have the index
> sharded?
> I know we have some very large & old buckets (10M+ RGW objects in a
> single bucket), with correspondingly large OMAPs wherever that bucket
> index is living (sufficiently large that trying to list the entire thin
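A quick way to check whether an existing bucket's index is sharded (bucket name is a placeholder; a num_shards of 0 means the whole index lives in a single omap object):

  # Find the bucket_id / marker
  radosgw-admin metadata get bucket:bigbucket
  # Inspect the bucket instance and look at num_shards
  radosgw-admin metadata get bucket.instance:bigbucket:<bucket_id> | grep num_shards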
Hi Alfredo,
This is solved: the listening ports were blocked by the firewall in my setup;
it works now after allowing the monitor/OSD ports.
Thanks,
Shambhu
-Original Message-
From: Shambhu Rajak
Sent: Tuesday, May 23, 2017 10:33 AM
To: 'Alfredo Deza'
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Some
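For anyone hitting the same thing, a firewalld sketch of the usual ports (6789/tcp for the monitors, 6800-7300/tcp for OSD/MDS daemons by default; adjust if ms_bind_port_min/max have been changed):

  firewall-cmd --zone=public --add-port=6789/tcp --permanent        # ceph-mon
  firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # OSDs / MDS
  firewall-cmd --reload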
Hi Ben!
Thanks for your advice. I had included the names of our gateways but omitted
the external name of the service itself. Now everything is working again.
And yes, this change is worth a note :-)
Best regards,
Ingo
From: Ben Hines [mailto:bhi...@gmail.com]
Sent: Tuesday, May 23