> On 5 April 2017 at 8:14, SJ Zhu wrote:
>
>
> Wido, ping?
>
This might take a while! Has to go through a few hops for this to get fixed.
It's on my radar!
Wido
> On Sat, Apr 1, 2017 at 8:40 PM, SJ Zhu wrote:
> > On Sat, Apr 1, 2017 at 8:10 PM, Wido den Hollander wrote:
> >> Great! Very
Hello,
We have an issue when writing to Ceph: from time to time the write operation
seems to hang for a few seconds.
We've seen https://bugzilla.redhat.com/show_bug.cgi?id=1389503, where it is said
that when the qemu process reaches the max open files limit, "the guest OS
shou
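A rough way to check whether that limit is actually being hit (a sketch,
assuming a single VM; <qemu-pid> is the PID of the qemu process for the
affected guest):
  grep 'Max open files' /proc/<qemu-pid>/limits
  ls /proc/<qemu-pid>/fd | wc -l
If the second number is close to the first, the bug above would be a plausible
explanation.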
Adding Patrick who might be the best person.
Regards,
On Wed, Apr 5, 2017 at 6:16 PM, Wido den Hollander wrote:
>
>> On 5 April 2017 at 8:14, SJ Zhu wrote:
>>
>>
>> Wido, ping?
>>
>
> This might take a while! Has to go through a few hops for this to get fixed.
>
> It's on my radar!
>
> Wido
>
Hello,
Env: 11.2.0
bluestore, EC 4+1, RHEL 7.2
We are facing an issue where one OSD keeps booting again and again, which is
driving the cluster crazy :(. As you can see, one PG went into an inconsistent
state; we tried to repair that particular PG, but its primary OSD went down.
After some time we found some tr
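For reference, the inspect/repair sequence we have been trying is roughly the
standard one (the pg id below is a placeholder):
  ceph health detail
  rados list-inconsistent-obj <pgid> --format=json-pretty
  ceph pg repair <pgid>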
To try and make my life easier, do you have such a script already written?
Also, has the source of the orphans been found, or will they continue
to happen after the upgrade to the newer version?
thanks,
On Mon, Apr 3, 2017 at 4:59 PM, Yehuda Sadeh-Weinraub wrote:
> On Mon, Apr 3, 2017 at 1:32 AM, L
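If no ready-made script exists, my fallback plan is to drive it with the
built-in orphan search, roughly like this (pool name and job id are just
placeholders), and then delete whatever the scan reports:
  radosgw-admin orphans find --pool=<data-pool> --job-id=orphans-scan-1
  radosgw-admin orphans finish --job-id=orphans-scan-1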
This is a big development for us. I have not heard of this option either. I
am excited to play with this feature and the implications it may have in
improving RBD reads in our multi-datacenter RBD pools.
Just to clarify the following options:
"rbd localize parent reads = true" and "crush location
A new set of 'radosgw-admin global quota' commands were added for this,
which we'll backport to kraken and jewel. You can view the updated
documentation here:
http://docs.ceph.com/docs/master/radosgw/admin/#reading-writing-global-quotas
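For example, setting and enabling a global bucket quota looks roughly like this
(the values are placeholders; see the page above for the full set of flags):
  radosgw-admin global quota set --quota-scope=bucket --max-objects=1024
  radosgw-admin global quota enable --quota-scope=bucket
  radosgw-admin global quota get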
Thanks again for pointing this out,
Casey
On 04/03/2017
Hey cephers,
Just a friendly reminder that the Ceph Developer Monthly call will be
at 12:30 EDT / 16:30 UTC. We already have a few things on the docket
for discussion, but if you are doing any Ceph feature or backport
work, please add your item to the list to be discussed:
http://wiki.ceph.com/CD
Another thing that I would love to ask and clarify: would this work
for OpenStack VMs that use Cinder, instead of VMs that use direct
integration between Nova and Ceph?
We use Cinder bootable volumes and normal Cinder-attached volumes for VMs.
thx
On Wed, Apr 5, 2017 at 10:36 AM, Wes Dilling
Yes, it's a general solution for any read-only parent images. This
will *not* help localize reads for any portions of your image that
have already been copied-on-written from the parent image down to the
cloned image (i.e. the Cinder volume or Nova disk).
On Wed, Apr 5, 2017 at 10:25 AM, Alejandro
Ok, I have added the following to ceph dns:
cn    IN    CNAME    mirrors.ustc.edu.cn.
Wido, I haven't added this to the website, doc, or anywhere else for
informational purposes, but the mechanics should be live and accepting
traffic in a bit here. Let me know if you guys need anything else.
Than
On Wed, Apr 5, 2017 at 10:55 PM, Patrick McGarry wrote:
> Ok, I have added the following to ceph dns:
>
> cn    IN    CNAME    mirrors.ustc.edu.cn.
Great, thanks.
Besides, I have enabled HTTPS for https://cn.ceph.com just now.
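A quick way to verify the record once it has propagated (plain DNS, nothing
Ceph-specific):
  dig +short cn.ceph.com CNAME
which should return mirrors.ustc.edu.cn.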
--
Regards,
Shengjing Zhu
Folks,
Trying to test the S3 object GW. When I upload any files, the space is shown
as used (that's normal behavior), but when the object is deleted it still shows
as used (I don't understand this). Example below.
Currently there are no files in the entire S3 bucket, but it still shows space
used. An
Hi all,
I wanted to ask if anybody is using librbd (user mode lib) with rbd-nbd
(kernel module) on their Ceph clients. We're currently using krbd, but that
doesn't support some of the features (such as rbd mirroring). So, I wanted
to check if anybody has experience running with nbd + librbd on th
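For what it's worth, the workflow we have in mind is just the standard map /
use / unmap cycle, something like (pool and image names are placeholders):
  rbd-nbd map rbd/myimage      # prints the /dev/nbdX device it attached
  mount /dev/nbd0 /mnt/myimage # assuming the image already has a filesystem
  rbd-nbd unmap /dev/nbd0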
Ceph's RadosGW uses garbage collection by default.
Try running 'radosgw-admin gc list' to list the objects to be garbage
collected, or 'radosgw-admin gc process' to trigger them to be deleted.
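For example, to see everything queued (including objects whose grace period has
not yet expired) and then force a collection pass:
  radosgw-admin gc list --include-all
  radosgw-admin gc process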
-Ben
On Wed, Apr 5, 2017 at 12:15 PM, Deepak Naidu wrote:
> Folks,
>
>
>
> Trying to test the S3 obje
Thanks Ben.
Is there a tuning param I need to use to speed up the process?
"rgw_gc_max_objs": "32",
"rgw_gc_obj_min_wait": "7200",
"rgw_gc_processor_max_time": "3600",
"rgw_gc_processor_period": "3600",
--
Deepak
From: Ben Hines [mailto:bhi...@gmail.com]
Sent: Wednesday, Apri
> Just to follow-up on this: we have yet experienced a clock skew since we
> starting using chrony. Just three days ago, I know, bit still...
did you mean "we have not yet..."?
> Perhaps you should try it too, and report if it (seems to) work better
> for you as well.
>
> But again, just three
Hi,
we have 21 hosts, each has 12 disks (4T sata), no SSD as journal or
cache tier.
so the total OSD number is 21x12=252.
there are three separate hosts for monitor nodes.
network is 10Gbps. replicas are 3.
under this setup, we can get only 3000+ IOPS for random writes for the whole
cluster. test
what I meant is, when the total IOPS reaches 3000+, the total cluster
gets very slow. So any ideas? Thanks.
On 2017/4/6 9:51, PYH wrote:
> Hi,
> we have 21 hosts, each has 12 disks (4T sata), no SSD as journal or
> cache tier.
> so the total OSD number is 21x12=252.
> there are three separate hosts fo
Hello,
first and foremost, do yourself and everybody else a favor by thoroughly
searching the net and thus the ML archives.
This kind of question has come up and been answered countless times.
On Thu, 6 Apr 2017 09:59:10 +0800 PYH wrote:
> what I meant is, when the total IOPS reaches 3000+, the t
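The usual back-of-the-envelope estimate (assuming ~100 IOPS per 7200 RPM SATA
drive, 3x replication, and roughly 2x write amplification with the journal on
the same disk) is:
  252 drives x 100 IOPS / (3 x 2) ≈ 4200 IOPS
so 3000+ sustained random-write IOPS is about what this hardware can deliver.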
I apologize if this is a duplicate of something recent, but I'm not finding
much. Does the issue still exist where dropping an OSD results in a LUN's
I/O hanging?
I'm attempting to determine if I have to move off of VMWare in order to
safely use Ceph as my VM storage.
I am not sure if there is a hard and fast rule you are after, but pretty much
anything that would cause ceph transactions to be blocked (flapping OSD,
network loss, hung host) has the potential to block RBD IO which would cause
your iSCSI LUNs to become unresponsive for that period.
For the mo
Hello,
I am simulating recovery when all 3 of our monitors are down in our test
environment. I referred to the Ceph mon troubleshooting document, but
encountered the problem that “OSD has the store locked”.
I stopped all 3 mons, and plan to get the monmap from the OSDs. To get the
monmap, the
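My reading of that error is that the OSD daemon is still running and holding
its store, so the OSD has to be stopped before ceph-objectstore-tool can be run
against it, roughly (the id and paths are placeholders for our setup):
  systemctl stop ceph-osd@<id>
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --op update-mon-db --mon-store-path /tmp/mon-store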