For mailing list archive readers in the future:
On Tue, Jul 9, 2019 at 1:22 PM Paul Emmerich wrote:
> Try to add "--inconsistent-index" (caution: will obviously leave your
> bucket in a broken state during the deletion, so don't try to use the
> bucket)
>
This was bad advice as long as https://
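For context, the flag under discussion is passed to `radosgw-admin bucket rm`. A sketch of the invocation (the bucket name is a placeholder, and `--purge-objects` is added here since it is needed to remove a non-empty bucket):

```shell
# Delete a bucket and all its objects without keeping the bucket index
# consistent during the deletion.
# CAUTION: as noted above, the bucket is left in a broken state while this
# runs, so do not use the bucket in the meantime.
radosgw-admin bucket rm \
  --bucket=bigbucket \
  --purge-objects \
  --inconsistent-index
```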
Just wanted to post an observation here. Perhaps someone with resources to
perform some performance tests is interested in comparing or has some
insight into why I observed this.
Background:
12 node ceph cluster
3-way replicated by chassis group
3 chassis groups
4 nodes per chassis
running Luminous
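A CRUSH rule matching the layout above (one replica per chassis group) can be sketched like this; the rule, root, and pool names are placeholders:

```shell
# Replicate across chassis buckets so each of the 3 copies lands in a
# different chassis group (rule name "replicated_chassis" and root
# "default" are placeholders).
ceph osd crush rule create-replicated replicated_chassis default chassis

# Apply the rule to a pool (pool name is a placeholder):
ceph osd pool set mypool crush_rule replicated_chassis
```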
Hi:
I found cosbench is a very convenient tool for benchmarking RGW. But
when I read papers, I found the YCSB tool,
https://github.com/brianfrankcooper/YCSB/tree/master/s3 . It seems
that it is used for testing cloud services, and it seems like the right
tool for our service. Has anyone tried this tool?
Hi Wei Zhao,
I've used YCSB for MongoDB-on-RBD testing before. It worked fine and
was pretty straightforward to run. The only real concern I had was that
many of the default workloads used a zipfian distribution for reads.
This basically meant reads were coming almost entirely from cache and
didn't really exercise the underlying disks.
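To see why a zipfian read distribution mostly hits cache, here is a small standalone sketch (the exponent and keyspace size are illustrative, not YCSB's exact parameters) measuring how concentrated the accesses are:

```python
import numpy as np

rng = np.random.default_rng(42)
keyspace = 10_000   # illustrative number of distinct keys
requests = 100_000  # illustrative number of read requests

# numpy's zipf requires an exponent > 1; YCSB's zipfian constant differs,
# so this only approximates the shape of the workload.
samples = rng.zipf(1.5, size=requests)
keys = (samples - 1) % keyspace  # fold the unbounded draw into the keyspace

_, counts = np.unique(keys, return_counts=True)
counts = np.sort(counts)[::-1]

# Fraction of all reads that land on the hottest 1% of keys:
hot_fraction = counts[: keyspace // 100].sum() / requests
print(f"hottest 1% of keys serve {hot_fraction:.0%} of reads")
```

With a skew like this, a cache holding only the hottest few percent of the data absorbs the vast majority of reads, so the benchmark says little about disk read performance.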
FWIW, the DB and WAL don't really do the same thing that the cache tier
does. The WAL is similar to filestore's journal, and the DB is
primarily for storing metadata (onodes, blobs, extents, and OMAP data).
Offloading these things to an SSD will definitely help, but you won't
see the same kind of benefit a cache tier provides.
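For reference, placing the DB on an SSD at OSD creation time looks roughly like this; the device paths are placeholders:

```shell
# HDD for data, SSD partition for RocksDB. When no separate --block.wal
# is given, the WAL lives inside the DB device. Device paths are
# placeholders.
ceph-volume lvm create --bluestore \
  --data /dev/sdb \
  --block.db /dev/nvme0n1p1
```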
I have an RBD mirroring setup with primary and secondary clusters as peers,
and a pool enabled in image mode. In this pool I created an RBD image
with journaling enabled.
But whenever I enable mirroring on the image, I get an error in
osd.log. I couldn't trace it out; please guide me toward solving it.
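The steps described above correspond roughly to these commands (pool and image names are placeholders):

```shell
# Pool-level mirroring in image mode: only explicitly enabled images are
# mirrored (pool and image names are placeholders).
rbd mirror pool enable mypool image

# Journaling must be on before journal-based mirroring can work:
rbd feature enable mypool/myimage journaling
rbd mirror image enable mypool/myimage
```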
I have a test cluster running CentOS 7.6, set up with two iSCSI gateways (per
the requirement). I have the dashboard set up in Nautilus (14.2.2) and I
added the iSCSI gateways via the command. Both show down, and when I go to
the dashboard it states:
"Unsupported `ceph-iscsi` config version.
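One possible cause (an assumption here, not confirmed by the post) is that the gateways are still running the older ceph-iscsi-cli 2.x packages rather than ceph-iscsi 3.x, which writes the newer config format the Nautilus dashboard expects. A quick check on each gateway:

```shell
# On each gateway, check which iSCSI packages/versions are installed.
# (The ceph-iscsi 3.x requirement is stated as an assumption here.)
rpm -q ceph-iscsi ceph-iscsi-cli ceph-iscsi-config
```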
This may be somewhat controversial, so I’ll try to tread lightly.
Might we infer that your OSDs are on spinners? And at 500 GB it would seem
likely that they and the servers are old? Please share hardware details and OS.
Having suffered an “enterprise” dogfood deployment in which I had to atte
12 months sounds good to me. I like the idea of March as well, since we plan
on doing upgrades in June/July each year. That gives it time to be discussed
and marinate before we decide to upgrade.
-Brent
-Original Message-
From: ceph-users On Behalf Of Sage Weil
Sent: Wednesday, June 5, 2019 1
Thank you Paul Emmerich
On Fri, Jul 19, 2019 at 5:22 PM Paul Emmerich
wrote:
> bluestore warn on legacy statfs = false
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49
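The quoted option can also be set cluster-wide at runtime instead of editing ceph.conf; a sketch:

```shell
# Silence the legacy-statfs health warning cluster-wide at runtime, or put
# "bluestore warn on legacy statfs = false" in the [global] section of
# ceph.conf.
ceph config set global bluestore_warn_on_legacy_statfs false
```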
Hello,
Your issue looks like mine: I had ops stuck with the same status. Check
"Random slow requests without any load" in this month's list archive.
Best,
On 7/20/19 at 6:06 PM, Wei Zhao wrote:
Hi ceph users:
I was doing a write benchmark, and found that some IO would be blocked for a
very long time