Hi, everyone.
Recently we ran some stress tests on Ceph using three machines. We measured
the IOPS of the whole small cluster with 1~8 OSDs per machine, and the
results are as follows:
OSD num per machine    fio IOPS
1
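For anyone who wants to reproduce this kind of sweep, here is a minimal
sketch of one measurement point; the fio job parameters and the pool/image
names are assumptions rather than our exact job file, and the OSD count per
machine is changed by redeploying between runs:

# sketch: run a 4k random-write fio job against an RBD image and report IOPS.
# Pool, image, runtime and iodepth are placeholders, not our exact settings.
import json, subprocess

def fio_randwrite_iops(pool="rbd", image="bench", runtime=60, iodepth=32):
    cmd = [
        "fio", "--name=randwrite", "--ioengine=rbd", "--clientname=admin",
        "--pool=%s" % pool, "--rbdname=%s" % image,
        "--rw=randwrite", "--bs=4k",
        "--iodepth=%d" % iodepth, "--numjobs=1",
        "--time_based", "--runtime=%d" % runtime,
        "--output-format=json",
    ]
    out = subprocess.check_output(cmd)
    job = json.loads(out)["jobs"][0]
    return job["write"]["iops"]

if __name__ == "__main__":
    print("write IOPS: %.0f" % fio_randwrite_iops())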
Aha! Found some docs on the RHCS site:
https://access.redhat.com/documentation/en/red-hat-ceph-storage/2/paged/object-gateway-guide-for-red-hat-enterprise-linux/chapter-2-configuration
Really, ceph.com should have all this too...
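For anyone else searching: besides setting the 'rgw enable static website'
option in ceph.conf for the RGW instance, the per-bucket side appears to go
through the normal S3 website API. A rough sketch (the boto3 client,
endpoint, credentials, and bucket name are placeholders, not something taken
from the docs above):

# sketch: configure a bucket for static website serving via the S3 website API.
# The only setting confirmed on-list is "rgw enable static website" in ceph.conf;
# endpoint, credentials, and bucket name below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_website(
    Bucket="my-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)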
-Ben
On Wed, Jan 18, 2017 at 5:15 PM, Ben Hines wrote:
> Are there docs on the RGW static website feature?
Are there docs on the RGW static website feature?
I found the 'rgw enable static website' config setting only via the mailing
list. A search for 'static' on ceph.com turns up release notes, but no
other documentation. Anyone have pointers on how to set this up and what I
can do with it? Does it requir
Hey cephers,
Just wanted to send a friendly reminder that the Google Summer of Code
program's submission window opens tomorrow and will remain open for 3
weeks. If you are interested in mentoring or suggesting a project, please
contact me as soon as possible. Thanks.
--
Best Regards
The performance data I have found so far on RadosGW doesn't cover the Copy
operation. Can anyone comment on the Copy operation? Is it basically just a
GET + INSERT?
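For clarity, the operation I mean is the S3 server-side copy, e.g. via
boto3 (the endpoint, credentials, and bucket/key names below are
placeholders):

# sketch: the S3 server-side copy in question, issued against RGW via boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# A single request; no object data passes through the client. The question
# is what RGW does internally (a full read + rewrite, or something cheaper).
s3.copy_object(
    Bucket="dst-bucket",
    Key="dst-key",
    CopySource={"Bucket": "src-bucket", "Key": "src-key"},
)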
Thanks,
Eric
Hi all,
I'm running a 6 node 24 OSD cluster, Jewel 10.2.5 with kernel 4.8.
I put about 1TB of data in the cluster, with all pools having size 3. Yet
about 5TB of raw disk is used as opposed to the expected 3TB.
Result of ceph -s:
pgmap v1057361: 2400 pgs, 3 pools, 984 GB data, 125 Mobjects
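A quick back-of-the-envelope check with the figures above (only a sketch; it
assumes binary units and 3x replication and ignores journal/filesystem
overhead):

# sketch: compare expected raw usage (3x replication) with what ceph -s reports.
GiB = 1024 ** 3

data_bytes   = 984 * GiB        # "984 GB data" from ceph -s
objects      = 125 * 10 ** 6    # "125 Mobjects"
replicas     = 3                # all pools have size 3
raw_used     = 5 * 1024 * GiB   # ~5 TB raw reported as used

expected_raw = data_bytes * replicas
excess       = raw_used - expected_raw
per_copy     = excess / (objects * replicas)

print("expected raw: %.2f TiB" % (expected_raw / GiB / 1024.0))
print("reported raw: %.2f TiB" % (raw_used / GiB / 1024.0))
print("excess:       %.2f TiB (~%d bytes per object copy)"
      % (excess / GiB / 1024.0, per_copy))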
On Tue, Jan 17, 2017 at 10:56 AM, Darrell Enns wrote:
> Thanks for the info, I'll be sure to "dump_ops_in_flight" and "session ls" if
> it crops up again. Is there any other info you can think of that might be
> useful? I want to make sure I capture all the evidence needed if it happens
> again
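One way to make sure nothing gets missed when it recurs is to script the
capture of the two commands mentioned above; a rough sketch (the daemon name
is a placeholder for whichever daemon is involved):

# sketch: dump the admin-socket output to timestamped files when the problem recurs.
import subprocess, time

DAEMON = "mds.a"   # placeholder; substitute the affected daemon
COMMANDS = ["dump_ops_in_flight", "session ls"]

def capture():
    stamp = time.strftime("%Y%m%d-%H%M%S")
    for cmd in COMMANDS:
        out = subprocess.check_output(["ceph", "daemon", DAEMON] + cmd.split())
        fname = "%s-%s-%s.json" % (DAEMON, cmd.replace(" ", "_"), stamp)
        with open(fname, "wb") as f:
            f.write(out)

if __name__ == "__main__":
    capture()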
On 01/17/2017 12:52 PM, Piotr Dałek wrote:
During our testing we found that during the upgrade from 0.94.9 to 10.2.5
we're hitting issue http://tracker.ceph.com/issues/17386 ("Upgrading 0.94.6
-> 0.94.9 saturating mon node networking"). Apparently, there are a few
commits for both hammer and jewel
Hi guys,
I am using the kernel client (4.9) to mount CephFS (10.2.5) that was just
upgraded from Hammer (0.94.9).
fsync writes became slow compared with the same workload on Hammer.
(Yes, I am sure fsync is the key.)
1. Does anyone know what's going on?
2. Is there any way to improve it?
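To put a number on it, here is a minimal sketch that times a write+fsync
loop on the mount (the mount point, write size, and iteration count are
placeholder assumptions, not the exact workload):

# sketch: measure per-operation write+fsync latency on a CephFS mount.
import os, time

MOUNT = "/mnt/cephfs"          # placeholder mount point
PATH = os.path.join(MOUNT, "fsync_test.dat")
WRITE_SIZE = 4096
ITERATIONS = 1000

buf = b"x" * WRITE_SIZE
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
start = time.time()
for _ in range(ITERATIONS):
    os.write(fd, buf)
    os.fsync(fd)
elapsed = time.time() - start
os.close(fd)
os.unlink(PATH)

print("%d write+fsync ops in %.2fs -> %.2f ms/op"
      % (ITERATIONS, elapsed, elapsed * 1000.0 / ITERATIONS))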
===