Thanks for your reply.
I am sure there is only one web server running on the CentOS box.
All my steps are as follows:
First of all, I set up DNS so that these all resolve:
nslookup ceph65
nslookup a.ceph65
nslookup anyother.ceph65
Then
1. yum install httpd mod_fastcgi mod_ssl
rm /etc/httpd/conf.d/welcome.conf
rm /
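For anyone following this recipe: assuming the usual radosgw-behind-Apache
setup, the wildcard DNS and the vhost typically end up looking something
like the sketch below. The IP, socket path, and server name are
illustrative, not taken from the poster's setup.

; example BIND zone entries -- the IP is a placeholder
ceph65      IN A    192.168.0.1
*.ceph65    IN A    192.168.0.1

# /etc/httpd/conf.d/rgw.conf -- a minimal sketch based on the radosgw
# documentation; the socket path is an assumption
<VirtualHost *:80>
    ServerName ceph65
    ServerAlias *.ceph65
    DocumentRoot /var/www
    RewriteEngine On
    # hand every request to the radosgw FastCGI wrapper, preserving the
    # Authorization header that S3 clients send
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.sock
</VirtualHost>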
Hi Everyone,
I am just wondering if any of you are running a ceph cluster with an iSCSI
target front end? I know this isn’t available out of the box; unfortunately,
in one particular use case we are looking at providing iSCSI access and it's
a necessity. I am liking the idea of having rbd device
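Putting rbd devices behind a standard Linux target is indeed the usual
workaround. A rough sketch using the kernel rbd client and tgt (the image
name and IQN below are made up):

rbd create iscsivol --size 102400        # 100 GB test image, hypothetical name
rbd map iscsivol                         # appears as /dev/rbd0
tgtadm --lld iscsi --op new --mode target \
       --tid 1 -T iqn.2014-03.example.com:rbd-iscsivol
tgtadm --lld iscsi --op new --mode logicalunit \
       --tid 1 --lun 1 -b /dev/rbd0
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # lab only: allow all initiators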
Which model of hard drives do you have?
2014-03-14 21:59 GMT+04:00 Greg Poirier:
> We are stressing these boxes pretty spectacularly at the moment.
>
> On every box I have one OSD that is pegged for IO almost constantly.
>
> ceph-1:
> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s
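(For anyone trying to reproduce this: stats in that format come from
iostat's extended mode, and a device pinned near 100 %util interval after
interval is the saturated OSD disk.)

iostat -xk 1    # extended per-device stats every second; watch await and %util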
On 03/15/2014 04:11 PM, Karol Kozubal wrote:
> Hi Everyone,
> I am just wondering if any of you are running a ceph cluster with an
> iSCSI target front end? I know this isn’t available out of the box;
> unfortunately, in one particular use case we are looking at providing
> iSCSI access and it's a necessity.
Hi Wido,
I will have some new hardware for running tests in the next two weeks or
so and will report my findings once I get a chance to run some tests. I
will disable writeback on the target side as I will be attempting to
configure an SSD caching pool of 24 SSDs with writeback for the main pool
On 03/15/2014 05:40 PM, Karol Kozubal wrote:
> Hi Wido,
> I will have some new hardware for running tests in the next two weeks or
> so and will report my findings once I get a chance to run some tests. I
> will disable writeback on the target side as I will be attempting to
> configure an SSD caching pool
How are the SSDs going to be in writeback? Is that the new caching pool
feature?
I am not sure which version implemented this, but it is documented here
(https://ceph.com/docs/master/dev/cache-pool/).
I will be using the latest stable release for my next batch of testing;
right now I am on 0.67.4.
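From that page, the workflow looks roughly like the following. The pool
names here are made up, and since the feature is under development the
syntax may well change before it ships:

ceph osd pool create cachepool 128              # small, fast pool on the SSDs
ceph osd tier add rbdpool cachepool             # attach it in front of the backing pool
ceph osd tier cache-mode cachepool writeback    # absorb writes, flush to rbdpool later
ceph osd tier set-overlay rbdpool cachepool     # direct client traffic at the cache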
I just re-read the documentation… It looks like it's a proposed feature
that is in development. I will have to adjust my tests accordingly in
that case.
Anyone out there have any idea when this will be implemented? Or what
the plans look like as of right now?
On 2014-03-15, 1:17 PM, "Karol Kozubal" wrote:
On Sat, 15 Mar 2014, Karol Kozubal wrote:
> I just re-read the documentation… It looks like it's a proposed feature
> that is in development. I will have to adjust my tests accordingly in
> that case.
>
> Anyone out there have any idea when this will be implemented? Or what
> the plans look like as of right now?
Hello Everyone,
If you look at the Ceph Day presentation delivered by Sebastien (slide number 23),
http://www.slideshare.net/Inktank_Ceph/ceph-performance
it looks like Firefly has dropped support for journals. How concrete is this
news?
-Karan-
On 14 Mar 2014, at 15:35, Jake Young wrote:
>
Hi,
This is the new objectstore multi-backend: instead of using a filesystem
(xfs, btrfs), you can use leveldb, rocksdb, ... which don't need a journal,
because operations are atomic.
I think it should be released with Firefly, if I remember correctly.
About this, can somebody tell me if writes are the same speed
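If you want to experiment with it, my understanding is that the backend is
selected per OSD in ceph.conf, along these lines. The option value here is
a guess based on early builds; the backend was still experimental and the
exact name may differ in the release:

[osd]
# use the key/value backend instead of the default filestore
# ("keyvaluestore-dev" is the name from early builds -- an assumption)
osd objectstore = keyvaluestore-dev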
Just FYI for anyone who might be using the AWS Java SDK with rgw.
There is a bug in older versions of the AWS SDK in the
CompleteMultipartUpload call. The ETag that is sent in the manifest is not
formatted correctly. This will cause rgw to return a 400.
e.g.
T 192.168.1.16:46532 -> 192.168.1.51:80 [
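For context, the manifest sent by CompleteMultipartUpload is a small XML
body along these lines (values are illustrative; the bug is in how older
SDK versions render the ETag string):

<CompleteMultipartUpload>
  <Part>
    <PartNumber>1</PartNumber>
    <ETag>"b54357faf0632cce46e942fa68356b38"</ETag>
  </Part>
</CompleteMultipartUpload>

Upgrading past the affected SDK versions avoids the 400.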