red fuse.ceph defaults 0 0
Am I passing the argument correctly? And should I include the m argument as
well?
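For comparison, a rough sketch of the fuse.ceph fstab form from the CephFS documentation; the id, conf path and mount point below are placeholders, not the values from the line above:

    # {options}  {mount point}  {fs type}  {mount options}  {dump}  {pass}
    id=admin,conf=/etc/ceph/ceph.conf  /mnt/ceph  fuse.ceph  defaults  0 0

As far as I know, ceph-fuse reads the monitor addresses from the conf file, so an explicit -m argument should not be needed in fstab, but treat that as an assumption to verify against your version.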
Thank you for your input.
Best Regards
Karol Kozubal
ELITS Canada Inc.
Email karol.kozu...@elits.se
Correction: sorry, min_size is set to 1 everywhere.
Thank you.
Karol Kozubal
From: Karol Kozubal <karol.kozu...@elits.com>
Date: Wednesday, March 12, 2014 at 12:06 PM
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
acceptable?
3. Is it possible to scale down the number of PGs?
Thank you for your input.
Karol Kozubal
From what I understand of Ceph's architecture, you would be causing a
bottleneck for your Ceph traffic. Ceph's advantage is the potential
concurrency of the traffic and the decentralization of the client-facing
interfaces, which increases scale-out capability.
Can you give a bit more details about your
operation while the cluster isn't as busy.
Karol
From: McNamara, Bradley <bradley.mcnam...@seattle.gov>
Date: Wednesday, March 12, 2014 at 1:54 PM
To: Karol Kozubal <karol.kozu...@elits.com>, "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Karol
From: McNamara, Bradley <bradley.mcnam...@seattle.gov>
Date: Wednesday, March 12, 2014 at 7:01 PM
To: Karol Kozubal <karol.kozu...@elits.com>, "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Subject: RE:
http://ceph.com/docs/master/rados/operations/placement-groups/
It's provided in the example calculation on that page.
Karol
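For reference, the rule of thumb in that example works out roughly like this (the OSD count and replica size below are illustrative placeholders, not numbers from this thread):

    total PGs ≈ (number of OSDs * 100) / replica count
    e.g. 60 OSDs, size 3:  (60 * 100) / 3 = 2000, rounded up to the next power of 2 = 2048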
On 2014-03-14, 10:37 AM, "Christian Kauhaus" wrote:
>On 12.03.2014 18:54, McNamara, Bradley wrote:
>> Round up your pg_num and pgp_num to the next power of 2, 2048.
>
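A minimal sketch of the commands that round-up corresponds to (the pool name is a placeholder); note that pg_num can be increased on an existing pool but, as far as I know, not decreased:

    ceph osd pool set <pool> pg_num 2048
    ceph osd pool set <pool> pgp_num 2048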
Dan, I think your interpretation is indeed correct.
The documentation on this page seems to say as much:
http://ceph.com/docs/master/rados/operations/placement-groups/
Increasing the number of placement groups reduces the variance in per-OSD load
across your cluster. We recommend approximate
Hi Everyone,
I am just wondering if any of you are running a ceph cluster with an iSCSI
target front end? I know this isn't available out of the box; unfortunately, in
one particular use case we are looking at providing iSCSI access, and it is a
necessity. I am liking the idea of having rbd device
>On 03/15/2014 04:11 PM, Karol Kozubal wrote:
>> Hi Everyone,
>>
>> I am just wondering if any of you are running a ceph cluster with an
>> iSCSI target front end? I know this isn't available out of the box,
>> unfortunately in one particular use case we are
and I will be moving towards the 0.72.x branch.
As for the IOPS, it would be a total cluster IO throughput estimate based
on an application that would be reading/writing to more than 60 rbd
volumes.
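As a rough way to get that kind of aggregate estimate before the application exists, something like rados bench can be run from one or more client nodes; this is only a sketch, and the pool name, duration and thread count are placeholders:

    # 60-second write test with 16 concurrent ops; keep the objects for the read test
    rados bench -p rbd 60 write -t 16 --no-cleanup
    # sequential read test against the objects written above
    rados bench -p rbd 60 seq -t 16
    # remove the benchmark objects afterwards (cleanup syntax may vary by version)
    rados -p rbd cleanup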
On 2014-03-15, 1:11 PM, "Wido den Hollander" wrote:
>
>>On 03/15/2014 05:40 PM, Karol Kozubal wrote:
>>> Hi Wido,
>>>
>>> I will have some new hardware for running tests in the next two weeks or
>>> so and will report my findings on
>kernel rbd" using my
>virtual machines. It seems that "tgt with librbd" doesn't perform
>well. It has only 1/5 the IOPS of kernel rbd.
>
>We are new to Ceph and still finding ways to improve the performance. I
>am really looking forward to your benchmark.
>
>On Sun 16
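For anyone comparing the same two setups, a minimal sketch of what each export path looks like; the IQN, pool and image names are placeholders, and it assumes a tgt build that includes the rbd backing-store module:

    # Option 1: kernel rbd - map the image, then export the block device through tgt
    rbd map rbd/iscsi-vol01        # shows up as /dev/rbd0 (or under /dev/rbd/rbd/)
    tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2014-03.com.example:rbd.vol01
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/rbd0
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # Option 2: tgt with librbd - no kernel mapping; tgt opens the image directly
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 --bstype rbd -b rbd/iscsi-vol01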
Hi All,
I am curious to know what the largest known Ceph production deployment is.
I am looking for information on:
* number of nodes
* number of OSDs
* total capacity
And, if available, details on IOPS, types of disks, types of network
interfaces, switches and
Anyone know why this happens? What datastore fills up specifically?
2014-04-04 17:01:51.277954 mon.0 [WRN] reached concerning levels of available
space on data store (16% free)
2014-04-04 17:03:51.279801 7ffd0f7fe700 0 monclient: hunting for new mon
2014-04-04 17:03:51.280844 7ffd0d6f9700 0 -- 19
/var/lib/ceph/mon/$cluster-$id
On 2014-04-04, 1:22 PM, "Joao Eduardo Luis" wrote:
>Well, that's no mon crash.
>
>On 04/04/2014 06:06 PM, Karol Kozubal wrote:
>> Anyone know why this happens? What datastore fills up specifically?
>
>The monitor's.
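A minimal sketch of how one might check and trim the monitor store when that warning appears; the mon id is a placeholder, and the commands and option below should be verified against your Ceph version:

    # see how much space the monitor's store is using
    du -sh /var/lib/ceph/mon/*
    # trigger an online compaction of one monitor's leveldb store
    ceph tell mon.0 compact
    # or compact at every monitor start (ceph.conf, [mon] section)
    mon compact on start = true

The warning threshold itself is controlled by the 'mon data avail warn' option (a percentage of free space on the mon data partition).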