We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools:
- 3 replicated pools (3x)
- 1 RS pool (5+2, size 7)
The docs say:
http://ceph.com/docs/master/rados/operations/placement-groups/
"Between 10 and 50 OSDs set pg_num to 4096"
Which is what we did when creating those pools. This yields 16384 PGs in total.
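(For reference, each pool was created along these lines, with the pool name as a placeholder; the EC pool additionally takes the 'erasure' keyword and a profile:

ceph osd pool create <poolname> 4096 4096

so 4 x 4096 gives the 16384 above.)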
On 06-08-15 10:16, Hector Martin wrote:
> We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools:
> - 3 replicated pools (3x)
> - 1 RS pool (5+2, size 7)
>
> The docs say:
> http://ceph.com/docs/master/rados/operations/placement-groups/
> "Between 10 and 50 OSDs set pg_num to 4096"
>
> Which is
I should probably have condensed my findings over the course of the day into
one post, but I guess that's just not how I'm built.
Another data point. I ran `ceph daemon mds.cephmds02 perf dump` in a
while loop with a 1-second sleep, grepping out the stats John mentioned, and
at times (~every 10
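Roughly the loop I was running, for reference (assuming the counters in question are op_active and handle_client_request):

while true; do
    ceph daemon mds.cephmds02 perf dump | grep -E 'op_active|handle_client_request'
    sleep 1
done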
On 2015-08-06 17:18, Wido den Hollander wrote:
The amount of PGs is cluster-wide and not per pool. So if you have 48
OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster-wide.
Now, with enough memory you can easily have 100 PGs per OSD, but keep in
mind that the PG count is cluster-wide and
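As a rough worked example, assuming the total gets spread evenly over your 4 pools
(and ignoring the larger size of the EC pool for a moment):

  48 OSDs * 100 PGs per OSD / 3 copies ~= 1600 PGs cluster-wide
  1600 / 4 pools = 400, so on the order of 512 PGs per pool when rounded up to a power of two

rather than 4096 per pool.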
Hi Wido,
We would love to provide such a mirror in China (cn.ceph.com). We are using
Ceph heavily in our system, and we also want to give
feedback to the community.
I am now consulting our IDC operators on how we can do this.
I will see what I can do for the Ceph community very soon.
Cheers,
Hi,
Whenever I restart or check the logs for an OSD or MON, I get the below
warning message.
I am running a test cluster of 9 OSDs and 3 MON nodes.
[ceph-node1][WARNIN] libust[3549/3549]: Warning: HOME environment variable
not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at
lttng
On Thu, Aug 6, 2015 at 1:55 PM, Hector Martin wrote:
> On 2015-08-06 17:18, Wido den Hollander wrote:
>>
>> The amount of PGs is cluster-wide and not per pool. So if you have 48
>> OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster-wide.
>>
>> Now, with enough memory you can easily have 100
Hi,
I am trying to mount my CephFS and getting the following message. It was
all working previously, but after a power failure I am not able to mount
it anymore (Debian Jessie).
cephadmin@maverick:/etc/ceph$ sudo mount -t ceph
ceph1.allsupp.corp,ceph2.allsupp.corp:6789:/ /mnt/cephdata/ -o
nam
Hi,
I can answer this myself. It was the kernel. After upgrading to the latest Debian
Jessie kernel (3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u2 (2015-07-17)
x86_64 GNU/Linux), everything started to work as normal.
Thanks :)
On 6/08/2015 22:38, Jiri Kanicky wrote:
Hi,
I am trying to mount my CephFS a
Hi Burkhard,
I found my problem and it makes me feel like I need to slap myself awake now. I
will let you see my mistake.
What I had:
client.libvirt
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd,
allow rwx pool=ssd
What I have now:
client.libvirt
caps: [mon] al
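In other words, the fix was something along the lines of the following, where mon 'allow r' is assumed as the minimal monitor cap for an rbd client:

ceph auth caps client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=ssd'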
Hi Ilya,
We just tried the 3.10.83 kernel with more rbd fixes back-ported from higher
kernel versions. This time, we again tried to run rbd and 3 OSD daemons on
the same node, but rbd IO still hangs and the OSD filestore thread still times
out and suicides when memory becomes very low under h
Hi Srikanth,
Can you make a ticket on tracker.ceph.com for this? We'd like to not
lose track of it.
Thanks!
Mark
On 08/05/2015 07:01 PM, Srikanth Madugundi wrote:
Hi,
After upgrading to Hammer and moving from Apache to civetweb, we
started seeing high PUT latency on the order of 2 seconds for
On 08/06/2015 03:10 AM, Daleep Bais wrote:
Hi,
Whenever I restart or check the logs for an OSD or MON, I get the below
warning message.
I am running a test cluster of 9 OSDs and 3 MON nodes.
[ceph-node1][WARNIN] libust[3549/3549]: Warning: HOME environment
variable not set. Disabling LTTng-UST per-
On 08/05/2015 04:48 PM, David Moreau Simard wrote:
> Would love to be a part of this Wido, we currently have a mirror at
> ceph.mirror.iweb.ca based on the script you provided me a while back. It is
> already available over http, rsync, IPv4 and IPv6.
>
Great!
>
> The way we currently mirror i
Hi,
I was using RADOS bench to test a single-node Ceph cluster with a dedicated
SSD as storage for my OSD.
I created a pool for this and filled up my SSD to maximum capacity
using RADOS bench with an object size of 4k. On removing the pool, I noticed
that it seems to take a really lo
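For reference, the fill was done with something along these lines (pool name and duration are placeholders):

rados bench -p testpool 600 write -b 4096 --no-cleanup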
@John,
Can you clarify which values would suggest that my metadata pool is too
slow? I have added a link that includes values for the "op_active"
& "handle_client_request"gathered in a crude fashion but should
hopefully give enough data to paint a picture of what is happening.
http://pasteb
Hello,
Version 0.94.1
I'm passing settings to the running OSDs with injectargs, i.e.:
ceph tell osd.* injectargs '--osd_deep_scrub_begin_hour 20'
ceph tell osd.* injectargs '--osd_deep_scrub_end_hour 4'
ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'
Then I check to see if they're in the configs now
Injecting args into the running procs is not meant to be persistent. You'll
need to modify /etc/ceph/ceph.conf for that.
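For example, something like this under [osd] in ceph.conf, followed by an OSD restart (same option names as your injectargs; it's worth double-checking that your 0.94 build actually recognizes each of them):

[osd]
    osd_deep_scrub_begin_hour = 20
    osd_deep_scrub_end_hour = 4
    osd_deep_scrub_interval = 1209600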
Warren
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve
Dainard
Sent: Thursday, August 06, 2015 9:16 PM
To: ceph-user
Trying to get an understanding of why direct IO would be so slow on my cluster.
Ceph 0.94.1
1 Gig public network
10 Gig public network
10 Gig cluster network
100 OSDs, 4T disks, 5G SSD journals.
As of this morning I had no SSD journals and was finding direct IO was
sub-10MB/s, so I decided to ad
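(The sort of test I mean is along the lines of the following; path and sizes are just placeholders:

dd if=/dev/zero of=/mnt/rbd/testfile bs=4M count=256 oflag=direct
)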
Hi Wido,
We would love to provide a Ceph mirror in mainland China and Hong Kong.
Hosting a site in mainland China is a bit complicated: you have to
register with the Chinese government at http://www.miitbeian.gov.cn, which is
entirely in Chinese.
And it may take quite a while to prepare the application form. We
Hello,
On Thu, 6 Aug 2015 21:41:00 + Sai Srinath Sundar-SSI wrote:
> Hi,
> I was using RADOS bench to test on a single node ceph cluster with a
> dedicated SSD as storage for my OSD. I created a pool to do the same and
> filled up my ssd until maximum capacity using RADOS bench with my objec
That would make sense..
Thanks!
On Thu, Aug 6, 2015 at 6:29 PM, Wang, Warren
wrote:
> Injecting args into the running procs is not meant to be persistent. You'll
> need to modify /etc/ceph/ceph.conf for that.
>
> Warren
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun..
Why are you using cookies? Try without and see if it works.
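That is, a minimal backend with plain round-robin and no cookie-based persistence, something like this (names, addresses, and the civetweb port are placeholders):

backend radosgw
    balance roundrobin
    server rgw1 192.168.0.11:7480 check
    server rgw2 192.168.0.12:7480 check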
Kobi Laredo
Cloud Systems Engineer | (408) 409-KOBI
On Aug 5, 2015 8:42 AM, "Ray Sun" wrote:
> Cephers,
> I am trying to use haproxy as a load balancer for my radosgw, but I always get
> 405 Not Allowed when I run s3cmd md s3://mys3 on my hap
I'm seeing the same sort of issue.
Any suggestions on how to get Ceph to not start the ceph-osd processes
on host boot? It does not seem to be as simple as just disabling the
service.
Regards
Nathan
On 15/07/2015 7:15 PM, Jan Schermer wrote:
We have the same problem; we need to start the