Hi Fredrick,
See my response inline.
Thanks & Regards
Somnath
From: f...@univ-lr.fr [mailto:f...@univ-lr.fr]
Sent: Wednesday, March 25, 2015 8:07 AM
To: Somnath Roy
Cc: Ceph Users
Subject: Re: [ceph-users] Uneven CPU usage on OSD nodes
Hi Somnath,
Thanks, the tcmalloc env variable t
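Assuming the tcmalloc environment variable meant here is TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES (an assumption, the name is not quoted above), raising it for the OSD daemons looks roughly like this; the value and restart command are illustrative only:

  # 128 MB thread cache instead of tcmalloc's small default (value is illustrative)
  export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
  /etc/init.d/ceph restart osd

To persist it, the same assignment can go in whatever environment file the init system sources for the Ceph daemons (e.g. /etc/sysconfig/ceph on RPM-based systems).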
From: f...@univ-lr.fr [mailto:f...@univ-lr.fr]
Sent: Monday, March 23, 2015 4:31 AM
To: Somnath Roy
Cc: Ceph Users
Subject: Re: [ceph-users] Uneven CPU usage on OSD nodes
Hi Somnath,
Thank you, please find my answers below.
Somnath Roy <somnath@sandisk.com> wrote on 22/03/1
Hi Greg,
the low-/high-CPU behaviour is absolutely persistent while a host is
UP, no oscillation.
But rebooting a node can make its behaviour switch between low- and high-CPU, as
seen this morning after checking that the BIOS settings (especially NUMA)
were the same on 2 hosts.
Hosts are identical, pupp
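Since the suspicion here is NUMA/BIOS related, a quick way to compare two hosts is something along these lines (generic commands, not quoted from the thread):

  numactl --hardware                    # NUMA node / core layout of the host
  ps -eo pid,psr,comm | grep ceph-osd   # which CPU (psr) each ceph-osd process last ran on
  numastat -p $(pgrep -o ceph-osd)      # per-NUMA-node memory of the oldest ceph-osd

If the two hosts show the ceph-osd processes and their memory landing on NUMA nodes differently, that would fit the reboot-dependent behaviour described above.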
On Mon, Mar 23, 2015 at 4:31 AM, f...@univ-lr.fr wrote:
> Hi Somnath,
>
> Thank you, please find my answers below
>
> Somnath Roy wrote on 22/03/15 18:16:
>
> Hi Frederick,
>
> Need some information here.
>
>
>
> 1. Just to clarify, you are saying it is happening in 0.87.1 and not in
> Firefly?
node as well.
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
f...@univ-lr.fr
Sent: Sunday, March 22, 2015 2:15 AM
To: Craig Lewis
Cc: Ceph Users
Subject: Re: [ceph-users] Uneven CPU usage on OSD nodes
Hi Craig,
An uneven primaries distribution was indeed my first thought.
I should have been more explicit on the percentages of the histograms I
gave, let's see them in detail in a more comprehensive way.
For the 27938 bench objects seen by the osdmap, the hosts are distributed like
this:
20904 host
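In case it helps to reproduce that tally: assuming the per-host counts come from mapping each bench object and taking the first OSD of the acting set (the primary), a rough sketch looks like this; the pool name and the per-host roll-up are assumptions, not quoted from the thread:

  # count how many bench objects each OSD is primary for ('rbd' pool is an assumption)
  for obj in $(rados -p rbd ls); do
      ceph osd map rbd "$obj"
  done | grep -o 'acting (\[[0-9][0-9]*' | grep -o '[0-9][0-9]*$' | sort -n | uniq -c

  # then group the per-OSD counts by host using the OSD-to-host layout
  ceph osd tree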
I'm neither a dev nor a well-informed Cepher, but I've seen posts suggesting the
pg count may be set too high, see
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg16205.html
Also, we use 128GB+ in production on the OSD servers with 10 OSDs per server
because it boosts the read cache, so you may w
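For a rough sanity check of the pg count, the rule of thumb usually cited is about 100 PGs per OSD divided by the replica count, rounded up to a power of two. For the 60-OSD cluster described below, and assuming a replica size of 3 (an assumption, the size isn't stated in the thread), that gives roughly:

  60 OSDs x 100 / 3 replicas = 2000  ->  round up to 2048 PGs total

A pg_num far above that would fit the "set too high" suspicion.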
I would say you're a little light on RAM. With 4TB disks 70% full, I've
seen some ceph-osd processes using 3.5GB of RAM during recovery. You'll be
fine during normal operation, but you might run into issues at the worst
possible time.
I have 8 OSDs per node, and 32G of RAM. I've had ceph-osd pr
Hi to the ceph-users list!
We're setting up a new Ceph infrastructure:
- 1 MDS admin node
- 4 OSD storage nodes (60 OSDs),
  each of them running a monitor
- 1 client
Each 32GB RAM / 16-core OSD node holds 15 x 4TB SAS OSDs (XFS) and 1
SSD with 5GB journal partitions, all in JBOD attachment
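Given the subject of the thread, a quick way to compare a "quiet" node with a "busy" one is to look at per-daemon CPU and at the OSDs' own counters; these are generic commands, not something quoted from the thread:

  # CPU share of each ceph-osd on this node, busiest first
  ps -eo pcpu,pid,comm --sort=-pcpu | grep ceph-osd | head -15

  # internal op counters of one OSD via its admin socket (osd id is a placeholder)
  ceph daemon osd.0 perf dump

If the busy node's OSDs simply report far more client ops, the imbalance is in data/primary placement; if the op counts are similar but the CPU is not, it points more toward the host itself (NUMA/BIOS, as discussed earlier in the thread).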