On Mon, Mar 26, 2018 at 11:40 PM, Sam Huracan wrote:
> Hi,
>
> We are using RAID cache mode Writeback for the SSD journal; I suspect this
> is the reason the SSD journal utilization is so low.
> Is that true? If anybody has experience with this, please confirm.
>
I turn the writeback mode off for th
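A minimal sketch of doing that per virtual drive, assuming an LSI/Avago controller managed with storcli (other vendors and tools differ):

storcli /c0/vall show               # show virtual drives and their current cache policy
storcli /c0/vall set wrcache=wt     # force write-through on all virtual drives
storcli /c0/vall set wrcache=wb     # revert to write-back later if desired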
Hi,
We are using RAID cache mode Writeback for the SSD journal; I suspect this is
the reason the SSD journal utilization is so low.
Is that true? If anybody has experience with this, please confirm.
Thanks
2018-03-26 23:00 GMT+07:00 Sam Huracan :
> Thanks for your information.
> Here is the result when
Hello,
in general, and as a reminder for others: the more information you supply,
the more likely people are to answer, and to answer with actually pertinent
information.
Since you haven't mentioned the hardware (actual HDD/SSD models, CPU/RAM,
controllers, etc.), we're still missing a piece of the puzzle.
Besides checking what David told you, you can tune the scrub operation (your
ceph -s shows 2 deep scrub operations in progress, which could have an impact
on your user traffic).
For instance you could set the following parameters (see the sketch after this list):
osd scrub chunk max = 5
osd scrub chunk min = 1
osd scrub sleep
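As a concrete sketch, these can go in the [osd] section of ceph.conf or be injected into running OSDs without a restart (the sleep value here is illustrative, since none was given above; it is a float in seconds):

[osd]
osd scrub chunk max = 5
osd scrub chunk min = 1
osd scrub sleep = 0.1

# apply at runtime to all OSDs:
ceph tell osd.* injectargs '--osd_scrub_chunk_max 5 --osd_scrub_chunk_min 1 --osd_scrub_sleep 0.1'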
I recommend that people check their disk controller caches/batteries as
well as checking for subfolder splitting on filestore (which is the only
option on Jewel). The former leads to high await, the latter contributes to
blocked requests.
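A minimal sketch of checking both (osd.0 and the controller path are illustrative; on Jewel filestore a subfolder splits at roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects):

# splitting thresholds on a running OSD, via its admin socket
ceph daemon osd.0 config show | grep -E 'filestore_(merge_threshold|split_multiple)'
# controller cache/battery health (LSI/Avago example)
storcli /c0/bbu show all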
On Sun, Mar 25, 2018, 3:36 AM Sam Huracan wrote:
> Thank you all.
Thank you all.
1. Here is my ceph.conf file:
https://pastebin.com/xpF2LUHs
2. Here is result from ceph -s:
root@ceph1:/etc/ceph# ceph -s
cluster 31154d30-b0d3-4411-9178-0bbe367a5578
health HEALTH_OK
monmap e3: 3 mons at {ceph1=10.0.30.51:6789/0,ceph2=10.0.30.52:6789/0,ceph3=10.0.30
Could you post the result of "ceph -s"? Besides the health status there are
other details that could help, like the status of your PGs. Also the result of
"ceph-disk list" would be useful to understand how your disks are organized.
For instance, with 1 SSD for 7 HDDs the SSD could be the bottleneck.
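A short sketch of those checks (all standard commands on a Jewel-era cluster):

ceph -s              # overall health plus PG states; look for anything not active+clean
ceph health detail   # per-PG detail when the summary is not HEALTH_OK
ceph-disk list       # which partitions hold OSD data vs. journals, and on which devices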
> -- Original message -- From: Sam Huracan; Date: Sat 24 Mar 2018 19:20;
> To: c...@elchaka.de; Cc: ceph-users@lists.ceph.com;
> Subject: Re: [ceph-users] Fwd: High IOWait Issue
This is from iostat:
I'm using Ceph Jewel, with no HW errors.
Ceph health is OK; we've only used 50% of total volume.
2018-03-24 22:20 GMT+07:00 :
> I would also check the utilization of your disks with tools like atop.
> Perhaps something related shows up in dmesg or the like?
>
> - Mehmet
>
> On 24 March 2018
I would also check the utilization of your disks with tools like atop. Perhaps
something related shows up in dmesg or the like?
- Mehmet
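A minimal sketch of that kind of spot check (standard atop/util-linux invocations):

atop 5               # refresh every 5 s; disk lines show per-device busy % and average I/O time
dmesg -T | tail -50  # recent kernel messages with readable timestamps; look for I/O errors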
On 24 March 2018 at 08:17:44 CET, Sam Huracan wrote:
>Hi guys,
>We are running a production OpenStack cloud backed by Ceph.
>
>At present, we are facing an issue relating
Also, you only posted the total I/O wait from top. Please use iostat to check
each backend disk's utilization.
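For example (a standard sysstat invocation; device names will differ per host):

iostat -xm 5
# watch %util and await per device: an OSD HDD or journal SSD pinned near
# 100 %util with rising await is the saturated one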
-- Original message -- From: Budai Laszlo; Date: Sat 24 Mar 2018 08:57;
To: ceph-users@lists.ceph.com; Subject: Re: [ceph-users] Fwd: High IOWait Issue
Hi,
What version of Ceph are you using? What is the HW config of your OSD nodes?
Have you checked your disks for errors (dmesg, smartctl)?
What status is Ceph reporting? (ceph -s)
What is the saturation level of your cluster? (ceph df)
Kind regards,
Laszlo
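A sketch of those checks in order (the device name is illustrative; repeat smartctl per disk):

ceph --version
dmesg -T | grep -iE 'error|fail'   # kernel-level disk errors
smartctl -a /dev/sda               # SMART health for one OSD disk
ceph -s                            # cluster status and PG states
ceph df                            # pool and raw usage, i.e. how full the cluster is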
Hi guys,
We are running a production OpenStack cloud backed by Ceph.
At present, we are facing an issue with high iowait inside VMs: in some
MySQL VMs, we sometimes see iowait reach abnormally high peaks, which leads to
an increase in slow queries, even though load is stable (we test with a script
that simulates real load).
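For anyone trying to reproduce this, a minimal sketch of a MySQL-like synthetic load (a generic fio invocation with illustrative parameters, not the poster's actual script):

fio --name=mysql-sim --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
    --bs=16k --size=2G --numjobs=4 --runtime=120 --time_based --group_reporting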