Could you post the result of "ceph -s"? Besides the health status there are
other details that could help, like the status of your PGs. Also the result of
"ceph-disk list" would be useful to understand how your disks are organized.
For instance, with 1 SSD for 7 HDDs the SSD could be the bottleneck.
Perhaps unbalanced OSDs?
Could you send us the output of "ceph osd tree"?
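For reference, generic invocations for gathering that information (not output from the poster's cluster; "ceph-disk list" has to be run on each OSD host):

ceph -s           # overall health, PG states, client I/O
ceph osd tree     # CRUSH hierarchy and OSD up/in status
ceph-disk list    # how data and journal partitions map to physical disks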
- Mehmet
On 24 March 2018 19:46:44 CET, "da...@visions.se" wrote:
>You have 2 drives at almost 100% util which means they are maxed. So
>you need more disks or better drives to fix your io issues (SSDs for
>MySQL is a no brainer really)
You have 2 drives at almost 100% util, which means they are maxed out. So you need
more disks or better drives to fix your I/O issues (SSDs for MySQL are a no
brainer really)
-- Original message -- From: Sam Huracan  Date: Sat 24 March 2018
19:20  To: c...@elchaka.de;  Cc: ceph-users@lists
This is from iostat:
I'm using Ceph Jewel; there are no HW errors.
Ceph health is OK, and we've only used 50% of the total volume.
2018-03-24 22:20 GMT+07:00 :
> I would also check the utilization of your disks with tools like atop.
> Perhaps there is something related in dmesg or the like?
>
> - Mehmet
>
> On 24 March 2018
On 24 March 2018 00:05:12 CET, Thiago Gonzaga wrote:
>Hi All,
>
>I'm starting with ceph and faced a problem while using object-map
>
>root@ceph-mon-1:/home/tgonzaga# rbd create test -s 1024 --image-format 2 --image-feature exclusive-lock
>root@ceph-mon-1:/home/tgonzaga# rbd feature enable tes
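For context, object-map depends on exclusive-lock (and fast-diff on object-map), so the usual ordering looks roughly like the sketch below. This is a generic example against the default rbd pool, not the poster's exact commands:

rbd create test -s 1024 --image-format 2 \
    --image-feature exclusive-lock,object-map,fast-diff
# or, on an existing image, enable the features in dependency order:
rbd feature enable test exclusive-lock
rbd feature enable test object-map
rbd feature enable test fast-diff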
I would also check the utilization of your disks with tools like atop. Perhaps
there is something related in dmesg or the like?
- Mehmet
On 24 March 2018 08:17:44 CET, Sam Huracan wrote:
>Hi guys,
>We are running a production OpenStack backend by Ceph.
>
>At present, we are meeting an issue relating
To clarify, if I understand correctly:
It is NOT POSSIBLE to use an S3 client like e.g. Cyberduck/Mountain Duck
and supply a user with an 'Access key' and a 'Password', regardless of whether
the user is defined in LDAP or locally?
I honestly cannot see how this LDAP integration should even work,
without a
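If I read the rgw LDAP documentation correctly, the client never sends the LDAP password directly; instead you generate a base64 token from the LDAP credentials and paste that token into the client's 'Access key' field. A rough sketch, where user name and password are placeholders and whether the secret field is used at all depends on your setup:

export RGW_ACCESS_KEY_ID="ldapuser"          # LDAP uid (placeholder)
export RGW_SECRET_ACCESS_KEY="ldappassword"  # LDAP password (placeholder)
radosgw-token --encode --ttype=ldap
# the printed token is what the S3 client then uses as its access key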
Hi,
Looks like there is some misinformation about the exclusive-lock feature; here is
some information already posted on the mailing list:
The naming of the "exclusive-lock" feature probably implies too much compared
to what it actually does. In reality, when you enable the "exclusive-lock"
feature, on
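For what it's worth, two generic commands to inspect this on a given image (the image name "test" is just an example):

rbd info test      # shows which features (exclusive-lock, object-map, ...) are enabled
rbd status test    # lists the current watchers/clients attached to the image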
This one downloads a file from the bucket, or accesses the admin API, without
modifications.
#!/bin/bash
#
file="bucket"
bucket="admin"
file="test.txt"
bucket="test"
key=""
secret=""
host="192.168.1.114:7480"
resource="/${bucket}/${file}"
contentType="application/x-compressed-tar"
conten
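Since the script is cut off here, this is my own minimal sketch of what a complete version could look like, assuming AWS signature v2 against radosgw; key, secret, host and file names are placeholders, not the poster's values:

#!/bin/bash
file="test.txt"
bucket="test"
key="ACCESS_KEY_PLACEHOLDER"
secret="SECRET_KEY_PLACEHOLDER"
host="192.168.1.114:7480"
resource="/${bucket}/${file}"
contentType="application/x-compressed-tar"
dateValue=$(date -R)
# signature v2: sign "VERB\n\ncontent-type\ndate\nresource" with the secret key
stringToSign="GET\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${secret}" -binary | base64)
curl -s \
  -H "Host: ${host}" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${key}:${signature}" \
  "http://${host}${resource}" -o "${file}"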
Also, you only posted the total iowait from top. Please use iostat to check
each backend disk's utilization.
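For example (a generic invocation; the device names are just examples):

iostat -x 2              # extended per-device stats, refreshed every 2 seconds
iostat -x 2 sdb sdc sdd  # limit the output to the OSD data disks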
-- Original message -- From: Budai Laszlo  Date: Sat 24 March 2018
08:57  To: ceph-users@lists.ceph.com;  Cc:  Subject: Re: [ceph-users] Fwd: High
IOWait Issue
Hi,
what version o
I am glad to make your day! It took me a bit to come up with a fitting
answer to your question ;)
Have a nice weekend
-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it]
Sent: zaterdag 24 maart 2018 13:18
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] where is it p
Thanks Marc,
your answer is so illuminating.
If it were that easy, I would have downloaded it two months ago.
But it's not on the official channel and there is no mention of this release
anywhere (sorry for you, but not even on Google).
Well... except in the Ceph documentation, of course.
https://www.google.pl/search?dcr=0&source=hp&q=where+can+i+download+centos+7.5&oq=where+can+i+download+centos+7.5
-Original Message-
From: Max Cuttins [mailto:m...@phoenixweb.it]
Sent: zaterdag 24 maart 2018 12:36
To: ceph-users@lists.ceph.com
Subject: [ceph-users] where is it possib
As stated in the documentation, in order to use iSCSI you need to use
CentOS 7.5.
Where can I download it?
Thanks
iSCSI Targets
Traditionally, block-level access to a Ceph storage cluster has been
limited to QEMU and librbd, which is a key enabler for adoption within
OpenStack environments.
This one works, but I want to modify it to download some file; I am not
too interested in testing admin caps at this time.
#!/bin/bash
#
# radosgw-admin caps add --uid='' --caps "buckets=read"
file=1MB.bin
bucket=test
key=""
secret=""
host="192.168.1.114:7480"
resource="/${bucket}
On Fri, Mar 23, 2018 at 7:45 PM, Perrin, Christopher (zimkop1)
wrote:
> Hi,
>
> Last week our MDSs started failing one after another, and could not be
> started anymore. After a lot of tinkering I found out that the MDSs crashed after
> trying to rejoin the cluster. The only solution I found that, l
On Sat, Mar 24, 2018 at 11:34 AM, Josh Haft wrote:
>
>
> On Fri, Mar 23, 2018 at 8:49 PM, Yan, Zheng wrote:
>>
>> On Fri, Mar 23, 2018 at 9:50 PM, Josh Haft wrote:
>> > On Fri, Mar 23, 2018 at 12:14 AM, Yan, Zheng wrote:
>> >>
>> >> On Fri, Mar 23, 2018 at 5:14 AM, Josh Haft wrote:
>> >> > Hel
Hi,
what version of ceph are you using? What is the HW config of your OSD nodes?
Have you checked your disks for errors (dmesg, smartctl)?
What status is ceph reporting? (ceph -s)
What is the saturation level of your ceph? (ceph df)
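For example (generic commands; the device name is a placeholder for one of your OSD disks):

dmesg -T | grep -iE 'error|fail'   # kernel-level I/O errors
smartctl -a /dev/sdb               # SMART health of one OSD disk
ceph -s                            # overall cluster status
ceph df                            # raw and per-pool space utilization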
Kind regards,
Laszlo
Hi guys,
We are running a production OpenStack backend backed by Ceph.
At present, we are facing an issue related to high iowait in VMs; in some
MySQL VMs, we sometimes see iowait reach abnormally high peaks which lead to
an increase in slow queries, even though load is stable (we test with a script simulating
real lo
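A generic way to narrow this down from the cluster side (my suggestion, not something taken from the thread) is to look for slow or overloaded OSDs:

ceph osd perf      # per-OSD commit/apply latency; a few outliers usually mean slow disks
ceph osd df tree   # utilization and PG count per OSD, to spot imbalance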