From what I've heard, XFS has problems on ARM. Use btrfs, or (I
believe?) ext4 + BlueStore will work.
On Sun, Mar 11, 2018 at 9:49 PM, Christian Wuerdig wrote:
> Hm, so you're running OSD nodes with 2GB of RAM and 2x10TB = 20TB of
> storage? Literally everything posted on this list in relation to H
I was following this conversation on tracker and got the same question.
I've got a situation with slow requests and had no idea how to find
the reason. Finally I found it, but only because I knew I had upgraded the
Mellanox drivers on one host and just decided to check the IB config (and the
root was
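For anyone else hunting the source of slow requests, a couple of commands often help narrow it down (just a sketch; the OSD id is a placeholder):

  ceph health detail                      # lists which OSDs are reporting slow requests
  ceph daemon osd.12 dump_historic_ops    # run on that OSD's host; shows where recent slow ops spent their time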
On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
> 2) I understand that before switching the path, the initiator will send a
> TMF ABORT. Can we pass this down to the same abort_request() function
> in osd_client that is used for osd_request_timeout expiry?
IIUC, the existing abort_re
Hello all,
I am not sure RBD discard is working in my setup, and I am asking
for your help.
(I searched this mailing list for related messages and found one by
Nathan Harper from 29 Jan 2018, "Debugging fstrim issues", which
however mentions trimming was masked by logging... so I
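One way to verify discard end to end (a sketch; the pool, image and mount point are placeholders):

  rbd du rbd/myimage      # note the used size
  fstrim -v /mnt/rbd      # issue discards from the mounted filesystem
  rbd du rbd/myimage      # the used size should drop if discard is reaching the image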
Hi all,
Is there a way to mount the ceph kernel client with the nofail option?
I get an invalid argument error when trying to mount ceph with the nofail
option, both from fstab and from mount:
mon01,mon02,mon03:/ /mnt/ceph ceph
name=cephfs,secretfile=/etc/ceph/secret,noatime,nofail 0 0
or
[root@osd003 ~]# mount -t ceph
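Presumably the command-line attempt was the equivalent of the fstab entry above, roughly (reconstructed here for illustration, not the poster's exact command):

  mount -t ceph mon01,mon02,mon03:/ /mnt/ceph \
    -o name=cephfs,secretfile=/etc/ceph/secret,noatime,nofail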
Dear moderator, I subscribed to the ceph list today, could you please post my
message?
-- Forwarded message --
From: Sergey Kotov
Date: 2018-03-06 10:52 GMT+03:00
Subject: [ceph bad performance], can't find a bottleneck
To: ceph-users@lists.ceph.com
Cc: Житенев Алексей , Anna Anikina
Hi All,
how do you handle bucket-notifications?
Is there any known piece of software we can put in front of ceph-rgw?
Or anything else?
Best regards,
Alex
On Mon, Mar 12, 2018 at 7:53 AM, Дробышевский, Владимир wrote:
>
> I was following this conversation on tracker and got the same question. I've
> got a situation with slow requests and had no idea how to find the
> reason. Finally I found it, but only because I knew I had upgraded the Mellanox
>
Figured I would see if anyone has seen this or can see something I am doing
wrong.
Upgrading all of my daemons from 12.2.2 to 12.2.4.
Followed the documentation: upgraded mons, mgrs, osds, then MDSs, in that order.
All was fine, until the MDSs.
I have two MDSs in an Active:Standby config. I dec
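For reference, a few commands that help confirm where each daemon stands during an upgrade like this (a general sketch, not specific to the issue described above):

  ceph versions     # shows which release each mon/mgr/osd/mds is actually running
  ceph mds stat     # summary of MDS ranks and standbys
  ceph fs status    # per-filesystem view of active and standby MDS daemons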
Hi,
See:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/025092.html
Might be of interest.
Dietmar
On 12 March 2018 18:19:51 CET, Reed Dier wrote:
>Figured I would see if anyone has seen this or can see something I am
>doing wrong.
>
>Upgrading all of my daemons from 12.2.2
Good eye, thanks Dietmar.
Glad to know this isn't a standard issue; hopefully anything in the future will
get caught and/or make it into the release notes.
Thanks,
Reed
> On Mar 12, 2018, at 12:55 PM, Dietmar Rieder wrote:
>
> Hi,
>
> See:
> http://lists.ceph.com/pipermail/ceph-users-ceph.
On 2018-03-12 14:23, David Disseldorp wrote:
> On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
>
>> 2) I understand that before switching the path, the initiator will send a
>> TMF ABORT. Can we pass this down to the same abort_request() function
>> in osd_client that is used for osd_r
Hi,
Try increasing the queue depth from the default of 128 to 1024:
rbd map image-XX -o queue_depth=1024
Also, if you run multiple rbd images/fio tests, do you get higher
combined performance?
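For example, something along these lines shows whether the aggregate scales (a sketch; image names, pool and device paths are placeholders):

  rbd map rbd/image-01 -o queue_depth=1024
  rbd map rbd/image-02 -o queue_depth=1024
  fio --name=img1 --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 &
  fio --name=img2 --filename=/dev/rbd1 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 &
  wait    # compare the combined IOPS against a single-image run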
Maged
On 2018-03-12 17:16, Sergey Kotov wrote:
> Dear moderator, I subscribed to the ceph list today, cou
Hi Maged,
On Mon, 12 Mar 2018 20:41:22 +0200, Maged Mokhtar wrote:
> I was thinking we would get the block request then loop down to all its
> osd requests and cancel those using the same osd request cancel
> function.
Until we can be certain of termination, I don't think it makes sense to
cha
On Mon, Mar 12, 2018 at 9:54 AM, Fulvio Galeazzi wrote:
> Hello all,
> I am not sure RBD discard is working in my setup, and I am asking for
> your help.
> (I searched this mailing list for related messages and found one by
> Nathan Harper last 29th Jan 2018 "Debugging fstrim issues" w
On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote:
> On 2018-03-12 14:23, David Disseldorp wrote:
>
> On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
>
> 2) I understand that before switching the path, the initiator will send a
> TMF ABORT. Can we pass this down to the same abort_reque
On 2018-03-12 21:00, Ilya Dryomov wrote:
> On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote:
>
>> On 2018-03-12 14:23, David Disseldorp wrote:
>>
>> On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
>>
>> 2) I understand that before switching the path, the initiator will send a
>> TM
Quick update:
adding the following to your config:
rgw log http headers = "http_authorization"
rgw ops log socket path = /tmp/rgw
rgw enable ops log = true
rgw enable usage log = true
and you can now run:
nc -U /tmp/rgw | ./jq --stream 'fromstream(1|truncate_stream(inputs))'
{
"time": "2018-03-12
Hello,
I was trying to connect to Ceph with librados and I ran into this problem:
(0x7f4a700111d0 sd=16 :0 s=1 pgs=0 cs=0 l=1 c=0x7f4a700130b0).fault
Has anyone seen this problem, and does anyone know how to solve it?
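That .fault line comes from the messenger layer and usually points at a connectivity or authentication problem rather than the librados code itself. A couple of quick checks (a sketch; assumes the same ceph.conf and keyring the librados client uses):

  ceph -s                  # confirm the cluster is reachable with the CLI at all
  ceph -s --debug-ms=1     # messenger-level logging shows which peer the connection faults against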
Hello,
On Sat, 10 Mar 2018 16:14:53 +0100 Vincent Godin wrote:
> Hi,
>
> As I understand it, you'll have one RAID1 of two SSDs for 12 HDDs. A
> WAL is used for all writes on your host.
This isn't filestore; AFAIK the WAL/DB will be used for small writes only
to keep latency with Bluestore aki
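For context, this is roughly how an OSD with its Bluestore DB/WAL on a separate SSD is created (a sketch; device names are placeholders):

  ceph-volume lvm create --bluestore --data /dev/sdc \
      --block.db /dev/nvme0n1p1    # the WAL lives on the DB device unless --block.wal is also given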