Re: [ceph-users] Obtaining cephfs client address/id from the host that mounted it

2018-02-09 Thread Ilya Dryomov
On Fri, Feb 9, 2018 at 12:05 PM, Mauricio Garavaglia wrote: > Hello, > Is it possible to get the cephfs client id/address in the host that mounted > it, in the same way we can get the address on rbd mapped volumes looking at > /sys/bus/rbd/devices/*/client_addr? No, not without querying the serve
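For comparison, the rbd sysfs attribute mentioned above, plus one server-side way to identify kernel CephFS clients; a hedged sketch in which mds.a is an illustrative daemon name:

```
# krbd exposes the client address per mapped device on kernels that provide it:
cat /sys/bus/rbd/devices/*/client_addr

# A kernel cephfs mount has no equivalent sysfs entry; one server-side option
# is to list MDS sessions (run on the MDS host, via the admin socket) and
# match the mount by address or hostname:
ceph daemon mds.a session ls
```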

Re: [ceph-users] rbd feature overheads

2018-02-12 Thread Ilya Dryomov
On Mon, Feb 12, 2018 at 6:25 AM, Blair Bethwaite wrote: > Hi all, > > Wondering if anyone can clarify whether there are any significant overheads > from rbd features like object-map, fast-diff, etc. I'm interested in both > performance overheads from a latency and space perspective, e.g., can > ob

Re: [ceph-users] rbd feature overheads

2018-02-13 Thread Ilya Dryomov
On Tue, Feb 13, 2018 at 1:24 AM, Blair Bethwaite wrote: > Thanks Ilya, > > We can probably handle ~6.2MB for a 100TB volume. Is it reasonable to expect > a librbd client such as QEMU to only hold one object-map per guest? Yes, I think so. Thanks, Ilya
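The ~6.2 MB figure quoted above follows from the object map keeping two bits of state per object; a back-of-the-envelope check assuming the default 4 MiB object size:

```
# 100 TiB / 4 MiB objects = 26,214,400 objects, 2 bits of state each:
objects=$(( 100 * 1024 * 1024 * 1024 * 1024 / (4 * 1024 * 1024) ))
echo "$objects objects, $(( objects * 2 / 8 )) bytes"   # ~6,553,600 bytes, about 6.25 MiB
```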

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Ilya Dryomov
On Mon, Aug 22, 2016 at 3:13 PM, Nick Fisk wrote: > Hi Jason, > > Here is my initial attempt at using the Watch/Notify support to be able to > remotely fsfreeze a filesystem on a RBD. Please note this > was all very new to me and so there will probably be a lot of things that > haven't been done

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-22 Thread Ilya Dryomov
On Fri, Jul 8, 2016 at 5:02 AM, Jason Dillaman wrote: > librbd pseudo-automatically handles this by flushing the cache to the > snapshot when a new snapshot is created, but I don't think krbd does the > same. If it doesn't, it would probably be a nice addition to the block > driver to support the

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-22 Thread Ilya Dryomov
On Mon, Aug 22, 2016 at 3:17 PM, Nick Fisk wrote: > Hope it's useful to someone > > https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6 Make sure your kernel is 4.4 or later - there was a 2M readahead limit imposed by the memory management subsystem until 4.4. Thanks,
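The linked gist is not reproduced here; a minimal sketch of such a rule, under the stated assumption that rbd block devices expose queue/read_ahead_kb, with an illustrative 4 MB value:

```
# Sketch only; value and filename are illustrative. On kernels older than 4.4
# the memory management layer caps the effective readahead at 2M regardless.
cat > /etc/udev/rules.d/99-rbd-readahead.rules <<'EOF'
ACTION=="add", KERNEL=="rbd[0-9]*", ATTR{queue/read_ahead_kb}="4096"
EOF
udevadm control --reload-rules
```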

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Ilya Dryomov
On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk wrote: >> -Original Message- >> From: Wido den Hollander [mailto:w...@42on.com] >> Sent: 22 August 2016 18:22 >> To: ceph-users ; n...@fisk.me.uk >> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's >> >> >> > Op 22 augustus 2016

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Ilya Dryomov
On Tue, Aug 23, 2016 at 6:15 PM, Nick Fisk wrote: > >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Alex Gorbachev >> Sent: 23 August 2016 16:43 >> To: Wido den Hollander >> Cc: ceph-users ; Nick Fisk >> Subject: Re: [ceph-users] udev

Re: [ceph-users] latest ceph build questions

2016-08-24 Thread Ilya Dryomov
On Fri, Aug 19, 2016 at 1:21 PM, Dzianis Kahanovich wrote: > Related to fresh ceph build troubles, main question: > Are cmake now preferred? Or legacy gnu make still supported too? No, autotools files are about to be removed from the master branch. Older releases will continue to be built with au

Re: [ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-24 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 4:56 PM, Ivan Grcic wrote: > Dear Cephers, > > For some time now I am running a small Ceph cluster made of 4OSD + > 1MON Servers, and evaluating possible Ceph usages in our storage > infrastructure. Until few weeks ago I was running Hammer release, > using mostly RBD Clien

Re: [ceph-users] ceph rbd and pool quotas

2016-08-24 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 11:13 PM, Thomas wrote: > Hi guys, > > quick question in regards to ceph -> rbd -> quotas per pool. I'd like to set > a quota with max_bytes of a pool so that I can limit the amount a ceph > client can use, like so: > > ceph osd pool set-quota pool1 max_bytes $(( 1024 * 102
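The command under discussion, completed as a hedged example; pool name and size are illustrative:

```
ceph osd pool set-quota pool1 max_bytes $(( 100 * 1024 * 1024 * 1024 ))   # 100 GiB
ceph osd pool get-quota pool1
# setting the quota back to 0 removes it:
ceph osd pool set-quota pool1 max_bytes 0
```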

Re: [ceph-users] ceph rbd and pool quotas

2016-08-24 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 11:27 PM, Thomas wrote: > Hi Ilya, > > Thanks for the speedy reply - unfortunately increasing the quota doesn't > help, the process keeps being stuck forever. Or do you mean with kernel 4.7 > this would work after upping the quota? Correct. Thanks, Ilya

Re: [ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-26 Thread Ilya Dryomov
On Wed, Aug 24, 2016 at 5:17 PM, Ivan Grcic wrote: > Hi Ilya, > > there you go, and thank you for your time. > > BTW should one get a crushmap from osdmap doing something like this: > > osdmaptool --export-crush /tmp/crushmap /tmp/osdmap > crushtool -c crushmap -o crushmap.3518 Yes. You can also
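For reference, a hedged sketch of extracting and round-tripping the CRUSH map from a saved osdmap; paths are illustrative:

```
osdmaptool /tmp/osdmap --export-crush /tmp/crushmap   # pull the binary CRUSH map out of the osdmap
crushtool -d /tmp/crushmap -o /tmp/crushmap.txt       # decompile to editable text
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new   # recompile after editing
```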

Re: [ceph-users] Corrupt full osdmap on RBD Kernel Image mount (Jewel 10.2.2)

2016-08-29 Thread Ilya Dryomov
On Mon, Aug 29, 2016 at 2:38 PM, Ivan Grcic wrote: > Hi Ilya, > > yes, thank you that was the issue. I was wondering why do my mons > exchange so much data :) > > I didn't know we index the buckets using the actual id value, don't > recall I red that somewhere. > One shouldn't be too imaginative w

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-13 Thread Ilya Dryomov
On Tue, Sep 13, 2016 at 12:08 PM, Nikolay Borisov wrote: > Hello list, > > > I have the following cluster: > > ceph status > cluster a2fba9c1-4ca2-46d8-8717-a8e42db14bb0 > health HEALTH_OK > monmap e2: 5 mons at > {alxc10=x:6789/0,alxc11=x:6789/0,alxc5=x:6789/0,alxc6=xxx

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-13 Thread Ilya Dryomov
On Tue, Sep 13, 2016 at 1:59 PM, Nikolay Borisov wrote: > > > On 09/13/2016 01:33 PM, Ilya Dryomov wrote: >> On Tue, Sep 13, 2016 at 12:08 PM, Nikolay Borisov wrote: >>> Hello list, >>> >>> >>> I have the following cluster: >>> >>

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-13 Thread Ilya Dryomov
On Tue, Sep 13, 2016 at 4:11 PM, Nikolay Borisov wrote: > > > On 09/13/2016 04:30 PM, Ilya Dryomov wrote: > [SNIP] >> >> Hmm, it could be about whether it is able to do journal replay on >> mount. When you mount a snapshot, you get a read-only block device; >&g

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-14 Thread Ilya Dryomov
On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov wrote: > > > On 09/14/2016 09:55 AM, Adrian Saul wrote: >> >> I found I could ignore the XFS issues and just mount it with the appropriate >> options (below from my backup scripts): >> >> # >> # Mount with nouuid (conflicting XFS) a
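A minimal sketch of the approach quoted above, assuming an XFS filesystem on a mapped snapshot; image and device names are illustrative. norecovery skips log replay on the read-only device and nouuid avoids the UUID clash with the still-mounted original:

```
rbd map rbd/myimage@mysnap                        # snapshots are mapped read-only
mount -o ro,norecovery,nouuid /dev/rbd1 /mnt/snap
```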

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-14 Thread Ilya Dryomov
On Wed, Sep 14, 2016 at 3:30 PM, Nikolay Borisov wrote: > > > On 09/14/2016 02:55 PM, Ilya Dryomov wrote: >> On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov wrote: >>> >>> >>> On 09/14/2016 09:55 AM, Adrian Saul wrote: >>>> >>>>

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-15 Thread Ilya Dryomov
On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov wrote: > > > On 09/15/2016 09:22 AM, Nikolay Borisov wrote: >> >> >> On 09/14/2016 05:53 PM, Ilya Dryomov wrote: >>> On Wed, Sep 14, 2016 at 3:30 PM, Nikolay Borisov wrote: >>>> >>>> &

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-15 Thread Ilya Dryomov
On Thu, Sep 15, 2016 at 12:54 PM, Nikolay Borisov wrote: > > > On 09/15/2016 01:24 PM, Ilya Dryomov wrote: >> On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov >> wrote: >>> >>> >>> On 09/15/2016 09:22 AM, Nikolay Borisov wrote: >>>> &g

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-15 Thread Ilya Dryomov
On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote: > > [snipped] > > cat /sys/bus/rbd/devices/47/client_id > client157729 > cat /sys/bus/rbd/devices/1/client_id > client157729 > > Client client157729 is alxc13, based on correlation by the ip address > shown by the rados -p ... command. So it'

Re: [ceph-users] Jewel Docs | error on mount.ceph page

2016-09-20 Thread Ilya Dryomov
On Tue, Sep 20, 2016 at 7:48 PM, David wrote: > Sorry I don't know the correct way to report this. > > Potential error on this page: > > on http://docs.ceph.com/docs/jewel/man/8/mount.ceph/ > > Currently: > > rsize > int (bytes), max readahead, multiple of 1024, Default: 524288 (512*1024) > > Shou
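For context, the two options in question as they appear on a kernel cephfs mount; a hedged example with a placeholder monitor address and illustrative sizes (rsize bounds the size of a single read, rasize the readahead):

```
mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,rsize=16777216,rasize=67108864
```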

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-22 Thread Ilya Dryomov
On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote: > On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote: >> >> [snipped] >> >> cat /sys/bus/rbd/devices/47/client_id >> client157729 >> cat /sys/bus/rbd/devices/1/client_id >> client157729

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-26 Thread Ilya Dryomov
On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote: > > > On 09/22/2016 06:36 PM, Ilya Dryomov wrote: >> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote: >>> On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote: >>>> >>>> [snipped]

Re: [ceph-users] Consistency problems when taking RBD snapshot

2016-09-26 Thread Ilya Dryomov
On Mon, Sep 26, 2016 at 11:13 AM, Ilya Dryomov wrote: > On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote: >> >> >> On 09/22/2016 06:36 PM, Ilya Dryomov wrote: >>> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote: >>>> On Thu, Sep 1

Re: [ceph-users] Crash in ceph_readdir.

2016-10-03 Thread Ilya Dryomov
On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote: > Hello, > > I've been investigating the following crash with cephfs: > > [8734559.785146] general protection fault: [#1] SMP > [8734559.791921] ioatdma shpchp ipmi_devintf ipmi_si ipmi_msghandler > tcp_scalable ib_qib dca ib_mad ib_cor

Re: [ceph-users] Crash in ceph_readdir.

2016-10-03 Thread Ilya Dryomov
On Mon, Oct 3, 2016 at 2:37 PM, Nikolay Borisov wrote: > > > On 10/03/2016 03:27 PM, Ilya Dryomov wrote: >> On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote: >>> Hello, >>> >>> I've been investigating the following crash with cephfs: >>

Re: [ceph-users] Crash in ceph_read_iter->__free_pages due to null page

2016-10-10 Thread Ilya Dryomov
On Fri, Oct 7, 2016 at 1:40 PM, Nikolay Borisov wrote: > Hello, > > I've encountered yet another cephfs crash: > > [990188.822271] BUG: unable to handle kernel NULL pointer dereference at > 001c > [990188.822790] IP: [] __free_pages+0x5/0x30 > [990188.823090] PGD 180dd8f067 PUD 1bf272

Re: [ceph-users] Map RBD Image with Kernel 3.10.0+10

2016-10-12 Thread Ilya Dryomov
On Wed, Oct 12, 2016 at 10:35 PM, Mike Jacobacci wrote: > Figured it out finally!! RBD images must be in format 1, I had to export > the old image and import it as format 1, trying to create format 1 image > fails, it's doesn't like the --image-format command:" > > "rbd: the argument for option '

Re: [ceph-users] Map RBD Image with Kernel 3.10.0+10

2016-10-12 Thread Ilya Dryomov
On Wed, Oct 12, 2016 at 10:51 PM, Mike Jacobacci wrote: > Hi Ilya, > > I tried disabling feature sets, but nothing worked... What features are > different in Format 1 that's different from Format 2? Format 1 images don't have any additional features, so there is nothing to enable or disable there

Re: [ceph-users] Kernel Versions for KVM Hypervisors

2016-10-20 Thread Ilya Dryomov
On Thu, Oct 20, 2016 at 2:45 PM, David Riedl wrote: > Hi cephers, > > I want to use the newest features of jewel on my cluster. I already updated > all kernels on the OSD nodes to the following version: > 4.8.2-1.el7.elrepo.x86_64. > > The KVM hypervisors are running the CentOS 7 stock kernel ( >

Re: [ceph-users] Crash in ceph_read_iter->__free_pages due to null page

2016-10-21 Thread Ilya Dryomov
On Fri, Oct 21, 2016 at 5:01 PM, Markus Blank-Burian wrote: > Hi, > > is there any update regarding this bug? Nikolay's patch made mainline yesterday and should show up in various stable kernels in the forthcoming weeks. > > I can easily reproduce this issue on our cluster with the following > s

Re: [ceph-users] Ceph rbd jewel

2016-10-21 Thread Ilya Dryomov
On Fri, Oct 21, 2016 at 5:50 PM, fridifree wrote: > Hi everyone, > I'm using ceph jewel running on Ubuntu 16.04 (kernel 4.4) and Ubuntu 14.04 > clients (kernel 3.13) > When trying to map rbd to the clients and to servers I get error about > feature set mismatch which I didnt get on hammer. > Tried

Re: [ceph-users] Ceph rbd jewel

2016-10-22 Thread Ilya Dryomov
On Sat, Oct 22, 2016 at 8:27 AM, fridifree wrote: > Hi, > > What is the ceph tunables? how it affects the cluster? You can read more about CRUSH tunables at [1], but tunables aren't in play here - it's just the subject of that email. > I upgrade my kernel I do not understand why I have to disabl

Re: [ceph-users] Ceph and TCP States

2016-10-24 Thread Ilya Dryomov
On Mon, Oct 24, 2016 at 11:29 AM, Nick Fisk wrote: >> -Original Message- >> From: Yan, Zheng [mailto:uker...@gmail.com] >> Sent: 24 October 2016 10:19 >> To: Gregory Farnum >> Cc: Nick Fisk ; Zheng Yan ; Ceph Users >> >> Subject: Re: [ceph-users] Ceph and TCP States >> >> X-Assp-URIBL f

Re: [ceph-users] effect of changing ceph osd primary affinity

2016-10-24 Thread Ilya Dryomov
On Fri, Oct 21, 2016 at 10:35 PM, Ridwan Rashid Noel wrote: > Thank you for your reply Greg. Is there any detailed resource that describe > about how the primary affinity changing works? All I got from searching was > one paragraph from the documentation. No, probably nothing detailed. There isn

Re: [ceph-users] Ceph and TCP States

2016-10-24 Thread Ilya Dryomov
On Mon, Oct 24, 2016 at 11:50 AM, Nick Fisk wrote: >> -Original Message- >> From: Ilya Dryomov [mailto:idryo...@gmail.com] >> Sent: 24 October 2016 10:33 >> To: Nick Fisk >> Cc: Yan, Zheng ; Gregory Farnum ; >> Zheng Yan ; Ceph Users > us...@list

Re: [ceph-users] effect of changing ceph osd primary affinity

2016-11-14 Thread Ilya Dryomov
On Mon, Nov 14, 2016 at 9:38 AM, Ridwan Rashid Noel wrote: > Hi Ilya, > > I tried to test the primary-affinity change so I have setup a small cluster > to test. I am trying to understand how the different components of Ceph > interacts in the event of change of primary-affinity of any osd. I am >

Re: [ceph-users] 4.8 kernel cephfs issue reading old filesystems

2016-11-14 Thread Ilya Dryomov
On Mon, Nov 14, 2016 at 10:05 PM, John Spray wrote: > Hi folks, > > For those with cephfs filesystems created using older versions of > Ceph, you may be affected by this issue if you try to access your > filesystem using the 4.8 or 4.9-rc kernels: > http://tracker.ceph.com/issues/17825 > > If your

Re: [ceph-users] High ops/s with kRBD and "--object-size 32M"

2016-11-28 Thread Ilya Dryomov
On Mon, Nov 28, 2016 at 6:20 PM, Francois Blondel wrote: > Hi *, > > I am currently testing different scenarios to try to optimize sequential > read and write speeds using Kernel RBD. > > I have two block devices created with : > rbd create block1 --size 500G --pool rbd --image-feature layering
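For comparison with the default 4 MiB objects used above, a hedged example of creating an image with 32 MiB objects as in the subject line; name and size are illustrative:

```
rbd create block2 --size 500G --pool rbd --object-size 32M --image-feature layering
rbd info rbd/block2   # should report "order 25 (32768 kB objects)"
```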

Re: [ceph-users] rbd_default_features

2016-12-02 Thread Ilya Dryomov
On Thu, Dec 1, 2016 at 10:31 PM, Florent B wrote: > Hi, > > On 12/01/2016 10:26 PM, Tomas Kukral wrote: >> >> I wasn't successful trying to find table with indexes of features ... >> does anybody know? > > In sources : > https://github.com/ceph/ceph/blob/master/src/include/rbd/features.h There is
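The table being asked about is just the set of bit values in features.h; for reference, the commonly used bits are layering=1, striping=2, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32 and journaling=64, so for example:

```
# 61 = layering(1) + exclusive-lock(4) + object-map(8) + fast-diff(16) + deep-flatten(32)
rbd create test1 --size 10G \
    --image-feature layering,exclusive-lock,object-map,fast-diff,deep-flatten
# or numerically in ceph.conf:
#   rbd default features = 61
```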

Re: [ceph-users] rbd showmapped -p and --image options missing in rbd version 10.2.4, why?

2016-12-09 Thread Ilya Dryomov
On Fri, Dec 9, 2016 at 10:52 AM, Stéphane Klein wrote: > Hi, > > with: rbd version 0.80.7, `rbd showmapped` have this options: > > * -p, --pool source pool name > * --imageimage name > > This options missing in rdb version 10.2.4 > > Why ? It is a regression ?

Re: [ceph-users] Server crashes on high mount volume

2016-12-12 Thread Ilya Dryomov
On Mon, Dec 12, 2016 at 9:16 PM, Diego Castro wrote: > I didn't have a try, i'll let you know how did it goes.. This should be fixed by commit [1] upstream and it was indeed backported to 7.3. [1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=811c6688774613a78bfa020

Re: [ceph-users] Server crashes on high mount volume

2016-12-13 Thread Ilya Dryomov
On Tue, Dec 13, 2016 at 2:45 PM, Diego Castro wrote: > Thank you for the tip. > Just found out the repo is empty, am i doing something wrong? > > http://mirror.centos.org/centos/7/cr/x86_64/Packages/ The kernel in the OS repo seems new enough: http://mirror.centos.org/centos/7/os/x86_64/Packages

Re: [ceph-users] Performance measurements CephFS vs. RBD

2016-12-13 Thread Ilya Dryomov
On Fri, Dec 9, 2016 at 9:42 PM, Gregory Farnum wrote: > On Fri, Dec 9, 2016 at 6:58 AM, plataleas wrote: >> Hi all >> >> We enabled CephFS on our Ceph Cluster consisting of: >> - 3 Monitor servers >> - 2 Metadata servers >> - 24 OSD (3 OSD / Server) >> - Spinning disks, OSD Journal is on SSD >>

Re: [ceph-users] 10.2.3: Howto disable cephx_sign_messages and preventing a LogFlood

2016-12-14 Thread Ilya Dryomov
On Wed, Dec 14, 2016 at 5:10 PM, Bjoern Laessig wrote: > Hi, > > i triggered a Kernel bug in the ceph-krbd code > * http://www.spinics.net/lists/ceph-devel/msg33802.html The fix is ready and is set to be merged into 4.10-rc1. How often can you hit it? > > Ilya Dryomov wro

Re: [ceph-users] 10.2.3: Howto disable cephx_sign_messages and preventing a LogFlood

2016-12-15 Thread Ilya Dryomov
On Thu, Dec 15, 2016 at 4:31 PM, Bjoern Laessig wrote: > On Mi, 2016-12-14 at 18:01 +0100, Ilya Dryomov wrote: >> On Wed, Dec 14, 2016 at 5:10 PM, Bjoern Laessig >> wrote: >> > i triggered a Kernel bug in the ceph-krbd code >> > * http://www.spinics.net/lists/c
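For reference, the switch named in the subject line is a ceph.conf option; a minimal sketch only, since disabling signing trades away cephx message integrity and is a workaround rather than a recommendation:

```
# ceph.conf fragment, on clients and daemons
[global]
    cephx sign messages = false
```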

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Ilya Dryomov
On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein wrote: > I have configured: > > ``` > ceph osd crush tunables firefly > ``` If it gets to rm, then it's probably not tunables. Are you running these commands by hand? Anything in dmesg? Thanks, Ilya

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Ilya Dryomov
On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein wrote: > > > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov : >> >> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein >> wrote: >> > I have configured: >> > >> > ``` >> > ceph osd crush tunab

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Ilya Dryomov
On Wed, Dec 21, 2016 at 9:42 PM, Stéphane Klein wrote: > > > 2016-12-21 19:51 GMT+01:00 Ilya Dryomov : >> >> On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein >> wrote: >> >> >> > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov : >> >> >&g

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Ilya Dryomov
On Wed, Dec 21, 2016 at 10:55 PM, Stéphane Klein wrote: > >> Not sure what's going on here. Using firefly version of the rbd CLI >> tool isn't recommended of course, but doesn't seem to be _the_ problem. >> Can you try some other distro with an equally old ceph - ubuntu trusty >> perhaps? > > > S

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Ilya Dryomov
On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein wrote: > > 2016-12-21 23:06 GMT+01:00 Ilya Dryomov : >> >> What's the output of "cat /proc/$(pidof rm)/stack? > > > root@ceph-client-3:/home/vagrant# cat /proc/2315/stack > [] sleep_on_page+

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Ilya Dryomov
On Wed, Dec 21, 2016 at 11:36 PM, Stéphane Klein wrote: > > > 2016-12-21 23:33 GMT+01:00 Ilya Dryomov : >> >> On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein >> wrote: >> > >> > 2016-12-21 23:06 GMT+01:00 Ilya Dryomov : >> >>

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-22 Thread Ilya Dryomov
On Thu, Dec 22, 2016 at 8:32 AM, Stéphane Klein wrote: > > > 2016-12-21 23:39 GMT+01:00 Stéphane Klein : >> >> >> >> 2016-12-21 23:33 GMT+01:00 Ilya Dryomov : >>> >>> What if you boot ceph-client-3 with >512M memory, say 2G? >> >&g

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-26 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev wrote: > > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev > wrote: > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > > wrote: > >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman > >> wrote: > >>> > >>> > >>> On Wed, Jul 25, 2018 at 5:41 PM

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-26 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev wrote: > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > wrote: > > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote: > >> > >> > >> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev > >> wrote: > >>> > >>> I am not sure this related to RBD,

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-27 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev wrote: > > On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote: > > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev > > wrote: > >> > >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > >> wrote:

Re: [ceph-users] rbdmap service issue

2018-08-01 Thread Ilya Dryomov
On Wed, Aug 1, 2018 at 11:13 AM wrote: > > Hi! > > I find a rbd map service issue: > [root@dx-test ~]# systemctl status rbdmap > ● rbdmap.service - Map RBD devices >Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor > preset: disabled) >Active: active (exited) (Resul
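The rbdmap unit maps whatever is listed in /etc/ceph/rbdmap at start; a minimal sketch of an entry, with pool, image, user and keyring path as illustrative placeholders:

```
# /etc/ceph/rbdmap
# poolname/imagename        id=client-id,keyring=path-to-keyring
rbd/dx-test                 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
```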

Re: [ceph-users] different size of rbd

2018-08-02 Thread Ilya Dryomov
On Thu, Aug 2, 2018 at 12:49 PM wrote: > > I create a rbd named dx-app with 500G, and map as rbd0. > > But i find the size is different with different cmd: > > [root@dx-app docker]# rbd info dx-app > rbd image 'dx-app': > size 32000 GB in 8192000 objects < > order 22 (4096 kB objects)

Re: [ceph-users] a little question about rbd_discard parameter len

2018-08-06 Thread Ilya Dryomov
On Mon, Aug 6, 2018 at 9:10 AM Will Zhao wrote: > > Hi all: extern "C" int rbd_discard(rbd_image_t image, uint64_t ofs, > uint64_t len) > { > librbd::ImageCtx *ictx = (librbd::ImageCtx *)image; > tracepoint(librbd, discard_enter, ictx, ictx->name.c_str(), > ictx->snap_name.c_str(), ictx->read_only

Re: [ceph-users] different size of rbd

2018-08-06 Thread Ilya Dryomov
On Mon, Aug 6, 2018 at 3:24 AM Dai Xiang wrote: > > On Thu, Aug 02, 2018 at 01:04:46PM +0200, Ilya Dryomov wrote: > > On Thu, Aug 2, 2018 at 12:49 PM wrote: > > > > > > I create a rbd named dx-app with 500G, and map as rbd0. > > > > > > But

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-08-06 Thread Ilya Dryomov
On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev wrote: > > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev > wrote: > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > > wrote: > >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman > >> wrote: > >>> > >>> > >>> On Wed, Jul 25, 2018 at 5:41 PM

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-08-06 Thread Ilya Dryomov
On Mon, Aug 6, 2018 at 8:13 PM Ilya Dryomov wrote: > > On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev > wrote: > > > > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev > > wrote: > > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev > > > wrote:

Re: [ceph-users] bad crc/signature errors

2018-08-13 Thread Ilya Dryomov
On Mon, Aug 13, 2018 at 2:49 PM Nikola Ciprich wrote: > > Hi Paul, > > thanks, I'll give it a try.. do you think this might head to > upstream soon? for some reason I can't review comments for > this patch on github.. Is some new version of this patch > on the way, or can I try to apply this one

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-08-13 Thread Ilya Dryomov
On Mon, Aug 6, 2018 at 8:17 PM Ilya Dryomov wrote: > > On Mon, Aug 6, 2018 at 8:13 PM Ilya Dryomov wrote: > > > > On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev > > wrote: > > > > > > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev > > > w

Re: [ceph-users] bad crc/signature errors

2018-08-14 Thread Ilya Dryomov
On Mon, Aug 13, 2018 at 5:57 PM Nikola Ciprich wrote: > > Hi Ilya, > > hmm, OK, I'm not sure now whether this is the bug which I'm > experiencing.. I've had read_partial_message / bad crc/signature > problem occurance on the second cluster in short period even though > we're on the same ceph ver

Re: [ceph-users] cephfs client version in RedHat/CentOS 7.5

2018-08-20 Thread Ilya Dryomov
On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder wrote: > > Hi Cephers, > > > I wonder if the cephfs client in RedHat/CentOS 7.5 will be updated to > luminous? > As far as I see there is some luminous related stuff that was > backported, however, > the "ceph features" command just reports "jewel" as

Re: [ceph-users] cephfs client version in RedHat/CentOS 7.5

2018-08-21 Thread Ilya Dryomov
On Tue, Aug 21, 2018 at 9:12 AM Dietmar Rieder wrote: > > On 08/20/2018 05:36 PM, Ilya Dryomov wrote: > > On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder > > wrote: > >> > >> Hi Cephers, > >> > >> > >> I wonder if the cephfs client

Re: [ceph-users] cephfs client version in RedHat/CentOS 7.5

2018-08-21 Thread Ilya Dryomov
On Mon, Aug 20, 2018 at 9:49 PM Dan van der Ster wrote: > > On Mon, Aug 20, 2018 at 5:37 PM Ilya Dryomov wrote: > > > > On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder > > wrote: > > > > > > Hi Cephers, > > > > > > > > >

Re: [ceph-users] ceph-container - rbd map failing since upgrade?

2018-08-21 Thread Ilya Dryomov
On Tue, Aug 21, 2018 at 9:19 PM Jacob DeGlopper wrote: > > I'm seeing an error from the rbd map command running in ceph-container; > I had initially deployed this cluster as Luminous, but a pull of the > ceph/daemon container unexpectedly upgraded me to Mimic 13.2.1. > > [root@nodeA2 ~]# ceph vers

Re: [ceph-users] Clients report OSDs down/up (dmesg) nothing in Ceph logs (flapping OSDs)

2018-08-30 Thread Ilya Dryomov
On Thu, Aug 30, 2018 at 1:04 PM Eugen Block wrote: > > Hi again, > > we still didn't figure out the reason for the flapping, but I wanted > to get back on the dmesg entries. > They just reflect what happened in the past, they're no indicator to > predict anything. The kernel client is just that,

Re: [ceph-users] kRBD write performance for high IO use cases

2018-09-08 Thread Ilya Dryomov
On Sat, Sep 8, 2018 at 1:52 AM Tyler Bishop wrote: > > I have a fairly large cluster running ceph bluestore with extremely fast SAS > ssd for the metadata. Doing FIO benchmarks I am getting 200k-300k random > write iops but during sustained workloads of ElasticSearch my clients seem to > hit a

Re: [ceph-users] Safe to use RBD mounts for Docker volumes on containerized Ceph nodes

2018-09-08 Thread Ilya Dryomov
On Sun, Sep 9, 2018 at 6:31 AM David Turner wrote: > > The problem is with the kernel pagecache. If that is still shared in a > containerized environment with the OSDs in containers and RBDs which are > married on The node outside of containers, then it is indeed still a problem. > I would gues

Re: [ceph-users] Force unmap of RBD image

2018-09-10 Thread Ilya Dryomov
On Mon, Sep 10, 2018 at 10:46 AM Martin Palma wrote: > > We are trying to unmap an rbd image form a host for deletion and > hitting the following error: > > rbd: sysfs write failed > rbd: unmap failed: (16) Device or resource busy > > We used commands like "lsof" and "fuser" but nothing is reporte
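When lsof/fuser show no users but unmap still returns EBUSY, recent kernels and rbd releases accept a forced unmap; a hedged example, with the device name illustrative and the caveat that in-flight I/O is failed, so use it with care:

```
rbd unmap -o force /dev/rbd0
```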

Re: [ceph-users] rbd-nbd on CentOS

2018-09-10 Thread Ilya Dryomov
On Mon, Sep 10, 2018 at 7:19 PM David Turner wrote: > > I haven't found any mention of this on the ML and Google's results are all > about compiling your own kernel to use NBD on CentOS. Is everyone that's > using rbd-nbd on CentOS honestly compiling their own kernels for the clients? > This fe

Re: [ceph-users] rbd-nbd on CentOS

2018-09-10 Thread Ilya Dryomov
On Mon, Sep 10, 2018 at 7:46 PM David Turner wrote: > > Now that you mention it, I remember those threads on the ML. What happens if > you use --yes-i-really-mean-it to do those things and then later you try to > map an RBD with an older kernel for CentOS 7.3 or 7.4? Will that mapping > fail

Re: [ceph-users] Get supported features of all connected clients

2018-09-11 Thread Ilya Dryomov
On Tue, Sep 11, 2018 at 1:00 PM Tobias Florek wrote: > > Hi! > > I have a cluster serving RBDs and CephFS that has a big number of > clients I don't control. I want to know what feature flags I can safely > set without locking out clients. Is there a command analogous to `ceph > versions` that s
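On luminous and later clusters there is a close analogue to `ceph versions`; a short example:

```
ceph features   # summarizes the feature/release bits reported by daemons and connected clients
```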

Re: [ceph-users] issued! = cap->implemented in handle_cap_export

2018-09-25 Thread Ilya Dryomov
On Tue, Sep 25, 2018 at 2:05 PM 刘 轩 wrote: > > Hi Ilya: > > I have some questions about the commit > d84b37f9fa9b23a46af28d2e9430c87718b6b044 about the function > handle_cap_export. In which case, issued! = cap->implemented may occur. > > I encountered this kind of mistake in my cluster. Do you

Re: [ceph-users] bcache, dm-cache support

2018-10-10 Thread Ilya Dryomov
On Wed, Oct 10, 2018 at 8:48 PM Kjetil Joergensen wrote: > > Hi, > > We tested bcache, dm-cache/lvmcache, and one more which name eludes me with > PCIe NVME on top of large spinning rust drives behind a SAS3 expander - and > decided this were not for us. > > This was probably jewel with filestor

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-12 Thread Ilya Dryomov
On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote: > On 2018-03-12 14:23, David Disseldorp wrote: > > On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote: > > 2)I undertand that before switching the path, the initiator will send a > TMF ABORT can we pass this to down to the same abort_reque

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-13 Thread Ilya Dryomov
On Mon, Mar 12, 2018 at 8:20 PM, Maged Mokhtar wrote: > On 2018-03-12 21:00, Ilya Dryomov wrote: > > On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote: > > On 2018-03-12 14:23, David Disseldorp wrote: > > On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote: > >

Re: [ceph-users] Bluestore cluster, bad IO perf on blocksize<64k... could it be throttling ?

2018-03-23 Thread Ilya Dryomov
On Wed, Mar 21, 2018 at 6:50 PM, Frederic BRET wrote: > Hi all, > > The context : > - Test cluster aside production one > - Fresh install on Luminous > - choice of Bluestore (coming from Filestore) > - Default config (including wpq queuing) > - 6 nodes SAS12, 14 OSD, 2 SSD, 2 x 10Gb nodes, far mor

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 11:48 AM, wrote: > The stock kernel from Debian is perfect > Spectre / meltdown mitigations are worthless for a Ceph point of view, > and should be disabled (again, strictly from a Ceph point of view) > > If you need the luminous features, using the userspace implementatio

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 2:18 PM, wrote: > On 03/23/2018 12:14 PM, Ilya Dryomov wrote: >> luminous cluster-wide feature bits are supported since kernel 4.13. > > ? > > # uname -a > Linux abweb1 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1 > (2018-01-14) x86_64

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 3:01 PM, wrote: > Ok ^^ > > For Cephfs, as far as I know, quota support is not supported in kernel space > This is not specific to luminous, tho quota support is coming, hopefully in 4.17. Thanks, Ilya

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-26 Thread Ilya Dryomov
On Fri, Mar 23, 2018 at 5:53 PM, Nicolas Huillard wrote: > Le vendredi 23 mars 2018 à 12:14 +0100, Ilya Dryomov a écrit : >> On Fri, Mar 23, 2018 at 11:48 AM, wrote: >> > The stock kernel from Debian is perfect >> > Spectre / meltdown mitigations are worthless fo

Re: [ceph-users] remove big rbd image is very slow

2018-03-26 Thread Ilya Dryomov
On Sat, Mar 17, 2018 at 5:11 PM, shadow_lin wrote: > Hi list, > My ceph version is jewel 10.2.10. > I tired to use rbd rm to remove a 50TB image(without object map because krbd > does't support it).It takes about 30mins to just complete about 3%. Is this > expected? Is there a way to make it faste

Re: [ceph-users] rbd feature map fail

2018-05-15 Thread Ilya Dryomov
On Tue, May 15, 2018 at 10:07 AM, wrote: > Hi, all! > > I use rbd to do something and find below issue: > > when i create a rbd image with feature: > layering,exclusive-lock,object-map,fast-diff > > failed to map: > rbd: sysfs write failed > RBD image feature set mismatch. Try disabling features
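The usual fix for this kind of mismatch is either to strip the features the running kernel does not support or to create images with layering only; a hedged sketch, with the image names as placeholders:

```
rbd feature disable rbd/myimage fast-diff object-map exclusive-lock
rbd map rbd/myimage

# or create new images with only the widely supported krbd feature:
rbd create rbd/myimage2 --size 10G --image-feature layering
```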

Re: [ceph-users] [SUSPECTED SPAM]Re: RBD features and feature journaling performance

2018-05-17 Thread Ilya Dryomov
On Thu, May 17, 2018 at 11:03 AM, Jorge Pinilla López wrote: > Thanks for the info!, I absolutely agree that it should be documented > > Any further info about why journaling feature is so slow? Because everything is written twice: first to the journal and then to the actual data objects. journa

Re: [ceph-users] Poor CentOS 7.5 client performance

2018-05-17 Thread Ilya Dryomov
On Wed, May 16, 2018 at 8:27 PM, Donald "Mac" McCarthy wrote: > CephFS. 8 core atom C2758, 16 GB ram, 256GB ssd, 2.5 GB NIC (supermicro > microblade node). > > Read test: > dd if=/ceph/1GB.test of=/dev/null bs=1M Yup, looks like a kcephfs regression. The performance of the above command is hig

Re: [ceph-users] Poor CentOS 7.5 client performance

2018-05-18 Thread Ilya Dryomov
On Fri, May 18, 2018 at 3:25 PM, Donald "Mac" McCarthy wrote: > Ilya, > Your recommendation worked beautifully. Thank you! > > Is this something that is expected behavior or is this something that should > be filed as a bug. > > I ask because I have just enough experience with ceph at this poi

Re: [ceph-users] Luminous 12.2.4: CephFS kernel client (4.15/4.16) shows up as jewel

2018-05-31 Thread Ilya Dryomov
On Thu, May 31, 2018 at 4:16 AM, Linh Vu wrote: > Hi all, > > > On my test Luminous 12.2.4 cluster, with this set (initially so I could use > upmap in the mgr balancer module): > > > # ceph osd set-require-min-compat-client luminous > > # ceph osd dump | grep client > require_min_compat_client lum

Re: [ceph-users] Luminous 12.2.4: CephFS kernel client (4.15/4.16) shows up as jewel

2018-05-31 Thread Ilya Dryomov
On Thu, May 31, 2018 at 2:39 PM, Heðin Ejdesgaard Møller wrote: > I have encountered the same issue and wrote to the mailing list about it, > with the subject: [ceph-users] krbd upmap support on kernel-4.16 ? > > The odd thing is that I can krbd map an image after setting min compat to > luminou

Re: [ceph-users] How to run MySQL (or other database ) on Ceph using KRBD ?

2018-06-05 Thread Ilya Dryomov
On Tue, Jun 5, 2018 at 4:07 AM, 李昊华 wrote: > Thanks for reading my questions! > > I want to run MySQL on Ceph using KRBD because KRBD is faster than librbd. > And I know KRBD is a kernel module and we can use KRBD to mount the RBD > device on the operating systems. > > It is easy to use command li
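A minimal sketch of the krbd workflow being asked about, assuming an XFS-formatted data volume for the MySQL datadir; pool, image, size and mountpoint are illustrative:

```
rbd create rbd/mysql-data --size 200G --image-feature layering
rbd map rbd/mysql-data              # returns a device, e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /var/lib/mysql
```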

Re: [ceph-users] rbd map hangs

2018-06-07 Thread Ilya Dryomov
On Thu, Jun 7, 2018 at 5:12 AM, Tracy Reed wrote: > > Hello all! I'm running luminous with old style non-bluestore OSDs. ceph > 10.2.9 clients though, haven't been able to upgrade those yet. > > Occasionally I have access to rbds hang on the client such as right now. > I tried to dd a VM image int

Re: [ceph-users] rbd map hangs

2018-06-07 Thread Ilya Dryomov
On Thu, Jun 7, 2018 at 4:33 PM, Tracy Reed wrote: > On Thu, Jun 07, 2018 at 02:05:31AM PDT, Ilya Dryomov spake thusly: >> > find /sys/kernel/debug/ceph -type f -print -exec cat {} \; >> >> Can you paste the entire output of that command? >> >> Which kern

Re: [ceph-users] rbd map hangs

2018-06-07 Thread Ilya Dryomov
On Thu, Jun 7, 2018 at 6:30 PM, Jason Dillaman wrote: > On Thu, Jun 7, 2018 at 12:13 PM, Tracy Reed wrote: >> On Thu, Jun 07, 2018 at 08:40:50AM PDT, Ilya Dryomov spake thusly: >>> > Kernel is Linux cpu04.mydomain.com 3.10.0-229.20.1.el7.x86_64 #1 SMP Tue >>> &g

Re: [ceph-users] rbd map hangs

2018-06-08 Thread Ilya Dryomov
On Fri, Jun 8, 2018 at 6:37 AM, Tracy Reed wrote: > On Thu, Jun 07, 2018 at 09:30:23AM PDT, Jason Dillaman spake thusly: >> I think what Ilya is saying is that it's a very old RHEL 7-based >> kernel (RHEL 7.1?). For example, the current RHEL 7.5 kernel includes >> numerous improvements that have b

Re: [ceph-users] CephFS+NFS For VMWare

2018-07-02 Thread Ilya Dryomov
On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk wrote: > > This is for us peeps using Ceph with VMWare. > > > > My current favoured solution for consuming Ceph in VMWare is via RBD’s > formatted with XFS and exported via NFS to ESXi. This seems to perform better > than iSCSI+VMFS which seems to not pl
