On Wed, Jan 11, 2017 at 5:09 PM, Jason Dillaman wrote:
> I would like to propose that starting with the Luminous release of Ceph, RBD
> will no longer support the creation of v1 image format images via the rbd
> CLI and librbd.
>
> We previously made the v2 image format the default and deprecated
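For reference, the image format can still be requested explicitly at creation time; the pool and image names below are placeholders:

    $ rbd create --image-format 2 --size 10G mypool/myimage   # v2 is already the default on recent releases
    $ rbd info mypool/myimage                                 # output includes the image format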
On Wed, Jan 11, 2017 at 6:01 PM, Shinobu Kinjo wrote:
> It would be fine to not support v1 image format at all.
>
> But it would be probably friendly for users to provide them with more
> understandable message when they face feature mismatch instead of just
> displaying:
>
> * rbd: map failed: (
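For context, when the kernel refuses a map because of unsupported image features, the specifics end up in the kernel log, and the offending features can be stripped from the image; the names below are placeholders:

    $ sudo rbd map mypool/myimage || dmesg | tail    # the log names the unsupported feature bits
    $ rbd feature disable mypool/myimage deep-flatten fast-diff object-map exclusive-lock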
On Tue, Jan 17, 2017 at 6:49 PM, Kingsley Tart wrote:
> Oh that's good. I thought the kernel clients only supported block
> devices. I guess that has changed since I last looked.
That has always been the case -- block device support came about a year
after the filesystem was merged into the kernel.
On Sat, Jan 21, 2017 at 1:18 PM, Maged Mokhtar wrote:
> Hi,
>
> If a host with a kernel mapped rbd image dies, it still keeps a watch on
> the rbd image header for a timeout that seems to be determined by
> ms_tcp_read_timeout ( default 15 minutes ) rather than
> osd_client_watch_timeout whereas a
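If a stale watch like this has to be cleared sooner, one approach (a sketch only; the header object name depends on the image id) is to find and blacklist the dead watcher:

    $ rados -p rbd listwatchers rbd_header.<image id>    # shows the watcher's address
    $ ceph osd blacklist add <watcher address>           # breaks the watch held by the dead host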
On Wed, Feb 8, 2017 at 9:13 PM, Tracy Reed wrote:
> On Wed, Feb 08, 2017 at 10:57:38AM PST, Shinobu Kinjo spake thusly:
>> If you would be able to reproduce the issue intentionally under
>> particular condition which I have no idea about at the moment, it
>> would be helpful.
>
> The issue is very
On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald wrote:
> I'm currently having some issues making some Jessie-based Xen hosts
> talk to a Trusty-based cluster due to feature mismatch errors. Our
> Trusty hosts are using 3.19.0-80 (the Vivid LTS kernel), and our Jessie
> hosts were using the standard
On Mon, Feb 27, 2017 at 3:15 PM, Simon Weald wrote:
> Hi Ilya
>
> On 27/02/17 13:59, Ilya Dryomov wrote:
>> On Mon, Feb 27, 2017 at 2:37 PM, Simon Weald wrote:
>>> I'm currently having some issues making some Jessie-based Xen hosts
>>> talk to a Trusty
On Mon, Feb 27, 2017 at 6:47 PM, Shinobu Kinjo wrote:
> We already discussed this:
>
> https://www.spinics.net/lists/ceph-devel/msg34559.html
>
> What do you think of comment posted in that ML?
> Would that make sense to you as well?
Sorry, I dropped the ball on this. I'll try to polish and push
On Thu, Mar 2, 2017 at 1:06 AM, Sage Weil wrote:
> On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
>> >Still applies. Just create a Round Robin DNS record. The clients will
>> obtain a new monmap while they are connected to the cluster.
>> It works to some extent, but it causes issues for "mount -a". We have
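For context, a kernel CephFS fstab entry can point at a single round-robin DNS name (or a comma-separated monitor list); the hostname, mount point and secret file below are made up:

    # /etc/fstab
    ceph-mon.example.com:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0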
On Thu, Mar 2, 2017 at 5:01 PM, Xiaoxi Chen wrote:
> 2017-03-02 23:25 GMT+08:00 Ilya Dryomov :
>> On Thu, Mar 2, 2017 at 1:06 AM, Sage Weil wrote:
>>> On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
>>>> >Still applies. Just create a Round Robin DNS record. The clients wi
On Tue, Mar 7, 2017 at 10:27 AM, Francois Blondel wrote:
> Hi all,
>
> I have been trying to use RBD devices on an Erasure Coded data-pool on Ubuntu
> Xenial.
>
> I created my block device "blockec2" with :
> rbd create blockec2 --size 300G --data-pool ecpool --image-feature
> layering,data-pool
>
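For comparison, a typical sequence on a release with EC overwrite support (Luminous or later) looks roughly like this; the pool name and PG counts are only illustrative:

    $ ceph osd pool create ecpool 64 64 erasure            # erasure-coded data pool
    $ ceph osd pool set ecpool allow_ec_overwrites true    # required for RBD data on EC pools
    $ rbd create blockec2 --size 300G --data-pool ecpool   # metadata in the default pool, data in ecpool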
On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev
wrote:
> On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev
> wrote:
>> I am trying to understand the cause of a problem we started
>> encountering a few weeks ago. There are 30 or so per hour messages on
>> OSD nodes of type:
>>
>> ceph-osd.33.log:
On Tue, Apr 11, 2017 at 3:10 PM, Alex Gorbachev
wrote:
> Hi Ilya,
>
> On Tue, Apr 11, 2017 at 4:06 AM, Ilya Dryomov wrote:
>> On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev
>> wrote:
>>> On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev
>>> wrote:
>
On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev
wrote:
> Hi Ilya,
>
> On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov wrote:
>>
>> On Tue, Apr 11, 2017 at 3:10 PM, Alex Gorbachev
>> wrote:
>> > Hi Ilya,
>> >
>> > On Tue, Apr 11, 2017 at 4
On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev
wrote:
> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov wrote:
>> On Wed, Apr 12, 2017 at 4:28 PM, Alex Gorbachev
>> wrote:
>>> Hi Ilya,
>>>
>>> On Wed, Apr 12, 2017 at 4:58 AM Ilya Dryomov wrote:
>&
On Mon, Apr 17, 2017 at 1:42 PM, Alex Gorbachev
wrote:
> On Thu, Apr 13, 2017 at 4:24 AM, Ilya Dryomov wrote:
>> On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev
>> wrote:
>>> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov wrote:
>>>> On Wed, Apr
On Wed, May 24, 2017 at 1:47 PM, Shain Miley wrote:
> Hello,
> We just upgraded from Hammer to Jewel, and after the cluster once again
> reported a healthy state I set the crush tunables to ‘optimal’ (from
> legacy).
> 12 hours later and the cluster is almost done with the pg remapping under
> the
On Wed, May 24, 2017 at 4:27 PM, Shain Miley wrote:
> Hi,
>
> Thanks for all your help so far...very useful information indeed.
>
>
> Here is the debug output from the file you referenced below:
>
>
> root@rbd1:/sys/kernel/debug/ceph/504b5794-34bd-44e7-a8c3-0494cf800c23.client67751889#
> cat osdc
On Sat, Dec 12, 2015 at 6:37 PM, Tom Christensen wrote:
> We had a kernel map get hung up again last night/this morning. The rbd is
> mapped but unresponsive; if I try to unmap it I get the following error:
> rbd: sysfs write failed
> rbd: unmap failed: (16) Device or resource busy
>
> Now that t
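On newer kernels there is a last-resort force option that discards outstanding I/O; a sketch, with the device name assumed:

    $ sudo rbd unmap -o force /dev/rbd0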
On Fri, Dec 18, 2015 at 10:55 AM, Alex Gorbachev
wrote:
> I hope this can help anyone who is running into the same issue as us -
> kernels 4.1.x appear to have terrible RBD sequential write performance.
> Kernels before and after are great.
>
> I tested with 4.1.6 and 4.1.15 on Ubuntu 14.04.3, ce
On Fri, Dec 18, 2015 at 9:24 PM, Alex Gorbachev
wrote:
> Hi Ilya,
>
> On Fri, Dec 18, 2015 at 11:46 AM, Ilya Dryomov wrote:
>>
>> On Fri, Dec 18, 2015 at 5:40 PM, Alex Gorbachev
>> wrote:
>> > Hi Ilya
>> >
>> > On Fri, Dec 18, 2015 at 6:50
On Sun, Jan 17, 2016 at 6:34 PM, James Gallagher
wrote:
> Hi,
>
> I'm looking to implement the CephFS on my Firefly release (v0.80) with an
> XFS native file system, but so far I'm having some difficulties. After
> following the ceph/qsg and creating a storage cluster, I have the following
> topol
On Tue, Jan 19, 2016 at 10:34 AM, Nick Fisk wrote:
> But interestingly enough, if you look down to where they run the targetcli
> ls, it shows a RBD backing store.
>
> Maybe it's using the krbd driver to actually do the Ceph side of the
> communication, but lio plugs into this rather than just t
On Wed, Jan 20, 2016 at 10:48 AM, 张鹏 wrote:
> I want to change the omapval of an rbd's size, so I do something like:
>
> 1. create an rbd named zp3 with size 10G
> [root@lab8106 rbdre]# rbd create zp3 --size 10G
>
> 2. see the rbd information
> [root@lab8106 rbdre]# rbd info zp3
> rbd image 'zp3':
> size 1024
On Fri, Jan 29, 2016 at 11:43 PM, Deneau, Tom wrote:
> The commands shown below had successfully mapped rbd images in the past on
> kernel version 4.1.
>
> Now I need to map one on a system running the 3.13 kernel.
> Ceph version is 9.2.0. Rados bench operations work with no problem.
> I get the
On Sat, Jan 30, 2016 at 12:07 AM, Deneau, Tom wrote:
> Ah, yes I see this...
>feature set mismatch, my 4a042a42 < server's 104a042a42, missing 10
> which looks like CEPH_FEATURE_CRUSH_V2
>
> Is there any workaround for that?
> Or what ceph version would I have to back up to?
Yes, that
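One family of workarounds (a sketch only; whether it clears this particular bit depends on what in the CRUSH map requires it, and changing tunables can trigger significant data movement) is to dial the CRUSH tunables back to a profile the old kernel understands:

    $ ceph osd crush tunables legacy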
On Tue, Mar 1, 2016 at 10:57 PM, Randy Orr wrote:
> Hello,
>
> I am running the following:
>
> ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
> ubuntu 14.04 with kernel 3.19.0-49-generic #55~14.04.1-Ubuntu SMP
>
> For this use case I am mapping and mounting an rbd using the kernel c
On Wed, Mar 2, 2016 at 1:58 PM, Randy Orr wrote:
> Ilya,
>
> That's great, thank you. I will certainly try the updated kernel when
> available. Do you have pointers to the two bugs in question?
4.5-rc6 is already available. 4.4.4 should come out shortly.
Reports:
1. http://www.spinics.net/list
On Thu, Mar 10, 2016 at 4:06 PM, Randy Orr wrote:
> Hi,
>
> Just a followup on this thread. I was about to try the 4.5-rc6 kernel when I
> realized I hadn't tried the 4.2 kernel available via the
> linux-generic-lts-wily ubuntu package. I figured it wouldn't hurt to try and
> would at the very lea
On Tue, Mar 22, 2016 at 4:48 PM, Jason Dillaman wrote:
>> Hi Jason,
>>
>> Le 22/03/2016 14:12, Jason Dillaman a écrit :
>> >
>> > We actually recommend that OpenStack be configured to use writeback cache
>> > [1]. If the guest OS is properly issuing flush requests, the cache will
>> > still provi
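For reference, the librbd cache settings being discussed live in the [client] section of ceph.conf; a minimal sketch:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true   # stays write-through until the guest issues a flush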
On Tue, Mar 22, 2016 at 1:12 PM, Xusangdi wrote:
> Hi Matt & Cephers,
>
> I am looking for advice on setting up a file system based on Ceph. As CephFS
> is not yet production ready (or have I missed some breakthroughs?), the new NFS on
> RadosGW should be a promising alternative, especially for large
On Wed, Mar 30, 2016 at 3:03 AM, Jason Dillaman wrote:
> Understood -- format 2 was promoted to the default image format starting with
> Infernalis (which not all users would have played with since it isn't LTS).
> The defaults can be overridden via the command-line when creating new images
>
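A sketch of what overriding the default can look like, either cluster-wide via config or per image on the command line; names are placeholders:

    # ceph.conf, [client] section
    rbd_default_format = 2

    # or per image:
    $ rbd create --image-format 1 --size 10G mypool/legacyimage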
On Mon, Apr 11, 2016 at 4:37 PM, Simon Ferber
wrote:
> Hi,
>
> I am trying to set up a ceph cluster on Debian 8.4. Mainly I followed a
> tutorial at
> http://adminforge.de/raid/ceph/ceph-cluster-unter-debian-wheezy-installieren/
>
> As far as I can see, the first steps are just working fine. I have two
On Tue, Apr 12, 2016 at 12:21 PM, Simon Ferber
wrote:
> Am 12.04.2016 um 12:09 schrieb Florian Haas:
>> On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
>> wrote:
>>> Thank you! That's it. I have installed the Kernel from the Jessie
>>> backport. Now the crashes are gone.
>>> How often do these thi
On Tue, Apr 12, 2016 at 4:08 PM, Mathias Buresch
wrote:
>
> Hi there,
>
> I have an issue with using Ceph and Ubuntu Backport Kernel newer than
> 3.19.0-43.
>
> Following setup I have:
>
> Ubuntu 14.04
> Kernel 3.19.0-43 (Backport Kernel)
> Ceph 0.94.6
>
> I am using CephFS! The kernel 3.19.0-43 w
On Fri, Apr 15, 2016 at 10:18 AM, lin zhou wrote:
> Hi, cephers:
> In one of my ceph clusters, we map an rbd and then mount it on node1, and
> then use samba to share it as a backup target for several VMs and some web
> root directories.
>
> Yesterday, one of the disks in my cluster hit 95% full, then the
> cluste
On Fri, Apr 15, 2016 at 10:32 AM, lin zhou wrote:
> Thanks for the fast reply.
> Output on one of the faulty hosts:
>
> root@musicgci5:~# ceph -s
> cluster 409059ba-797e-46da-bc2f-83e3c7779094
>health HEALTH_OK
>monmap e1: 3 mons at
> {musicgci2=192.168.43.12:6789/0,musicgci3=192.168.43.1
On Fri, Apr 15, 2016 at 10:59 AM, lin zhou wrote:
> Yes,the output is the same.
(Dropped ceph-users.)
Can you attach compressed osd logs for OSDs 28 and 40?
Thanks,
Ilya
On Tue, Apr 19, 2016 at 5:28 AM, min fang wrote:
> I am confused on ceph/ceph-qa-suite and ceph/teuthology. Which one should I
> use? thanks.
The ceph-qa-suite repository contains the test snippets; teuthology is the
test framework that knows how to run them. It will pull the appropriate
branch of c
On Mon, Apr 18, 2016 at 11:58 AM, Tim Bishop wrote:
> I had the same issue when testing on Ubuntu xenial beta. That has 4.4,
> so it should be fine? I had to create images without the new RBD features
> to make it work.
None of the "new" features are currently supported by krbd. 4.7 will
support e
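Until then, images meant for krbd can be created with only the layering feature; the image name is a placeholder:

    $ rbd create --size 10G --image-feature layering mypool/krbd-image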
On Thu, Apr 21, 2016 at 10:00 AM, Mika c wrote:
> Hi cephers,
> Had the same issue too. But the command "rbd feature disable" is not
> working for me.
> Any comment will be appreciated.
>
> $sudo rbd feature disable timg1 deep-flatten fast-diff object-map
> exclusive-lock
> rbd: failed to update i
On Thu, Apr 21, 2016 at 11:41 AM, Mika c wrote:
> Hi xizhiyong,
> Thanks for your information. I am using Jewel right now (10.1.2), and the
> setting "rbd_default_features = 3" is not working for me.
> And this setting will enable "exclusive-lock, object-map, fast-diff,
> deep-flatten" features.
Setti
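For reference, on releases that accept the numeric form, rbd_default_features is a bitmask (layering=1, striping=2, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32), so for example:

    [client]
    rbd_default_features = 3    # layering + striping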
On Mon, Apr 25, 2016 at 1:53 PM, Stefan Lissmats wrote:
> Hello!
>
> Running a completely new test cluster with status HEALTH_OK I get the same
> error.
> I'm running Ubuntu 14.04 with kernel 3.16.0-70-generic and ceph 10.2.0 on
> all hosts.
> The rbd-nbd mapping was done on the same host having o
On Mon, Apr 25, 2016 at 7:47 PM, Stefan Lissmats wrote:
> Hello again!
>
> I understand that it's not recommended to run an osd and rbd-nbd on the same
> host, and I actually moved my rbd-nbd to a completely clean host (same kernel
> and OS though), but with the same result.
>
> I hope someone can reso
On Sat, Nov 3, 2018 at 10:41 AM wrote:
>
> Hi.
>
> I tried to enable the "new smart balancing" - the backend is on RH luminous,
> clients are Ubuntu with a 4.15 kernel.
>
> As per: http://docs.ceph.com/docs/mimic/rados/operations/upmap/
> $ sudo ceph osd set-require-min-compat-client luminous
> Error EPERM:
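For reference, the usual way past the EPERM, after confirming what is actually connected, is the override flag; a sketch:

    $ sudo ceph features                                   # per-client-group feature/release report
    $ sudo ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it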
On Wed, Nov 7, 2018 at 2:25 PM wrote:
>
> Hi!
>
> I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
> (stable) and I want to call `ls -ld` to read the whole dir size in cephfs:
>
> When I man mount.ceph:
>
> rbytes Report the recursive size of the directory contents for st_si
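A hedged example of mounting with that option; the monitor address, credentials and mount point are placeholders:

    $ sudo mount -t ceph ceph-mon.example.com:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret,rbytes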
On Thu, Nov 8, 2018 at 2:15 PM Stefan Kooman wrote:
>
> Quoting Ilya Dryomov (idryo...@gmail.com):
> > On Sat, Nov 3, 2018 at 10:41 AM wrote:
> > >
> > > Hi.
> > >
> > > I tried to enable the "new smart balancing" - backend are o
On Thu, Nov 8, 2018 at 5:10 PM Stefan Kooman wrote:
>
> Quoting Stefan Kooman (ste...@bit.nl):
> > I'm pretty sure it isn't. I'm trying to do the same (force luminous
> > clients only) but ran into the same issue. Even when running 4.19 kernel
> > it's interpreted as a jewel client. Here is the li
On Wed, Dec 5, 2018 at 3:48 PM Ashley Merrick wrote:
>
> I have had some EC-backed Mimic RBDs mounted via the kernel module on an
> Ubuntu 14.04 VM; these have been running with no issues after updating the kernel
> to 4.12 to support EC features.
>
> Today I ran an apt dist-upgrade which upgraded fr
On Thu, Dec 6, 2018 at 4:22 AM Ashley Merrick wrote:
>
> Hello,
>
> As mentioned earlier, the cluster is separately running on the latest mimic.
>
> Due to 14.04 only supporting up to Luminous I was running the 12.2.9 version
> of ceph-common for the rbd binary.
>
> This is what was upgraded when I
On Thu, Dec 6, 2018 at 10:58 AM Ashley Merrick wrote:
>
> That command returns luminous.
This is the issue.
My guess is someone ran "ceph osd set-require-min-compat-client
luminous", making it so that only luminous aware clients are allowed to
connect to the cluster. Kernel 4.12 doesn't support
On Thu, Dec 6, 2018 at 11:15 AM Ashley Merrick wrote:
>
> That is correct, but that command was run weeks ago.
>
> And the RBD connected fine on 2.9 via kernel 4.12, so I'm really lost as to
> why it's suddenly blocking a connection it originally allowed through
> (even if by mistake).
When
On Sat, Dec 22, 2018 at 7:18 PM Brian : wrote:
>
> Sorry to drag this one up again.
>
> Just got the unsubscribed due to excessive bounces thing.
>
> 'Your membership in the mailing list ceph-users has been disabled due
> to excessive bounces The last bounce received from you was dated
> 21-Dec-20
On Wed, Jan 9, 2019 at 5:17 PM Kenneth Van Alstyne
wrote:
>
> Hey folks, I’m looking into what I would think would be a simple problem, but
> is turning out to be more complicated than I would have anticipated. A
> virtual machine managed by OpenNebula was blown away, but the backing RBD
> im
On Fri, Jan 11, 2019 at 1:38 AM Brad Hubbard wrote:
>
> On Fri, Jan 11, 2019 at 9:57 AM Jason Dillaman wrote:
> >
> > I think Ilya recently looked into a bug that can occur when
> > CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes
> > through the loopback interface (i.e. co-locat
On Fri, Jan 11, 2019 at 11:58 AM Rom Freiman wrote:
>
> Same kernel :)
Rom, can you update your CentOS ticket with the link to the Ceph BZ?
Thanks,
Ilya
On Wed, Jan 16, 2019 at 1:27 AM Kjetil Joergensen wrote:
>
> Hi,
>
> you could try reducing "osd map message max"; some code paths that end up as
> -EIO (kernel: libceph: mon1 *** io error) are exceeding
> include/linux/ceph/libceph.h:CEPH_MSG_MAX_{FRONT,MIDDLE,DATA}_LEN.
>
> This "worked for us"
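A sketch of lowering it, assuming the option name is osd_map_message_max and that it belongs on the daemons that send the map messages (mons/OSDs):

    # ceph.conf on the mons/OSDs
    [global]
    osd map message max = 10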
On Wed, Jan 16, 2019 at 7:12 PM Andras Pataki
wrote:
>
> Hi Ilya/Kjetil,
>
> I've done some debugging and tcpdump-ing to see what the interaction
> between the kernel client and the mon looks like. Indeed -
> CEPH_MSG_MAX_FRONT defined as 16Mb seems low for the default mon
> messages for our clus
On Fri, Jan 18, 2019 at 9:25 AM Burkhard Linke
wrote:
>
> Hi,
>
> On 1/17/19 7:27 PM, Void Star Nill wrote:
>
> Hi,
>
> We are trying to use Ceph in our products to address some of our use cases. We
> think the Ceph block device fits our needs. One of the use cases is that we have a number
> of jobs running
On Fri, Jan 18, 2019 at 11:25 AM Mykola Golub wrote:
>
> On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> > Hi,
> >
> > We are trying to use Ceph in our products to address some of our use cases.
> > We think the Ceph block device fits our needs. One of the use cases is that we have a
> > numb
On Mon, Jan 21, 2019 at 11:43 AM ST Wong (ITSC) wrote:
>
> Hi, we're trying mimic on a VM farm. It consists of 4 OSD hosts (8 OSDs) and 3
> MONs. We tried mounting RBD and CephFS (fuse and kernel mount) on
> different clients without problems.
Is this an upgraded or a fresh cluster?
>
> Th
On Thu, Jan 24, 2019 at 8:16 PM Martin Palma wrote:
>
> We are experiencing the same issues on clients with CephFS mounted
> using the kernel client and 4.x kernels.
>
> The problem shows up when we add new OSDs, on reboots after
> installing patches and when changing the weight.
>
> Here the log
On Thu, Jan 24, 2019 at 6:21 PM Andras Pataki
wrote:
>
> Hi Ilya,
>
> Thanks for the clarification - very helpful.
> I've lowered osd_map_messages_max to 10, and this resolves the issue
> about the kernel being unhappy about large messages when the OSDMap
> changes. One comment here though: you m
On Fri, Jan 25, 2019 at 8:37 AM Martin Palma wrote:
>
> Hi Ilya,
>
> thank you for the clarification. After setting the
> "osd_map_messages_max" to 10 the io errors and the MDS error
> "MDS_CLIENT_LATE_RELEASE" are gone.
>
> The messages of "mon session lost, hunting for new mon" didn't go
>
On Fri, Jan 25, 2019 at 9:40 AM Martin Palma wrote:
>
> > Do you see them repeating every 30 seconds?
>
> yes:
>
> Jan 25 09:34:37 sdccgw01 kernel: [6306813.737615] libceph: mon4
> 10.8.55.203:6789 session lost, hunting for new mon
> Jan 25 09:34:37 sdccgw01 kernel: [6306813.737620] libceph: mon3
On Mon, Jan 28, 2019 at 7:31 AM ST Wong (ITSC) wrote:
>
> > That doesn't appear to be an error -- that's just stating that it found a
> > dead client that was holding the exclusive-lock, so it broke the dead
> > client's lock on the image (by blacklisting the client).
>
> As there is only 1 RBD
On Mon, Feb 4, 2019 at 9:25 AM Massimo Sgaravatto
wrote:
>
> The official documentation [*] says that the only requirement to use the
> balancer in upmap mode is that all clients must run at least luminous.
> But I read somewhere (also in this mailing list) that there are also
> requirements wrt
On Wed, Feb 6, 2019 at 11:09 AM James Dingwall
wrote:
>
> Hi,
>
> I have been doing some testing with striped rbd images and have a
> question about the calculation of the optimal_io_size and
> minimum_io_size parameters. My test image was created using a 4M object
> size, stripe unit 64k and str
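For reference, an image like the one described, and the resulting I/O hints, could be inspected roughly like this; the image and device names are placeholders:

    $ rbd create testimg --size 10G --object-size 4M --stripe-unit 64K --stripe-count 8
    $ sudo rbd map testimg
    $ sudo blockdev --getiomin --getioopt /dev/rbd0   # minimum_io_size and optimal_io_size as seen by the kernel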
On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
>
> Hi Marc,
>
> You can see previous designs on the Ceph store:
>
> https://www.proforma.com/sdscommunitystore
Hi Mike,
This site stopped working during DevConf and hasn't been working since.
I think Greg has contacted some folks about this, bu
On Wed, Feb 27, 2019 at 12:00 PM Thomas <74cmo...@gmail.com> wrote:
>
> Hi,
> I have noticed an error when writing to a mapped RBD.
> Therefore I unmounted the block device.
> Then I tried to unmap it w/o success:
> ld2110:~ # rbd unmap /dev/rbd0
> rbd: sysfs write failed
> rbd: unmap failed: (16)
On Tue, May 21, 2019 at 11:41 AM Marc Roos wrote:
>
>
>
> I have this on a cephfs client. I had ceph-common on 12.2.11, and
> upgraded to 12.2.12 while having this error. They write here [0] that
> you need to upgrade the kernel and that it is fixed in 12.2.2
>
> [@~]# uname -a
> Linux mail03 3.10.0-9
On Mon, Jun 10, 2019 at 8:03 PM Jason Dillaman wrote:
>
> On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten wrote:
> >
> > When I run:
> >
> > rbd map --name client.lol poolname/somenamespace/imagename
> >
> > The image is mapped to /dev/rbd0 and
> >
> > /dev/rbd/poolname/imagename
> >
> > I would
On Fri, Jul 12, 2019 at 12:33 PM Paul Emmerich wrote:
>
>
>
> On Thu, Jul 11, 2019 at 11:36 PM Marc Roos wrote:
>> Anyone know why I would get these? Is it not strange to get them in a
>> 'standard' setup?
>
> you are probably running on an ancient kernel. this bug has been fixed a long
> time a
On Fri, Jul 12, 2019 at 5:38 PM Marc Roos wrote:
>
>
> Thanks Ilya for explaining. Am I correct to understand from the link [0]
> mentioned in the issue, that because e.g. I have an unhealthy state for
> some time (1 pg on an insignificant pool) I have larger osdmaps,
> triggering this issue? Or is j
On Tue, Jul 30, 2019 at 10:33 AM Massimo Sgaravatto
wrote:
>
> The documentation that I have seen says that the minimum requirements for
> clients to use upmap are:
>
> - CentOs 7.5 or kernel 4.5
> - Luminous version
Do you have a link for that?
This is wrong: CentOS 7.5 (i.e. RHEL 7.5 kernel)
On Tue, Aug 13, 2019 at 12:36 PM Serkan Çoban wrote:
>
> Hi,
>
> Just installed nautilus 14.2.2 and setup cephfs on it. OS is all centos 7.6.
> From a client I can mount the cephfs with ceph-fuse, but I cannot
> mount with ceph kernel client.
> It gives "mount error 110 connection timeout" and I c
On Tue, Aug 13, 2019 at 3:57 PM Serkan Çoban wrote:
>
> I checked /var/log/messages and see there are page allocation
> failures. But I don't understand why?
> The client has 768GB of memory and most of it is not used, and the cluster has
> 1500 OSDs. Do I need to increase vm.min_free_kbytes? It is set to 1GB
On Tue, Aug 13, 2019 at 4:30 PM Serkan Çoban wrote:
>
> I am out of office right now, but I am pretty sure it was the same
> stack trace as in tracker.
> I will confirm tomorrow.
> Any workarounds?
Compaction
# echo 1 >/proc/sys/vm/compact_memory
might help if the memory in question is moveable
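If compaction alone is not enough, raising the watermark is another knob that is sometimes tried; the value below is only an example:

    # sysctl -w vm.min_free_kbytes=4194304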
On Tue, Aug 13, 2019 at 6:37 PM Gesiel Galvão Bernardes
wrote:
>
> HI,
>
> I recently noticed that in two of my pools the command "rbd ls" has taken
> several minutes to return the values. These pools have between 100 and 120
> images each.
>
> Where should I look to check the cause of this slowness? The
On Tue, Aug 13, 2019 at 10:56 PM Tim Bishop wrote:
>
> Hi,
>
> This email is mostly a heads up for others who might be using
> Canonical's livepatch on Ubuntu on a CephFS client.
>
> I have an Ubuntu 18.04 client with the standard kernel currently at
> version linux-image-4.15.0-54-generic 4.15.0-
On Wed, Aug 14, 2019 at 1:54 PM Tim Bishop wrote:
>
> On Wed, Aug 14, 2019 at 12:44:15PM +0200, Ilya Dryomov wrote:
> > On Tue, Aug 13, 2019 at 10:56 PM Tim Bishop wrote:
> > > This email is mostly a heads up for others who might be using
> > > Canonical's liv
On Tue, Aug 13, 2019 at 1:06 PM Hector Martin wrote:
>
> I just had a minor CephFS meltdown caused by underprovisioned RAM on the
> MDS servers. This is a CephFS with two ranks; I manually failed over the
> first rank and the new MDS server ran out of RAM in the rejoin phase
> (ceph-mds didn't get
_inode().
>
> Cc: sta...@vger.kernel.org
> Link: https://tracker.ceph.com/issues/40102
> Signed-off-by: "Yan, Zheng"
> Reviewed-by: Jeff Layton
> Signed-off-by: Ilya Dryomov
> Signed-off-by: Sasha Levin
>
>
> Backing this patch out and recomp
On Tue, Oct 1, 2019 at 9:12 PM Jeff Layton wrote:
>
> On Tue, 2019-10-01 at 15:04 -0400, Sasha Levin wrote:
> > On Tue, Oct 01, 2019 at 01:54:45PM -0400, Jeff Layton wrote:
> > > On Tue, 2019-10-01 at 19:03 +0200, Ilya Dryomov wrote:
> > > > On Tue, Oct 1, 20
On Thu, Oct 17, 2019 at 3:38 PM Lei Liu wrote:
>
> Hi Cephers,
>
> We have some ceph clusters on version 12.2.x, and now we want to use the upmap
> balancer, but when I set set-require-min-compat-client to luminous, it failed
>
> # ceph osd set-require-min-compat-client luminous
> Error EPERM: cannot se
On Sat, Oct 19, 2019 at 2:00 PM Lei Liu wrote:
>
> Hello Ilya,
>
> After updating the client kernel version to 3.10.0-862, ceph features shows:
>
> "client": {
> "group": {
> "features": "0x7010fb86aa42ada",
> "release": "jewel",
> "num": 5
> },
>
On Thu, Oct 24, 2019 at 5:45 PM Paul Emmerich wrote:
>
> Could it be related to the broken backport as described in
> https://tracker.ceph.com/issues/40102 ?
>
> (It did affect 4.19, not sure about 5.0)
It does, I have just updated the linked ticket to reflect that.
Thanks,
Ilya
On Thu, Dec 12, 2019 at 9:12 AM Ashley Merrick wrote:
>
> Due to the recent 5.3.x kernel having support for Object-Map and other
> features required in KRBD I have now enabled object-map,fast-diff on some RBD
> images with Ceph (14.2.5). I have rebuilt the object map using "rbd
> object-map reb
On Mon, Jan 6, 2020 at 2:51 PM M Ranga Swami Reddy wrote:
>
> Thank you.
> Can you please share a simple example here?
>
> Thanks
> Swami
>
> On Mon, Jan 6, 2020 at 4:02 PM wrote:
>>
>> Hi,
>>
>> rbd images are thin provisioned; you need to trim at the upper level, either
>> via the fstrim command, or
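A minimal sketch of both options; the device and mount point are placeholders:

    $ sudo fstrim /mnt/rbdfs                       # one-off trim of the mounted filesystem
    $ sudo mount -o discard /dev/rbd0 /mnt/rbdfs   # or mount with continuous discard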
On Thu, Jan 9, 2020 at 2:52 PM Kyriazis, George
wrote:
>
> Hello ceph-users!
>
> My setup is that I’d like to use RBD images as a replication target of a
> FreeNAS zfs pool. I have a 2nd FreeNAS (in a VM) to act as a backup target
> in which I mount the RBD image. All this (except the source F
On Fri, Jan 17, 2020 at 2:21 AM Aaron wrote:
>
> No worries, can definitely do that.
>
> Cheers
> Aaron
>
> On Thu, Jan 16, 2020 at 8:08 PM Jeff Layton wrote:
>>
>> On Thu, 2020-01-16 at 18:42 -0500, Jeff Layton wrote:
>> > On Wed, 2020-01-15 at 08:05 -0500, Aaron wrote:
>> > > Seeing a weird mou
On Tue, Jan 21, 2020 at 6:02 PM Hayashida, Mami wrote:
>
> I am trying to set up a CephFS with a Cache Tier (for data) on a mini test
> cluster, but a kernel-mount CephFS client is unable to write. Cache tier
> setup alone seems to be working fine (I tested it with `rados put` and `osd
> map`
On Tue, Jan 21, 2020 at 7:51 PM Hayashida, Mami wrote:
>
> Ilya,
>
> Thank you for your suggestions!
>
> `dmesg` (on the client node) only had `libceph: mon0 10.33.70.222:6789 socket
> error on write`. No further detail. But using the admin key (client.admin)
> for mounting CephFS solved my pro