On Thu, Sep 17, 2015 at 6:01 PM, Gregory Farnum wrote:
> On Thu, Sep 17, 2015 at 7:55 AM, Corin Langosch
> wrote:
>> Hi Greg,
>>
>> Am 17.09.2015 um 16:42 schrieb Gregory Farnum:
>>> Briefly, if you do a lot of small direct IOs (for instance, a database
>>> journal) then striping lets you send ea
On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
> about striping. If you write your own application that uses librados,
> then you have to worry about it. I understa
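If a non-default striping pattern is wanted on an RBD image, it is requested at image create time. A minimal sketch only; the pool/image name and the stripe values here are illustrative, not from this thread:

$ rbd create --image-format 2 --size 10240 --stripe-unit 65536 --stripe-count 8 rbd/journal-vol
$ rbd info rbd/journal-vol    # stripe unit and stripe count show up here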
On Thu, Sep 24, 2015 at 12:33 PM, Wido den Hollander wrote:
>
>
> On 24-09-15 11:06, Ilya Dryomov wrote:
>> On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
>>> -----BEGIN PGP SIGNED MESSAGE-----
>>> Hash: SHA256
>>>
>>> If you use RADOS g
On Fri, Sep 25, 2015 at 5:53 PM, Межов Игорь Александрович
wrote:
> Hi!
>
> Thanks!
>
> I have some suggestions for the 1st method:
>
>>You could get the name prefix for each RBD from rbd info,
> Yes, I did that already at steps 1 and 2. I forgot to mention that I grab the
> rbd prefix from 'rbd in
On Fri, Sep 25, 2015 at 7:17 PM, Jeff Epstein
wrote:
> We occasionally have a situation where we are unable to unmap an rbd. This
> occurs intermittently, with no obvious cause. For the most part, rbds can be
> unmapped fine, but sometimes we get this:
>
> # rbd unmap /dev/rbd450
> rbd: sysfs writ
On Fri, Sep 25, 2015 at 7:41 PM, Jeff Epstein
wrote:
> On 09/25/2015 12:38 PM, Ilya Dryomov wrote:
>>
>> On Fri, Sep 25, 2015 at 7:17 PM, Jeff Epstein
>> wrote:
>>>
>>> We occasionally have a situation where we are unable to unmap an rbd.
>>> This
On Wed, Oct 14, 2015 at 5:13 PM, Saverio Proto wrote:
> Hello,
>
> while debugging the slow requests behaviour of our Rados Gateway, I ran into
> this linger_ops field and I cannot understand its meaning.
>
> I would expect to find slow requests stuck in the "ops" field.
> Actually most of the time
On Wed, Oct 14, 2015 at 6:05 PM, Jan Schermer wrote:
> But that's exactly what filesystems and their own journals do already :-)
They do it for filesystem "transactions", not ceph transactions. It's
true that there is quite a bit of double journaling going on - newstore
should help with that qui
On Thu, Oct 22, 2015 at 10:59 PM, Allen Liao wrote:
> Does ceph guarantee image consistency if an rbd image is unmapped on one
> machine then immediately mapped on another machine? If so, does the same
> apply to issuing a snapshot command on machine B as soon as the unmap
> command finishes on m
On Thu, Oct 29, 2015 at 8:13 AM, Wah Peng wrote:
> hello,
>
> do you know why this happens when I did it following the official
> documentation.
>
> $ sudo rbd map foo --name client.admin
>
> rbd: add failed: (5) Input/output error
>
>
> the OS kernel,
>
> $ uname -a
> Linux ceph.yygamedev.com 3.2
On Thu, Oct 29, 2015 at 11:22 AM, Wah Peng wrote:
> Thanks Gurjar.
> Have loaded the rbd module, but got no luck.
> what dmesg shows,
>
> [119192.384770] libceph: mon0 172.17.6.176:6789 feature set mismatch, my 2 <
> server's 42040002, missing 4204
> [119192.388744] libceph: mon0 172.17.6.176:
On Fri, Oct 30, 2015 at 1:18 PM, Florent B wrote:
> Hi,
>
> Just a little question for krbd gurus: Proxmox 4 uses the 4.2.2 kernel, is
> krbd stable for Hammer? And will it be for Infernalis?
krbd is in the upstream kernel - speaking in terms of ceph releases has
little point. It is stable, and 4.2.2
On Mon, Nov 9, 2015 at 10:44 AM, Bogdan SOLGA wrote:
> Hello Adam!
>
> Thank you very much for your advice, I will try setting the tunables to
> 'firefly'.
Won't work. The OS Recommendations page clearly states that firefly
tunables are supported starting with kernel 3.15. 3.13, which came out
5 months (a
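For context, a hedged sketch of how the tunables profile is usually inspected and changed; which profile is safe depends on both the ceph release and every kernel client in use:

$ ceph osd crush show-tunables
$ ceph osd crush tunables firefly    # only once all clients, including kernel clients, are new enough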
On Fri, Nov 13, 2015 at 5:25 AM, Mark Kirkwood
wrote:
> When you do:
>
> $ rbd create
>
> You are using the kernel (i.e. 3.13) code for rbd, and this is likely much
> older than the code you just built for the rest of Ceph. You *might* have
> better luck installing the vivid kernel (3.19) on your t
On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets wrote:
> Hi list,
> Short:
> I just want to ask: why can't I do
> echo 129 > /sys/class/block/rbdX/queue/nr_requests
>
> i.e. why can't I set a value greater than 128?
> Why such a restriction?
>
> Long:
> Usage example:
> i have slow CEPH HDD based st
On Mon, Nov 30, 2015 at 7:47 PM, Timofey Titovets wrote:
>
> On 30 Nov 2015 21:19, "Ilya Dryomov" wrote:
>>
>> On Mon, Nov 30, 2015 at 7:17 PM, Timofey Titovets
>> wrote:
>> > Hi list,
>> > Short:
>> > I just want to ask: why can't
On Wed, Dec 2, 2015 at 12:15 PM, MinhTien MinhTien
wrote:
> Hi all,
>
> When I map a block device in client kernel 3.10.93-1.el6.elrepo.x86_64, I get
> this error:
>
> libceph: mon0 ip:6789 feature set mismatch, my 4a042a42 < server's
> 184a042a42, missing 18
> libceph: mon0 ip:6789 socket error
On Tue, Dec 8, 2015 at 10:57 AM, Tom Christensen wrote:
> We aren't running NFS, but regularly use the kernel driver to map RBDs and
> mount filesystems in same. We see very similar behavior across nearly all
> kernel versions we've tried. In my experience only very few versions of the
> kernel
On Tue, Dec 8, 2015 at 11:53 AM, Tom Christensen wrote:
> To be clear, we are also using format 2 RBDs, so we didn't really expect it
> to work until recently as it was listed as unsupported. We are under the
> understanding that as of 3.19 RBD format 2 should be supported. Are we
> incorrect in
On Tue, Apr 26, 2016 at 5:45 PM, Somnath Roy wrote:
> By default the image format is 2 in jewel, which is not supported by krbd. Try
> creating the image with --image-format 1 and it should be resolved.
With the default striping pattern (no --stripe-unit or --stripe-count
at create time), --image-format
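Two hedged variants of the workaround discussed here (image names are placeholders): the --image-format 1 route suggested above, and a format 2 image restricted to the layering feature and default striping, which kernel clients can also map:

$ rbd create --image-format 1 --size 10240 rbd/old-style
$ rbd create --size 10240 --image-feature layering rbd/new-style    # format 2, default striping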
On Mon, May 9, 2016 at 12:19 AM, K.C. Wong wrote:
>
>> As the tip said, you should not use rbd via kernel module on an OSD host
>>
>> However, using it with userspace code (librbd etc, as in kvm) is fine
>>
>> Generally, you should not have both:
>> - "server" in userspace
>> - "client" in kernels
On Thu, May 12, 2016 at 4:37 PM, Andrey Shevel wrote:
> Thanks a lot.
>
> I tried
>
> [ceph@ceph-client ~]$ ceph osd crush tunables hammer.
>
> Now I have
>
>
> [ceph@ceph-client ~]$ ceph osd crush show-tunables
> {
> "choose_local_tries": 0,
> "choose_local_fallback_tries": 0,
> "choo
On Fri, May 13, 2016 at 8:02 PM, Gregory Farnum wrote:
>
>
> On Friday, May 13, 2016, Andrus, Brian Contractor wrote:
>>
>> So I see that support for RHEL6 and derivatives was dropped in Jewel
>> (http://ceph.com/releases/v10-2-0-jewel-released/)
>>
>>
>>
>> But is there backward compatibility to
On Fri, May 13, 2016 at 10:11 PM, Steven Hsiao-Ting Lee
wrote:
> Hi,
>
> I’m playing with Jewel and discovered format 1 images have been deprecated.
> Since the rbd kernel module in CentOS/RHEL 7 does not yet support format 2
> images, how do I access RBD images created in Jewel from CentOS/RHEL
On Wed, May 18, 2016 at 3:56 PM, Jürgen Ludyga
wrote:
> Hi,
>
>
>
> I’ve a question: did you ever fix the error you mentioned in this post:
>
>
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-April/000364.html
I did - http://tracker.ceph.com/issues/11449.
Thanks,
Il
On Mon, May 23, 2016 at 10:12 AM, David wrote:
> Hi All
>
> I'm doing some testing with OpenSUSE Leap 42.1, it ships with kernel 4.1.12
> but I've also tested with 4.1.24
>
> When I map an image with the kernel RBD client, max_sectors_kb = 127. I'm
> unable to increase:
>
> # echo 4096 > /sys/bloc
On Mon, May 23, 2016 at 12:27 PM, Albert Archer
wrote:
> Hello All,
> There is a problem mapping RBD images on Ubuntu 16.04 (kernel
> 4.4.0-22-generic).
> All of the Ceph solution is based on Ubuntu 16.04 (deploy, monitors, OSDs and
> clients).
> #
> ##
On Mon, May 23, 2016 at 2:59 PM, Albert Archer wrote:
>
> Thanks.
> But how do I use these features?
> So there is no way to use them on the Ubuntu 16.04 kernel (4.4.0)?
> It's strange!
What is your use case? If you are using the kernel client, create your
images with
$ rbd create
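The command above is cut off in the archive; a plausible completion (an assumption, with a placeholder image name) that creates an image the kernel client can map is:

$ rbd create --size 10240 --image-feature layering rbd/myimage
$ sudo rbd map rbd/myimage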
On Fri, May 27, 2016 at 8:51 PM, Max Yehorov wrote:
> Hi,
> If anyone has some insight or comments on the question:
>
> Q) Flatten with IO activity
> For example I have a clone chain:
>
> IMAGE(PARENT)
> image1(-)
> image2(image1@snap0)
>
> image2 is mapped, mounted and has some IO activity.
>
On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach wrote:
> Hello,
> in my OpenStack Mitaka, I have installed the additional service "Manila" with
> a CephFS backend. Everything is working. All shares are created successfully:
>
> manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
> +-
On Mon, May 30, 2016 at 10:54 AM, dbgong wrote:
> Hi all,
>
> I created an image on my CentOS 7 client,
> then mapped the device, formatted it to ext4, and mounted it at /mnt/ceph-hd,
> and I have added many files to /mnt/ceph-hd.
>
> I did not set rbd to start on boot,
> then after the server reboot, I can
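For the start-on-boot part of the question, a hedged sketch using the rbdmap helper shipped with ceph-common; the image name and keyring path are assumptions and details vary by distribution. Add a line like

rbd/myimage  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

to /etc/ceph/rbdmap, then enable the unit:

$ sudo systemctl enable rbdmap.service

The filesystem still needs a matching fstab entry (or a manual mount) once the device is mapped at boot.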
> On Wed, Jun 1, 2016 at 6:15 AM, James Webb wrote:
>> Dear ceph-users...
>>
>> My team runs an internal buildfarm using ceph as a backend storage platform.
>> We’ve recently upgraded to Jewel and are having reliability issues that we
>> need some help with.
>>
>> Our infrastructure is the follo
On Wed, Jun 1, 2016 at 2:49 PM, Sage Weil wrote:
> On Wed, 1 Jun 2016, Yan, Zheng wrote:
>> On Wed, Jun 1, 2016 at 6:15 AM, James Webb wrote:
>> > Dear ceph-users...
>> >
>> > My team runs an internal buildfarm using ceph as a backend storage
>> > platform. We’ve recently upgraded to Jewel and a
On Wed, Jun 1, 2016 at 4:22 PM, Sage Weil wrote:
> On Wed, 1 Jun 2016, Yan, Zheng wrote:
>> On Wed, Jun 1, 2016 at 8:49 PM, Sage Weil wrote:
>> > On Wed, 1 Jun 2016, Yan, Zheng wrote:
>> >> On Wed, Jun 1, 2016 at 6:15 AM, James Webb wrote:
>> >> > Dear ceph-users...
>> >> >
>> >> > My team runs
On Thu, Jun 2, 2016 at 4:46 AM, Yan, Zheng wrote:
> On Wed, Jun 1, 2016 at 10:22 PM, Sage Weil wrote:
>> On Wed, 1 Jun 2016, Yan, Zheng wrote:
>>> On Wed, Jun 1, 2016 at 8:49 PM, Sage Weil wrote:
>>> > On Wed, 1 Jun 2016, Yan, Zheng wrote:
>>> >> On Wed, Jun 1, 2016 at 6:15 AM, James Webb wrote
On Fri, Jun 10, 2016 at 9:29 PM, Michael Kuriger wrote:
> Hi Everyone,
> I’ve been running jewel for a while now, with tunables set to hammer.
> However, I want to test the new features but cannot find a fully compatible
> Kernel for CentOS 7. I’ve tried a few of the elrepo kernels - elrepo-ke
On Mon, Jun 13, 2016 at 8:37 PM, Michael Kuriger wrote:
> I just realized that this issue is probably because I’m running jewel 10.2.1
> on the servers side, but accessing from a client running hammer 0.94.7 or
> infernalis 9.2.1
>
> Here is what happens if I run rbd ls from a client on infernal
On Wed, Jun 15, 2016 at 6:56 PM, Michael Kuriger wrote:
> Still not working with newer client. But I get a different error now.
>
> [root@test ~]# rbd ls
> test1
>
> [root@test ~]# rbd showmapped
>
> [root@test ~]# rbd map test1
> rbd: sysfs write failed
> RBD image feature set mismatch. You can
On Wed, Jun 15, 2016 at 7:05 PM, Michael Kuriger wrote:
> Hmm, if I only enable the layering feature I can get it to work. But I'm
> puzzled why all the (default) features are not working with my system fully
> up to date.
>
> Any ideas? Is this not yet supported?
Yes, these features aren't yet
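The usual workaround in this situation, as a hedged example (the exact feature list depends on the image), is to disable the features the kernel doesn't understand yet, or to create new images with only the layering feature as noted above:

$ rbd feature disable test1 deep-flatten fast-diff object-map exclusive-lock
$ sudo rbd map test1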
On Fri, Jun 17, 2016 at 2:01 PM, Ishmael Tsoaela wrote:
> Hi,
>
> Will someone please assist? I am new to Ceph and I am trying to map an image
> and this happens:
>
> cluster-admin@nodeB:~/.ssh/ceph-cluster$ rbd map data_01 --pool data
> rbd: sysfs write failed
> In some cases useful info is found i
On Fri, Jun 17, 2016 at 2:37 PM, Ishmael Tsoaela wrote:
> Hi,
>
> Thank you for the response but with sudo all it does is freeze:
>
> rbd map data_01 --pool data
>
>
> cluster-admin@nodeB:~/.ssh/ceph-cluster$ date && sudo rbd map data_01 --pool
> data && date
> Fri Jun 17 14:36:41 SAST 2016
What'
On Sat, Jul 23, 2016 at 4:39 PM, Nathanial Byrnes wrote:
> I found it. I'm not sure how the block device was created, but, I had the
> wrong image format. I thought that image format 2 was in 3.11+, and debian
> 8.5 is 3.16 ... but I attached to an image-format 1 image and /dev/rbd0
> magically a
On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev wrote:
> RBD illustration showing RBD ignoring discard until a certain
> threshold - why is that? This behavior is unfortunately incompatible
> with ESXi discard (UNMAP) behavior.
>
> Is there a way to lower the discard sensitivity on RBD devices?
>
On Mon, Aug 1, 2016 at 9:07 PM, Ilya Dryomov wrote:
> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
> wrote:
>> RBD illustration showing RBD ignoring discard until a certain
>> threshold - why is that? This behavior is unfortunately incompatible
>> with ESXi discard (U
On Tue, Aug 2, 2016 at 1:05 AM, Alex Gorbachev wrote:
> Hi Ilya,
>
> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
>> On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev
>> wrote:
>>> RBD illustration showing RBD ignoring discard until a certain
>>> thr
On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev wrote:
> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin wrote:
>> Alex Gorbachev wrote on 08/01/2016 04:05 PM:
>>> Hi Ilya,
>>>
>>> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov wrote:
>>>>
On Thu, Aug 4, 2016 at 10:44 PM, K.C. Wong wrote:
> Thank you, Jason.
>
> While I can't find the culprit for the watcher (the watcher never expired,
> and survived a reboot. udev, maybe?), blacklisting the host did allow me
> to remove the device.
It survived a reboot because watch state is persi
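A hedged sketch of how a stale watcher is typically located and evicted; the pool name, image id and client address are placeholders, and blacklisting cuts that client off from the whole cluster, so use it with care:

$ rbd info rbd/myimage                      # block_name_prefix gives the image id
$ rados -p rbd listwatchers rbd_header.<image id>
$ ceph osd blacklist add <watcher address>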
On Sat, Aug 6, 2016 at 1:10 AM, Alex Gorbachev wrote:
> Is there a way to perhaps increase the discard granularity? The way I see
> it based on the discussion so far, here is why discard/unmap is failing to
> work with VMWare:
>
> - RBD provides space in 4MB blocks, which must be discarded entire
On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev wrote:
>> I'm confused. How can a 4M discard not free anything? It's either
>> going to hit an entire object or two adjacent objects, truncating the
>> tail of one and zeroing the head of another. Using rbd diff:
>>
>> $ rbd diff test | grep -A 1 2
On Mon, Aug 8, 2016 at 11:47 PM, Jason Dillaman wrote:
> On Mon, Aug 8, 2016 at 5:39 PM, Jason Dillaman wrote:
>> Unfortunately, for v2 RBD images, this image name to image id mapping
>> is stored in the LevelDB database within the OSDs and I don't know,
>> offhand, how to attempt to recover dele
On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
wrote:
> Hi List,
>
> I read in the ceph documentation[1] that there are several rbd image features
>
> - layering: layering support
> - striping: striping v2 support
> - exclusive-lock: exclusive locking support
> - object-map: object map support (r
On Tue, Aug 16, 2016 at 5:18 AM, Yanjun Shen wrote:
> hi,
>    when I run rbd map -p rbd test, I get an error:
> hdu@ceph-mon2:~$ sudo rbd map -p rbd test
> rbd: sysfs write failed
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (5) Input/output error
>
> d
On Tue, Aug 16, 2016 at 4:06 AM, Chengwei Yang
wrote:
> On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
>> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
>> wrote:
>> > Hi List,
>> >
>> > I read from ceph document[1] that there are several
On Thu, Jun 29, 2017 at 4:30 PM, Nick Fisk wrote:
> Hi All,
>
> Putting out a call for help to see if anyone can shed some light on this.
>
> Configuration:
> Ceph cluster presenting RBD's->XFS->NFS->ESXi
> Running 10.2.7 on the OSD's and 4.11 kernel on the NFS gateways in a
> pacemaker cluster
>
On Thu, Jun 29, 2017 at 6:22 PM, Nick Fisk wrote:
>> -Original Message-
>> From: Ilya Dryomov [mailto:idryo...@gmail.com]
>> Sent: 29 June 2017 16:58
>> To: Nick Fisk
>> Cc: Ceph Users
>> Subject: Re: [ceph-users] Kernel mounted RBD's hanging
&g
On Fri, Jun 30, 2017 at 2:14 PM, Nick Fisk wrote:
>
>
>> -Original Message-----
>> From: Ilya Dryomov [mailto:idryo...@gmail.com]
>> Sent: 29 June 2017 18:54
>> To: Nick Fisk
>> Cc: Ceph Users
>> Subject: Re: [ceph-users] Kernel mounted RBD'
On Sat, Jul 1, 2017 at 9:29 AM, Nick Fisk wrote:
>> -Original Message-
>> From: Ilya Dryomov [mailto:idryo...@gmail.com]
>> Sent: 30 June 2017 14:06
>> To: Nick Fisk
>> Cc: Ceph Users
>> Subject: Re: [ceph-users] Kernel mounted RBD's hanging
&g
On Wed, Jul 5, 2017 at 7:55 PM, Stanislav Kopp wrote:
> Hello,
>
> I have a problem that sometimes I can't unmap an rbd device: I get "sysfs
> write failed rbd: unmap failed: (16) Device or resource busy", there
> are no open files and the "holders" directory is empty. I saw on the
> mailing list that you
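A hedged checklist for this kind of stuck unmap; the device name is a placeholder, and the force option is only honoured by reasonably recent kernels and will fail any outstanding I/O:

$ lsof /dev/rbd0
$ fuser -vm /dev/rbd0               # anything still holding the device?
$ sudo rbd unmap -o force /dev/rbd0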
On Wed, Jul 5, 2017 at 8:32 PM, David Turner wrote:
> I had this problem occasionally in a cluster where we were regularly mapping
> RBDs with KRBD. Something else we saw was that after this happened for
> un-mapping RBDs, it would start preventing mapping some RBDs as
> well. We were a
On Thu, Jul 6, 2017 at 1:28 PM, Stanislav Kopp wrote:
> Hi,
>
> 2017-07-05 20:31 GMT+02:00 Ilya Dryomov :
>> On Wed, Jul 5, 2017 at 7:55 PM, Stanislav Kopp wrote:
>>> Hello,
>>>
>>> I have problem that sometimes I can't unmap rbd device, I get
On Thu, Jul 6, 2017 at 2:23 PM, Stanislav Kopp wrote:
> 2017-07-06 14:16 GMT+02:00 Ilya Dryomov :
>> On Thu, Jul 6, 2017 at 1:28 PM, Stanislav Kopp wrote:
>>> Hi,
>>>
>>> 2017-07-05 20:31 GMT+02:00 Ilya Dryomov :
>>>> On Wed, Jul 5, 201
On Fri, Jul 7, 2017 at 12:10 PM, Nick Fisk wrote:
> Managed to catch another one, osd.75 again, not sure if that is an indication
> of anything or just a coincidence. osd.75 is one of 8 OSD's in a cache tier,
> so all IO will be funnelled through them.
>
>
> cat
> /sys/kernel/debug/ceph/d027d5
On Wed, Jul 12, 2017 at 7:15 PM, Nick Fisk wrote:
>> Hi Ilya,
>>
>> I have managed today to capture the kernel logs with debugging turned on and
>> the ms+osd debug logs from the mentioned OSD.
>> However, this is from a few minutes after the stall starts, not before. The
>> very random nature o
On Fri, Jul 14, 2017 at 11:29 AM, Riccardo Murri
wrote:
> Hello,
>
> I am trying to install a test CephFS "Luminous" system on Ubuntu 16.04.
>
> Everything looks fine, but the `mount.ceph` command fails (error 110,
> timeout);
> kernel logs show a number of messages like these before the `mount`
On Wed, Jul 12, 2017 at 7:11 PM, wrote:
> Hi!
>
> I have installed Ceph using ceph-deploy.
> The Ceph Storage Cluster setup includes these nodes:
> ld4257 Monitor0 + Admin
> ld4258 Montor1
> ld4259 Monitor2
> ld4464 OSD0
> ld4465 OSD1
>
> Ceph Health status is OK.
>
> However, I cannot mount Ceph
On Thu, Jul 6, 2017 at 2:43 PM, Ilya Dryomov wrote:
> On Thu, Jul 6, 2017 at 2:23 PM, Stanislav Kopp wrote:
>> 2017-07-06 14:16 GMT+02:00 Ilya Dryomov :
>>> On Thu, Jul 6, 2017 at 1:28 PM, Stanislav Kopp wrote:
>>>> Hi,
>>>>
>>>> 2017-07
On Thu, Jul 20, 2017 at 3:23 PM, Дмитрий Глушенок wrote:
> Looks like I have a similar issue to the one described in this bug:
> http://tracker.ceph.com/issues/15255
> Writer (dd in my case) can be restarted and then writing continues, but
> until restarted, dd looks hung on write.
>
> On 20 July 2017,
On Thu, Jul 20, 2017 at 4:26 PM, Roger Brown wrote:
> What's the trick to overcoming unsupported features error when mapping an
> erasure-coded rbd? This is on Ceph Luminous 12.1.1, Ubuntu Xenial, Kernel
> 4.10.0-26-lowlatency.
>
> Steps to replicate:
>
> $ ceph osd pool create rbd_data 32 32 eras
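For what it's worth, a hedged sketch of the usual luminous-era setup: keep the image (and its metadata) in a replicated pool and point only the data at the erasure-coded pool with --data-pool, rather than creating the image in the EC pool itself. Pool and image names are placeholders:

$ ceph osd pool set rbd_data allow_ec_overwrites true
$ rbd create --size 10240 --data-pool rbd_data rbd/ec-img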
On Thu, Jul 20, 2017 at 6:35 PM, Дмитрий Глушенок wrote:
> Hi Ilya,
>
> While trying to reproduce the issue I've found that:
> - it is relatively easy to reproduce 5-6 minute hangs just by killing the
> active mds process (triggering failover) while writing a lot of data.
> Unacceptable timeout, but
've never seen the endless
hang with ceph-fuse, it's probably a fairly simple to fix (might be
hard to track down though) kernel client bug.
>
> On 21 July 2017, at 15:47, Ilya Dryomov wrote:
>
> On Thu, Jul 20, 2017 at 6:35 PM, Дмитрий Глушенок wrote:
>
> Hi Ilya,
>
On Mon, Jul 24, 2017 at 6:35 PM, wrote:
> Hi,
>
> I'm running a Ceph cluster which I started back in the bobtail age and have kept
> running/upgraded over the years. It has three nodes, each running one MON,
> 10 OSDs and one MDS. The cluster has one MDS active and two standby.
> Machines are 8-core O
On Thu, Jul 13, 2017 at 12:54 PM, Ilya Dryomov wrote:
> On Wed, Jul 12, 2017 at 7:15 PM, Nick Fisk wrote:
>>> Hi Ilya,
>>>
>>> I have managed today to capture the kernel logs with debugging turned on
>>> and the ms+osd debug logs from the mentioned OSD.
On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote:
> Hi All,
>
> I have searched everywhere for some sort of table that shows which rbd image
> features are supported by which kernel version and didn't find any.
>
> basically I am looking at the latest kernels from kernel.org, and I am thinking
> of upgrading t
On Sat, Aug 19, 2017 at 1:39 AM, Mingliang LIU
wrote:
> Hi all,
>
> I have a quick question about the RBD kernel module - how to best collect
> the metrics or perf numbers? The command 'ceph -w' does print some useful
> cluster-wide event logs, while I'm interested in
> per-client/per-image/per-
On Mon, Aug 21, 2017 at 2:53 PM, Stéphane Klein
wrote:
> Hi,
>
> I'm looking for environment variables to configure the rbd "-c" parameter and
> the "--keyfile" parameter.
>
> I found nothing in http://docs.ceph.com/docs/master/man/8/rbd/
For -c, CEPH_CONF.
I don't think there is an environment variable for -
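A quick illustration of the -c part (the config path is an assumption):

$ CEPH_CONF=/etc/ceph/other-cluster.conf rbd ls

The keyfile would still have to be passed on the command line with --keyfile.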
On Mon, Aug 21, 2017 at 4:11 PM, Ilya Dryomov wrote:
> On Mon, Aug 21, 2017 at 2:53 PM, Stéphane Klein
> wrote:
>> Hi,
>>
>> I'm looking for environment variables to configure the rbd "-c" parameter and
>> the "--keyfile" parameter.
>>
>> I found
On Wed, Sep 6, 2017 at 9:16 AM, Henrik Korkuc wrote:
> On 17-09-06 09:10, Ashley Merrick wrote:
>
> I was just going by: docs.ceph.com/docs/master/start/os-recommendations/
>
>
> Which states 4.9
>
>
> docs.ceph.com/docs/master/rados/operations/crush-map
>
>
> Only goes as far as Jewel and states
On Wed, Sep 6, 2017 at 11:23 AM, Ashley Merrick wrote:
> The only driver for it was to be able to use this:
>
> http://docs.ceph.com/docs/master/rados/operations/upmap/
>
> To see if it would help with the current very uneven PG map across 100+ OSDs,
> something that can wait if the current kernel isn't rea
On Sat, Sep 23, 2017 at 12:07 AM, Muminul Islam Russell
wrote:
> Hi Ilya,
>
> Hope you are doing great.
> Sorry for bugging you. I did not find enough resources for my question. It
> would really help if you could reply to me. My questions are in red
> colour.
>
> - layering: layering suppo
On Thu, Oct 5, 2017 at 9:05 AM, Osama Hasebou wrote:
> Hi Everyone,
>
> We are testing Ceph as a backend for a Xen server acting as the
> client, and when a pool was created and mounted as RBD on one of the
> client servers, while adding data to it, we see the error below:
>
>
On Thu, Oct 5, 2017 at 9:03 AM, Olivier Bonvalet wrote:
> I also see that, but on 4.9.52 and 4.13.3 kernels.
>
> I also have some kernel panics, but don't know if they're related (RBDs are
> mapped on Xen hosts).
Do you have that panic message?
Do you use rbd devices for something other than Xen? If
On Thu, Oct 5, 2017 at 7:53 AM, Adrian Saul
wrote:
>
> We see the same messages and are similarly on a 4.4 KRBD version that is
> affected by this.
>
> I have seen no impact from it so far that I know about
>
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.cep
On Thu, Oct 5, 2017 at 12:01 PM, Olivier Bonvalet wrote:
> On Thursday, 5 October 2017 at 11:47 +0200, Ilya Dryomov wrote:
>> The stable pages bug manifests as multiple sporadic connection
>> resets,
>> because in that case CRCs computed by the kernel don't always matc
On Thu, Oct 5, 2017 at 6:05 PM, Olivier Bonvalet wrote:
> On Thursday, 5 October 2017 at 17:03 +0200, Ilya Dryomov wrote:
>> When did you start seeing these errors? Can you correlate that to
>> a ceph or kernel upgrade? If not, and if you don't see other issues,
>>
On Wed, Oct 11, 2017 at 4:40 PM, Olivier Bonvalet wrote:
> Hi,
>
> I had a "general protection fault: " with Ceph RBD kernel client.
> Not sure how to read the call trace - is it Ceph related?
>
>
> Oct 11 16:15:11 lorunde kernel: [311418.891238] general protection fault:
> [#1] SMP
> Oct 11 1
On Thu, Oct 12, 2017 at 12:23 PM, Jeff Layton wrote:
> On Thu, 2017-10-12 at 09:12 +0200, Ilya Dryomov wrote:
>> On Wed, Oct 11, 2017 at 4:40 PM, Olivier Bonvalet
>> wrote:
>> > Hi,
>> >
>> > I had a "general protection fault: " with C
On Fri, Oct 27, 2017 at 6:33 PM, Bogdan SOLGA wrote:
> Hello, everyone!
>
> We have recently upgraded our Ceph pool to the latest Luminous release. On
> one of the servers that we used as Ceph clients we had several freeze
> issues, which we empirically linked to the concurrent usage of some I/O
>
On Tue, Oct 31, 2017 at 8:05 AM, shadow_lin wrote:
> Hi Jason,
> Thank you for your advice.
> The nodiscard option works great. It now takes 5 min to format a 5T rbd image
> with xfs and only seconds to format with ext4.
> Is there any drawback to formatting an rbd image with the nodiscard option?
If it is a fres
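For reference, a sketch of the nodiscard-style formatting being discussed; the device name is a placeholder, and these flags only skip the discard pass at mkfs time, they don't affect later discards:

$ mkfs.xfs -K /dev/rbd0
$ mkfs.ext4 -E nodiscard /dev/rbd0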
On Fri, Nov 3, 2017 at 2:51 AM, Jason Dillaman wrote:
> There was a little delay getting things merged in the upstream kernel so we
> are now hoping for v4.16. You should be able to take a 4.15 rc XYZ kernel
I think that should be 4.15 and 4.14-rc respectively ;)
> and apply the patches from thi
On Mon, Nov 6, 2017 at 8:22 AM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> is there already a kernel available which connects with luminous?
>
> ceph features still reports release jewel for my kernel clients.
require-min-compat-client = jewel is the default for new luminous
clusters.
4.13 s
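A hedged sketch of how this is usually checked and, if every client is new enough, raised; raising it is effectively irreversible, so only do it once all clients report luminous:

$ ceph features
$ ceph osd set-require-min-compat-client luminous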
On Mon, Nov 13, 2017 at 7:45 AM, ? ? wrote:
> I met the same issue as http://tracker.ceph.com/issues/3370,
>
> but I can't find the commit id of 2978257c56935878f8a756c6cb169b569e99bb91.
> Can someone help me?
I updated the ticket. It's very old though, which kernel are you
running?
Thanks,
On Mon, Nov 13, 2017 at 10:18 AM, 周 威 wrote:
> Hi, Ilya
>
> I'm using the CentOS 7 kernel, which should be 3.10.
> I checked the patch, and it appears in my kernel source.
> We got the same stack as #3370; the process is hung in sleep_on_page_killable.
> The debugfs ceph/osdc file shows there is a read r
On Mon, Nov 13, 2017 at 10:53 AM, 周 威 wrote:
> Hi, Ilya
>
> The kernel version is 3.10.106.
> Part of dmesg related to ceph:
> [7349718.004905] libceph: osd297 down
> [7349718.005190] libceph: osd299 down
> [7349785.671015] libceph: osd295 down
> [7350006.357509] libceph: osd291 weight 0x0 (out)
>
On Tue, Dec 12, 2017 at 8:18 PM, fcid wrote:
> Hello everyone,
>
> We had an incident regarding a client which rebooted after experiencing some
> issues with a ceph cluster.
>
> The other clients that consume RBD images from the same ceph cluster showed
> an error at the time of the reboot in logs r
On Wed, Dec 20, 2017 at 6:56 PM, Jason Dillaman wrote:
> ... looks like this watch "timeout" was introduced in the kraken
> release [1] so if you don't see this issue with a Jewel cluster, I
> suspect that's the cause.
>
> [1] https://github.com/ceph/ceph/pull/11378
Strictly speaking that's a bac
On Wed, Dec 20, 2017 at 6:20 PM, Serguei Bezverkhi (sbezverk)
wrote:
> It took 30 minutes for the watcher to time out after an ungraceful restart. Is
> there a way to limit it to something a bit more reasonable, like 1-3 minutes?
>
> On 2017-12-20, 12:01 PM, "Serguei Bezverkhi (sbezverk)"
> wrote:
>
On Thu, Dec 21, 2017 at 3:04 PM, Serguei Bezverkhi (sbezverk)
wrote:
> Hi Ilya,
>
> Here you go, no k8s services running this time:
>
> sbezverk@kube-4:~$ sudo rbd map raw-volume --pool kubernetes --id admin -m
> 192.168.80.233 --key=AQCeHO1ZILPPDRAA7zw3d76bplkvTwzoosybvA==
> /dev/rbd0
> sbezver
On Mon, Jan 29, 2018 at 8:37 AM, Konstantin Shalygin wrote:
> Does anybody know about changes in the rbd 'striping' feature? Maybe it is
> a deprecated feature? What I mean:
>
> I have volume created by Jewel client on Luminous cluster.
>
> # rbd --user=cinder info
> solid_rbd/volume-12b5df1e-df4c-4574-859d-22
On Thu, Feb 8, 2018 at 11:20 AM, Martin Emrich
wrote:
> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>
> Works fine, except that it does not support the latest features: I had to
> disable exclusive-lock,fast-diff,ob
On Thu, Feb 8, 2018 at 12:54 PM, Kevin Olbrich wrote:
> 2018-02-08 11:20 GMT+01:00 Martin Emrich :
>>
>> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
>> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>>
>> Works fine, except that it does not support the latest feat