Re: [ceph-users] ceph-mon not listening on IPv6?

2017-07-31 Thread Wido den Hollander
> On 30 July 2017 at 2:42, Stuart Longland wrote: > > Hi all, > I'm setting up an experimental cluster at home, consisting of 3 nodes which run combined ceph-osd and ceph-mon daemons, and a pair of nodes that run virtual machines. > All 5 nodes run Intel Atom C2750s with 8GB RAM an
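For reference, binding the mons to IPv6 usually comes down to a couple of ceph.conf settings; a minimal sketch, with placeholder addresses rather than the actual ones from this cluster (Ceph of this era binds to one address family only, it does not dual-stack):

    [global]
    # Tell Ceph to bind to IPv6 instead of IPv4
    ms bind ipv6 = true
    # Placeholder mon addresses; IPv6 literals go in brackets, port kept explicit
    mon host = [2001:db8::10]:6789,[2001:db8::11]:6789,[2001:db8::12]:6789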

Re: [ceph-users] ceph-mon not listening on IPv6?

2017-07-31 Thread Stuart Longland
On 31/07/17 19:10, Wido den Hollander wrote: > >> On 30 July 2017 at 2:42, Stuart Longland wrote: >> As a result, I see messages like this from clients: >>> oneadmin@opennebula:~$ rados df --id libvirt >>> 2017-07-30 09:58:32.389376 7f4f611b4700 0 -- :/3981532287 >>> [2001:44b8:21ac:70fc::
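One way to narrow down whether the client is picking up the right mon addresses is to point it at a monitor explicitly and to check what its config resolves to; a sketch only, the address below is a placeholder and not the one from the log above:

    # Bypass mon discovery from ceph.conf and talk to one mon directly
    rados df --id libvirt -m [2001:db8::10]:6789
    # Show the mon_host value the client section actually ends up with
    ceph-conf --name client.libvirt --lookup mon_host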

Re: [ceph-users] Kernel mounted RBD's hanging

2017-07-31 Thread Ilya Dryomov
On Thu, Jul 13, 2017 at 12:54 PM, Ilya Dryomov wrote: > On Wed, Jul 12, 2017 at 7:15 PM, Nick Fisk wrote: >>> Hi Ilya, >>> I have managed today to capture the kernel logs with debugging turned on and the ms+osd debug logs from the mentioned OSD. >>> However, this is from a few minutes af
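For reference, kernel-client debugging of this kind is typically captured via dynamic debug on the libceph/rbd modules; a rough sketch, assuming CONFIG_DYNAMIC_DEBUG and a mounted debugfs:

    # Enable verbose logging in the kernel RBD stack
    echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control
    echo 'module rbd +p'     > /sys/kernel/debug/dynamic_debug/control
    # ... reproduce the hang, collect the kernel log, then switch it back off
    dmesg -T > rbd-hang-dmesg.txt
    echo 'module libceph -p' > /sys/kernel/debug/dynamic_debug/control
    echo 'module rbd -p'     > /sys/kernel/debug/dynamic_debug/control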

Re: [ceph-users] Kernel mounted RBD's hanging

2017-07-31 Thread Nick Fisk
> -Original Message- > From: Ilya Dryomov [mailto:idryo...@gmail.com] > Sent: 31 July 2017 11:36 > To: Nick Fisk > Cc: Ceph Users > Subject: Re: [ceph-users] Kernel mounted RBD's hanging > > On Thu, Jul 13, 2017 at 12:54 PM, Ilya Dryomov wrote: > > On Wed, Jul 12, 2017 at 7:15 PM, Nick
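On the OSD side, the "ms+osd debug logs" mentioned in this thread are normally obtained by raising log levels at runtime; a sketch, with osd.12 standing in for whichever OSD is implicated:

    # Raise messenger and OSD logging on the suspect OSD (placeholder id)
    ceph tell osd.12 injectargs '--debug_ms 1 --debug_osd 20'
    # ... reproduce, grab /var/log/ceph/ceph-osd.12.log, then restore defaults
    ceph tell osd.12 injectargs '--debug_ms 0/5 --debug_osd 1/5'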

Re: [ceph-users] ceph-mon not listening on IPv6?

2017-07-31 Thread Wido den Hollander
> On 31 July 2017 at 11:40, Stuart Longland wrote: > > On 31/07/17 19:10, Wido den Hollander wrote: > >> On 30 July 2017 at 2:42, Stuart Longland wrote: > >> As a result, I see messages like this from clients: > >>> oneadmin@opennebula:~$ rados df --id libvirt > >>> 2017-07-30

Re: [ceph-users] ceph-disk activate-block: not a block device

2017-07-31 Thread bruno.canning
Hi All, We are seeing the same problem here at Rutherford Appleton Laboratory: during our patching against Stack Clash on our large physics data cluster, when rebooting the storage nodes, about 8/36 OSD disks remount. We coaxed them to mount manually during the reboot campaign (see method below) b
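The usual manual recovery for OSDs that miss udev activation on boot looks roughly like the following; device names are placeholders, and this is not necessarily the method referred to above (that part of the message is truncated):

    # Re-trigger activation of all prepared ceph-disk partitions
    ceph-disk activate-all
    # Or activate a single data partition by hand (placeholder device)
    ceph-disk activate /dev/sdb1
    # Check the OSD came back
    systemctl status ceph-osd@3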

[ceph-users] Client behavior when adding and removing mons

2017-07-31 Thread Edward R Huyer
I'm migrating my Ceph cluster to entirely new hardware. Part of that is replacing the monitors. My plan is to add new monitors and remove old ones, updating config files on client machines as I go. I have clients actively using the cluster. They are all QEMU/libvirt and kernel clients using R
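Already-connected clients follow monmap updates as mons are added and removed, so mon_host in ceph.conf mainly matters when a client first connects. A rough sketch of the usual sequence, with placeholder names and addresses, and glossing over bootstrapping the new mon daemons themselves:

    # On an admin node: add the new monitors to the monmap, then drop the old ones
    ceph mon add newmon1 192.168.10.21:6789
    ceph mon remove oldmon1
    # On clients: update mon_host so fresh connections can still find the quorum
    # /etc/ceph/ceph.conf
    #   mon host = 192.168.10.21:6789,192.168.10.22:6789,192.168.10.23:6789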

Re: [ceph-users] Client behavior when adding and removing mons

2017-07-31 Thread Richard Hesketh
On 31/07/17 14:05, Edward R Huyer wrote: > I’m migrating my Ceph cluster to entirely new hardware. Part of that is replacing the monitors. My plan is to add new monitors and remove old ones, updating config files on client machines as I go. > I have clients actively using the cluster. T

[ceph-users] radosgw hung when OS disks went readonly, different node radosgw restart fixed it

2017-07-31 Thread Sean Purdy
Hi, Just had an incident in a 3-node test cluster running 12.1.1 on Debian Stretch. Each node had its own mon, mgr, radosgw, and OSDs. Just object store. I had s3cmd looping and uploading files via S3. On one of the machines, the RAID controller barfed and dropped the OS disks. Or the di
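For what it's worth, the restart that cleared this is normally just the systemd unit; the instance name below is a guess at the typical packaging convention, not the actual one from this cluster, and 7480 is only the default civetweb port:

    # Typical radosgw unit name on a 12.x package install (placeholder host)
    systemctl restart ceph-radosgw@rgw.node2.service
    # Verify it is answering again
    curl -s http://localhost:7480/ | head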

[ceph-users] Manual fix pg with bluestore

2017-07-31 Thread Marc Roos
I have an error with a placement group, and only seem to find solutions that assume a filestore (filesystem-backed) OSD: http://ceph.com/geen-categorie/ceph-manually-repair-object/ Does anybody have a link describing how I can do this with a BlueStore OSD? /var/log/ceph/ceph-osd.9.log:48:2017-07-31 14:21:33.929855 7fbbb
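With BlueStore there is no object file to poke at directly; the rough equivalent of the filestore recipe is to work on the stopped OSD through ceph-objectstore-tool. A sketch only, with the pgid and object name as placeholders:

    # Stop the OSD so the tool gets exclusive access to the store
    systemctl stop ceph-osd@9
    # List objects in the suspect PG (placeholder pgid)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 --pgid 2.1f --op list
    # Remove (or export/import) the bad copy, then restart and repair
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 --pgid 2.1f '<object>' remove
    systemctl start ceph-osd@9
    ceph pg repair 2.1f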

Re: [ceph-users] CRC mismatch detection on read (XFS OSD)

2017-07-31 Thread Дмитрий Глушенок
You are right - missing xattrs lead to ENOENT. Corrupting the file without removing the xattrs leads to an I/O error without marking the PG as inconsistent. Created an issue: http://tracker.ceph.com/issues/20863 > On 28 July 2017 at 23:04, Gregory Farnum wrote: > > On Fri, Jul 28, 201
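For anyone reproducing this on a filestore/XFS OSD, the xattrs in question can be inspected on the object file itself; the path below is purely illustrative:

    # Dump the ceph xattrs (user.ceph._, user.ceph._@1, ...) on an object file
    getfattr -d -m '.*' \
      /var/lib/ceph/osd/ceph-0/current/2.1f_head/<object-file>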

[ceph-users] Ceph - OpenStack space efficiency

2017-07-31 Thread Italo Santos
Hello everyone, As we know, the OpenStack Ceph integration uses the Ceph RBD snapshot feature to give more space efficiency. My question is whether there is some way to calculate this space saving/efficiency from using RBD snapshots/clones?
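One practical way to approximate the saving is to compare what clones actually consume against their provisioned size, which rbd du reports per image; pool and image names below are placeholders (the fast-diff feature makes the accounting faster and more accurate):

    # Provisioned vs actually-used space for one image
    rbd du vms/one-42-disk-0
    # Or summed for the whole pool
    rbd du -p vms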