Re: [ceph-users] what do pull request label "cleanup" mean?

2016-05-27 Thread John Spray
On Fri, May 27, 2016 at 4:46 AM, wrote: > what do pull request label "cleanup" mean? It's used when something is not exactly a bug fix (the existing code works) and not a feature (doesn't add new functionality), but improves the code in some other way, usually by removing something unneeded or m

[ceph-users] Ubuntu Xenial - Ceph repo uses weak digest algorithm (SHA1)

2016-05-27 Thread Saverio Proto
I started to use Xenial... does everyone have this error? W: http://ceph.com/debian-hammer/dists/xenial/InRelease: Signature by key 08B73419AC32B4E966C1A330E84AC2C0460F3994 uses weak digest algorithm (SHA1) Saverio
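A quick way to confirm which digest algorithm signed a repository's InRelease file is to look at its signature packet; a rough sketch, assuming wget and gpg are installed and using the URL from the warning above:

    # Fetch the InRelease file, extract its armored signature block and dump
    # the OpenPGP packets; "digest algo 2" is SHA1, "digest algo 8" is SHA256.
    wget -q http://ceph.com/debian-hammer/dists/xenial/InRelease
    awk '/BEGIN PGP SIGNATURE/,/END PGP SIGNATURE/' InRelease | \
        gpg --list-packets | grep -i 'digest algo'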

Re: [ceph-users] Jewel ubuntu release is half cooked

2016-05-27 Thread Andrei Mikhailovsky
Hi Ernst, Here is what I've got:

$ cat /etc/udev/rules.d/55-ceph-journals.rules
#
# JOURNAL_UUID
# match for the Intel SSD model INTEL SSDSC2BA20
#
ACTION=="add|change", KERNEL=="sd??", ATTRS{model}=="INTEL SSDSC2BA20", OWNER="ceph", GROUP="ceph", MODE="660"

So, it looks as all /dev/sd
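If a rule like that is added while the machine is already up, it can be applied without a reboot; a rough sketch, with example device names:

    # Reload udev rules, replay "add" events for block devices, then check
    # that the journal partitions now belong to ceph:ceph.
    udevadm control --reload-rules
    udevadm trigger --subsystem-match=block --action=add
    ls -l /dev/sdb* /dev/sdc*    # example devices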

[ceph-users] unfound objects - why and how to recover ? (bonus : jewel logs)

2016-05-27 Thread SCHAER Frederic
Hi, -- First, let me start with the bonus... I migrated from hammer => jewel and followed the migration instructions... but the migration instructions are missing this: #chown -R ceph:ceph /var/log/ceph I just discovered this was the reason I found no logs anywhere about my current issue :/ -- This
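For anyone else hitting this, a rough sketch of the ownership fix-ups a hammer => jewel upgrade typically needs (paths from a default install; check the release notes for your own setup):

    # Jewel daemons run as the "ceph" user instead of root, so both the
    # data and the log directories need to change hands.
    systemctl stop ceph.target     # unit name may differ on your distro
    chown -R ceph:ceph /var/lib/ceph
    chown -R ceph:ceph /var/log/ceph
    systemctl start ceph.target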

[ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Stillwell, Bryan J
I have a Ceph cluster at home that I've been running CephFS on for the last few years. Recently my MDS server became damaged and while attempting to fix it I believe I've destroyed my CephFS journal based off this: 2016-05-25 16:48:23.882095 7f8d2fac2700 -1 log_channel(cluster) log [ERR] : Error
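Before trying to repair anything it is usually worth capturing what is left of the journal; a rough sketch, assuming a single-rank (rank 0) filesystem:

    # Export a copy of the damaged journal, then check whether it is readable.
    cephfs-journal-tool journal export /root/mds0-journal-backup.bin
    cephfs-journal-tool journal inspect
    # If it is readable, salvage directory entries from it before any reset:
    cephfs-journal-tool event recover_dentries summary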

Re: [ceph-users] unfound objects - why and how to recover ? (bonus : jewel logs)

2016-05-27 Thread Samuel Just
Well, it's not supposed to do that if the backing storage is working properly. If the filesystem/disk controller/disk combination is not respecting barriers (or otherwise can lose committed data in a power failure) in your configuration, a power failure could cause a node to go backwards in time -
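A couple of quick checks for that on an OSD node; a rough sketch, with example device and mount paths:

    # "nobarrier" in the mount options means the filesystem is trusting the
    # disk/controller cache to survive power loss.
    grep nobarrier /proc/mounts
    mount | grep /var/lib/ceph/osd
    # Also check whether the drive's volatile write cache is enabled:
    hdparm -W /dev/sdb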

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Gregory Farnum
On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J wrote: > I have a Ceph cluster at home that I've been running CephFS on for the > last few years. Recently my MDS server became damaged and while > attempting to fix it I believe I've destroyed my CephFS journal based off > this: > > 2016-05-25

[ceph-users] Flatten of mapped image

2016-05-27 Thread Max Yehorov
Hi,
If anyone has some insight or comments on the question:

Q) Flatten with IO activity
For example I have a clone chain:

IMAGE(PARENT)
image1(-)
image2(image1@snap0)

image2 is mapped, mounted and has some IO activity. How safe is it to flatten image2 if it has ongoing IO? thanks.
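For reference, the flatten itself is a single rbd command; a rough sketch using the names from the example above and assuming the default 'rbd' pool:

    # Copies the parent's data up into image2; the clone stays usable while
    # this runs, and afterwards the dependency on image1@snap0 is gone.
    rbd flatten rbd/image2
    rbd info rbd/image2    # the "parent:" line should no longer appear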

Re: [ceph-users] Blocked ops, OSD consuming memory, hammer

2016-05-27 Thread Chad William Seys
Hi Heath, My OSDs do the exact same thing - consume lots of RAM when the cluster is reshuffling OSDs. Try ceph tell osd.* heap release as a cron job. Here's a bug: http://tracker.ceph.com/issues/12681 Chad
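A rough sketch of running that as a cron job (the interval is arbitrary and the file name hypothetical; heap release only does something when the OSDs are built against tcmalloc):

    # /etc/cron.d/ceph-heap-release
    # Once an hour, ask every OSD to hand freed heap pages back to the OS.
    0 * * * *  root  /usr/bin/ceph tell 'osd.*' heap release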

Re: [ceph-users] Flatten of mapped image

2016-05-27 Thread Ilya Dryomov
On Fri, May 27, 2016 at 8:51 PM, Max Yehorov wrote: > Hi, > If anyone has some insight or comments on the question: > > Q) Flatten with IO activity > For example I have a clone chain: > > IMAGE(PARENT) > image1(-) > image2(image1@snap0) > > image2 is mapped, mounted and has some IO activity. >

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Stillwell, Bryan J
On 5/27/16, 11:27 AM, "Gregory Farnum" wrote: >On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J > wrote: >> I have a Ceph cluster at home that I've been running CephFS on for the >> last few years. Recently my MDS server became damaged and while >> attempting to fix it I believe I've destroye

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Gregory Farnum
On Fri, May 27, 2016 at 1:54 PM, Stillwell, Bryan J wrote: > On 5/27/16, 11:27 AM, "Gregory Farnum" wrote: > >>On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J >> wrote: >>> I have a Ceph cluster at home that I've been running CephFS on for the >>> last few years. Recently my MDS server becam

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Stillwell, Bryan J
On 5/27/16, 3:01 PM, "Gregory Farnum" wrote:
>>
>> So would the next steps be to run the following commands?:
>>
>> cephfs-table-tool 0 reset session
>> cephfs-table-tool 0 reset snap
>> cephfs-table-tool 0 reset inode
>> cephfs-journal-tool --rank=0 journal reset
>> cephfs-data-scan init
>>
>> c
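For context, the Jewel disaster-recovery procedure follows 'cephfs-data-scan init' with the scan passes that rebuild metadata from the data pool; a rough sketch, with a hypothetical data pool name:

    # Rebuild backtraces and inodes from the objects in the data pool.
    # Both passes can take a long time on a large filesystem.
    cephfs-data-scan scan_extents cephfs_data
    cephfs-data-scan scan_inodes cephfs_data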

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Gregory Farnum
What's the current full output of "ceph -s"? If you already had your MDS in damaged state, you might just need to mark it as repaired. That's a monitor command. On Fri, May 27, 2016 at 2:09 PM, Stillwell, Bryan J wrote: > On 5/27/16, 3:01 PM, "Gregory Farnum" wrote: > >>> >>> So would the next
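If that turns out to be the right path, the mark-as-repaired step is a single monitor command; a rough sketch assuming rank 0 is the damaged rank (the exact argument form varies slightly between releases):

    # Clear the "damaged" flag on rank 0 so an MDS can take the rank again.
    ceph mds repaired 0
    ceph -s    # the MDS should move through replay/rejoin if the journal is usable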

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Gregory Farnum
On Fri, May 27, 2016 at 2:22 PM, Stillwell, Bryan J wrote: > Here's the full 'ceph -s' output: > > # ceph -s > cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9 > health HEALTH_ERR > mds rank 0 is damaged > mds cluster is degraded > monmap e5: 3 mons at > {b3=172.2

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Stillwell, Bryan J
Here's the full 'ceph -s' output:

# ceph -s
    cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9
     health HEALTH_ERR
            mds rank 0 is damaged
            mds cluster is degraded
     monmap e5: 3 mons at {b3=172.24.88.53:6789/0,b4=172.24.88.54:6789/0,lira=172.24.88.20:6789/0}
            e

Re: [ceph-users] Rebuilding/recreating CephFS journal?

2016-05-27 Thread Stillwell, Bryan J
On 5/27/16, 3:23 PM, "Gregory Farnum" wrote: >On Fri, May 27, 2016 at 2:22 PM, Stillwell, Bryan J > wrote: >> Here's the full 'ceph -s' output: >> >> # ceph -s >> cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9 >> health HEALTH_ERR >> mds rank 0 is damaged >> mds clu