On Fri, May 27, 2016 at 4:46 AM, wrote:
> what does the pull request label "cleanup" mean?
It's used when something is not exactly a bug fix (the existing code
works) and not a feature (doesn't add new functionality), but improves
the code in some other way, usually by removing something unneeded or
m
I started using Xenial... does everyone see this error?:
W: http://ceph.com/debian-hammer/dists/xenial/InRelease: Signature by
key 08B73419AC32B4E966C1A330E84AC2C0460F3994 uses weak digest
algorithm (SHA1)
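(For reference, a quick way to see which entry triggers it, and how the
release key is normally imported -- the URLs below are the standard Ceph
ones, so treat them as assumptions and check against the docs:)

$ grep -r ceph /etc/apt/sources.list /etc/apt/sources.list.d/
$ wget -qO- https://download.ceph.com/keys/release.asc | sudo apt-key add -

(The warning itself comes from the repository metadata being signed with a
SHA1 digest, so re-importing the key alone probably won't silence it.)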
Saverio
Hi Ernst,
Here is what I've got:
$ cat /etc/udev/rules.d/55-ceph-journals.rules
#
# JOURNAL_UUID
# match for the Intel SSD model INTEL SSDSC2BA20
#
ACTION=="add|change", KERNEL=="sd??", ATTRS{model}=="INTEL SSDSC2BA20",
OWNER="ceph", GROUP="ceph", MODE="660"
So, it looks as if all /dev/sd
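(In case it helps, a rough way to check that the rule actually applies --
sdb below is just an example device:)

$ sudo udevadm control --reload-rules
$ sudo udevadm trigger --subsystem-match=block --action=add
$ ls -l /dev/sdb*
$ sudo udevadm test /sys/block/sdb 2>&1 | grep -i ceph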
Hi,
--
First, let me start with the bonus...
I migrated from hammer => jewel and followed the migration instructions... but
the migration instructions are missing this:
#chown -R ceph:ceph /var/log/ceph
I just discovered this was the reason I found no logs anywhere about my current
issue :/
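(For completeness, a minimal sketch of the full permission fix, assuming
systemd-managed daemons that are stopped first -- the /var/lib/ceph part is
the one the upstream notes do cover:)

$ sudo systemctl stop ceph.target
$ sudo chown -R ceph:ceph /var/lib/ceph /var/log/ceph
$ sudo systemctl start ceph.target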
--
This
I have a Ceph cluster at home that I've been running CephFS on for the
last few years. Recently my MDS server became damaged and while
attempting to fix it I believe I've destroyed my CephFS journal based off
this:
2016-05-25 16:48:23.882095 7f8d2fac2700 -1 log_channel(cluster) log [ERR]
: Error
Well, it's not supposed to do that if the backing storage is working
properly. If the filesystem/disk controller/disk combination is not
respecting barriers (or otherwise can lose committed data in a power
failure) in your configuration, a power failure could cause a node to
go backwards in time -
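(A rough way to check the pieces involved, if that's a suspicion -- the
device and mount point below are placeholders:)

$ sudo hdparm -W /dev/sdb
$ mount | grep /var/lib/ceph

(hdparm -W with no value just reports whether the drive write cache is on;
the mount line is where a nobarrier option would show up.)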
On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J
wrote:
> I have a Ceph cluster at home that I've been running CephFS on for the
> last few years. Recently my MDS server became damaged and while
> attempting to fix it I believe I've destroyed my CephFS journal based off
> this:
>
> 2016-05-25
Hi,
If anyone has some insight or comments on the question:
Q) Flatten with IO activity
For example I have a clone chain:
IMAGE(PARENT)
image1(-)
image2(image1@snap0)
image2 is mapped, mounted and has some IO activity.
How safe is it to flatten image2 if it has ongoing IO?
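(In case a concrete sequence helps, this is roughly what I mean -- the pool
name "rbd" is just an example:)

$ rbd children rbd/image1@snap0      # image2 should be listed here
$ rbd flatten rbd/image2
$ rbd info rbd/image2 | grep parent  # no parent line once flattening is done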
thanks.
Hi Heath,
My OSDs do the exact same thing - consume lots of RAM when the cluster is
reshuffling OSDs.
Try
ceph tell osd.* heap release
as a cron job.
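(Something like this in root's crontab, e.g. hourly -- the schedule is just
an example, and osd.* is quoted so the shell doesn't glob it:)

0 * * * * /usr/bin/ceph tell 'osd.*' heap release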
Here's a bug:
http://tracker.ceph.com/issues/12681
Chad
On Fri, May 27, 2016 at 8:51 PM, Max Yehorov wrote:
> Hi,
> If anyone has some insight or comments on the question:
>
> Q) Flatten with IO activity
> For example I have a clone chain:
>
> IMAGE(PARENT)
> image1(-)
> image2(image1@snap0)
>
> image2 is mapped, mounted and has some IO activity.
>
On 5/27/16, 11:27 AM, "Gregory Farnum" wrote:
>On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J
> wrote:
>> I have a Ceph cluster at home that I've been running CephFS on for the
>> last few years. Recently my MDS server became damaged and while
>> attempting to fix it I believe I've destroye
On Fri, May 27, 2016 at 1:54 PM, Stillwell, Bryan J
wrote:
> On 5/27/16, 11:27 AM, "Gregory Farnum" wrote:
>
>>On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J
>> wrote:
>>> I have a Ceph cluster at home that I've been running CephFS on for the
>>> last few years. Recently my MDS server becam
On 5/27/16, 3:01 PM, "Gregory Farnum" wrote:
>>
>> So would the next steps be to run the following commands?:
>>
>> cephfs-table-tool 0 reset session
>> cephfs-table-tool 0 reset snap
>> cephfs-table-tool 0 reset inode
>> cephfs-journal-tool --rank=0 journal reset
>> cephfs-data-scan init
>>
>> c
What's the current full output of "ceph -s"?
If you already had your MDS in damaged state, you might just need to
mark it as repaired. That's a monitor command.
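(Assuming it's rank 0 that's shown as damaged, that would be something
along the lines of:)

$ ceph mds repaired 0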
On Fri, May 27, 2016 at 2:09 PM, Stillwell, Bryan J
wrote:
> On 5/27/16, 3:01 PM, "Gregory Farnum" wrote:
>
>>>
>>> So would the next
On Fri, May 27, 2016 at 2:22 PM, Stillwell, Bryan J
wrote:
> Here's the full 'ceph -s' output:
>
> # ceph -s
> cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9
> health HEALTH_ERR
> mds rank 0 is damaged
> mds cluster is degraded
> monmap e5: 3 mons at
> {b3=172.2
Here's the full 'ceph -s' output:
# ceph -s
cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9
health HEALTH_ERR
mds rank 0 is damaged
mds cluster is degraded
monmap e5: 3 mons at
{b3=172.24.88.53:6789/0,b4=172.24.88.54:6789/0,lira=172.24.88.20:6789/0}
e
On 5/27/16, 3:23 PM, "Gregory Farnum" wrote:
>On Fri, May 27, 2016 at 2:22 PM, Stillwell, Bryan J
> wrote:
>> Here's the full 'ceph -s' output:
>>
>> # ceph -s
>> cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9
>> health HEALTH_ERR
>> mds rank 0 is damaged
>> mds clu