Hello,
we plan to upgrade from Luminous to Nautilus.
Does it make sense to do the Mimic step instead of going directly to
Nautilus?
br
wolfgang
AFAIK you can migrate from 12 to 14 directly; this is supported IIRC.
I will do that in a few months on my Ceph cluster.
HTH
Mehmet
On 12 February 2020 09:19:53 CET, Wolfgang Lendl wrote:
>Hello,
>
>we plan to upgrade from Luminous to Nautilus.
>Does it make sense to do the Mimic step
Hi,
we also skipped Mimic when upgrading from L --> N and it worked fine.
Quoting c...@elchaka.de:
AFAIK you can migrate from 12 to 14 directly; this is supported IIRC.
I will do that in a few months on my Ceph cluster.
HTH
Mehmet
On 12 February 2020 09:19:53 CET, Wolfgang Lendl wrote:
We went from Luminous to Nautilus, skipping Mimic.
This is supported and documented.
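For reference, the documented order is roughly: monitors first, then managers,
then OSDs host by host, and finally MDS/RGW daemons. A rough sketch only (the
exact steps are in the Nautilus release notes; adjust to your deployment):

   ceph osd set noout
   # upgrade packages and restart ceph-mon on all monitors,
   # then ceph-mgr, then ceph-osd host by host, then MDS/RGW
   ceph mon enable-msgr2
   ceph osd require-osd-release nautilus
   ceph osd unset noout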
On Wed, Feb 12, 2020 at 9:30 AM Eugen Block wrote:
> Hi,
>
> we also skipped Mimic when upgrading from L --> N and it worked fine.
>
>
> Quoting c...@elchaka.de:
>
> > AFAIK you can migrate from 12 to 14 directly; this is
worked fine for us as well
D.
On 2020-02-12 09:33, Massimo Sgaravatto wrote:
> We went from Luminous to Nautilus, skipping Mimic.
> This is supported and documented.
>
> On Wed, Feb 12, 2020 at 9:30 AM Eugen Block wrote:
>
>> Hi,
>>
>> we also skipped Mimic when upgrading from L --> N and it
Hi all,
I'm helping Luca with this a bit and we made some progress.
We currently have an MDS starting and we're able to see the files.
But when browsing the filesystem we get a lot of "loaded dup inode"
warnings, e.g.:
2020-02-12 08:47:44.546063 mds.ceph-mon-01 [ERR] loaded dup inode
0x100
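If it comes to repairing the metadata, one option is a recursive forward scrub
via the MDS admin socket. Just a sketch, using the MDS name from the log line
above; the option syntax varies a bit between releases, so please check the
CephFS disaster-recovery docs for your version before running it with repair:

   ceph daemon mds.ceph-mon-01 scrub_path / recursive repair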
On 2/11/20 2:53 PM, Marc Roos wrote:
>
> Say I think my cephfs is slow when I rsync to it, slower than it used to
> be. First of all, I do not get why it reads so much data. I assume the
> file attributes need to come from the mds server, so the rsync backup
> should mostly cause writes not?
>
>
> The problem is that rsync creates and renames files a lot. When doing
> this with small files it can be very heavy for the MDS.
>
Perhaps run rsync with --inplace to prevent it from creating partial files
as a temp entity named .dfg45terf.~tmp~ and then renaming it into the
correct filename.
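Something along these lines (just a sketch; paths are placeholders):

   rsync -a --inplace /data/source/ /mnt/cephfs/backup/

Keep in mind that with --inplace an interrupted transfer can leave a partially
updated file behind, since there is no temp file + rename step.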
>
>> Say I think my cephfs is slow when I rsync to it, slower than it used
>> to be. First of all, I do not get why it reads so much data. I assume
>> the file attributes need to come from the mds server, so the rsync
>> backup should mostly cause writes not?
>
> Are you run
Hi Muhammad,
Yes, that tool helps! Thank you for pointing it out!
With a combination of openSeaChest_Info and smartctl I was able to
extract the following stats of our cluster, and the numbers are very
surprising to me. I hope someone here can explain what we see below:
node1 AnnualWr
On 2/12/20 11:23 AM, mj wrote:
Better layout for the disk usage stats:
https://pastebin.com/8V5VDXNt
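For anyone who wants to pull similar numbers from their own drives, a loop like
this is one way to do it (just a sketch; attribute names differ between
SATA/SAS/NVMe, and openSeaChest_Info has its own output format):

   for d in /dev/sd?; do
     echo "== $d =="
     smartctl -x "$d" | grep -iE 'Logical Sectors Written|Percentage Used|Power.On Hours'
   done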
Hi all,
I have an issue on my Ceph cluster.
For one of my pools I have 107TiB STORED and 298TiB USED.
This is strange, since I've configured erasure coding (6 data chunks, 3
coding chunks).
So, in an ideal world this should result in approx. 160.5TiB USED.
The question now is why this is the case
I just found an interesting thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024589.html
I assume this is the case I’m dealing with.
The question is, can I safely adapt the bluestore_min_alloc_size_hdd
parameter, and how will the system react? Is this backwards compatible?
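From what I've read so far, bluestore_min_alloc_size_hdd is only applied when
an OSD is created (mkfs); existing OSDs keep the value they were created with,
so changing it effectively means redeploying OSDs one by one. The currently
configured value can be checked via the admin socket, e.g.:

   ceph daemon osd.0 config get bluestore_min_alloc_size_hdd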
Hi,
we sometimes lose access to our cephfs mount and get permission denied
if we try to cd into it. This happens apparently only on some of our HPC
cephfs-client nodes (fs mounted via kernel client) when they are busy
with calculation and I/O.
When we then manually force unmount the fs and remount
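One thing that may be worth checking (sketch below; <name> is the active MDS
and the daemon commands run on the MDS host) is whether the affected clients
were evicted/blacklisted by the MDS, since an evicted kernel client typically
sees permission denied until the fs is remounted:

   ceph daemon mds.<name> session ls    # sessions known to the MDS
   ceph osd blacklist ls                # evicted clients end up here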
On Wed, 12 Feb 2020 at 12:58, Kristof Coucke wrote:
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3
> coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
>
> There are 473+M obje
Kristof Coucke writes:
> I have an issue on my Ceph cluster.
> For one of my pools I have 107TiB STORED and 298TiB USED.
> This is strange, since I've configured erasure coding (6 data chunks, 3
> coding chunks).
> So, in an ideal world this should result in approx. 160.5TiB USED.
> The question n
Hi Simon and Janne,
Thanks for the reply.
It seems indeed related to the bluestore_min_alloc_size.
In an old thread I've also found the following:
S3 object saving pipeline:
- S3 object is divided into multipart shards by client.
- RGW shards each multipart shard into rados objects of siz
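To make the arithmetic concrete (rough numbers, assuming the default
bluestore_min_alloc_size_hdd of 64 KiB): with k=6/m=3 every rados object is cut
into 6 data chunks plus 3 coding chunks, and each chunk is rounded up to the
allocation size on its OSD. A 4 MiB rados object gives 6 chunks of ~683 KiB, so
the rounding loss is negligible, but a small tail object of, say, 64 KiB gives
6 chunks of ~11 KiB that each still occupy 64 KiB on disk: 9 x 64 KiB = 576 KiB
used for 64 KiB of data, i.e. 9x amplification instead of the nominal 1.5x.
With 473+M objects, a large share of small tails can easily push USED from the
ideal ~160 TiB towards the observed ~300 TiB.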
We have been using RadosGW with Keystone integration for a couple of
years, to allow users of our OpenStack-based IaaS to create their own
credentials for our object store. This has caused us a fair amount of
performance headaches.
Last year, James Weaver (BBC) contributed a patch (PR #26095
On Wed, Feb 12, 2020 at 6:08 PM Marc Roos wrote:
>
> >> Say I think my cephfs is slow when I rsync to it, slower than it used
> >> to be. First of all, I do not get why it reads so much data. I assume
> >> the file attributes need to come from the mds server, so the rsync
>
Dear Cephalopodians,
for those on the list also fighting rbd mirror process instabilities: With
14.2.7 (but maybe it was also present before, it does not happen often),
I very rarely encounter a case where none of the two described hacks I use is working
anymore, since "ceph daemon /var/run/cep
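(As a cross-check, "rbd mirror pool status <pool> --verbose" (<pool> being the
mirrored pool) queries the cluster rather than the local admin socket, so it
still shows overall mirroring health and per-image replay state even when the
socket is unresponsive, which at least tells you whether the process is still
doing anything.)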
>> >> Say I think my cephfs is slow when I rsync to it, slower than it
>> >> used to be. First of all, I do not get why it reads so much data.
>> >> I assume the file attributes need to come from the mds server, so
>> >> the rsync backup should mostly cause writes not?
On Wed, Feb 12, 2020 at 11:55 AM Oliver Freyermuth wrote:
>
> Dear Cephalopodians,
>
> for those on the list also fighting rbd mirror process instabilities: With
> 14.2.7 (but maybe it was also present before, it does not happen often),
> I very rarely encounter a case where neither of the two descr
Dear Jason,
On 12.02.20 at 19:29, Jason Dillaman wrote:
> On Wed, Feb 12, 2020 at 11:55 AM Oliver Freyermuth wrote:
>>
>> Dear Cephalopodians,
>>
>> for those on the list also fighting rbd mirror process instabilities: With
>> 14.2.7 (but maybe it was also present before, it does not happen o
On Wed, Feb 12, 2020 at 2:53 PM Oliver Freyermuth wrote:
>
> Dear Jason,
>
> On 12.02.20 at 19:29, Jason Dillaman wrote:
> > On Wed, Feb 12, 2020 at 11:55 AM Oliver Freyermuth wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> for those on the list also fighting rbd mirror process instabilities
Hi,
now we got a kernel crash (Oops), probably related to my issue, since
it all seems to start with a hung mds (see attached dmesg from crashed
client and mds log from mds server):
[281202.923064] Oops: 0002 [#1] SMP
[281202.924952] Modules linked in: fuse xt_multiport squashfs loop
overlay(T)
Hello All,
We see that on one of the Ceph data nodes all OSDs are 90-100% disk utilized;
they are all SSD drives and traffic is normal compared to the other data nodes.
How can we debug it?
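Some generic starting points to narrow it down (just a sketch; <id> is one of
the busy OSDs on that node):

   ceph osd perf                            # commit/apply latency per OSD
   iostat -x 1                              # device-level utilisation on the node
   ceph daemon osd.<id> dump_ops_in_flight  # what the busy OSD is working on
   ceph daemon osd.<id> dump_historic_ops   # slowest recent ops

If only that node is affected, also compare PG distribution (ceph osd df tree)
and check for deep-scrubs, recovery, or a client hammering objects whose PGs
are primarily on that node.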