connections coming from qemu vm clients.
It's generally easy to upgrade. Just switch your Ceph yum repo from
jewel to luminous.
Then update `librbd` on your hypervisors and migrate your VMs. It's
fast and involves no downtime for your VMs.
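For illustration only, the repo switch and client update might look roughly like
this on an EL7 hypervisor (the repo file path and the VM/host names below are
assumptions, not taken from this thread):

  # point the Ceph repo at luminous instead of jewel (assumed repo file path)
  sed -i 's/rpm-jewel/rpm-luminous/' /etc/yum.repos.d/ceph.repo
  yum clean all && yum update librbd1
  # live-migrate each VM so QEMU reopens its images with the new librbd
  # (hypothetical domain and destination host)
  virsh migrate --live vm01 qemu+ssh://other-hypervisor/system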
k
Hi cephers,
I tried two expansion tests with BlueStore.
The first test expands the db LV of an OSD that has separate block, db, and
wal devices; after running ceph-bluestore-tool bluefs-bdev-expand --path XX, it
works well. perf dump shows the correct db size.
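For reference, a sketch of what that first test amounts to, assuming a
hypothetical OSD 0 with its db on an LVM volume (VG/LV names, size, and paths
are placeholders):

  # stop the OSD, grow the db LV, tell BlueFS about the new size, restart
  systemctl stop ceph-osd@0
  lvextend -L +20G /dev/ceph-db-vg/osd-0-db
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
  systemctl start ceph-osd@0
  # check the reported db size
  ceph daemon osd.0 perf dump | grep db_total_bytes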
The second test expands the block LV of an OSD that does not have separ
I followed https://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-mons/
to add a MON for 109.x.
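The manual procedure on that page boils down to roughly the following (the mon
id "mon-c", the temp paths, and the 109.x address are placeholders):

  # on the new monitor host
  mkdir -p /var/lib/ceph/mon/ceph-mon-c
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i mon-c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  # start it on the new public address
  ceph-mon -i mon-c --public-addr 109.0.0.3:6789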
On Fri, Oct 25, 2019 at 11:27 AM luckydog xf wrote:
> Hi, list,
>
> Currently my Ceph cluster has 3 MONs and 9 OSDs, and everything is fine. Now
> I plan to add one more public network; the initial
Hi, list,
Currently my Ceph cluster has 3 MONs and 9 OSDs, and everything is fine. Now
I plan to add one more public network; the initial public network is
103.x/24, and the target network is 109.x/24. 103 cannot reach 109, as
I haven't configured routes between them.
I added 109.x for the 3 MON node
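For what it's worth, a second public network is normally declared as a
comma-separated list in ceph.conf. A minimal sketch with placeholder prefixes
(the real 103.x/109.x subnets are abbreviated in this thread) would be:

  [global]
  # both subnets listed as public networks
  public network = 103.0.0.0/24, 109.0.0.0/24

  [mon.a]
  # each MON still binds to a single address
  public addr = 109.0.0.11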
Hi Konstantin,
connections coming from qemu vm clients.
Best regards
Sent: Thursday, 24 October 2019 at 09:47
From: "Konstantin Shalygin"
To: ceph-users@lists.ceph.com, "Jan Peters"
Subject: Re: [ceph-users] ceph balancer do not start
Hi,
ceph features
{
"mon": {
"
It's not clear to me what the problem is. Please try increasing the
debugging on your MDS and share a snippet (privately to me if you
wish). Other information would also be helpful, like `ceph status` output and
what kind of workloads these clients are running.
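If it helps, raising the MDS debug level and collecting the basics can be done
along these lines (the daemon name "mds.a" is a placeholder):

  # bump MDS logging via the admin socket, then gather context
  ceph daemon mds.a config set debug_mds 20
  ceph daemon mds.a config set debug_ms 1
  ceph status
  ceph fs status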
On Fri, Oct 18, 2019 at 7:22 PM Lei Liu wrote:
Hi Neale,
On Fri, Oct 18, 2019 at 9:31 AM Pickett, Neale T wrote:
>
> Last week I asked about a rogue inode that was causing ceph-mds to segfault
> during replay. We didn't get any suggestions from this list, so we have been
> familiarizing ourselves with the ceph source code, and have added th
On Thu, Oct 24, 2019 at 5:45 PM Paul Emmerich wrote:
>
> Could it be related to the broken backport as described in
> https://tracker.ceph.com/issues/40102 ?
>
> (It did affect 4.19, not sure about 5.0)
It does, I have just updated the linked ticket to reflect that.
Thanks,
Ilya
Also, search for this topic on the list. Ubuntu Disco with the most recent
kernel 5.0.0-32 seems to be unstable.
On Thu, Oct 24, 2019 at 10:45 AM Paul Emmerich
wrote:
> Could it be related to the broken backport as described in
> https://tracker.ceph.com/issues/40102 ?
>
> (It did affect 4.19, not
Could it be related to the broken backport as described in
https://tracker.ceph.com/issues/40102 ?
(It did affect 4.19, not sure about 5.0)
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel
Hello all,
I've been using the Ceph kernel modules in Ubuntu to load a CephFS filesystem
quite successfully for several months. Yesterday, I went through a round of
updates on my Ubuntu 18.04 machines, which loaded linux-image-5.0.0-32-generic
as the kernel. I'm noticing that while the kernel
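For context, the mount in question is the plain kernel-client form below; the
monitor address, mount point, and secret file are placeholders, not from the
original post:

  # mount CephFS with the kernel client
  sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret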
There are plenty of posts on this list. Please search a bit. Example threads
are:
What's the best practice for Erasure Coding
large concurrent rbd operations block for over 15 mins!
Can't create erasure coded pools with k+m greater than hosts?
And many more. As you will see there, k=2, m=1 is bad
On 2019-10-24 09:46, Janne Johansson wrote:
(Slightly abbreviated)
On Thu, 24 Oct 2019 at 09:24, Frank Schilder <fr...@dtu.dk> wrote:
What I learned is the following:
1) Avoid this work-around for having too few hosts for the EC rule at all cost.
2) Do not use EC 2+1. It does not offe
Hi,
ceph features
{
    "mon": {
        "group": {
            "features": "0x3ffddff8eeacfffb",
            "release": "luminous",
            "num": 3
        }
    },
    "osd": {
        "group": {
            "features": "0x3ffddff8eeacfffb",
            "release": "luminous",
(Slightly abbreviated)
On Thu, 24 Oct 2019 at 09:24, Frank Schilder wrote:
> What I learned is the following:
>
> 1) Avoid this work-around for having too few hosts for the EC rule at all cost.
>
> 2) Do not use EC 2+1. It does not offer anything interesting for
> production. Use 4+2 (or 8+2, 8+3 if you hav
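As an illustration of the 4+2 recommendation, a profile and pool could be
created like this (the profile name, pool name, and PG count are assumptions):

  # EC profile with k=4, m=2 and host as the failure domain
  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec-4-2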
I was thinking of going to the Polish one, but I will be tempted to go
to the London one, if you will also be wearing that kilt. ;D
-Original Message-
From: John Hearns [mailto:hear...@googlemail.com]
Sent: Thursday, 24 October 2019 8:14
To: ceph-users
Subject: [ceph-users] Cloudstack and
I have some experience with an EC set-up with 2 shards per host, failure domain
host, and also some multi-site wishful thinking from users. What I learned is
the following:
1) Avoid this work-around for having too few hosts for the EC rule at all cost. There are two
types of resiliency in Ceph. One is again