Thank you for your reply.
So, I would like to verify this problem. I created a new VM as a
client; its kernel version is:
[root@localhost ~]# uname -a
Linux localhost.localdomain 5.2.9-200.fc30.x86_64 #1 SMP Fri Aug 16
21:37:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
First of all, I used the command: ceph features
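For reference, a minimal way to verify this from the cluster side, once the new client has mapped an RBD image or mounted CephFS (a sketch; the mon name is a placeholder):

# the "client" group should now contain an entry reported as luminous or newer
ceph features
# to tie a specific feature mask to a client address, inspect the mon sessions
ceph daemon mon.node-1 sessions | grep -v luminous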
On 17.09.2019 06:54, Ashley Merrick wrote:
Have just noticed there are packages available for 14.2.4..
I know with the 14.2.3 release the notes didn't go out until a
good day or so later.. but this is not long after the 14.2.3 release..?
Was this release even meant to have come out? Mak
Hi,
I have defined pool hdd which is exclusively used by virtual disks of
multiple KVMs / LXCs.
Yesterday I ran these commands:
osdmaptool om --upmap out.txt --upmap-pool hdd
source out.txt
and Ceph started rebalancing this pool.
However, since then no KVM / LXC is responding anymore.
If I try to sta
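For context, the usual upmap workflow behind those two commands looks roughly like this (a sketch; the file names match the ones above, and the getmap step is the one implied before them):

# dump the current osdmap to a file
ceph osd getmap -o om
# compute pg-upmap-items entries that balance only the hdd pool
osdmaptool om --upmap out.txt --upmap-pool hdd
# out.txt contains "ceph osd pg-upmap-items ..." commands; review it, then apply
source out.txt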
Have just noticed there are packages available for 14.2.4..
I know with the 14.2.3 release the notes didn't go out until a good day
or so later.. but this is not long after the 14.2.3 release..?
Was this release even meant to have come out? Makes it difficult for people
installing a new n
Please send me the crash log.
On Tue, Sep 17, 2019 at 12:56 AM Guilherme Geronimo
wrote:
>
> Thank you, Yan.
>
> It took about 10 minutes to execute scan_links.
> I believe the number of Lost+Found entries decreased by about 60%, but the
> rest of them are still causing the MDS to crash.
>
> Any other suggestion?
>
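If it helps, a hedged sketch of how that crash information is usually gathered (assuming default log paths; the crash id is a placeholder):

# backtrace from the active MDS log
grep -A 30 -E 'Caught signal|FAILED (ceph_)?assert' /var/log/ceph/ceph-mds.*.log
# on Nautilus the crash module also keeps reports
ceph crash ls
ceph crash info <crash-id>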
Dear all,
Can you show me the steps to integrate Ceph object metadata with
Elasticsearch to improve metadata search performance?
Thank you very much.
-
Br,
Dương Tuấn Dũng
0986153686
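Not a complete answer, but the metadata search integration goes through the RGW elasticsearch sync module, which is set up as an extra zone with tier type elasticsearch; a rough sketch along the lines of the upstream docs (zone name, endpoints and shard counts below are placeholders):

# add a metadata-search zone to the existing zonegroup
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=es-zone \
    --endpoints=http://rgw-es-host:8002 \
    --tier-type=elasticsearch \
    --tier-config=endpoint=http://elastic-host:9200,num_shards=10,num_replicas=1
radosgw-admin period update --commit
# then run a radosgw instance for that zone; searches go through the RGW
# metadata search API (roughly GET /{bucket}?query=...) rather than Elasticsearch directly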
I have mimic installed and for some reason the dashboard isn't showing up.
I see which mon is listed as active for "mgr", the module is enabled, but
nothing is listening on port 8080:
# ceph mgr module ls
{
    "enabled_modules": [
        "dashboard",
        "iostat",
        "status"
tcp
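In case the module is simply binding elsewhere, a sketch of the usual checks (option names as in the Mimic dashboard docs; the bind address is a placeholder):

# where does the active mgr think the dashboard is served?
ceph mgr services
# set an explicit bind address/port and reload the module
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/server_port 8080
ceph mgr module disable dashboard
ceph mgr module enable dashboard
# note: Mimic serves the dashboard over SSL on port 8443 by default unless
# mgr/dashboard/ssl is set to false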
unsubscribe
Thank you, Yan.
It took about 10 minutes to execute scan_links.
I believe the number of Lost+Found entries decreased by about 60%, but the
rest of them are still causing the MDS to crash.
Any other suggestion?
=D
[]'s
Arthur (aKa Guilherme Geronimo)
On 10/09/2019 23:51, Yan, Zheng wrote:
On Wed, Sep 4,
Hi Robert,
So far the cloud tiering features are still in the design stages. We're
working on some initial refactoring work to support this abstraction
(i.e. to either satisfy a request against the local RADOS cluster, or to
proxy it somewhere else). With respect to passthrough/tiering to AWS,
bump. anyone?
On Mon, Sep 16, 2019 at 5:10 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Wonderbra.
>
> I found some relevant sessions on 2 of 3 monitor nodes.
> And I found some others:
> root@ld5505:~# ceph daemon mon.ld5505 sessions | grep 0x40106b84a842a42
> root@ld5505:~# ceph daemon mon.ld5505 sessio
Wonderbra.
I found some relevant sessions on 2 of 3 monitor nodes.
And I found some others:
root@ld5505:~# ceph daemon mon.ld5505 sessions | grep 0x40106b84a842a42
root@ld5505:~# ceph daemon mon.ld5505 sessions | grep -v luminous
[
"MonSession(client.32679861 v1:10.97.206.92:0/1183647891 is op
On Mon, Sep 16, 2019 at 4:40 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hi,
>
> thanks for your valuable input.
>
> Question:
> Can I get more information about the 6 clients (those with features
> 0x40106b84a842a42), e.g. their IPs, that would allow me to identify them easily?
Yes, although it's not inte
Hi,
thanks for your valuable input.
Question:
Can I get more information about the 6 clients (those with features
0x40106b84a842a42), e.g. their IPs, that would allow me to identify them easily?
Regards
Thomas
Am 16.09.2019 um 15:56 schrieb Paul Emmerich:
> Bit 21 in the features bitfield is upmap support
>
>
Bit 21 in the features bitfield is upmap support
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon, Sep 16, 2019 at 3:21 PM Ilya Dryomov wrote:
>
> On Mon, Sep 16
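As a quick sanity check on the bit-21 remark above, the two feature masks seen in this thread can be tested directly (a sketch):

# the jewel-looking clients: bit 21 not set
python3 -c 'print(hex(0x40106b84a842a42 & (1 << 21)))'   # prints 0x0
# the mon/osd group from "ceph features": bit 21 set
python3 -c 'print(hex(0x3ffddff8eeacfffb & (1 << 21)))'  # prints 0x200000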
On Mon, Sep 16, 2019 at 2:24 PM 潘东元 wrote:
>
> hi,
> my Ceph cluster version is Luminous, running kernel version Linux 3.10
> [root@node-1 ~]# ceph features
> {
>     "mon": {
>         "group": {
>             "features": "0x3ffddff8eeacfffb",
>             "release": "luminous",
>
On Mon, Sep 16, 2019 at 2:20 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hello,
>
> the current kernel with SLES 12SP3 is:
> ld3195:~ # uname -r
> 4.4.176-94.88-default
>
>
> Assuming that this kernel does not support upmap, do you recommend
> using balancer mode crush-compat then?
Hi Tho
Hi
on 2019/9/16 20:19, 潘东元 wrote:
my Ceph cluster version is Luminous, running kernel version Linux 3.10
Please refer to this page:
https://docs.ceph.com/docs/master/start/os-recommendations/
See the [LUMINOUS] section.
Regards.
hi,
my Ceph cluster version is Luminous, running kernel version Linux 3.10
[root@node-1 ~]# ceph features
{
    "mon": {
        "group": {
            "features": "0x3ffddff8eeacfffb",
            "release": "luminous",
            "num": 3
        }
    },
    "osd": {
        "group": {
Hello,
the current kernel with SLES 12SP3 is:
ld3195:~ # uname -r
4.4.176-94.88-default
Assuming that this kernel does not support upmap, do you recommend
using balancer mode crush-compat then?
Regards
Thomas
Am 16.09.2019 um 11:11 schrieb Oliver Freyermuth:
> Am 16.09.19 um 11:06 schrieb Ko
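If the clients really cannot do upmap, switching the balancer over is roughly (a sketch; assumes the mgr balancer module shipped with Luminous and later):

ceph mgr module enable balancer
ceph balancer mode crush-compat
ceph balancer on
ceph balancer status    # shows the mode and whether a plan is being executed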
Hi Robert,
As long as you triple-check permissions on the cache tier (should be the
same as your actual storage pool) you should be fine.
In our setup I have applied this a few times. The first time, I assumed
permissions would be inherited or not applicable, but IOPS get
redirected towards
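Concretely, that means the client's caps have to name the cache pool as well as the backing pool; a hedged sketch (the client and pool names here are made up):

# inspect the current caps first; 'ceph auth caps' replaces them completely
ceph auth get client.cinder
ceph auth caps client.cinder mon 'allow r' \
    osd 'allow rwx pool=vms, allow rwx pool=vms-cache'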
That would be the case for me. I think all the data from one day would fit into
the cache, and we could slowly flush it back overnight (or even over the
weekend). But my impression is that I would have to test it. So my initial
question: Do I have to stop all VMs before activating the cache? And restart
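For the slow flush part, the relevant knobs are the cache pool's dirty/full targets, and a flush can also be forced manually; a sketch with a placeholder cache pool name:

# make the tiering agent start flushing earlier
ceph osd pool set vms-cache cache_target_dirty_ratio 0.2
# or force a full flush/evict, e.g. from a nightly cron job
rados -p vms-cache cache-flush-evict-all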
Hi!
I created bug https://tracker.ceph.com/issues/41832
Has anyone else encountered this problem?
WBR,
Fyodor.
Hi!
Cache tiering is a great solution if the cache size is larger than the hot
data. Even better if the data can cool quietly in the cache. Otherwise, it’s
really better not to do this.
- Original Message -
> From: "Wido den Hollander"
> To: "Eikermann, Robert" , ceph-users@ceph.io
> S
I hope the data you're running on the Ceph cluster isn't important if you're looking to
run a cache tier with just 2 SSDs / replication of 2.
If your cache tier fails, you basically corrupt most of the data on the pool below.
Also, as Wido said, as much as you may get it to work, I don't think it will
give you
We have terrible I/O performance when multiple VMs do file I/O, mainly
Java compilation on those servers. With 2 parallel jobs everything is
fine, but with 10 jobs we see the warning "HEALTH_WARN X requests are blocked
> 32 sec; Y osds have slow requests". I have two enterprise
On 9/16/19 11:36 AM, Eikermann, Robert wrote:
>
> Hi,
>
>
>
> I’m using Ceph in combination with OpenStack. For the “VMs” pool I’d
> like to enable a writeback cache tier, as described here:
> https://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/.
>
>
>
Can you explain why? The
Hello
on 2019/9/16 17:36, Eikermann, Robert wrote:
Is it supposed to be possible to do that on a running pool? I tried to do so and
immediately all VMs (Ubuntu Linux) running on Ceph disks got read-only
filesystems. No errors were shown in Ceph (but also no traffic arrived
after enabling the cache t
Hi,
The CFP is ending today for the Ceph Day London on October 24th.
If you have a talk you would like to submit, please follow the link below!
Wido
On 7/18/19 3:43 PM, Wido den Hollander wrote:
> Hi,
>
> We will be having Ceph Day London October 24th!
>
> https://ceph.com/cephdays/ceph-day-lon
Have you checked that the user/keys your VMs connect with have access
rights to the cache pool?
On Mon, 16 Sep 2019 17:36:38 +0800 Eikermann, Robert
wrote
Hi,
I’m using Ceph in combination with OpenStack. For the “VMs” pool I’d like to
enable writeback caching
Hi,
I'm using Ceph in combination with OpenStack. For the "VMs" pool I'd like to
enable a writeback cache tier, as described here:
https://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/ .
Is it supposed to be possible to do that on a running pool? I tried to do so and
immediately all VM
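For reference, the steps from that doc boil down to roughly the following (a sketch; "vms" and "vms-cache" are placeholder pool names and the hit_set/target values are only examples):

ceph osd tier add vms vms-cache
ceph osd tier cache-mode vms-cache writeback
ceph osd tier set-overlay vms vms-cache
# the cache pool needs hit_set parameters and sizing limits to behave sanely
ceph osd pool set vms-cache hit_set_type bloom
ceph osd pool set vms-cache hit_set_count 12
ceph osd pool set vms-cache hit_set_period 3600
ceph osd pool set vms-cache target_max_bytes 500000000000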
Am 16.09.19 um 11:06 schrieb Konstantin Shalygin:
On 9/16/19 3:59 PM, Thomas wrote:
I tried to run this command, but it failed:
root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 6
connected client(s) look like jewel
On 9/16/19 3:59 PM, Thomas wrote:
I tried to run this command, but it failed:
root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 6
connected client(s) look like jewel (missing 0xa20); 19
connected client(s
Hi,
I tried to run this command, but it failed:
root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 6
connected client(s) look like jewel (missing 0xa20); 19
connected client(s) look like jewel (missing 0x800
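For completeness: the check can be overridden, but only if you are sure the remaining jewel-feature clients can either handle upmap anyway or can be dropped; a sketch:

# after this, clients that genuinely lack the luminous feature bits may be
# refused or misbehave once pg-upmap entries are in use
ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it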