limit the
issues. ***
Can anyone tell me what to do? A downgrade doesn't seem like it will fix the issue.
Maybe remove this node, rebuild it with 12.2.5 and resync the data? Or wait a few
days for 12.2.7?
Kind regards,
Glen Baars
ion in the repo and
upgraded. That turned up two other regressions[2][3]. We have fixes for
those, but are working on an additional fix to make the damage from [3]
be transparently repaired."
Regards,
Uwe
On 14.07.2018 at 17:02, Glen Baars wrote:
> Hello Ceph users!
LACP with VLANs for the Ceph front/back-end networks.
Not sure it is the same issue, but if you want me to run any tests, let me
know.
Kind regards,
Glen Baars
-Original Message-
From: ceph-users On Behalf Of Xavier Trilla
Sent: Tuesday, 17 July 2018 6:16 AM
To: Pavel Shub ; Ceph User
It's a 500TB all bluestore cluster.
We are now seeing inconsistent PGs and scrub errors now that scrubbing has
resumed.
What is the best way forward?
1. Upgrade all nodes to 12.2.7?
2. Remove the 12.2.7 node and rebuild?
Kind regards,
Glen Baars
Hello Sage,
Thanks for the response.
I'm fairly new to Ceph. Are there any commands that would help confirm the
issue?
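For confirming it on a running cluster, a minimal set of checks might look like the following (the pool name and PG id are placeholders):

    ceph health detail
    rados list-inconsistent-pg <pool-name>
    rados list-inconsistent-obj <pg-id> --format=json-pretty

The first command shows which PGs are flagged inconsistent; the other two narrow it down to the affected objects.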
Kind regards,
Glen Baars
ERR] 1.275 soid 1:ae4f1dd8:::rbd_data.7695c59bb0bc2.05bb:head:
failed to pick suitable auth object
2018-07-20 12:21:07.463206 osd.124 osd.124 10.4.35.36:6810/1865422 99 : cluster
[ERR] 1.275 repair 12 errors, 0 fixed
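For the PG in that log, something like the following should list exactly which objects and shards disagree, and a repair can then be reattempted once the cluster is on a fixed release:

    rados list-inconsistent-obj 1.275 --format=json-pretty
    ceph pg repair 1.275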
Kind regards,
Glen Baars
From: ceph-users
mailto:ceph-users-boun...@lists
I saw that in the release notes.
Does that mean that the active+clean+inconsistent PGs will be OK?
Is the data still getting replicated even if inconsistent?
Kind regards,
Glen Baars
-Original Message-
From: Dan van der Ster
Sent: Friday, 20 July 2018 3:57 PM
To: Glen Baars
Cc: ceph
Thanks, we are fully bluestore and therefore just set osd skip data digest =
true
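For reference, a minimal sketch of where that setting would normally live, plus a runtime injection (assuming the option can be changed without a restart):

    # ceph.conf on the OSD hosts
    [osd]
    osd skip data digest = true

    # or at runtime
    ceph tell osd.* injectargs '--osd_skip_data_digest=true'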
Kind regards,
Glen Baars
-Original Message-
From: Dan van der Ster
Sent: Friday, 20 July 2018 4:08 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] 12.2.6 upgrade
That's right. But p
312G 44.07 1.35 104
74 ssd 0.54579 1.0 558G 273G 285G 48.91 1.50 122
75 ssd 0.54579 1.0 558G 281G 276G 50.45 1.55 114
78 ssd 0.54579 1.0 558G 289G 269G 51.80 1.59 133
79 ssd 0.54579 1.0 558G 276G 282G 49.39 1.52 119
Kind regards,
Glen Baars
osd.78 up 1.0 1.0
79 ssd 0.54579 osd.79 up 1.0 1.0
Kind regards,
Glen Baars
From: Shawn Iverson
Sent: Saturday, 21 July 2018 9:21 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] 12.2.7 - Available space decreasing
hosts.
Kind regards,
Glen Baars
From: Linh Vu
Sent: Sunday, 22 July 2018 7:46 AM
To: Glen Baars ; ceph-users
Subject: Re: 12.2.7 - Available space decreasing when adding disks
Something funny going on with your new disks:
138 ssd 0.90970 1.0 931G 820G 111G 88.08 2.71 216 Added
139
How very timely, I am facing the exact same issue.
Kind regards,
Glen Baars
-Original Message-
From: ceph-users On Behalf Of Thode Jocelyn
Sent: Monday, 23 July 2018 1:42 PM
To: Vasu Kulkarni
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-deploy] Cluster Name
Hi,
Yes
Hello Ceph Users,
Does anyone know how to set the Cluster Name when deploying with Ceph-deploy? I
have 3 clusters to configure and need to correctly set the name.
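For what it's worth, older ceph-deploy releases (before 2.0, I believe) accepted a global --cluster flag, roughly like this (cluster and host names are placeholders):

    ceph-deploy --cluster primary new mon-node1 mon-node2 mon-node3

which should write primary.conf rather than ceph.conf in the working directory. Newer ceph-deploy releases dropped the flag, as custom cluster names were being deprecated, so check which version is in use first.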
Kind regards,
Glen Baars
-Original Message-
From: ceph-users On Behalf Of Glen Baars
Sent: Monday, 23 July 2018 5:59 PM
To
Hello Erik,
We are going to use RBD-mirror to replicate the clusters. This seems to need
separate cluster names.
Kind regards,
Glen Baars
From: Erik McCormick
Sent: Thursday, 2 August 2018 9:39 AM
To: Glen Baars
Cc: Thode Jocelyn ; Vasu Kulkarni ;
ceph-users@lists.ceph.com
Subject: Re: [ceph
and all bluestore. We have also tried the
ceph.conf option (rbd journal pool = SSDPOOL).
Has anyone else gotten this working?
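A minimal sketch of the ceph.conf option mentioned above, plus the per-image flag that rbd feature enable accepts (pool and image names are placeholders):

    # ceph.conf on the client
    [client]
    rbd journal pool = SSDPOOL

    # or per image, when enabling the feature
    rbd feature enable RBD-HDD/<image> journaling --journal-pool SSDPOOL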
Kind regards,
Glen Baars
had these files in it.
ceph.client.admin.keyring
ceph.client.primary.keyring
ceph.conf
primary.client.primary.keyring
primary.conf
secondary.client.secondary.keyring
secondary.conf
Kind regards,
Glen Baars
-Original Message-
From: Thode Jocelyn
Sent: Thursday, 9 August 2018 1:41 PM
To
128K writes from 160MB/s down to 14MB/s). We see no improvement when moving
the journal to SSDPOOL (but we don't think it is really moving).
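One way to check whether the journal really moved is to look at which pool actually holds the journal data objects (as far as I can tell they are named journal_data.<pool-id>.<journal-id>.<n>):

    rados -p SSDPOOL ls | grep journal_data
    rados -p RBD-HDD ls | grep journal_data

If nothing shows up in SSDPOOL, the journal is still being written to the image's data pool.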
Kind regards,
Glen Baars
From: Jason Dillaman
Sent: Saturday, 11 August 2018 11:28 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] RBD journal
7c8974b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff,
deep-flatten, journaling
flags:
create_timestamp: Sat May 5 11:39:07 2018
journal: 37c8974b0dc51
mirroring state: disabled
Kind regards,
Glen Baars
From: Jason Dillaman
Sent: Tuesd
Hello Jason,
I will also complete testing of a few combinations tomorrow to try and isolate
the issue now that we can get it to work with a new image.
The cluster started out at 12.2.3 bluestore so there shouldn’t be any old
issues from previous versions.
Kind regards,
Glen Baars
From: Jason
Hello Jason,
I have tried with and without 'rbd journal pool = rbd' in the ceph.conf. It
doesn't seem to make a difference.
Also, here is the output:
rbd image-meta list RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
There are 0 metadata on this image.
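If the release in use supports per-image config overrides through image metadata (keys prefixed with conf_, which is an assumption worth verifying on 12.2.x), the journal pool could in principle be pinned per image:

    rbd image-meta set RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a conf_rbd_journal_pool SSDPOOL
    rbd image-meta list RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a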
Kind regards,
Glen Baars
From: Jason
Hello Jason,
I have now narrowed it down.
If the image has an exclusive lock, the journal doesn't go to the correct pool.
Kind regards,
Glen Baars
From: Jason Dillaman
Sent: Tuesday, 14 August 2018 9:29 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] RBD journal feature
On Tue
Hello Jason,
Thanks for your help. Here is the output you asked for also.
https://pastebin.com/dKH6mpwk
Kind regards,
Glen Baars
From: Jason Dillaman
Sent: Tuesday, 14 August 2018 9:33 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] RBD journal feature
On Tue, Aug 14, 2018 at 9
Is there any workaround that you can think of to correctly enable journaling on
locked images?
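A possible workaround, sketched here under the assumption that the exclusive lock can be released briefly (for example by pausing the VM or disconnecting the client), would be to toggle the feature so the journal is recreated in the right pool:

    rbd feature disable RBD-HDD/<image> journaling
    rbd feature enable RBD-HDD/<image> journaling --journal-pool SSDPOOL

This is only a guess at a workaround, not a confirmed fix for the lock behaviour described above.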
Kind regards,
Glen Baars
From: ceph-users On Behalf Of Glen Baars
Sent: Tuesday, 14 August 2018 9:36 PM
To: dilla...@redhat.com
Cc: ceph-users
Subject: Re: [ceph-users] RBD journal feature
Hello
Thanks for your help 😊
Kind regards,
Glen Baars
From: Jason Dillaman
Sent: Thursday, 16 August 2018 10:21 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] RBD journal feature
On Thu, Aug 16, 2018 at 2:37 AM Glen Baars
mailto:g...@onsitecomputers.com.au>> wrote:
Is the
69","format":2,"features":["layering","exclusive-lock","object-map","fast-diff","deep-flatten"],"flags":[],"create_timestamp":"Sat
Apr 28 19:45:59 2018"}
[Feat]["layering","exclu
Hello K,
We have found our issue: we were only fixing the main RBD image in our script
rather than the snapshots. It is working fine now.
Thanks for your help.
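In case it helps anyone else, a rough sketch of iterating over the snapshots as well as the base image (pool/image are placeholders, jq is assumed to be available, and the object-map rebuild is only an example of the kind of per-snapshot fix that was being missed):

    IMG=RBD-HDD/<image>
    rbd object-map rebuild "$IMG"
    for SNAP in $(rbd snap ls "$IMG" --format json | jq -r '.[].name'); do
        rbd object-map rebuild "$IMG@$SNAP"
    done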
Kind regards,
Glen Baars
From: Konstantin Shalygin
Sent: Friday, 17 August 2018 11:20 AM
To: ceph-users@lists.ceph.com; Glen Baars
Subject
error. I am assuming this is due to SCSI-3
persistent reservations.
Has anyone managed to get Ceph to serve iSCSI to Windows Clustered Shared
Volumes? If so, how?
Kind regards,
Glen Baars
rker(unsigned int)+0x884)
[0x55565ee0c1a4]
18: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55565ee0f1e0]
19: (()+0x76ba) [0x7fec8af206ba]
20: (clone()+0x6d) [0x7fec89f9741d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
interpret this.
Kind regards,
Glen Baars
reference, the hosts are 1 x 6-core CPU, 72GB RAM, 14 OSDs, 2 x 10Gbit, with LSI
CacheCade / writeback cache for the HDDs and LSI JBOD for the SSDs. 9 hosts in
this cluster.
Kind regards,
Glen Baars
just running an rbd du on the
large images. The limiting factor is the CPU on the rbd du command; it uses
100% of a single core.
Our cluster is completely bluestore / Mimic 13.2.4, with 168 OSDs and 12 Ubuntu
16.04 hosts.
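As far as I understand it, rbd du is only cheap when the fast-diff feature is enabled and the object map is valid; otherwise it falls back to scanning objects. A quick check (names are placeholders):

    rbd info RBD-HDD/<image> | grep -E 'features|flags'
    # if flags show "object map invalid, fast diff invalid":
    rbd object-map rebuild RBD-HDD/<image>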
Kind regards,
Glen Baars
)
goto cleanup;
} else {
vol->target.allocation = info.obj_size * info.num_objs;
}
------
Kind regards,
Glen Baars
-Original Message-
From: Wido den Hollander
Sent: Thursday, 28 February 2019 3:49 PM
To: Glen Baars ; ceph-users@lists
du command now takes around 2-3
minutes.
Kind regards,
Glen Baars
-Original Message-
From: Wido den Hollander
Sent: Thursday, 28 February 2019 5:05 PM
To: Glen Baars ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Mimic 13.2.4 rbd du slowness
On 2/28/19 9:41 AM, Glen Baars wrote
:46 AM
To: Glen Baars
Cc: Wido den Hollander ; ceph-users
Subject: Re: [ceph-users] Mimic 13.2.4 rbd du slowness
Have you used strace on the du command to see what it's spending its time doing?
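For example, something along these lines (image name is a placeholder) gives a syscall summary plus a timestamped trace to look through:

    strace -c rbd du RBD-HDD/<image>
    strace -f -tt -o rbd-du.strace rbd du RBD-HDD/<image>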
On Thu, Feb 28, 2019, 8:45 PM Glen Baars
mailto:g...@onsitecomputers.com.au>> wrote:
Hello
Hello Ceph Users,
Does anyone know what the flag point 'Started' is? Is that the Ceph OSD daemon
waiting on the disk subsystem?
Ceph 13.2.4 on CentOS 7.5.
"description": "osd_op(client.1411875.0:422573570 5.18ds0
5:b1ed18e5:::rbd_data.6.cf7f46b8b4567.0046e41a:head [read
1703936~
{
"time": "2019-03-21 14:12:43.699872",
"event": "commit_sent"
},
Does anyone know what that section is waiting for?
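For context, those events come from the OSD admin socket; the slowest recent ops, with their full event timelines, can be pulled with something like (osd.124 is just an example id):

    ceph daemon osd.124 dump_historic_ops
    ceph daemon osd.124 dump_ops_in_flight

Comparing the gaps between consecutive event timestamps shows where an op actually spent its time.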
Kind regards,
Glen Baars
-Original Message-
From: Br
f6b8b4567.0042766
a:head v 30675'5522366)",
"initiated_at": "2019-03-21 16:51:56.862447",
"age": 376.527241,
"duration": 1.331278,
Kind regards,
Glen Baars
-Original Message-----
From: Brad Hu
Hello Ceph,
What is the best way to find out how RocksDB is currently performing? I
need to build a business case for NVMe devices for RocksDB.
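A possible starting point, assuming the OSDs are bluestore: the per-OSD perf counters have rocksdb and bluefs sections, which show compaction/latency figures and how much of the DB has spilled onto the slow device (osd.0 is a placeholder):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | grep -A20 '"rocksdb"'
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | grep -E 'db_used_bytes|slow_used_bytes'

A significant slow_used_bytes figure would be a concrete argument for moving the DB/WAL onto NVMe.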
Kind regards,
Glen Baars
Hello Ceph Users,
I am finding that the write latency across my Ceph clusters isn't great, and I
wanted to see what other people are getting for op_w_latency. Generally I am
getting 70-110ms latency.
I am using: ceph --admin-daemon /var/run/ceph/ceph-osd.102.asok perf dump |
grep -A3 '\"op_w_latency'
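For anyone reproducing this: the op_w_latency counter is exposed as an avgcount and a sum (in seconds), so the average is roughly sum / avgcount. With hypothetical values of sum = 85.0 and avgcount = 1000, that works out to about 85 ms per write op.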
doing 500-1000 ops overall.
The network is dual 10Gbit using LACP, with a VLAN for private Ceph traffic and
untagged for public.
Glen
From: Konstantin Shalygin
Sent: Wednesday, 3 April 2019 11:39 AM
To: Glen Baars
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] op_w_latency
Hello Ceph Users
Interesting performance increase! I'm running iSCSI at a few installations and
now wonder what version of CentOS is required to improve performance. Did the
cluster go from Luminous to Mimic?
Glen
-Original Message-
From: ceph-users On Behalf Of Heðin
Ejdesgaard Møller
Sent: Saturday, 8
Hello Ceph Users,
I am trialing CephFS / Ganesha NFS for VMware usage. We are on Mimic / CentOS
7.7 / 130 x 12TB 7200rpm OSDs / 13 hosts / 3 replicas.
So far the read performance has been great. The write performance (NFS sync)
hasn't been great. We use a lot of 64KB NFS reads / writes and the