On 1/11/19 8:08 PM, Kenneth Van Alstyne wrote:
> Hello all (and maybe this would be better suited for the ceph devel
> mailing list):
> I’d like to use RBD mirroring between two sites (to each other), but I
> have the following limitations:
> - The clusters use the same name (“ceph”)
> - The clus
Hello list,
I noticed my last post was displayed as a reply to a different thread,
so I'm re-sending my question; please excuse the noise.
There are two config options of mon/osd interaction that I don't fully
understand. Maybe one of you could clarify it for me.
mon osd report timeout
- The
Yes, your understanding is correct. But the main mechanism by which
OSDs are reported as down is that other OSDs report them as down with
a much stricter timeout (20 seconds? 30 seconds? something like that).
It's quite rare to hit the "mon osd report timeout" (the usual
scenario here is a network
Thanks for the reply, Paul.
> Yes, your understanding is correct. But the main mechanism by which
> OSDs are reported as down is that other OSDs report them as down with
> a much stricter timeout (20 seconds? 30 seconds? something like that).
Yes, the osd_heartbeat_grace of 20 seconds has occurred fr
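For reference, both values being discussed can be inspected on a running
daemon via the admin socket; the daemon IDs below are examples, not from
this cluster:

```
# Command sketch (needs a live cluster; osd.0 / mon.ceph-mon-01 are examples):
ceph daemon osd.0 config get osd_heartbeat_grace
ceph daemon mon.ceph-mon-01 config get mon_osd_report_timeout
```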
Hello
My name is Yanko Davila, I'm new to ceph so please pardon my ignorance.
I have a question about Bluestore and SPDK.
I'm currently running ceph version:
ceph version 12.2.10 (177915764b752804194937482a39e95e0ca3de94) luminous
(stable)
on Debian:
Linux 4.9.0-8-amd64 #1 SMP Debian 4.
Hi All
I've been playing around with the nfs-ganesha 2.7 exporting a cephfs
filesystem, it seems to be working pretty well so far. A few questions:
1) The docs say " For each NFS-Ganesha export, FSAL_CEPH uses a libcephfs
client,..." [1]. For arguments sake, if I have ten top level dirs in my
Cep
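Not an answer to (1), but for anyone following along, a minimal FSAL_CEPH
export block in ganesha.conf looks roughly like this (a sketch from memory
of the nfs-ganesha 2.7 docs; the export ID and paths are made-up examples):

```
EXPORT {
    Export_ID = 100;          # example ID, must be unique per export
    Path = /;                 # path within the cephfs tree
    Pseudo = /cephfs;         # where NFSv4 clients see it
    Access_Type = RW;
    FSAL {
        Name = CEPH;          # selects FSAL_CEPH (one libcephfs client per export)
    }
}
```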
I have a ceph luminous cluster running on CentOS7 nodes.
This cluster has 50 OSDs, all with the same size and all with the same
weight.
Since I noticed that there was quite an "unfair" usage of the OSDs (some
used at 30 %, some used at 70 %), I tried to activate the balancer.
But the balancer does
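For context, activating the balancer on Luminous goes roughly like this
(a command sketch; note that upmap mode additionally requires every client
to be Luminous-capable):

```
ceph mgr module enable balancer
ceph osd set-require-min-compat-client luminous   # required for upmap mode
ceph balancer mode upmap                          # or "crush-compat" for older clients
ceph balancer eval                                # score the current distribution
ceph balancer on
```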
On Mon, Jan 14, 2019 at 3:06 PM Massimo Sgaravatto
wrote:
>
> I have a ceph luminous cluster running on CentOS7 nodes.
> This cluster has 50 OSDs, all with the same size and all with the same weight.
>
> Since I noticed that there was a quite "unfair" usage of OSD nodes (some used
> at 30 %, some
Thanks for the prompt reply
Indeed I have different racks with different weights.
Below is the "ceph osd tree" output
[root@ceph-mon-01 ~]# ceph osd tree
ID  CLASS  WEIGHT     TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
-1         272.80426  root default
-7         109.12170      rack Rack11-PianoAlto
-
On 1/14/19 3:18 PM, Massimo Sgaravatto wrote:
> Thanks for the prompt reply
>
> Indeed I have different racks with different weights.
> Below is the "ceph osd tree" output
>
Can you also show the output of 'ceph osd df' ?
The number of PGs might be on the low side, which also causes this imbalanc
On Mon, Jan 14, 2019 at 3:18 PM Massimo Sgaravatto
wrote:
>
> Thanks for the prompt reply
>
> Indeed I have different racks with different weights.
Are you sure you're replicating across racks? You have only 3 racks,
one of which is half the size of the other two -- if yes, then your
cluster will
This [*] is the output of "ceph osd df".
Thanks a lot !
Massimo
[*]
[root@ceph-mon-01 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
30 hdd 5.45609 1.0 5587G 1875G 3711G 33.57 0.65 140
31 hdd 5.45609 1.0 5587G 3951G 1635G 70.72 1.38 144
32 hdd 5.45609
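As a quick sanity check on the spread, the %USE column (field 8) can be
pulled out with awk. Here it is fed two sample rows copied from the output
above; in practice you would pipe the real "ceph osd df" output through the
same filter (skipping the header line):

```shell
# Min/max of the %USE column (field 8) of "ceph osd df" output.
# Sample rows from this thread stand in for the real command here.
printf '%s\n' \
  '30 hdd 5.45609 1.0 5587G 1875G 3711G 33.57 0.65 140' \
  '31 hdd 5.45609 1.0 5587G 3951G 1635G 70.72 1.38 144' |
awk '{ u = $8 + 0
       if (NR == 1 || u < min) min = u
       if (NR == 1 || u > max) max = u }
     END { printf "min=%.2f max=%.2f\n", min, max }'
# → min=33.57 max=70.72
```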
Hi Dan
At the moment I have indeed only 5 OSD nodes across 3 racks.
The crush-map is attached.
Are you suggesting to replicate only between nodes and not between racks
(given the limited resources)?
Thanks, Massimo
On Mon, Jan 14, 2019 at 3:29 PM Dan van der Ster wrote:
> On Mon, Jan 14, 2019 at
Your crush rule is ok:
step chooseleaf firstn 0 type host
You are replicating host-wise, not rack wise.
This is what I would suggest for your cluster, but keep in mind that a
whole-rack outage will leave some PGs incomplete.
Regarding the straw2 change causing 12% data movement -- in this ca
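For comparison, the rack-level variant of that rule would look roughly like
this in a decompiled crushmap (the rule id is an example); as discussed, it
only makes sense once there are enough racks of comparable weight:

```
rule replicated_racks {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack   # one replica per rack instead of per host
    step emit
}
```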
Thanks for the reply Jason — I was actually thinking of emailing you directly,
but thought it may be beneficial to keep the conversation to the list so that
everyone can see the thread. Can you think of a reason why one-way RBD
mirroring would not work to a shared tertiary cluster? I need to
Hi. Welcome to the community.
On 01/14/2019 07:56 AM, David C wrote:
> Hi All
> I've been playing around with the nfs-ganesha 2.7 exporting a cephfs
> filesystem, it seems to be working pretty well so far. A few questions:
> 1) The docs say " For each NFS-Ganesha export, FSAL_CEPH uses a
> libcephfs
We've found that more aggressive prefetching in the Ceph client can
help with some poorly behaving legacy applications (I don't know the
option off the top of my head, but it's documented).
It can also be useful to disable logging (even the in-memory logs) if
you do a lot of IOPS (that's debug client and
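A sketch of the client-side ceph.conf settings being referred to (option
names as I recall them from the Luminous docs, and the readahead value is
an example; verify both before relying on them):

```
[client]
client_readahead_max_bytes = 4194304   # more aggressive prefetch (example value)
debug_client = 0/0                     # 0/0 disables the in-memory log as well
debug_ms = 0/0
```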
On Mon, Jan 14, 2019 at 10:10 AM Kenneth Van Alstyne
wrote:
>
> Thanks for the reply Jason — I was actually thinking of emailing you
> directly, but thought it may be beneficial to keep the conversation to the
> list so that everyone can see the thread. Can you think of a reason why
> one-way
In this case, I’m imagining Clusters A/B both having write access to a third
“Cluster C”. So A/B -> C rather than A -> C -> B / B -> C -> A / A -> B-> C.
I admit, in the event that I need to replicate back to either primary cluster,
there may be challenges.
Thanks,
--
Kenneth Van Alstyne
Sys
On Mon, Jan 14, 2019 at 11:09 AM Kenneth Van Alstyne
wrote:
>
> In this case, I’m imagining Clusters A/B both having write access to a third
> “Cluster C”. So A/B -> C rather than A -> C -> B / B -> C -> A / A -> B-> C.
> I admit, in the event that I need to replicate back to either primary
>
D’oh! I was hoping that the destination pools could be unique names,
regardless of the source pool name.
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-
Hey,
I am having some issues upgrading to 12.2.10 on my 18.04 server. It is
saying 12.2.8 is the latest.
I am not sure why it is not going to 12.2.10; also, the rest of my
cluster is already on 12.2.10 except this one machine.
$ cat /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/de
This is because Luminous is not being built for Bionic for whatever reason.
There are some other mailing list entries detailing this.
Right now you have ceph installed from the Ubuntu bionic-updates repo, which
has 12.2.8, but does not get regular release updates.
This is what I ended up having
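To confirm where the 12.2.8 package is coming from and what else is on
offer, a quick check (command sketch, run on the affected machine):

```
apt-cache policy ceph-osd    # candidate version and which repo wins the pin
apt-cache madison ceph-osd   # every available version, per repository
```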
On Fri, Jan 11, 2019 at 10:07 PM Brian Topping
wrote:
> Hi all,
>
> I have a simple two-node Ceph cluster that I’m comfortable with the care
> and feeding of. Both nodes are in a single rack and captured in the
> attached dump, it has two nodes, only one mon, all pools size 2. Due to
> physical l
Wow, OK.
I wish there was some official stance on this.
Now I have to remove those OSDs, downgrade to 16.04, and re-add them;
this is going to take a while.
--Scott
On Mon, Jan 14, 2019 at 10:53 AM Reed Dier wrote:
>
> This is because Luminous is not being built for Bionic for whatever reason.
> T
Hi,
while trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm experiencing
issues with bluestore osds - so I canceled the upgrade and all bluestore
osds are stopped now.
After starting a bluestore osd I'm seeing a lot of slow requests caused
by very high read rates.
Device: rrqm/s wr
What's the output of "ceph daemon osd.<id> status" on one of the OSDs
while it's starting?
Is the OSD crashing and being restarted all the time? Anything weird
in the log files? Was there recovery or backfill during the upgrade?
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contac
Hello
I was able to find the device selector. Now I have an issue understanding the
steps to activate the OSD. Once I set up SPDK, the device disappears from lsblk
as expected, so the ceph manual is not very helpful after SPDK is enabled. Is
there any manual that walks you through the steps to
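For what it's worth, the Luminous BlueStore docs point an OSD at an NVMe
device through SPDK with a ceph.conf entry along these lines (the serial
number is a placeholder; I haven't verified the rest of the activation
sequence myself):

```
[osd]
bluestore_block_path = spdk:55cd2e404bd73932   # "spdk:" + NVMe device serial (placeholder)
```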
Hi Paul,
Am 14.01.19 um 21:39 schrieb Paul Emmerich:
> What's the output of "ceph daemon osd.<id> status" on one of the OSDs
> while it's starting?
{
"cluster_fsid": "b338193d-39e0-40e9-baba-4965ef3868a3",
"osd_fsid": "d95d0e3b-7441-4ab0-869c-fe0551d3bd52",
"whoami": 2,
"state": "act
Ah! Makes perfect sense now. Thanks!!
Sent from my iPhone
> On Jan 14, 2019, at 12:30, Gregory Farnum wrote:
>
>> On Fri, Jan 11, 2019 at 10:07 PM Brian Topping
>> wrote:
>> Hi all,
>>
>> I have a simple two-node Ceph cluster that I’m comfortable with the care and
>> feeding of. Both nodes
Hi Stefan,
Any idea if the reads are constant or bursty? One cause of heavy reads
is when rocksdb is compacting and has to read SST files from disk. It's
also possible you could see heavy read traffic during writes if data has
to be read from SST files rather than cache. It's possible this
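One way to check whether compaction is behind the reads is to watch the
rocksdb perf counters on the admin socket while they occur (command sketch;
osd.2 is an example id):

```
ceph daemon osd.2 perf dump rocksdb              # compaction / SST read counters
ceph daemon osd.2 config set debug_rocksdb 4/5   # log compaction events while debugging
```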
Hello Ceph users,
I am chasing an issue that is affecting one of our clusters across various
nodes / OSDs. The cluster has around 150 OSDs across 9 nodes and runs Ceph
12.2.8 and 12.2.9.
We have three ceph clusters around this size; two are CentOS 7.5 based and one
is Ubuntu 16.04. This segfault
Can I use a python34 package in a python36 environment? If not, what
should I do to use a python34 package under python36?
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hello ceph users!
A couple of days ago I've got a ceph health error - mds0: Metadata damage
detected.
Overall ceph cluster is fine: all pgs are clean, all osds are up and in, no
big problems.
Looks like there is not much information regarding this class of issues, so
I'm writing this message and h