Hi,
have you tried specifying the image instead of the ceph version? I
found a bug report [1] stating that the option '--ceph-version' will
be removed.
When I tried that the last time it worked for me with '--image':
ceph orch upgrade start --image /ceph/ceph:latest
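(If it helps, you can follow the upgrade afterwards with "ceph orch upgrade status" or by watching "ceph -W cephadm".)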
Regards,
Eugen
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1917552
Hi,
I am using the rados bench tool. Currently I am using this tool on a
development cluster after running the vstart.sh script. It is working fine and
I am interested in benchmarking the cluster. However, I am struggling to
achieve a good bandwidth (MB/sec). For the object size I try 4MB, that is the
default, isn't it?
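For reference, a rough example of varying the object size with rados bench (the pool name "rbd" is just a placeholder; -b sets the object size in bytes, -t the number of concurrent ops):
rados bench -p rbd 60 write -b 4194304 -t 16 --no-cleanup   # 4 MB objects, 16 concurrent ops
rados bench -p rbd 60 seq -t 16                             # sequential reads of the objects written above
rados -p rbd cleanup                                        # remove the benchmark objects afterwards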
> -Original Message-
> Sent: 10 February 2021 09:30
> To: ceph-users ; dev ; ceph...@ceph.io
> Subject: [ceph-users] struggling to achieve high bandwidth on Ceph dev
> cluster - HELP
>
> Hi,
>
> Hello I am using rados bench tool. Currently I am using
thanks for the reply.
Yes, 4MB is the default. I have tried it. For example, below (posted) is for
4MB (default), run for 600 seconds. The seq read and rand read give me a
good bandwidth (not posted here). But with write it is still very low. And I
am particularly interested in block sizes. And rado
Hi Tom,
This is great! Will look into the PR.
Regarding the tests, the unit tests for amqp are actually here [1].
However, they are testing against a mock amqp library [2] and not a real
broker, so I don't think it is critical to cover SSL there.
The disabled tests you pointed out are the integ
You have to tell us a bit about your cluster setup, like the number of OSDs
and whether you have 3x replication on your testing pool.
E.g. this [1] was my test on a cluster with only 1 Gbit ethernet and a 3x
replicated HDD pool. This [2] was with 10 Gbit and more OSDs added
[2]
[root@c01 ~]# rados bench -p rbd 10 write
hints = 1
Maintaining
thanks, this looks really helpful and it proves to me that I am not doing it
the right way.
And you hit the nail on the head by asking about *replication factor*, because
I don't know how to change the replication factor. AFAIK, by default it is
*3x*. But I would like to change it, for example to *2x*.
So ple
> And you had the hit the nail by asking about *replication factor*.
> Because
> I don't know how to change the replication factor. AFAIK, by default it
> is
> *3x*. But I would like to change, for example to* 2x*.
ceph osd pool get rbd size
https://docs.ceph.com/en/latest/man/8/ceph/
> So plea
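For completeness, changing it looks roughly like this (using the "rbd" pool from the example above):
ceph osd pool get rbd size      # show the current replication factor
ceph osd pool set rbd size 2    # set it to 2x
ceph osd pool get rbd min_size  # check that min_size still makes sense for the new size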
Hello all!
We have a cluster where there are HDDs for data and NVMes for journals and
indexes. We recently added pure SSD hosts and created a storage class SSD.
To do this, we created a default.rgw.hot.data pool, associated a crush rule
using SSD and created a HOT storage class in the placement-targ
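For context, that sequence corresponds roughly to something like the following (a sketch; it assumes the default zonegroup/zone and the default-placement target, and "rule-ssd" is just an example rule name):
ceph osd crush rule create-replicated rule-ssd default host ssd
ceph osd pool create default.rgw.hot.data 64 64 replicated rule-ssd
radosgw-admin zonegroup placement add --rgw-zonegroup default \
    --placement-id default-placement --storage-class HOT
radosgw-admin zone placement add --rgw-zone default \
    --placement-id default-placement --storage-class HOT \
    --data-pool default.rgw.hot.data
radosgw-admin period update --commit   # only needed in multisite setups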
Hi,
On 10.02.21 at 15:15, Frank Schilder wrote:
> we plan to add a kernel client mount to a server in our DMZ. I can't find
> information on how to allow a ceph client to access a ceph cluster through a
> firewall.
A CephFS client will always talk to all MONs, MDSs and OSDs in the cluster.
Y
On Wed, Feb 10, 2021 at 8:31 AM Marcelo wrote:
>
> Hello all!
>
> We have a cluster where there are HDDs for data and NVMEs for journals and
> indexes. We recently added pure SSD hosts, and created a storage class SSD.
> To do this, we create a default.rgw.hot.data pool, associate a crush rule
> u
On 10.02.21 at 15:54, Frank Schilder wrote:
> Which ports are the clients using - if any?
All clients only have outgoing connections and do not listen to any
ports themselves.
The Ceph cluster will not initiate a connection to the client.
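As a rough sketch for the client host itself (on a transparent firewall the same ports would go into the FORWARD chain; 192.0.2.0/24 stands in for the Ceph public network, and the ports are the defaults):
iptables -A OUTPUT -p tcp -d 192.0.2.0/24 -m multiport --dports 3300,6789 -j ACCEPT  # MONs (msgr v2 and v1)
iptables -A OUTPUT -p tcp -d 192.0.2.0/24 --dport 6800:7300 -j ACCEPT                # OSDs and MDSs
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT                     # replies to those connections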
Kindest Regards
--
Robert Sander
Heinlein Support GmbH
S
I have the same question about when recovery is going to happen! I think
recovering from the second and third OSDs could avoid impacting client IO
when the primary OSD has other recovery ops!
On Tue, Feb 9, 2021 at 1:28 PM mj wrote:
> Hi,
>
> Quoting the page https://docs.ceph.com/en/latest/ar
thanks.
The Ceph source code contains a script called vstart.sh which allows
developers to quickly test their code using a simple deployment on their
development system.
Here: https://docs.ceph.com/en/latest//dev/quick_guide/
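For reference, a typical invocation from the build directory looks roughly like this (the daemon counts are just an example):
cd build
MON=3 OSD=3 MDS=1 ../src/vstart.sh -d -n -x   # -n: new cluster, -d: debug output, -x: enable cephx
./bin/ceph -s                                 # talk to the freshly started dev cluster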
Although I completely agree with your manual deployment part, I thought ma
Thanks, Gilles. I recently opened a PR to improve RBD image listing (
https://github.com/ceph/ceph/pull/39344). In your specific case, I think
that part of the issue could come from calculating the actually provisioned
capacity.
Could you please share the image details (or an `rbd info ` dump),
li
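Something along these lines (pool/image names are placeholders):
rbd info <pool>/<image>   # image features, object size, etc.
rbd du <pool>/<image>     # provisioned vs. actually used space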
On Wed, 10 Feb 2021 at 18:05, Seena Fallah wrote:
> I have the same question about when recovery is going to happen! I think
> recovering from second and third OSD can lead to not impact client IO too
> when the primary OSD has another recovery ops!
>
Those OSDs (the 2nd and 3rd) are obviously s
But I think they can have no recovery ops.
On Wed, Feb 10, 2021 at 9:28 PM Janne Johansson wrote:
On Wed, 10 Feb 2021 at 18:05, Seena Fallah wrote:
>
>> I have the same question about when recovery is going to happen! I think
>> recovering from second and third OSD can lead to not impact clie
Hi all,
we plan to add a kernel client mount to a server in our DMZ. I can't find
information on how to allow a ceph client to access a ceph cluster through a
firewall. Does somebody have a link or sample configs for both, iptables on the
host itself and for a transparent firewall between host
Sorry, not much to say other than a "me too".
I spent a week testing Ceph configurations... it should have only been 2 days,
but a huge amount of my time was wasted because I needed to do a full reboot on
the hardware.
On a related note: sometimes "zap" didn't fully clean things up. I had to
manu
On Wed, 10 Feb 2021 at 19:09, Seena Fallah wrote:
> But I think they can have no recovery ops.
>
No, but they would still have client ops even if there was no backfills or
recovery anywhere on the OSD.
--
May the most significant bit of your life be positive.
__
Yes, but this can speed up and balance the recovery ops across all OSDs, and
because it's a read op for the secondary or third OSD it shouldn't be very
harmful!
On Wed, Feb 10, 2021 at 10:03 PM Janne Johansson
wrote:
On Wed, 10 Feb 2021 at 19:09, Seena Fallah wrote:
>
>> But I think they can have no
Hi Robert,
thanks for your fast reply. I probably misunderstand something; I thought the
client binds to a port itself. I guess the info you refer to is this:
https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/?highlight=iptables
I read this as the iptables config on the
What's "ceph orch device ls" look like, and please show us your
specification that you've used.
Jens was correct, his example is how we worked-around this problem, pending
patch/new release.
On Wed, Feb 10, 2021 at 12:05 AM Tony Liu wrote:
> With db_devices.size, db_devices shows up from "orch
Super, thanks! -- Frank
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Robert Sander
Sent: 10 February 2021 16:32:20
To: ceph-users@ceph.io
Subject: [ceph-users] Re: firewall config for ceph fs client
On 10.02.21 at 15:54, Frank Schilder wrote:
I've always run it against the block dev
- Original Message -
From: "Matt Wilder"
To: "Philip Brown"
Cc: "ceph-users"
Sent: Wednesday, February 10, 2021 12:06:55 PM
Subject: Re: [ceph-users] Re: Device is not available after zap
Are you running zap on the lvm volume, or the underlying
> I am interested in benchmarking the cluster.
dstat is great, but can you send an example of this command on your
osd machine: iostat -mtxy 1
This will also show some basic CPU info and more detailed analysis of
the I/O pattern.
What kind of drives are you using? Random access can be very slo
I had something similar a while ago, can't remember how I solved it, sorry, but
it is not an lvm bug. Also posted it here. Too bad this is still not fixed.
> -Original Message-
> Cc: ceph-users
> Subject: [ceph-users] Re: Device is not available after zap
>
> ive always run it against the
> Some more questions please:
> How many OSDs have you been using in your second email tests for 1gbit
> [1]
> and 10gbit [2] ethernet? Or to be precise, what is your cluster for
When I was testing with 1 Gbit ethernet I had 11 OSDs on 4 servers, but this
already showed saturated 1 Gbit links. Now
Hi David,
Requested info is below.
# ceph orch device ls ceph-osd-1
HOST        PATH      TYPE  SIZE   DEVICE_ID                      MODEL         VENDOR   ROTATIONAL  AVAIL  REJECT REASONS
ceph-osd-1  /dev/sdd  hdd   2235G  SEAGATE_DL2400MM0159_WBM2VL2G  DL2400MM0159  SEAGATE  1
It has become a lot more severe after adding a large number of disks. I added a
tracker:
https://tracker.ceph.com/issues/49231
In case you have additional information, feel free to add.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
__
To update, the OSD had data on HDD and DB on SSD.
After "ceph orch osd rm 12 --replace --force" and wait
till rebalancing is done and daemon is stopped,
I ran "ceph orch device zap ceph-osd-2 /dev/sdd" to zap the device.
It cleared PV, VG and LV for data device, but not DB device.
DB device issue i
On Wed, Feb 10, 2021 at 1:11 AM Eugen Block wrote:
> I
> found a bug report [1] stating that the option '--ceph-version' will
> be removed.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1917552
Note that the bugzilla ticket is about the downstream RH Ceph Storage
product, where we have a very d
It's displaying sdb (what I assume you want to be used as a DB device) as
unavailable. What's "pvs" output look like on that "ceph-osd-1" host?
Perhaps it is full. I see the other email you sent regarding replacement; I
suspect the pre-existing LV from your previous OSD is not re-used. You may
need
Hi David,
===
# pvs
PV         VG                                                    Fmt   Attr  PSize  PFree
/dev/sda3  vg0                                                   lvm2  a--   1.09t      0
/dev/sdb   ceph-block-dbs-f8d28f1f-2dd3-47d0-9110-959e88405112  lv
Are you running zap on the lvm volume, or the underlying block device?
If you are running it against the lvm volume, it sounds like you need to
run it against the block device so it wipes the lvm volumes as well.
(Disclaimer: I don't run Ceph in this configuration)
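Something like this, as a sketch (/dev/sdd is taken from earlier in the thread; double-check the device first):
ceph-volume lvm zap --destroy /dev/sdd            # wipes LVM metadata and the partition table on the whole device
ceph orch device zap ceph-osd-2 /dev/sdd --force  # the same thing via the orchestrator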
On Wed, Feb 10, 2021 at 10:24 A
Hi,
I have a few questions about krbd on kernel 4.15:
1. Does it support msgr v2? (If not, which kernel supports msgr v2?)
2. If krbd is using msgr v1, does it checksum (CRC) the messages that it
sends to see, for example, if the write is correct or not? And if it does
checksums, if there were a probl
Hi Michael,
out of curiosity, did the pool go away or did it put up a fight?
I don't remember exactly, it's a long time ago, but I believe stray objects on
fs pools come from files that are still in snapshots but were deleted on the fs
level. Such files are moved to special stray pools until the snapshot
Msgr2 will be supported from kernel 5.11
k
Sent from my iPhone
> On 11 Feb 2021, at 03:35, Seena Fallah wrote:
>
> Does it support msgr v2? (If not which kernel supports msgr v2?)
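For what it's worth, on kernels with msgr v2 support you should be able to choose the mode at map time, roughly like this (pool/image names are placeholders):
rbd map rbd/myimage -o ms_mode=prefer-crc   # msgr v2 in crc mode when the cluster allows it
rbd map rbd/myimage -o ms_mode=secure       # msgr v2 with on-the-wire encryption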
According to your "pvs" output you still have a VG on your sdb device. As long
as that is on there, it will not be available to Ceph. I have had to do an
lvremove, like this:
lvremove ceph-78c78efb-af86-427c-8be1-886fa1d54f8a/osd-db-72784b7a-b5c0-46e6-8566-74758c297adc
Do an lvs command to see the right
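A possible cleanup sequence, assuming /dev/sdb and the VG name from the pvs output above (double-check before removing anything):
lvs -o lv_name,vg_name,devices                                # find what still references /dev/sdb
vgremove ceph-block-dbs-f8d28f1f-2dd3-47d0-9110-959e88405112  # removes the VG and its LVs
pvremove /dev/sdb                                             # clear the PV label so the device shows up as available again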