Hello Daniel,
yes, the Samsung "Pro" SSD series isn't all that "pro", especially when it
comes to write IOPS. I would tend to say get some Intel S4510 if you can
afford it. If you can't, you can still try to activate overprovisioning
on the SSD; I would tend to say reserve 10-30% of the SSD for wear
leveling
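A minimal sketch of one way to reserve that space, assuming a brand-new or
freshly trimmed drive at /dev/sdX (hypothetical device name): partition only
~80% of the drive and leave the rest untouched for the controller.

  # WARNING: destroys all data on the drive. Discard all blocks first so
  # the reserved area is actually free for the controller to use:
  blkdiscard /dev/sdX
  # single partition over ~80% of the disk; the unpartitioned ~20%
  # acts as extra overprovisioning / wear-leveling headroom
  parted --script /dev/sdX mklabel gpt mkpart osd 0% 80%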
Hello Community,
I have problems with ceph-mons in docker. The docker pods are starting, but I
get a lot of "e6 handle_auth_request failed to assign global_id" messages in
the log. 2 mons are up but I can't run any ceph commands.
Regards
Mateusz
This error means your quorum didn't form.
How many mon nodes do you usually have and how many went down?
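If the cluster commands hang, you can still ask each mon directly over its
admin socket whether it is in quorum. A sketch, assuming the mon id is mon.a
(hypothetical) and that you run it inside (or via docker exec on) the mon
container:

  # shows the mon's state: probing / electing / leader / peon
  ceph daemon mon.a mon_status
  # lists which mons have actually joined the quorum
  ceph daemon mon.a quorum_status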
On Tue, 13 Oct 2020 at 10:56, Mateusz Skała
wrote:
> Hello Community,
> I have problems with ceph-mons in docker. Docker pods are starting but I
> got a lot of messages "e6 handle_auth_
Hello fellow Ceph users,
we have released our new Ceph benchmark paper [0]. The platform and hardware
used are Proxmox VE 6.2 with Ceph Octopus on a new AMD EPYC Zen2 CPU
with U.2 SSDs (details in the paper).
The paper should illustrate the performance that is possible with a 3x
node cluster witho
Hi,
Thanks for responding. All monitors went down; 2/3 are actually up now, but
probably not in the quorum. A quick look at the tasks run before this:
1. a few PGs without scrub and deep-scrub, 2 mons in the cluster
2. added one monitor (via ansible), ansible restarted the OSDs
3. all system OS filesystems went full (b
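For what it's worth, regarding point 3: a full OS filesystem also starves the
mon store, and the mons shut themselves down when their data directory cannot
grow. A quick check, sketched with the default path (under docker the bind
mount may differ):

  df -h /var/lib/ceph
  du -sh /var/lib/ceph/mon/ceph-*/store.db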
Hi Team,
I would like to validate cephadm on bare metal and use docker/podman as the
container runtime.
Currently we use a NUMA-aware config on bare metal to improve performance.
Is there any config I can apply in cephadm so that podman/docker run the
daemons with the --cpuset-cpus=<num> and --cpuset-mems=<nodes> options?
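For reference, this is roughly the invocation I have in mind, shown as a plain
podman sketch (hypothetical container name; image and arguments would be
whatever cephadm normally passes):

  podman run -d --name ceph-osd-0 \
    --cpuset-cpus 0-15 --cpuset-mems 0 \
    <ceph-container-image> <osd arguments>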
Hello,
We are running a 14.2.7 cluster (3 nodes with 24 OSDs) and I've recently
started getting 'BlueFS spillover detected'. I'm up to 3 OSDs in this
state.
In scanning through the various online sources I haven't been able to
determine how to respond to this condition.
Please advise.
Thanks.
-D
Hi,
if possible you can increase the devices holding the RocksDB (to reasonable
sizes, 3/30/300 GB; doubling the space can help during compaction) and
expand them. Compacting the OSDs should then remove the respective
spillover from the main devices, or you can let Ceph do it on its own wh
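A sketch of the commands this usually involves, assuming osd.12 (hypothetical
ID) and that the underlying DB partition/LV has already been enlarged:

  # stop the OSD and let BlueFS grow into the enlarged DB device
  systemctl stop ceph-osd@12
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-12
  systemctl start ceph-osd@12

  # trigger a RocksDB compaction so spilled data moves back off the main device
  ceph tell osd.12 compact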
If you've got all "Nodes" up and running fine now, here is what I did on
my own just this morning.
1°/- Ensure all MONs get the same /etc/ceph/ceph.conf file.
2°/- Many times your MONs share the same keyring; if so, ensure you've got
the right keyring in both places, /etc/ceph/ceph.mon.keyring and
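A quick way to verify points 1°/ and 2°/ across hosts is to compare checksums
(sketch, with hypothetical hostnames mon1..mon3):

  for h in mon1 mon2 mon3; do
    ssh $h 'md5sum /etc/ceph/ceph.conf /etc/ceph/ceph.mon.keyring'
  done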
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.6.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to
Hi Pritha and thanks for your reply. We are using Ceph Octopus and we have
switched to Keycloak from dexIdP.
Having said that we have followed the guide from
https://docs.ceph.com/en/octopus/radosgw/STS/ but we are constantly having an
issue with the AssumeRoleWithWebIdentity example.
We are u
Thanks for the link Alwin!
On Intel platforms, disabling C/P-state transitions can have a really big
impact on IOPS (on RHEL, for instance, using the network-latency or
latency-performance tuned profiles). It would be very interesting to know if
AMD EPYC platforms see similar benefits. I don't have any
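For reference, on RHEL-family systems this is the sort of thing I mean
(a sketch, not something taken from the paper):

  # switch to a low-latency tuned profile that pins C-/P-states
  tuned-adm profile latency-performance
  tuned-adm active

  # or limit C-states directly via the kernel command line:
  #   processor.max_cstate=1 intel_idle.max_cstate=0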
There is a general documentation meeting called the "DocuBetter Meeting",
and it is held every two weeks. The next DocuBetter Meeting will be on 14
Oct 2020 at 1630 UTC, and will run for thirty minutes. Everyone with a
documentation-related request or complaint is invited.
The meeting will be held
Alwin, this is excellent info. We have a lab on AMD with a similar setup
with NVMe on Proxmox, and will try these benchmarks as well.
--
Alex Gorbachev
Intelligent Systems Services Inc. STORCIUM
On Tue, Oct 13, 2020 at 6:18 AM Alwin Antreich
wrote:
> Hello fellow Ceph users,
>
> we have relea
Very nice and useful document. One thing is not clear to me: the fio
parameters in appendix 5:
--numjobs=<1|4> --iodepths=<1|32>
It is not clear if/when the iodepth was set to 32. Was it used with all
tests with numjobs=4? Or was it:
--numjobs=<1|4> --iodepths=1
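For concreteness, the two readings would correspond to fio invocations roughly
like these (a sketch with made-up options and a hypothetical RBD target; the
paper's actual jobs may differ):

  fio --name=bench --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --runtime=60 --time_based --filename=/dev/rbd0 \
      --numjobs=4 --iodepth=32
  # versus the same command with --iodepth=1 (four jobs, queue depth 1 each)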
/maged
On 13/10/2020 12:1
Hello,
A little off topic, but there isn't much activity in the Puppet community
around Ceph, so if it's something you'd like to share and the module is
reusable, I'm sure there are people who could make good use of it.
The "official" ceph/puppet-ceph module is dead; it was forked out of the Puppet
Hi all,
Is TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES configured just for FileStore, or
can it be used for BlueStore, too?
https://github.com/ceph/ceph/blob/master/etc/default/ceph#L7
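For reference, the kind of setting I mean (128 MiB shown purely as an example;
check the linked file for the shipped default):

  # /etc/default/ceph
  TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728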
Thanks.
Hi,
What is the correct procedure to change a running cluster deployed with cephadm
to a custom container from a private repository that requires a login?
I would have thought something like this would have been the right way, but the
second command fails with an authentication error.
$ cephadm
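The workaround I am considering instead is sketched below; I'm not sure it is
the intended way, and the registry/image names are hypothetical:

  # on every host, authenticate the container runtime to the private registry
  podman login registry.example.com    # or: docker login registry.example.com

  # then point the cluster at the custom image
  ceph orch upgrade start --image registry.example.com/myorg/ceph:v15.2.4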
Hi guys,
Thanks for the replies. I looked through that table - hmm, that is really
true - Samsung Pro is not really pro. Well, I'm getting what I'm paying
for. Mainly my question was: am I getting performance adequate to my
disks, and it seems like yes, I am. My tests show 7-8 kIOPS,
replicatio fa
Hello,
rgw sts key should be a key of length 16 since we use AES 128 for
encryption (e.g. rgw sts key = abcdefghijklmnop)
Yes it should be 'sts_client' and not 'client'. The errors in documentation
have been noted and will be corrected.
Also please note that the backport to octopus of the new c
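For reference, the relevant options then end up looking something like this in
ceph.conf (a sketch; the RGW section name below is just an example):

  [client.rgw.gateway]
  rgw sts key = abcdefghijklmnop    # must be exactly 16 characters (AES-128)
  rgw s3 auth use sts = true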
On Tue, Oct 13, 2020 at 11:19:33AM -0500, Mark Nelson wrote:
> Thanks for the link Alwin!
>
>
> On intel platforms disabling C/P state transitions can have a really big
> impact on IOPS (on RHEL for instance using the network or performance
> latency tuned profile). It would be very interesting
On Tue, Oct 13, 2020 at 09:09:27PM +0200, Maged Mokhtar wrote:
>
> Very nice and useful document. One thing is not clear for me, the fio
> parameters in appendix 5:
> --numjobs=<1|4> --iodepths=<1|32>
> it is not clear if/when the iodepth was set to 32, was it used with all
> tests with numjobs=4