Hi,
On 2/3/24 at 18:00, Tyler Stachecki wrote:
On 23.02.24 16:18, Christian Rohmann wrote:
I just noticed issues with ceph-crash using the Debian/Ubuntu
packages (package: ceph-base):
While the /var/lib/ceph/crash/posted folder is created by the package
install, it's not properly chowned
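Until the packaging is fixed, a hedged workaround (assuming ceph-crash runs as the ceph user on your hosts) would be to correct the ownership by hand and restart the agent:

  # check current ownership first
  ls -ld /var/lib/ceph/crash /var/lib/ceph/crash/posted
  chown ceph:ceph /var/lib/ceph/crash/posted
  systemctl restart ceph-crash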
Hi all, coming late to the party but I want to chip in as well with some
experience.
The problem of tail latencies of individual OSDs is a real pain for any
redundant storage system. However, there is an elegant way to deal with
this when using large replication factors. The idea is to u
Hello,
I wonder why my autobalancer is not working here:
root@ceph01:~# ceph -s
  cluster:
    id:     5436dd5d-83d4-4dc8-a93b-60ab5db145df
    health: HEALTH_ERR
            1 backfillfull osd(s)
            1 full osd(s)
            1 nearfull osd(s)
            4 pool(s) full
=> osd.17 was to
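A hedged first diagnostic step here (read-only commands, nothing is changed): check what the balancer itself reports and how PGs and data are actually spread across the OSDs:

  ceph balancer status
  ceph osd df tree
  ceph health detail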
On Mon, 4 Mar 2024 at 11:30, Ml Ml wrote:
>
> Hello,
>
> I wonder why my autobalancer is not working here:
I think the short answer is "because you have so wildly varying sizes
both for drives and hosts".
If your drive sizes span from 0.5 to 9.5, there will naturally be
skewed data, and it is no
>
> Fast write enabled would mean that the primary OSD sends #size copies to the
> entire active set (including itself) in parallel and sends an ACK to the
> client as soon as min_size ACKs have been received from the peers (including
> itself). In this way, one can tolerate (size-min_size) slow(e
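To make the tolerance claim above concrete with example numbers (illustration only): with size=3 and min_size=2, the client would get its ACK once 2 of the 3 replica ACKs have arrived, so size - min_size = 1 slow OSD per PG would no longer add to the write latency.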
On 04/03/2024 13:35, Marc wrote:
Fast write enabled would mean that the primary OSD sends #size copies to the
entire active set (including itself) in parallel and sends an ACK to the
client as soon as min_size ACKs have been received from the peers (including
itself). In this way, one can toler
Hi,
ceph dashboard fails to listen on all IPs.
log_channel(cluster) log [ERR] : Unhandled exception from module 'dashboard'
while running on mgr.controllera: OSError("No socket could be created --
(('0.0.0.0', 8443): [Errno -2] Name or service not known) -- (('::', 8443,
0, 0):
ceph version 17.2
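A hedged first thing to try (the values below are illustrative, not a confirmed fix): pin the dashboard bind address and port explicitly and reload the module, e.g.:

  ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
  ceph config set mgr mgr/dashboard/ssl_server_port 8443
  ceph mgr module disable dashboard
  ceph mgr module enable dashboard

If binding still fails for both 0.0.0.0 and ::, the "Name or service not known" part of the error suggests name resolution inside the active mgr's environment is worth a look as well.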
>>> Fast write enabled would mean that the primary OSD sends #size copies to the
>>> entire active set (including itself) in parallel and sends an ACK to the
>>> client as soon as min_size ACKs have been received from the peers (including
>>> itself). In this way, one can tolerate (size-min_size) s
On 04/03/2024 15:37, Frank Schilder wrote:
Fast write enabled would mean that the primary OSD sends #size copies to the
entire active set (including itself) in parallel and sends an ACK to the
client as soon as min_size ACKs have been received from the peers (including
itself). In this way, one
We're happy to announce the 15th, and expected to be the last,
backport release in the Pacific series.
https://ceph.io/en/news/blog/2024/v16-2-15-pacific-released/
Notable Changes
---
* `ceph config dump --format ` output will display the localized
option names instead of their nor
This is great news! Many thanks!
/Z
On Mon, 4 Mar 2024 at 17:25, Yuri Weinstein wrote:
> We're happy to announce the 15th, and expected to be the last,
> backport release in the Pacific series.
>
> https://ceph.io/en/news/blog/2024/v16-2-15-pacific-released/
>
> Notable Changes
> --
> I think the short answer is "because you have so wildly varying sizes
> both for drives and hosts".
Arguably OP's OSDs *are* balanced in that their PGs are roughly in line with
their sizes, but indeed the size disparity is problematic in some ways.
Notably, the 500GB OSD should just be remov
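If that small OSD does get retired, a hedged sketch of draining it first (the OSD id is a placeholder):

  ceph osd crush reweight osd.<id> 0          # let backfill move its PGs elsewhere
  # once it holds no PGs:
  systemctl stop ceph-osd@<id>                # or stop it via your orchestrator
  ceph osd purge <id> --yes-i-really-mean-it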
On 3/4/24 08:40, Maged Mokhtar wrote:
On 04/03/2024 15:37, Frank Schilder wrote:
Fast write enabled would mean that the primary OSD sends #size copies to the
entire active set (including itself) in parallel and sends an ACK to the
client as soon as min_size ACKs have been received from the pe
Does the balancer have any pools enabled? ("ceph balancer pool ls")
Actually, I am wondering whether the balancer does anything when no pools
are added.
On Mon, Mar 4, 2024, 11:30 Ml Ml wrote:
> Hello,
>
> I wonder why my autobalancer is not working here:
>
> root@ceph01:~# ceph -s
> cluster:
> id:
The balancer will operate on all pools unless otherwise specified.
Josh
On Mon, Mar 4, 2024 at 1:12 PM Cedric wrote:
>
> Does the balancer have any pools enabled? ("ceph balancer pool ls")
>
> Actually, I am wondering whether the balancer does anything when no pools
> are added.
>
>
>
> On Mon, Mar 4, 2024, 1
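For reference, a quick way to confirm that on a live cluster (read-only commands): an empty pool list means the balancer is not restricted and considers all pools, and eval shows the current distribution score:

  ceph balancer pool ls
  ceph balancer eval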
I likely missed an announcement, and if so, please forgive me.
I'm seeing some failures when running apt on a cluster of Ubuntu machines;
it looks like a directory has changed on https://download.ceph.com/
Was:
debian-reef/
Now appears to be:
debian-reef_OLD/
Was reef pulled?
Hi community,
I have a user that owns some buckets. I want to create a subuser that has
permission to access only one bucket. What can I do to achieve this?
Thanks
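One commonly suggested direction, sketched with placeholder names (as far as I know, a plain radosgw-admin subuser gets its access level on everything the parent user owns, so the per-bucket restriction usually comes from a bucket policy; check the RGW bucket-policy docs for the exact principal format your release supports): attach an S3 policy to that single bucket naming the extra identity as principal, e.g.:

  cat > policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/limited-user"]},
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::onlybucket", "arn:aws:s3:::onlybucket/*"]
    }]
  }
  EOF
  s3cmd setpolicy policy.json s3://onlybucket

Here "limited-user" and "onlybucket" are placeholders; adjust the action list to the access you actually want to grant.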
Hi,
I have upgraded my test and production cephadm-managed clusters from
16.2.14 to 16.2.15. The upgrade was smooth and completed without issues.
There were a few things which I noticed after each upgrade:
1. RocksDB options, which I provided to each mon via their configuration
files, got overwritten during mon redeployment and I had to re-add
mon_rocksdb_options back.
Dear Ceph users,
in order to reduce the deep scrub load on my cluster I set the deep
scrub interval to 2 weeks, and tuned other parameters as follows:
# ceph config get osd osd_deep_scrub_interval
1209600.00
# ceph config get osd osd_scrub_sleep
0.10
# ceph config get osd osd_scrub_loa
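For anyone wanting to apply the same settings, the corresponding set commands would be (values matching the output above; 1209600 s = 14 days):

  ceph config set osd osd_deep_scrub_interval 1209600
  ceph config set osd osd_scrub_sleep 0.1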
Hi,
1. RocksDB options, which I provided to each mon via their configuration
files, got overwritten during mon redeployment and I had to re-add
mon_rocksdb_options back.
IIRC, you didn't use the extra_entrypoint_args for that option but
added it directly to the container unit.run file. So it
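For reference, a hedged sketch of what that could look like as a cephadm service spec instead, assuming your cephadm release supports extra_entrypoint_args (the placement and the RocksDB option string are placeholders):

  cat > mon-spec.yaml <<'EOF'
  service_type: mon
  service_name: mon
  placement:
    label: mon
  extra_entrypoint_args:
    - "--mon-rocksdb-options=<your_rocksdb_options>"
  EOF
  ceph orch apply -i mon-spec.yaml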