Hi,
I have a request about docs.ceph.com. Could you provide per-minor-version views
on docs.ceph.com? Currently, we can select the Ceph version by using
`https://docs.ceph.com/en/<version>/`. In this case, we can use the major
version's code names (e.g., "quincy") or "latest". However, we can't
use minor ve
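For reference, the current scheme resolves release code names and "latest" but not minor versions; a quick way to see this from the command line (the URLs below are just the existing public ones, used for illustration):

  curl -sI https://docs.ceph.com/en/quincy/ | head -n1   # code name resolves
  curl -sI https://docs.ceph.com/en/latest/ | head -n1   # "latest" resolves
  # a minor-version path such as /en/16.2.13/ is what the request above asks for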
On 7/11/23 09:44, Luis Domingues wrote:
"bluestore-pricache": {
"target_bytes": 6713193267,
"mapped_bytes": 6718742528,
"unmapped_bytes": 467025920,
"heap_bytes": 7185768448,
"cache_bytes": 4161537138
},
Hi Luis,
Looks like the mapped by
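As an aside, the numbers in the pricache section quoted above are internally consistent; a quick check with shell arithmetic (figures copied straight from the dump):

  # unmapped_bytes == heap_bytes - mapped_bytes
  echo $((7185768448 - 6718742528))   # 467025920
  # mapped_bytes exceeds target_bytes by only ~5 MiB
  echo $((6718742528 - 6713193267))   # 5549261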
Here you have. Perf dump:
{
"AsyncMessenger::Worker-0": {
"msgr_recv_messages": 12239872,
"msgr_send_messages": 12284221,
"msgr_recv_bytes": 43759275160,
"msgr_send_bytes": 61268769426,
"msgr_created_connections": 754,
"msgr_active_connections":
It was installed with Octopus and hasn't been upgraded yet:
"require_osd_release": "octopus",
Quoting Josh Baergen:
Out of curiosity, what is your require_osd_release set to? (ceph osd
dump | grep require_osd_release)
Josh
On Tue, Jul 11, 2023 at 5:11 AM Eugen Block wrote:
I'm
Out of curiosity, what is your require_osd_release set to? (ceph osd
dump | grep require_osd_release)
Josh
On Tue, Jul 11, 2023 at 5:11 AM Eugen Block wrote:
>
> I'm not so sure anymore if that could really help here. The dump-keys
> output from the mon contains 42 million osd_snap prefix entrie
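As a side note on require_osd_release: the flag only moves forward when an admin raises it after an upgrade, and since the cluster above is still on Octopus this is purely for reference (a hedged sketch, not advice for that cluster):

  ceph osd dump | grep require_osd_release
  # only after confirming every OSD actually runs the newer release:
  ceph osd require-osd-release pacific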
Hello everyone,
We have a Ceph cluster which was recently upgraded from Octopus (15.2.12) to
Pacific (16.2.13). There has been a problem with multipart upload:
when doing UPLOAD_PART_COPY from a valid and existing previously uploaded
part, it gets a 403, ONLY WHEN IT'S CALLED BY A SERVICE-USER. T
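For context, the failing call corresponds to the S3 UploadPartCopy operation; a sketch of what such a request looks like with the AWS CLI (endpoint, bucket, key, and upload-id values are placeholders, not taken from the report, and the call would be made with the service-user's credentials):

  aws s3api upload-part-copy \
    --endpoint-url https://rgw.example.com \
    --bucket dest-bucket --key dest-object \
    --copy-source source-bucket/source-object \
    --part-number 1 --upload-id EXAMPLE-UPLOAD-ID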
Hi Luis,
Can you do a "ceph tell osd.<id> perf dump" and "ceph daemon osd.<id>
dump_mempools"? Those should help us understand how much memory is
being used by different parts of the OSD/bluestore and how much memory
the priority cache thinks it has to work with.
Mark
On 7/11/23 4:57 AM, Luis Do
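For anyone following along, the two commands Mark asks for can be captured like this (osd.0 is a placeholder id; "ceph daemon" must be run on the host where that OSD lives):

  ceph tell osd.0 perf dump > osd.0-perf-dump.json
  ceph daemon osd.0 dump_mempools > osd.0-dump_mempools.json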
I'm not so sure anymore if that could really help here. The dump-keys
output from the mon contains 42 million osd_snap prefix entries, 39
million of them are "purged_snap" keys. I compared with other
clusters as well; those aren't tombstones but the expected "history" of
purged snapshots. So
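A hedged sketch of how such a per-prefix count can be produced from an offline copy of the mon store (the path is a placeholder, and it assumes the prefix is the first whitespace-separated field of each dump-keys line):

  # run against a copy of the mon data dir, never the live store
  ceph-monstore-tool /path/to/mon-store-copy dump-keys | \
    awk '{print $1}' | sort | uniq -c | sort -rn | head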
Hi everyone,
We recently migrated a cluster from ceph-ansible to cephadm. Everything went as
expected.
But now we have some alerts about high memory usage. The cluster is running Ceph
16.2.13.
Of course, after adoption OSDs ended up in the zone:
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
osd 88 7m ag
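One thing worth checking after an adoption, given the memory alerts (a suggestion only, not something from the thread): what osd_memory_target the OSDs actually run with and whether cephadm's autotuning is enabled (osd.0 is a placeholder id):

  ceph config get osd osd_memory_target
  ceph config get osd osd_memory_target_autotune
  ceph config show osd.0 osd_memory_target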
I'm not sure if it's a bug with Cephadm, but it looks like it. I've got Loki
deployed on one machine and Promtail deployed to all machines. After creating a
login, I can only view the logs of the host on which Loki is running.
When inspecting the Promtail configuration, the configured URL for L
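As a quick sanity check (the host name is a placeholder; 3100 is Loki's default HTTP port), it is worth confirming that every Promtail host can reach the Loki endpoint it is supposed to push to:

  # run from each host that runs Promtail
  curl -s http://loki-host:3100/ready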
Okay, this turned out to be down to cephadm rejecting the request because
the new MON was not in the list of public networks.
I had seen that error in the logs, but it looked as though it was a
consequence of the store.db error, rather than a cause.
After adding the new network, and repeating the
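For anyone hitting the same thing, a hedged sketch of adding a further subnet to the mon public networks before retrying the placement (the CIDRs are placeholders; list every subnet the MONs should live on):

  ceph config set mon public_network "10.0.0.0/24,192.168.10.0/24"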
Forgot to say we're on Pacific 16.2.13.
On Tue, 11 Jul 2023 at 08:55, Adam Huffman
wrote:
> Hello
>
> I'm trying to add MONs in advance of a planned downtime.
>
> This has actually ended up removing an existing MON, which isn't helpful.
>
> The error I'm seeing is:
>
> Invalid argument: /var/lib
Hello
I'm trying to add MONs in advance of a planned downtime.
This has actually ended up removing an existing MON, which isn't helpful.
The error I'm seeing is:
Invalid argument: /var/lib/ceph/mon/ceph-/store.db: does not
exist (create_if_missing is false)
error opening mon data directory at '
Hi again, I got the log excerpt with the rgw error message:
s3:put_obj block_while_resharding ERROR: bucket is still resharding,
please retry
Below is the message in context; I don't see a return code though,
only 206 for the GET requests. Unfortunately, we only have a recorded
putty sess
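If it helps, the resharding state of the bucket in question can be inspected with radosgw-admin (the bucket name is a placeholder):

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket my-bucket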
Never ever use osd pool default min size = 1;
this will break your neck and really does not make sense.
:-)
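For completeness, checking and raising min_size on an existing pool, plus the default for future pools, looks like this (the pool name is a placeholder):

  ceph osd pool get mypool min_size
  ceph osd pool set mypool min_size 2
  # default for pools created from now on:
  ceph config set global osd_pool_default_min_size 2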
On Mon, Jul 10, 2023 at 7:33 PM Dan van der Ster
wrote:
>
> Hi Jan,
>
> On Sun, Jul 9, 2023 at 11:17 PM Jan Marek wrote:
>
> > Hello,
> >
> > I have a cluster, which has this configurati