This is really odd.
Please run the following commands and send over their outputs:
# ceph status
# ceph fs status
# ceph report
# ls -ld //volumes/subvolgrp/test
# ls -l //volumes/subvolgrp/test/.snap
On Thu, Oct 5, 2023 at 11:17 AM Kushagr Gupta wrote:
>
> Hi Milind,Team
>
> Thank you for your respo
Hi,
were you able to recover your cluster or is this still an issue?
What exactly do you mean by this?
It generally fails to recover in the middle and starts from scratch.
Are OSDs "flapping" or are there other issues as well? Please provide
more details what exactly happens.
There are a co
Hi Dave,
The request was built correctly.
Actually... RGW responded to that request (the embedded S3 Select engine).
The engine error message points to a syntax error.
It's quite an old version... we've made a lot of changes and implemented more
related features.
If I'm not mistaken, the query is missing a se
Hello Yuri,
On the RGW side I would very much like to get this [1] patch in that release
that is already merged in reef [2] and pacific [3].
Perhaps Casey can approve and merge that so you can bring it into your
testing.
Thanks!
[1] https://github.com/ceph/ceph/pull/53414
[2] https://github.com
Thank you for your response, Igor.
Currently debug_rocksdb is set to 4/5:
# ceph config get osd debug_rocksdb
4/5
This setting seems to be the default. Is my understanding correct that you're
suggesting setting it to 3/5 or even 0/5? Would setting it to 0/5 have any
negative effects on the cluster?
Hi Eugen,
Warnings continue to spam the cluster log. Actually, for the whole picture of
the issue please see:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/VDL56J75FG5LO4ZECIWWGGBW4ULPZUIP/
I was thinking about the following options:
1. restart problematic nodes 24, 32, 34, 36: need to
Hi Yuri (resend it because I forgot to add ccs to MLs.)
On Thu, Oct 5, 2023 at 5:57 Yuri Weinstein wrote:
Hello
We are getting very close to the next Quincy point release 17.2.7
Here is the list of must-have PRs https://pad.ceph.com/p/quincy_17.2.7_prs
We will start the release testing/review/approval proces
On Tue, Oct 03, 2023 at 06:10:17PM +0200, Matthias Ferdinand wrote:
> On Sun, Oct 01, 2023 at 12:00:58PM +0200, Peter Goron wrote:
> > Hi Matthias,
> >
> > One possible way to achieve your need is to set a quota on number of
> > buckets at user level (see
> > https://docs.ceph.com/en/reef/radosgw
Hello
We are getting very close to the next Quincy point release 17.2.7
Here is the list of must-have PRs https://pad.ceph.com/p/quincy_17.2.7_prs
We will start the release testing/review/approval process as soon as
all PRs from this list are merged.
If you see something missing please speak up
Hi Everyone,
I've been trying to get S3 Select working on our system and whenever I
send a query I get the following in the Payload (Result 200 from RGW):
# aws --endpoint-url http://cephtest1 s3api select-object-content
--bucket test1 --expression-type SQL --input-serialization '{"CSV":
{"Fieldt
Hi Zakhar,
To reduce RocksDB logging verbosity you might want to set debug_rocksdb
to 3 (or 0).
I presume it produces a significant part of the logging traffic.
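For example, something like:
# ceph config set osd debug_rocksdb 3/5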
Thanks,
Igor
On 04/10/2023 20:51, Zakhar Kirpichenko wrote:
Any input from anyone, please?
On Tue, 19 Sept 2023 at 09:01, Zakh
Any input from anyone, please?
On Tue, 19 Sept 2023 at 09:01, Zakhar Kirpichenko wrote:
> Hi,
>
> Our Ceph 16.2.x cluster managed by cephadm is logging a lot of very
> detailed messages; the Ceph logs alone on hosts with monitors and several
> OSDs have already eaten through 50% of the endurance of t
Hi folks,
I am aware that dynamic resharding isn't supported before Reef with multisite.
However, does manual resharding work? It doesn't seem to work either. First
of all, "bucket reshard" has to be run in the master zone. But if the
objects of that bucket aren't in the master zone, resha
> Tried a negative number ("--max-buckets=-1"), but that had no effect at
> all (not even an error message).
must have mistyped the command; trying again with "--max-buckets=-1", it
shows the wanted effect: user cannot create any bucket.
So, an effective and elegant method indeed :-)
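For the archives, the command was of roughly this form (the uid here is a
placeholder):
$ radosgw-admin user modify --uid=<user-id> --max-buckets=-1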
Matthias
PS
On Tue, Oct 03, 2023 at 06:10:17PM +0200, Matthias Ferdinand wrote:
> On Sun, Oct 01, 2023 at 12:00:58PM +0200, Peter Goron wrote:
> > Hi Matthias,
> >
> > One possible way to achieve your need is to set a quota on number of
> > buckets at user level (see
> > https://docs.ceph.com/en/reef/radosgw
Hi Ceph users and developers,
We are gearing up for the next User + Developer Monthly Meeting, happening
October 19th at 10am EST.
If you are interested in being a guest speaker, you are invited to submit a
focus topic to this Google form:
https://docs.google.com/forms/d/e/1FAIpQLSdboBhxVoBZoaHm8
On Wed, Oct 4, 2023 at 7:19 PM Kushagr Gupta wrote:
>
> Hi Milind,
>
> Thank you for your swift response.
>
> >>How many hours did you wait after the "start time" and decide to restart
> >>mgr ?
> We waited for ~3 days before restarting the mgr-service.
The only thing I can think of is a stale m
Also found what the 2nd problem was:
When there are pools using the default replicated_ruleset while there are
multiple rulesets with different device classes, the autoscaler does not
produce any output.
Should I open a bug for that?
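For anyone trying to reproduce this: which rule each pool uses can be
checked with something like the following (the pool name is a placeholder):
# ceph osd crush rule ls
# ceph osd pool get <pool-name> crush_rule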
On Wed, Oct 4, 2023 at 14:36 Boris Behrens wrote:
> F
Hello Eugen,
yes, we followed the documentation and everything worked fine. The cache
is gone.
Removing the pool worked well. Everything is clean.
The PGs are empty active+clean.
Possible solutions:
1.
ceph pg {pg-id} mark_unfound_lost delete
I do not think this is the right way since it
Hi Milind,
Thank you for your swift response.
>>How many hours did you wait after the "start time" and decide to restart
mgr ?
We waited for ~3 days before restarting the mgr-service.
There was one more instance where we waited for 2 hours and then re-started
and in the third hour the schedule s
Hi everybody,
I tried to reshard a bucket belonging to the tenant "test-tenant", but got a
"No such file or directory" error.
$ radosgw-admin reshard add --bucket test-tenant/test-bucket --num-shards 40
$ radosgw-admin reshard process
2023-10-04T12:12:52.470+0200 7f654237afc0 0 process_single
Hi Team,Milind
*Ceph-version:* Quincy, Reef
*OS:* Almalinux 8
*Issue:* snap_schedule works after 1 hour of schedule
*Description:*
We are currently working in a 3-node ceph cluster.
We are currently exploring the scheduled snapshot capability of the
ceph-mgr module.
To enable/configure schedule
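Roughly, the setup we are describing is of the following form; the path and
interval shown here are placeholders, not our exact values:
# ceph mgr module enable snap_schedule
# ceph fs snap-schedule add /<subvolume-path> 1h
# ceph fs snap-schedule status /<subvolume-path>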
Found the bug for the TOO_MANY_PGS: https://tracker.ceph.com/issues/62986
But I am still not sure, why I don't have any output on that one cluster.
On Wed, Oct 4, 2023 at 14:08 Boris Behrens wrote:
> Hi,
> I've just upgraded to our object storages to the latest pacific version
> (16.2.14)
On Wed, Oct 4, 2023 at 3:40 PM Kushagr Gupta wrote:
>
> Hi Team,Milind
>
> Ceph-version: Quincy, Reef
> OS: Almalinux 8
>
> Issue: snap_schedule works after 1 hour of schedule
>
> Description:
>
> We are currently working in a 3-node ceph cluster.
> We are currently exploring the scheduled snapsho
Hi,
I've just upgraded our object storages to the latest Pacific version
(16.2.14) and the autoscaler is acting weird.
On one cluster it just shows nothing:
~# ceph osd pool autoscale-status
~#
On the other clusters it shows this when it is set to warn:
~# ceph health detail
...
[WRN] POOL_TOO_M
Hi,
I just did this successfully on a test Reef cluster (no multi-site)
$ radosgw-admin object rewrite --bucket=bucket1 --object="myfile.txt"
where "--object" is the object name. The epoch and the tag have been
updated, so I guess it worked.
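The object metadata can be inspected with e.g.:
$ radosgw-admin object stat --bucket=bucket1 --object="myfile.txt"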
But I also got a segfault on an Octopus test cluster
Hi,
I suspect the auth_allow_insecure_global_id_reclaim config option. If
you really need this to work you can set
$ ceph config set mon auth_allow_insecure_global_id_reclaim true
and the client should be able to connect. You will get a warning though:
mon is allowing insecure global_id re
Hi,
is this still an issue? If so, I would try to either evict the client
via admin socket:
ceph tell mds.5 client evict [<filters>...] --- Evict client
session(s) based on a filter
alternatively locally on the MDS:
cephadm enter --name mds.<name>
ceph daemon mds.<name> client evict
or restart the MDS which should
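With cephadm, restarting the daemon would be something like the following
(the daemon name is a placeholder):
# ceph orch daemon restart mds.<name>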
Hi,
we have often seen strange behavior and also interesting PG targets from
the pg_autoscaler over the last few years.
That's why we disable it globally.
The commands:
ceph osd reweight-by-utilization
ceph osd test-reweight-by-utilization
are from the time before the upmap balancer was introduced and di
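For reference, the upmap balancer mentioned above is enabled with roughly:
# ceph balancer mode upmap
# ceph balancer on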
Hi,
you could change the target_max_misplaced_ratio to 1, the balancer has
a default 5% ratio of misplaced objects, see [1] for more information:
ceph config get mgr target_max_misplaced_ratio
0.05
[1] https://docs.ceph.com/en/latest/rados/operations/balancer/#throttling
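For example:
# ceph config set mgr target_max_misplaced_ratio 1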
Quoting b...@
Hi all,
I would like to follow up on this, it turns out that overwriting the
file doesn't actually hang, but is just super slow, like several
minutes. The process is busy in a syscall reading large amounts of what
I'm assuming is filesystem metadata until the operation finally completes.
The
Hi,
just for clarity, you're actually talking about the cache tier as
described in the docs [1]? And you followed the steps until 'ceph osd
tier remove cold-storage hot-storage' successfully? And the pool has
been really deleted successfully ('ceph osd pool ls detail')?
[1]
https://docs
Hi,
Please take a look at the following thread:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PWHG6QJ6N2TJEYD2U4AXJAJ23CRPJG4E/#7ZMBM23GXYFIGY52ZWJDY5NUSYSDSYL6
In short, the value for "osd_mclock_cost_per_byte_usec_hdd" isn't correct. With the release of 17.2.7 this option will be
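The value currently in effect can be checked with e.g.:
# ceph config get osd osd_mclock_cost_per_byte_usec_hdd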
Did you apply the changes to the containers.conf file on all hosts?
The MGR daemon is issuing the cephadm commands on the remote hosts, so
it would need that as well. That setup has been working quite well for me
for years now. What distro is your host running on? We mostly use openSUSE
or SLES, but I