Just wanted to follow up on this to say that it is now working.
After reviewing the configuration of the new host many times, I did a
hard restart of the active mgr container.
The command to add the new host proceeded without error.
Thanks everyone.
Gary
On 2024-02-06 16:01, Tim Holloway wrote:
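For reference, a hard restart of the active mgr under cephadm looks
roughly like this (the daemon name below is only an example):

  ceph mgr fail                               # fail over to a standby mgr (some releases require the mgr name)
  ceph orch daemon restart mgr.host1.abcdef   # restart the mgr container itself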
Hello,
Which version of Ceph are you using? Are all of your OSDs currently
up+in? If you're HEALTH_OK and all OSDs are up, snaptrim should work
through the removed_snaps_queue and clear it over time, but I have
seen cases where this seems to get stuck and restarting OSDs can help.
Josh
On Wed, F
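The checks Josh describes can be sketched like this (the OSD id is an
example; recent releases print removed_snaps_queue in the pool detail
output, but this may vary by version):

  ceph -s                           # confirm HEALTH_OK and all OSDs up+in
  ceph osd pool ls detail           # look for a lingering removed_snaps_queue
  ceph orch daemon restart osd.12   # restart an OSD if snaptrim looks stuck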
I have updated the documentation in two places to address this matter:
https://github.com/ceph/ceph/pull/55518
https://github.com/ceph/ceph/pull/55520
Thank you to everyone in this thread, and special thanks to Eugen Block for
bringing this to my attention.
Zac Dover
Upstream Docs
Ceph Foundation
Hi Maged,
1) Good question. Our cmake setup is complex enough that I suspect it's
hard to definitively answer that question without auditing every
sub-project for each build type. My instinct was to explicitly set the
CMAKE_BUILD_TYPE to RelWithDebInfo in the rules file (This is what
Canon
I've cc'ed Matt, who's working on the S3 object integrity feature
https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html,
where rgw compares the generated checksum with the client's on ingest,
then stores it with the object so clients can read it back for later
integrity checks.
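From the client side that roughly looks like this with the AWS CLI
(bucket and key names are examples; whether RGW honors it depends on
the feature above landing):

  aws s3api put-object --bucket mybucket --key obj --body ./obj \
      --checksum-algorithm SHA256
  aws s3api head-object --bucket mybucket --key obj --checksum-mode ENABLED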
Hello Ankit,
try bootstrapping the cluster as root. Become the root user first! Do
not use sudo.
Does it fail again?
Best,
Malte
On 09.02.24 10:02, Eugen Block wrote:
You already are in the right place wrt the docs. You can check out the
help page of cephadm to see which other options you have.
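That is, something like this (mon IP taken from later in this thread):

  su -                                # a real root login shell, not sudo
  cephadm bootstrap --mon-ip 192.168.2.125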
MPU etags are an MD5-of-MD5s, FWIW. If the user knows how the parts were
uploaded, then it can be used to verify contents, both just after upload and
then at download time (both need to be validated if you want end-to-end
validation - but then you're trusting the system to not change the etag
underneath).
> On Feb 9, 2024, at 08:15, Michal Strnad wrote:
>
> Thank you for your response.
>
> We have already done some Lua scripting in the past, and it wasn't entirely
> enjoyable :-), but we may have to do it again. Scrubbing is still enabled,
> and turning it off definitely won't be an option.
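For reference, the MD5-of-MD5s can be reproduced locally when the part
size is known (8 MiB here is an assumption):

  # hash each part, concatenate the binary digests, hash those again;
  # the multipart ETag is this final MD5 plus "-<part count>"
  split -b 8m bigfile part.
  for p in part.*; do md5sum "$p" | cut -d' ' -f1; done | xxd -r -p | md5sum
  ls part.* | wc -l                 # the part count for the "-N" suffix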
Hi Mark,
Thanks a lot for highlighting this issue...I have 2 questions:
1) In the patch comments:
/"but we fail to populate this setting down when building external
projects. this is important when it comes to the projects which is
critical to the performance. RocksDB is one of them."/
Do w
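For anyone following along: the build type can be forced at configure
time and then verified in the cache (a sketch; Ceph's do_cmake.sh is
believed to forward extra flags to cmake):

  ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
  grep CMAKE_BUILD_TYPE build/CMakeCache.txt   # confirm what was configured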
Thank you for your response.
We have already done some Lua scripting in the past, and it wasn't
entirely enjoyable :-), but we may have to do it again. Scrubbing is
still enabled, and turning it off definitely won't be an option.
However, due to the project requirements, it would be great if
You could use Lua scripting perhaps to do this at ingest, but I'm very curious
about scrubs -- you have them turned off completely?
> On Feb 9, 2024, at 04:18, Michal Strnad wrote:
>
> Hi all!
>
> In the context of a repository-type project, we need to address a situation
> where we cannot use periodic checks in Ceph (scrubbing) due to the project's nature.
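"Turned off completely" would mean the cluster-wide flags are set,
which is quick to check (a sketch):

  ceph osd dump | grep flags        # look for noscrub / nodeep-scrub
  ceph osd set noscrub              # how scrubbing would be disabled
  ceph osd set nodeep-scrub         # (and deep scrubbing)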
Hi,
do you really need multi-site since you mentioned that you have one
cluster? Maybe start with single-site RGW [1] since there's no
replication target anyway.
If you deploy multiple rgw daemons you might need an ingress service
[2] as well and point your zone endpoints to your virtual IP.
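A sketch of that with cephadm (service names, counts and the VIP are
examples):

  # single-site RGW, two daemons
  ceph orch apply rgw myrgw --placement="2"

  # ingress.yaml (haproxy/keepalived in front of the rgw daemons):
  #   service_type: ingress
  #   service_id: rgw.myrgw
  #   placement:
  #     count: 2
  #   spec:
  #     backend_service: rgw.myrgw
  #     virtual_ip: 192.168.2.200/24
  #     frontend_port: 8080
  #     monitor_port: 1967
  ceph orch apply -i ingress.yaml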
Hi all!
In the context of a repository-type project, we need to address a
situation where we cannot use periodic checks in Ceph (scrubbing) due to
the project's nature. Instead, we need the ability to write a checksum
into the metadata of the uploaded file via API. In this context, we are
not
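Short of the checksum feature discussed elsewhere in this thread, plain
user metadata can already carry this (a sketch; the metadata key name is
arbitrary):

  aws s3api put-object --bucket repo --key data.bin --body ./data.bin \
      --metadata sha256=$(sha256sum data.bin | cut -d' ' -f1)
  aws s3api head-object --bucket repo --key data.bin   # returns x-amz-meta-sha256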
You already are in the right place wrt the docs. You can check out the
help page of cephadm to see which other options you have. The easiest
way for you would be to bootstrap without monitoring stack:
cephadm bootstrap --mon-ip 192.168.2.125 --skip-monitoring-stack
This should bring up your cluster.
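The remaining options are listed by the built-in help, and the
monitoring stack can still be added later:

  cephadm bootstrap --help          # full list of bootstrap options
  ceph orch apply node-exporter     # deploy monitoring afterwards,
  ceph orch apply prometheus        # one service at a time
  ceph orch apply grafana
  ceph orch apply alertmanager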
Hi Eugen,
Thank you very much for this insight.
As I mentioned earlier, I am trying to build a Ceph cluster for the
first time. Could you please help me build it, or point me to
documentation where all the details are available so that I can follow it?
Regards,
Ankit Sharma
--
Hi,
I don't really know how the ceph-exporter gets into your Quincy
bootstrap, when I deploy it with Quincy (also cephadm is from Quincy
repo) it doesn't try to deploy ceph-exporter. When I use cephadm from
Reef repo it does deploy a Reef cluster including ceph-exporter
successfully.
As a
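One way to rule out such a mismatch is to pin the container image when
bootstrapping (the tag is an example):

  cephadm --image quay.io/ceph/ceph:v17.2.7 bootstrap --mon-ip 192.168.2.125
  cephadm version                   # confirm which cephadm release is in use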