hi Stephan,
On Tue, Jan 28, 2025 at 6:23 AM Stephan Hohn wrote:
>
>
> Hi Yuri and Devs,
>
> with the latest release of aws (cli, boto3, go sdk) users have issues with s3
> and crc.
>
> It looks like the fix is already merged but not backported yet in reef (nor
> quincy)
>
> https://tracker.ceph
Hi Casey,
thanks for the update.
Cheers
Stephan
On Tue, Jan 28, 2025 at 3:18 PM Casey Bodley wrote:
> hi Stephan,
>
> On Tue, Jan 28, 2025 at 6:23 AM Stephan Hohn wrote:
> >
> >
> > Hi Yuri and Devs,
> >
> > with the latest release of aws (cli, boto3, go sdk) users have issues
> with
Hi,
a colleague brought this to my attention:
| AWS recently updated their SDKs to enable CRC32 checksums on multiple
| object operations by default.
| R2 does not currently support CRC32 checksums, and the default
| configurations will return header related errors such as Header
| 'x-amz-checks
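Assuming the request_checksum_calculation / response_checksum_validation
settings described in the SDK changelogs are what changed, the new behaviour
can be switched back off on the client side with something like this (untested):

    # ~/.aws/config
    [default]
    request_checksum_calculation = when_required
    response_checksum_validation = when_required

    # or as environment variables for the CLI / boto3 / Go SDK
    export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
    export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

That only stops clients from sending the new checksum headers by default; it
doesn't add CRC32 support on the server side.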
Hi Felix,
A dumb answer first: if you know the image names, have you tried "rbd
rm $pool/$imagename"? Or is there any reason, like concerns about
iSCSI control data integrity, that prevents you from trying that?
Also, have you checked the rbd trash?
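Something along these lines should be enough to look around before removing
anything (pool and image names below are placeholders):

    # what the pool still knows about, and what is sitting in the trash
    rbd ls $pool
    rbd trash ls $pool

    # remove a leftover image, or a trash entry (by the id shown in trash ls)
    rbd rm $pool/$imagename
    rbd trash rm $pool/$image_id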
On Tue, Jan 28, 2025 at 5:43 PM Stolte, Felix
Yeah... that's right, it's the way certificates are managed, and there's no
documentation on how to set the new ones, mainly because it's not easy to do
that manually. I'm working on some detailed instructions (hosted in the repo
below) to help with that. I tested the script on my test cluster and it worked
Dear All,
I am getting a 500 internal server error after deleting and creating new
access and secret keys.
I am able to access my buckets and their data using the new keys from the
s3cmd utility, but it seems the Ceph MGR keeps a cache and throws a 500
internal server error when I try to get metadata from Ceph
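If the 500s come from the dashboard's Object Gateway pages, the mgr most
likely still has the old RGW keys configured. Assuming a reasonably recent
release, resetting the dashboard's RGW credentials and failing the active mgr
over is usually enough:

    # re-generate / re-register the RGW credentials used by the dashboard
    ceph dashboard set-rgw-credentials

    # restart the active mgr so any cached state is dropped
    ceph mgr fail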
Hi guys,
we have an rbd pool we used for images exported via ceph-iscsi on a 17.2.7
cluster. The pool uses 10 times the disk space I would expect it to, and after
investigating we noticed a lot of rbd_data objects whose images are no
longer present. I assume that the original images were deleted
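One way to see which rbd_data objects are actually orphaned is to compare the
block_name_prefix of the images that still exist with the prefixes present in
the pool, roughly like this ($pool is a placeholder, and listing a large pool
takes a while):

    # prefixes of every image the pool still knows about
    for img in $(rbd ls $pool); do
        rbd info $pool/$img | awk '/block_name_prefix/ {print $2}'
    done | sort -u > known_prefixes

    # prefixes that actually occur in object names
    rados -p $pool ls | awk -F. '/^rbd_data\./ {print "rbd_data." $2}' | sort -u > seen_prefixes

    # anything only in the second file belongs to an image that no longer exists
    comm -13 known_prefixes seen_prefixes

Worth checking "rbd trash ls $pool" as well before treating a prefix as
orphaned, since trashed images keep their data objects.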
As far as https://tracker.ceph.com/issues/63153 is concerned,
https://github.com/ceph/ceph/pull/58435 (its reef backport) applies
cleanly on top of 18.2.4.
Backport to Quincy is less obvious -- Also, unsure if a new Quincy
release is expected (17.2.8 should be the latest of the Quincy series).
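For anyone who wants to carry the fix locally until a point release ships,
fetching the PR as a patch series and applying it on top of the tag is
roughly (the branch name here is arbitrary):

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout -b reef-18.2.4-63153 v18.2.4
    # GitHub exposes every PR as an mbox-formatted patch series
    curl -L https://github.com/ceph/ceph/pull/58435.patch | git am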
Hello,
I just wanted to share here that the default Debian 12 (bookworm) kernel v6.1
is affected by a bug that leads to cephfs client "failing to respond to
capability release". This bug has been investigated here [1] and there [2].
To work around this, make sure you're using a Ceph version a
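To check whether a cluster is affected, the MDS health warning and the client
kernel version are quick to confirm; a newer kernel from bookworm-backports is
one alternative workaround (package name assumed to be the standard Debian
metapackage):

    # on the cluster: is the MDS reporting late capability releases?
    ceph health detail | grep -i 'capability release'

    # on the client: confirm the stock bookworm 6.1 kernel is running
    uname -r

    # one alternative workaround: a newer kernel from bookworm-backports
    apt install -t bookworm-backports linux-image-amd64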
On 28.01.25 11:52 AM, Stephan Hohn wrote:
I think we have to wait for the reef and quincy backport
https://tracker.ceph.com/issues/63153
And there also is a new issue by Casey in the tracker to add support for
CRC-64NVME: https://tracker.ceph.com/issues/69105
Regards
Christian
Hi Yuri and Devs,
with the latest release of aws (cli, boto3, go sdk) users have issues with
s3 and crc.
It looks like the fix is already merged but not backported yet in reef (nor
quincy)
https://tracker.ceph.com/issues/63153
https://github.com/ceph/ceph/pull/54856
Will this be part of the nex
I think we have to wait for the reef and quincy backport
https://tracker.ceph.com/issues/63153
On Tue, Jan 28, 2025 at 11:36 AM Dave Holland wrote:
> Hi,
>
> a colleague brought this to my attention:
>
> | AWS recently updated their SDKs to enable CRC32 checksums on multiple
> | object oper
On 28/1/25 19:33, Enrico Bocchi wrote:
> Also, unsure if a new Quincy release is expected (17.2.8 should be the
> latest of the Quincy series).
https://pad.ceph.com/p/csc-weekly-minutes
17.2.8 - EOL
That seems pretty clear to me. Also it's in line with expectations (one
more point release after $
Hi Marc,
I don't have a link to a dashboard, but since the "node_filesystem"
metrics are already available, you could simply add a panel to an
existing Grafana dashboard with this expression:
100 -
((node_filesystem_avail_bytes{instance="$node",job="$job",device!~'rootfs'} *
100) /
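Assuming the usual used-space-percentage query, the denominator is the
matching node_filesystem_size_bytes with the same label filters, so the full
expression would be something like:

    100 -
    (
      (node_filesystem_avail_bytes{instance="$node",job="$job",device!~'rootfs'} * 100)
        /
      node_filesystem_size_bytes{instance="$node",job="$job",device!~'rootfs'}
    )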
Hi Robert,
although I would assume that deleting the pool is safe, I'd rather try
to get to the bottom of this as well.
Do you still have access to the directories to check for snapshots
(.snap directories underneath the root filesystem mount)?
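The .snap directories don't show up in normal directory listings, so they have
to be checked explicitly; something like this on a client mount (the mount
path is a placeholder):

    # snapshots taken at the root of the filesystem
    ls /mnt/cephfs/.snap

    # snapshots taken on a specific subdirectory
    ls /mnt/cephfs/path/to/dir/.snap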
Quoting Robert Sander:
Hi,
there is an o