On 4/11/23 03:24, Thomas Widhalm wrote:
Hi,
If you remember, I hit bug https://tracker.ceph.com/issues/58489 so I
was very relieved when 17.2.6 was released and started to update
immediately.
Please note, this fix is not in v17.2.6 upstream yet.
Thanks
- Xiubo
But now I'm s
On 11.04.23 09:16, Xiubo Li wrote:
On 4/11/23 03:24, Thomas Widhalm wrote:
Hi,
If you remember, I hit bug https://tracker.ceph.com/issues/58489 so I
was very relieved when 17.2.6 was released and started to update
immediately.
Please note, this fix is not in the v17.2.6 yet in upstream
Hi.
Thank you for the explanation. I get it now.
Michal
On 4/10/23 20:44, Alexander E. Patrakov wrote:
On Sat, Apr 8, 2023 at 2:26 PM Michal Strnad wrote:
cluster:
id: a12aa2d2-fae7-df35-ea2f-3de23100e345
health: HEALTH_WARN
...
pgs: 1656117639/32580808518 o
Hi Jorge,
firstly, it would be really helpful if you would not truncate output of ceph
status or omit output of commands you refer to, like ceph df. We have seen way
too many examples where the clue was in the omitted part.
Without any information, my bets in order are (according to many cases
Hi,
Our Ceph cluster is in an error state with the message:
# ceph status
cluster:
id: 58140ed2-4ed4-11ed-b4db-5c6f69756a60
health: HEALTH_ERR
Module 'cephadm' has failed: invalid literal for int() with base 10: '352.broken'
This happened after trying to re-add an OSD
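For context, the failure is just Python's int() choking on a daemon id that is
no longer purely numeric ('352.broken'). Assuming the stray name is a leftover
from the failed re-add, one rough way to find where it survives on the host
(the fsid path below is only a placeholder) is:

  cephadm ls | grep -i broken
  ls /var/lib/ceph/<fsid>/ | grep -i broken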
Hi! Matthias Ferdinand
On that day, it was said...
> To check current state:
> sdparm --get=WCE /dev/sdf
> /dev/sdf: SEAGATE ST2000NM0045 DS03
> WCE 0 [cha: y, def: 0, sav: 0]
> "WCE 0" means: off
> "sav: 0" means: off next time the disk is powered on
Checkin
Hi! Anthony D'Atri
On that day, it was said...
> Dell's CLI guide describes setting individual drives in Non-RAID, which
> *smells* like passthrough, not the more-complex RAID0 workaround we had to do
> before passthrough.
> https://www.dell.com/support/manuals/en-nz/perc-h750-sas/perc_cli
We are happy to announce another release of the go-ceph API library.
This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.21.0
Changes include additions to the rbd, cephfs, and cephfs/admin packages.
More details are available
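For module users, picking up the new tag is the usual go get (this assumes Go
modules, and note that building still needs the Ceph development libraries for
cgo, as before):

  go get github.com/ceph/go-ceph@v0.21.0
  go mod tidy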
What ceph version is this? Could it be this bug [1]? Although the
error message is different, I'm not sure if it could be the same issue,
and I don't have anything to test IPv6 with.
[1] https://tracker.ceph.com/issues/47300
Quoting Lokendra Rathour:
Hi All,
Requesting any inputs around the
Ceph version Quincy.
But now I have been able to resolve the issue.
During the mount I do not pass any monitor details; they are auto-discovered
via DNS SRV records.
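For anyone who hits this later, the setup looks roughly like this (the record
names follow the default mon_dns_srv_name of "ceph-mon"; domain, user and
paths are made up for the example):

  ; DNS zone: one SRV record per monitor
  _ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon1.example.com.
  _ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon2.example.com.

  # mount without listing any monitor addresses
  mount -t ceph :/ /mnt/cephfs -o name=myuser,secretfile=/etc/ceph/myuser.secret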
On Tue, Apr 11, 2023 at 6:09 PM Eugen Block wrote:
> What ceph version is this? Could it be this bug [1]? Although the
> error message is different
there's a rgw_period_root_pool option for the period objects too. but
it shouldn't be necessary to override any of these
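for reference, if you ever did need to inspect or override it, the normal
config machinery applies (the pool name below is just an example, and the
section depends on how your rgw daemons are named):

  ceph config dump | grep rgw_period_root_pool
  ceph config set client.rgw rgw_period_root_pool sitea.rgw.meta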
On Sun, Apr 9, 2023 at 11:26 PM wrote:
>
> Up :)
It is a simple fix. You can have a look here:
https://github.com/ceph/ceph/pull/50975
I will backport it to Reef, so it will be in the next release.
On Tue, Apr 11, 2023 at 2:48 PM Thomas Bennett wrote:
> Thanks Yuval. From your email I've confirmed that it's not the logging
> that is broken - it's th
> iops: min=2, max= 40, avg=21.13, stdev= 6.10, samples=929
> iops: min=2, max= 42, avg=21.52, stdev= 6.56, samples=926
That looks horrible. We also have a few SATA HDDs in Dell servers and they do
about 100-150 IOP/s read or write. Originally, I was also a bit afr
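If anyone wants a comparable raw-device number, a 4k random-write run at queue
depth 1 against a spare disk is the usual baseline; a sketch (the parameters
are my guess, not a job file from this thread, and it will overwrite /dev/sdX):

  fio --name=hdd-qd1 --filename=/dev/sdX --direct=1 --rw=randwrite \
      --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting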
Hi,
do you want to hear the truth from real experience?
Or the myth?
The truth is that:
- HDDs are too slow for Ceph; the first time you need to do a rebalance or
similar you will discover...
- if you want to use HDDs, do a RAID with your controller and use the
controller BBU cache (do not consider c
Thanks Yuval. From your email I've confirmed that it's not the logging that
is broken - it's the CopyFrom that is causing an issue :)
I've got some other example Lua scripts working now.
Kind regards,
Thomas
On Sun, 9 Apr 2023 at 11:41, Yuval Lifshitz wrote:
> Hi Thomas, I think you found a crash
The radosgw-admin bucket stats show there are 209266 objects in this bucket,
but that count includes failed multipart uploads, which also makes the size
parameter wrong. When I use boto3 to count objects, the bucket only has 209049
objects. The only solution I can find is to use lifecycle to clean these f
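For completeness, the mismatch can be seen directly from the S3 side;
something along these lines (the endpoint and bucket name are placeholders):

  # what a plain listing (the boto3 count) actually sees
  aws --endpoint-url https://rgw.example.com s3 ls s3://mybucket --recursive | wc -l
  # the unfinished multipart uploads that bucket stats still accounts for
  aws --endpoint-url https://rgw.example.com s3api list-multipart-uploads --bucket mybucket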
I don't think you can exclude that.
We've built a notification in the customer panel that flags incomplete
multipart uploads, which will be added as space to the bill. We also added a
button to create an LC policy for these objects.
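For reference, an LC rule that aborts stale multipart uploads boils down to
something like this (a sketch using awscli; the endpoint, bucket and days
value are placeholders):

  # abort-mpu.json
  {"Rules": [{"ID": "abort-incomplete-mpu", "Status": "Enabled",
              "Filter": {"Prefix": ""},
              "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}}]}

  aws --endpoint-url https://rgw.example.com s3api put-bucket-lifecycle-configuration \
      --bucket mybucket --lifecycle-configuration file://abort-mpu.json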
On Tue., Apr 11, 2023 at 19:07 wrote:
> The radosgw-adm
>
> The truth is that:
> - HDDs are too slow for Ceph; the first time you need to do a rebalance or
> similar you will discover...
Depends on the needs. For cold storage, or sequential use-cases that aren't
performance-sensitive ... Can't say "too slow" without context. In Marco's
case, I
With the Reef dev cycle closing, it's time to think about S and future
releases.
There are a bunch of options for S already, add a +1 or a new option to
this etherpad, and we'll see what has the most votes next week:
https://pad.ceph.com/p/s
Josh
Hi,
I see that this PR: https://github.com/ceph/ceph/pull/48030
made it into ceph 17.2.6, as per the change log at:
https://docs.ceph.com/en/latest/releases/quincy/ That's great.
But my scenario is as follows:
I have two clusters set up as multisite. Because of the lack of replication
for IAM
On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
>
>
> Hi,
> I see that this PR: https://github.com/ceph/ceph/pull/48030
> made it into ceph 17.2.6, as per the change log at:
> https://docs.ceph.com/en/latest/releases/quincy/ That's great.
> But my scenario is as follows:
> I have two
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote:
>
> On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
> >
> >
> > Hi,
> > I see that this PR: https://github.com/ceph/ceph/pull/48030
> > made it into ceph 17.2.6, as per the change log at:
> > https://docs.ceph.com/en/latest/releases/
Hi,
Our cluster is running Pacific 16.2.10. We have a problem using the
dashboard to display information about RGWs configured in the cluster.
When clicking on "Object Gateway", we get an error 500. Looking in the
mgr logs, I found that the problem is that the RGW is accessed by its IP
addres
I have a similar issue with how the dashboard tries to access an SSL protected
RGW service. It doesn't use the correct name and doesn't allow for any way to
override the RGW name that the dashboard uses.
https://tracker.ceph.com/issues/59111
Bug #59111: dashboard should use rgw_dns_name when ta
Hi,
version 16.2.11 (which was just recently released) contains a fix for
that. But it still doesn’t work with wildcard certificates; that’s
still an issue for us.
Quoting Michel Jouvin:
Hi,
Our cluster is running Pacific 16.2.10. We have a problem using the
dashboard to display inf
Thanks for these answers. I was not able to find information mentioning the
problem, hence my email. I didn't try 16.2.11 because of the bug mentioned
by others in volume activation when using cephadm.
Michel
Sent from my mobile
On April 11, 2023 at 22:28:37, Eugen Block wrote:
Hi,
version 16.
Hi,
My problem is the opposite!
I don't use SSL on RGWs, because I use a load balancer with an HTTPS endpoint,
so no problem with certificates and IP addresses.
With 16.2.11, it does not work anymore because it uses DNS names, and those
names resolve to a management IP, which is not the networ
Right, I almost forgot that one, I stumbled upon the performance
regression as well. :-/
Quoting Michel Jouvin:
Thanks for these answers. I was not able to find information
mentioning the problem, hence my email. I didn't try 16.2.11 because
of the bug mentioned by others in volume activ
I forgot, there's a similar bug around that:
https://tracker.ceph.com/issues/58811
On Tuesday, April 11, 2023 at 22:45:28 CEST, Gilles Mocellin wrote:
> Hi,
>
> My problem is the opposite!
> I don't use SSL on RGWs, because I use a load balancer with an HTTPS endpoint,
> so no problem with certificate
On 4/11/23 15:59, Thomas Widhalm wrote:
On 11.04.23 09:16, Xiubo Li wrote:
On 4/11/23 03:24, Thomas Widhalm wrote:
Hi,
If you remember, I hit bug https://tracker.ceph.com/issues/58489 so
I was very relieved when 17.2.6 was released and started to update
immediately.
Please note, this
Yeah, thanks for your suggestion.
On Wed, Apr 12, 2023 at 00:10, Boris Behrens wrote:
> I don't think you can exclude that.
> We've built a notification in the customer panel that flags incomplete
> multipart uploads, which will be added as space to the bill. We also added a
> butt