>
> I am not sure adding more RGWs will increase the performance.
That was a tangent.
> To be clear, that means whatever.rgw.buckets.index ?
>>> No, sorry my bad. .index is 32 and .data is 256.
>> Oh, yeah. Does `ceph osd df` show you at the far right like 4-5 PG replicas
>> on each OSD? Yo
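For reference, a minimal way to check that, assuming a reasonably recent release and jq available; the index pool name placeholder is the one used earlier in this thread, and the JSON field names are per recent releases:
    ceph osd df          # the PGS column on the far right is the per-OSD total
    ceph pg ls-by-pool whatever.rgw.buckets.index -f json | jq -r '.pg_stats[].acting[]' | sort -n | uniq -c
The second command counts how many PG replicas of the index pool land on each OSD id.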
On 2024-06-11 01:01, Anthony D'Atri wrote:
To be clear, you don't need more nodes. You can add RGWs to the ones
you already have. You have 12 OSD nodes - why not put an RGW on
each?
Might be an option; I just don't like the idea of hosting multiple
components on the same nodes. But I'll consider it.
I
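If you do go that route, a minimal sketch, assuming a cephadm-managed cluster on a recent release; the host name, label, and service id are illustrative:
    ceph orch host label add osd-node-01 rgw                          # repeat for each OSD host
    ceph orch apply rgw objectstore --placement="label:rgw" --port=8080
With a label-only placement, cephadm deploys one RGW daemon on each labeled host.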
Custom names were never really 100% implemented, and I would not be surprised
if they don't work in Reef.
> On Jun 11, 2024, at 14:02, Joel Davidow wrote:
>
> Zac,
>
> Thanks for your super-fast response and action on this. Those four items
> are great and the corresponding email as reformatted looks good.
Zac,
Thanks for your super-fast response and action on this. Those four items
are great and the corresponding email as reformatted looks good.
Jana's point about cluster names is a good one. The deprecation of custom
cluster names, which appears to have started in octopus per
https://docs.ceph.co
Hi,
We are happy to announce another release of the go-ceph API library.
This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.28.0
More details are available at the link above.
The library includes bindings that aim to play a
Hi All,
Ceph Days is coming to London on July 17th 2024, co-hosted by IBM and Canonical.
We're planning a full day of talks, lightning talks, and panel discussions, all
relating to various parts of the Ceph project. The CFP is now open and will
close on the 30th of June, so get your talks in ASAP!
There is a tiny bit more to it. The idea is that, when adding a data pool, any
CephFS client can access the new pool without changing or updating the caps.
To this end, the fs caps must include two pieces of information: the application
name "cephfs" and the file system name (Ceph can have multiple file systems).
Only in warning mode. And there were no PG splits or merges in the last 2
month.
Lars Köppel
Developer
Email: lars.koep...@ariadne.ai
Phone: +49 6221 5993580
ariadne.ai (Germany) GmbH
Häusserstraße 3, 69115 Heidelberg
Amtsgericht Mannheim, HRB 744040
I don't think scrubs can cause this. Do you have autoscaler enabled?
Quoting Lars Köppel:
Hi,
thank you for your response.
I don't think this thread covers my problem, because the OSDs for the
metadata pool fill up at different rates. So I would think this is not a
direct problem with the journal.
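A quick way to check the autoscaler state, assuming a release that ships the pg_autoscaler module; the pool name is illustrative:
    ceph osd pool autoscale-status
    ceph osd pool get cephfs_metadata pg_autoscale_mode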
Hi,
thank you for your response.
I don't think this thread covers my problem, because the OSDs for the
metadata pool fill up at different rates. So I would think this is not a
direct problem with the journal.
Because we had earlier problems with the journal, I changed some
settings (see below). I alre
Hi Stefan,
I assume the number of dropped replicas is related to the pool's
min_size. If you increase min_size to 3 you should see only one
replica dropped from the acting set. I didn't run very detailed tests,
but a first quick one seems to confirm that:
# Test with min_size 2, size 4
48.
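For anyone trying to reproduce this, the relevant knobs are the following, with an illustrative pool name:
    ceph osd pool get testpool min_size
    ceph osd pool set testpool min_size 3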
Hi,
I don't expect a solution from the group, just a direction.
Here is a link to the blog post:
https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/
The presentation from the NYC Ceph Days is on YouTube.
To view performance from the client's perspective, run the measurement tools
from inside the virtual machine.
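For example, a quick fio run inside the guest; the mount point, size, and runtime are illustrative:
    fio --name=vmtest --filename=/mnt/test/fio.dat --size=4G \
        --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=60 --time_based --group_reporting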
I assume it means that pools with the application "cephfs" enabled can
be targeted by specifying this tag instead of listing each pool
separately. Browsing through the code [1] seems to confirm that
(somehow, I'm not a dev):
if (g.match.pool_tag.application == ng.match.pool_tag.application
Hi,
can you check if this thread [1] applies to your situation? You don't
have multi-active MDS enabled, but maybe it's still some journal
trimming, or maybe misbehaving clients? In your first post there were
health warnings regarding cache pressure and cache size. Are those
resolved?
[
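To verify whether those warnings are gone, something along these lines should show it; the MDS daemon name is illustrative:
    ceph health detail
    ceph tell mds.cephfs.storage-a session ls     # look at num_caps per client session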
Hello
In https://docs.ceph.com/en/latest/cephfs/client-auth/ we can find that
    ceph fs authorize cephfs_a client.foo / r /bar rw
results in
    client.foo
        key: *key*
        caps: [mds] allow r, allow rw path=/bar
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=cephfs_a
Wha
Hello everyone,
a short update on this problem.
The zapped OSD has been rebuilt and now holds 1.9 TiB (the expected size, ~50%).
The other two OSDs are now at 2.8 and 3.2 TiB respectively. They jumped up and
down a lot, but the higher one has now also reached 'nearfull' status. How
is this possible? What is going on?
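To see where that space is actually going, it can help to compare the DATA and OMAP columns per OSD, and to look at the BlueFS DB usage of one of the fat OSDs; the OSD id is illustrative and the second command has to run on that OSD's host:
    ceph osd df                              # compare DATA vs OMAP for the three metadata OSDs
    ceph daemon osd.7 perf dump bluefs       # db_used_bytes shows how much is RocksDB metadata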
> Note the difference of convention in ceph command presentation. In
> https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#understanding-mon-status,
> mon.X uses X to represent the portion of the command to be replaced by the
> operator with a specific value. However, that ma
Hi Anthony,
I have 15 nodes, with 18 HDDs and 6 SSDs per node.
On Tue, 11 Jun 2024 at 10:29 Anthony D'Atri <anthony.da...@gmail.com> wrote:
> What specifically are your OSD devices?
>
> On Jun 10, 2024, at 22:23, Phong Tran Thanh
> wrote:
>
> Hi ceph user!
>
> I am encountering a
Hi Anthony!
My OSDs are 12 TB 7200 RPM HDDs, with 960 GB SSDs for WAL/DB.
Thanks Anthony!
On Tue, 11 Jun 2024 at 10:29 Anthony D'Atri <anthony.da...@gmail.com> wrote:
> What specifically are your OSD devices?
>
> On Jun 10, 2024, at 22:23, Phong Tran Thanh
> wrote:
>
> Hi ceph user!
>
> I am