Hey,
That sounds very similar to what I described there:
https://tracker.ceph.com/issues/43364
Best,
Eric
copies ever being removed?
4. Is anyone experiencing this issue willing to run their RGWs with
'debug_ms=1'? That would allow us to see a request from an RGW to either remove
a tail object or decrement its reference counter (and when its counter reaches
0 it will be deleted).
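For anyone willing to try, a rough sketch of turning that on and off (the client.rgw config section name is an assumption; adjust to your RGW's actual name, e.g. client.rgw.<instance>):

ceph config set client.rgw debug_ms 1
# reproduce the issue, gather the RGW log, then turn it back off:
ceph config set client.rgw debug_ms 0

Setting it in ceph.conf under the RGW's own section and restarting that RGW works just as well.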
shadow_) objects, then the initial data is stored in the head object. So this
test would not be truly diagnostic. This could be done with a large object, for
example, with `s3cmd put --disable-multipart …`.
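Something along these lines, for instance (bucket and object names are just placeholders):

dd if=/dev/zero of=big-test.bin bs=1M count=64
s3cmd put --disable-multipart big-test.bin s3://<bucket>/big-test.bin

A 64 MiB object is comfortably larger than the head/stripe size, so tail objects will definitely be created.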
Eric
--
J. Eric Ivancich
he / him / his
Red Hat Storage
Ann Arbor, Michigan, USA
on the gc list.
Eric
> On Nov 16, 2020, at 3:48 AM, Janek Bevendorff
> wrote:
>
> As noted in the bug report, the issue has affected only multipart objects at
> this time. I have added some more remarks there.
>
> And yes, multipart objects tend to have 0 byte head ob
emove entry from reshard log, oid=reshard.09
> tenant= bucket=foo
>
> Is there anything else that I should look for? It looks like the cancel
> process thinks
> that reshard.09 is present (and probably blocking my attempts at
> resharding) but
> it's not act
Tracking down this bug was a group effort and many people participated. See the
master branch tracker for that history. Thanks to everyone who helped out.
Eric
--
J. Eric Ivancich
he / him / his
Red Hat Storage
Ann Arbor, Michigan, USA
> Istvan Szabo
> Senior Infrastructure Engineer
--
J. Eric Ivancich
he / him / his
Red Hat Storage
Ann Arbor, Michigan, USA
re the common case and not an issue.
Eric
way"
I found was to delete all 4 OSDs and create everything from scratch (I didn't
actually do it, as I hope there is a better way).
Has anyone had this issue before? I'd be glad if someone pointed me in the
right direction.
Currently runni
Hi Eugen, thanks for the reply.
I've already tried what you wrote in your answer, but still no luck.
The NVMe disk still doesn't have an OSD on it. Please note that I am using containers, not
standalone OSDs.
Any ideas?
Regards,
Eric
h-block-dbs-8b159f55-2500-427f-9743-2bb8b3df3e17/osd-block-db-b1e2a81f-2fc9-4786-85d2-6a27430d9f2e
--force
3. You should also have some logs available from the deployment attempt,
maybe it reveals why the NVMe was not considered.
I couldn't find any relevant logs regarding this question.
Be
"osdj2"
service_id: test_db_device
service_type: osd
...(snip)...
Without success. Also tried without the "filter_logic: AND" in the yaml file
and the result was the same.
Best regards,
Eric
-Original Message-
From: David Orman [mailto:orma...@corenode.com]
Sent: 27 A
as it had before
If I run everything again but with osd.0, it creates correctly, because when
running:
ceph-volume lvm zap --osd-id 0 --destroy
It doesn't say this line:
--> More than 1 LV left in VG, will proceed to destroy LV only
But it rather says this:
--> Only 1 LV left in
Hi,
I get the same after upgrading to 16.2.6. All mds daemons are standby.
After setting
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
the mds still wants to be standby.
2021-09-17T14:40:59.371+0200 7f810a58f600 0 ceph version 16.2.6
(ee28fb57e47e9f88813e24bbf4c1449
up:standby seq 1 addr [v2:
192.168.1.72:6800/2991378711,v1:192.168.1.72:6801/2991378711] compat
{c=[1],r=[1],i=[7ff]}]
dumped fsmap epoch 226256
On Fri, Sep 17, 2021 at 4:41 PM Patrick Donnelly
wrote:
> On Fri, Sep 17, 2021 at 8:54 AM Eric Dold wrote:
> >
> > Hi,
> >
ote:
> On Fri, Sep 17, 2021 at 6:57 PM Eric Dold wrote:
> >
> > Hi Patrick
> >
> > Here's the output of ceph fs dump:
> >
> > e226256
> > enable_multiple, ever_enabled_multiple: 0,1
> > default compat: compat={},rocompat={},incompat={1=base v0
s, you may be hitting
the same issue.
It happened 2 or 3 times and then went away, possibly thanks to software
updates (currently on 14.2.21).
Eric
> On 11 Oct 2021, at 18:44, Simon Ironside wrote:
>
> Bump for any pointers here?
>
> tl;dr - I've got a single PG that
ent metadata OSD.
Does anyone have any suggestion on how to get the MDS to switch from "up:rejoin" to
"up:active"?
Is there any way to debug this, to determine what issue really is? I'm unable
to interpret the debug log.
Cheers,
Eric
92+
./src/mds/MDCache.cc: 4084: FAILED ceph_assert(auth >= 0)
Is this a file permission problem?
Eric
On 27/11/2023 14:29, Eric Tittley wrote:
Hi all,
For about a week our CephFS has experienced issues with its MDS.
Currently the MDS is stuck in "up:rejoin"
Issues become apparent wh
sert() has
advantages, but also massive disadvantages when it comes to debugging.
Cheers,
Eric
On 05/12/2023 06:10, Venky Shankar wrote:
On 05/12/2023 12:50, Venky Shankar wrote:
Hi Eric,
On Tue, Dec 5, 2023 at 3:43 PM Eric Tittley wrote:
Hi Venky
ry
ceph pg deep-scrub 8.8
ceph pg repair 8.8
I also tried to set one of the primary OSDs out, but the affected PG still
stayed on that OSD.
What's the best course of action to get the cluster back to a healthy state?
Should I run
ceph pg 8.8 mark_unfound_lost revert
or
ceph pg 8.8 mark_unfound_lost delete
Or is there another way?
Would the cache pool still work after that?
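For reference, this is roughly how I can inspect the unfound objects before deciding (standard ceph CLI, as far as I understand):

ceph pg 8.8 list_unfound
ceph pg 8.8 query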
Thanks,
Eric
config help *rgw_multipart_part_upload_limit*
rgw_multipart_part_upload_limit - Max number of parts in multipart upload
(int, advanced)
Default: 10000
Can update at runtime: true
Services: [rgw]
*rgw_max_put_size* is set in bytes.
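For example (illustrative values only; the client.rgw section name is an assumption, adjust it to your RGW's config section):

ceph config set client.rgw rgw_max_put_size 10737418240          # 10 GiB, in bytes
ceph config set client.rgw rgw_multipart_part_upload_limit 10000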
Regards,
Eric.
On Fri, Dec 9, 2022 at 11:24 AM Boris Behrens
I've been trying to get ceph to force the PG to a good state but it
continues to give me a single PG peering. This is a rook-ceph cluster on
VMs (hosts went out for a brief period) and I can't figure out how to get
this 1GB or so of data to become available to the client. This occurred
during a clu
something wrong? Do I have to remove all fragments of the PG and force it
to go NOENT before trying an import?
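For context, the export/import I'm referring to is roughly this (a sketch; OSD ids, PG id and paths are placeholders, the OSD has to be stopped, and with rook it has to run from a pod/host that can reach the OSD's data):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --op export --pgid <pgid> --file /tmp/<pgid>.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --op import --file /tmp/<pgid>.export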
On Wednesday, December 8, 2021, Eric Alba wrote:
> I've been trying to get ceph to force the PG to a good state but it
> continues to give me a single PG peering. This is a rook-ceph
.
I look forward to your report! And please feel free to post additional
questions in this forum.
Eric
--
J. Eric Ivancich
he / him / his
Red Hat Storage
Ann Arbor, Michigan, USA
> On Apr 20, 2020, at 6:18 AM, Katarzyna Myrek wrote:
>
> Hi Eric,
>
> I will try your tool t
Hi Katarzyna,
Incomplete multipart uploads are not considered orphans.
With respect to the 404s…. Which version of ceph are you running? What tooling
are you using to list and cancel? Can you provide a console transcript of the
listing and cancelling?
Thanks,
Eric
--
J. Eric Ivancich
he
Perhaps the next step is to examine the generated logs from:
radosgw-admin reshard status --bucket=foo --debug-rgw=20 --debug-ms=1
radosgw-admin reshard cancel --bucket foo --debug-rgw=20 --debug-ms=1
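If it's easier to capture and share, the same commands can be wrapped to keep the full output in a file (sketch):

radosgw-admin reshard status --bucket=foo --debug-rgw=20 --debug-ms=1 2>&1 | tee reshard-status.log
radosgw-admin reshard cancel --bucket foo --debug-rgw=20 --debug-ms=1 2>&1 | tee reshard-cancel.log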
Eric
--
J. Eric Ivancich
he / him / his
Red Hat Storage
Ann Arbor, Michigan
Hello,
What I can see here : http://download.ceph.com/rpm-octopus/el8/ is that the
first Ceph release available on CentOS 8 is Octopus and is already
accessible.
Thanks, Eric.
On Fri, May 29, 2020 at 5:44 PM Guillaume Abrioux
wrote:
> Hi Jan,
>
> I might be wrong but I do
if self.break_ties and self.decision_function_shape == 'ovo':
AttributeError: 'SVC' object has no attribute 'break_ties'
Best Regards
Eric
I have a cluster running Luminous 12.2.12 with Filestore and it takes my OSDs
somewhere around an hour to start (They do start successfully - eventually). I
have the following log entries that seem to show the OSD process attempting to
descend into the PG directory on disk and create an object l
82400-1593486601\u\uwzdchd3._0bfd7c716b839cb7b3ad_0_long
Does this matter? AFAICT it sees this as a long file name and has to look up the
object name in the xattrs? Is that bad?
-Original Message-
From: Eric Smith
Sent: Friday, July 10, 2020 6:59 AM
To: ceph-users@ceph.io
Subject: [ceph-
ee if it's also
susceptible to these boot / read issues.
Eric
-Original Message-
From: Eric Smith
Sent: Friday, July 10, 2020 1:46 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Luminous 12.2.12 - filestore OSDs take an hour to boot
For what it's worth - all of our ob
If you run (substitute your pool name for <pool>):
rados -p <pool> list-inconsistent-obj 1.574 --format=json-pretty
You should get some detailed information about which piece of data actually has
the error and you can determine what to do with it from there.
-Original Message-
From: Abhimnyu Dhobal
FWIW Bluestore is not affected by this problem!
-Original Message-
From: Eric Smith
Sent: Saturday, July 11, 2020 6:40 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Luminous 12.2.12 - filestore OSDs take an hour to boot
It does appear that long file names and filestore seem to be
We're lucky that we are in the process of expanding the cluster; instead of
expanding it, we'll just build a new Bluestore cluster and migrate data to it.
-Original Message-
From: Dan van der Ster
Sent: Tuesday, July 14, 2020 9:17 AM
To: Eric Smith
Cc: ceph-users@ceph.io
S
Can you post the output of a couple of commands:
ceph df
ceph osd pool ls detail
Then we can probably explain the utilization you're seeing.
-Original Message-
From: Mateusz Skała
Sent: Saturday, July 18, 2020 1:35 AM
To: ceph-users@ceph.io
Subject: [ceph-users] EC profile datastore us
Can you post the output of these commands:
ceph osd pool ls detail
ceph osd tree
ceph osd crush rule dump
-Original Message-
From: Frank Schilder
Sent: Monday, August 3, 2020 9:19 AM
To: ceph-users
Subject: [ceph-users] Re: Ceph does not recover from OSD restart
After moving the newl
You said you had to move some OSDs out and back in for Ceph to go back to
normal (The OSDs you added). Which OSDs were added?
-Original Message-
From: Frank Schilder
Sent: Monday, August 3, 2020 9:55 AM
To: Eric Smith ; ceph-users
Subject: Re: Ceph does not recover from OSD restart
Have you adjusted the min_size for pool sr-rbd-data-one-hdd at all? Also can
you send the output of "ceph osd erasure-code-profile ls" and for each EC
profile, "ceph osd erasure-code-profile get "?
-Original Message-
From: Frank Schilder
Sent: Monday, August 3, 20
Schilder
Sent: Tuesday, August 4, 2020 7:10 AM
To: Eric Smith ; ceph-users
Subject: Re: Ceph does not recover from OSD restart
Hi Eric,
> Have you adjusted the min_size for pool sr-rbd-data-one-hdd
Yes. For all EC pools located in datacenter ServerRoom, we currently set
min_size=k=6, because
/ rebalancing
ongoing it's not unexpected. You should not have to move OSDs in and out of the
CRUSH tree however in order to solve any data placement problems (This is the
baffling part).
-Original Message-
From: Frank Schilder
Sent: Tuesday, August 4, 2020 7:45 AM
To: Eric Smith ;
Do you have any monitor / OSD logs from the maintenance when the issues
occurred?
Original message
From: Frank Schilder
Date: 8/4/20 8:07 AM (GMT-05:00)
To: Eric Smith , ceph-users
Subject: Re: Ceph does not recover from OSD restart
Hi Eric,
thanks for the clarification, I
e you start doing a ton of maintenance so old PG
maps can be trimmed. That's the best I can ascertain from the logs for now.
-Original Message-
From: Frank Schilder
Sent: Tuesday, August 4, 2020 8:35 AM
To: Eric Smith ; ceph-users
Subject: Re: Ceph does not recover from OSD restart
If
account that an object with the same name as the previously deleted one was
re-created in the versioned bucket.
Eric
> On Oct 1, 2020, at 8:46 AM, Matt Benjamin wrote:
>
> Hi Dan,
>
> Possibly you're reproducing https://tracker.ceph.com/issues/46456.
>
> That e
ted in the versioned bucket.
I hope that’s informative, if not what you were hoping to hear.
Eric
--
J. Eric Ivancich
he / him / his
Red Hat Storage
Ann Arbor, Michigan, USA
> On Oct 1, 2020, at 10:53 AM, Dan van der Ster wrote:
>
> Thanks Matt and Eric,
>
> Sorry for the basi
}
ceph1 ~ # ceph osd erasure-code-profile ls
default
isa_62
ceph1 ~ # ceph osd erasure-code-profile get default
k=2
m=1
plugin=jerasure
technique=reed_sol_van
ceph1 ~ # ceph osd erasure-code-profile get isa_62
crush-device-class=
crush-failure-domain=osd
crush-root=default
k=6
m=2
plugin=isa
technique=reed_sol_van
The idea with four hosts was that the EC profile should take two OSDs on
each host for the eight buckets.
Now with six hosts I guess two hosts will have two buckets on two OSDs and
four hosts will each have one bucket for a piece of data.
Any idea how to resolve this?
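For illustration, what I had in mind is a rule roughly like this (untested sketch; rule name and root are placeholders):

rule ec-two-per-host {
    id 2
    type erasure
    step set_chooseleaf_tries 5
    step take default
    step choose indep 4 type host
    step chooseleaf indep 2 type osd
    step emit
}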
Regards
Eric
I can verify lists.ceph.com still works, I just posted a message there..
I previously posted this question to lists.ceph.com not understanding
lists.ceph.io is the replacement for it. Posting it again here with some edits.
---
Hi there, we have been using ceph for a few years now, it's only now that
I've noticed we have been using the same name for all RGW hosts, re
bump. anyone?
With ceph 14.2.4 it's the same.
The upmap balancer is not working.
Any ideas?
On Wed, Sep 11, 2019 at 11:32 AM Eric Dold wrote:
> Hello,
>
> I'm running ceph 14.2.3 on six hosts with each four osds. I did recently
> upgrade this from four hosts.
>
> The cluster is
Hello,
We recently upgraded from Luminous to Nautilus, after the upgrade, we are
seeing this sporadic "lock-up" behavior on the RGW side.
What I noticed from the log is that it seems to coincide with rgw realm
reloader. What we are seeing is that realm reloader tries to pause frontends,
and
like CRUSH does not stop picking a host after the first four with the
first rule and is complaining when it gets the fifth host.
Is this a bug or intended behaviour?
Regards
Eric
On Tue, Sep 17, 2019 at 3:55 PM Eric Dold wrote:
> With ceph 14.2.4 it's the same.
> The upmap bala
I have a Ceph Luminous (12.2.12) cluster with 6 nodes. I'm attempting to create
an EC3+2 pool with the following commands:
1. Create the EC profile:
* ceph osd erasure-code-profile set es32 k=3 m=2 plugin=jerasure w=8
technique=reed_sol_van crush-failure-domain=host crush-root=sgshared
Thanks for the info regarding min_size in the crush rule - does this seem like
a bug to you then? Is anyone else able to reproduce this?
-Original Message-
From: Paul Emmerich
Sent: Monday, January 27, 2020 11:15 AM
To: Smith, Eric
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] EC
OK I see this: https://github.com/ceph/ceph/pull/8008
Perhaps it's just to be safe...
-Original Message-
From: Smith, Eric
Sent: Monday, January 27, 2020 11:22 AM
To: Paul Emmerich
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: EC pool creation results in incorrect M value?
T
ot thread_join’d by their
parents.
a. This seems unlikely as this appears to happen during start-up before
threads are likely done with their work.
3. Florian Weimer identifies a kernel bug.
I suspect it’s #1, so you might want to try reducing the number of threads rgw
uses by lowering the valu
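A sketch of what lowering the RGW thread count could look like (rgw_thread_pool_size is my assumption for the relevant option, since the text above is cut off; the value is only an example, and the RGWs need a restart afterwards):

ceph config set client.rgw rgw_thread_pool_size 256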
. Takes
some trickery to configure and bring the OSDs up on boot (using puppet in my
case), though that might get easier with the containerized approach in Ceph 15+.
Best,
Eric
> On 21 Mar 2020, at 14:18, huxia...@horebdata.cn wrote:
>
> Hi, Marc,
>
> Indeed PXE boot makes a lot
There is currently a PR for an “orphans list” capability. I’m currently working
on the testing side to make sure it’s part of our teuthology suite.
See: https://github.com/ceph/ceph/pull/34148
<https://github.com/ceph/ceph/pull/34148>
Eric
> On Apr 16, 2020, at 9:26 AM, Katarz
> On Apr 16, 2020, at 1:58 PM, EDH - Manuel Rios
> wrote:
>
> Hi Eric,
>
> Are there any ETA for get those script backported maybe in 14.2.10?
>
> Regards
> Manuel
There is a nautilus backport PR where the code works. It’s waiting on the added
testing to be
> On Apr 17, 2020, at 9:38 AM, Katarzyna Myrek wrote:
>
> Hi Eric,
>
> Would it be possible to use it with an older cluster version (like
> running new radosgw-admin in the container, connecting to the cluster
> on 14.2.X)?
>
> Kind regards / Pozdrawiam,
> Kata
Just in case, make sure the Ceph builds you use do have tcmalloc enabled in the
first place.
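A quick way to check a given build (the path assumes a packaged install):

ldd /usr/bin/ceph-osd | grep -i tcmalloc

If nothing shows up, that binary is not linked against tcmalloc.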
The only time I’ve seen OSDs exceed their memory targets so far was on a
Pacific cluster that used Debian 12 provided packages, and I eventually figured
that those had Crimson enabled - which comes with
20:04:55-rgw-squid-release-distro-default-smithi/7880520/remote/smithi142/log/valgrind/
Eric
(he/him)
> On Aug 30, 2024, at 10:42 AM, Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67779#note-1
>
> Release Notes
rgw — approved
Eric
(he/him)
> On Aug 30, 2024, at 10:42 AM, Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67779#note-1
>
> Release Notes - TBD
> Gibba upgrade -TBD
> LRC upgrade - TBD
>
> It was d
I wonder if this will be resolved by:
https://github.com/ceph/ceph/pull/39358
<https://github.com/ceph/ceph/pull/39358>
Deleting a bucket invokes an unordered listing, so that the objects in the
bucket can be removed. There was a bug that caused this listing to loop back
over the same objects.
Dear list,
we run a 7 node proxmox cluster with ceph nautilus (14.2.18), with 2
ceph filesystems, mounted in debian buster VMs using the cephfs kernel
module.
4 times in the last 6 months we had all mds servers failing one after
the other with an assert, either in the rename_prepare or unlin
imagine you’re not going to be listing them with any regularity.
Eric
(he/him)
> On Aug 29, 2022, at 12:06 PM, Boris Behrens wrote:
>
> Hi there,
>
> I have some buckets that would require >100 shards and I would like to ask
> if there are any downsides to have these ma
RGW’s
garbage collection mechanism.
It should also be noted that the bucket index does not need to be consulted
during a GET operation.
I looked for the string “SSECustomerAlgorithm” in the ceph source code and
couldn’t find it. Which tool is generating your “details about the object”?
Eric
(he/him)
+Casey — who might have some insight
> On Aug 31, 2022, at 5:45 AM, Alex Hussein-Kershaw (HE/HIM)
> wrote:
>
> Hi Eric,
>
> Thanks for your response. Answers below.
>
> Is it the case that the object does not appear when you list the RGW bucket
> it was in?
>
You could use `rgw-orphan-list` to determine rados objects that aren't
referenced by any bucket indices. Those objects could then be removed, but only
after careful verification, since this is still an experimental feature.
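Roughly (the pool name is an assumption, use your actual RGW data pool):

rgw-orphan-list default.rgw.buckets.data

It should write its findings to a file in the working directory, which you can review before removing anything.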
Eric
(he/him)
> On Sep 5, 2022, at 10:44 AM, Ulrich Klein wrote:
>
> Looks like the old p
What jumps out to me is:
a. The -13 error code represents permission denied
b. You’ve commented out the keyring configuration in ceph.conf
So do your RGWs have appropriate credentials?
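A quick sanity check (names and paths are placeholders for your deployment):

ceph auth get client.rgw.<instance>                     # does the key exist with sensible caps?
ceph -s --name client.rgw.<instance> --keyring /etc/ceph/ceph.client.rgw.<instance>.keyring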
Eric
(he/him)
> On Sep 7, 2022, at 3:04 AM, Rok Jaklič wrote:
>
> Hi,
>
>
removing those objects will be very problematic
(to say the least). That’s in part why I said “after verification” in my
previous message.
Eric
(he/him)
> On Sep 6, 2022, at 10:34 AM, Ulrich Klein wrote:
>
> I’m not sure anymore, but I think I tried that on a test system. Afterwards I
I don’t believe there is any tooling to find and clean orphaned bucket index
shards. So if you’re certain they’re no longer needed, you can use `rados`
commands to remove the objects.
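Roughly along these lines (pool and object names are placeholders; the index shard objects are named .dir.<bucket-id>.<shard>):

rados -p default.rgw.buckets.index ls | grep <old-bucket-id>
rados -p default.rgw.buckets.index rm .dir.<old-bucket-id>.<shard-number>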
Eric
(he/him)
> On Sep 27, 2022, at 2:37 AM, Yuji Ito (伊藤 祐司) wrote:
>
> Hi,
>
> I hav
object:
_obj1
You’ll end up with something like
"c44a7aab-e086-43df-befe-ed8151b3a209.4147.1_obj1”.
3. grep through the logs for the head object and see if you find anything.
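For example (the log path is an assumption; adjust to wherever your RGWs log):

grep 'c44a7aab-e086-43df-befe-ed8151b3a209.4147.1_obj1' /var/log/ceph/ceph-client.rgw.*.log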
Eric
(he/him)
> On Nov 22, 2022, at 10:36 AM, Boris Behrens wrote:
>
> Does someone have an idea wh
the quincy version is not yet
merged. See:
https://tracker.ceph.com/issues/58034
Octopus was EOLed back in August, so it won't receive the fix. But it seems the
next pacific and quincy point releases will have the fix, as will reef.
Eric
(he/him)
> On Feb 13, 2023, at 11:41 AM, mahnoosh shah
com/ceph/ceph/pull/46928
In the PR linked immediately above, looking at the additions to the file
src/test/rgw/test_rgw_lc.cc, you can find this XML snippet:
spongebob
squarepants
Eric
(he/him)
Removal/clean-up? Recovery of as much as
possible? Both are possible to a degree (not 100%), but the processes are not
simple and are highly manual.
Eric
(he/him)
> On Feb 20, 2023, at 10:01 AM, Robert Sander
> wrote:
>
> Hi,
>
> There is an operation "radosgw-admin bi
corner cases where it’d
partially fail, such as (possibly) transactional changes that were underway
when the bucket index was purged. And there is metadata in the bucket index
that’s not stored in the objects, so it would have to be recreated somehow. But
no one has written it yet.
Eric
(he/him
: Versioned buckets will likely require some additional steps, but I’d
need to refresh my memory on some of the details.
Eric
(he/him)
> On Feb 23, 2023, at 4:51 AM, Robert Sander
> wrote:
>
> Hi,
>
> On 22.02.23 17:45, J. Eric Ivancich wrote:
>
>> You also asked w
rently it does not work for versioned buckets. And it is experimental.
If anyone is able to try it I’d be curious about your experiences.
Eric
(he/him)
> On Feb 23, 2023, at 11:20 AM, J. Eric Ivancich wrote:
>
> Off the top of my head:
>
> 1. The command would take a bucket
-line output along the
lines of:
2023-07-24T13:33:50.867-0400 7f10359f2a80 1 execute INFO: reshard of
bucket "<bucket-name>" completed successfully
Eric
(he/him)
P.S. It’s likely obvious, but in the above replace <bucket-name> with the
actual bucket name.
> On Jul 18, 2023, at 10:18 AM, Christian Kugler
-entries=50`and provide the output in a reply.
Thanks,
Eric
(he/him)
> On Jul 28, 2023, at 9:25 AM, Uday Bhaskar Jalagam
> wrote:
>
> Hello Everyone ,
> I am getting [WRN] LARGE_OMAP_OBJECTS: 18 large omap objects warning
> in one of my clusters . I see one of the buckets
then it’s safe to remove old
bucket index shards. Depending on the version of ceph running when reshard was
run, they were either intentionally left behind (earlier behavior) or removed
automatically (later behavior).
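If it helps, there is tooling aimed at exactly this cleanup; something like the following (the removal subcommand name is from memory, please double-check it against your release before running it):

radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm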
Eric
(he/him)
> On Jul 25, 2023, at 6:32 AM, Christian Kugler wrote:
Hi,
thanks for looking into this: our system disks also wear out too quickly!
Here are the numbers on our small cluster.
Best,
1) iotop results:
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
TID PRIO USER DISK READ DISK WRITE SWAPIN IO COMMAND
6426
examine how it works. It’s designed to
list the orphans but not the shared tail objects. Many ceph users use that tool
to list the orphans and then delete those directly from the rados pool,
although please realize it’s still considered “experimental”.
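The removal step is typically nothing fancier than feeding that list back into rados (sketch; pool name and list file are placeholders, and only after careful verification of the entries):

while read -r obj; do
  rados -p default.rgw.buckets.data rm "$obj"
done < orphan-list.out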
Eric
(he/him)
> On Jun 6, 2022, at 7:14
"--omap-key-file” command-line option to refer to that
file.
I realize this is a pain and I’m sorry that you have to go through this. But
once you remove those bucket index entries you should be all set.
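Roughly (pool, index object and key are placeholders for your case):

printf '%s' '<exact-key-bytes>' > /tmp/keyfile
rados -p default.rgw.buckets.index rmomapkey .dir.<bucket-id>.<shard> --omap-key-file /tmp/keyfile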
Eric
(he/him)
> On May 9, 2022, at 4:20 PM, Christopher Durham wrote:
>
>
ing it and starting over is possible, but it would be wonderful if
we could
figure out what's wrong with it...
Cheers,
Eric
Ceph health is OK, there are 18 NVMe 4TB OSDs on 4 hosts.
Is there something wrong with these key statistics in the monitor databases?
151 auth
2 config
11 heal
On 10/06/2022 at 11:58, Stefan Kooman wrote:
On 6/10/22 11:41, Eric Le Lay wrote:
Hello list,
my ceph cluster was upgraded
On 13/06/2022 at 17:54, Eric Le Lay wrote:
On 10/06/2022 at 11:58, Stefan Kooman wrote:
On 6/10/22 11:41, Eric Le Lay wrote
to share all the bucket index entries related to this object?
Eric
(he/him)
> On Jun 13, 2022, at 6:05 PM, Boris Behrens wrote:
>
> Hi everybody,
>
> are there other ways for rados objects to get removed, other than "rados -p
> POOL rm OBJECT"?
> We have a custome
On 13/06/2022 at 18:37, Stefan Kooman wrote:
On 6/13/22 18:21, Eric Le Lay wrote:
Those objects are deleted but have
while, well before octopus. It involves a
race condition with a very narrow window, so normally only encountered in
large, busy clusters.
Also, I think it’s up to the s3 client whether to use multipart upload. Do you
know which s3 client the user was using?
Eric
(he/him)
> On Jun 14, 2022, at
ize of the PR -- 22 commits and
32 files altered -- my guess is that it will not be backported to Nautilus.
However I'll invite the principals to weigh in.
Best,
Eric
--
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
So I think rgw needs this PR added to the release and a re-run.
Thank you, Yuri!
Eric
(he/him)
> On Jan 6, 2025, at 11:55 AM, Yuri Weinstein wrote:
>
> Adam
>
> You are looking at the old run for the build 1
> See to lines for the Build 2
>
> https://pulpito.ceph
Hi,
these are very impressive results! On HDD even!!
Here are results on my cluster:
|      | no cache | writeback | unsafe  |
|------|----------|-----------|---------|
| RBD  | 40MB/s   | 40MB/s    | ?       |
| KRBD | 40MB/s   | 245MB/s   | 245MB/s |
cluster: 8 proxmox nodes, 6 of them hos
Hi Aref,
same issue here, upgrading from 17.2.7 to 17.2.8, the bug #69764 hit us,
with OSDs randomly crashing.
We did a rollback to 17.2.7, where we had occasional OSD lock-up for 30s
(maybe #62815) but non-crashing OSDs.
So we are now planning the upgrade to Reef that we would have done an
On 13/06/2025 at 08:57, Burkhard Linke wrote:
Hi,
On 12.06.25 21:58, Daniel Vogelbacher wrote:
Hi Eric,
On 6/12/25 17:33, Eric Le Lay wrote:
I use rsync to copy data (~10TB) to backup storage.
To speed things up I use the ceph.dir.rctime extended attribute to
instantly ignore sub-trees
Hi,
I use rsync to copy data (~10TB) to backup storage.
To speed things up I use the ceph.dir.rctime extended attribute to
instantly ignore sub-trees that haven't changed without iterating
through their contents.
I have to maintain the ceph.dir.rctime value between backups: I just
keep it in
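In case it's useful to anyone, the per-directory check is essentially this (a sketch; the mount path and the state file location are placeholders):

rctime=$(getfattr --only-values -n ceph.dir.rctime /mnt/cephfs/<dir>)
if [ "$rctime" = "$(cat /var/backup/state/<dir>.rctime 2>/dev/null)" ]; then
  echo "unchanged since last backup, skipping"
fi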