Hi,
I had a similar problem while resharding an oversized non-sharded
bucket in Jewel (10.2.7): bi_list exited with 'ERROR: bi_list():
(4) Interrupted system call' at what seemed like the very end of the
operation. I went ahead and resharded the bucket anyway and the
reshard process ended the sa
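For anyone who finds this later, a hedged way to check which index
instance a bucket currently points at (the bucket name is a
placeholder):

# radosgw-admin --cluster ceph metadata get bucket:_BUCKET_

The bucket_id field in the JSON output should reflect the new index
instance after a successful reshard.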
> maybe share how you linked the bucket to the new index by hand?
> That would already give me some extra insight.
> Thanks!
>
> Regards,
> Maarten
>
> On Wed, Jul 5, 2017 at 10:21 AM, Andreas Calminder
> wrote:
>>
>> Hi,
>> I had a similar problem while
Hi,
I'm running into a weird issue while trying to delete a bucket with
radosgw-admin
# radosgw-admin --cluster ceph bucket rm --bucket=12856/weird_bucket
--purge-objects
This returns almost instantly even though the bucket contains +1M
objects, and the bucket isn't removed. Running the above command
said oversized indexes being
altered all at the same time
* RH solution article is for Hammer, I'm using Jewel 10.2.7
It was great fun, hope this helps anyone having similar issues.
Cheers!
/andreas
On 8 August 2017 at 12:31, Andreas Calminder
wrote:
> Hi,
> I'm runni
Hi,
I got hit with OSD suicide timeouts while a deep-scrub runs on a
specific pg; there's a RH article
(https://access.redhat.com/solutions/2127471) suggesting changing
'osd_scrub_thread_suicide_timeout' from 60s to a higher value. Problem
is, the article is for Hammer and the osd_scrub_thread_suicide_
Thanks, I'll try and do that. Since I'm running a cluster with
multiple nodes, do I have to set this in ceph.conf on all nodes, or
does it suffice to set it just on the node with that particular OSD?
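For reference, a hedged sketch of injecting the value at runtime into
a single OSD (osd.0 and the 300s value are illustrative, and this
assumes the option name exists in your release):

# ceph tell osd.0 injectargs '--osd_scrub_thread_suicide_timeout 300'

Since each daemon reads only its local ceph.conf, a permanent setting
under [osd] should only be needed on the node hosting that OSD.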
On 15 August 2017 at 22:51, Gregory Farnum wrote:
>
>
> On Tue, Aug 15, 2017 at 7:03 A
-add it since the data is
supposed to be replicated to 3 nodes in the cluster, but I kind of want to
find out what has happened and have it fixed.
/andreas
On 17 Aug 2017 20:21, "Gregory Farnum" wrote:
On Thu, Aug 17, 2017 at 12:14 AM Andreas Calminder <
andreas.calmin...@klarna.com&
ach realm in each zone in
> master/master allowing data to sync in both directions.
>
> On Mon, Jun 5, 2017 at 3:05 AM Andreas Calminder <
> andreas.calmin...@klarna.com> wrote:
>
>> Hello,
>> I'm using Ceph jewel (10.2.7) and as far as I know I'm using the
e kind of large and thereby kind of slow (SAS), but still.
Anyhow, thanks a lot for the help!
/andreas
On 17 August 2017 at 23:48, Gregory Farnum wrote:
> On Thu, Aug 17, 2017 at 1:02 PM, Andreas Calminder
> wrote:
>> Hi!
>> Thanks for getting back to me!
>>
>> Cl
Hi,
I had a similar problem on Jewel, where I was unable to properly delete
objects even though radosgw-admin returned rc 0 after issuing rm; somehow
the object was deleted but the metadata wasn't removed.
I ran
# radosgw-admin --cluster ceph object stat --bucket=weird_bucket
--object=$OBJECT
to f
Hello,
Running Jewel on some nodes with RADOS Gateway, I've managed to get a
lot of leaked multipart objects, most of them belonging to buckets
that don't even exist anymore. We estimated these objects to occupy
somewhere around 60TB, which would be great to reclaim. Question is
how, since trying t
radosgw-admin orphans find --pool=_DATA_POOL_ --job-id=_JOB_ID_
radosgw-admin orphans finish --job-id=_JOB_ID_
_JOB_ID_ being anything.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
On Thu, Sep 28, 2017 at 9:38 AM, Andreas Calminder <
andreas.calmin...@klarna.com> wrote:
> H
>> Hello,
> >>
> >> not an expert here but I think the answer is something like:
> >>
> >> radosgw-admin orphans find --pool=_DATA_POOL_ --job-id=_JOB_ID_
> >> radosgw-admin orphans finish --job-id=_JOB_ID_
> >>
> >> _JOB_ID_ being an
The output, to stdout, is something like 'leaked: $objname'. Am I supposed to
pipe it to a log, grep for 'leaked:' and pipe it to rados delete? Or am I
supposed to dig around in the log pool to try and find the objects there?
The information available is quite vague. Maybe Yehuda can shed some light
on
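For illustration, one hedged way to consume that output (the pool name
is a placeholder as above, and the grep pattern assumes the
'leaked: $objname' format holds for every line):

# radosgw-admin orphans find --pool=_DATA_POOL_ --job-id=_JOB_ID_ | tee orphans.log
# grep '^leaked:' orphans.log | awk '{print $2}' | xargs -n1 rados -p _DATA_POOL_ rm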
Hi,
You should most definitely look over the number of PGs; there's a PG
calculator available here: http://ceph.com/pgcalc/
You can increase pg_num but not decrease it
(http://docs.ceph.com/docs/jewel/rados/operations/placement-groups/)
To solve the immediate problem with your cluster being fu
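To illustrate, a hedged example of raising placement groups on an
existing pool (pool name and counts are placeholders; pgp_num should
be raised to match pg_num afterwards):

# ceph osd pool set _POOL_ pg_num 256
# ceph osd pool set _POOL_ pgp_num 256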
Hello,
Thanks for the heads-up. As someone who's currently maintaining a
Jewel cluster and is in the process of setting up a shiny new
Luminous cluster, writing Ansible roles along the way to make the
setup reproducible, I immediately proceeded to look into ceph-volume
and I've some questions/conc
ormat json" since it's quite
helpful while setting up stuff through ansible
Thanks,
Andreas
On 28 November 2017 at 12:47, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder
> wrote:
>> Hello,
>> Thanks for the heads-up. As someone who'
Thanks!
I'll start looking into rebuilding my roles once 12.2.2 is out then.
On 28 November 2017 at 13:37, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder
> wrote:
>>> For the `simple` sub-command there is no prepare/activate, it is just
>
Hello,
With release 12.2.2, dynamic bucket index resharding has been disabled
when running a multisite environment
(http://tracker.ceph.com/issues/21725). Does this mean that resharding
of bucket indexes shouldn't be done at all, even manually, while
running multisite, as there's a risk of corruption?
Al
Thanks!
Is there anything in the bug tracker about the resharding issues that
I can check, just to follow progress?
Regards,
Andreas
On 4 December 2017 at 18:57, Orit Wasserman wrote:
> Hi Andreas,
>
> On Mon, Dec 4, 2017 at 11:26 AM, Andreas Calminder
> wrote:
>> Hello,
>
Hello!
According to the documentation at
http://docs.ceph.com/docs/master/radosgw/admin/#quota-management
there's a way to set the default quota for all RGW users; if I
understand it correctly, it'll apply the quota to all users created
after the default quota is set. For instance, I want to all bu
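For comparison, a hedged example of setting and enabling a quota
explicitly for a single existing user (the uid and limits are
illustrative; --max-size is given in bytes here):

# radosgw-admin quota set --quota-scope=user --uid=_UID_ --max-objects=10000 --max-size=1073741824
# radosgw-admin quota enable --quota-scope=user --uid=_UID_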
Hi,
I'm writing a small Python script using librados to display cluster health,
the same info as ceph health detail shows. It works fine, but I'd rather not
use the admin keyring for something like this. However, I have no clue what
kind of caps I should or can set; I was kind of hoping that 'mon allow r' wou
>> import rados
>> import json
>>
>>
>> def get_cluster_health(r):
>>     cmd = {"prefix": "status", "format": "json"}
>>     ret, buf, errs = r.mon_command(json.dumps(cmd), b'', timeout=5)
>>     # the original snippet was truncated here; assuming it returned
>>     # the parsed status blob
>>     return json.loads(buf)
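If it helps, a hedged sketch of creating a restricted key for a script
like this (client.healthcheck is an illustrative name):

# ceph auth get-or-create client.healthcheck mon 'allow r' -o /etc/ceph/ceph.client.healthcheck.keyring

The script would then connect as that client instead of client.admin.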
Hi,
I'm running Jewel (10.2.7). While trying to get rid of an oversized bucket
(+14M objects) I tried to reshard the bucket index to be able to remove it.
As per the Red Hat documentation I ran
# radosgw-admin bucket reshard --bucket=oversized_bucket --num-shards=300
I noted the old instance id and wa
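For anyone following along, a hedged sketch of purging the old index
instance once the reshard has finished (the instance id is a
placeholder, and bi purge is worth trying on a test cluster first):

# radosgw-admin bi purge --bucket=oversized_bucket --bucket-id=_OLD_INSTANCE_ID_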
Hello,
I've got a sync issue with my multisite setup. There are 2 zones in 1
zone group in 1 realm. The data sync in the non-master zone has been
stuck on 'Incremental sync is behind by 1 shard'; this wasn't noticed
until the radosgw instances in the master zone started dying from out
of memory issues, all
> If someone reads this who has a working "one Kraken CEPH cluster"
> based multisite setup (or, let me dream, even a working ElasticSearch setup
> :| ) please step out of the dark and enlighten us :O
>
> Sent: Tuesday, 30 May 2017 at 11:02
> From: "Andreas Calmin
Hi,
I'm trying to reshard a rather large bucket (+13M objects) as per the
Red Hat documentation
(https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gateway_guide_for_ubuntu/administration_cli#resharding-bucket-index)
to be able to delete it. The process starts and runs