Hi,
I have some issues with my bucket index. As you can see in the
attachment, every day around 16:30 the number of objects in
default.rgw.buckets.index increases. This has been happening since
upgrading from 12.2.2 to 12.2.4.
There is no real activity at that moment, but the logs do show the
following:
On Wed, 2018-04-04 at 09:38 +0200, Mark Schouten wrote:
> I have some issues with my bucket index. As you can see in the
> attachment, every day around 16:30 the number of objects in
> default.rgw.buckets.index increases. This has been happening since
> upgrading from 12.2.2 to 12.2.4.
It seems there i
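(For anyone watching the same graph: a quick way to track this from the cluster
side is sketched below. It assumes the default index pool name and that Luminous
dynamic bucket index resharding is what is adding objects, which the thread does
not confirm.)

  # count the objects in the index pool over time
  $ rados -p default.rgw.buckets.index ls | wc -l
  # see whether dynamic resharding has queued or completed any reshards
  $ radosgw-admin reshard list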
Hi,
We have an rgw user who had a bunch of partial multipart uploads in a
bucket, which they then deleted. radosgw-admin bucket list doesn't show
the bucket any more, but user stats --sync-stats still has (I think)
the contents of that bucket counted against the user's quota.
So, err, how do I c
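(A sketch of the commands involved, with a placeholder uid; whether the leftover
parts are simply waiting for garbage collection is an assumption, not something
the thread confirms.)

  # recalculate and re-read the user's stats
  $ radosgw-admin user stats --uid=someuser --sync-stats
  # check whether the deleted parts are still queued for gc
  $ radosgw-admin gc list --include-all | head
  $ radosgw-admin gc process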
Hi Everyone,
I would like to know what kind of setup the Ceph community has been using for
their OpenStack Ceph configuration when it comes to the number of pools & OSDs
and their PGs.
The Ceph documentation only briefly mentions this for small cluster sizes, and
I would like to know from your experience,
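(Not from the docs, just the usual pgcalc-style starting point with illustrative
numbers: aim for roughly 100 PGs per OSD divided by the replica size, split
across pools by expected data share and rounded to a power of two.)

  # e.g. 60 OSDs, size 3  ->  ~2000 PGs in total across all pools
  $ ceph osd pool create volumes 1024 1024    # cinder, roughly half the data
  $ ceph osd pool create vms      512  512    # nova
  $ ceph osd pool create images   256  256    # glance
  $ ceph osd pool create backups  128  128    # backups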
On 04/04/18 10:30, Matthew Vernon wrote:
> Hi,
>
> We have an rgw user who had a bunch of partial multipart uploads in a
> bucket, which they then deleted. radosgw-admin bucket list doesn't show
> the bucket any more, but user stats --sync-stats still has (I think)
> the contents of that bucket c
FYI, I have these too on a test cluster, upgraded from Kraken:
Apr 4 14:48:23 c01 ceph-osd: 2018-04-04 14:48:23.347040 7f9a39c05700 -1
osd.8 pg_epoch: 19002 pg[17.32( v 19002'2523501
(19002'2521979,19002'2523501] local-lis/les=18913/18914 n=3600
ec=3636/3636 lis/c 18913/18913 les/c/f 18914/189
Hi,
I have a bucket with multiple broken multipart uploads which can't be aborted.
radosgw-admin bucket check shows thousands of _multipart_ objects;
unfortunately the --fix and --check-objects options don't change anything.
I decided to get rid of the bucket completely, but even this command:
rados
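(The command above is cut off; a commonly tried sequence for force-removing such
a bucket is sketched below, with a placeholder bucket name, and is not
necessarily what was attempted here.)

  $ radosgw-admin bucket check --bucket=brokenbucket --fix --check-objects
  $ radosgw-admin bucket rm --bucket=brokenbucket --purge-objects --bypass-gc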
Hello,
I wonder if there is any way to run `trimfs` on an rbd image which is currently
in use by a KVM process (when I don't have access to the VM)?
I know that I can do this via qemu-guest-agent, but not all VMs have it
installed.
I can't use rbdmap either, because most images don't have a distributed
filesyste
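(For reference: where the agent is installed, the trim can be triggered from the
host; without it, discards have to come from inside the guest and the disk must
be attached with discard support. The domain name and XML line below are
placeholders.)

  # with qemu-guest-agent in the guest:
  $ virsh domfstrim myvm
  # without the agent, the guest itself must issue discards, which requires e.g.
  #   <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  # on a virtio-scsi disk in the libvirt XML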
I read a couple of versions ago that ceph-deploy was not recommended for
production clusters. Why was that? Is this still the case? We have a lot
of problems automating deployment without ceph-deploy.
On 4 April 2018 20:58:19 CEST, Robert Stanford wrote:
>I read a couple of versions ago that ceph-deploy was not recommended
>for
>production clusters. Why was that? Is this still the case? We have a
I cannot imagine that. I have used it now for a few versions before 2.0 and it
works great. We use
On Tue, Apr 3, 2018 at 6:30 PM Jeffrey Zhang <
zhang.lei.fly+ceph-us...@gmail.com> wrote:
> I am testing ceph Luminous, the environment is
>
> - centos 7.4
> - ceph luminous (ceph official repo)
> - ceph-deploy 2.0
> - bluestore + separate wal and db
>
> I found the ceph osd folder `/var/lib/ceph/
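(For reference, a sketch of the ceph-deploy 2.0 invocation for that layout;
device paths and hostname are placeholders.)

  $ ceph-deploy osd create --data /dev/sdc --block-db /dev/sdb1 --block-wal /dev/sdb2 osd-host01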
On Mon, Apr 2, 2018 at 11:18 AM Robert Stanford
wrote:
>
> This is a known issue as far as I can tell; I've read about it several
> times. Ceph performs great (using radosgw), but as the OSDs fill up
> performance falls sharply. I am down to half of empty performance with
> about 50% disk usag
We use ceph-deploy in production. That said, our crush map is getting more
complex and we are starting to make use of other tooling as that occurs.
But we still use ceph-deploy to install ceph and bootstrap OSDs.
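(A sketch of that kind of ceph-deploy usage, with placeholder hostnames and
devices, not an exact runbook.)

  $ ceph-deploy install --release luminous mon01 osd01 osd02
  $ ceph-deploy mon create-initial
  $ ceph-deploy admin mon01 osd01 osd02
  $ ceph-deploy osd create --data /dev/sdb osd01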
On Wed, Apr 4, 2018, 1:58 PM Robert Stanford
wrote:
>
> I read a couple of version
Hi all,
Was wondering if someone could enlighten me...
I've recently been upgrading a small test cluster's tunables from bobtail to
firefly, prior to doing the same with an old production cluster.
The OS is RHEL 7.4; the kernel in test is 3.10.0-693.el7.x86_64 everywhere, and
on the prod admin box
is 3.10.0-693.el7.x86_
On Thu, Mar 29, 2018 at 3:17 PM Damian Dabrowski wrote:
> Greg, thanks for your reply!
>
> I think your idea makes sense. I've done tests and it's quite hard for me to
> understand. I'll try to explain my situation in a few steps
> below.
> I think that ceph is showing progress in recovery but it can on
http://docs.ceph.com/docs/master/rados/operations/crush-map/#firefly-crush-tunables3
"The optimal value (in terms of computational cost and correctness) is 1."
I think you're just finding that the production cluster, with a much
larger number of buckets, didn't ever run into the situation
choose
Hi Gregory,
We were planning on going to chooseleaf_vary_r=4 so we could upgrade to
jewel now and schedule the change to 1 at a more suitable time, since we
were expecting a large rebalancing of objects (should have mentioned that).
Good to know that there's a valid reason we didn't see any rebalan
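(For reference, setting chooseleaf_vary_r by hand rather than switching the
whole tunables profile can be done by editing the crush map; a sketch, not
necessarily the exact procedure used here.)

  $ ceph osd getcrushmap -o crush.bin
  $ crushtool -d crush.bin -o crush.txt
  # edit crush.txt: tunable chooseleaf_vary_r 4
  $ crushtool -c crush.txt -o crush-new.bin
  $ ceph osd setcrushmap -i crush-new.bin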
See my latest update in the tracker.
On Sun, Apr 1, 2018 at 2:27 AM, Julien Lavesque
wrote:
> The cluster was initially deployed using ceph-ansible with the infernalis
> release.
> For some unknown reason controller02 was out of the quorum and we were
> unable to add it back to the quorum.
>
> We ha
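(For anyone following along, the basic quorum checks are sketched below.)

  $ ceph quorum_status --format json-pretty
  $ ceph mon stat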
Hi Mathew,
We approached the problem by first running swift-bench for performance
tuning and configuration, since it was the easiest to get up and running
to test the gateway.
Then we wrote a Python script using boto and futures to model
our use case and test S3.
We found the most
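(A sketch of a swift-bench run of that kind; the endpoint and credentials are
placeholders.)

  $ swift-bench -A http://rgw.example.com/auth/v1.0 -U tester:testuser -K secret \
      -c 64 -s 4096 -n 10000 -g 1000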
On 04/04/2018 07:30 PM, Damian Dabrowski wrote:
> Hello,
>
> I wonder if there is any way to run `trimfs` on an rbd image which is
> currently in use by a KVM process (when I don't have access to the VM)?
>
> I know that I can do this via qemu-guest-agent, but not all VMs have it
> installed.
>
> I can't
On 04/04/2018 08:58 PM, Robert Stanford wrote:
>
> I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters. Why was that? Is this still the case? We
> have a lot of problems automating deployment without ceph-deploy.
>
We are using it in production on o
Hi all,
we've created a new #ceph-dashboard channel on OFTC to talk about all
dashboard-related functionality and development. This means that
the old "openattic" channel on Freenode is now just for openATTIC, and
everything new regarding the mgr module will be discussed in the new
channel o
On 04/04/2018 08:58 PM, Robert Stanford wrote:
>
> I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters. Why was that? Is this still the case? We
> have a lot of problems automating deployment without ceph-deploy.
>
>
In the end it is just a Pytho