If you use 'radosgw-admin bi list', you can get a listing of the raw bucket
index. I'll bet that the objects aren't being shown at the S3 layer
because something is wrong with them. But since they are in the bi-list,
you'll get 409 BucketNotEmpty.
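For reference, a minimal sketch of that check (the bucket name is a placeholder):

# radosgw-admin bi list --bucket=<bucket-name>

If this returns entries while an S3-level listing (e.g. 's3cmd ls s3://<bucket-name>') comes back empty, the index and the S3 view have diverged, which would explain the 409.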
At this point, I've found two different approaches
I'm testing RBD as VMware datastores. I'm currently testing with
krbd+LVM exported via a tgt target hosted on a hypervisor.
My Ceph cluster is HDD backed.
In order to help with write latency, I added an SSD drive to my hypervisor
and made it a writeback cache for the rbd via LVM. So far I've managed
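For anyone curious about the cache setup, a rough sketch of the LVM side, assuming a hypothetical SSD at /dev/sdb and an existing VG vg_rbd holding the krbd-backed LV lv_datastore:

# pvcreate /dev/sdb
# vgextend vg_rbd /dev/sdb
# lvcreate --type cache --cachemode writeback -L 100G -n lv_cache vg_rbd/lv_datastore /dev/sdb

This attaches a 100G writeback cache on the SSD in front of the RBD-backed origin LV; the device, VG/LV names and cache size are all placeholders.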
I noticed this morning that all four of our rados gateways (luminous
12.2.2) hung at logrotate time overnight. The last message logged was:
2017-12-08 03:21:01.897363 7fac46176700 0 ERROR: failed to clone shard,
completion_mgr.get_next() returned ret=-125
one of the 3 nodes recorded more de
Hello Team,
We are aware that ceph-disk is deprecated in 12.2.2. As part of my
testing, I can still use the ceph-disk utility for creating OSDs in
12.2.2.
Here I'm getting an activation error from the second attempt onwards.
The first time, the OSDs are created without any issue.
===
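For context, a minimal sketch of the ceph-disk flow being tested (the device path is a placeholder):

# ceph-disk prepare --bluestore /dev/sdb
# ceph-disk activate /dev/sdb1

It is the activate step that reportedly fails from the second attempt onwards.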
Thanks David for the suggestion, let me try that :)
On Fri, Dec 8, 2017 at 9:28 PM, David Turner wrote:
> Why are you rebooting the node? You should only need to restart the ceph
> services. You need all of your MONs to be running Luminous before any
> Luminous OSDs will be accepted by the clu
Why are you rebooting the node? You should only need to restart the ceph
services. You need all of your MONs to be running Luminous before any
Luminous OSDs will be accepted by the cluster. So you should update the
packages on each server, restart the MONs, then restart your OSDs. After
you res
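A rough sketch of that order, assuming systemd-managed daemons and stock unit names:

# ceph versions                       # once the MONs run Luminous, shows which daemons still run older code
# systemctl restart ceph-mon.target   # on every MON host first, one at a time
# systemctl restart ceph-osd.target   # then on each OSD host, one host at a time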
Hello Team,
I have a 5-node cluster running Kraken 11.2.0 with EC 4+1.
My plan is to upgrade all 5 nodes to 12.2.2 Luminous without any downtime.
I tried the procedure below on the first node.
I commented out the following directive in ceph.conf:
enable experimental unrecoverable data corrupting features = bluestore
We have graphs for network usage in Grafana. We even have aggregate
graphs for projects. For my team, we specifically have graphs for the Ceph
cluster osd public network, osd private network, rgw network, and mon
network. You should be able to do something similar for each of the
servers in you
You're correct, I was mistaken that a bucket could be renamed. How many
buckets do you have in your RGW? I got away from the buckets that wouldn't
delete by recreating the ceph pool for the data since it was only backup
data at the time.
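For completeness, the pool-recreation route is roughly the following, assuming the default RGW data pool name and that the data really is disposable:

# ceph osd pool delete default.rgw.buckets.data default.rgw.buckets.data --yes-i-really-really-mean-it
# ceph osd pool create default.rgw.buckets.data 64 64

The pool name and PG counts are placeholders; deletion also requires mon_allow_pool_delete to be set and destroys every bucket stored in that pool.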
On Fri, Dec 8, 2017 at 10:10 AM Martin Emrich
wrote:
> Hi!
From: Gregory Farnum [gfar...@redhat.com]
Sent: 07 December 2017 21:57
To: Vasilakakos, George (STFC,RAL,SC)
Cc: drakonst...@gmail.com; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Sudden omap growth on some OSDs
On Thu, Dec 7, 2017 at 4:41 AM
Here are some random samples I recorded in the past 30 minutes.
Avg block size   Throughput     IOPS
11 kB            10542 kB/s     909 op/s
12 kB            15397 kB/s     1247 op/s
26 kB            34306 kB/s     1307 op/s
33 kB            48509 kB/s     1465 op/s
59 kB            59333 kB/s     999 op/s
172 kB           101939 kB/s    590 op/s
104 kB
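(For what it's worth, the samples are internally consistent: throughput ≈ block size × op/s, e.g. 11 kB × 909 op/s ≈ 10,000 kB/s, close to the 10542 kB/s recorded.)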
Hello Brad,
> I see others have answered these questions but I'll provide the link
> to the relevant section of the docs here for those that may read this
> later.
>
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/#adding-monitors
>
Thanks for the link, I think I have read th
Hi!
I found no way to rename the bucket. Neither s3cmd nor radosgw-admin offers a
renaming option (even Amazon S3 does not support renaming).
Deleting the objects did not work:
# s3cmd rb s3://bucket -r
WARNING: Bucket is not empty. Removing all the objects from it first. This may
take some tim
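(The RGW-side counterpart that gets tried later in this thread is along the lines of:

# radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects

with the bucket name as a placeholder.)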
On Fri, Dec 8, 2017 at 10:04 PM, Florent B wrote:
> When I look in MDS slow requests I have a few like this :
>
> {
> "description": "client_request(client.460346000:5211
> setfilelockrule 1, type 2, owner 9688352835732396778, pid 660, start 0,
> length 0, wait 1 #0x100017da2aa 2017-12
Hi
we are planning to replace our NFS infra with CephFS (Luminous). Our use
case for CephFS is mounting directories via the kernel client (not FUSE, for
performance reasons). The CephFS root directory is logically split into
subdirs, each representing a separate project with its own source code
a
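For reference, a minimal sketch of such a kernel-client mount, with the monitor address, credentials and project subdir as placeholders:

# mount -t ceph 192.168.0.1:6789:/projects/projA /mnt/projA -o name=projA,secretfile=/etc/ceph/projA.secret

Each project subdir can be mounted separately this way.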
On Thu, Dec 7, 2017 at 3:40 PM, Burkhard Linke
wrote:
> Hi,
>
>
> we have upgraded our cluster to luminous 12.2.2 and wanted to use a second
> MDS for HA purposes. Upgrade itself went well, setting up the second MDS
> from the former standby-replay configuration worked, too.
>
>
> But upon load bo
Hi,
Did you see http://docs.ceph.com/docs/master/install/get-packages/ ? It
contains details on how to add the apt repos provided by the Ceph project.
You may also want to consider 16.04 if this is a production install, as
17.10 has a pretty short life (
https://www.ubuntu.com/info/release-end
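A rough sketch of what that page boils down to for a release ceph.com actually builds packages for (e.g. xenial), run as root:

# wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
# echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main > /etc/apt/sources.list.d/ceph.list
# apt update

Note that lsb_release -sc must resolve to a codename the repo carries, which is part of the 16.04 vs 17.10 point above.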
First off, you can rename a bucket and create a new one for the application
to use. You can also unlink the bucket so it is no longer owned by the
access-key/user that created it. That should get your application back on
its feet.
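A hedged sketch of the unlink step (bucket name and uid are placeholders):

# radosgw-admin bucket unlink --bucket=<bucket-name> --uid=<user-id>

After that the application can be pointed at a freshly created bucket under the same user.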
I have had very little success with bypass-gc, although I think it
On 08 Dec 2017 14:49, Florent B wrote:
On 08/12/2017 14:29, Yan, Zheng wrote:
On Fri, Dec 8, 2017 at 6:51 PM, Florent B wrote:
I don't know, I didn't touch that setting. Which one is recommended?
If multiple Dovecot instances are running at the same time and they
all modify the same fil
On Fri, Dec 8, 2017 at 6:51 PM, Florent B wrote:
> I don't know, I didn't touch that setting. Which one is recommended?
>
>
If multiple Dovecot instances are running at the same time and they
all modify the same files, you need to set fuse_disable_pagecache to
true.
> On 08/12/2017 11:49, Alex
Hi,
Which repository should I take for Luminous under Ubuntu 17.10?
I want a totally new install with ceph-deploy, no upgrade.
Is there any good tutorial for a fresh install, incl. BlueStore?
--
Regards,
Markus Goldberg
--
Markus G
Followup:
I eventually gave up trying to salvage the bucket. The bucket is supposed to
have ca. 11 objects, every attempt to "bucket index check --fix" increased
that number by 11, so something is very wrong.
Also, deleting the bucket with "radosgw-admin bucket rm --purge-objects" failed with
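(For anyone following along, the index-repair command being quoted is presumably the full form:

# radosgw-admin bucket check --fix --check-objects --bucket=<bucket-name>

with the bucket name as a placeholder.)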
Have you disabled the FUSE pagecache in your clients' ceph.conf?
[client]
fuse_disable_pagecache = true
- Original Message -
From: "Florent Bautista"
To: "ceph-users"
Sent: Friday, 8 December 2017 10:54:59
Subject: Re: [ceph-users] Corrupted files on CephFS since Luminous upgrade
On 08/12/2
On 12/08/2017 10:27 AM, Florent B wrote:
Hi everyone,
A few days ago I upgraded a cluster from Kraken to Luminous.
I have a few mail servers on it, running Ceph-Fuse & Dovecot.
And since the day of the upgrade, Dovecot has been reporting corrupted files
on a large account:
doveadm(myu...@mydoma
4M block sizes you will only need 22.5 iops
On 2017-12-08 09:59, Maged Mokhtar wrote:
> Hi Russell,
>
> It is probably due to the difference in block sizes used in the test vs your
> cluster load. You have a latency problem which is limiting your max write
> iops to around 2.5K. For large b
Hi Russell,
It is probably due to the difference in block sizes used in the test vs
your cluster load. You have a latency problem which is limiting your max
write iops to around 2.5K. For large block sizes you do not need that
many iops, for example if you write in 4M block sizes you will only ne
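(To make the arithmetic explicit: 22.5 op/s × 4 MB ≈ 90 MB/s, so the same client bandwidth that needs thousands of small 4 kB writes per second needs only a couple of dozen 4 MB writes per second, which is why a ~2.5K IOPS latency ceiling hurts small-block workloads far more than large-block ones.)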