Hi Ceph team,
Can you explain to me how Ceph object deletion works? I have a bucket with
over 100M objects (object size ~50KB). When I delete objects to free space,
the deletion rate is very slow (about 30-33 objects/s). I want to tune the
cluster's performance but I do not clearly know how Ceph
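For what it's worth, one possible approach (a sketch only, assuming an S3 client such as s3cmd and a hypothetical bucket name "mybucket") is to let RGW expire the objects in the background via a bucket lifecycle rule instead of issuing individual DELETEs:

    # sketch: expire everything in the bucket via a lifecycle rule
    # (bucket name and 1-day expiry are examples)
    cat > lifecycle.xml <<'EOF'
    <LifecycleConfiguration>
      <Rule>
        <ID>expire-all</ID>
        <Filter><Prefix></Prefix></Filter>
        <Status>Enabled</Status>
        <Expiration><Days>1</Days></Expiration>
      </Rule>
    </LifecycleConfiguration>
    EOF
    s3cmd setlifecycle lifecycle.xml s3://mybucket

RGW then removes the expired objects during its lifecycle processing window rather than the client paying a round trip per DELETE.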
On 09/20/2019 01:52 PM, Gesiel Galvão Bernardes wrote:
> Hi,
> I'm testing Ceph with VMware, using the ceph-iscsi gateway. I am reading the
> documentation* and have doubts about some points:
>
> - If I understood correctly, in general terms, each VMFS datastore in VMware
> will correspond to an RBD image (consequently
Robert,
There is a storage company that integrates tapes as OSDs for deep-cold Ceph,
but the code is not open source.
Regards
-Original Message-
From: Robert LeBlanc
Sent: Friday, September 20, 2019 23:28
To: Paul Emmerich
CC: ceph-users
Subject: [ceph-users] Re: RGW backup
On Fri, Sep 20, 2019 at 11:10 AM Paul Emmerich wrote:
>
> Probably easiest if you get a tape library that supports S3. You might
> even have some luck with radosgw's cloud sync module (but I wouldn't
> count on it, Octopus should improve things, though)
>
> Just intercepting PUT requests isn't tha
The deep scrub of the PG updated the cluster status, and the large omap warning is gone.
HEALTH_OK!
On Fri., Sep. 20, 2019, 2:31 p.m. shubjero, wrote:
> Still trying to solve this one.
>
> Here is the corresponding log entry when the large omap object was found:
>
> ceph-osd.1284.log.2.gz:2019-09-18 11:43:39
Thank you for the response, but of course I had tried this before asking.
It has no effect; SELinux still prevents opening authorized_keys.
I suppose there is something wrong with the file contexts on my CephFS. For
instance, 'ls -Z' shows just a '?' as the context, and chcon fails with
"Operation not
Thanks Casey. I will issue a scrub for the pg that contains this
object to speed things along. Will report back when that's done.
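Roughly, locating and scrubbing that PG could look like this (the pool name is an assumption; usage objects normally live in the RGW log pool, and the pg id is a placeholder):

    # find the PG that holds the reported usage object, then deep-scrub it
    ceph osd map default.rgw.log usage.22     # prints the pg id, e.g. 26.xx
    ceph pg deep-scrub 26.xx                  # substitute the pg id printed above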
On Fri, Sep 20, 2019 at 2:50 PM Casey Bodley wrote:
>
> Hi Jared,
>
> My understanding is that these 'large omap object' warnings are only
> issued or cleared during s
On Fri, Sep 20, 2019 at 1:31 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hi,
>
> I cannot get rid of
> pgs unknown
> because there were 3 disks that couldn't be started.
> Therefore I destroyed the relevant OSD and re-created it for the
> relevant disks.
and you had it configured to r
On 9/19/19 11:52 PM, Hanyu Liu wrote:
Hi,
We are looking for a way to set a timeout on requests to the rados gateway.
If a request takes too long, just kill it.
1. Is there a command that can set the timeout?
there isn't, no
2. This parameter looks interesting. Can I know what the "open
Hello Gesiel,
Some iSCSI settings are stored in an object; this object is stored in the
rbd pool, hence the rbd pool is required.
Your LUNs are mapped to {pool}/{rbdimage}. You should treat these as you
treat pools and RBD images in general.
In smallish deployments I try to keep it simple and m
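A rough gwcli sketch of that mapping (the image name, size and IQNs below are placeholders, and the syntax is an approximation of the ceph-iscsi workflow):

    # inside gwcli
    cd /disks
    create pool=rbd image=datastore1 size=2T      # backing RBD image for one LUN
    cd /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:igw/hosts
    create iqn.1998-01.com.vmware:esxi1           # the ESXi initiator
    disk add rbd/datastore1                       # expose the image to that initiator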
On Fri, Sep 20, 2019 at 8:55 PM Gesiel Galvão Bernardes
wrote:
>
> Hi,
> I'm testing Ceph with VMware, using the ceph-iscsi gateway. I am reading the
> documentation* and have doubts about some points:
>
> - If I understood correctly, in general terms, each VMFS datastore in VMware will
> correspond to an RBD image (c
Hi,
I'm testing Ceph with VMware, using the ceph-iscsi gateway. I am reading the
documentation* and have doubts about some points:
- If I understood correctly, in general terms, each VMFS datastore in VMware
will correspond to an RBD image (consequently, in one RBD image I will possibly
have many VMware disks). Is that correc
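A minimal illustration of the one-image-per-datastore idea described in the question (image name and size are made up):

    # create one RBD image to back one VMFS datastore, then verify it
    rbd create rbd/vmfs_datastore1 --size 2T
    rbd info rbd/vmfs_datastore1

The image would then be exported through the ceph-iscsi gateway as a single LUN, and VMware carves its VMDKs out of the VMFS filesystem it puts on top.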
Hi Jared,
My understanding is that these 'large omap object' warnings are only
issued or cleared during scrub, so I'd expect them to go away the next
time the usage objects get scrubbed.
On 9/20/19 2:31 PM, shubjero wrote:
Still trying to solve this one.
Here is the corresponding log entry
Still trying to solve this one.
Here is the corresponding log entry when the large omap object was found:
ceph-osd.1284.log.2.gz:2019-09-18 11:43:39.237 7fcd68f96700 0
log_channel(cluster) log [WRN] : Large omap object found. Object:
26:86e4c833:::usage.22:head Key count: 2009548 Size (bytes): 3
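If the key count needs to come down rather than just be rescanned, trimming old usage records is the usual lever (a sketch; the cutoff date is an example and the pool/namespace are the defaults, which may differ in this cluster):

    # trim old RGW usage records, then re-count the keys on the reported object
    radosgw-admin usage trim --end-date=2019-08-01
    rados -p default.rgw.log --namespace usage listomapkeys usage.22 | wc -l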
Probably easiest if you get a tape library that supports S3. You might
even have some luck with radosgw's cloud sync module (but I wouldn't
count on it, Octopus should improve things, though)
Just intercepting PUT requests isn't that easy because of multi-part
stuff and load balancing. I.e., if yo
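For reference, wiring up the cloud sync module mentioned above looks roughly like this (zone names, endpoint and credentials are placeholders):

    # sketch: a dedicated zone whose tier type forwards objects to an external S3 endpoint
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cloud-backup --tier-type=cloud
    radosgw-admin zone modify --rgw-zone=cloud-backup \
      --tier-config=connection.endpoint=https://backup-s3.example.com,connection.access_key=KEY,connection.secret=SECRET
    radosgw-admin period update --commit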
The question was posed: "What if we want to back up our RGW data to
tape?" Is anyone doing this? Any suggestions? We could probably just
catch any PUT requests and queue them to be written to tape. Our
dataset is so large that traditional backup solutions (GFS) don't seem
feasible, so probably a singl
Hi all,
I regularly check the MDS performance graphs in the dashboard;
the requests-per-second value is especially interesting in my case.
Since our upgrade to Nautilus the values in the activity column are
still refreshed every 5 seconds (I believe), but the graphs have not
been refreshed since that u
Hi,
ceph health status reports unknown objects.
All objects reside on the same OSD, osd.9.
When I execute ceph pg query I get this (endless) output:
2019-09-20 14:47:35.922 7f937144f700 0 --1- 10.97.206.91:0/2060489821
>> v1:10.97.206.93:7054/15812 conn(0x7f935407c120 0x7f935407b120 :-1
s=CONNECTING_SE
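To narrow this down, it may help to list the PGs mapped to that OSD and query one of them directly (the pg id below is a placeholder taken from that listing):

    # list PGs on osd.9, then query a specific one
    ceph pg ls-by-osd osd.9
    ceph pg 1.2f query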
Hi,
I am using Ceph Mimic in a small test setup with the configuration below.
OS: Ubuntu 18.04
1 node running (mon, mds, mgr) + 4-core CPU, 4GB RAM and 1 Gb LAN
3 nodes each having 2 OSDs, 2TB disks + 2-core CPU, 4GB RAM and 1
Gb LAN
1 node acting as CephFS client + 2-core CPU and 4G
Hi,
I cannot get rid of
pgs unknown
because there were 3 disks that couldn't be started.
Therefore I destroyed the relevant OSDs and re-created them for the
relevant disks.
Then I added the 3 OSDs to the crushmap.
Regards
Thomas
On 20.09.2019 at 08:19, Ashley Merrick wrote:
> Your need to fix thi
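For reference, re-adding a destroyed-and-recreated OSD to CRUSH and checking the remaining unknown PGs could look roughly like this (OSD id, weight and host bucket are placeholders):

    # put the rebuilt OSD back into the CRUSH map, then check what is still unknown
    ceph osd crush add osd.7 1.0 host=node3
    ceph osd tree
    ceph health detail | grep -i unknown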
Dear all,
I found a partial solution to the problem and I also repeated a bit of testing,
see below.
# Client-side solution, works for single-client IO
The hard solution is to mount CephFS with the option "sync". This translates
any IO to direct IO and successfully throttles clients no ma
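For completeness, a kernel-client mount with that option might look like this (monitor address, mount point and credentials are placeholders):

    # mount CephFS with the "sync" option discussed above
    mount -t ceph mon1:6789:/ /mnt/cephfs \
      -o name=client1,secretfile=/etc/ceph/client1.secret,sync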