Re: [ceph-users] rgw gives MethodNotAllowed for OPTIONS?

2018-02-12 Thread Piers Haken
So I put an nginx proxy in front of rgw since I couldn't find any definitive answer on whether or not it allows OPTIONS. Now the browser is doing a POST, and it's still getting a MethodNotAllowed response. This is a fresh install of Luminous. Is this an rgw error or a civetweb error? I found an
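For reference, the preflight can be reproduced by hand against the proxy or against civetweb directly; the host comes from the message above, while the bucket, key and origin are placeholders:

curl -i -X OPTIONS "http://storage-test01:7480/mybucket/mykey" \
  -H "Origin: http://app.example.com" \
  -H "Access-Control-Request-Method: PUT"
# with no CORS rule configured on the bucket, an error response here is expected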

[ceph-users] rgw gives MethodNotAllowed for OPTIONS?

2018-02-12 Thread Piers Haken
I'm trying to do direct-from-browser upload to rgw using pre-signed urls, and I'm getting stuck because the browser is doing a pre-flight OPTIONS request and rgw is giving me a MethodNotAllowed response. Is this supported? OPTIONS http://storage-test01:7480/ HTTP/1.1 Host: storage-test01:7480 C
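One thing worth checking (a sketch, assuming s3cmd is available and "mybucket" stands in for the real bucket): rgw answers preflight requests based on the bucket's CORS configuration, so a policy has to be set on the bucket first:

cat > cors.xml <<'EOF'
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
EOF
s3cmd setcors cors.xml s3://mybucket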

Re: [ceph-users] ceph luminous source packages

2018-02-12 Thread Mike O'Connor
On 13/02/2018 11:19 AM, Brad Hubbard wrote: > On Tue, Feb 13, 2018 at 10:23 AM, Mike O'Connor wrote: >> Hi All >> >> Where can I find the source packages that the Proxmox Ceph Luminous was >> built from ? > You can find any source packages we release on http://download.ceph.com/ > > You'd have to

Re: [ceph-users] ceph luminous source packages

2018-02-12 Thread Brad Hubbard
On Tue, Feb 13, 2018 at 10:23 AM, Mike O'Connor wrote: > Hi All > > Where can I find the source packages that the Proxmox Ceph Luminous was > built from ? You can find any source packages we release on http://download.ceph.com/ You'd have to ask Proxmox which one they used and whether they modif
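For the upstream sources, something along these lines should work (the deb-src line is an assumption based on Proxmox 5 being built on Debian Stretch; adjust the codename to match your host):

echo 'deb-src http://download.ceph.com/debian-luminous stretch main' >> /etc/apt/sources.list.d/ceph-src.list
apt-get update
apt-get source ceph

Whether Proxmox patched those sources before building is something only their packaging can answer, as noted above.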

[ceph-users] Understanding/correcting sudden onslaught of unfound objects

2018-02-12 Thread Graham Allan
Hi, For the past few weeks I've been seeing a large number of pgs on our main erasure coded pool being flagged inconsistent, followed by them becoming active+recovery_wait+inconsistent with unfound objects. The cluster is currently running luminous 12.2.2 but has in the past also run its way
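A few commands that help narrow this kind of thing down (the pg id 70.82 is just a placeholder):

ceph health detail | grep -i unfound
ceph pg 70.82 list_missing     # which objects are unfound
ceph pg 70.82 query            # peering/recovery state, which OSDs were probed
# only as a last resort, once every possible source OSD has been brought up and probed:
# ceph pg 70.82 mark_unfound_lost revert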

Re: [ceph-users] rbd feature overheads

2018-02-12 Thread Blair Bethwaite
Thanks Ilya, We can probably handle ~6.2MB for a 100TB volume. Is it reasonable to expect a librbd client such as QEMU to only hold one object-map per guest? Cheers, On 12 February 2018 at 21:01, Ilya Dryomov wrote: > On Mon, Feb 12, 2018 at 6:25 AM, Blair Bethwaite > wrote: > > Hi all, > > >
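For reference, the ~6.2MB figure matches a back-of-the-envelope calculation at the default 4MiB object size, with the object map keeping 2 bits per object:

echo $(( 100 * 1024 * 1024 / 4 ))   # 26214400 objects in a 100 TiB image
echo $(( 26214400 * 2 / 8 ))        # 6553600 bytes of object map, i.e. ~6.25 MiB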

[ceph-users] ceph luminous source packages

2018-02-12 Thread Mike O'Connor
Hi All, Where can I find the source packages that the Proxmox Ceph Luminous was built from? Mike

Re: [ceph-users] mgr[influx] Cannot transmit statistics: influxdb python module not found.

2018-02-12 Thread Marc Roos
Why not use collectd? CentOS 7 RPMs should do fine. On Feb 12, 2018 9:50 PM, Benjeman Meekhof wrote: > In our case I think we grabbed the SRPM from Fedora and rebuilt it on > Scientific Linux (another RHEL derivative). Presumably the binary > didn't work or I would have installed it direct

Re: [ceph-users] Help rebalancing OSD usage, Luminous 12.2.2

2018-02-12 Thread Bryan Banister
Hi Janne and others, We used the “ceph osd reweight-by-utilization” command to move a small amount of data off of the top four OSDs by utilization. Then we updated the pg_num and pgp_num on the pool from 512 to 1024, which started moving roughly 50% of the objects around as a result. The unfo
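For anyone following along, a sketch of the commands involved (the threshold and the pool name are illustrative):

ceph osd df tree                             # spot the over-full OSDs first
ceph osd test-reweight-by-utilization 120    # dry run: report what would change
ceph osd reweight-by-utilization 120
ceph osd pool set <pool> pg_num 1024
ceph osd pool set <pool> pgp_num 1024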

[ceph-users] [rgw] Underscore at the beginning of access key does not work after upgrade Jewel->Luminous

2018-02-12 Thread Rudenko Aleksandr
Hi friends, I have an rgw user (_sc) with the same access key: radosgw-admin metadata user info --uid _sc { "user_id": "_sc", "display_name": "_sc", "email": "", "suspended": 0, "max_buckets": 0, "auid": 0, "subusers": [], "keys": [ { "user":
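If the old key has become unusable after the upgrade, a new S3 key can be generated for the same user and the old one removed once clients have switched (a sketch; the old access key value is elided above):

radosgw-admin key create --uid=_sc --key-type=s3 --gen-access-key --gen-secret
radosgw-admin key rm --uid=_sc --key-type=s3 --access-key=<old-access-key>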

Re: [ceph-users] mgr[influx] Cannot transmit statistics: influxdb python module not found.

2018-02-12 Thread Benjeman Meekhof
In our case I think we grabbed the SRPM from Fedora and rebuilt it on Scientific Linux (another RHEL derivative). Presumably the binary didn't work or I would have installed it directly. I'm not quite sure why it hasn't migrated to EPEL yet. I haven't tried the SRPM for latest releases, we're ac
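Roughly what that rebuild looks like (a sketch; the exact SRPM name depends on the Fedora release it was taken from, and rpm-build plus the package's BuildRequires need to be installed):

rpmbuild --rebuild python-influxdb-<version>.src.rpm
yum install ~/rpmbuild/RPMS/noarch/python-influxdb-<version>.noarch.rpm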

Re: [ceph-users] OSDs with primary affinity 0 still used for primary PG

2018-02-12 Thread David Turner
If you look at the PGs that are primary on an OSD that has primary affinity 0, you'll find that they are only on OSDs with primary affinity of 0, so 1 of them has to take the reins or nobody would be responsible for the PG. To prevent this from happening, you would need to configure your crush map
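An easy way to see which PGs still pick a given OSD as primary (osd id 4 is just an example):

ceph osd primary-affinity osd.4 0
ceph pg ls-by-primary 4 | head    # any PGs listed here could not elect a higher-affinity primary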

Re: [ceph-users] Ceph-fuse : unmounted but ceph-fuse process not killed

2018-02-12 Thread David Turner
Why are you using force and lazy options to umount cephfs? Those should only be done if there are problems unmounting the volume. Lazy umount will indeed leave things running, but just quickly remove the FS from its mount point. You should rarely use umount -f and even more rarely use umount -l
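For a normal ceph-fuse unmount (the mount point is an example path):

fusermount -u /mnt/cephfs    # clean unmount of a FUSE mount as the mounting user
umount /mnt/cephfs           # equivalent, as root
# reserve umount -f (and especially umount -l) for mounts that are actually hung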

Re: [ceph-users] Is there a "set pool readonly" command?

2018-02-12 Thread Reed Dier
I do know that there is a pause flag in Ceph. What I do not know is if that also pauses recovery traffic, in addition to client traffic. Also worth mentioning, this is a cluster-wide flag, not a pool level flag. Reed > On Feb 11, 2018, at 11:45 AM, David Turner wrote: > > If you set min_size

Re: [ceph-users] Luminous 12.2.3 release date?

2018-02-12 Thread Abhishek Lekshmanan
Hans van den Bogert writes: > Hi Wido, > > Did you ever get an answer? I'm eager to know as well. We're currently testing 12.2.3; once the QE process completes we can publish the packages, hopefully by the end of this week > > > Hans > > On Tue, Jan 30, 2018 at 10:35 AM, Wido den Hollander wrot

[ceph-users] mgr[influx] Cannot transmit statistics: influxdb python module not found.

2018-02-12 Thread knawnd
Dear all, I'd like to store ceph luminous metrics into influxdb. It seems like the influx plugin has already been backported for luminous: rpm -ql ceph-mgr-12.2.2-0.el7.x86_64|grep -i influx /usr/lib64/ceph/mgr/influx /usr/lib64/ceph/mgr/influx/__init__.py /usr/lib64/ceph/mgr/influx/__init__.pyc /us
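A minimal sequence to get the module going, assuming the python influxdb client simply isn't installed on the mgr host:

pip install influxdb    # or a distro python-influxdb package, where one exists
ceph mgr module enable influx
ceph mgr module ls      # "influx" should now appear under the enabled modules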

[ceph-users] OSDs with primary affinity 0 still used for primary PG

2018-02-12 Thread Teun Docter
Hi, I'm looking into storing the primary copy on SSDs, and replicas on spinners. One way to achieve this should be the primary affinity setting, as outlined in this post: https://www.sebastien-han.fr/blog/2015/08/06/ceph-get-the-best-of-your-ssd-with-primary-affinity So I've deployed a small te
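The relevant knob, for reference (OSD ids are examples; the spinners get affinity 0 while the SSDs keep the default of 1):

for id in 0 1 2 3; do
  ceph osd primary-affinity osd.$id 0    # HDD OSDs: never preferred as primary
done
ceph osd tree                            # the PRI-AFF column shows the result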

[ceph-users] PG replication issues

2018-02-12 Thread Alexandru Cucu
Hello, Warning, this is a long story! There's a TL;DR; close to the end. We are replacing some of our spinning drives with SSDs. We have 14 OSD nodes with 12 drives each. We are replacing 4 drives from each node with SSDs. The cluster is running Ceph Jewel (10.2.7). The affected pool had min_size
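For reference, the usual Jewel-era sequence for swapping a drive out is roughly (osd.7 is an example id):

ceph osd crush reweight osd.7 0    # drain the OSD gradually first
# once it is empty and the cluster is healthy again:
ceph osd out 7
systemctl stop ceph-osd@7
ceph osd crush remove osd.7
ceph auth del osd.7
ceph osd rm 7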

Re: [ceph-users] Bluestore with so many small files

2018-02-12 Thread Wido den Hollander
On 02/12/2018 03:16 PM, Behnam Loghmani wrote: So you mean that rocksdb and the osdmap filled about 40G of disk for only 800k files? I think that's not reasonable; it's too high. Could you check the output of the OSDs using a 'perf dump' on their admin socket? The 'bluestore' and 'bluefs' sectio
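One way to pull those sections out (osd.0 is an example; jq is assumed to be installed):

ceph daemon osd.0 perf dump > /tmp/osd.0-perf.json
jq '.bluefs, .bluestore' /tmp/osd.0-perf.json | less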

Re: [ceph-users] Ceph Day Germany :)

2018-02-12 Thread Kai Wagner
Hi Wido, how do you know about that beforehand? There's no official upcoming event on the ceph.com page? Just because I'm curious :) Thanks Kai On 12.02.2018 10:39, Wido den Hollander wrote: > The next one is in London on April 19th

Re: [ceph-users] Is there a "set pool readonly" command?

2018-02-12 Thread David Turner
The pause flag also pauses recovery traffic. It is literally a flag to stop anything and everything in the cluster so you can get an expert in to prevent something even worse from happening. On Mon, Feb 12, 2018 at 1:56 PM Reed Dier wrote: > I do know that there is a pause flag in Ceph. > > Wha
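For reference (a sketch): the cluster-wide flag, plus the separate flags for holding only recovery and backfill:

ceph osd set pause        # cluster-wide stop of client I/O
ceph osd unset pause
ceph osd set norecover    # holds recovery only
ceph osd set nobackfill   # holds backfill only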

Re: [ceph-users] ceph mons de-synced from rest of cluster?

2018-02-12 Thread Gregory Farnum
On Sun, Feb 11, 2018 at 8:19 PM Chris Apsey wrote: > All, > > Recently doubled the number of OSDs in our cluster, and towards the end > of the rebalancing, I noticed that recovery IO fell to nothing and that > the ceph mons eventually looked like this when I ran ceph -s > >cluster: >
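A couple of generic checks when the mon-reported state looks stale (the mon id is assumed to match the short hostname, which is the common default):

ceph quorum_status -f json-pretty
ceph daemon mon.$(hostname -s) mon_status    # run on a monitor host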

Re: [ceph-users] Rocksdb: Try to delete WAL files size....

2018-02-12 Thread Dietmar Rieder
Anyone? On 9 February 2018 09:59:54 CET, Dietmar Rieder wrote: >Hi, > >we are running ceph version 12.2.2 (10 nodes, 240 OSDs, 3 mon). While >monitoring the WAL db used bytes we noticed that there are some OSDs >that use proportionally more WAL db bytes than others (800MB vs 300MB). >These OSDs
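For comparing OSDs, the counters in question can be pulled per OSD from the admin socket (osd ids are examples, run on the host carrying them; jq assumed installed):

for id in 0 1 2; do
  echo "osd.$id"
  ceph daemon osd.$id perf dump | jq '.bluefs | {wal_used_bytes, db_used_bytes}'
done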

Re: [ceph-users] Luminous 12.2.3 release date?

2018-02-12 Thread Hans van den Bogert
Hi Wido, Did you ever get an answer? I'm eager to know as well. Hans On Tue, Jan 30, 2018 at 10:35 AM, Wido den Hollander wrote: > Hi, > > Is there a ETA yet for 12.2.3? Looking at the tracker there aren't that many > outstanding issues: http://tracker.ceph.com/projects/ceph/roadmap > > On Git

Re: [ceph-users] Bluestore with so many small files

2018-02-12 Thread Behnam Loghmani
So you mean that rocksdb and the osdmap filled about 40G of disk for only 800k files? I think that's not reasonable; it's too high. On Mon, Feb 12, 2018 at 5:06 PM, David Turner wrote: > Some of your overhead is the WAL and rocksdb that are on the OSDs. The WAL > is pretty static in size, but rocksdb g

Re: [ceph-users] Bluestore with so many small files

2018-02-12 Thread David Turner
Some of your overhead is the WAL and rocksdb that are on the OSDs. The WAL is pretty static in size, but rocksdb grows with the amount of objects you have. You also have copies of the osdmap on each osd. There's just overhead that adds up. The biggest is going to be rocksdb with how many objects yo

[ceph-users] Bluestore with so many small files

2018-02-12 Thread Behnam Loghmani
Hi there, I am using Ceph Luminous 12.2.2 with: 3 OSDs (each OSD is 100G, no WAL/DB separation), 3 mons, 1 rgw, cluster size 3. I stored lots of thumbnails with a very small size on ceph with radosgw. The actual size of the files is about 32G, but it filled 70G of each OSD. What's the reason of t
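One possible contributor, assuming the OSDs are on spinning disks with the default bluestore_min_alloc_size_hdd of 64KiB: every small object is rounded up to at least one allocation unit, so 800k thumbnails occupy far more than their logical 32G per replica. A rough calculation:

echo $(( 800000 * 64 / 1024 ))    # ~50000 MiB (~49 GiB) allocated per replica from rounding alone

With size 3 across three OSDs, each OSD carries a full replica, which together with rocksdb metadata would account for much of the ~70G observed.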

[ceph-users] NFS-Ganesha: Files disappearing?

2018-02-12 Thread Martin Emrich
Hi! I am trying out NFS-Ganesha-RGW (2.5.4 and also Git V2.5-stable) with Ceph 12.2.2. Mounting the RGW works fine, but if I try to archive all files, some paths seem to "disappear": ... tar: /store/testbucket/nhxYgfUgFivgzRxw: File removed before we read it tar: /store/testbucket/nlkijFwq

Re: [ceph-users] rbd feature overheads

2018-02-12 Thread Ilya Dryomov
On Mon, Feb 12, 2018 at 6:25 AM, Blair Bethwaite wrote: > Hi all, > > Wondering if anyone can clarify whether there are any significant overheads > from rbd features like object-map, fast-diff, etc. I'm interested in both > performance overheads from a latency and space perspective, e.g., can > ob
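For reference, checking and, if necessary, dropping the features on an image looks like this (pool and image names are examples):

rbd info rbd/vm-disk-1 | grep features
rbd feature disable rbd/vm-disk-1 fast-diff object-map    # fast-diff depends on object-map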

Re: [ceph-users] Ceph Day Germany :)

2018-02-12 Thread Kai Wagner
Sometimes I'm just blind. Too much ML traffic :D Thanks! On 12.02.2018 10:51, Wido den Hollander wrote: > Because I'm co-organizing it! :) I sent out a Call for Papers last > week to this list.

Re: [ceph-users] Ceph Day Germany :)

2018-02-12 Thread Wido den Hollander
On 02/12/2018 10:42 AM, Kai Wagner wrote: Hi Wido, how do you know about that beforehand? There's no official upcoming event on the ceph.com page? Because I'm co-organizing it! :) I sent out a Call for Papers last week to this list. Waiting for the page to come online on ceph.com, but t

Re: [ceph-users] Ceph Day Germany :)

2018-02-12 Thread Wido den Hollander
On 02/12/2018 12:33 AM, c...@elchaka.de wrote: Am 9. Februar 2018 11:51:08 MEZ schrieb Lenz Grimmer : Hi all, On 02/08/2018 11:23 AM, Martin Emrich wrote: I just want to thank all organizers and speakers for the awesome Ceph Day at Darmstadt, Germany yesterday. I learned of some cool stu