Re: [ceph-users] How to get Active set of OSD Map in serial order of osd index

2016-07-27 Thread Syed Hussain
Fundamentally, I wanted to know which chunks are allocated to which OSDs. This way I can preserve the array structure required for my Erasure Code. If all the chunks are placed in randomly ordered OSDs (like in Jerasure or ISA) then I lose that array structure required in the Encoding/Decoding algor

Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-27 Thread Naruszewicz, Maciej
Sure Nick, here they are: # ceph osd lspools 72 .rgw.control,73 .rgw,74 .rgw.gc,75 .log,76 .users.uid,77 .users,78 .users.swift,79 .rgw.buckets.index,80 .rgw.buckets.extra,81 .rgw.buckets,82 .rgw.root.backup,83 .rgw.root,84 logs,85 default.rgw.meta, Thanks for your help nonetheless! -Origi

Re: [ceph-users] RGW container deletion problem

2016-07-27 Thread Daniel Schneller
Bump On 2016-07-25 14:05:38 +0000, Daniel Schneller said: Hi! I created a bunch of test containers with some objects in them via RGW/Swift (Ubuntu, RGW via Apache, Ceph Hammer 0.94.1). Now I am trying to get rid of the test data. I manually started with one container: ~/rgwtest ➜ swift -v -V 1.0

[ceph-users] OSD host swap usage

2016-07-27 Thread Kenneth Waegeman
Hi all, When our OSD hosts have been running for some time, we start to see increased swap usage on a number of them. Some OSDs don't use swap for weeks, while others have a full (4G) swap and start filling swap again after we do a swapoff/swapon. We have 8 8TB OSDs and 2 cache SSDs on each host,

Re: [ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-27 Thread nick
I compared the pools with ours and I can see no difference, to be honest. The issue sounds like you cannot write into a specific pool (as get and delete work). Are all the filesystem permissions correct? Maybe another 'chown -R ceph:ceph' for all the OSD data dirs would help? Did you check th

Re: [ceph-users] cephfs - mds hardware recommendation for 40 million files and 500 users

2016-07-27 Thread John Spray
On Tue, Jul 26, 2016 at 9:53 PM, Mike Miller wrote: > Hi, > > we have started to migrate user homes to cephfs with the mds server having 32GB > RAM. With multiple rsync threads copying, this seems to be undersized; the > mds process consumes all 32GB of memory, fitting about 4 million caps. > > Any hardware r
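
If memory serves, the Jewel-era MDS cache limit is an inode count rather than a byte limit, so under heavy rsync workloads it is worth setting explicitly. A rough ceph.conf sketch, with an illustrative value rather than a recommendation:

    # ceph.conf on the MDS host -- value is illustrative, size it to your RAM
    [mds]
    # Jewel counts cache entries (inodes), not bytes; the default is 100000
    mds cache size = 2000000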

Re: [ceph-users] OSD host swap usage

2016-07-27 Thread Christian Balzer
Hello, On Wed, 27 Jul 2016 10:21:34 +0200 Kenneth Waegeman wrote: > Hi all, > > When our OSD hosts have been running for some time, we start to see increased > swap usage on a number of them. Some OSDs don't use swap for weeks, > while others have a full (4G) swap and start filling swap again after

[ceph-users] how to list the objects stored in the specified placement group?

2016-07-27 Thread jerry
Hello everyone, I want to list the objects stored in a specified placement group through the rados API. Do you know how to do this?

Re: [ceph-users] how to list the objects stored in the specified placement group?

2016-07-27 Thread Wido den Hollander
> On 27 July 2016 at 12:48, jerry wrote: > > > Hello everyone, > > > I want to list the objects stored in a specified placement group through > the rados API, do you know how to deal with > it? As far as I know that's not possible. Placement Gr
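
While listing a PG's contents is not exposed at this level, the inverse lookup is: given an object name, the cluster will tell you which PG and OSDs it maps to. A quick sketch, with pool and object names as placeholders:

    # map an object name to its placement group and acting set
    ceph osd map mypool myobject
    # example output:
    # osdmap e123 pool 'mypool' (0) object 'myobject' -> pg 0.7fc1f406 (0.6) -> up [1,3,2] acting [1,3,2]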

Re: [ceph-users] Monitors not reaching quorum

2016-07-27 Thread Sergio A. de Carvalho Jr.
In my case, everything else running on the host seems to be okay. I'm wondering if the other problems you see aren't a side-effect of Ceph services running slow? What do you do to get around the problem when it happens? Disable syslog in Ceph? What version of Ceph and OS are you using? On Wed, J

Re: [ceph-users] Monitors not reaching quorum

2016-07-27 Thread Sean Crosby
Oh, my problems weren't on Ceph nodes. I've seen this problem on non-Ceph nodes. The symptoms you had of unexplained weirdness with services (in your case, Ceph), and syslog lagging 10mins behind just reminded me of symptoms I've seen before where the sending of syslog messages to a central syslog

Re: [ceph-users] syslog broke my cluster

2016-07-27 Thread Sergio A. de Carvalho Jr.
I guess the point I was trying to make is that, ideally, Ceph would isolate its logging system in a way that a problem with writing the logs wouldn't affect the operation of the core Ceph services. In my case, all other services running on the machine (ssh, ntp, cron, etc.) are operating normally,

Re: [ceph-users] syslog broke my cluster

2016-07-27 Thread Karsten Heymann
Hi, The syslog socket will block if it can't deliver its logs. This happens, for example, if logs are forwarded to a remote loghost via TCP and the remote server becomes unavailable. Best Karsten
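
If rsyslog is the daemon in question, a queued forwarding action keeps an unreachable loghost from blocking local logging. A sketch in legacy rsyslog syntax, with the loghost name as a placeholder:

    # /etc/rsyslog.conf -- queue the forwarding action so a dead loghost can't block
    $ActionQueueType LinkedList        # in-memory queue for this action
    $ActionQueueFileName fwdq          # spill to disk if the queue fills up
    $ActionResumeRetryCount -1         # retry forever instead of suspending
    $ActionQueueSaveOnShutdown on      # persist queued messages across restarts
    *.* @@loghost.example.com:514      # forward everything via TCP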

[ceph-users] Unsubscribe

2016-07-27 Thread Jimmy Stemple
[ceph-users] Unsubscribe Sent from my iPhone

Re: [ceph-users] OSD host swap usage

2016-07-27 Thread George Shuklin
Check the NUMA status in the BIOSes. Sometimes Linux swaps instead of migrating tasks between NUMA nodes (inside one host). Set "interleave" or "disable" to see the difference. On 07/27/2016 11:21 AM, Kenneth Waegeman wrote: Hi all, When our OSD hosts have been running for some time, we start to see increased
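
Before rebooting into the BIOS, it may be worth checking the NUMA picture from Linux first; a diagnostic sketch:

    # show NUMA nodes, their memory sizes and free memory
    numactl --hardware
    # a non-zero zone_reclaim_mode makes the kernel reclaim/swap locally
    # rather than allocate from the other node
    cat /proc/sys/vm/zone_reclaim_mode
    # software analogue of the BIOS "interleave" setting, per process
    numactl --interleave=all <command>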

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread RDS
I had a similar issue when migrating from SSD to NVMe using Ubuntu. Read performance tanked using NVMe. Iostat showed each NVMe performing 30x more physical reads compared to SSD, but the MB/s was 1/6 the speed of the SSD. I set "blockdev --setra 128 /dev/nvmeX" and now performance is much bette
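
For reference, a sketch of checking and applying the readahead change (device name is a placeholder; the value is in 512-byte sectors, so 128 = 64KB, and it does not persist across reboots):

    blockdev --getra /dev/nvme0n1        # show current readahead in sectors
    blockdev --setra 128 /dev/nvme0n1    # set readahead to 128 sectors (64KB)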

Re: [ceph-users] OSD host swap usage

2016-07-27 Thread Kenneth Waegeman
On 27/07/16 10:59, Christian Balzer wrote: Hello, On Wed, 27 Jul 2016 10:21:34 +0200 Kenneth Waegeman wrote: Hi all, When our OSD hosts have been running for some time, we start to see increased swap usage on a number of them. Some OSDs don't use swap for weeks, while others have a full (4G) swap,

[ceph-users] Cleaning Up Failed Multipart Uploads

2016-07-27 Thread Brian Felton
Greetings, Background: If an object storage client re-uploads parts to a multipart object, RadosGW does not clean up all of the parts properly when the multipart upload is aborted or completed. You can read all of the gory details (including reproduction steps) in this bug report: http://tracker.
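
For anyone triaging this, the RGW garbage collector can be inspected and run by hand; whether it ever sees these orphaned parts is exactly what the bug report is about, so treat this as a diagnostic sketch only:

    radosgw-admin gc list      # show objects currently queued for garbage collection
    radosgw-admin gc process   # run a GC pass now instead of waiting for the next cycle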

Re: [ceph-users] Monitors not reaching quorum

2016-07-27 Thread Sergio A. de Carvalho Jr.
Got it. Are you sending logs to the central syslog servers via TCP (@@) or UDP (@)? I just realised that my test cluster sends logs via UDP to our usual central syslog server (as our production hosts normally do), but it is also configured to send logs via TCP to a testing Logstash VM. My suspic
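
For readers unfamiliar with the rsyslog shorthand, a minimal sketch with placeholder host names:

    # /etc/rsyslog.conf
    *.* @syslog.example.com:514       # single @  = UDP, fire-and-forget
    *.* @@logstash.example.com:514    # double @@ = TCP, can block if the peer stalls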

[ceph-users] Listing objects in a specified placement group / OSD

2016-07-27 Thread David Blundell
Hi, I wasn't sure if this is a ceph-users or ceph-devel question as it's about the API (users) but the answer may involve me writing a RADOS method (devel). At the moment in Ceph Jewel I can find which objects are held in an OSD or placement group by looking on the filesystem under /var/lib/ce

[ceph-users] Ceph libaio queue depth understanding

2016-07-27 Thread nick
Hi, we would like to write a test plan to benchmark our ceph cluster. We want to use fio for it. According to an article from Sébastien Han [1], ceph uses libaio with O_DIRECT for writing data to the journal. In a different blog article [2] I read that ceph uses D_SYNC as well for this. T
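
The journal write pattern from those articles boils down to single-threaded, direct, synchronous sequential writes. A sketch along those lines (the device path is a placeholder, and this writes raw data to the device, so use a scratch disk):

    # approximate the OSD journal pattern: O_DIRECT + synchronous writes, queue depth 1
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test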

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread EP Komarla
I am using aio engine in fio. Fio is working on rbd images - epk -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson Sent: Tuesday, July 26, 2016 6:27 PM To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph performance pattern

Re: [ceph-users] How to get Active set of OSD Map in serial order of osd index

2016-07-27 Thread Samuel Just
Think of the osd numbers as names. The plugin interface doesn't even tell you which shard maps to which osd. Why would it make a difference? -Sam On Wed, Jul 27, 2016 at 12:45 AM, Syed Hussain wrote: > Fundamentally, I wanted to know what chunks are allocated in which OSDs. > This way I can pre
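
That said, the shard-to-OSD mapping is observable from outside the plugin: for an erasure-coded pool, position i in a PG's acting set holds shard i. A sketch with a placeholder pgid:

    # show the up/acting sets for one PG of an EC pool
    ceph pg map 1.2f
    # example output:
    # osdmap e123 pg 1.2f (1.2f) -> up [4,0,7] acting [4,0,7]
    # i.e. shard 0 -> osd.4, shard 1 -> osd.0, shard 2 -> osd.7 (positional)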

[ceph-users] Error with instance snapshot in ceph storage : Image Pending Upload state.

2016-07-27 Thread Gaurav Goyal
Dear Ceph Team, I am trying to take a snapshot of my instance. The image was stuck in Queued state and the instance is stuck in Image Pending Upload state. I had to manually quit the job as it had not been working for the last hour. My instance is still in Image Pending Upload state. Is it something w

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread Mark Nelson
Ok. Are you using O_DIRECT? That will disable readahead on the client, but if you don't use O_DIRECT you won't get the benefit of iodepth=16. See fio's man page: "Number of I/O units to keep in flight against the file. Note that increasing iodepth beyond 1 will not affect synchronous ioengin
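
As an aside, for RBD benchmarks fio's rbd ioengine talks to librbd directly and does exploit iodepth with asynchronous I/O. A sketch of a job file, with pool, image and client names as placeholders:

    [rbd-randread]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=testimage
    rw=randread
    bs=4k
    iodepth=16
    direct=1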

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread EP Komarla
I am using O_DIRECT=1 -Original Message- From: Mark Nelson [mailto:mnel...@redhat.com] Sent: Wednesday, July 27, 2016 8:33 AM To: EP Komarla ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph performance pattern Ok. Are you using O_DIRECT? That will disable readahead on the cli

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-27 Thread Alex Gorbachev
Hi Vlad, On Mon, Jul 25, 2016 at 10:44 PM, Vladislav Bolkhovitin wrote: > Hi, > > I would suggest to rebuild SCST in the debug mode (after "make 2debug"), then > before > calling the unmap command enable "scsi" and "debug" logging for scst and > scst_vdisk > modules by 'echo add scsi >/sys/kern

[ceph-users] Searchable metadata and objects in Ceph

2016-07-27 Thread Andrey Ptashnik
Hello team, We are looking for ways to store metadata with objects and make this metadata searchable. For example, if we store an image of a car in Ceph, we would like to be able to attach metadata like model, make, year, damaged parts list, and owner information, so later on we can run a report a

Re: [ceph-users] Listing objects in a specified placement group / OSD

2016-07-27 Thread Samuel Just
Well, it's kind of deliberately obfuscated because PGs aren't a librados-level abstraction. Why do you want to list the objects in a PG? -Sam On Wed, Jul 27, 2016 at 8:10 AM, David Blundell wrote: > Hi, > > > > I wasn’t sure if this is a ceph-users or ceph-devel question as it’s about > the API

Re: [ceph-users] [Ceph-community] Noobie question about OSD fail

2016-07-27 Thread Patrick McGarry
Moving this to ceph-user. On Wed, Jul 27, 2016 at 8:36 AM, Kostya Velychkovsky wrote: > Hello. I have a test CEPH cluster with 5 nodes: 3 MON and 2 OSD > > This is my ceph.conf > > [global] > fsid = 714da611-2c40-4930-b5b9-d57e70d5cf7e > mon_initial_members = node1 > mon_host = node1,node3,node4

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-27 Thread Alex Gorbachev
One other experiment: just running blkdiscard against the RBD block device completely clears it, to the point where the rbd-diff method reports 0 blocks utilized. So to summarize: - ESXi sending UNMAP via SCST does not seem to release storage from RBD (BLOCKIO handler that is supposed to work wit
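
For anyone wanting to reproduce the measurement, a sketch of the test and the rbd-diff accounting (pool/image names are placeholders, and blkdiscard destroys the device contents):

    blkdiscard /dev/rbd0    # discard every block on the mapped RBD device
    # sum the extent lengths rbd reports as allocated; 0 after a full discard
    rbd diff rbd/testimage | awk '{sum += $2} END {print sum, "bytes used"}'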

Re: [ceph-users] Listing objects in a specified placement group / OSD

2016-07-27 Thread David Blundell
Hi Sam, We're running a program on each OSD host that reads the contents of the objects on that host's OSDs (using LIBRADOS_OPERATION_LOCALIZE_READS when reading as eventual consistency is ok). At the moment the simplest way of finding out which objects are local is to look in the local filesy

Re: [ceph-users] [Ceph-community] Noobie question about OSD fail

2016-07-27 Thread Samuel Just
osd min down reports = 2 Set that to 1? -Sam On Wed, Jul 27, 2016 at 10:24 AM, Patrick McGarry wrote: > Moving this to ceph-user. > > > On Wed, Jul 27, 2016 at 8:36 AM, Kostya Velychkovsky > wrote: >> Hello. I have test CEPH cluster with 5 nodes: 3 MON and 2 OSD >> >> This is my ceph.conf >> >
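
For reference, a ceph.conf sketch mirroring the option name as it appears in the thread (whether [global] is the right section for it is worth double-checking against the docs):

    [global]
    osd min down reports = 1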

[ceph-users] How to configure OSD heart beat to happen on public network

2016-07-27 Thread Venkata Manojawa Paritala
Hi, I have configured the below 2 networks in Ceph.conf: 1. public network 2. cluster_network. Now, the heartbeat for the OSDs is happening through the cluster_network. How can I configure the heartbeat to happen through the public network? I actually configured the property "osd heartbeat address" in the g
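
A hedged ceph.conf sketch of the pieces involved; the addresses are placeholders, and since the heartbeat address option (as I recall it, "osd heartbeat addr") takes a single address, it would have to be set per OSD:

    [global]
    public network = 192.168.1.0/24
    cluster network = 10.0.0.0/24

    [osd.0]
    # bind this OSD's heartbeat to its public-network address
    osd heartbeat addr = 192.168.1.10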

Re: [ceph-users] Searchable metadata and objects in Ceph

2016-07-27 Thread Gregory Farnum
On Wed, Jul 27, 2016 at 9:17 AM, Andrey Ptashnik wrote: > Hello team, > > We are looking for ways to store metadata with objects and make this metadata > searchable. > For example if we store an image of the car in Ceph we would like to be able > to attach metadata like model, make, year, damage
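
RADOS can already attach key/value metadata (xattrs) to objects; the searching is what needs an external index. A sketch of the metadata side, with placeholder pool/object names:

    rados -p mypool setxattr car-123.jpg model "Model S"
    rados -p mypool setxattr car-123.jpg year "2014"
    rados -p mypool listxattr car-123.jpg          # list attached metadata keys
    rados -p mypool getxattr car-123.jpg model     # read one value back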

Re: [ceph-users] performance decrease after continuous run

2016-07-27 Thread RDS
I have seen this and some of our big customers have also seen this. I was using 8TB HDDs, and when running small tests against a fresh HDD setup, these tests resulted in very good performance. I then loaded the ceph cluster so each of the 8TB HDDs used 4TB and reran the same tests. Performance was c

Re: [ceph-users] CephFS snapshot preferred behaviors

2016-07-27 Thread Patrick Donnelly
On Mon, Jul 25, 2016 at 5:41 PM, Gregory Farnum wrote: > Some specific questions: > * Right now, we allow users to rename snapshots. (This is newish, so > you may not be aware of it if you've been using snapshots for a > while.) Is that an important ability to preserve? IMO, renaming snapshots is

Re: [ceph-users] CephFS snapshot preferred behaviors

2016-07-27 Thread Gregory Farnum
On Wed, Jul 27, 2016 at 2:51 PM, Patrick Donnelly wrote: > On Mon, Jul 25, 2016 at 5:41 PM, Gregory Farnum wrote: >> Some specific questions: >> * Right now, we allow users to rename snapshots. (This is newish, so >> you may not be aware of it if you've been using snapshots for a >> while.) Is th
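
For context, CephFS snapshots are driven through the hidden .snap directory, and the rename under discussion is (as far as I understand it) an ordinary rename inside that directory. A sketch with placeholder paths and snapshot names:

    mkdir /mnt/cephfs/mydir/.snap/before-cleanup                            # create a snapshot
    mv /mnt/cephfs/mydir/.snap/before-cleanup /mnt/cephfs/mydir/.snap/keep  # rename it
    rmdir /mnt/cephfs/mydir/.snap/keep                                      # remove it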

[ceph-users] Ceph Days - APAC Roadshow Schedules Posted

2016-07-27 Thread Patrick McGarry
Hey cephers, Just wanted to let you know that the schedules for all Ceph Days in the APAC roadshow have now been published. If you are going to be in the region 20-29 Aug, check out the schedule and come join us! http://ceph.com/cephdays/ -- Best Regards, Patrick McGarry Director Ceph Commu

[ceph-users] ceph-fuse (jewel 10.2.2): No such file or directory issues

2016-07-27 Thread Goncalo Borges
Dear cephfsers :-) We saw some weirdness in cephfs that we do not understand. We were helping a user who complained that her batch system job outputs were not produced in cephfs. Please note that we are using ceph-fuse (jewel 10.2.2) as the client. We logged in to the machine where her jobs r

Re: [ceph-users] ceph-fuse (jewel 10.2.2): No such file or directory issues

2016-07-27 Thread Gregory Farnum
On Wed, Jul 27, 2016 at 6:13 PM, Goncalo Borges wrote: > Dear cephfsers :-) > > We saw some weirdness in cephfs that we do not understand. > > We were helping a user who complained that her batch system job outputs > were not produced in cephfs. > > Please note that we are using ceph-fuse (je

Re: [ceph-users] ceph-fuse (jewel 10.2.2): No such file or directory issues

2016-07-27 Thread Goncalo Borges
Hi Greg, Thanks for replying. Answer inline. Dear cephfsers :-) We saw some weirdness in cephfs that we do not understand. We were helping a user who complained that her batch system job outputs were not produced in cephfs. Please note that we are using ceph-fuse (jewel 10.2.2) as clien

Re: [ceph-users] mon_osd_nearfull_ratio (unchangeable) ?

2016-07-27 Thread Goncalo Borges
Hi David, Thanks for replying. Unfortunately, in the end, I did not test this. We solved our near-full problems by adding a new host, and now it doesn't make sense to test it anymore. Thanks for the suggestion. Will keep it in mind next time. Cheers Goncalo On 07/26/2016 06:09 PM, David wrote

[ceph-users] rbd-nbd, failed to bind the UNIX domain socket

2016-07-27 Thread joecyw
Hello everyone, I was wondering if anyone has run into a similar problem. I recently deployed ceph (10.2.2) via ceph-deploy (1.5.34), created a pool (name: dp) and an image (img001), and mapped it with rbd-nbd to /dev/nbd0. However, whenever I use rbd-nbd list-mapped to check the mapping status, I get an error message like the following: [root@ceph01 ~]# rbd-nbd list-mapped /dev/nbd0 2016-07-27
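
For reference, the basic rbd-nbd workflow with the pool and image from the message looks like this:

    rbd-nbd map dp/img001      # map the image to the first free /dev/nbdX
    rbd-nbd list-mapped        # list current mappings (the step that errors out here)
    rbd-nbd unmap /dev/nbd0    # tear the mapping down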