Re: [ceph-users] Ceph monitor ip address issue

2015-09-08 Thread Willi Fehler
Hi Chris, I tried to reconfigure my cluster but my MONs are still using the wrong network. The new ceph.conf was pushed to all nodes and ceph was restarted. [root@linsrv001 ~]# netstat -tulpen Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address

Re: [ceph-users] Ceph monitor ip address issue

2015-09-08 Thread Joao Eduardo Luis
On 09/08/2015 08:13 AM, Willi Fehler wrote: > Hi Chris, > > I tried to reconfigure my cluster but my MONs are still using the wrong > network. The new ceph.conf was pushed to all nodes and ceph was restarted. If your monitors are already deployed, you will need to move them to the new network manually
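
If it helps, a minimal sketch of moving an already-deployed monitor, assuming a monitor named linsrv001 and a new address of 192.168.10.1 (both hypothetical), and assuming public network / mon host in ceph.conf have already been updated:

  # ceph mon getmap -o /tmp/monmap                            # grab the current monmap
  # service ceph stop mon.linsrv001                           # stop the monitor being moved
  # monmaptool --rm linsrv001 /tmp/monmap                     # drop the old address
  # monmaptool --add linsrv001 192.168.10.1:6789 /tmp/monmap  # add the new one
  # ceph-mon -i linsrv001 --inject-monmap /tmp/monmap         # inject the edited map
  # service ceph start mon.linsrv001

Repeat for each monitor, one at a time, so quorum is kept.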

Re: [ceph-users] Ceph monitor ip address issue

2015-09-08 Thread Willi Fehler
Hi, many thanks for your feedback. I've redeployed my cluster and now it is working. Last beginner question: the default replication size has been 3 for a while now. When I set min_size to 1, does it mean that in a 3-node cluster two nodes (doesn't matter which of them) could crash and I would still have a
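
For reference: with size=3 and min_size=1, a PG keeps serving I/O as long as at least one replica is up, so two of the three nodes could indeed be down, at the cost of running without any redundancy in the meantime. A hedged example of inspecting and setting these values on a pool (the pool name rbd is just an example):

  # ceph osd pool get rbd size
  # ceph osd pool get rbd min_size
  # ceph osd pool set rbd min_size 1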

Re: [ceph-users] Extra RAM use as Read Cache

2015-09-08 Thread Nick Fisk
Hi Vickey, What are you using for the clients to access the Ceph cluster, i.e. kernel-mounted RBD, KVM VMs, CephFS? And as Somnath Roy touched on, what sort of IO pattern are you generating? Also, if you can specify the type of hardware and configuration you are running, that will also help. You sai

[ceph-users] osd daemon cpu threads

2015-09-08 Thread Gurvinder Singh
Hi, Just wondering if a Ceph OSD daemon supports multi-threading and can benefit from multi-core Intel/ARM processors, e.g. a 12-disk server with 36 Intel or 48 ARM cores. Thanks, Gurvinder ___ ceph-users mailing list ceph-users@lists.ceph.com http://l

[ceph-users] How to observed civetweb.

2015-09-08 Thread Vickie ch
Dear cephers, Just upgraded radosgw from apache to civetweb. It's really simple to install and use. But I can't find any parameters or logs to adjust (or observe) civetweb, like the apache logs. I'm really confused. Any ideas? Best wishes, Mika ___ cep

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-08 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Nick Fisk > Sent: 06 September 2015 15:11 > To: 'Shinobu Kinjo' ; 'GuangYang' > > Cc: 'ceph-users' ; 'Nick Fisk' > Subject: Re: [ceph-users] Ceph performance, empty vs part full > > Just a q

[ceph-users] test

2015-09-08 Thread Shikejun
Test

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Mariusz Gronczewski
For those interested: the bug that caused Ceph to go haywire was an Emulex NIC driver dropping packets when pushing more than a few hundred megabits (basically a linear change compared to load), which caused OSDs to flap constantly once something went wrong (high traffic, OSD goes down, Ceph starts to realloca

Re: [ceph-users] osd daemon cpu threads

2015-09-08 Thread Jan Schermer
In terms of throughput yes - one OSD may have thousands of threads doing work so it will scale across multiple clients. But in terms of latency you are still limited by the throughput of one core, so for database workloads or any type of synchronous or single-threaded IO more cores will be of no

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Jan Schermer
YMMV, same story as with SSD selection. Intels have their own problems :-) Jan > On 08 Sep 2015, at 12:09, Mariusz Gronczewski > wrote: > > For those interested: > > Bug that caused ceph to go haywire was a emulex nic driver dropping > packets when making more than few hundred megabits (basicall

[ceph-users] qemu jemalloc support soon in master (applied in paolo upstream branch)

2015-09-08 Thread Alexandre DERUMIER
Hi, Paolo Bonzini from the qemu team has finally applied my qemu jemalloc patch to his for-upstream branch https://github.com/bonzini/qemu/releases/tag/for-upstream https://github.com/bonzini/qemu/tree/for-upstream So it'll be in qemu master soon and ready for qemu 2.5. I have written some small ben

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Mariusz Gronczewski
The worst thing is that the cluster had been running (on light load, though) for about 6 months now, and I had already flashed firmware to those cards, which made the problem "disappear" for small loads, so I wasn't even expecting a problem in that place. Sadly OSDs still eat between 2 and 6 GB of RAM each, but I hope that will st

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Shinobu Kinjo
Was that just a driver issue? If so, could we face the same kind of issue on different distributed file systems? I'm just asking. I'm quite interested in: what kind of HBA you are using, and which version of the driver caused the issue. Does any Cepher have any comment on Mariusz's comment? Shinobu - O

Re: [ceph-users] qemu jemalloc support soon in master (applied in paolo upstream branch)

2015-09-08 Thread Shinobu Kinjo
That would be my life saver. Thanks a lot! > you simply need to compile qemu with --enable-jemalloc, to enable jemalloc > support. - Original Message - From: "Alexandre DERUMIER" To: "ceph-users" , "ceph-devel" Sent: Tuesday, September 8, 2015 7:58:15 PM Subject: [ceph-users] qemu jem
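
A rough sketch of building from that branch with the flag quoted above (everything except --enable-jemalloc is illustrative):

  $ git clone https://github.com/bonzini/qemu.git && cd qemu
  $ git checkout for-upstream
  $ ./configure --enable-jemalloc --target-list=x86_64-softmmu
  $ make -j$(nproc)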

Re: [ceph-users] osd daemon cpu threads

2015-09-08 Thread Gurvinder Singh
Thanks Jan for the reply. It's good to know that Ceph can use extra CPUs for throughput. I am wondering if anyone in the community has used/experimented with ARM v8 2.5 GHz processors instead of Intel E5. On Sep 8, 2015 12:28 PM, "Jan Schermer" wrote: > In terms of throughput yes - one OSD may

[ceph-users] ceph-users test

2015-09-08 Thread Shikejun
ceph-users test

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Shinobu Kinjo
> eat between 2 and 6 GB RAM That is quite a huge difference, I think. - Original Message - From: "Mariusz Gronczewski" To: "Jan Schermer" Cc: ceph-users@lists.ceph.com Sent: Tuesday, September 8, 2015 8:17:43 PM Subject: Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant th

[ceph-users] ceph-users test

2015-09-08 Thread Shikejun
[ceph-users] ceph-users test

Re: [ceph-users] How objects are reshuffled on addition of new OSD

2015-09-08 Thread Gregory Farnum
On Tue, Sep 1, 2015 at 2:31 AM, Shesha Sreenivasamurthy wrote: > I had a question regarding how OSD locations are determined by CRUSH. > > From the CRUSH paper I gather that the replica locations of an object (A) are > a vector (v) obtained by the function c(r, x) = (hash(x) + r*p) mod m. It is

Re: [ceph-users] Inconsistency in 'ceph df' stats

2015-09-08 Thread Gregory Farnum
This comes up periodically on the mailing list; see eg http://www.spinics.net/lists/ceph-users/msg15907.html I'm not sure if your case fits within those odd parameters or not, but I bet it does. :) -Greg On Mon, Aug 31, 2015 at 8:16 PM, Stillwell, Bryan wrote: > On one of our staging ceph cluste

Re: [ceph-users] how to improve ceph cluster capacity usage

2015-09-08 Thread Gregory Farnum
On Tue, Sep 1, 2015 at 3:58 PM, huang jun wrote: > hi,all > > Recently, i did some experiments on OSD data distribution, > we set up a cluster with 72 OSDs,all 2TB sata disk, > and ceph version is v0.94.3 and linux kernel version is 3.18, > and set "ceph osd crush tunables optimal". > There are 3

Re: [ceph-users] rebalancing taking very long time

2015-09-08 Thread Gregory Farnum
On Wed, Sep 2, 2015 at 9:34 PM, Bob Ababurko wrote: > When I lose a disk OR replace a OSD in my POC ceph cluster, it takes a very > long time to rebalance. I should note that my cluster is slightly unique in > that I am using cephfs(shouldn't matter?) and it currently contains about > 310 million

Re: [ceph-users] osds on 2 nodes vs. on one node

2015-09-08 Thread Gregory Farnum
On Fri, Sep 4, 2015 at 12:24 AM, Deneau, Tom wrote: > After running some other experiments, I see now that the high single-node > bandwidth only occurs when ceph-mon is also running on that same node. > (In these small clusters I only had one ceph-mon running). > If I compare to a single-node wher

Re: [ceph-users] CephFS/Fuse : detect package upgrade to remount

2015-09-08 Thread Gregory Farnum
On Fri, Sep 4, 2015 at 9:15 AM, Florent B wrote: > Hi everyone, > > I would like to know if there is a way on Debian to detect an upgrade of > ceph-fuse package, that "needs" remounting CephFS. > > When I upgrade my systems, I do a "aptitude update && aptitude > safe-upgrade". > > When ceph-fuse pa

Re: [ceph-users] [Ceph-community] Ceph MeetUp Berlin Sept 28

2015-09-08 Thread Joao Eduardo Luis
This may see more traction in ceph-users and ceph-devel. Most people don't usually subscribe to ceph-community. Cheers! -Joao On 09/08/2015 11:44 AM, Robert Sander wrote: > Hi, > > the next meetup in Berlin takes place on September 28 at 18:00 CEST. > > Please RSVP at http://www.meetup.com/

Re: [ceph-users] CephFS and caching

2015-09-08 Thread Gregory Farnum
On Thu, Sep 3, 2015 at 11:58 PM, Kyle Hutson wrote: > I was wondering if anybody could give me some insight as to how CephFS does > its caching - read-caching in particular. > > We are using CephFS with an EC pool on the backend with a replicated cache > pool in front of it. We're seeing some very

Re: [ceph-users] crash on rbd bench-write

2015-09-08 Thread Jason Dillaman
> The client version is what was installed by the ceph-deploy install > ceph-client command. Via the debian-hammer repo. Per the quickstart doc. > Are you saying I need to install a different client version somehow? You listed the version as 0.80.10 which is a Ceph Firefly release -- Hammer is 0.
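
A quick way to compare the installed client version against what the cluster is running (a sketch):

  $ ceph --version             # version of the locally installed client packages
  $ ceph tell osd.* version    # version reported by each running OSD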

Re: [ceph-users] CephFS/Fuse : detect package upgrade to remount

2015-09-08 Thread Gregory Farnum
On Tue, Sep 8, 2015 at 2:33 PM, Florent B wrote: > > > On 09/08/2015 03:26 PM, Gregory Farnum wrote: >> On Fri, Sep 4, 2015 at 9:15 AM, Florent B wrote: >>> Hi everyone, >>> >>> I would like to know if there is a way on Debian to detect an upgrade of >>> ceph-fuse package, that "needs" remounting

Re: [ceph-users] A few questions and remarks about cephx

2015-09-08 Thread Gregory Farnum
On Sun, Sep 6, 2015 at 10:07 AM, Marin Bernard wrote: > Hi, > > I've just setup Ceph Hammer (latest version) on a single node (1 MON, 1 > MDS, 4 OSDs) for testing purposes. I used ceph-deploy. I only > configured CephFS as I don't use RBD. My pool config is as follows: > > $ sudo ceph df > GLOBAL:

[ceph-users] [Ceph-community] Ceph MeetUp Berlin Sept 28

2015-09-08 Thread Robert Sander
Hi, the next meetup in Berlin takes place on September 28 at 18:00 CEST. Please RSVP at http://www.meetup.com/de/Ceph-Berlin/events/222906639/ Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051

Re: [ceph-users] rebalancing taking very long time

2015-09-08 Thread Alphe Salas
I can say exactly the same. I have been using Ceph since 0.38 and I have never seen OSDs as laggy as with 0.94. The rebalancing/rebuild algorithm is crap in 0.94, seriously. I have 2 OSDs serving 2 discs of 2TB and 4 GB of RAM, and each OSD takes 1.6GB !!! Seriously! That makes the avalanche snowball. Let me be straight and e

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-08 Thread Mark Nelson
On 09/07/2015 11:34 AM, Quentin Hartman wrote: fwiw, I am not confused about the various types of SSDs that Samsung offers. I knew exactly what I was getting when I ordered them. Based on their specs and my WAG on how much writing I would be doing they should have lasted about 6 years. Turns ou

Re: [ceph-users] How to observed civetweb.

2015-09-08 Thread Yehuda Sadeh-Weinraub
You can increase the civetweb log level by adding 'debug civetweb = 10' to your ceph.conf. The output will go into the rgw logs. Yehuda On Tue, Sep 8, 2015 at 2:24 AM, Vickie ch wrote: > Dear cephers, >Just upgrade radosgw from apache to civetweb. > It's really simple to installed and used. But I
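
As a sketch, in ceph.conf it would look something like this (the section name is whatever your rgw instance is called; client.radosgw.gateway and the port are just examples):

  [client.radosgw.gateway]
      rgw frontends = civetweb port=7480
      debug civetweb = 10
      # optionally also: debug rgw = 5

Restart the radosgw process afterwards and watch the rgw log.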

[ceph-users] maximum object size

2015-09-08 Thread HEWLETT, Paul (Paul)
Hi All We have recently encountered a problem on Hammer (0.94.2) whereby we cannot write objects > 2GB in size to the rados backend. (NB not RadosGW, CephFS or RBD) I found the following issue https://wiki.ceph.com/Planning/Blueprints/Firefly/Object_striping_in_librados which seems to address th

Re: [ceph-users] [Problem] I cannot start the OSD daemon

2015-09-08 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I would check that the /var/lib/ceph/osd/ceph-0/ is mounted and has the file structure for Ceph. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Mon, Sep 7, 2015 at 2:16 AM, Aaron wrote: > Hi
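
A quick sanity check along those lines (device and OSD id are hypothetical):

  # mount | grep /var/lib/ceph/osd/ceph-0      # the OSD data partition should be mounted here
  # ls /var/lib/ceph/osd/ceph-0/               # expect current/, superblock, whoami, fsid, keyring, ...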

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-08 Thread Quentin Hartman
On Tue, Sep 8, 2015 at 9:05 AM, Mark Nelson wrote: > A list of hardware that is known to work well would be incredibly >> valuable to people getting started. It doesn't have to be exhaustive, >> nor does it have to provide all the guidance someone could want. A >> simple "these things have worked

Re: [ceph-users] maximum object size

2015-09-08 Thread Ilya Dryomov
On Tue, Sep 8, 2015 at 6:54 PM, HEWLETT, Paul (Paul) wrote: > Hi All > > We have recently encountered a problem on Hammer (0.94.2) whereby we > cannot write objects > 2GB in size to the rados backend. > (NB not RadosGW, CephFS or RBD) > > I found the following issue > https://wiki.ceph.com/Plannin

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-08 Thread Lincoln Bryant
For whatever it’s worth, my problem has returned and is very similar to yours. Still trying to figure out what’s going on over here. Performance is nice for a few seconds, then goes to 0. This is a similar setup to yours (12 OSDs per box, Scientific Linux 6, Ceph 0.94.3, etc) 384 16

Re: [ceph-users] maximum object size

2015-09-08 Thread Ilya Dryomov
On Tue, Sep 8, 2015 at 7:30 PM, HEWLETT, Paul (Paul) wrote: > Hi Ilya > > Thanks for that - libradosstriper is what we need - any notes available on > usage? No, I'm afraid not. include/radosstriper/libradosstriper.h and libradosstriper.hpp should be enough to get you started - there is a fair a

Re: [ceph-users] maximum object size

2015-09-08 Thread HEWLETT, Paul (Paul)
I found the description in the source code. Apparently one sets attributes on the object to force striping. Regards Paul On 08/09/2015 17:39, "Ilya Dryomov" wrote: >On Tue, Sep 8, 2015 at 7:30 PM, HEWLETT, Paul (Paul) > wrote: >> Hi Ilya >> >> Thanks for that - libradosstriper is what we need -
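
For quick experiments, the rados CLI can also exercise the striper API via its --striper flag (a sketch; pool and object names are made up, and the backing-object naming reflects how libradosstriper stores its stripes):

  $ rados -p data --striper put bigobject ./bigfile.bin
  $ rados -p data --striper stat bigobject
  $ rados -p data listxattr bigobject.0000000000000000   # the striping layout is kept in xattrs on the first stripe object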

Re: [ceph-users] maximum object size

2015-09-08 Thread Somnath Roy
I think the limit is 90 MB from the OSD side, isn't it? If so, how are you able to write objects up to 1.99 GB? Am I missing anything? Thanks & Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of HEWLETT, Paul (Paul) Sent: Tuesday, Sep
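
If it helps, the OSD-side cap being referred to is presumably osd_max_write_size (default 90, in MB), which limits a single write transaction rather than total object size; larger objects are built up from multiple writes. A sketch for checking it on a running OSD:

  # ceph daemon osd.0 config get osd_max_write_size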

Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix

2015-09-08 Thread Andras Pataki
Hi Sam, I saw that ceph 0.94.3 is out and it contains a resolution to the issue below (http://tracker.ceph.com/issues/12577). I installed it on our cluster, but unfortunately it didn't resolve the issue. Same as before, I have a couple of inconsistent pg's, and run ceph pg repair on them - th

Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix

2015-09-08 Thread Sage Weil
On Tue, 8 Sep 2015, Andras Pataki wrote: > Hi Sam, > > I saw that ceph 0.94.3 is out and it contains a resolution to the issue below > (http://tracker.ceph.com/issues/12577). I installed it on our cluster, but > unfortunately it didn't resolve the issue. Same as before, I have a couple > of i

Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix

2015-09-08 Thread Andras Pataki
Cool, thanks! Andras From: Sage Weil Sent: Tuesday, September 8, 2015 2:07 PM To: Andras Pataki Cc: Samuel Just; ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org Subject: Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix On Tue, 8 S

Re: [ceph-users] How to observed civetweb.

2015-09-08 Thread Kobi Laredo
Vickie, You can add: access_log_file=/var/log/civetweb/access.log error_log_file=/var/log/civetweb/error.log to rgw frontends in ceph.conf, though these logs are thin on info (source IP, date, and request). Check out https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md for more
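
Putting that into ceph.conf would look roughly like this (port and section name are examples); make sure /var/log/civetweb/ exists and is writable by the radosgw process:

  [client.radosgw.gateway]
      rgw frontends = civetweb port=7480 access_log_file=/var/log/civetweb/access.log error_log_file=/var/log/civetweb/error.log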

[ceph-users] ensuring write activity is finished

2015-09-08 Thread Deneau, Tom
When measuring read bandwidth using rados bench, I've been doing the following: * write some objects using rados bench write --no-cleanup * drop caches on the osd nodes * use rados bench seq to read. I've noticed that on the first rados bench seq immediately following the rados bench wri
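
The sequence described above, as a sketch (pool name and runtimes are arbitrary):

  # rados bench -p testpool 60 write --no-cleanup   # populate objects
  # sync && echo 3 > /proc/sys/vm/drop_caches       # run on every OSD node
  # rados bench -p testpool 60 seq                  # cold-cache sequential reads
  # rados -p testpool cleanup                       # remove the benchmark objects afterwards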

[ceph-users] Ceph Tuning + KV backend

2015-09-08 Thread Niels Jakob Darger
Hello, Excuse my ignorance, I have just joined this list and started using Ceph (which looks very cool). On AWS I have set up a 5-way Ceph cluster (4 vCPUs, 32G RAM, dedicated SSDs for system, osd and journal) with the Object Gateway. For the purpose of simplicity of the test all the nodes ar

[ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Alexandre DERUMIER
Hi, I have found an interesting article about jemalloc and transparent hugepages https://www.digitalocean.com/company/blog/transparent-huge-pages-and-alternative-memory-allocators/ Could be great to see if disable transparent hugepage help to have lower jemalloc memory usage. Regards, Alexan

[ceph-users] OSD crash

2015-09-08 Thread Alex Gorbachev
Hello, We have run into an OSD crash this weekend with the following dump. Please advise what this could be. Best regards, Alex 2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251 l=0 c=0x10d34580).f

Re: [ceph-users] OSD respawning -- FAILED assert(clone_size.count(clone))

2015-09-08 Thread David Zafman
Chris, I was wondering if you still had /tmp/snap.out lying around - could you send it to me? The way the dump-to-json code works, if "clones" is empty it doesn't show me what the two other structures look like. David On 9/5/15 3:24 PM, Chris Taylor wrote: # ceph-dencoder type SnapSet imp

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Chad William Seys
Does 'ceph tell osd.* heap release' help with OSD RAM usage? From http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003932.html Chad. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.
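
For reference, the command from that thread, with a stats check before and after (only meaningful when the OSDs are linked against tcmalloc):

  # ceph tell osd.* heap stats
  # ceph tell osd.* heap release
  # ceph tell osd.* heap stats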

Re: [ceph-users] OSD respawning -- FAILED assert(clone_size.count(clone))

2015-09-08 Thread Chris Taylor
Attached is the snap.out On 09/08/2015 01:47 PM, David Zafman wrote: Chris, I was wondering if you still had /tmp/snap.out laying around, could you send it to me? The way the dump to json code works if the "clones" is empty it doesn't show me what two other structures look like. David O

Re: [ceph-users] Cannot add/create new monitor on ceph v0.94.3

2015-09-08 Thread Chang, Fangzhe (Fangzhe)
Thanks for the answer. NTP is running on both the existing monitor and the new monitor being installed. I did run ceph-deploy in the same directory as I created the cluster. However, I need to tweak the options supplied to ceph-deploy a little bit since I was running it behind a corporate firewa

Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix

2015-09-08 Thread Shinobu Kinjo
That's a good news. Shinobu - Original Message - From: "Sage Weil" To: "Andras Pataki" Cc: ceph-users@lists.ceph.com, ceph-de...@vger.kernel.org Sent: Wednesday, September 9, 2015 3:07:29 AM Subject: Re: [ceph-users] Inconsistent PGs that ceph pg repair does not fix On Tue, 8 Sep 2015,

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Shinobu Kinjo
Have you ever? http://ceph.com/docs/master/rados/troubleshooting/memory-profiling/ Shinobu - Original Message - From: "Chad William Seys" To: "Mariusz Gronczewski" , "Shinobu Kinjo" , ceph-users@lists.ceph.com Sent: Wednesday, September 9, 2015 6:14:15 AM Subject: Re: Huge memory usage
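
The workflow on that page boils down to roughly this for a single OSD (paths follow the defaults in the documentation; the pprof binary may be named google-pprof depending on distro):

  # ceph tell osd.0 heap start_profiler
  ... let the OSD run under load for a while ...
  # ceph tell osd.0 heap dump
  # pprof --text /usr/bin/ceph-osd /var/log/ceph/osd.0.profile.*.heap
  # ceph tell osd.0 heap stop_profiler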

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Alexandre DERUMIER
I have done a small benchmark with tcmalloc and jemalloc, transparent hugepage=always|never. For tcmalloc, there is no difference. But for jemalloc, the difference is huge (around 25% lower with tp=never). jemalloc 4.6.0+tp=never vs tcmalloc: uses 10% more RSS memory. jemalloc 4.0+tp=never almost u

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Mark Nelson
Excellent investigation Alexandre! Have you noticed any performance difference with tp=never? Mark On 09/08/2015 06:33 PM, Alexandre DERUMIER wrote: I have done small benchmark with tcmalloc and jemalloc, transparent hugepage=always|never. for tcmalloc, they are no difference. but for jemal

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Mark Nelson
Also, for what it's worth, I did analysis during recovery (though not with different transparent hugepage settings). You can see it on slide #13 here: http://nhm.ceph.com/mark_nelson_ceph_tech_talk.odp On 09/08/2015 06:49 PM, Mark Nelson wrote: Excellent investigation Alexandre! Have you no

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Shinobu Kinjo
I emailed you guys about using jemalloc. There might be a workaround to use it much more effectively. I hope some of you saw my email... Shinobu - Original Message - From: "Mark Nelson" To: "Alexandre DERUMIER" , "ceph-devel" , "ceph-users" Sent: Wednesday, September 9, 2015 8:52:35 AM Sub

Re: [ceph-users] Cannot add/create new monitor on ceph v0.94.3

2015-09-08 Thread Brad Hubbard
I'd suggest starting the mon with debugging turned right up and taking a good look at the output. Cheers, Brad - Original Message - > From: "Fangzhe Chang (Fangzhe)" > To: "Brad Hubbard" > Cc: ceph-users@lists.ceph.com > Sent: Wednesday, 9 September, 2015 7:35:42 AM > Subject: RE: [ceph

Re: [ceph-users] Ceph Tuning + KV backend

2015-09-08 Thread Haomai Wang
On Wed, Sep 9, 2015 at 3:00 AM, Niels Jakob Darger wrote: > Hello, > > Excuse my ignorance, I have just joined this list and started using Ceph > (which looks very cool). On AWS I have set up a 5-way Ceph cluster (4 vCPUs, > 32G RAM, dedicated SSDs for system, osd and journal) with the Object > Ga

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Alexandre DERUMIER
>>Have you noticed any performance difference with tp=never? No difference. I think hugepages could speed up big memory sets like 100-200GB, but for 1-2GB there is no noticeable difference. - Original Message - From: "Mark Nelson" To: "aderumier" , "ceph-devel" , "ceph-users" Cc: "Somnat

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Sage Weil
On Wed, 9 Sep 2015, Alexandre DERUMIER wrote: > >>Have you noticed any performance difference with tp=never? > > No difference. > > I think hugepage could speedup big memory sets like 100-200GB, but for > 1-2GB they are no noticable difference. Is this something we can set with mallctl[1] at st

Re: [ceph-users] Still have orphaned rgw shadow files, ceph 0.94.3

2015-09-08 Thread Ben Hines
FYI, over the past week I have deleted over 50 TB of data from my cluster of these objects. Almost all were from buckets that no longer exist, and the fix tool did not find them. Fortunately I don't need the data from these old buckets, so deleting all objects by prefix worked great. Anyone managin

Re: [ceph-users] How to observed civetweb.

2015-09-08 Thread Vickie ch
Thanks a lot!! One more question. I understand that using haproxy is a better way to load-balance. And GitHub says civetweb already supports https. But I found some documents mentioning that civetweb needs haproxy for https. Which one is true? Best wishes, Mika 2015-09-09 2:21 GMT+08:00 Kobi Laredo :

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Alexandre DERUMIER
>>Is this something we can set with mallctl[1] at startup? I don't think it's possible. TP hugepages are managed by the kernel, not by jemalloc. (But a simple "echo never > /sys/kernel/mm/transparent_hugepage/enabled" in an init script is enough.) - Original Message - From: "Sage Weil" To: "aderumier"
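
For completeness, a sketch of checking and disabling it at runtime (the defrag knob is an addition beyond what was quoted above, commonly toggled together with enabled):

  # cat /sys/kernel/mm/transparent_hugepage/enabled
  [always] madvise never
  # echo never > /sys/kernel/mm/transparent_hugepage/enabled
  # echo never > /sys/kernel/mm/transparent_hugepage/defrag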

Re: [ceph-users] jemalloc and transparent hugepage

2015-09-08 Thread Alexandre DERUMIER
There is a tracker here https://github.com/jemalloc/jemalloc/issues/243 "Improve interaction with transparent huge pages" - Original Message - From: "aderumier" To: "Sage Weil" Cc: "ceph-devel" , "ceph-users" Sent: Wednesday, 9 September 2015 06:37:22 Subject: Re: [ceph-users] jemalloc and

[ceph-users] radula - radosgw(s3) cli tool

2015-09-08 Thread Andrew Bibby (lists)
Hey cephers, Just wanted to briefly announce the release of a radosgw CLI tool that solves some of our team's minor annoyances. Called radula, a nod to the patron animal, this utility acts a lot like s3cmd with some tweaks to meet the expectations of our researchers. https://pypi.python.org/py