Re: [ceph-users] radosgw swift java jars

2014-01-06 Thread Wido den Hollander
On 01/07/2014 08:15 AM, raj kumar wrote: > I meant could not find the required jar files to run the java swift program. I don't think anybody has a clue what you mean. Ceph is completely written in C++, so there are no Java JARs. The only pieces of Java are the CephFS JNI integration and the RADOS Java bindings…
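
As the reply says, there are no Ceph-specific JARs to hunt for: radosgw speaks the standard Swift REST API, so any stock Swift client library works. A minimal sketch using the JOSS client (org.javaswift:joss, the library the Ceph docs use for their Java Swift examples); the endpoint, user, and secret key below are placeholders for a radosgw Swift subuser:

    import org.javaswift.joss.client.factory.AccountConfig;
    import org.javaswift.joss.client.factory.AccountFactory;
    import org.javaswift.joss.client.factory.AuthenticationMethod;
    import org.javaswift.joss.model.Account;
    import org.javaswift.joss.model.Container;

    public class RgwSwiftExample {
        public static void main(String[] args) {
            // Placeholder credentials: a radosgw Swift subuser and its secret key
            AccountConfig config = new AccountConfig();
            config.setUsername("testuser:swift");
            config.setPassword("swift-secret-key");
            config.setAuthUrl("http://rgw.example.com/auth/1.0");
            // radosgw exposes Swift v1.0-style auth here, not Keystone
            config.setAuthenticationMethod(AuthenticationMethod.BASIC);

            Account account = new AccountFactory(config).createAccount();
            Container container = account.getContainer("my-container");
            container.create();
            System.out.println("containers in account: " + account.getCount());
        }
    }

The S3 side is the same story: any stock AWS S3 SDK pointed at the radosgw endpoint works, so the JARs come from those client projects rather than from Ceph.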

Re: [ceph-users] radosgw swift java jars

2014-01-06 Thread raj kumar
I meant I could not find the required jar files to run the java swift program. On Mon, Jan 6, 2014 at 11:35 PM, raj kumar wrote: > Hi, could not find all the necessary jars required to run the java program. Is > there any place to get all the jars for both swift and s3? Thanks.

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-06 Thread Dietmar Maurer
> > I think this is just fundamentally a problem with distributing 3 > > replicas over only 4 hosts. Every piece of data in the system needs > > to include either host 3 or 4 (and thus device 4 or 5) in order to > > have 3 replicas (on separate hosts). Add more hosts or disks and the > > distribution…

[ceph-users] How to deploy ceph with a Debian version other than stable (Hello James Page ^o^)

2014-01-06 Thread Christian Balzer
Hello, I previously created a test cluster using the Argonaut packages available in Debian testing aka Jessie (atm). Since it was pointed out to me that I ought to play with something more recent, I bumped the machines to sid, which has 0.72.2 packages natively. The sid packages do not include…
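
For anyone else chasing newer packages on Debian: ceph.com publishes its own apt repository, which typically runs ahead of what Debian ships. A sketch of adding it (the "emperor" release line and "wheezy" codename are example choices; substitute what you actually target):

    # Sketch, assuming the emperor release line; adjust release/codename to taste
    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    echo deb http://ceph.com/debian-emperor/ wheezy main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph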

Re: [ceph-users] Current state of OpenStack/Ceph rbd live migration?

2014-01-06 Thread Haomai Wang
On Tue, Jan 7, 2014 at 6:13 AM, Jeff Bachtel wrote: > I just wanted to get a quick sanity check (and ammunition for updating from > Grizzly to Havana). > > Per > https://blueprints.launchpad.net/nova/+spec/bring-rbd-support-libvirt-images-type > it seems that explicit support for rbd image types has…

Re: [ceph-users] repair inconsistent pg using emperor

2014-01-06 Thread David Zafman
Did the inconsistent flag eventually get cleared? It might have been that you didn’t wait long enough for the repair to get through the pg. David Zafman Senior Developer http://www.inktank.com On Dec 28, 2013, at 12:29 PM, Corin Langosch wrote: > Hi Sage, > > On 28.12.2013 at 19:18, Sage wrote…
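
For reference, the usual repair sequence (the pg id is an example); the inconsistent flag only clears after the repair and a subsequent scrub have finished, which can take a while on a busy PG:

    ceph health detail     # lists e.g. "pg 2.37 is active+clean+inconsistent"
    ceph pg repair 2.37    # queue a repair for that placement group
    ceph -w                # watch cluster events until the flag clears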

[ceph-users] Current state of OpenStack/Ceph rbd live migration?

2014-01-06 Thread Jeff Bachtel
I just wanted to get a quick sanity check (and ammunition for updating from Grizzly to Havana). Per https://blueprints.launchpad.net/nova/+spec/bring-rbd-support-libvirt-images-type it seems that explicit support for rbd image types has been brought into OpenStack/Havana. Does this correspond…
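
For context, the Havana-era nova.conf settings that blueprint introduced look roughly like the sketch below (option names per the Havana libvirt driver; the pool, user, and secret UUID are placeholders):

    # /etc/nova/nova.conf (Havana) -- sketch, values are placeholders
    [DEFAULT]
    libvirt_images_type=rbd
    libvirt_images_rbd_pool=vms
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337

With libvirt_images_type=rbd the instance disk itself lives in RADOS rather than on local ephemeral storage, which is what makes true live migration possible.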

Re: [ceph-users] radosgw package - missing deps on Ubuntu < 13.04

2014-01-06 Thread LaSalle, Jurvis
On 1/2/14, 1:42 PM, "Sage Weil" wrote: >The precise version has a few annoying (though rare) >bugs, and more importantly does not support caching properly. For >clusters of any size this can become a performance problem, particularly >when the cluster is stressed (lots of OSDs catching up on O…
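
The workaround at the time was to install the patched libapache2-mod-fastcgi from Ceph's gitbuilder repository rather than the distro archive; a sketch for precise (the repo path is an assumption reconstructed from the era's radosgw install docs):

    # Sketch: era-specific repo for the caching-capable mod-fastcgi build
    echo 'deb http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-precise-x86_64-basic/ref/master precise main' | sudo tee /etc/apt/sources.list.d/ceph-fastcgi.list
    sudo apt-get update && sudo apt-get install libapache2-mod-fastcgi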

Re: [ceph-users] What's the status of feature: S3 object versioning?

2014-01-06 Thread Gregory Farnum
On Thu, Jan 2, 2014 at 12:40 AM, Ray Lv wrote: > Hi there, > > Noted that there is a Blueprint item about S3 object versioning in radosgw > for Firefly at > http://wiki.ceph.com/Planning/Blueprints/Firefly/rgw%3A_object_versioning > And Sage has announced the v0.74 release for Firefly. Do you guys know…

Re: [ceph-users] cannot see recovery statistics + pgs stuck unclean

2014-01-06 Thread Gregory Farnum
[Hrm, this email was in my spam folder.] At a quick glance, you're probably running into some issues because you've got two racks of very different weights. Things will probably get better if you enable the optimal "crush tunables"; check out the docs on that and see if you can switch to them. -Greg
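
Two things worth checking in that situation, as a sketch; note that switching tunables changes data placement, so expect a rebalance:

    ceph osd tree                      # compare the weights of the two racks
    ceph osd crush tunables optimal    # move to the current optimal tunables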

Re: [ceph-users] How can I set the warning level?

2014-01-06 Thread Gregory Farnum
On Wed, Dec 25, 2013 at 6:13 PM, vernon1...@126.com wrote: > Hello, my mons always show HEALTH_WARN, and when I run ceph health detail, it shows > me this: > > HEALTH_WARN > mon.2 addr 192.168.0.7:6789/0 has 30% avail disk space -- low disk space! > > I want to know how to set this warning level? I ha…
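
The threshold behind that warning is configurable. A minimal ceph.conf sketch (these are the standard mon options; the values here are examples, and the defaults are 30 and 5 percent):

    [mon]
        mon data avail warn = 20    # HEALTH_WARN below 20% free on the mon's data disk
        mon data avail crit = 5     # HEALTH_ERR below 5% free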

Re: [ceph-users] CephFS files not appearing in DF (or rados ls)

2014-01-06 Thread Gregory Farnum
On Thu, Jan 2, 2014 at 2:18 PM, Alex Pearson wrote: > Hi All, > Victory! Found the issue; it was a mistake on my part, however it does raise > another question... > > The issue was: > root@osh1:~# ceph --cluster apics auth list > installed auth entries: > > client.cuckoo > key: AQBjTbl…
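
For anyone tracing a similar problem: a client key's OSD caps are scoped per pool, and a cap that omits a pool silently keeps that client's writes out of it. A sketch of inspecting and widening caps (the client name comes from the thread; the pools are examples, and note that `ceph auth caps` replaces the existing caps wholesale):

    ceph auth get client.cuckoo    # show the key's current caps
    ceph auth caps client.cuckoo mon 'allow r' \
        osd 'allow rwx pool=data, allow rwx pool=metadata'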

Re: [ceph-users] building librados static library librados.a

2014-01-06 Thread Noah Watkins
The default configuration for a Ceph build should produce a static rados library. If you actually want to build _only_ librados, that might require a bit of automake tweaking. nwatkins@kyoto:~$ ls -l projects/ceph_install/lib/ total 691396 -rw-r--r-- 1 nwatkins nwatkins 219465940 Jan 6 09:56 librados.a…
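
A sketch of the autotools flow this refers to; libtool builds the shared and static variants together, so the least-effort route is to build the one target in-tree and copy out the static archive (the target name follows the usual automake/libtool convention):

    ./autogen.sh
    ./configure --prefix=$HOME/projects/ceph_install
    make -C src librados.la    # libtool produces both librados.so and librados.a
    # the static archive lands in src/.libs/; headers are under src/include/rados/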

[ceph-users] radosgw swift java jars

2014-01-06 Thread raj kumar
Hi, I could not find all the necessary jars required to run the java program. Is there any place to get all the jars for both swift and s3? Thanks.

Re: [ceph-users] Ceph Command Prepending "None" to output on one node (only)

2014-01-06 Thread Zeb Palmer
I've (re)confirmed that all nodes are the same build. # ceph --version ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60) ubuntu package version 0.72.2-1precise I was discussing this with my engineers this morning and a couple of them vaguely recalled that we had run into this on…
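
One way to check the stale-binary theory on the affected node is to make sure nothing earlier in $PATH shadows the packaged tool, and that the python side matches too (sketch):

    which -a ceph                                   # any extra copies in $PATH?
    for b in $(which -a ceph); do $b --version; done
    dpkg -l 'ceph*' python-ceph                     # package versions consistent?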

Re: [ceph-users] Ceph Command Prepending "None" to output on one node (only)

2014-01-06 Thread Gregory Farnum
I have a vague memory of this being something that happened in an outdated version of the ceph tool. Are you running an older binary on the node in question? -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Sat, Jan 4, 2014 at 4:34 PM, Zeb Palmer wrote: > I have a small ceph…

[ceph-users] building librados static library librados.a

2014-01-06 Thread david hong
Hi ceph-users team, I'm a junior systems developer. I'm developing some applications using librados (just librados, not the whole Ceph package), and it turns out that building a librados-only package from the huge Ceph source tree would be an enormous job. All I want is just a…

Re: [ceph-users] ceph osd perf question

2014-01-06 Thread Gregory Farnum
On Fri, Jan 3, 2014 at 2:02 AM, Andrei Mikhailovsky wrote: > Hi guys, > > Could someone explain what the new perf stats show and whether the numbers are > reasonable on my cluster? > > I am concerned about the high fs_commit_latency, which seems to be above > 150ms for all osds. I've tried to find the…
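
For context, `ceph osd perf` reports two filestore latencies per OSD, roughly in this shape (values are illustrative). On spinning disks without SSD journals, fs_commit_latency in the 100-200 ms range usually reflects the cost of the periodic journal commit/sync rather than a fault:

    $ ceph osd perf
    osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
        0                    152                     4
        1                    147                     6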

Re: [ceph-users] [Rados] How to get the scrub progressing ?

2014-01-06 Thread Gregory Farnum
On Mon, Dec 30, 2013 at 11:14 PM, Kuo Hugo wrote: > > Hi all, > > I have several questions about osd scrub. > > Does the scrub job run in the background automatically? Is it working > periodically? Yes, the OSDs will periodically scrub the PGs they host based on load and the min/max scrub intervals…
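
The intervals mentioned are tunable per OSD. A ceph.conf sketch with the standard option names (the values shown are the usual defaults, in seconds, not recommendations):

    [osd]
        osd scrub min interval = 86400     # ~daily, when load permits
        osd scrub max interval = 604800    # force a shallow scrub at least weekly
        osd deep scrub interval = 604800   # deep-scrub cadence
        osd scrub load threshold = 0.5     # skip new scrubs above this loadavg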

[ceph-users] Ceph@HOME: the domestication of a wild cephalopod

2014-01-06 Thread Loic Dachary
Hi, The Ceph User Committee is proud to present its first use case :-) http://ceph.com/use-cases/cephhome-the-domestication-of-a-wild-cephalopod/ Many thanks to Alexandre Oliva for this inspiring story, Nathan Regola and Aaron Ten Clay for editing and proofreading and Patrick McGarry for wordp…

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-06 Thread Dietmar Maurer
> > A host with only one osd gets too much data. > > I think this is just fundamentally a problem with distributing 3 replicas > over only 4 hosts. Every piece of data in the system needs to include either host 3 or 4 > (and thus device 4 or 5) in order to have 3 replicas (on separate hosts).
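
For reference, the difference under discussion in CRUSH rule syntax, sketched for a replicated pool over hosts; the practical distinction is retry behavior when a chosen host cannot supply a usable device (for example weight-0 or full OSDs), which is exactly where the two forms diverge:

    # chooseleaf: pick N distinct hosts and one device under each, in one step
    step take default
    step chooseleaf firstn 0 type host
    step emit

    # choose: the same intent spelled as two explicit steps
    step take default
    step choose firstn 0 type host
    step choose firstn 1 type osd
    step emit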

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-06 Thread Sage Weil
On Mon, 6 Jan 2014, Dietmar Maurer wrote: > > 'ceph osd crush tunables optimal' > > > > or adjust an offline map file via the crushtool command line (more > > annoying) and retest; I suspect that is the problem. > > > > http://ceph.com/docs/master/rados/operations/crush-map/#tunables > > That solves…
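
The offline route mentioned above, as a sketch (file names are placeholders; the values shown are the "bobtail"/optimal tunables profile):

    ceph osd getcrushmap -o crushmap.bin    # export the live map
    crushtool -i crushmap.bin \
        --set-choose-local-tries 0 \
        --set-choose-local-fallback-tries 0 \
        --set-choose-total-tries 50 \
        --set-chooseleaf-descend-once 1 \
        -o crushmap.tuned
    crushtool -i crushmap.tuned --test --show-statistics    # dry-run the placement
    ceph osd setcrushmap -i crushmap.tuned                  # inject when satisfied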

Re: [ceph-users] crush chooseleaf vs. choose

2014-01-06 Thread Dietmar Maurer
> 'ceph osd crush tunables optimal' > > or adjust an offline map file via the crushtool command line (more > annoying) and retest; I suspect that is the problem. > > http://ceph.com/docs/master/rados/operations/crush-map/#tunables That solves the bug with weight 0, thanks. But I still get the…

Re: [ceph-users] backfill_toofull issue - can reassign PGs to different server?

2014-01-06 Thread Robert van Leeuwen
> I have 4 servers with 4 OSDs / drives each, so in total I have 16 OSDs. For some > reason, the last server is over-utilised compared to the first 3 servers, > causing all the OSDs on the fourth server: osd.12, osd.13, osd.14 and osd.15 > to be near full (above 8…
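
When one host's OSDs run hot like this, the usual levers are reweighting rather than trying to pin PGs to servers by hand (the ids and values below are examples; CRUSH weight normally tracks disk size, so lower it only deliberately):

    ceph osd reweight-by-utilization 110    # nudge down OSDs above 110% of mean use
    ceph osd reweight 12 0.85               # or temporarily override one osd (0.0-1.0)
    ceph osd crush reweight osd.12 0.9      # or persistently lower its CRUSH weight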