Hi all,
We're trying to spread data in our ceph cluster as much as possible,
that is pick different racks, then different hosts, then different OSDs.
It seems to work fine as long as there are enough buckets available,
but if we ask for more replicas than we have racks, for instance, the requested
I'm trying to use Ceph FS as glance's backend.
I have mounted Ceph FS on the glance machine and edited /etc/glance/glance-api.conf to
use the mounted directory.
But when I upload an image the way I used to, I get this error:
Request returned failure status.
None
HTTPServiceUnavailable (HTTP 503)
If I change
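(For reference, a minimal sketch of the filesystem-store settings in
glance-api.conf when pointing Glance at a mounted CephFS directory; the mount
point below is an assumption, not something taken from this thread:

    # glance-api.conf -- filesystem store on a CephFS mount (sketch)
    default_store = file
    # the directory must exist and be writable by the glance user
    filesystem_store_datadir = /mnt/cephfs/glance/images/

If the datadir is missing or not writable by the glance user, uploads can fail,
which is one possible source of a 503, though the thread doesn't pin down the
cause here.)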
Anybody? :)
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
On Mar 17, 2013 6:37 PM, "Igor Laskovy" wrote:
> Hi there!
>
> Could you please clarify what is the current status of development client
> for OS X and Windows desktop editions?
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> K
Any reason you have chosen to use CephFS here instead of RBD for
direct integration with Glance?
http://ceph.com/docs/master/rbd/rbd-openstack/
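For what it's worth, the RBD side of glance-api.conf is only a few lines,
roughly as below (the pool and user names follow the doc's example; adjust to
your setup):

    # glance-api.conf -- RBD-backed image store (sketch)
    default_store = rbd
    rbd_store_user = glance
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/ceph.conf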
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Ma
Exporting the CephFS via Samba (Winbind/idmap_ad authenticated to Active
Directory) seems to work. I just set up a very small incarnation of a Ceph
storage cluster yesterday using 0.56.3 on Scientific Linux 6.3 x64 (RHEL6
clone).
(When I initially ran Ceph a couple of years ago, for example
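(For completeness, the Samba side of such a setup is just an ordinary share on
top of a kernel-mounted CephFS; a minimal smb.conf sketch, with the mount point
assumed rather than taken from the thread:

    [cephfs]
        # /mnt/cephfs is an assumed kernel-client mount point
        path = /mnt/cephfs
        read only = no
        browseable = yes

Winbind/idmap_ad handles the AD users on the Samba side; nothing CephFS-specific
is needed in the share definition itself.)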
Hey Igor,
Currently there are no plans to develop an OS X- or Windows-specific
client per se. We do, however, provide a number of different ways to expose the
cluster so that you can use it from these machines.
The most recent example of this is the work being done on tgt that can
expose Cep
On Tue, 19 Mar 2013, Jerker Nyberg wrote:
(When I initially ran Ceph a couple of years ago, for example rtorrent or
even benchmarks did not work
Sorry I didn't complete that sentence. I just want to say that I get a
very polished impression of Ceph now compared to then. Great work.
I miss a
Hi,
Sorry for bringing this thread up again. After building hadoop and ceph, I
am not able to run the wordcount example. I am getting the following error:
varunc@varunc4-virtual-machine:/usr/local/hadoop$ time
/usr/local/hadoop/bin/hadoop jar hadoop*examples*.jar wordcount
/mnt/ceph/wc2 /mnt/ceph
Hey Arne,
So I am not one of the CRUSH-wizards by any means, but while we are
waiting for them I wanted to take a crack at it so you weren't left
hanging. You are able to make more complex choices than just a single
chooseleaf statement in your rules. Take the example from the doc
where you want
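(To make that concrete, a sketch of a rule that first picks racks and then
hosts inside them, so e.g. 4 replicas land on 2 racks without reusing a host or
OSD; the numbers are only an example:

    rule replicated_racks_then_hosts {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick 2 distinct racks...
            step choose firstn 2 type rack
            # ...then 2 distinct hosts (leaf OSDs) inside each rack -> 4 OSDs
            step chooseleaf firstn 2 type host
            step emit
    }

crushtool --test with --num-rep 4 is a handy way to sanity-check the resulting
mappings before injecting the map into the cluster.)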
Hi Varun,
Try removing this configuration option:
>   <property>
>     <name>ceph.root.dir</name>
>     <value>/mnt/ceph</value>
>   </property>
Hadoop running on Ceph uses the libcephfs user-space library to talk directly
to the file system, as opposed to running through the kernel or FUSE client.
This setting specifies which directory within the Ceph fil
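(For context, the rest of a typical core-site.xml for the CephFS bindings of
that era looks roughly like the sketch below; the monitor address is an
assumption, and the property names follow the then-current Hadoop-on-Ceph docs:

    <property>
      <name>fs.default.name</name>
      <value>ceph://192.168.0.10:6789/</value>
    </property>
    <property>
      <name>fs.ceph.impl</name>
      <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
    </property>
    <property>
      <name>ceph.conf.file</name>
      <value>/etc/ceph/ceph.conf</value>
    </property>

With ceph.root.dir left out, as suggested above, the bindings should simply
work against the root of the CephFS namespace.)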
Hi Noah,
After removing that and running, I get this error:
varunc@varunc4-virtual-machine:~$ time /usr/local/hadoop/bin/hadoop jar
hadoop*examples*.jar wordcount /mnt/ceph/wc /mnt/ceph/op2
Warning: $HADOOP_HOME is deprecated.
13/03/19 20:35:03 INFO ceph.CephFileSystem: selectDataPool
path=ceph:
Varun,
Can you post your updated core-site.xml? I've seen this error before when I
used the URI: ceph:// as opposed to ceph:///.
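Concretely, the difference is whether fs.default.name carries a path component
(made-up values, just to illustrate):

    <!-- works: scheme plus a root path -->
    <value>ceph:///</value>
    <!-- tends to trip things up: no path component at all -->
    <value>ceph://</value>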
- Noah
On Mar 19, 2013, at 8:10 AM, Varun Chandramouli wrote:
> Hi Noah,
>
> After removing that and running, I get this error:
>
> varunc@varunc4-virtual-machin
On Mar 19, 2013, at 8:10 AM, Varun Chandramouli wrote:
> Hi Noah,
>
> After removing that and running, I get this error:
>
> varunc@varunc4-virtual-machine:~$ time /usr/local/hadoop/bin/hadoop jar
> hadoop*examples*.jar wordcount /mnt/ceph/wc /mnt/ceph/op2
> Warning: $HADOOP_HOME is deprecate
No, the hadoop data is not located in /mnt/ceph/wc, but I have now copied the data
into /user/varunc/wc, which contains the hadoop data. However, the
MR-wordcount example is still not working. It's stuck with this output:
varunc@varunc4-virtual-machine:~$ time /usr/local/hadoop/bin/hadoop jar
hadoop*examples
Getting closer! I suggest checking the log files for your job tracker and all
of your task tracker nodes to see if any of them are having trouble.
-Noah
On Mar 19, 2013, at 8:42 AM, Varun Chandramouli wrote:
> No, the hadoop data is not located in /mnt/ceph/wc, but I have now copied the data
> into /
Thanks for the reply!
Actually, I would like to find some way to use one large scalable central
storage system across multiple PCs and Macs. CephFS would be the most suitable here, but
you provide only Linux support.
Is there really nothing planned here?
On Tue, Mar 19, 2013 at 3:52 PM, Patrick McGarry wrote:
> Hey Igor,
>
> Cur
At various times the ceph-fuse client has worked on OS X — Noah was the
last one to do this and the branch for it is sitting in my long-term
really-like-to-get-this-mainlined-someday queue. OS X is a lot easier than
Windows though, and nobody's done any planning around that beyond noting t
On 03/19/2013 10:58 AM, Igor Laskovy wrote:
> Thanks for the reply!
>
> Actually, I would like to find some way to use one large scalable central
> storage system across multiple PCs and Macs. CephFS would be the most suitable here, but
> you provide only Linux support.
> Is there really nothing planned here?
Windows is a real pai
Hi,
The job tracker seems to be running fine. The task tracker, however, is
shutting down with the error:
2013-03-19 22:29:24,583 ERROR org.apache.hadoop.mapred.TaskTracker: Can not
start task tracker because java.lang.UnsatisfiedLinkError: no cephfs_jni in
java.library.path
libcephfs_jni.so is pre
On Mar 19, 2013, at 10:05 AM, Varun Chandramouli wrote:
> libcephfs_jni.so is present in /usr/local/lib/, which I added to
> LD_LIBRARY_PATH and tried it again. The same error is displayed in the log
> file for the task trackers. Anything else I should be doing?
It looks like something is con
Hi Patrick,
We actually already tried that before (picking three from racks, then the 4th
from hosts). What we ended up with was a pg that was supposed to go
on the same osd twice … so we rolled back in the end :)
Then we thought it might be better to ask how to do this as the scenario
described shou
Task tracker is running on both the nodes, and is giving the same error on
both of them (and I am able to run hadoop fs -ls on both). I am running
hadoop as the same user, varunc, and I guess LD_LIBRARY_PATH has been set up
correctly:
varunc@varunc4-virtual-machine:/usr/local/hadoop$ echo $LD_LIBRA
Are you setting LD_LIBRARY_PATH in your bashrc? If so, make sure it is set at
the _very_ top (before the handling for interactive mode, a common problem with
stock Ubuntu setups).
Alternatively, set LD_LIBRARY_PATH in conf/hadoop-env.sh.
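Something along these lines (the /usr/local/lib path comes from earlier in the
thread; restart the task trackers afterwards so they pick it up):

    # conf/hadoop-env.sh
    # libcephfs_jni.so was reported to live in /usr/local/lib
    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH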
-Noah
On Mar 19, 2013, at 10:32 AM, Varun Chandramouli
Hi Noah,
Setting LD_LIBRARY_PATH in conf/hadoop-env.sh seems to have done the trick.
Thanks a ton.
Varun
No problem! Let us know if you have any other issues.
-Noah
On Mar 19, 2013, at 11:05 AM, Varun Chandramouli wrote:
> Hi Noah,
>
> Setting LD_LIBRARY_PATH in conf/hadoop-env.sh seems to have done the trick.
> Thanks a ton.
>
> Varun
>
I think object storage (using the Swift-compatible Ceph Object
Gateway) is the preferred mechanism for a Glance backend.
Neil
On Tue, Mar 19, 2013 at 6:49 AM, Patrick McGarry wrote:
> Any reason you have chosen to use CephFS here instead of RBD for
> direct integration with Glance?
>
> http://ce
...to be more precise, I should have said: object storage has been the
preferred mechanism of late in OpenStack, but RBD makes more sense due
to the copy-on-write facility. Either way, either the Ceph object
gateway or Ceph RBD makes more sense than CephFS currently.
neil
On Tue, Mar 19, 2013 at 1
My ceph cluster consists of 3 hosts in 3 locations, each with 2 SSDs and
4 spinning disks.
I have created a fresh ceph filesystem and started up ceph.
Ceph health reports HEALTH_OK.
I created a crushmap to suit our installation where each host will be in
separate racks, based on the example in the
I just want to see whether Ceph FS works.
Thanks.
-chen
-----Original Message-----
From: pmcga...@gmail.com [mailto:pmcga...@gmail.com] On Behalf Of Patrick
McGarry
Sent: Tuesday, March 19, 2013 9:50 PM
To: Li, Chen
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] using Ceph FS as OpenStack G
On 03/18/2013 07:53 AM, Wolfgang Hennerbichler wrote:
On 03/13/2013 06:38 PM, Josh Durgin wrote:
Anyone seeing this problem, could you try the wip-rbd-cache-aio branch?
Hi,
I just compiled and tested it out; unfortunately, there's no big change:
ceph --version
ceph version 0.58-375-ga4a6075
I'm using Ceph RBD for both Cinder and Glance. Cinder and Glance are installed
on two machines.
I have read in many places that when Cinder and Glance both use
Ceph RBD, no real data transfer should happen, because of copy-on-write.
But the truth is, when I run the command:
cinde
I think Josh may be the right man for this question ☺
To be more precise, I would like to add a few more words about the status:
1. We have configured “show_image_direct_url = True” in Glance, and from the
cinder-volume log we can confirm that we have got a direct_url, for example:
image_id 6565d775-
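(For reference, the copy-on-write path needs matching settings on both sides;
a sketch of the relevant lines, with pool names assumed rather than taken from
the thread:

    # glance-api.conf
    show_image_direct_url = True

    # cinder.conf
    glance_api_version = 2     # cinder must use the v2 glance API to see direct_url
    rbd_pool = volumes

Even with both set, the clone is only taken when the image is in raw format and
lives in the same Ceph cluster; otherwise the driver falls back to a full
download, which is one common reason people still see real data being copied.)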
On 03/19/2013 11:03 PM, Chen, Xiaoxi wrote:
I think Josh may be the right man for this question ☺
To be more precise, I would like to add a few more words about the status:
1. We have configured “show_image_direct_url = True” in Glance, and from the
cinder-volume log we can confirm that we have got