Make sure you are initializing the submodules. The autogen.sh script
should probably notify users when these are missing and/or initialize
them automatically.
git submodule init
git submodule update
or alternatively, git clone --recursive ...
On Fri, Jul 25, 2014 at 11:48 AM, Deven Phillips wrote:
Oh, it looks like autogen.sh is smart about that now. If you are using the
latest master, my suggestion may not be the solution.
On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins wrote:
> Make sure you are initializing the submodules. The autogen.sh script
> should probably notify users when the
3/ directory was empty and when I tried to use submodules to update
> it I got errors about non-empty directories... Trying to fix that now..
>
> Thanks!
>
> Deven
>
>
> On Fri, Jul 25, 2014 at 2:51 PM, Noah Watkins
> wrote:
>>
>> Make sure you are intializin
I'll take a shot at answering this:
Operations are atomic in the sense that there are no partial failures.
Additionally, access to an object should appear to be serialized. So, two
in-flight operations A and B will be applied in either A,B or B,A order. If
ordering is important (e.g. the operat
Never mind. I see that `ceph-deploy mon create-initial` has stopped
accepting the trailing hostname, which was causing the failure. I don't
know if the problems I showed above are actually anything to worry
about :)
On Tue, Jul 21, 2015 at 3:17 PM, Noah Watkins wrote:
> The docker/dist
The docker/distribution project runs a continuous integration VM using
CircleCI, and part of the VM setup installs Ceph packages using
ceph-deploy. This has been working well for quite a while, but we are
seeing a failure running `ceph-deploy install --release hammer`. The
snippet is here where it
your logfile, I do not know what the apt-get errors mean. It
> does seem like the install proceeds successfully, and that the ceph
> setup will proceed once the extra arg to mon create-initial is
> removed.
>
> Here's hoping that is indeed nothing to worry about. :)
>
> -
Hi KC,
The locality information is now collected and available to Hadoop
through the CephFS API, so fixing this is certainly possible. However,
there has not been extensive testing. I think the tasks that need to
be completed are (1) make sure that `CephFileSystem` is encoding the
correct block lo
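To check what the bindings are currently reporting, here is a minimal
sketch against the standard Hadoop FileSystem API (the file path is just a
placeholder); comparing its output with where the objects actually live is
one way to verify getFileBlockLocations():

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
      public static void main(String[] args) throws Exception {
        // Uses whatever file system is configured in core-site.xml (e.g. ceph://...)
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path(args.length > 0 ? args[0] : "/some/file");
        FileStatus status = fs.getFileStatus(path);

        // One BlockLocation per block/stripe, listing the hosts that store it
        BlockLocation[] locs = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation loc : locs) {
          System.out.println(loc.getOffset() + "+" + loc.getLength() + " -> " +
              Arrays.toString(loc.getHosts()));
        }
        fs.close();
      }
    }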
ere a command line tool that I can use to verify the results from
> getFileBlockLocations() ?
>
> thanks
> KC
>
>
>
> On Mon, Jul 8, 2013 at 3:09 PM, Noah Watkins
> wrote:
>>
>> Hi KC,
>>
>> The locality information is now collected and ava
osd.11 up 1
> 12  1  osd.12  up  1
> 13  1  osd.13  up  1
>  7  1  osd.7   up  1
>  8  1  osd.8   up  1
esday.
Are you running Cuttlefish? I believe it has all the dependencies.
On Mon, Jul 8, 2013 at 7:00 PM, Noah Watkins wrote:
> KC,
>
> Thanks a lot for checking that out. I just went to investigate, and
> the work we have done on the locality/topology-aware features are
> sitting i
Yep, I'm running cuttlefish ... I'll try building out of that branch and let
> you know how that goes.
>
> -KC
>
>
> On Mon, Jul 8, 2013 at 9:06 PM, Noah Watkins
> wrote:
>>
>> FYI, here is the patch as it currently stands:
>>
>>
>> htt
e map tasks are running on the same nodes as the
> splits they're processing. good stuff !
>
>
> On Mon, Jul 8, 2013 at 9:18 PM, Noah Watkins
> wrote:
>>
>> You might want to create a new branch and cherry-pick the
>> topology-relevant commits (I think there is 1
On Tue, Jul 9, 2013 at 12:35 PM, ker can wrote:
> hi Noah,
>
> while we're still on the hadoop topic ... I was also trying out the
> TestDFSIO tests ceph v/s hadoop. The Read tests on ceph take about 1.5x
> the hdfs time. The write tests are worse, about 2.5x the time on hdfs,
> but I guess
ta' rep size 2 min_size 1 crush_ruleset 1 object_hash
>> rjenkins pg_num 960 pgp_num 960 last_change 1 owner 0
>> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
>> pg_num 960 pgp_num 960 last_change 1 owner 0
>>
>> From hdfs-site.
, Jul 9, 2013 at 3:27 PM, Noah Watkins wrote:
>> Is the JNI interface still an issue or have we moved past that ?
>
> We haven't done much performance tuning with Hadoop, but I suspect
> that the JNI interface is not a bottleneck.
>
> My very first thought about what
ker can wrote:
> Makes sense. I can try playing around with these settings. When you're
> saying "client", would this be libcephfs.so?
>
>
>
>
>
> On Tue, Jul 9, 2013 at 5:35 PM, Noah Watkins
> wrote:
>>
>> Greg pointed out the read-ahead client o
d be very
useful.
Thanks!
Noah
>
> I didn't set max bytes ... I guess the default is zero which means no max ?
> I tried increasing the readahead max periods to 8 .. didn't look like a good
> change.
>
> thanks !
>
>
>
>
> On Wed, Jul 10, 2013 at 10:5
On Wed, Jul 10, 2013 at 6:23 PM, ker can wrote:
>
> Now separating out the journal from data disk ...
>
> HDFS write numbers (3 disks/data node)
> Average execution time: 466
> Best execution time : 426
> Worst execution time : 508
>
> ceph write numbers (3 data disks/data node + 3 journal d
On Wed, Jul 17, 2013 at 11:07 AM, ker can wrote:
> Hi,
>
> Has anyone got hbase working on ceph ? I've got ceph (cuttlefish) and
> hbase-0.94.9.
> My setup is erroring out looking for getDefaultReplication &
> getDefaultBlockSize ... but I can see those defined in
> core/org/apache/hadoop/fs/ceph/
thod.invoke(Method.java:597)
>> at
>> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.init(SequenceFileLogWriter.java:156)
>> ... 18 more
>>
>>
>>
>> On Wed, Jul 17, 2013 at 1:49 PM, Noah Watkins
>> wrote:
>>>
>&
On Fri, Jul 19, 2013 at 8:09 AM, ker can wrote:
>
> With ceph is there any way to influence the data block placement for a
> single file ?
AFAIK, no... But, this is an interesting twist. New files written out
to HDFS, IIRC, will by default store 1 local and 2 remote copies. This
is great for MapR
Hey Scott,
Things look OK, but I'm a little foggy on what exactly was shipping in
the libcephfs-java jar file back at 0.61. There was definitely a time
where Hadoop and libcephfs.jar in the Debian repos were out of sync,
and that might be what you are seeing.
Could you list the contents of the li
ike an older version 56.6, I got it from the Ubuntu Repo.
> Is there another method or pull request I can run to get the latest? I am
> having a hard time finding it.
>
> Thanks
>
>
> On Sun, Aug 4, 2013 at 10:33 PM, Noah Watkins
> wrote:
>>
>> Hey Scott,
>&
>
> ceph.root.dir
> /mnt/mycephfs
>
This is probably causing the issue. Is this meant to be a local mount
point? The 'ceph.root.dir' property specifies the root directory
/inside/ CephFS, and the Hadoop implementation doesn't require a local
CephFS mount--it uses a client library
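For reference, a minimal sketch of how these properties are consumed on the
Hadoop side; the monitor host and paths are placeholders, and normally the
same keys would live in core-site.xml rather than be set in code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CephFsConfigExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same keys that go in core-site.xml; values here are placeholders.
        conf.set("fs.ceph.impl", "org.apache.hadoop.fs.ceph.CephFileSystem");
        conf.set("fs.default.name", "ceph://mon-host:6789/");
        conf.set("ceph.conf.file", "/etc/ceph/ceph.conf");
        // Root directory *inside* CephFS that Hadoop paths resolve against,
        // not a local mount point.
        conf.set("ceph.root.dir", "/hadoop");

        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/")));  // goes through libcephfs
        fs.close();
      }
    }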
e.hadoop.fs.ceph.CephFileSystem
>
>
>
> fs.default.name
> ceph://hyrax1:6789/
>
>
>
> ceph.conf.file
> /hyrax/hadoop-ceph/ceph/ceph.conf
>
>
>
> ceph.root.dir
> /
>
>
>
> ceph.auth.keyfile
> /hyrax/hadoop-ceph/ceph/admin.secret
&g
>
>
> fs.default.name
> ceph://hyrax1:6789/
>
>
>
> ceph.conf.file
> /hyrax/hadoop-ceph/ceph/ceph.conf
>
>
>
> ceph.root.dir
> /
>
>
> ceph.auth.keyring
>/hyrax/hadoop-ceph/ceph/c
RY_PATH=/hyrax/hadoop-ceph/lib
>
> I confirmed using bin/hadoop classpath that both jars are in the classpath.
>
> On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins
> wrote:
>> How are you invoking Hadoop? Also, I forgot to ask, are you using the
>> wrappers located in githu
45733 7f0b58de7700 10 jni: ceph_mount: exit ret -2
>
> On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins
> wrote:
>> What happens when you run `bin/hadoop fs -ls` ? This is entirely
>> local, and a bit simpler and easier to grok.
>>
>> On Mon, Sep 23, 2013 at 12:23 PM
10 jni: ceph_mount: exit ret -2
> 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
> 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
> ....
>
>
> On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins
> wrote:
>> In the log file that you sho
oop-common/ and checkout the
cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
file.
On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
wrote:
> My bad, I associated conf_read_file with conf_set.
> No, it does not appear in the logs.
>
> On Mon, Sep 23, 20
> [javac] import com.ceph.fs.CephStat;
> [javac] ^
>
> What are the dependencies that I need to have installed?
>
>
> On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins
> wrote:
>> Ok thanks. That narrows things down a lot. It seems like the keyring
>&g
the jar that is posted online? It is misleading...
>
> Thanks!
> Rolando
>
>
>
> On Mon, Sep 23, 2013 at 5:07 PM, Noah Watkins
> wrote:
>> You need to stick the CephFS jar files in the hadoop lib folder.
>>
>> On Mon, Sep 23, 2013 at 2:02 PM, Rola
On Thu, Oct 10, 2013 at 12:27 AM, 鹏 wrote:
>
> First of all, I installed Ceph at 192.168.58.132 (tar -zxvf
> ceph-0.6.2.tar.gz; ./configure; make; make install). Does this mean "The
> native Ceph file system client must be installed on each participating node
> in the Hadoop cluster"? Shoul
On Thu, Oct 10, 2013 at 7:29 AM, Noah Watkins wrote:
> hadoop cluster. You do not need to run any Ceph daemons, but it is
> common to run them together for data locality. If you are building
Woah, my wording here is terrible. What I meant to say is that you
don't necessarily need to
On Sun, Oct 13, 2013 at 8:28 PM, 鹏 wrote:
> hi all:
> Exception in thread "main" java.lang.NoClassDefFoundError:
> com/ceph/fs/cephFileAlreadyExisteException
> at java.lang.class.forName0(Native Method)
This looks like a bug, which I'll fixup today. But it shouldn't be
related to
The error below seems to indicate that Hadoop isn't aware of the `ceph://`
file system. You'll need to manually add this to your core-site.xml:
>   <property>
>     <name>fs.ceph.impl</name>
>     <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>   </property>
> report:FileSystem ceph://192.168.22.158:6789 is not a distributed
Do you have the following in your core-site.xml?
>
> fs.ceph.impl
> org.apache.hadoop.fs.ceph.CephFileSystem
>
On Sun, Oct 13, 2013 at 11:55 PM, 鹏 wrote:
> hi all
> I follow the mail configure the ceph with hadoop
> (http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/180
Hi Kai,
It doesn't look like there is anything Ceph specific in the Java
backtrace you posted. Does your installation work with HDFS? Are there
any logs where an error is occurring with the Ceph plugin?
Thanks,
Noah
On Mon, Oct 14, 2013 at 4:34 PM, log1024 wrote:
> Hi,
> I have a 4-node Ceph clu
On Tue, Oct 15, 2013 at 2:13 AM, 鹏 wrote:
>
> *** # javac -classpath ../libcephfs.jar com/ceph/fs/Test.java
> com/ceph/fs/Test:9:unreported exception java.io.FileNotFoundException;
> must be caught or declared to be throw
> mount.conf_read_file("/ect/ceph/ceph.conf");
>
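The compiler error quoted above is just Java's checked-exception rule:
conf_read_file declares java.io.FileNotFoundException, so the call has to be
wrapped in try/catch or the exception declared on the enclosing method. A
minimal sketch (the client id and paths are placeholders, not the original
poster's code):

    import java.io.FileNotFoundException;
    import com.ceph.fs.CephMount;

    public class Test {
      public static void main(String[] args) throws Exception {
        CephMount mount = new CephMount("admin");
        try {
          // Declares FileNotFoundException, so catch it (or add it to main's throws)
          mount.conf_read_file("/etc/ceph/ceph.conf");
        } catch (FileNotFoundException e) {
          System.err.println("ceph.conf not found: " + e.getMessage());
          return;
        }
        mount.mount("/");   // mount the root of the file system
        mount.unmount();
      }
    }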
The --with-hadoop option has been removed. The Ceph Hadoop bindings are now
located in git://github.com/ceph/hadoop-common cephfs/branch-1.0, and the
required CephFS Java bindings can be built from the Ceph Git repository
using the --enable-cephfs-java configure option.
On Wed, Oct 16, 2013 at 12:
Kai,
It looks like libcephfs-java (the CephFS Java bindings) are not in your
classpath. Where did you install them?
-Noah
On Thu, Oct 17, 2013 at 11:30 PM, log1024 wrote:
> Hi Peng
> The conf in my cluster is almost the same with yours, but when i run
> #bin/hadoop fs -ls /
> It failed with:
Peng,
I'm glad you were able to get it working. You'll have to provide some more
information to start debugging why it is slow. How is your Ceph cluster
configured? Also, have a look at the jobtracker statistics and see if any
tasks are failing.
On Thu, Oct 17, 2013 at 8:17 PM, 鹏 wrote:
> **
>
On Fri, Oct 18, 2013 at 12:04 PM, wrote:
> Hi all,
>
> Is this possible?
> Does it make sense?
As far as constructing scriptable object interfaces (Java, LISP,
etc...) this is certainly possible, and pretty cool :) Currently we
have a development version of Lua support (github.com/ceph/ceph.git
On Fri, Oct 18, 2013 at 7:31 PM, 鹏 wrote:
> hi Noah
> That was a stupid mistake on my part! The reason is that start-all.sh did
> not start the datanode, so I used the start-mapr.sh script to start it.
> By the way, when Ceph replaces HDFS, does it also replace the namenode?
> Thank you, Noah!
> peng
There is n
Hi Alek,
The Lua branch is definitely ready for testing now. I'm putting together a
blog post about it and I'll shoot a note to the mailing list when that's
complete.
On Oct 19, 2013 11:45 AM, "Alek Paunov" wrote:
> On 18.10.2013 22:23, Noah Watkins wrote:
>
>>
>
Hi peng,
Unfortunately I’ve never used Eclipse. I think there may be some tutorials on
setting up Eclipse for Hadoop development, but I’ve never tried them.
-Noah
On Oct 27, 2013, at 7:13 PM, 鹏 wrote:
> Hi all !
> I have replaced HDFS with CephFS. Today, I want to use Eclipse
>
ng
>
>
>
>
>
>
> On 2013-10-28 23:12:03, "Noah Watkins" wrote:
>>Hi peng,
>>
>>Unfortunately I’ve never used Eclipse. I think there may be some tutorials
>> on setting up Eclipse for Hadoop development, but I’ve never tried them.
>>
>>-
Can you try again with OpenJDK or Oracle Java? We haven't tested
with gcj, but I'll take a look and see if we can support that, too.
Thanks!
On Mon, Nov 11, 2013 at 4:29 AM, 皓月 wrote:
> I configured with --enable-cephfs-java, then ran make. There is an error.
> export CLASSPATH=java/ ; \
> gcj -C
The cls_crypto.cc file in src/ hasn't been included in the Ceph
compilation for a long time. Take a look at src/cls/* for a list of
modules that are compiled. In particular, there is a "Hello World"
example that is nice. These should work for you out-of-the-box.
You could also try to compile cls_c
Generally these steps need to be taken:
1) Compile the custom methods into a shared library
2) Place the library in the class load path of the OSD
3) Invoke the methods via the librados exec method
The easiest way to do this is to use the ceph build system by adding your
module to src/cls/Makefile.a
On Nov 13, 2013, at 12:16 AM, wrote:
> my core-site.conf list :
> fs.ceph.impl=org.apache.hadoop.fs.ceph.CephFileSystem
> fs.default.name=ceph://ca189:6789/
> ceph.conf.file=/etc/ceph/ceph.conf
> ceph.root.dir=/mnt/fuse
This looks suspicious. This should point to a root directory within C
There are users who have run, or are running, HBase on top of Ceph. The setup should be
no different than the standard HBase setup instructions, with the exception
that when configuring the Hadoop file system, you specify CephFS instead
(typically in core-site.xml).
Currently the documentation for runn
I don't think there is any inherent limitation to using RADOS or RBD
as a backend for a non-CephFS file system, as CephFS is itself
built on top of RADOS (though I suppose it doesn't directly use
librados). However, the challenge would be in configuring and tuning
the two independent systems
I've posted a preliminary patch set to support a libcephfs io engine in fio:
http://github.com/noahdesu/fio cephfs
You can use this right now to generate load through libcephfs. The
plugin needs a bit more work before it goes upstream (patches
welcome), so feel free to play around with it
I'm trying to install Firefly on an up-to-date FC20 box. I'm getting
the following errors:
[nwatkins@kyoto cluster]$ ../ceph-deploy/ceph-deploy install --release
firefly kyoto
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/nwatkins/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked
me like there might be a repo priority issue. It's mixing packages
>> from Fedora downstream repos and the ceph.com upstream repos. That's
>> not supposed to happen.
>>
>> - Travis
>>
>> On Wed, Jan 7, 2015 at 2:15 PM, Noah Watkins
>> wr
A little info about wip-port.
The wip-port branch lags behind master a bit, usually a week or two
depending on what I've got going on. There are testers for OSX and
FreeBSD, and it would probably be a nice staging place for bringing in
Windows patches, as I suspect the areas of change will overl
You'll need to register the new pool with the MDS:
ceph mds add_data_pool
On Thu, Jan 2, 2014 at 9:48 PM, 鹏 wrote:
> Hi all;
> Today, I want to use the ceph_open_layout() function in libcephfs.h
>
> I created a new pool successfully,
> # rados mkpool data1
> and then I edit the code like thi
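For reference, a rough sketch of the same thing through the Java bindings. It
assumes a layout-aware open() overload in com.ceph.fs.CephMount that mirrors
ceph_open_layout (the stripe parameters, pool name, and file path below are
only placeholders):

    import com.ceph.fs.CephMount;

    public class LayoutOpen {
      public static void main(String[] args) throws Exception {
        CephMount mount = new CephMount("admin");
        mount.conf_read_file("/etc/ceph/ceph.conf");
        mount.mount("/");

        // Assumed overload mirroring ceph_open_layout:
        // open(path, flags, mode, stripe_unit, stripe_count, object_size, data_pool)
        int fd = mount.open("/file-in-data1",
            CephMount.O_WRONLY | CephMount.O_CREAT, 0644,
            4194304, 1, 4194304, "data1");

        byte[] buf = "hello".getBytes();
        mount.write(fd, buf, buf.length, 0);
        mount.close(fd);
        mount.unmount();
      }
    }

Note that, as mentioned above, the new pool has to be registered as an MDS
data pool (ceph mds add_data_pool) before opens against it will succeed.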
The default configuration for a Ceph build should produce a static
rados library. If you actually want to build _only_ librados, that
might require a bit of automake tweaking.
nwatkins@kyoto:~$ ls -l projects/ceph_install/lib/
total 691396
-rw-r--r-- 1 nwatkins nwatkins 219465940 Jan 6 09:56 librados.
Most (all?) of the network message structures are located in:
https://github.com/ceph/ceph/tree/master/src/messages
On Jan 9, 2014, at 7:44 AM, Bruce Lee wrote:
> Hi all,
> I am new here and glad to see you guys.
> Thanks for your hard work for providing a more stable, powerful,functional
> cep
Hi Kesten,
It's a little difficult to tell what the source of the problem is, but
looking at the gist you referenced, I don't see anything that would
indicate that Ceph is causing the issue. For instance,
hadoop-mapred-tasktracker-xxx-yyy-hdfs01.log looks like Hadoop daemons
are having problems co
Hi Gurvinder,
There is a pull request for Hadoop 2 support here
https://github.com/noahdesu/cephfs-hadoop/pull/1
I have not yet tested it personally, but it looks OK to me.
Data locality is supported in Ceph.
On 2/18/14, 3:15 AM, Gurvinder Singh wrote:
Hi,
I am planning to test th
On Wed, Mar 19, 2014 at 4:28 AM, Gurvinder Singh
wrote:
> Hi,
>
> I have ceph 0.72.2 running on debian wheezy with cloudera 5.0 beta 2
> hadoop. I have installed the ceph hadoop binding with hadoop 2.x
> support. I am able to run the command such as
> From github.com/noahdesu/cephfs-hadoop patched
> On 03/19/2014 05:18 PM, Noah Watkins wrote:
>> Err, obviously switching things out for Ceph rather than Gluster.
>>
>> On Wed, Mar 19, 2014 at 9:18 AM, Noah Watkins
>> wrote:
>>> Looks like this is a configuration issue that has popped up with other
>
Err, obviously switching things out for Ceph rather than Gluster.
On Wed, Mar 19, 2014 at 9:18 AM, Noah Watkins wrote:
> Looks like this is a configuration issue that has popped up with other
> 3rd party file systems in Hadoop 2.x with YARN.
>
>
> http://mail-archives.apac
error itself looks like a missing dependency, but that exception
being thrown might also be triggered by other problems while loading
the bindings.
On Wed, Mar 19, 2014 at 8:43 AM, Gurvinder Singh
wrote:
> On 03/19/2014 03:51 PM, Noah Watkins wrote:
>> On Wed, Mar 19, 2014 at 4:28 AM,
:
fs.AbstractFileSystem.glusterfs.impl
org.apache.hadoop.fs.glusterfs.GlusterFS
Apparently this is used rather than the `fs.ceph.impl` property in 2.x
On Wed, Mar 19, 2014 at 9:06 AM, Gurvinder Singh
wrote:
> On 03/19/2014 04:50 PM, Noah Watkins wrote:
>> Since `hadoop -fs ls /` seems to work on your local node
This strikes me as a difference in semantics between HDFS and CephFS,
and like Greg said it's probably based on HBase assumptions. It'd be
really helpful to find out what the exception is. If you are building
the Hadoop bindings from scratch, you can instrument `listStatus` in
`CephFileSystem.java`
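One way to get at the exception without touching HBase is to call listStatus
directly against the CephFS-backed file system and print whatever it throws.
A minimal sketch (the /hbase path is just an example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListStatusDebug {
      public static void main(String[] args) {
        try {
          // Picks up the ceph:// file system configured in core-site.xml
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);
          FileStatus[] entries =
              fs.listStatus(new Path(args.length > 0 ? args[0] : "/hbase"));
          for (FileStatus st : entries) {
            System.out.println(st.getPath() + " " + st.getLen());
          }
          fs.close();
        } catch (Exception e) {
          // This is the exception HBase is hitting; print it in full.
          e.printStackTrace();
        }
      }
    }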
On Sat, Apr 30, 2016 at 2:55 PM, Adam Tygart wrote:
> Supposedly cephfs-hadoop worked and/or works on hadoop 2. I am in the
> process of getting it working with cdh5.7.0 (based on hadoop 2.6.0).
> I'm under the impression that it is/was working with 2.4.0 at some
> point in time.
>
> At this very
Installing Jewel with ceph-deploy has been working for weeks. Today I
started to get some dependency issues:
[b61808c8624c][DEBUG ] The following packages have unmet dependencies:
[b61808c8624c][DEBUG ] ceph : Depends: ceph-mon (= 10.2.1-1trusty) but it
is not going to be installed
[b61808c8624c]
mind trying this again and see if you are good?
> >
> > On Tue, Jun 14, 2016 at 5:31 PM, Noah Watkins
> wrote:
> >> Installing Jewel with ceph-deploy has been working for weeks. Today I
> >> started to get some dependency issues:
> >>
> >> [b61808
tested it out and it works as expected. Let me know if you have any
> issues.
>
> On Tue, Jun 14, 2016 at 5:57 PM, Noah Watkins wrote:
>> Yeah, I'm still seeing the problem too. Thanks.
>>
>> On Tue, Jun 14, 2016 at 2:55 PM Alfredo Deza wrote:
>>>
>>
Hi Varun,
Try removing this configuration option:
>
> ceph.root.dir
> /mnt/ceph
>
Hadoop running on Ceph uses the libcephfs user-space library to talk directly
to the file system, as opposed to running through the kernel or FUSE client.
This setting specifies which directory within the Ceph fil
pl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> Any ways to remove this error?
>
> On Tue, Mar 19, 2013 at 7:39 PM, Noah Watkins wrote:
> Hi Varun,
>
> Try removing this conf
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> Any ways to remove this error?
>
> On Tue, Mar
Getting closer! I suggest checking the log files for your job tracker and all
of your task tracker nodes to see if any of them are having troubles.
-Noah
On Mar 19, 2013, at 8:42 AM, Varun Chandramouli wrote:
> No, hadoop data is not located in /mnt/ceph/wc, but now i copied the data
> into /
On Mar 19, 2013, at 10:05 AM, Varun Chandramouli wrote:
> libcephfs_jni.so is present in /usr/local/lib/, which I added to
> LD_LIBRARY_PATH and tried it again. The same error is displayed in the log
> file for the task trackers. Anything else I should be doing?
It looks like something is con
Are you setting LD_LIBRARY_PATH in your bashrc? If so, make sure it is set at
the _very_ top (before the handling for interactive mode, a common problem with
stock Ubuntu setups).
Alternatively, set LD_LIBRARY_PATH in conf/hadoop-env.sh.
-Noah
On Mar 19, 2013, at 10:32 AM, Varun Chandramouli
No problem! Let us know if you have any other issues.
-Noah
On Mar 19, 2013, at 11:05 AM, Varun Chandramouli wrote:
> Hi Noah,
>
> Setting LD_LIBRARY_PATH in conf/hadoop-env.sh seems to have done the trick.
> Thanks a ton.
>
> Varun
>
On Mar 21, 2013, at 8:03 AM, François P-L wrote:
> I'm not seeing the new location on github (but the ceph documentation page
> has been updated, thx ;)).
> What is the status of all Hadoop dependency on the master branch ?
The current Hadoop dependency is on the master branch. We believe all
On Tue, Apr 2, 2013 at 4:18 AM, Varun Chandramouli wrote:
>
> Another question I had was regarding hadoop-MR on ceph. I believe that on
> HDFS, the jobtracker tries to schedule jobs locally, with necessary
> information from the namenode. When on ceph, how is this ensured, given
> that a file may
On Apr 4, 2013, at 3:06 AM, Waed Bataineh wrote:
> Hello,
>
> I'm using Ceph as object storage, where it puts the whole file, whatever
> its size, in one object (correct me if I'm wrong).
> I used it for multiple files that have different extensions (.txt, .mp3,
> etc.) and I can store the
ular nodes.
>
> Thanks
> Varun
>
>
> On Wed, Apr 3, 2013 at 11:50 PM, Noah Watkins wrote:
>
>> On Tue, Apr 2, 2013 at 4:18 AM, Varun Chandramouli
>> wrote:
>>
>>>
>>> Another question I had was regarding hadoop-MR on ceph. I believe
Varun,
What version of Ceph are you running? Can you confirm that the MDS daemon
(ceph-mds) is still running or has crashed when the MDS becomes
laggy/unresponsive? If it has crashed, check the MDS log for a crash report.
There were a couple of Hadoop workloads that caused the MDS to misbehave fo
You may need to be root to look at the logs in /var/log/ceph. Turning up
logging is helpful, too. Is the bug reproducible? It'd be great if you could
get a core dump file for the crashed MDS process.
-Noah
On Apr 24, 2013, at 9:53 PM, Varun Chandramouli wrote:
> Ceph version was a 0.58 build
On Apr 25, 2013, at 4:08 AM, Varun Chandramouli wrote:
> 2013-04-25 13:54:36.182188 bff8cb40 -1 common/Thread.cc: In function 'void
> Thread::create(size_t)' thread bff8cb40 time 2013-04-25
> 13:54:36.053392#012common/Thread.cc: 110: FAILED assert(ret == 0)#012#012
> ceph version 0.58-500-gaf
Mike,
I'm guessing that HBase is creating and deleting its blocks, but that the
deletes are delayed:
http://ceph.com/docs/master/dev/delayed-delete/
which would explain the correct reporting at the file system level, but not the
actual 'data' pool. I'm not as familiar with this level of deta
Mike,
Thanks for the looking into this further.
On May 10, 2013, at 5:23 AM, Mike Bryant wrote:
> I've just found this bug report though: http://tracker.ceph.com/issues/3601
> Looks like that may be the same issue..
This definitely seems like a candidate.
>> Adding some debug to the cephfs ja
On Jun 4, 2013, at 2:58 PM, Ilja Maslov wrote:
> Is the only way to get it to work is to build Hadoop off the
> https://github.com/ceph/hadoop-common/tree/cephfs/branch-1.0/src or is it
> possible to compile/obtain some sort of a plugin and feed it to a stable
> hadoop version?
Hi Ilja,
We
Thanks a lot for this Ilja! I'm going to update the documentation again soon,
so this is very helpful.
On Jun 5, 2013, at 12:21 PM, Ilja Maslov wrote:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
Was there actually a problem if you didn't set this?
> 4. Symink JNI library
> cd $HADOOP_
I've used ceph-deploy to create a new cluster with the default cluster name. I
now want to deploy a second cluster in parallel, using the same nodes. I went
through the same process for deploying the first, but with the --cluster option,
and I'm getting an error on gatherkeys.
$ ceph-deploy --clust
On Mon, Jun 5, 2017 at 11:04 AM Gregory Farnum wrote:
> On Mon, Jun 5, 2017 at 10:43 AM Noah Watkins
> wrote:
>
>>
>> Fixing it would require we persist the entire returned bufferlist, which
> isn't feasible in general. There's a proposal that gets floated
>
Hi Nick,
The first thing to note is that in Kraken, object classes that are not whitelisted
need to be enabled explicitly. This is in the Kraken release notes (
http://docs.ceph.com/docs/master/release-notes/):
tldr: add 'osd class load list = *' and 'osd class default list = *' to
ceph.conf.
- The ‘osd
and I'm getting
>
> /tmp/buildd/ceph-11.2.0/src/cls/lua/cls_lua.cc:1004: error: [string
> "..."]:2: attempt to call a nil value (global 'require')
>
> I did see this comment
>
> https://github.com/ceph/ceph/blob/kraken/src/cls/lua/cls_lua.cc#L703
>
&g
concern, though in general either applications at this
level (i.e. invoking object classes) are already trusted, or an
application could assert a known version of a set of objects that is
enforced automatically.
> Nick
>
>> -Original Message-
>> From: Noah Watkins [mailto:noah
Comments inline
> -- Forwarded message --
> From: Zheyuan Chen
> Date: Sat, Jun 3, 2017 at 1:45 PM
> Subject: [ceph-users] Bug report: unexpected behavior when executing
> Lua object class
> To: ceph-users@lists.ceph.com
>
> Bug 1: I can not get returned output in the first script
>> Unfortunately, this isn't a bug. Rados clears any returned data from
>> an object class method if the operation also writes to the object.
>
> Do you have any idea why RADOS behaves like this?
>
>
>
> On Sat, Jun 3, 2017 at 9:30 AM, Noah Watkins wrote:
>>
n 5, 2017 at 10:43 AM Noah Watkins wrote:
>>
>> I haven't taken the time to really grok why the limitation exists
>> (e.g. i'd be interested in to know if it's fundamental). There is a
>> comment here:
>>
>> https://github.com/ceph/ceph/blob/mas
Hi Jose,
I believe what you are referring to is using Hadoop over Ceph via the
VFS implementation of the Ceph client vs the user-space libcephfs
client library. The current Hadoop plugin for Ceph uses the client
library. You could run Hadoop over Ceph using a local Ceph mount
point, but it would t