On 04.06.2013 20:03, Gandalf Corvotempesta wrote:
> Any experiences with clustered FS on top of RBD devices?
> Which FS do you suggest for more or less 10.000 mailboxes accessed by 10
> dovecot nodes ?
There is an ongoing effort to implement librados storage in Dovecot,
AFAIK. Maybe it's worth looking into.
Hello!
We have a simple setup as follows:
Debian GNU/Linux 6.0 x64
Linux h08 2.6.32-19-pve #1 SMP Wed May 15 07:32:52 CEST 2013 x86_64 GNU/Linux
ii  ceph          0.61.2-1~bpo60+1    distributed storage and file system
ii  ceph-common   0.61.2-1~bpo60+1
Hi,
I have Cuttlefish and I'm using ceph-deploy.
My ceph.conf is this:
fsid = 775cb230-1b4c-41fb-8473-5b92cexx
mon_initial_members = bd-0, bd-1, bd-2
mon_host = 147.172.xxx.x0,147.172.xxx.x1,147.172.xxx.x2
auth_supported = cephx
public_network = 147.172.xxx.0/24
cluster_n
Good day!
Tried to nullify this OSD and reinject it, with no success. It works for a
little bit, then it crashes again.
Regards, Artem Silenkov, 2GIS TM.
---
2GIS LLC
http://2gis.ru
a.silen...@2gis.ru
gtalk:artem.silen...@gmail.com
cell:+79231534853
2013/6/5 Artem Silenkov
> Hello!
> We have simple
> and I'm unable to mount the cluster with the following command:
> root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
So, what does it say?
I'd also recommend you start with my Russian doc:
http://habrahabr.ru/post/179823
On Tue, Jun 4, 2013 at 4:22 PM, Явор Маринов wrote:
> That's the ex
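If cephx authentication is enabled, the kernel client also needs credentials passed as mount options; a minimal sketch (the user name "admin" and the secret-file path are assumptions):
sudo mount -t ceph 192.168.2.170:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret
# or inline, using the key printed by: ceph auth get-key client.admin
sudo mount -t ceph 192.168.2.170:6789:/ /mnt -o name=admin,secret=<key>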
The GridPP particle physics project is holding a Big Data event at
Imperial College, with participation from other areas too.
https://indico.cern.ch/conferenceDisplay.py?ovw=True&confId=246453
First - you are very welcome to attend.
Second - would any big Ceph users be willing to come and share their experiences?
I've managed to start and mount the cluster by completely starting the
process from scratch. Another thing I'm looking for is documentation on
how to add another node (or hard drives) to a running cluster without
affecting the mount point and the running service. Can you point me to this?
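For a cluster deployed with ceph-deploy, an extra disk or node can be added while the cluster is running; a minimal sketch (hostname "ceph2" and device "/dev/sdb" are placeholders):
ceph-deploy install ceph2               # only needed for a brand-new node
ceph-deploy osd create ceph2:/dev/sdb   # prepare and activate the new OSD
ceph -w                                 # watch the data rebalance onto it
Existing clients and mount points keep working while the data rebalances in the background.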
I did, what would you like to know?
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com – Skype: han.sbastien
Address: 10, rue de la Victoire – 75009 Paris
Web: www.enovance.com – Tw
Ohh,
I just realized that after a reboot all OSDs were automatically mounted
from /dev/sdc. Wonderful.
Now the next thing is to change the journal from /dev/sdc? to the newly
created /dev/sda?
How do I do that, and what is the preferred fs-type for the journal (my OSDs
are btrfs)?
Thank you,
Markus
Markus,
I'm not sure there is any recommended fs-type for the journal; xfs is
recommended for the filestore, however journals should go straight to SSD.
You can read more about journal config reference here:
http://ceph.com/docs/next/rados/configuration/journal-ref/
Cheers,
Syed
On Wednesday 05 June
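A minimal sketch of moving an existing OSD's journal to the new partition (OSD id 0 and /dev/sda1 are placeholders; stop the OSD first so the journal can be flushed safely):
sudo service ceph stop osd.0       # or with upstart: sudo stop ceph-osd id=0
ceph-osd -i 0 --flush-journal      # write out anything still pending in the old journal
# point osd.0 at the new partition, e.g. in ceph.conf:
#   [osd.0]
#   osd journal = /dev/sda1
ceph-osd -i 0 --mkjournal          # initialize the new journal
sudo service ceph start osd.0
The journal itself is written to the device raw, so it needs no filesystem of its own.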
This would be easier to see with a log than with all the GDB stuff, but the
reference in the backtrace to "SyncEntryTimeout::finish(int)" tells me that
the filesystem is taking too long to sync things to disk. Either this disk
is bad or you're somehow subjecting it to a much heavier load than the others.
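A few generic checks that can help tell a failing disk from an overloaded one (the device name is a placeholder):
iostat -x 1 /dev/sdX     # sustained high await/%util suggests overload
smartctl -a /dev/sdX     # reallocated/pending sectors suggest a dying disk
dmesg | grep -i error    # look for kernel I/O errors on that device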
Hi all,
I already installed Ceph Bobtail on CentOS machines and it runs perfectly.
But now I have to install Ceph Cuttlefish on Red Hat 6.4. I have two machines
(at the moment). We can assume the hostnames IP1 and IP2 ;). I want (just
to test) two monitors (one per host) and two osds (on
Hmm, no joy so far :(
Still getting:
hduser@dfs01:~$ hadoop fs -ls
Bad connection to FS. command aborted. exception: No FileSystem for scheme: ceph
hadoop-cephfs.jar from http://ceph.com/download/hadoop-cephfs.jar is in the
classpath
libcephfs.jar from libcephfs-java (0.61.2-1precise) package i
I wonder if it has something to do with them renaming /usr/bin/kvm; in the
qemu 1.4 packaged with Ubuntu 13.04 it has been replaced with the following:
#! /bin/sh
echo "W: kvm binary is deprecated, please use qemu-system-x86_64 instead" >&2
exec qemu-system-x86_64 -machine accel=kvm:tcg "$@"
On Ju
You need to specify the ceph implementation in core-site.xml:
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
Mike
On 5 June 2013 16:19, Ilja Maslov wrote:
> Hmm, no joy so far :(
>
> Still getting:
>
> hduser@dfs01:~$ hadoop fs -ls
> Bad connection to FS. command a
Hi,
I'm having trouble setting a CORS policy on a bucket.
Using the boto python library, I can create a bucket and so on, but
when I try to get or set the CORS policy radosgw responds with a 403:
AccessDenied
Would anyone be able to help me with where I'm going wrong?
(This is radosgw 0.61, so it
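For comparison, a minimal boto sketch of setting and reading back a CORS policy (the endpoint, credentials and bucket name are placeholders, and it assumes the keys belong to the bucket's owner):
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
from boto.s3.cors import CORSConfiguration

# placeholders: radosgw endpoint and the S3-style keys of the bucket owner
conn = S3Connection(aws_access_key_id='ACCESS_KEY',
                    aws_secret_access_key='SECRET_KEY',
                    host='radosgw.example.com',
                    is_secure=False,
                    calling_format=OrdinaryCallingFormat())

bucket = conn.create_bucket('cors-test')
cors = CORSConfiguration()
cors.add_rule(['GET', 'PUT'], ['*'],
              allowed_header=['*'], max_age_seconds=3000)
bucket.set_cors(cors)
print(bucket.get_cors().to_xml())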
Good point. I forgot to disclose that for our Grizzly testing, instead of a
full package installation, I only copied the 1.5 version of the
qemu-system-x86_64 binary to our Ubuntu 12.04 nova hosts, with /usr/bin/kvm
linked to it. Maybe that is why I didn't see the same errors as Wolfgang saw.
Prier
Sure,
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
On Wed, Jun 5, 2013 at 11:38 AM, Явор Маринов wrote:
> I've managed to start and mount the cluster by completely starting the
> process from scratch. Another thing I'm looking for is
> documentation on how t
Thanks for all the help, guys!
Let me give back to the community by listing all the minimal steps I needed to
make it work on current versions of the software.
Ubuntu 12.04.2 LTS
ceph 0.61.2-1precise
Hadoop 1.1.2
1. Install additional packages:
libcephfs-java
libcephfs-jni
2. Download http://ceph.
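A hedged reconstruction of the remaining steps, pieced together from the fragments quoted later in this thread (the exact numbering, the native-library directory and the core-site.xml values are assumptions, not Ilja's exact text):
2. Put http://ceph.com/download/hadoop-cephfs.jar (the jar mentioned earlier in the thread) on the Hadoop classpath, e.g. in $HADOOP_HOME/lib.
3. Point Hadoop at CephFS in core-site.xml:
     <property>
       <name>fs.ceph.impl</name>
       <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
     </property>
     <property>
       <name>fs.default.name</name>
       <value>ceph://MON-HOST:6789/</value>
     </property>
4. Symlink the JNI library where Hadoop can load it:
     cd $HADOOP_HOME/lib/native/Linux-amd64-64
     ln -s /usr/lib/jni/libcephfs_jni.so .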
Thanks a lot for this Ilja! I'm going to update the documentation again soon,
so this is very helpful.
On Jun 5, 2013, at 12:21 PM, Ilja Maslov wrote:
> export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
Was there actually a problem if you didn't set this?
> 4. Symlink JNI library
> cd $HADOOP_
On Jun 5, 2013, at 12:51 PM, Noah Watkins wrote:
>> I have tried adding -Djava.library.path=/usr/lib/jni to HADOOP_OPTS in
>> hadoop-env.sh and exporting LD_LIBRARY_PATH=/usr/lib/jni in hadoop-env.sh,
>> but it didn't work for me. I'd love to hear about a more elegant method of
>> making Had
> > export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
>
> Was there actually a problem if you didn't set this?
>
I have commented this out and restarted mapred and everything still worked ok.
It is probably only needed for the HDFS processes.
> > 4. Symlink JNI library
> > cd $HADOOP_HOME/lib/n
Alvaro,
I ran into this too. Clusters running with ceph-deploy now use upstart.
> start ceph
> stop ceph
Should work. I'm testing and will update the docs shortly.
On Wed, Jun 5, 2013 at 7:41 AM, Alvaro Izquierdo Jimeno
wrote:
> Hi all,
>
>
>
> I already installed Ceph Bobtail in centos machi
Ok. It's more like this:
sudo initctl list | grep ceph
This lists all your ceph scripts and their state.
To start the cluster:
sudo start ceph-all
To stop the cluster:
sudo stop ceph-all
You can also do the same with all OSDs, MDSs, etc. I'll write it up
and check it in.
On Wed, Jun 5, 201
You can also start/stop an individual daemon this way:
sudo stop ceph-osd id=0
sudo start ceph-osd id=0
On Wed, Jun 5, 2013 at 4:33 PM, John Wilkins wrote:
> Ok. It's more like this:
>
> sudo initctl list | grep ceph
>
> This lists all your ceph scripts and their state.
>
> To start the cluste
System info:
Ubuntu Server 13.04, AMD64.
QEMU 1.4.0
Ceph 0.61.2
I got a core dump when executing:
root@ceph-node1:~# qemu-img info -f rbd rbd:vm_disks/box1_disk1
Segmentation fault (core dumped)
Call dump info:
Core was generated by `qemu-img info -f rbd rbd:vm_disks/box1_disk1'.
Program ter
Hi,
I got a core dump when executing:
root@ceph-node1:~# qemu-img info -f rbd rbd:vm_disks/box1_disk1
Try leaving out "-f rbd" from the command - I have seen that make a
difference before.
--
Jens Kristian Søgaard, Mermaid Consulting ApS,
j...@mermaidconsulting.dk,
http://www.mermaidcons
Yes, it made a difference:
root@ceph-node1:~# qemu-img info rbd:vm_disks/box1_disk1
image: rbd:vm_disks/box1_disk1
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: unavailable
I'm not sure if qemu-img guessed the format correctly.
Does the above output seem normal?
Thanks!
Hi,
root@ceph-node1:~# qemu-img info rbd:vm_disks/box1_disk1
image: rbd:vm_disks/box1_disk1
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: unavailable
I'm not sure if qemu-img guessed the format correctly.
Does the above output seem normal?
Yes, completely normal!
--
Jens
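If in doubt, the image can also be inspected from the Ceph side; a one-line sketch using the pool/image name from the thread:
rbd info vm_disks/box1_disk1    # shows size, order and the RBD image format
Note that qemu-img's "file format: raw" refers to how the guest data is stored (RBD images hold raw data), while rbd info reports the RBD-internal image format.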
Hi John,
Thanks for your answer!
But maybe I haven't installed Cuttlefish correctly on my hosts.
sudo initctl list | grep ceph
-> none
No ceph-all found anywhere.
The steps I followed to install Cuttlefish:
sudo rpm --import
'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.a
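The upstart commands above are Ubuntu-specific; on RHEL/CentOS the packages ship a sysvinit script instead, so the equivalent is along these lines (a sketch, assuming the daemons are defined in /etc/ceph/ceph.conf):
sudo service ceph start           # start all local daemons from ceph.conf
sudo service ceph start mon.IP1   # or a single daemon
sudo service ceph status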
Good day!
Thank you, but it's not clear to me what the bottleneck is here:
- hardware node: load average, disk I/O
- underlying file system problem on the OSD, or a bad disk
- ceph journal problem
The Ceph OSD partition is part of a block device which has practically no load:
Device:tpsM