Hi all!
I'm trying to use the rados_exec method, which allows librados users to call
custom methods!
My Ceph version is 0.62. It works for the class cls_rbd, since that class is
already built and loaded into the Ceph class directory (/usr/local/lib/rados-class),
but I do not know how to build and load a custom
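To make the question concrete, the kind of call I have in mind is sketched below; "myclass"/"mymethod" are hypothetical placeholders for a class that is actually built and loaded on the OSDs, and io is an ioctx already opened on a pool:

#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

/* Sketch: invoke an object class method from a librados client.
 * "myclass"/"mymethod" are placeholders, not real in-tree names. */
int call_custom_method(rados_ioctx_t io)
{
    char in[] = "hello";
    char out[128];
    int ret = rados_exec(io, "foo_object", "myclass", "mymethod",
                         in, strlen(in), out, sizeof(out));
    if (ret < 0)
        /* -EOPNOTSUPP usually means the class or method is not loaded */
        fprintf(stderr, "rados_exec failed: %d\n", ret);
    return ret;
}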
On Wed, Nov 13, 2013 at 6:43 AM, Eric Eastman wrote:
> I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after installing
> ZFS
> from the ppa:zfs-native/stable repository. The ZFS version is v0.6.2-1
>
> I do have a few questions and comments on Ceph using ZFS backed OSDs
>
> As ceph-dep
On Tue, Nov 12, 2013 at 7:28 PM, Joao Eduardo Luis wrote:
>
> This looks an awful lot like you started another instance of an OSD with
> the same ID while another was running. I'll walk you through the log lines
> that point me towards this conclusion. Would still be weird if the admin
> sockets
Hi Michael,
you are right, my system is installed on disk sdc, and sda is the journal
disk to be shared.
This is the output of partx -v /dev/sda; I didn't see anything unusual:
device /dev/sda: start 0 size 117231408
gpt: 2 slices
# 1: 2048- 2099199 ( 2097152 sectors, 1073 MB)
# 2: 209920
On 11/12/2013 03:07 PM, Berant Lemmenes wrote:
I just restarted an OSD node and none of the admin sockets showed up on
reboot (though it joined the cluster fine and all OSDs are happy). The
node is an Ubuntu 12.04.3 system originally deployed via ceph-deploy on
dumpling.
The only thing that stands
Since the disk is failing and you have 2 other copies, I would take osd.0 down.
This means that Ceph will not attempt to read the bad disk, either for clients
or to make another copy of the data:
* Not sure about the syntax of this for the version of ceph you are running
ceph osd down 0
Mar
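A minimal sketch of the full sequence I mean, assuming osd.0 and an upstart-based host (exact syntax varies by release):

# stop the daemon so nothing tries to read the failing disk
stop ceph-osd id=0        # or: service ceph stop osd.0
# mark it down/out so the cluster re-creates the third copy elsewhere
ceph osd down 0
ceph osd out 0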
Still working on it, watch this space :)
On Tue, Nov 12, 2013 at 3:44 PM, Dinu Vlad wrote:
> Out of curiosity - can you live-migrate instances with this setup?
>
>
>
> On Nov 12, 2013, at 10:38 PM, Dmitry Borodaenko
> wrote:
>
>> And to answer my own question, I was missing a meaningful error
>
Out of curiosity - can you live-migrate instances with this setup?
On Nov 12, 2013, at 10:38 PM, Dmitry Borodaenko
wrote:
> And to answer my own question, I was missing a meaningful error
> message: what the ObjectNotFound exception I got from librados didn't
> tell me was that I didn't have
Hi,
we're experiencing the same problem. We have a cluster with 6 machines and 60
OSDs (Supermicro 2U, 24 disks max, LSI controller). We have three R300s as
monitor nodes and two more R300s as iSCSI targets. We are using targetcli, too.
Needless to say, we have a cluster, public and iscsi-net
On 11/12/2013 04:43 PM, Eric Eastman wrote:
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after
installing ZFS
from the ppa:zfs-native/stable repository. The ZFS version is v0.6.2-1
I do have a few questions and comments on Ceph using ZFS backed OSDs
As ceph-deploy does not show su
On Tue, Nov 12, 2013 at 3:43 PM, Eric Eastman wrote:
> I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after installing
> ZFS
> from the ppa:zfs-native/stable repository. The ZFS version is v0.6.2-1
>
> I do have a few questions and comments on Ceph using ZFS backed OSDs
>
> As ceph-dep
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after
installing ZFS
from the ppa:zfs-native/stable repository. The ZFS version is v0.6.2-1
I do have a few questions and comments on Ceph using ZFS backed OSDs
As ceph-deploy does not show support for ZFS, I used the instructions
at:
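Roughly, a manual preparation along those lines is the sketch below; the pool name, device, and the xattr=sa setting are placeholders/assumptions rather than something the instructions necessarily call for:

# create a zpool and mount it where the OSD expects its data
zpool create ceph-osd0 /dev/sdb
zfs set xattr=sa ceph-osd0
zfs set mountpoint=/var/lib/ceph/osd/ceph-0 ceph-osd0
# then prepare and activate the OSD against that directory
ceph-disk prepare /var/lib/ceph/osd/ceph-0
ceph-disk activate /var/lib/ceph/osd/ceph-0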
While updating my cluster to use a 2K block size for XFS, I've run
into a couple OSDs failing to start because of corrupted journals:
=== osd.1 ===
-10> 2013-11-12 13:40:35.388177 7f030458a7a0 1
filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
-9> 2013-11-12 13:40:35.388194 7f030458a
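For concreteness, the per-OSD reformat step being described amounts to something like this, with the device name as a placeholder:

# recreate the OSD's filesystem with a 2 KB block size (wipes that OSD's data)
mkfs.xfs -f -b size=2048 /dev/sdX1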
I think we removed the experimental warning in cuttlefish. It
probably wouldn't hurt to do it in bobtail particularly if you test it
extensively on a test cluster first. However, we didn't do extensive
testing on it until cuttlefish. I would upgrade to cuttlefish
(actually, dumpling or emperor,
We probably do need to go over it again and account for PG splitting.
On Fri, Nov 8, 2013 at 9:26 AM, Gregory Farnum wrote:
> After you increase the number of PGs, *and* increase the "pgp_num" to do the
> rebalancing (this is all described in the docs; do a search), data will move
> around and th
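For concreteness, the two-step change referred to here looks like the following; the pool name and target count are placeholders:

ceph osd pool set <pool> pg_num 512     # create the new PGs (splitting)
ceph osd pool set <pool> pgp_num 512    # then start rebalancing data onto them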
And to answer my own question, I was missing a meaningful error
message: what the ObjectNotFound exception I got from librados didn't
tell me was that I didn't have the images keyring file in /etc/ceph/
on my compute node. After 'ceph auth get-or-create client.images >
/etc/ceph/ceph.client.images.
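Spelled out, the fix amounts to something like the sketch below; the caps and keyring filename follow the usual RBD client examples and are assumptions, not necessarily exactly what I used:

# on a node with admin credentials: create/export the key for client.images
ceph auth get-or-create client.images \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
    > /etc/ceph/ceph.client.images.keyring
# then copy that keyring into /etc/ceph/ on every compute node
scp /etc/ceph/ceph.client.images.keyring compute1:/etc/ceph/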
I can get ephemeral storage for Nova to work with the RBD backend, but I
don't understand why it only works with the admin cephx user. With a
different user, starting a VM fails, even if I set its caps to 'allow
*'.
Here's what I have in nova.conf:
libvirt_images_type=rbd
libvirt_images_rbd_pool=images
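The RBD-related nova.conf options usually involved around this release are roughly the following; the user name and secret UUID are placeholders, and the exact option names may differ by OpenStack version:

libvirt_images_type=rbd
libvirt_images_rbd_pool=images
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=images
rbd_secret_uuid=<uuid of the libvirt secret holding the client.images key>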
Sorry, just spotted you're mounting on sdc. Can you chuck out a partx -v
/dev/sda to see if there's anything odd about the data currently on there?
-Michael
On 12/11/2013 18:22, Michael wrote:
As long as there's room on the SSD for the partitioner it'll just use
the conf value for osd journal
As long as there's room on the SSD for the partitioner it'll just use
the conf value for osd journal size to section it up as it adds OSDs (I
generally use the "ceph-deploy osd create srv:data:journal" format, e.g.
srv-12:/dev/sdb:/dev/sde, when adding disks).
Does it being /dev/sda mean you're p
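In other words, with something like this in ceph.conf (the size is just an example), each new OSD carves its own journal partition of that size off the shared SSD:

[osd]
osd journal size = 10240    ; per-OSD journal size in MB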
I didn't think you could specify the journal in this manner (just pointing
multiple OSDs on the same host all to journal /dev/sda). Don't you either need
to partition the SSD and point each OSD to a separate partition, or format and
mount the SSD so that each OSD uses a unique file on the mount
I just restarted an OSD node and none of the admin sockets showed up on
reboot (though it joined the cluster fine and all OSDs are happy). The node
is an Ubuntu 12.04.3 system originally deployed via ceph-deploy on dumpling.
The only thing that stands out to me is the failure on lock_fsid and the
er
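The check itself is essentially this; osd.0 and the default socket path are assumptions:

ls /var/run/ceph/                                   # expect one ceph-osd.N.asok per running OSD
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version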
The cls_crypto.cc file in src/ hasn't been included in the Ceph
compilation for a long time. Take a look at src/cls/* for a list of
modules that are compiled. In particular, there is a "Hello World"
example that is nice. These should work for you out-of-the-box.
You could also try to compile cls_c
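For orientation, the skeleton of such a module looks roughly like the sketch below; "hello"/"say_hello" stand in for whatever the in-tree example actually names things:

#include "objclass/objclass.h"

CLS_VER(1,0)
CLS_NAME(hello)

cls_handle_t h_class;
cls_method_handle_t h_say_hello;

// Runs inside the OSD; input and output travel as bufferlists.
static int say_hello(cls_method_context_t hctx, bufferlist *in, bufferlist *out)
{
  out->append("Hello, world!");
  return 0;
}

// Called by the OSD when it loads the .so from its class directory.
void __cls_init()
{
  cls_register("hello", &h_class);
  cls_register_cxx_method(h_class, "say_hello",
                          CLS_METHOD_RD, say_hello, &h_say_hello);
}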
Hello,
I have 3 nodes, with 3 OSDs in each node. I'm using the .rgw.buckets pool with 3
replicas. One of my HDDs (osd.0) has developed bad sectors; when I try to read an
object from the OSD directly, I get an Input/output error. dmesg:
[1214525.670065] mpt2sas0: log_info(0x3108): originator(PL),
code(0x08), sub_c
Hi guys,
I use ceph-deploy to manage my cluster, but it fails while creating the
OSDs; the process seems to hang while creating the first OSD. By the way,
SELinux is disabled, and my ceph-disk is patched according to this page:
http://www.spinics.net/lists/ceph-users/msg03258.html
can you guys give m
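For reference, the invocation that hangs is of this general form; the hostname and devices here are placeholders, not my actual ones:

ceph-deploy disk zap node1:/dev/sdb
ceph-deploy osd create node1:/dev/sdb:/dev/sdc1    # data disk, optional journal device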
Hi all!
long time no see!
I want to use the function rados_exec, and I found the class
cls_crypto.cc in the source code of Ceph,
so I ran the function like this:
rados_exec(ioctx, "foo_object", "crypto", "md5", buf, sizeof(buf), buf2,
sizeof(buf2))
and the function r
On Nov 12, 2013 2:38 AM, "Berant Lemmenes" wrote:
>
> I noticed the same behavior on my dumpling cluster. They wouldn't show up
after boot, but after a service restart they were there.
>
> I haven't tested a node reboot since I upgraded to emperor today. I'll
give it a shot tomorrow.
>
> Thanks,
>