On Fri, 25 Jul 2014 13:31:34 +1000 Matt Harlum wrote:
> Hi,
>
> I’ve purchased a couple of 45Drives enclosures and would like to figure
> out the best way to configure these for ceph?
>
That's the second time within a month that somebody has mentioned these
45-drive chassis.
Would you mind elaborating wh
Hi,
I am using ceph-deploy to deploy my cluster. Whenever I try to add more
than one OSD to a node, every OSD except the first gets a weight of 0
and ends up in the down and out state.
So, if I have three nodes in my cluster, I can successfully add one OSD
to each of the three nodes,
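In case it helps while you wait for a proper ceph-deploy answer: a minimal sketch of how one might inspect and fix such an OSD by hand, assuming the affected one is osd.1 (IDs and the weight are placeholders; the CRUSH weight is normally the disk size in TB):
    ceph osd tree                        # shows CRUSH weight and up/down, in/out state per OSD
    ceph osd crush reweight osd.1 1.0    # give the OSD a non-zero CRUSH weight
    ceph osd in 1                        # mark it back in
    service ceph start osd.1             # on the OSD node, if the daemon is not running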
Hi all,
Please suggest some open source monitoring tools that can monitor radosgw
instances for incoming user request traffic (uploads and downloads of the
stored data) and also monitor other radosgw features.
Regards
Pragya Jain
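I can't vouch for a specific dashboard, but for the raw per-user upload/download numbers radosgw has a built-in usage log that most monitoring tools can poll. A rough sketch, assuming the usage log is enabled for your gateway and a hypothetical user johndoe:
    # in ceph.conf, under the [client.radosgw.<instance>] section:
    #     rgw enable usage log = true
    radosgw-admin usage show --uid=johndoe --show-log-entries=false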
hi all,
could somebody please help me understand the metrics gathered by a StatsD server
when monitoring a Ceph storage cluster with a radosgw client for object storage?
Regards
Pragya Jain
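Not an authoritative answer, but most StatsD/Graphite setups I've seen simply poll the daemons' perf counters over the admin socket and forward them; the counter names you see there are essentially the metrics available. A sketch, assuming a gateway whose admin socket sits at the path below (adjust to your setup):
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf schema
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump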
Hi all,
One quick question about image formats 1 & 2:
I've got an img.qcow2 and I want to convert it.
The first solution is: qemu-img convert -f qcow2 -O rbd img.qcow2
rbd:/mypool/myimage
As far as I understand, it will be converted into format 1, which is the default
one, so I won't be able to clo
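If the goal is a format 2 image (e.g. so it can be cloned), one workaround is to convert to a plain raw file first and then import it with the rbd tool, which lets you choose the format. A sketch, reusing the pool/image names from above:
    qemu-img convert -f qcow2 -O raw img.qcow2 img.raw
    rbd import --image-format 2 img.raw mypool/myimage
    rbd info mypool/myimage        # should report format: 2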
Hi,
I am deploying the firefly version on CentOS 6.4. I am following the quick
installation instructions available at ceph.com.
I have a customized kernel in CentOS 6.4, version 2.6.32.
I am able to create a basic Ceph storage cluster in the active+clean state.
Now I am trying to create a block de
Hi Pratik
Ceph RBD support was added to the mainline Linux kernel starting with 2.6.34. The
following error shows that the RBD module is not present in your kernel.
It's advisable to run the latest stable kernel release if you need RBD to work.
> ERROR: modinfo: could not find module rbd
> FATAL: Modu
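A quick way to confirm this on the node itself, as a minimal sketch (note that using RBD through qemu/librbd does not need the kernel module, only rbd map does):
    uname -r       # 2.6.32 here, i.e. older than the 2.6.34 where rbd.ko first appeared
    modinfo rbd    # fails if the module is not available for the running kernel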
On 07/25/2014 02:54 AM, Christian Balzer wrote:
On Fri, 25 Jul 2014 13:31:34 +1000 Matt Harlum wrote:
Hi,
I’ve purchased a couple of 45Drives enclosures and would like to figure
out the best way to configure these for ceph?
That's the second time within a month somebody mentions these 45 dri
I finally reconverted my only "format 1" image into format 2, so now everything
is in format 2, but I'm still confused: my VM disks are still read-only (I've
tried different images, CentOS 6.5 with kernel 2.6.32 and Ubuntu with 3.13). Do
I have to modprobe rbd on the host?
From: ceph-users [mail
Hello Christian.
Our current setup has 4 OSDs per node. When a drive fails, the
cluster is almost unusable for data entry. I want to change our setup
so that this never happens under any circumstances. We used DRBD for 8 years,
and our main concern is high availability. 1200bps Modem spe
When you run qemu-img you are essentially converting the qcow2 image to
the appropriate raw format during the conversion and import process into the
cluster. When you use rbd import you are not doing a conversion, so the
image is imported AS IS (you can validate this by looking at the
size of
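For reference, a sketch of that size comparison, assuming the pool/image names used earlier in the thread:
    qemu-img info img.qcow2      # virtual size vs. on-disk size of the qcow2
    rbd info mypool/myimage      # size of the imported RBD image
    rbd diff mypool/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'   # data actually written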
root@cubie01:~# aptitude search perftools
p   google-perftools - command line utilities to analyze the performance of C++ programs
root@cubie01:~# aptitude install google-perftools
The following NEW packages will be installed:
  google-perftools{b}
The foll
Dear Deven,
Another solution is to compile leveldb and ceph without tcmalloc support :)
Ceph and leveldb work just fine without gperftools, and I have yet to do
benchmarks on how much performance benefit you get from
google-perftools' tcmalloc replacement for glibc's malloc.
Best regards
Owen
O
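In case it saves someone a lookup, a sketch of a tcmalloc-less autotools build (flag name assumed from a recent checkout; older branches may differ):
    ./autogen.sh
    ./configure --without-tcmalloc
    make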
I would like ceph-mon to bind to 0.0.0.0 since it is running on a machine
that gets its IP from a DHCP server and the IP changes on every boot.
Is there a way to specify this in the ceph.conf file?
Thanks
Akshay
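I don't think binding to 0.0.0.0 will get you far: the monitor's address is recorded in the monmap, and clients and OSDs need a concrete, stable IP to reach it, so a DHCP-assigned address really wants a reservation or a static entry. For completeness, the usual way an explicit monitor address is given in ceph.conf looks like this (mon name and address are placeholders):
    [mon.a]
        host = mon-host-1
        mon addr = 192.168.1.10:6789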
On Fri, 25 Jul 2014 07:24:26 -0500 Mark Nelson wrote:
> On 07/25/2014 02:54 AM, Christian Balzer wrote:
> > On Fri, 25 Jul 2014 13:31:34 +1000 Matt Harlum wrote:
> >
> >> Hi,
> >>
> >> I’ve purchased a couple of 45Drives enclosures and would like to
> >> figure out the best way to configure these
On 07/25/2014 12:04 PM, Christian Balzer wrote:
On Fri, 25 Jul 2014 07:24:26 -0500 Mark Nelson wrote:
On 07/25/2014 02:54 AM, Christian Balzer wrote:
On Fri, 25 Jul 2014 13:31:34 +1000 Matt Harlum wrote:
Hi,
I’ve purchased a couple of 45Drives enclosures and would like to
figure out the bes
Hi again,
I've had a look at the qemu-kvm SRPM and RBD is intentionally disabled
in the RHEL 7.0 release packages. There's a block in the .spec file that
reads:
%if %{rhev}
--enable-live-block-ops \
--enable-ceph-support \
%else
--disable-live-block-ops \
--dis
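A quick check for whether the qemu-img binary you have was built with the rbd driver, as a minimal sketch:
    qemu-img --help | grep -i 'supported formats'    # rbd should appear in the list if support is compiled in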
On Fri, Jul 25, 2014 at 12:04 PM, Christian Balzer wrote:
>
> Well, if I read that link and the actual manual correctly, the most one
> can hope to get from this is 48Gb/s (2 mini-SAS with 4 lanes each) which is
> short of what 45 regular HDDs can dish out (or take in).
> And that's ignoring the
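For anyone following along, the arithmetic behind those numbers (using ~150 MB/s sequential per drive as a rough assumption):
    2 mini-SAS connectors x 4 lanes x 6 Gb/s (SAS2) = 48 Gb/s of uplink
    45 drives x ~150 MB/s ~= 6750 MB/s ~= 54 Gb/s of raw disk bandwidth
so the drives can indeed oversubscribe the uplink even before protocol overhead is counted.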
I've a question regarding advice from these threads:
https://mail.google.com/mail/u/0/#label/ceph/1476b93097673ad7?compose=1476ec7fef10fd01
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11011.html
Our current setup has 4 OSDs per node. When a drive fails, the
cluster is almos
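One thing that has helped similar setups in the meantime is throttling recovery/backfill so client I/O keeps priority; a sketch with runtime injection (values are illustrative, persist them under [osd] in ceph.conf if they help):
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'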
I'm trying to build DEB packages for my armhf devices, but my most recent
efforts are dying. Any suggestions would be MOST welcome!
make[5]: Entering directory `/home/cubie/Source/ceph/src/java'
jar cf libcephfs.jar -C java com/ceph/fs/CephMount.class -C java
com/ceph/fs/CephStat.class -C java co
Make sure you are initializing the submodules. The autogen.sh script
should probably notify users when these are missing and/or initialize
them automatically.
git submodule init
git submodule update
or alternatively, git clone --recursive ...
On Fri, Jul 25, 2014 at 11:48 AM, Deven Phillips
w
Oh, it looks like autogen.sh is smart about that now. If you're using the
latest master, my suggestion may not be the solution.
On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins wrote:
> Make sure you are initializing the submodules. The autogen.sh script
> should probably notify users when these are
Noah Watkins writes:
> Oh, it looks like autogen.sh is smart about that now. If you're using the
> latest master, my suggestion may not be the solution.
>
> On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins
> wrote:
>> Make sure you are initializing the submodules. The autogen.sh script
>> should pr
I'm using the v0.82 tagged version from Git
On Fri, Jul 25, 2014 at 2:54 PM, Noah Watkins
wrote:
> Oh, it looks like autogen.sh is smart about that now. If you're using the
> latest master, my suggestion may not be the solution.
>
> On Fri, Jul 25, 2014 at 11:51 AM, Noah Watkins
> wrote:
> > Make sur
Noah,
That DOES appear to have been at least part of the problem... The
src/lib3/ directory was empty and when I tried to use submodules to update
it I got errors about non-empty directories... Trying to fix that now..
Thanks!
Deven
On Fri, Jul 25, 2014 at 2:51 PM, Noah Watkins
wrote:
>
You can rm -rf those submodule directories and then re-run submodule
init/update to put the tree in a good state without re-cloning.
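Concretely, something like this (using the directory mentioned above; add --recursive if nested submodules are involved):
    rm -rf src/lib3/
    git submodule update --init --recursive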
On Fri, Jul 25, 2014 at 12:10 PM, Deven Phillips
wrote:
> Noah,
>
> That DOES appear to have been at least part of the problem... The
> src/lib3/ directory was
On Fri, 25 Jul 2014 13:14:59 -0500 Schweiss, Chip wrote:
> On Fri, Jul 25, 2014 at 12:04 PM, Christian Balzer wrote:
>
> >
> > Well, if I read that link and the actual manual correctly, the most one
> > can hope to get from this is 48Gb/s (2 mini-SAS with 4 lanes each)
> > which is short of what
Hello,
Actually, replying in the other thread was fine by me; it was, after all,
relevant to it in a sense.
And you mentioned something important there which you didn't mention
below: that you're coming from DRBD with a lot of experience there.
So do I, and Ceph/RBD simply isn't (and probably never wil
Hello,
I'm using btrfs for OSDs and want to know if it still helps to have the
journal on a faster drive. From what I've read, I'm under the impression
that with btrfs, the OSD journal doesn't do much work anymore.
Best regards,
Cristian Falcas
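For what it's worth: even on btrfs the journal is still written for every OSD write; btrfs just allows it to happen in parallel with the data write instead of strictly before it, so a faster journal device can still shave write latency. For reference, a sketch of pointing a journal at a separate device in ceph.conf, with placeholder paths:
    [osd]
        osd journal size = 10240    # MB
    [osd.0]
        osd journal = /dev/sdg1     # e.g. an SSD partition dedicated to osd.0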