> On 30 Sep 2014, at 16:38, Mark Nelson wrote:
>
> On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
>> Hi Emmanuel,
>> This is interesting, because we’ve had sales guys telling us that those
>> Samsung drives are definitely the best for a Ceph journal O_o !
>
> Our sales guys or Samsung sales gu
Dear all,
Anyone using CloudStack with Ceph RBD as primary storage? I am using
CloudStack 4.2.0 with KVM hypervisors and the latest stable release of Ceph
Dumpling.
Based on what I see, when the Ceph cluster is in a degraded state (not
active+clean), for example because one node is down and in the recovering
pr
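A commonly suggested way to make recovery hurt client IO less is to throttle
backfill/recovery while the cluster heals, for example (values are only a
sketch, raise them again once the cluster is healthy):
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'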
Hey,
Thanks all for your replies. We finished the migration to XFS yesterday morning
and we can see that the load average on our VMs is back to normal.
Our cluster was just a test before scaling up with bigger nodes. We don't know
yet how to split the SSDs between journals (as was recommended) an
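One common layout, sketched here with example device names, is to let
ceph-disk carve one journal partition per OSD out of the shared SSD:
ceph-disk prepare /dev/sdb /dev/sdf
ceph-disk prepare /dev/sdc /dev/sdf
Each call creates an additional journal partition on /dev/sdf for the new OSD.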
On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote:
> On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
> > Hi Emmanuel,
> > This is interesting, because we've had sales guys telling us that those
> > Samsung drives are definitely the best for a Ceph journal O_o !
>
> Our sales guys or Sam
On Wed, 1 Oct 2014 09:28:12 +0200 Kasper Dieter wrote:
> On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote:
> > On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
> > > Hi Emmanuel,
> > > This is interesting, because we've had sales guys telling us that
> > > those Samsung drives are defini
Hi,
I have a question about mapping an RBD image using a client keyring file. I
created the keyring as below:
sudo ceph-authtool -C -n client.foo --gen-key /etc/ceph/keyring
sudo chmod +r /etc/ceph/keyring
sudo ceph-authtool -n client.foo --cap mds 'allow' --cap osd 'allow rw
pool=pool1' --cap mon 'allow r' /etc/ceph/
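For what it's worth, a minimal sketch of the remaining steps (pool and image
names are examples): the key also has to be registered with the cluster
before the client can use it, e.g.
sudo ceph auth add client.foo -i /etc/ceph/keyring
sudo rbd map pool1/image1 --id foo --keyring /etc/ceph/keyring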
Hi,
We settled on Samsung 840 Pro 240GB drives 1½ years ago and we've been happy
so far. We've over-provisioned them a lot (left 120GB unpartitioned).
We have 16x 240GB and 32x 500GB - we've lost 1x 500GB so far.
smartctl states something like
Wear = 092%, Hours = 12883, Datawritten = 15321.83 TB
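For anyone wanting to pull the same counters, something like this works
(device name is an example; the attribute names vary a bit between models):
sudo smartctl -a /dev/sda | egrep -i 'wear|written|power_on'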
On Wed, Oct 01, 2014 at 01:31:38PM +0200, Martin B Nielsen wrote:
>Hi,
>
>We settled on Samsung pro 840 240GB drives 1½ year ago and we've been
>happy so far. We've over-provisioned them a lot (left 120GB
>unpartitioned).
>
>We have 16x 240GB and 32x 500GB - we've lost 1x 500G
Hello Christian,
Thank you for your detailed answer!
I have another pre-production environment with 4 Ceph servers and 4 SSD disks
per Ceph server (each Ceph OSD on a separate SSD disk).
Should I move the journals to other disks, or is that not required in my
case?
[root@ceph-node ~]# mount |
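A quick way to see where each journal currently lives (paths assume the
default layout):
ls -l /var/lib/ceph/osd/ceph-*/journal
A plain file means the journal sits on the same SSD as the data; a symlink to
another device means it has already been moved.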
Timur,
As far as I know, the latest master has a number of improvements for ssd disks.
If you check the mailing list discussion from a couple of weeks back, you can
see that the latest stable firefly is not that well optimised for ssd drives
and IO is limited. However changes are being made to
Timur, read this thread:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html
Timur, read this thread.
2014-10-01 16:24 GMT+04:00 Andrei Mikhailovsky :
> Timur,
>
> As far as I know, the latest master has a number of improvements for ssd
> disks. If you check the mailing list d
On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han
wrote:
> Hi,
>
> Unfortunately this is expected.
> If you take a snapshot you should not expect a clone but a RBD snapshot.
Unfortunate that it doesn't work, but fortunate for me I don't need to
figure out what I'm doing wrong :)
> Please see this BP
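For reference, getting a real clone out of a snapshot by hand looks roughly
like this (names are examples; the image has to be format 2):
rbd snap create pool1/image1@snap1
rbd snap protect pool1/image1@snap1
rbd clone pool1/image1@snap1 pool1/image1-clone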
On 01 Oct 2014, at 15:26, Jonathan Proulx wrote:
> On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han
> wrote:
>> Hi,
>>
>> Unfortunately this is expected.
>> If you take a snapshot you should not expect a clone but a RBD snapshot.
>
> Unfortunate that it doesn't work, but fortunate for me I don't
On Wed, Oct 1, 2014 at 5:24 AM, Andrei Mikhailovsky wrote:
> Timur,
>
> As far as I know, the latest master has a number of improvements for ssd
> disks. If you check the mailing list discussion from a couple of weeks back,
> you can see that the latest stable firefly is not that well optimised fo
Greg, are they going to be a part of the next stable release?
Cheers
- Original Message -
> From: "Gregory Farnum"
> To: "Andrei Mikhailovsky"
> Cc: "Timur Nurlygayanov" , "ceph-users"
>
> Sent: Wednesday, 1 October, 2014 3:04:51 PM
> Subject: Re: [ceph-users] Why performance of benc
Hello,
On Wed, 1 Oct 2014 13:24:43 +0100 (BST) Andrei Mikhailovsky wrote:
> Timur,
>
> As far as I know, the latest master has a number of improvements for ssd
> disks. If you check the mailing list discussion from a couple of weeks
> back, you can see that the latest stable firefly is not tha
Hello,
On Wed, 1 Oct 2014 13:31:38 +0200 Martin B Nielsen wrote:
> Hi,
>
> We settled on Samsung pro 840 240GB drives 1½ year ago and we've been
> happy so far. We've over-provisioned them a lot (left 120GB
> unpartitioned).
>
> We have 16x 240GB and 32x 500GB - we've lost 1x 500GB so far.
>
Hello everyone,
I plan to use CephFS in production with Giant release, knowing it's not
perfectly ready at the moment and using a hot backup.
That said, I'm currently testing CephFS on version 0.80.5.
I have a 7-server cluster (3 mon, 3 osd, 1 mds), and 30 osd (disks).
My mds has been working f
Thomas,
Sounds like you're looking for "ceph mds remove_data_pool". In
general you would do that *before* removing the pool itself (in more
recent versions we enforce that).
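So the order would be something like (pool id and name here are just
examples):
ceph mds remove_data_pool 3
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it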
John
On Wed, Oct 1, 2014 at 4:58 PM, Thomas Lemarchand
wrote:
> Hello everyone,
>
> I plan to use CephFS in production w
Thank you very much, it's what I needed.
root@a-mon:~# ceph mds remove_data_pool 3
removed data pool 3 from mdsmap
It worked, and mds is ok.
--
Thomas Lemarchand
Cloud Solutions SAS - Head of Information Systems
On Wed., 2014-10-01 at 17:02 +0100, John Spray wrote:
> Thomas,
>
>
All the stuff I'm aware of is part of the testing we're doing for
Giant. There is probably ongoing work in the pipeline, but the fast
dispatch, sharded work queues, and sharded internal locking structures
that Somnath has discussed all made it.
-Greg
Software Engineer #42 @ http://inktank.com | htt
On 10/01/2014 11:18 AM, Gregory Farnum wrote:
All the stuff I'm aware of is part of the testing we're doing for
Giant. There is probably ongoing work in the pipeline, but the fast
dispatch, sharded work queues, and sharded internal locking structures
that Somnath has discussed all made it.
I se
On Wed, Oct 1, 2014 at 9:21 AM, Mark Nelson wrote:
> On 10/01/2014 11:18 AM, Gregory Farnum wrote:
>>
>> All the stuff I'm aware of is part of the testing we're doing for
>> Giant. There is probably ongoing work in the pipeline, but the fast
>> dispatch, sharded work queues, and sharded internal l
Dear all,
I need a few tips about the best Ceph solution for the drive controller.
I'm getting confused about IT mode, RAID and JBOD.
I have read many posts recommending not to go for RAID but to use a JBOD
configuration instead.
I have 2 storage alternatives in mind right now:
*SuperStorage Server 2027R-E1CR24
Hi,
I am trying to run hadoop with ceph as the backend. I installed the
libcephfs-jni and libcephfs-java to get the libcephfs.jar and the related .so
libraries. Also I compiled the cephfs-hadoop-1.0-SNAPSHOT.jar from
https://github.com/GregBowyer/cephfs-hadoop since this was the only jar which
Hello,
On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:
> Dear all,
>
> i need few tips about Ceph best solution for driver controller.
> I'm getting confused about IT mode, RAID and JBoD.
> I read many posts about don't go for RAID but use instead a JBoD
> configuration.
>
> I
Hello Christian,
On 01/10/2014 19:20, Christian Balzer wrote:
Hello,
On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:
Dear all,
i need few tips about Ceph best solution for driver controller.
I'm getting confused about IT mode, RAID and JBoD.
I read many posts about don't
Hey Yann,
The best way to know this would probably be to query Sage and
#ceph-devel, either that or just hit up the ceph-devel mailing list.
I'd be happy to move anything forward so you can update it with work
you want to do if needed. Let me know and I'll jump on it. Thanks!
Best Regards,
Pa
[adding ceph-devel]
> On Tue, Sep 30, 2014 at 5:30 PM, Yann Dupont wrote:
> >
> > On 30/09/2014 22:55, Patrick McGarry wrote:
> >>
> >> Hey cephers,
> >>
> >> The schedule and call for blueprints is now up for our next CDS as we
> >> aim for the Hammer release:
> >>
> >> http://ceph.com/commun
On Wed, Oct 1, 2014 at 4:32 PM, Sage Weil wrote:
> [adding ceph-devel]
>
>
>
>> > Is there a way to know if those blueprints are implemented, or in active
>> > development? In case of postponed blueprints, is there a way to promote
>> > them again, to get consideration for Hammer?
>
> Hmm, Pat
On 01/10/2014 22:32, Sage Weil wrote:
https://wiki.ceph.com/Planning/Blueprints/Giant/librados%3A_support_parallel_reads
(I made some comments yesterday)
Not implemented
OK,
and even an older one (somewhat related):
https://wiki.ceph.com/Planning/Blueprints/Emperor/librados%2F%2Fobjecte
Hi Ceph users,
I am stuck with the benchmark results that I
obtained from the ceph cluster.
Ceph Cluster:
1 mon node and 4 OSD nodes of 1 TB each. I have one journal per OSD.
All disks are identical and the nodes are connected by 10G. Below are the dd
results:
dd if=/dev/zero
Hello all,
For a federated configuration, does the radosgw-agent use any type of
prioritization in regards to the way endpoints are used for the
synchronization process? (i.e. the order they are listed in the region-map,
maybe "rgw dns name" used, etc). We have a dedicated node in each zone to
ha
Hi,
I use Ceph firefly (0.80.6) on Ubuntu Trusty (14.04).
When I add a new osd to a Ceph cluster, I run these
commands:
uuid=$(uuidgen)
osd_id=$(ceph --cluster "my_cluster" osd create "$uuid")
printf "The id of this osd will be $osd_id.\n"
And the osd id is chosen automatically by t
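For context, the rest of a typical manual add (sketched with example device
names; adapt to your layout) is roughly:
mkdir /var/lib/ceph/osd/my_cluster-$osd_id
mkfs -t xfs /dev/sdX1 && mount /dev/sdX1 /var/lib/ceph/osd/my_cluster-$osd_id
ceph-osd --cluster my_cluster -i $osd_id --mkfs --mkkey --osd-uuid $uuid
ceph --cluster my_cluster auth add osd.$osd_id osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/my_cluster-$osd_id/keyring
ceph --cluster my_cluster osd crush add osd.$osd_id 1.0 host=$(hostname -s)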
Sorry all for the typo. The master in zone-1 is
rgw01-zone1-r1.domain-name.com not rgw01-zone1-d1.domain-name.com. The
first paragraph should have read as follows:
For a federated configuration, does the radosgw-agent use any type of
prioritization in regards to the way endpoints are used for t
Hi François,
It's probably better to leave the choice of OSD id to the Ceph cluster. Why do
you need it?
Cheers
On 02/10/2014 00:38, Francois Lafont wrote:
> Hi,
>
> I use Ceph firefly (0.80.6) on Ubuntu Trusty (14.04).
> When I add a new osd to a Ceph cluster, I run these
> commands :
>
> uuid=$(
On 02/10/2014 00:53, Loic Dachary wrote:
> Hi François,
Hello,
> It's probably better to leave the OSD id to the Ceph cluster.
Ah, ok.
> Why do you need it ?
It's just to have:
srv1 172.31.10.1 --> osd-1
srv2 172.31.10.2 --> osd-2
srv3 172.31.10.3 --> osd-3
It's more friendly than:
srv1
Hi,
any news about this blueprint?
https://wiki.ceph.com/Planning/Blueprints/Giant/rbd%3A_journaling
Regards,
Alexandre
- Original Message -
From: "Sage Weil"
To: "Patrick McGarry"
Cc: "Ceph-User" , ceph-de...@vger.kernel.org
Sent: Wednesday, 1 October 2014 22:32:30
Subject: Re: [cep
The agent itself only goes to the gateways it was configured to use.
However, in a cross zone copy of objects, the gateway will round robin
to any of the specified endpoints in its regionmap.
Yehuda
On Wed, Oct 1, 2014 at 3:46 PM, Lyn Mitchell wrote:
> Sorry all for the typo. The master in zon
This is a major bugfix release for firefly, fixing a range of issues
in the OSD and monitor, particularly with cache tiering. There are
also important fixes in librados, with the watch/notify mechanism used
by librbd, and in radosgw.
A few pieces of new functionality have been backported, including
On Wed, Oct 1, 2014 at 5:56 PM, Sage Weil wrote:
> This is a major bugfix release for firefly, fixing a range of issues
> in the OSD and monitor, particularly with cache tiering. There are
> also important fixes in librados, with the watch/notify mechanism used
> by librbd, and in radosgw.
>
> A
On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote:
> Hello Christian,
>
>
> Il 01/10/2014 19:20, Christian Balzer ha scritto:
> > Hello,
> >
> > On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:
> >
> >> Dear all,
> >>
> >> i need few tips about Ceph best solution for dr
Hello,
On Wed, 1 Oct 2014 14:43:49 -0700 Jakes John wrote:
> Hi Ceph users,
> I am stuck with the benchmark results that I
> obtained from the ceph cluster.
>
> Ceph Cluster:
>
> 1 Mon node, 4 osd nodes of 1 TB. I have one journal for each osd.
>
> All disks are identi
Thanks Christian, you saved me a lot of time! I mistakenly assumed the -b
value to be in KB.
Now, when I re-ran the same benchmarks, I got ~106 MB/s for writes and
~1050 MB/s for reads with a replica count of 2.
I am slightly confused about the read and write bandwidth terminology. What
is the theoretical maximum for
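For reference, with -b given in bytes a typical run looks like this (pool
name and numbers are examples):
rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
rados bench -p testpool 60 seq -t 16
The seq pass reads back what the write pass left behind thanks to
--no-cleanup.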