Interesting, thanks for the link.
I hope the quality on the 3610/3710 is as good as the 3700... we
haven't yet seen a single failure in production.
Cheers, Dan
On Fri, Feb 20, 2015 at 8:06 AM, Alexandre DERUMIER wrote:
> Hi,
>
> Intel has just released new ssd s3610:
>
> http://www.anandtech.c
Hello,
On Fri, 20 Feb 2015 09:30:56 +0100 Dan van der Ster wrote:
> Interesting, thanks for the link.
Interesting indeed, more for a non-Ceph project of mine, but still. ^o^
> I hope the quality on the 3610/3710 is as good as the 3700... we
> haven't yet seen a single failure in production.
>
Hi all,
Back in the dumpling days, we were able to run the emperor MDS with
dumpling OSDs -- this was an improvement over the dumpling MDS.
Now we have stable firefly OSDs, but I was wondering if we can reap
some of the recent CephFS developments by running a giant or ~hammer
MDS with our firefly
Hello everyone,
I have a cluster running with OpenStack. It has 6 OSDs (3 in each of 2
different locations). Each pool has a replication size of 3, with 2 copies in
the primary location and 1 copy at the secondary location.
Everything is running as expected, but the OSDs are not marked as down when I
power off an OSD se
Hello,
On 20/02/2015 12:26, Sudarshan Pathak wrote:
Hello everyone,
I have a cluster running with OpenStack. It has 6 OSDs (3 in each of 2
different locations). Each pool has a replication size of 3, with 2 copies in
the primary location and 1 copy at the secondary location.
Everything is running as expecte
Hi Dan,
I remember http://tracker.ceph.com/issues/9945 introducing some issues with
running cephfs between different versions of giant/firefly.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg14257.html
So if you upgrade please be aware that you'll also have to update the
clients.
On
unsubscribe
--
Best regards,
Konstantin Khatskevich
I have a Cluster of 3 hosts, running giant on Debian wheezy and
Backports Kernel 3.16.0-0.bpo.4-amd64.
For testing I did a
~# ceph osd out 20
from a clean state.
Ceph starts rebalancing; watching ceph -w, one sees the number of PGs stuck
unclean rise and then drop to about 11.
Shortly after th
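A minimal sketch of how one might inspect and then revert such a test (OSD id 20 as above; commands assume an admin keyring):
~# ceph health detail          # lists the PGs still reported stuck unclean
~# ceph pg dump_stuck unclean  # shows which PGs they are and their state
~# ceph osd in 20              # put the OSD back in and let the cluster settle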
I manually edited my crushmap, basing my changes on
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
I have SSDs and HDDs in the same box and wanted to separate them by
ruleset. My current crushmap can be seen at http://pastie.org/9966238
I had it install
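For reference, a rough sketch of the usual decompile/edit/recompile cycle for a CRUSH map (file names are only illustrative):
~# ceph osd getcrushmap -o crushmap.bin
~# crushtool -d crushmap.bin -o crushmap.txt
   (edit crushmap.txt: separate ssd/hdd roots, buckets and rulesets)
~# crushtool -c crushmap.txt -o crushmap.new
~# ceph osd setcrushmap -i crushmap.new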
The process of creating an erasure-coded pool is slightly different from
creating a replicated one. You can use Sébastien's guide to create/manage the
osd tree, but you should follow this guide
http://ceph.com/docs/giant/dev/erasure-coded-pool/ to create the EC pool.
I'm not sure (i.e. I never tried) to cre
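Roughly, the steps from that guide look like this (profile name, pool name and PG counts below are only examples):
~# ceph osd erasure-code-profile set myprofile k=2 m=2
~# ceph osd erasure-code-profile get myprofile
~# ceph osd pool create ecpool 128 128 erasure myprofile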
Here was the process I went through.
1) I created an EC pool which created ruleset 1
2) I edited the crushmap to approximately its current form
3) I discovered my previous EC pool wasn't doing what I meant for it to do,
so I deleted it.
4) I created a new EC pool with the parameters I wanted and to
Oh, and I don't yet have any important data here, so I'm not worried about
losing anything at this point. I just need to get my cluster happy again so
I can play with it some more.
On Fri, Feb 20, 2015 at 11:00 AM, Kyle Hutson wrote:
> Here was the process I went through.
> 1) I created an EC po
On 02/19/2015 10:56 AM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 10:27 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 9:32 PM, Mark Nelson wrote:
On 02/18/2015 02:19 PM, Florian Haas wrote:
Hey everyone,
I must confess I still don't fully understand this problem and
don't exactly kn
Should I infer from the silence that there is no way to recover from the
"FAILED assert(last_e.version.version < e.version.version)" errors?
Thanks,
Jeff
- Forwarded message from Jeff -
Date: Tue, 17 Feb 2015 09:16:33 -0500
From: Jeff
To: ceph-users@lists.ceph.com
Subj
On Thu, Feb 19, 2015 at 8:30 PM, Christian Balzer wrote:
>
> Hello,
>
> I have a cluster currently at 0.80.1 and would like to upgrade it to
> 0.80.7 (Debian as you can guess), but for a number of reasons I can't
> really do it all at the same time.
>
> In particular I would like to upgrade the pr
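A rough sketch of one common way to stage such an upgrade on Debian, monitors first and one daemon at a time (package and daemon names are illustrative, not from the thread):
~# apt-get update && apt-get install --only-upgrade ceph ceph-common
~# /etc/init.d/ceph restart mon.a   # one monitor at a time, waiting for HEALTH_OK
~# /etc/init.d/ceph restart osd.0   # then each OSD in turn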
That's pretty strange, especially since the monitor is getting the
failure reports. What version are you running? Can you bump up the
monitor debugging and provide its output from around that time?
-Greg
On Fri, Feb 20, 2015 at 3:26 AM, Sudarshan Pathak wrote:
> Hello everyone,
>
> I have a clust
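A hedged sketch of bumping the monitor debug levels as suggested (mon id "a" and the levels shown are only examples):
~# ceph tell mon.a injectargs '--debug-mon 10 --debug-ms 1'   # repeat per monitor
   (reproduce the OSD power-off, then collect /var/log/ceph/ceph-mon.*.log)
~# ceph tell mon.a injectargs '--debug-mon 1 --debug-ms 0'    # turn the extra logging back down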
On Fri, Feb 20, 2015 at 3:50 AM, Luis Periquito wrote:
> Hi Dan,
>
> I remember http://tracker.ceph.com/issues/9945 introducing some issues with
> running cephfs between different versions of giant/firefly.
>
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg14257.html
Hmm, yeah, that's
On Fri, Feb 20, 2015 at 7:56 PM, Gregory Farnum wrote:
> On Fri, Feb 20, 2015 at 3:50 AM, Luis Periquito wrote:
>> Hi Dan,
>>
>> I remember http://tracker.ceph.com/issues/9945 introducing some issues with
>> running cephfs between different versions of giant/firefly.
>>
>> https://www.mail-archiv
You can try searching the archives and tracker.ceph.com for hints
about repairing these issues, but your disk stores have definitely
been corrupted and it's likely to be an adventure. I'd recommend
examining your local storage stack underneath Ceph and figuring out
which part was ignoring barriers.
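A minimal sketch of the kind of checks that implies, assuming the OSDs sit on plain SATA/SAS disks (device names are examples):
~# mount | grep ceph          # look for nobarrier / barrier=0 mount options
~# hdparm -W /dev/sdX         # shows whether the drive's volatile write cache is enabled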
I have a Cluster of 3 hosts, running Debian wheezy and Backports Kernel
3.16.0-0.bpo.4-amd64.
For testing I did a
~# ceph osd out 20
from a clean state.
Ceph starts rebalancing; watching ceph -w, one sees the number of PGs stuck unclean
rise and then drop to about 11.
Shortly after that the cl
Okay, thanks for pointing me in the right direction. From a quick read I
think this will work, but I will take a look at it in detail. Thanks!
Jake
On Tue, Feb 17, 2015 at 3:16 PM, Gregory Farnum wrote:
> On Tue, Feb 17, 2015 at 10:36 AM, Jake Kugel wrote:
> > Hi,
> >
> > I'm just starting to look at
Any update on this matter? I've been thinking of upgrading from 0.80.7 to
0.80.8 - lucky that I saw this thread first...
On Thu, Feb 12, 2015 at 10:39 PM, 杨万元 wrote:
> Thanks very much for your advice.
> Yes, as you said, disabling rbd_cache will improve the read requests, but
> if I disabled rb
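For reference, RBD cache is a client-side option; a minimal ceph.conf sketch for turning it off looks like this (whether that is the right trade-off depends on the regression discussed above):
[client]
    rbd cache = false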
Is it possible to run an erasure coded pool using the default k=2, m=2 profile on a
single node?
(This is just for functionality testing.) The single node has 3 OSDs.
Replicated pools run fine.
ceph.conf does contain:
osd crush chooseleaf type = 0
-- Tom Deneau
Hi Tom,
On 20/02/2015 22:59, Deneau, Tom wrote:
> Is it possible to run an erasure coded pool using the default k=2, m=2 profile on
> a single node?
> (This is just for functionality testing.) The single node has 3 OSDs.
> Replicated pools run fine.
For k=2 m=2 to work you need four (k+m) OSDs. As
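As a hedged illustration, with enough OSDs on the one host the failure domain can also be set to the OSD level in the profile itself rather than via ceph.conf (names below are examples; on firefly/giant the parameter is ruleset-failure-domain):
~# ceph osd erasure-code-profile set single-node k=2 m=2 ruleset-failure-domain=osd
~# ceph osd pool create ecpool 128 128 erasure single-node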
On 02/16/2015 12:57 PM, Steffen Winther wrote:
> Dan Mick writes:
>
>>
>> 0cbcfbaa791baa3ee25c4f1a135f005c1d568512 on the 1.2.3 branch has the
>> change to yo 1.1.0. I've just cherry-picked that to v1.3 and master.
> Do you mean that you merged 1.2.3 into master and branch 1.3?
I put just that
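Presumably something along these lines (branch names as mentioned above; purely illustrative):
$ git checkout v1.3
$ git cherry-pick 0cbcfbaa791baa3ee25c4f1a135f005c1d568512
$ git checkout master
$ git cherry-pick 0cbcfbaa791baa3ee25c4f1a135f005c1d568512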
By the way, you may want to put these sorts of questions on
ceph-calam...@lists.ceph.com, which is specific to calamari.
On 02/16/2015 01:08 PM, Steffen Winther wrote:
> Steffen Winther writes:
>
>> Trying to figure out how to initially configure
>> calamari clients to know about my
>> Ceph Clus