Hey!
I'm trying to understand the peering algorithm based on [1] and [2]. There
are things that aren't really clear, or I'm not entirely sure whether I
understood them correctly, so I'd like to ask for some clarification on the
points below:
1. Is it right that the primary writes the operations to the PG
On 09/28/2015 12:55 PM, Paul Mansfield wrote:
>
> Hi,
>
> We used to rsync from eu.ceph.com into a local mirror for when we build
> our code. We need to re-do this to pick up fresh packages built since
> the intrusion.
>
> It doesn't seem possible to rsync from any current Ceph download site
>
On 09/26/2015 03:58 PM, Iban Cabrillo wrote:
> Hi cephers,
> I am getting download errors from the Debian repos (I checked with firefly
> and hammer):
>
> W: Failed to fetch http://ceph.com/debian-hammer/dists/trusty/InRelease
>
> W: Failed to fetch http://ceph.com/debian-hammer/dists/trusty/Rele
On 28.09.2015 19:55, Raluca Halalai wrote:
> I am trying to deploy a Ceph Storage Cluster on Amazon EC2, in different
> regions.
Don't do this.
What do you want to prove with such a setup?
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
On 28.09.2015 20:47, Robert LeBlanc wrote:
> Ceph consulting was provided by Inktank[1], but the Inktank website is
> down. How do we go about getting consulting services now?
Have a look at the RedHat site for Ceph:
https://www.redhat.com/en/technologies/storage/ceph
There are also several independent Ceph consultants.
>
> What do you want to prove with such a setup?
>
It's for research purposes. We are trying different storage systems in a
WAN environment.
Best regards,
--
Raluca
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
On 29.09.2015 09:54, Raluca Halalai wrote:
>> What do you want to prove with such a setup?
>
> It's for research purposes. We are trying different storage systems in a
> WAN environment.
Then Ceph can be ticked off the list of candidates.
Its purpose is not to be a WAN storage system.
It w
I'm having some issues downloading a big file (60G+).
After some investigation it seems to be very similar to
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001272.html,
however I'm currently running Hammer 0.94.3, and the files were
uploaded back when the cluster was running Firefly
In my understanding, the deployment you suggested (local Ceph clusters +
Rados Gateways) would imply giving up strong consistency guarantees. In
that case, it is not what we are aiming for.
Thank you for your replies.
On Tue, Sep 29, 2015 at 10:02 AM, Robert Sander <
r.san...@heinlein-support.de>
Hello,
On Tue, 29 Sep 2015 10:21:00 +0200 Raluca Halalai wrote:
> In my understanding, the deployment you suggested (local Ceph clusters +
> Rados Gateways) would imply giving up strong consistency guarantees. In
> that case, it is not what we are aiming for.
>
Indeed, there is another planned p
On 29/09/15 08:24, Wido den Hollander wrote:
>> We used to rsync from eu.ceph.com into a local mirror for when we build
>
> $ rsync -avr --stats --progress eu.ceph.com::ceph .
>
> Worked just fine. What error did you get?
A colleague asked me to post the message; I'm now not sure what he
might
Ah, this is a nice clear log!
I've described the bug here:
http://tracker.ceph.com/issues/13271
In the short term, you may be able to mitigate this by increasing
client_cache_size (on the client) if your RAM allows it.
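For example (just a sketch; the value below is arbitrary, pick one that
fits your RAM, and IIRC the default is 16384 inodes), in the client's
ceph.conf:
[client]
    client cache size = 65536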
John
On Tue, Sep 29, 2015 at 12:58 AM, Scottix wrote:
> I know this is an
Hi,
On 2015-09-25 at 22:23, Udo Lembke wrote:
> you can use this sources-list
>
> cat /etc/apt/sources.list.d/ceph.list
> deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3
> jessie main
The thing is: whatever I write into ceph.list, ceph-deploy just
overwrites it with "d
On 29/09/2015 07:29, Jiri Kanicky wrote:
> Hi,
>
> Is it possible to create journal in directory as explained here:
> http://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Add.2Fmove_journal_in_running_cluster
Yes, the general idea (stop, flush, move, update ceph.conf, mkjournal,
start) works.
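Roughly, for a single OSD that is something like the following (a sketch
only; osd.0 is a placeholder and the service commands depend on your init
system):
$ sudo service ceph stop osd.0
$ sudo ceph-osd -i 0 --flush-journal
  (move the journal file / point "osd journal" in ceph.conf at the new location)
$ sudo ceph-osd -i 0 --mkjournal
$ sudo service ceph start osd.0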
Hi Lionel.
Thank you for your reply. In this case I am considering creating a
separate partition on the SSD drive for each disk. It would be good to
know what the performance difference is, because creating partitions is
kind of a waste of space.
One more question: is it a good idea to move journ
Hi Shinobu,
My keystone version is
2014.2.2
Thanks again.
Rob.
On 09/25/2015 03:10 PM, Jogi Hofmüller wrote:
On 2015-09-11 at 13:20, Florent B wrote:
Jessie repository will be available on the next Hammer release ;)
And how should I continue installing Ceph meanwhile? ceph-deploy new ...
overwrites the /etc/apt/sources.list.d/ceph.list and hence throws an
e
Thanks, that worked. Is there a mapping in the other direction easily
available, i.e. to find where all the 4 MB pieces of a file are?
On 9/28/15, 4:56 PM, "John Spray" wrote:
>On Mon, Sep 28, 2015 at 9:46 PM, Andras Pataki
> wrote:
>> Hi,
>>
>> Is there a way to find out which rados objects a
Jiri,
if you colocate multiple journals on one SSD (we do...), make sure you understand
the following:
- if the SSD dies, all OSDs that had their journals on it are lost...
- the more journals you put on a single SSD (1 journal being 1 partition),
the worse the per-OSD performance, since the total SSD throughput is shared
by all the journals on it
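(Illustrative numbers only: an SSD that sustains roughly 400 MB/s of
sequential writes and hosts 4 journal partitions leaves each OSD with only
about 100 MB/s of journal bandwidth, and since every write goes through the
journal first, that caps those OSDs' write throughput.)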
Hmm, so apparently a similar bug was fixed in 0.87: Scott, can you confirm
that your *clients* were 0.94 (not just the servers)?
Thanks,
John
On Tue, Sep 29, 2015 at 11:56 AM, John Spray wrote:
> Ah, this is a nice clear log!
>
> I've described the bug here:
> http://tracker.ceph.com/issues/13271
I'm positive the client whose log I sent you is 0.94. We do have one client
still on 0.87.
On Tue, Sep 29, 2015, 6:42 AM John Spray wrote:
>
> Hmm, so apparently a similar bug was fixed in 0.87: Scott, can you confirm
> that your *clients* were 0.94 (not just the servers)?
>
> Thanks,
> John
>
> On Tu
On Tue, Sep 29, 2015 at 3:59 AM, Jogi Hofmüller wrote:
> Hi,
>
> On 2015-09-25 at 22:23, Udo Lembke wrote:
>
>> you can use this sources-list
>>
>> cat /etc/apt/sources.list.d/ceph.list
>> deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3
>> jessie main
>
> The thing is: wh
The formula for objects in a file is <inode number>.<object index>. So you'll have noticed they all look something like
12345.0001, 12345.0002, 12345.0003, ...
So if you've got a particular inode and file size, you can generate a
list of all the possible objects in it. To find the object->OSD
mapping you'd need to ask the cluster where each object is placed.
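Something like this should do it (a sketch; "data" is assumed to be your
CephFS data pool, substitute your own pool and object name):
$ ceph osd map data 12345.0001
which prints the PG that object maps to and the up/acting OSD set holding it.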
Thanks, that makes a lot of sense.
One more question about checksumming objects in rados. Our cluster uses
two copies per object, and I have some where the checksums mismatch between
the two copies (which deep scrub warns about). Does ceph store an
authoritative checksum of what the block should lo
Hi,
On 29/09/2015 13:32, Jiri Kanicky wrote:
> Hi Lionel.
>
> Thank you for your reply. In this case I am considering creating a
> separate partition on the SSD drive for each disk. It would be good to
> know what the performance difference is, because creating partitions
> is kind of a waste of spa
On 27/09/2015 10:25, Lionel Bouton wrote:
> On 27/09/2015 09:15, Lionel Bouton wrote:
>> Hi,
>>
>> we just had a quasi-simultaneous crash on two different OSDs which
>> blocked our VMs (min_size = 2, size = 3) on Firefly 0.80.9.
>>
>> the first OSD to go down had this error:
>>
>> 2015-09-27
It's an EIO. The OSD got an EIO from the underlying fs. That's what
causes those asserts. You probably want to redirect this to the relevant
fs mailing list.
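If you want to confirm it on the node itself, something like the following
should show whether the disk or filesystem logged I/O errors (just a
sketch; /dev/sdX is a placeholder for the OSD's data disk):
$ dmesg | grep -i 'i/o error'
$ sudo smartctl -a /dev/sdX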
-Sam
On Tue, Sep 29, 2015 at 7:42 AM, Lionel Bouton
wrote:
> On 27/09/2015 10:25, Lionel Bouton wrote:
>> On 27/09/2015 09:15, Lionel Bouto
[I'm cross-posting this to the other Ceph threads to ensure that it's seen]
We discussed this on Monday on IRC and again in the puppet-openstack IRC
meeting. The current consensus is that we will move from the deprecated
stackforge organization to the openstack one. At this
time
On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
wrote:
>> The density would be higher than the 36 drive units but lower than the
>> 72 drive units (though with shorter rack depth afaik).
> You mean the 1U solution with 12 disks is longer in length than the 72-disk
> 4U version?
This is a bit old and
Good move :-)
On 29/09/2015 23:45, Andrew Woodward wrote:
> [I'm cross-posting this to the other Ceph threads to ensure that it's seen]
>
> We discussed this on Monday on IRC and again in the puppet-openstack IRC
> meeting. The current consensus is that we will move from the deprecated
> stackf
I think I got over 10% improvement when I changed from a cooked journal
file on a btrfs-based system SSD to a raw partition on the system SSD.
The cluster I've been testing with is all consumer-grade stuff running
on top of AMD Piledriver- and Kaveri-based motherboards with the on-board
SATA. My SSDs ar
Hello,
Thanks!!
Anyhow, have you ever tried to access a Swift object using v3?
Shinobu
- Original Message -
From: "Robert Duncan"
To: "Shinobu Kinjo" , ceph-users@lists.ceph.com
Sent: Tuesday, September 29, 2015 8:48:57 PM
Subject: Re: [ceph-users] radosgw and keystone version 3 domains