On Tue, Dec 03, 2013 at 02:09:26PM +0800, 飞 wrote:
>hello, I'm testing Ceph as storage for KVM virtual machine images;
>my cluster has 3 mons and 3 data nodes, and every data node has 8x2TB SATA
>HDDs and 1 SSD for the journal.
>when I shut down one data node to simulate a server fault, the cl
> your client writes the file to one osd, and before this osd acknowledges your
> write request,
> it ensures that it is copied to the other OSD(s).
I think this behaviour depends on how you configure your pool:
osd pool default min size:
Description:
Sets the minimum number of written replicas for o
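For reference, a minimal sketch of checking and adjusting those settings at
runtime, assuming an example pool named rbd (adjust the pool name and numbers
to your setup):

# show current size / min_size for all pools
ceph osd dump | grep pool

# keep 3 replicas, but acknowledge writes once 2 of them are on disk
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2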
Just noticed that Ubuntu 13.10 (saucy) is still causing failures when
attempting to naively install ceph (in particular when using ceph-deploy).
Now I know this is pretty easy to work around (e.g. s/saucy/raring/ in
ceph.list) but it seems highly undesirable to make installing ceph
*harder* tha
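The workaround mentioned boils down to something like this, assuming
ceph-deploy wrote the repo entry to /etc/apt/sources.list.d/ceph.list (the
path and release names may differ on your box):

# point the Ceph repo at raring packages instead of saucy, then retry
sed -i 's/saucy/raring/g' /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install ceph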
Hi Felix,
I've been running similar calculations recently. I've been using this
tool from Inktank to calculate RADOS reliabilities with different
assumptions:
https://github.com/ceph/ceph-tools/tree/master/models/reliability
But I've also had similar questions about RBD (or any multi-part files
On 06/12/13 22:56, Dimitri Maziuk wrote:
>>> Most servers nowadays are re-provisioned even more often,
> Not where I work they aren't.
>
>>> Fedora release comes with more and more KVM/Libvirt features
>>> and resolved issues, so the net effect is p
I'm having a play with ceph-deploy after some time away from it (mainly
relying on the puppet modules).
With a test setup of only two debian testing servers, I do the following:
ceph-deploy new host1 host2
ceph-deploy install host1 host2 (installs emperor)
ceph-deploy mon create host1 host2
ceph-
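For anyone following along at home, the usual continuation of that sequence
is roughly the following; host and device names are placeholders, and the
gatherkeys step comes up again later in the thread:

ceph-deploy gatherkeys host1
ceph-deploy osd create host1:/dev/sdb host2:/dev/sdb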
Hi,
You didn't state what version of ceph or kvm/qemu you're using. I think it
wasn't until qemu 1.5.0 (1.4.2+?) that an async patch from inktank was
accepted into mainstream which significantly helps in situations like this.
If you're not using that, on top of not limiting recovery threads, you'll prob.
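For reference, the usual way to keep recovery from starving client I/O is
something along these lines; the values are illustrative, the same options can
go under [osd] in ceph.conf, and on older releases you may need to loop over
individual OSD ids instead of using the wildcard:

# throttle backfill/recovery so client requests still get serviced
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'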
On Sat, Dec 7, 2013 at 7:17 PM, Mark Kirkwood
wrote:
> On 08/12/13 12:14, Mark Kirkwood wrote:
>
>> I wonder if it might be worth adding a check at the start of either
>> ceph-deploy to look for binaries we are gonna need.
>>
>
> ...growl: either ceph-deploy *or ceph-disk* was what I was thinking!
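The sort of preflight check being suggested could be as simple as the sketch
below; the list of binaries is only illustrative:

# warn early about missing tools instead of failing halfway through
for bin in ceph ceph-disk sgdisk partprobe; do
    command -v "$bin" >/dev/null 2>&1 || echo "missing: $bin"
done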
thanks for help.
moving slowly forward.
cheers
Wojciech
On 06/12/13 13:54, Alfredo Deza wrote:
On Fri, Dec 6, 2013 at 5:13 AM, Wojciech Giel
wrote:
Hello,
I'm trying to install Ceph but can't get it working; the documentation is not
clear and confusing about how to do it.
I have cloned 3 machines with ubunt
On 09/12/2013 01:54, Loic Dachary wrote:
>
>
> On 09/12/2013 00:13, Regola, Nathan (Contractor) wrote:
>> Hi Loic,
>>
>> I made a few changes to the text. Feel free to comment/change it.
>>
>
> Better indeed :-) Do you see a way to avoid the repetition of "future" ?
I saw you updated the sent
It looks like someone else may have made that change, but of course, that
is fine :-)
I ran it through a spell checker, and found two mistakes (now corrected)
in the pad. There are several people on the pad currently.
Best,
Nate
On 12/9/13 10:36 AM, "Loic Dachary" wrote:
>
>
>On 09/12/2013 01:
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood wrote:
>
> I'd suggest testing the components separately - try to rule out NIC (and
> switch) issues and SSD performance issues, then when you are sure the bits
> all go fast individually test how ceph performs again.
>
> What make and model of SSD? I'
On Mon, Dec 9, 2013 at 6:49 AM, Matthew Walster wrote:
> I'm having a play with ceph-deploy after some time away from it (mainly
> relying on the puppet modules).
>
> With a test setup of only two debian testing servers, I do the following:
>
> ceph-deploy new host1 host2
> ceph-deploy install hos
On 12/09/2013 10:06 AM, Greg Poirier wrote:
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood
wrote:
I'd suggest testing the components separately - try to rule out NIC
(and switch) issues and SSD performance issues, then when you are
sure the bits
What SSDs are you using, and is there any under-provisioning on them?
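A quick way to test the components separately, as suggested above; host and
device names are placeholders, and note that the dd test writes to the target,
so point it at a scratch device or file:

# raw network throughput (run 'iperf -s' on the far end first)
iperf -c other-host

# journal-style SSD write test: small, direct, synchronous writes
dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync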
On 2013-12-09 16:06, Greg Poirier wrote:
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood
wrote:
I'd suggest testing the components separately - try to rule out NIC
(and switch) issues and SSD performance issues, then when you
This is similar to an issue that we ran into; the root cause was that
ceph-deploy doesn't set the partition type guid (that is used to auto
activate the volume) on an existing partition. Setting this beforehand
while pre-creating the partition is a must, or you have to put entries in
fstab.
On Mon, D
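For the fstab route mentioned above, the entry would look roughly like this;
the device, OSD id and mount options are placeholders:

# /etc/fstab
/dev/sdb1   /var/lib/ceph/osd/ceph-0   xfs   noatime,inode64   0 0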
On 9 December 2013 16:26, Andrew Woodward wrote:
> This is similar to an issue that we ran into; the root cause was that
> ceph-deploy doesn't set the partition type guid (that is used to auto
> activate the volume) on an existing partition. Setting this beforehand
> while pre-creating the partition
Hi folks,
I know it's short notice, but we have recently formed a Ceph users meetup group
in the DC area. We have our first meetup on 12/18. We should have more notice
before the next one, so please join the meetup group, even if you can't make
this one!
http://www.meetup.com/Ceph-DC/events/
On Mon, Dec 9, 2013 at 1:17 AM, Robert van Leeuwen
wrote:
>> your client writes the file to one osd, and before this osd acknowledges
>> your write request,
>> it ensures that it is copied to the other OSD(s).
>
> I think this behaviour depends on how you configure your pool:
>
> osd pool default min s
On Wed, Dec 4, 2013 at 7:15 AM, Mr.Salvatore Rapisarda
wrote:
> Hi,
>
> i have a ceph cluster with 3 nodes on Ubuntu 12.04.3 LTS and ceph version
> 0.72.1
>
> My configuration is as follows:
>
> * 3 MON
> - XRVCLNOSTK001=10.170.0.110
> - XRVCLNOSTK002=10.170.0.111
> - XRVOSTKMNG001=10.170.0.
https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules Lists the 4
variants, in your case it sounds like a normal ceph volume so the guid you
want is probably 4fbd7e29-9d25-41b8-afd0-062c0ceff05d.
You will need sgdisk to set the guid correctly (part of gdisk)
from man
-t, --typeco
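Putting that together, tagging an existing partition would look something like
this; the device and partition number are examples:

# mark partition 1 on /dev/sdb as a Ceph OSD data partition, then re-read
# the partition table so the udev rule can fire
sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
partprobe /dev/sdb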
[ Re-added the list since I don't have log files. ;) ]
On Mon, Dec 9, 2013 at 5:52 AM, Oliver Schulz wrote:
> Hi Greg,
>
> I'll send this privately, maybe better not to post log-files, etc.
> to the list. :-)
>
>
>> Nobody's reported it before, but I think the CephFS MDS is sending out
>> too man
On 9 December 2013 17:35, Andrew Woodward wrote:
> https://github.com/ceph/ceph/blob/master/udev/95-ceph-osd.rules Lists the
> 4 variants, in your case it sounds like a normal ceph volume so the guid
> you want is probably 4fbd7e29-9d25-41b8-afd0-062c0ceff05d.
>
> You will need sgdisk to set the
@Alfredo - Is this something that ceph-deploy should do, or warn about?
Or should we fix ceph-disk so that it sets the partition guid on existing
partitions?
On Mon, Dec 9, 2013 at 9:44 AM, Matthew Walster wrote:
> On 9 December 2013 17:35, Andrew Woodward wrote:
>
>> https://github.com/ceph/cep
Matthew,
I'll flag this for future doc changes. I noticed that you didn't run
ceph-deploy gatherkeys after creating your monitor(s). Any reason for that
omission?
On Mon, Dec 9, 2013 at 3:49 AM, Matthew Walster wrote:
> I'm having a play with ceph-deploy after some time away from it (mainly
>
John,
Good catch. I did run it, but missed it when reviewing my actions for this
post.
Matthew
On 9 Dec 2013 18:24, "John Wilkins" wrote:
> Matthew,
>
> I'll flag this for future doc changes. I noticed that you didn't run
> ceph-deploy gatherkeys after creating your monitor(s). Any reason for t
On Mon, Dec 9, 2013 at 12:54 PM, Andrew Woodward wrote:
> @Alfredo - Is this something that ceph-deploy should do, or warn about? Or
> should we fix ceph-disk so that it sets the partition guid on existing
> partitions?
This looks like an omission on our end. I've created
http://tracker.ceph.com/is
The founding members of the Ceph User Committee (see below) are pleased to
announce its creation as of December 10th, 2013. We are actively engaged in
organizing meetups, collecting use cases, and more. Any Ceph user is welcome to
join, simply by sending an email to our mailing list
(ceph-commu
Is there any possibility to remove these meta files (without recreating the cluster)?
Files names:
{path}.bucket.meta.test1:default.4110.{sequence number}__head_...
--
Regards
Dominik
2013/12/8 Dominik Mostowiec :
> Hi,
> My API app that puts files to S3/Ceph checks if a bucket exists by creating
> this bucket
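If you do end up removing those metadata objects by hand, the rados tool can
do it, but tread carefully: the pool name and object naming below are
assumptions about how radosgw stored bucket metadata at the time, and deleting
objects the gateway still references can leave it in a bad state. Verify the
pool with 'ceph osd lspools' first.

# list the bucket metadata objects
rados -p .rgw ls | grep '\.bucket\.meta\.test1'

# remove one of them (object name is a placeholder for what the listing shows)
rados -p .rgw rm '.bucket.meta.test1:default.4110.<N>'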
We're running OpenStack (KVM) with local disk for ephemeral storage.
Currently we use local RAID10 arrays of 10k SAS drives, so we're quite rich
for IOPS and have 20GE across the board. Some recent patches in OpenStack
Havana make it possible to use Ceph RBD as the source of ephemeral VM
storage, s
> We're running OpenStack (KVM) with local disk for ephemeral storage.
> Currently we use local RAID10 arrays of 10k SAS drives, so we're quite rich
> for IOPS and have 20GE across the board. Some recent patches in OpenStack
> Havana make it possible to use Ceph RBD as the source of ephemeral VM
>
Yes, we use it now and it works well. It makes each API call finish
more quickly: for example, create image, live-snapshot, create
instance, resize, etc. Of course, not all of those have been done
on the community version. But we are going
On Tue, Dec 10, 2013 at 10:04 AM, Blair Bethwaite
wro