Hi Nico.
If you are experiencing such issues, it would be good if you could provide more info
about your deployment: Ceph version, kernel versions, OS, and filesystem (btrfs/xfs).
Thx Jiri
- Reply message -
From: "Nico Schottelius"
To:
Subject: [ceph-users] Is ceph production ready? [was: Ceph PG
On Thu, 8 Jan 2015 11:41:37 -0700 Robert LeBlanc wrote:
> On Wed, Jan 7, 2015 at 10:55 PM, Christian Balzer wrote:
> > Which of course begs the question of why not having min_size at 1
> > permanently, so that in the (hopefully rare) case of losing 2 OSDs at
> > the same time your cluster still
You can have a look at what I did here with Christian:
* https://github.com/stackforge/swift-ceph-backend
* https://github.com/enovance/swiftceph-ansible
If you have further questions, just let us know.
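(The stackforge projects above use Ceph as the storage backend for the Swift object servers themselves. If all that is needed is a Swift-compatible API on top of Ceph, radosgw can also provide one. A minimal sketch, assuming a radosgw instance is already running; the user and subuser names are only placeholders:

    radosgw-admin user create --uid=swifttest --display-name="Swift test user"
    radosgw-admin subuser create --uid=swifttest --subuser=swifttest:swift --access=full
    radosgw-admin key create --subuser=swifttest:swift --key-type=swift --gen-secret
    # then point a Swift client at the radosgw endpoint, e.g.:
    # swift -A http://<rgw-host>/auth/1.0 -U swifttest:swift -K <swift_secret_key> stat

Which approach fits depends on whether you need Swift itself or just its API.)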
> On 08 Jan 2015, at 15:51, Robert LeBlanc wrote:
>
> Anyone have a reference for documentati
Hello,
On Thu, 8 Jan 2015 17:36:43 +0100 Patrik Plank wrote:
> Hi,
>
> first of all, I am a “ceph-beginner” so I am sorry for the trivial
> questions :).
>
> I have built a three-node Ceph cluster for virtualization.
>
>
>
> Hardware:
>
>
> Dell Poweredge 2900
>
> 8 x 300GB SAS 15k7 w
On 01/08/2015 03:35 PM, Michael J Brewer wrote:
Hi all,
I'm working on filling a cluster to near capacity for testing purposes.
Though I'm noticing that it isn't storing the data uniformly between
OSDs during the filling process. I currently have the following levels:
Node 1:
/dev/sdb1
On Thu, 8 Jan 2015 15:35:22 -0600 Michael J Brewer wrote:
>
>
> Hi all,
>
> I'm working on filling a cluster to near capacity for testing purposes.
> Though I'm noticing that it isn't storing the data uniformly between OSDs
> during the filling process. I currently have the following levels:
>
That depends: with which block size do you get those numbers? Ceph is
really good with larger block sizes (> 256 KB, e.g. 1 MB or 4 MB)...
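To see the effect yourself, rados bench lets you pick the object size; a quick sketch, assuming a test pool named 'test' already exists:

    rados bench -p test 60 write -b 4096 --no-cleanup      # 60 s of writes with 4 KB objects
    rados bench -p test 60 write -b 4194304 --no-cleanup   # same test with 4 MB objects
    rados bench -p test 60 seq                             # sequential reads of the written objects

Comparing the MB/s between the two write runs usually makes the block-size dependence obvious.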
German Anders
--- Original message ---
Subject: [ceph-users] slow read-performance inside the vm
From: Patrik Plank
To: ceph-users@lists.ceph.com
Date: Thu
On Thu, 8 Jan 2015 05:36:43 PM Patrik Plank wrote:
Hi Patrik, just a beginner myself, but I have been through a similar process
recently :)
> With these values above, I get a write performance of 90Mb/s and read
> performance of 29Mb/s, inside the VM. (Windows 2008/R2 with virtio driver
> and wri
Hi all,
I'm working on filling a cluster to near capacity for testing purposes.
Though I'm noticing that it isn't storing the data uniformly between OSDs
during the filling process. I currently have the following levels:
Node 1:
/dev/sdb1 3904027124 2884673100 1019354024
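(For anyone comparing fill levels across OSDs while reading the archive, two quick checks, assuming the default OSD mount points:

    df -h /var/lib/ceph/osd/ceph-*    # per-OSD filesystem usage on one node
    ceph pg dump osds                 # cluster-wide per-OSD kb_used / kb_avail

The second one avoids logging into every node.)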
Anyone have a reference for documentation to get Ceph to be a backend for Swift?
Thanks,
Robert LeBlanc
Thanks for your answer. But another doubt has come up…
Suppose I have 4 hosts with an erasure-coded pool created with k=3, m=1 and a failure
domain of host, and I lose a host. In that case I'll face the same issue as at the
beginning of this thread, because k+m > number of hosts, right?
- In this scenario, w
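(For reference, a profile matching that scenario would be created roughly like this; the profile and pool names are just examples, and on Firefly the failure-domain key is spelled ruleset-failure-domain:

    ceph osd erasure-code-profile set ec31 k=3 m=1 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec31

With only k+m hosts, losing one leaves no spare host to remap to, so the affected PGs stay degraded until the host comes back.)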
On Wed, Jan 7, 2015 at 10:15 PM, Noah Watkins wrote:
> I'm trying to install Firefly on an up-to-date FC20 box. I'm getting
> the following errors:
>
> [nwatkins@kyoto cluster]$ ../ceph-deploy/ceph-deploy install --release
> firefly kyoto
> [ceph_deploy.conf][DEBUG ] found configuration file at:
>
On Thu, 8 Jan 2015, Christopher Kunz wrote:
> Am 05.01.15 um 15:16 schrieb Christopher Kunz:
> > Hi all,
> >
> > I think I have a subtle problem with either understanding CRUSH or in
> > the actual implementation of my CRUSH map.
> >
> > Consider the following CRUSH map: http://paste.debian.net/h
Hi Noah,
The root cause has been found. Please see
http://tracker.ceph.com/issues/10476 for details.
In short, it's an issue between RPM obsoletes and yum priorities
plugin. A final solution is pending, but details of a workaround are
in the issue comments.
- Travis
On Wed, Jan 7, 2015 at 4:0
On Wed, Jan 7, 2015 at 9:55 PM, Christian Balzer wrote:
> On Wed, 7 Jan 2015 17:07:46 -0800 Craig Lewis wrote:
>
>> On Mon, Dec 29, 2014 at 4:49 PM, Alexandre Oliva wrote:
>>
>> > However, I suspect that temporarily setting min size to a lower number
>> > could be enough for the PGs to recover.
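(For the archive: temporarily lowering min_size as suggested above is a single command per pool; the pool name 'rbd' is only an example, and the value should be raised again once recovery finishes:

    ceph osd pool set rbd min_size 1    # let I/O and recovery proceed with one remaining replica
    # ... wait for the PGs to recover ...
    ceph osd pool set rbd min_size 2    # back to the safer value for a size=3 pool
)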
On Wed, Jan 7, 2015 at 10:55 PM, Christian Balzer wrote:
> Which of course begs the question of why not having min_size at 1
> permanently, so that in the (hopefully rare) case of losing 2 OSDs at the
> same time your cluster still keeps working (as it should with a size of 3).
The idea is that
Hi,
first of all, I am a “ceph-beginner” so I am sorry for the trivial questions :).
I have built a three-node Ceph cluster for virtualization.
Hardware:
Dell Poweredge 2900
8 x 300GB SAS 15k7 with Dell Perc 6/i in Raid 0
2 x 120GB SSD in Raid 1 with Fujitsu Raid Controller for Journal
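(Not a definitive answer to the VM performance numbers discussed later in the thread, but one knob commonly checked on this kind of qemu/libvirt setup is RBD client caching on the hypervisor; a sketch of the ceph.conf fragment, which only takes effect as a writeback cache when the libvirt disk cache mode is 'writeback':

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true

For low sequential read numbers inside the guest, people also often raise the guest's read-ahead, e.g. /sys/block/vda/queue/read_ahead_kb.)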
Awesome, thanks Michael.
Regards
William
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Michael J. Kidd
Sent: Wednesday, January 07, 2015 2:09 PM
To: ceph-us...@ceph.com
Subject: [ceph-users] PG num calculator live on Ceph.com
Hello all,
Just a quick heads up that we
I just finished configuring Ceph up to 100 TB with OpenStack ... Since we
are also using Lustre on our HPC machines, I am just wondering what the
bottleneck would be for Ceph at petabyte scale, like Lustre.
Any idea? Or has someone tried it?
--
Regards
Zeeshan Ali Shah
System Administrator - PDC HPC
PhD
Lindsay,
Yes, I would suggest starting with the 'RBD and libRados' use case from
the drop-down, then adjusting the percentages / pool names (if you desire)
as appropriate. I don't have a ton of experience with CephFS, but I would
suspect that the metadata is less than 5% of the total data usage
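(For anyone reading later, the rule of thumb behind the calculator is roughly (number of OSDs x 100) / replica size, rounded up to the next power of two, then split across pools by their expected share of the data. A worked example with assumed numbers:

    OSDS=24; SIZE=3                  # example values only
    echo $(( OSDS * 100 / SIZE ))    # prints 800 -> round up to 1024 PGs in total

A pool expected to hold ~95% of the data would then get most of those PGs.)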
On Thu, Jan 8, 2015 at 11:22 AM, Nur Aqilah
wrote:
> Do you mind telling me which guide you were following, that is, if you
> were using any?
>
(copying list back in)
I was just doing it off the top of my head in this instance. Running
"ceph-deploy install " just worked. It points the servers
I had problems on CentOS 7 with the normal Ceph mirrors... try
using the eu.ceph.com ones; it helped me at the time!
Good luck!
Marco Garcês
#sysadmin
Maputo - Mozambique
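(In practice that just means pointing the baseurl lines in /etc/yum.repos.d/ceph.repo at eu.ceph.com; a sketch, assuming the Firefly el7 layout mirrors ceph.com — paths from memory, so double-check them:

    [ceph]
    name=Ceph packages
    baseurl=http://eu.ceph.com/rpm-firefly/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
)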
On Thu, Jan 8, 2015 at 1:09 PM, John Spray wrote:
> On Tue, Jan 6, 2015 at 7:40 AM, Nur Aqilah
> wrote:
>>
>> I was won
Am 05.01.15 um 15:16 schrieb Christopher Kunz:
> Hi all,
>
> I think I have a subtle problem with either understanding CRUSH or in
> the actual implementation of my CRUSH map.
>
> Consider the following CRUSH map: http://paste.debian.net/hidden/085b3f20/
>
> I have 3 chassis' with 7 nodes each (
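(Without knowing what the pasted map contains: for comparison, a rule that puts each replica into a different chassis normally looks like this in the decompiled CRUSH map:

    rule replicated_chassis {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type chassis
            step emit
    }

If the placement you see differs from what a rule like this would give, the problem is usually in the chooseleaf type or the bucket hierarchy.)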
On Tue, Jan 6, 2015 at 7:40 AM, Nur Aqilah
wrote:
> I was wondering if anyone could give me some guidelines for installing Ceph
> on CentOS 7. I followed the guidelines on ceph.com for the Quick
> Installation, but there was always this one particular error. When I typed
> in this command "
Thanks, Craig!
I'll try a CRUSH reweight if the OSDs drift further apart. For now it is
mostly okay.
It looks like this could be automated by an external tool that does a 0.05 step
for the biggest OSD, waits for the data to settle down, and then decides/asks if
another step should be performed. Balancing 106 nod
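(A rough sketch of that kind of loop, purely illustrative: the OSD id, starting weight and health check are assumptions, and ceph osd reweight-by-utilization already exists as a coarser built-in:

    #!/bin/sh
    # Illustrative only: lower one OSD's CRUSH weight in 0.05 steps,
    # waiting for backfill/recovery to finish before asking about the next step.
    OSD=12            # example: id of the fullest OSD
    WEIGHT=3.64       # its current CRUSH weight, taken from 'ceph osd tree'
    STEP=0.05
    while true; do
        WEIGHT=$(echo "$WEIGHT - $STEP" | bc)
        ceph osd crush reweight osd.$OSD $WEIGHT
        # crude "settled" check: wait until health no longer mentions backfill/recovery
        while ceph health | grep -qiE 'backfill|recover'; do
            sleep 60
        done
        printf 'osd.%s is now at %s; take another step? [y/N] ' "$OSD" "$WEIGHT"
        read answer
        [ "$answer" = "y" ] || break
    done
)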
The short answer is that uniform distribution is a lower-priority feature
of the CRUSH hashing algorithm.
CRUSH is designed to be consistent and stable in its hashing. For the
details, you can read Sage's paper (
http://ceph.com/papers/weil-rados-pdsw07.pdf). The goal is that if you
make a chan