On Thu, 14 Aug 2014 12:07:54 -0700 Craig Lewis wrote:
> On Thu, Aug 14, 2014 at 12:47 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Tue, 12 Aug 2014 10:53:21 -0700 Craig Lewis wrote:
> >
> >> That's a low probability, given the number of disks you have. I
> >> would've taken that bet (wi
Hi,
With EC pools in Ceph you are free to choose any K and M parameters you
like. The documentation explains what K and M do, so far so good.
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pro's and
con's that I should consider and/or are there best practices for
choosing the right K/M-parameters?
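For a concrete illustration of what that choice looks like in practice (profile and pool names below are made up, and the syntax is the firefly-era CLI): k=4/m=2 and k=8/m=4 both give the same 2/3 usable-to-raw ratio, yet differ in how many failure domains they need and how many losses they survive.

    # 4 data chunks + 2 coding chunks: needs 6 failure domains, survives 2 losses
    ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool42 128 128 erasure ec42

    # 8 data chunks + 4 coding chunks: same 2/3 ratio, needs 12 failure domains,
    # survives 4 losses, but every read and repair touches more OSDs
    ceph osd erasure-code-profile set ec84 k=8 m=4 ruleset-failure-domain=host
    ceph osd pool create ecpool84 128 128 erasure ec84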
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
> Hi,
>
> With EC pools in Ceph you are free to choose any K and M parameters you
> like. The documentation explains what K and M do, so far so good.
>
> Now, there are certain combinations of K and M that appear to have more
> or less the same result.
Hi,
I've been trying to tweak and improve the performance of our Ceph cluster.
One of the operations that I can't seem to improve much is the
delete. From what I've gathered, every time there is a delete it goes
directly to the HDD, hitting its performance - the op may be recorded in
th
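The per-object cost being described is easy to observe with the rados CLI; a minimal sketch, assuming a throwaway pool named testpool and any local file:

    # write one object, then time its removal
    rados -p testpool put testobj /tmp/somefile
    time rados -p testpool rm testobj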
On 08/15/2014 12:23 PM, Loic Dachary wrote:
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
Hi,
With EC pools in Ceph you are free to choose any K and M parameters you
like. The documentation explains what K and M do, so far so good.
Now, there are certain combinations of K and M that appear to have more
or less the same result.
On 08/15/2014 06:24 AM, Wido den Hollander wrote:
On 08/15/2014 12:23 PM, Loic Dachary wrote:
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
Hi,
With EC pools in Ceph you are free to choose any K and M parameters you
like. The documentation explains what K and M do, so far so good.
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pro's and
con's that I should consider and/or are there best practices for
choosing the right K/M-parameters?
>>
>> Loic might have a better answer.
On 15/08/2014 13:24, Wido den Hollander wrote:
> On 08/15/2014 12:23 PM, Loic Dachary wrote:
>> Hi Erik,
>>
>> On 15/08/2014 11:54, Erik Logtenberg wrote:
>>> Hi,
>>>
>>> With EC pools in Ceph you are free to choose any K and M parameters you
> >> like. The documentation explains what K and M do, so far so good.
On 15/08/2014 14:36, Erik Logtenberg wrote:
> Now, there are certain combinations of K and M that appear to have more
> or less the same result. Do any of these combinations have pro's and
> con's that I should consider and/or are there best practices for
> choosing the right K/M-parameters?
On Fri, 15 Aug 2014, Haomai Wang wrote:
> Hi Kenneth,
>
> I don't find valuable info in your logs; they lack the necessary
> debug output from the crashing code path.
>
> But I scanned the encode/decode implementation in GenericObjectMap and
> found something bad.
>
> For example, two oids have the same ha
>>
>> I haven't done the actual calculations, but given some % chance of disk
>> failure, I would assume that losing x out of y disks has roughly the
>> same chance as losing 2*x out of 2*y disks over the same period.
>>
>> That's also why you generally want to limit RAID5 arrays to maybe 6
>> disk
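A quick sanity check of that assumption, under a toy model where each disk independently fails within the rebuild window with probability p (p = 0.01 is only an illustrative number):

    P(\text{lose} \ge 2 \text{ of } 6)  \approx \binom{6}{2}  p^2 = 15 p^2  \approx 1.5 \times 10^{-3}
    P(\text{lose} \ge 4 \text{ of } 12) \approx \binom{12}{4} p^4 = 495 p^4 \approx 5.0 \times 10^{-6}

So under this simplified model the two cases are not equivalent: doubling both the group size and the number of tolerated failures reduces the loss probability considerably.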
On 15/08/2014 15:42, Erik Logtenberg wrote:
>>>
>>> I haven't done the actual calculations, but given some % chance of disk
>>> failure, I would assume that losing x out of y disks has roughly the
>>> same chance as losing 2*x out of 2*y disks over the same period.
>>>
>>> That's also why you gen
After dealing with Ubuntu for a few days I decided to circle back to CentOS 7.
It appears that the latest ceph-deploy takes care of the initial issues I had.
Now I'm hitting a new issue that has to do with an improperly defined URL.
When I do "ceph-deploy install node1 node2 node3" it fails beca
Found the file. You need to edit /usr/lib/python2.7/site-packages/ceph_deploy/hosts/centos/install.py and change line 31 to:
    return 'rhel' + distro.normalized_release.major
Probably a bug that needs to be fixed in the ceph-deploy packages.
Hi,
When I attempt to use the ceph-deploy install command on one of my nodes I
get this error:
[ceph1][WARNIN] W: Failed to fetch
http://ceph.com/packages/google-perftools/debian/dists/wheezy/main/binary-armhf/Packages
404 Not Found [IP: 208.113.241.137 80]
[ceph1][WARNIN]
[ceph1][WARNIN] E: Some ind
Hi Ceph team. I encountered a problem like bug #8641 (mine is bigger than the one reported there).
The eviction doesn't work, so I want to understand the logic of evicting data and
where the code that controls the eviction is.
The following text is my configuration.
max_bytes is 1G;
max_objects is 1M.
cache_target_
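For reference, those thresholds are set per cache pool; a minimal sketch, assuming a cache pool named hot-pool and mirroring the 1 GB / 1M-object limits mentioned above:

    ceph osd pool set hot-pool target_max_bytes 1073741824
    ceph osd pool set hot-pool target_max_objects 1000000
    # the tiering agent flushes dirty objects / evicts clean ones once usage
    # reaches these fractions of the targets
    ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
    ceph osd pool set hot-pool cache_target_full_ratio 0.8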
Hi,
I am running into an error when I am attempting to use ceph-deploy install
when creating my cluster. I am attempting to run ceph on Debian 7.0 wheezy
with an ARM processor. When I attempt to run ceph-deploy install I get the
following errors:
[ceph1][WARNIN] E: Unable to locate package ce
Hello everyone:
Since there's no cuttlefish package for 14.04 server in the Ceph
repository (only ceph-deploy is there), I tried to build cuttlefish from
source on 14.04.
Here's what I did:
Get the source by following http://ceph.com/docs/master/install/clone-source/
Enter the source code directory
git check
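The steps are cut off above; the usual autotools sequence for a cuttlefish-era tree looks roughly like this (a sketch only - build dependencies still have to be installed as described in the clone-source doc):

    git clone --recursive https://github.com/ceph/ceph.git
    cd ceph
    git checkout cuttlefish
    git submodule update --init --recursive
    ./autogen.sh
    ./configure
    make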
Actually, we haven't had any CentOS 7 builds; that is why there is no
`el7` in the repos. We are in the middle of getting that sorted out.
Sorry you had to find this!
Also, keep in mind that there is really no need to edit those files.
You can tell ceph-deploy what URL to use and force it with:
c
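The command is cut off above, but it is presumably the --repo-url form of ceph-deploy install; a sketch with placeholder URLs:

    ceph-deploy install --repo-url <repo-url> --gpg-url <gpg-key-url> node1 node2 node3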
Running into an issue w/ Cuttlefish where an RBD snap removal (from
OpenStack Glance) crashed my MON. I was able to get the MON back up and
running by shutting Glance off, and restarting the MON.
Now, the OSDs are crashing when trying to catch up, seemingly due to the
same snapshot.
OSD Log pre-c
Hi there,
I am using CentOS 7 with Ceph version 0.80.5
(38b73c67d375a2552d8ed67843c8a65c2c0feba6), 3 OSDs, 3 MONs, and 1 RadosGW (which
also serves as the ceph-deploy node).
I followed all the instructions in the docs regarding setting up a basic
Ceph cluster, and then followed the ones for setting up RadosGW.
I
There have been a ton of updates to Kraken over the past few months. Feel free
to take a look here: http://imgur.com/fDnqpO9
Just as easy to set up as before, with a lot more functionality. OSD+MON+AUTH
operations are coming in the next release.
It just hasn't been implemented yet. The developers are mostly working on
big features, and waiting to do these small optimizations later. I'm sure
there are plans to address this, but I doubt it will be soon.
If you're interested, you're welcome to contribute:
http://ceph.com/community/contribu
Hi,
I have created a single-node/single-OSD cluster with the latest master
for an experiment and saw that it creates only the rbd pool by default, not the
data/metadata pools. Is this something that changed recently?
Thanks & Regards
Somnath
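A quick way to check which pools a fresh cluster created, and to add pools by hand if the old ones are wanted (names and PG counts below are only examples):

    ceph osd lspools                 # e.g. shows only "rbd" on a new cluster
    ceph osd pool create data 64
    ceph osd pool create metadata 64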