Hi Andrija,
I've got at least two more stories of a similar nature. One is from a friend of
mine running a Ceph cluster and one is from me. Both of our clusters are pretty
small. My cluster has only two OSD servers with 8 OSDs each and 3 mons. I have
one SSD journal per 4 OSDs. My friend has a cluster of 3 mons
Hi Andrei, nice to meet you again ;)
Thanks for sharing this info with me - I thought it was my mistake,
introducing new OSD components at the same time - I thought that since it's
rebalancing anyway, let's add those new OSDs so they also rebalance - so I don't
have to trigger two data rebalances - but duri
Hello,
new firefly cluster, currently just 1 storage node with 8 OSDs (3TB HDDs,
journals on 4 DC3700 SSDs), the rest of the storage nodes are in the queue
and 3 mons. Thus replication of 1.
Now this is the 2nd incarnation of this "cluster"; I did a first one a few
days ago and this did NOT hap
Hey, I missed the webinar. Is a recording or the slides available for later review?
- Karan -
On 10 Jul 2014, at 18:27, Georgios Dimitrakakis wrote:
> That makes two of us...
>
> G.
>
> On Thu, 10 Jul 2014 17:12:08 +0200 (CEST), Alexandre DERUMIER wrote:
>> Ok, sorry, we have finally receive the
Hi All,
Just a quick question for the list: has anyone seen a significant increase in
RAM usage since Firefly? I upgraded from 0.72.2 to 0.80.3, and now all of my Ceph
servers are using about double the RAM they used to.
The only other significant change to our setup was an upgrade to kernel
3.13.0-30-gen
Quenten,
It has been noted before, and I've seen a thread on the mailing list about it.
In the long term I've not noticed a great increase in RAM. By that I mean that
initially, right after doing the upgrade from Emperor to Firefly and restarting
the osd servers, I did notice about 20-25% more r
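A side note for anyone measuring this: assuming the OSDs are built against
tcmalloc, the per-daemon heap can be inspected with the built-in heap commands,
which makes it easier to separate real growth from allocator slack. A rough
sketch, with osd.0 as a placeholder id:

    # dump tcmalloc heap statistics for one OSD (repeat per daemon)
    ceph tell osd.0 heap stats
    # ask tcmalloc to hand freed memory back to the OS before re-measuring
    ceph tell osd.0 heap release

Comparing these numbers before and after the upgrade says more than raw RSS.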
Hi Karan!
Due to the late reception of the login info I've also missed
a very big part of the webinar.
They did send me an e-mail, though, saying that they will let me know as soon as
a recording of the session is available.
I will let you know then.
Best,
G.
On Mon, 14 Jul 2014
Thanks Georgios
I will wait.
- Karan Singh -
On 14 Jul 2014, at 15:37, Georgios Dimitrakakis wrote:
> Hi Karan!
>
> Due to the late reception of the login info I 've also missed
> a very big part of the webinar.
>
> They did send me an e-mail though saying that they will let me know as soon
Hi Sage,
> * Tarball at http://ceph.com/download/ceph-0.80.3.tar.gz
Sources are currently not available at this link. Is this intentional?
Markus
It looks like those were not added when the build finished. That just
got corrected and they are showing up.
Thanks for letting us know!
On Mon, Jul 14, 2014 at 9:23 AM, Markus Blank-Burian wrote:
> Hi Sage,
>
>> * Tarball at http://ceph.com/download/ceph-0.80.3.tar.gz
> Sources are currently no
On Mon, Jul 14, 2014 at 2:16 AM, Christian Balzer wrote:
>
> Hello,
>
> new firefly cluster, currently just 1 storage node with 8 OSDs (3TB HDDs,
> journals on 4 DC3700 SSDs), the rest of the storage nodes are in the queue
> and 3 mons. Thus replication of 1.
>
> Now this is the 2nd incarnation i
On Fri, 2014-07-11 at 15:53 -0600, Tregaron Bayly wrote:
> I have a four node ceph cluster for testing. As I'm watching the
> relatively idle cluster I'm seeing quite a bit of traffic from one of
> the OSD nodes to the monitor. This node has 8 OSDs and each of them are
> involved in this behavior
I've added some additional notes/warnings to the upgrade and release
notes:
https://github.com/ceph/ceph/commit/fc597e5e3473d7db6548405ce347ca7732832451
If there is somewhere else where you think a warning flag would be useful,
let me know!
Generally speaking, we want to be able to cope with
Perhaps here: http://ceph.com/releases/v0-80-firefly-released/
Thanks
On 14 July 2014 18:18, Sage Weil wrote:
> I've added some additional notes/warnings to the upgrade and release
> notes:
>
>
> https://github.com/ceph/ceph/commit/fc597e5e3473d7db6548405ce347ca7732832451
>
> If there is somewh
There's no reason you can't create another set of zones that have a master
in us-west, call it us-west-2. Then users that need low latency to us-west
write to http://us-west-2.cluster/, and users that need low latency to
us-east write to http://us-east-1.cluster/.
In general, you want your replic
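As a rough illustration only (us-west-2.cluster and us-east-1.cluster are just
the hypothetical endpoint names used above), each group of users simply points
its S3 client at the nearest zone's gateway, e.g. in ~/.s3cfg for the us-west
users:

    # .s3cfg fragment for clients that should talk to the us-west-2 master zone
    host_base = us-west-2.cluster
    host_bucket = %(bucket)s.us-west-2.cluster

The us-east users would use us-east-1.cluster instead, and inter-zone
replication carries the data between the two.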
Hello.
Two questions about ceph pg dump:
1) What is the relationship between "degraded objects" count and "degraded
total" count? What exactly is "degraded total" the count of?
2) What is the relationship between "unfound objects" count and "unfound
total" count? What exactly is "degr
Hi,
which values are changed by "ceph osd crush tunables optimal"?
Is it perhaps possible to change some of the parameters on the weekends before
the upgrade runs, to have more time?
(That depends on whether the parameters are already available in 0.72...)
The warning says it can take days... we have a cluster w
On Mon, 14 Jul 2014, Udo Lembke wrote:
> Hi,
> which values are all changed with "ceph osd crush tunables optimal"?
There are some brand new crush tunables that fix.. I don't even remember
off hand.
In general, you probably want to stay away from 'optimal' unless this is a
fresh cluster and all
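For anyone who wants to see what a tunables profile would actually change before
committing to it, one approach (a sketch, not specific advice for this cluster)
is to dump and decompile the current CRUSH map and look at its tunable lines:

    # fetch and decompile the in-use crush map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # the tunables appear near the top as "tunable <name> <value>" lines
    grep '^tunable' crush.txt

The named profiles (e.g. legacy, bobtail, firefly, optimal) can also be applied
with "ceph osd crush tunables <profile>"; whichever way it is done, expect data
movement.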
I've tracked this back to the following commit:
commit fa8b9b971453e960062a7e677bb09a7849e59744
Author: Greg Farnum
Date: Fri Apr 2 13:14:12 2010 -0700
rgw: convert + to space in url_decode
diff --git a/src/rgw/rgw_common.cc b/src/rgw/rgw_common.cc
index 6330fe2..da9debc 100644
--- a/src
Hi,
>>But in reality (yum update or by using ceph-deploy install nodename) -
>>the package manager does restart ALL ceph services on that node by its own...
Debian packages don't restart Ceph services on package update; maybe it's a bug
in the rpm packaging?
----- Original Mail -----
From: "
Udo, I had all VMs completely inoperable - so don't set "optimal" for
now...
On 14 July 2014 20:48, Udo Lembke wrote:
> Hi,
> which values are all changed with "ceph osd crush tunables optimal"?
>
> Is it perhaps possible to change some parameter the weekends before the
> upgrade is running,
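A hedged note for anyone who hits the same problem: the tunables can be switched
back to an older profile, but that in itself triggers another large rebalance,
so it is not a free undo:

    # roll the tunables back to an older profile (this also moves data)
    ceph osd crush tunables bobtail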
Hi Guys,
I am using Ceph Firefly on a three-node cluster (all of them are
storage servers AND monitors) with RHEL 6.5. I've just updated Ceph from
version 0.80.2 to 0.80.3.
But during the update something strange happened. After having updated node2, I
stopped ceph (osd and mon) on node 1.
On Mon, 14 Jul 2014, Quenten Grasso wrote:
>
> Hi All,
>
> Just a quick question for the list, has anyone seen a significant increase
> in ram usage since firefly? I upgraded from 0.72.2 to 80.3 now all of my
> Ceph servers are using about double the ram they used to.
One significant change is t
This is just the output if it fails to connect to the first monitor it
tries (in this case, the one that isn't running). If you let it run
for a while it should time out after 15 seconds or something, pick a
different monitor, and succeed.
-Greg
Software Engineer #42 @ http://inktank.com | http://c
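For completeness, and assuming a fairly standard setup: the client can only fall
back like this if it knows about more than one monitor, i.e. ceph.conf (or the
-m option) should list all of them. A minimal sketch with placeholder names and
addresses:

    [global]
    # list every monitor so a client can try the next one if the first is down
    mon initial members = mon-a, mon-b, mon-c
    mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3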
On 07/14/2014 05:37 AM, Quenten Grasso wrote:
Hi All,
Just a quick question for the list, has anyone seen a significant
increase in ram usage since firefly? I upgraded from 0.72.2 to 80.3 now
all of my Ceph servers are using about double the ram they used to.
Can you tell me a bit about how yo
Hi Greg,
Thanks for your fast answer.
At first I immediately killed (CTRL+C) that process, but the second try was then
successful (on the second try I waited a few seconds). I was just irritated and
thought I should report it... It would be more helpful if a message like
"connection ti
Hi Guys,
me again.
In the documentation about adding/removing monitors (see here:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ ) I've found
something which should be written in a more noob-safe way.
I've used Ceph version 0.80.2 on three RHEL 6.5 nodes.
At "Adding monitors" in Step 3
Hi,
We used to have 0.80.1 on Ubuntu 12.04. We recently upgraded to 14.04.
However, the mon process doesn't start, while the OSDs are OK. The ceph-mon log
shows:
2014-07-14 12:04:17.034407 7fa86ddcb700 -1
mon.storage1@0(electing).elector(147)
Shutting down because I do not support required monitor fe
On 07/14/2014 11:51 PM, Richard Zheng wrote:
Hi,
We used to have 0.80.1 on Ubuntu 12.04. We recently upgraded to 14.04.
However the mon process doesn't start while OSDs are ok. The ceph-mon
log shows,
2014-07-14 12:04:17.034407 7fa86ddcb700 -1
mon.storage1@0(electing).elector(147) Shutting do
I have installed and tested Ceph on VMs before, so I know a bit about
configuration and installation.
Now I want to install Ceph on physical PC servers and do some tests; I
think the performance will be better than on VMs. I have some questions about
how to plan the Ceph storage architecture.
>>> What do I have a
Thank you very much, Craig, for your clear explanation in answer to my questions.
Now I am very clear about the concept of pools in Ceph.
But I have two small questions:
1. How does the deployer decide that a particular type of information will be
stored in a particular pool? Are there any settings
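One way to picture it, while waiting for a fuller answer: clients name the pool
explicitly in every operation, so the deployer decides simply by pointing each
service or client at the right pool. A small sketch with "volumes" as a made-up
pool name:

    # create a pool and write to it explicitly
    ceph osd pool create volumes 128
    rbd create --pool volumes --size 10240 test-image
    rados -p volumes put test-object /etc/hosts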
Hi, everyone!
I am testing RGW GET object ops. When I use 100 threads to get one and the same
object, I find that performance is very good; the mean response time is 0.1s.
But when I use 150 threads to get the same object, performance is very bad; the
mean response time is 1s.
And I observe the osd log and rgw
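One hedged guess for anyone reproducing this: radosgw's default
"rgw thread pool size" is 100, so 150 concurrent requests against a single
gateway will queue behind the thread pool. Raising it is a cheap thing to try
(the section name below is a placeholder for the local gateway instance):

    [client.radosgw.gateway]
    # default is 100; raise it above the intended client concurrency
    rgw thread pool size = 300

This is an assumption about the test setup, not a diagnosis.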