Hi Sage,
> What kernel version is this? It looks like an old kernel bug.
> Generally speaking you should be using 3.4 at the very least if you
> are using the kernel client. sage
This is the standard Wheezy kernel, i.e. 3.2.0-4-amd64
While I can recompile the kernel, I don't think that would be manageable.
# ceph -v
ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
# rpm -qa | grep ceph
ceph-0.61.2-0.el6.x86_64
ceph-radosgw-0.61.2-0.el6.x86_64
ceph-deploy-0.1-31.g7c5f29c.noarch
ceph-release-1-0.el6.noarch
libcephfs1-0.61.2-0.el6.x86_64
thanks
Chris
-Original Message-
From: Sam
Hi,
is it ok to upgrade from 0.66 to 0.67 by just running 'apt-get upgrade'
and rebooting the nodes one by one ?
Thanks.
Regards,
Markus
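(Not authoritative, but a minimal sketch of what such a per-node rolling
upgrade could look like on Debian/Ubuntu; the sysvinit "service ceph" call
is an assumption about the init setup, and the release notes should be
checked first:)

  # one node at a time, waiting for the cluster to settle in between
  apt-get update && apt-get upgrade
  service ceph restart    # restart the Ceph daemons (a full reboot is usually not needed)
  ceph -s                 # wait for HEALTH_OK before moving to the next node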
On 14.08.2013 07:32, Sage Weil wrote:
Another three months have gone by, and the next stable release of Ceph is
ready: Dumpling! Thank you to everyone who has contributed to this release!
> Hi,
> is it ok to upgrade from 0.66 to 0.67 by just running 'apt-get upgrade'
> and rebooting the nodes one by one ?
Is a full reboot required?
James
On Wed, Aug 14, 2013 at 11:35 AM, Markus Goldberg
wrote:
> is it ok to upgrade from 0.66 to 0.67 by just running 'apt-get upgrade' and
> rebooting the nodes one by one ?
Did you see http://ceph.com/docs/master/release-notes/#upgrading-from-v0-66 ??
On 2013-08-14 07:32, Sage Weil wrote:
Another three months have gone by, and the next stable release of Ceph is
ready: Dumpling! Thank you to everyone who has contributed to this
release!
This release focuses on a few major themes since v0.61 (Cuttlefish):
* rgw: multi-site, multi-datacenter
>>> It looks like at some point the filesystem is not passed to the options.
>>> Would you mind running the `ceph-disk-prepare` command again but with
>>> the --verbose flag?
>>> I think that from the output above (correct me if I am mistaken) that would
>>> be something like:
>>> ceph-disk-prep
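(Purely for illustration, one guess at what that full invocation might look
like; the xfs fs-type and the /dev/sdb data / /dev/sdc journal devices are
placeholders, not values from this thread:)

  ceph-disk-prepare --verbose --fs-type xfs /dev/sdb /dev/sdc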
On 2013-08-13 23:01, Nulik Nol wrote:
The server will be developed in C, the client code in HTML/JavaScript, and
a binary client (standalone app) in C++.
So, my question is: how would you recommend I design the backend?
There are already established protocols and open source implementations
of the
Any suggestions for upgrading CentOS/RHEL? The yum repos don't appear to
have been updated yet.
I thought maybe with the "improved support for Red Hat platforms" that
would be the easy way of going about it.
On Wed, Aug 14, 2013 at 5:08 AM, wrote:
> On 2013-08-14 07:32, Sage Weil wrote:
>
>> A
http://ceph.com/rpm-dumpling/el6/x86_64/
--
Dan van der Ster
CERN IT-DSS
On Wednesday, August 14, 2013 at 4:17 PM, Kyle Hutson wrote:
> Any suggestions for upgrading CentOS/RHEL? The yum repos don't appear to have
> been updated yet.
>
> I thought maybe with the "improved support for Red Hat platforms" that
> would be the easy way of going about it.
Ah, didn't realize the repos were version-specific. Thanks Dan!
On Wed, Aug 14, 2013 at 9:20 AM, Dan van der Ster wrote:
> http://ceph.com/rpm-dumpling/el6/x86_64/
>
>
> --
> Dan van der Ster
> CERN IT-DSS
>
>
> On Wednesday, August 14, 2013 at 4:17 PM, Kyle Hutson wrote:
>
> > Any suggestions
On Wed, Aug 14, 2013 at 7:41 AM, Pavel Timoschenkov <
pa...@bayonetteas.onmicrosoft.com> wrote:
> >>> It looks like at some point the filesystem is not passed to the
> >>> options. Would you mind running the `ceph-disk-prepare` command again
> >>> but with the --verbose flag?
> >>> I t
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk
On Wed, Aug 14, 2013 at 7:41 AM, Pavel Timoschenkov
mailto:pa...@bayonet
Hello List,
I am attempting to build a ceph cluster on RHEL6 machines. Everything
seems to work until I get to the step of creating new monitors with
ceph-deploy. It seems to work, but when I get to the gatherkeys step, then
it displays messages about not being able to get the various bootstrap keys.
On Wednesday, August 14, 2013, wrote:
>
> Hi Sage,
>
> I just upgraded and everything went quite smoothly with osds, mons and
> mds, good work guys! :)
>
> The only problem I have ran into is with radosgw. It is unable to start
> after the upgrade with the following message:
>
> 2013-08-14 11:57:25
On Wed, Aug 14, 2013 at 10:49 AM, Jim Summers wrote:
>
> Hello List,
>
> I am attempting to build a ceph cluster on RHEL6 machines. Everything
> seems to work until I get to the step of creating new monitors with
> ceph-deploy. It seems to work, but when I get to the gatherkeys step, then
> it
It turned out that I had initially listed four machines as part of my
cluster, thinking that two would have mons and all four would be osds. I
noticed in the mon log file that it was not able to communicate with two of
the machines, so I simply made them mons also and then the keys were
generated. I
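(For anyone hitting the same thing, a rough sketch of the monitor/gatherkeys
sequence being described; the hostnames mon1/mon2/mon3 are placeholders:)

  ceph-deploy new mon1 mon2 mon3          # write the initial ceph.conf listing the monitors
  ceph-deploy mon create mon1 mon2 mon3   # create and start the monitors
  ceph-deploy gatherkeys mon1             # fetch the admin and bootstrap keyrings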
On Wed, Aug 14, 2013 at 10:47 AM, Pavel Timoschenkov <
pa...@bayonetteas.onmicrosoft.com> wrote:
>
> *From:* Alfredo Deza [mailto:alfredo.d...@inktank.com]
> *Sent:* Wednesday, August 14, 2013 5:41 PM
> *To:* Pavel Timoschenkov
> *Cc:* Samuel Just; ceph-us...@ceph.com
>
> *Subje
Great, that fixed the issue of setting up the monitor, thanks. Now, I've got
the monitor up and running in the server (target) machine. But during the
process of setting up the OSD, I'm stuck with the following issue. Please
advise. (I did create the directory in the target machine before attemp
There are version-specific repos, but you shouldn't need them if you want
the latest.
In fact, http://ceph.com/rpm/ is simply a link to
http://ceph.com/rpm-dumpling
Ian R. Colle
Director of Engineering
Inktank
Cell: +1.303.601.7713
Email: i...@inktank.com
Delivering the Future of Storage
Try restarting the two osd processes with debug osd = 20, debug ms =
1, debug filestore = 20. Restarting the osds may clear the problem,
but if it recurs, the logs should help explain what's going on.
-Sam
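(A minimal sketch of how that could be done; the [osd] section placement and
the osd ids below are assumptions:)

  # in ceph.conf on the affected node
  [osd]
      debug osd = 20
      debug ms = 1
      debug filestore = 20

  # then restart the two osds (sysvinit-style; the ids are placeholders)
  service ceph restart osd.3
  service ceph restart osd.7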
On Wed, Aug 14, 2013 at 12:17 AM, Jens-Christian Fischer
wrote:
> On 13.08.2013, at 21:09,
Thanks for that bit, too, Ian.
For what it's worth, I updated /etc/yum.repos.d/ceph.repo , installed the
latest version (from cuttlefish), restarted (monitors first, then
everything else) and everything looks great.
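(In case it helps others, a sketch of what the updated repo stanza could look
like; the section name and gpg handling are assumptions, only the baseurl
comes from this thread:)

  [ceph]
  name=Ceph packages for el6
  baseurl=http://ceph.com/rpm-dumpling/el6/x86_64/
  enabled=1
  # gpg checking against the Ceph release key can be enabled here if preferred
  gpgcheck=0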
On Wed, Aug 14, 2013 at 1:28 PM, Ian Colle wrote:
> There are version specific
Hello All,
Just starting out with ceph and wanted to make sure that ceph will do a
couple of things.
1. Has the ability to keep a cephfs available to users even if one of the
OSD servers has to be rebooted or whatever.
2. It is possible to keep adding disks to build one large storage /
cephfs.
Would it be possible to generate rpms for the latest OpenSuSE-12.3?
Regards,
Mikhail
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ian Colle
Sent: Wednesday, August 14, 2013 2:29 PM
To: Kyle Hutson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph
Hello All,
I just re-installed the ceph-release package on my RHEL system in an effort
to get dumpling installed.
After doing that I can not yum install ceph-deploy. Then I yum installed
ceph but still no ceph-deploy?
Ideas?
Thanks
Hello Jim,
On 14/08/13 13:10, Jim Summers wrote:
Hello All,
Just starting out with ceph and wanted to make sure that ceph will do a
couple of things.
1. Has the ability to keep a cephfs available to users even if one of
the OSD servers has to be rebooted or whatever.
Yes. If the pool is replicated
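(To make that concrete, a small sketch using the default 'data' pool; the
pool name and the value 3 are only examples:)

  ceph osd pool get data size    # current number of replicas
  ceph osd pool set data size 3  # keep three copies so one OSD host can be down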
On Thu, Aug 1, 2013 at 9:57 AM, Jeff Moskow wrote:
> Greg,
>
> Thanks for the hints. I looked through the logs and found OSD's with
> RETRY's. I marked those "out" (marked in orange) and let ceph rebalance.
> Then I ran the bench command.
> I now have many more errors than before :-(.
>
> he
Yep. I don't remember for sure but I think you may need to use the
ceph CLI to specify changes to these parameters, though — the config
file options will only apply to the initial creation of the OSD map.
("ceph pg set_nearfull_ratio 0.88" etc)
-Greg
Software Engineer #42 @ http://inktank.com | htt
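(For reference, a sketch of both ways of doing it; the 0.88/0.95 values are
just examples, and the config option names below are my assumption about
which settings apply at initial OSD map creation:)

  # at runtime, via the CLI as described above
  ceph pg set_nearfull_ratio 0.88
  ceph pg set_full_ratio 0.95

  # in ceph.conf, only applied when the OSD map is first created
  [mon]
      mon osd nearfull ratio = 0.88
      mon osd full ratio = 0.95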
Hello Everyone,
I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
(also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster
showing healthy but suddenly I can't upload images into Glance anymore. The
upload fails and glance-api throws an error:
2013-0
On 08/14/2013 02:22 PM, Michael Morgan wrote:
Hello Everyone,
I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
(also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster
showing healthy but suddenly I can't upload images into Glance anymore. The
upl
Yes that works! Thanks!
- WP
On Thu, Aug 15, 2013 at 5:01 AM, Gregory Farnum wrote:
> Yep. I don't remember for sure but I think you may need to use the
> ceph CLI to specify changes to these parameters, though — the config
> file options will only apply to the initial creation of the OSD map.
Sage et al,
This is an exciting release but I must say I'm a bit confused about some of the
new rgw details.
Questions:
1) I'd like to understand how regions work. I assume that's how you get
multi-site, multi-datacenter support working but must they be part of the same
ceph cluster still?
Doing that could paper over some other issues, but you certainly
shouldn't need every node in the cluster to be a monitor. If you could
be a bit clearer about what steps you took in both cases maybe
somebody can figure it out, though. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://cep
Dear Mr.:
Hi! After I deployed a Cuttlefish cluster on several nodes, no other daemons
were found on the monitor with "sudo initctl list | grep ceph". As the content
below shows, I can only find the monitor daemon process.
ceph-osd-all stop/waiting
ceph-mds-all-starter stop/waiting
ceph-mds-all stop/waiting
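(Assuming the osd daemons really were prepared on that node, a sketch of
starting the upstart jobs by hand; the id value is a placeholder:)

  sudo start ceph-osd-all           # start every osd configured on this node
  sudo start ceph-osd id=0          # or just one osd
  sudo initctl list | grep ceph     # re-check what is running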
Hi lists,
in this release I see that the ceph command is not compatible with
Python 3. The changes were not all trivial so I gave up, but for those
using Gentoo, I made my ceph git repository available here with an
ebuild that forces the Python version to 2.6 or 2.7:
git clone https://git.i