Hi,
I'm running a 3-node cluster with 6 OSDs in each node. I'm using two
types of pools, size 3 and size 2, both with min_size 1, with the node being
the failure domain. I've stopped every OSD in a single node to perform some
maintenance, which left the cluster in a degraded but operational state.
For
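For reference, a minimal sketch (not from the original message) of the pool settings described above, with hypothetical pool names, plus the flag commonly set before planned node maintenance:
  ceph osd pool set pool-a size 3
  ceph osd pool set pool-a min_size 1
  ceph osd pool set pool-b size 2
  ceph osd pool set pool-b min_size 1
  ceph osd set noout     # prevent the stopped OSDs from being marked out (and data from rebalancing)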
Hi,
I know that Loic Dachary has been working on backporting new features to
giant,
I see that 0.87.1 has been tagged in git too:
Here is the difference:
https://github.com/ceph/ceph/compare/v0.87...v0.87.1
Loic, any announcement / release notes yet?
- Original Message -
From: "Lindsay M
On 02/26/2015 03:24 PM, Kyle Hutson wrote:
Just did it. Thanks for suggesting it.
No, definitely thank you. Much appreciated.
On Wed, Feb 25, 2015 at 5:59 PM, Brad Hubbard mailto:bhubb...@redhat.com>> wrote:
On 02/26/2015 09:05 AM, Kyle Hutson wrote:
Thank you Thomas. You at le
Just did it. Thanks for suggesting it.
On Wed, Feb 25, 2015 at 5:59 PM, Brad Hubbard wrote:
> On 02/26/2015 09:05 AM, Kyle Hutson wrote:
>
>> Thank you Thomas. You at least made me look in the right spot. Their
>> long-form is showing what to do for a mon, not an osd.
>>
>> At the bottom of step
The Ceph Debian Giant repo (http://ceph.com/debian-giant) seems to have had
an update from 0.87 to 0.87-1 on 24 Feb.
Are there release notes anywhere on what changed, etc.? Is there an upgrade
procedure?
thanks,
--
Lindsay
Not sure, Pankaj. I always do this after deploying ceph, as I spent a long time
on this earlier. I think it is mentioned in the docs, but I may have overlooked
it at the time, so now it is imprinted heavily. I have been using multiple
networks as well, but had to do it in both cases.
Thx
-Original Message-
Hi Alan,
Thanks. Worked like magic.
Why did this happen though? I have deployed on the same machine using same
ceph-deploy and it was fine.
Not sure if anything is different this time, except my network, which shouldn’t
affect this.
Thanks
Pankaj
-Original Message-
From: Alan Johnson [
Try sudo chmod +r /etc/ceph/ceph.client.admin.keyring for the error below?
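A minimal sketch of that fix, followed by a quick check of the resulting permissions:
  sudo chmod +r /etc/ceph/ceph.client.admin.keyring   # make the admin keyring readable
  ls -l /etc/ceph/ceph.client.admin.keyring           # confirm the mode change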
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg,
Pankaj
Sent: Wednesday, February 25, 2015 4:04 PM
To: Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph
Hi folks,
I am curious about how RBD cache works, whether it caches and writes back
entire objects. For example, if my VM images are stored with order 23 (8MB
blocks), would a 64MB rbd cache only be able to cache 8 objects at a time?
Or does it work in a more granular fashion? Also, when a sync/fl
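For context, a minimal sketch of the scenario described above (the settings and image name are illustrative, not from the original message):
  # ceph.conf, client side
  [client]
      rbd cache = true
      rbd cache size = 67108864      # 64MB cache, as in the example above
  # create an image whose objects are 8MB (order 23)
  rbd create myimage --size 10240 --order 23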
- Original Message -
> From: "Gregory Farnum"
> To: "Tom Deneau"
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, February 25, 2015 3:20:07 PM
> Subject: Re: [ceph-users] mixed ceph versions
>
> On Wed, Feb 25, 2015 at 3:11 PM, Deneau, Tom wrote:
> > I need to set up a cluster where
I figured it out... at least the first hurdle.
I have 2 networks, 10.18.240.x and 192.168.240.xx.
I was specifying different public and cluster addresses. Somehow it doesn’t
like it.
Maybe the issue really is the ceph-deploy is old. I am on ARM64 and this is the
latest I have for Ubuntu.
After I g
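For what it's worth, a minimal sketch of how the two networks would typically be declared in ceph.conf, assuming /24 masks (the exact prefixes above are partially masked):
  [global]
      public network  = 10.18.240.0/24
      cluster network = 192.168.240.0/24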
On 02/26/2015 09:05 AM, Kyle Hutson wrote:
Thank you Thomas. You at least made me look in the right spot. Their long-form
is showing what to do for a mon, not an osd.
At the bottom of step 11, instead of
sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/sysvinit
It should read
sudo touch
Hi Pankaj,
I can't say that it will fix the issue, but the first thing I would
encourage is to use the latest ceph-deploy.
You are using 1.4.0, which is quite old. The latest is 1.5.21.
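One way to pick it up, assuming ceph-deploy was installed via pip (apt/yum installs would be upgraded through the package manager instead):
  pip install --upgrade ceph-deploy
  ceph-deploy --version      # should now report 1.5.x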
- Travis
On Wed, Feb 25, 2015 at 3:38 PM, Garg, Pankaj
wrote:
> Hi,
>
> I had a successful ceph cluster th
Hi,
I had a successful ceph cluster that I am rebuilding. I have completely
uninstalled ceph and any remnants and directories and config files.
While setting up the new cluster, I follow the Ceph-deploy documentation as
described before. I seem to get an error now (tried many times):
ceph-deplo
On Wed, Feb 25, 2015 at 3:11 PM, Deneau, Tom wrote:
> I need to set up a cluster where the rados client (for running rados
> bench) may be on a different architecture and hence running a different
> ceph version from the osd/mon nodes. Is there a list of which ceph
> versions work together for a
I need to set up a cluster where the rados client (for running rados
bench) may be on a different architecture and hence running a different
ceph version from the osd/mon nodes. Is there a list of which ceph
versions work together for a situation like this?
-- Tom
Thank you Thomas. You at least made me look in the right spot. Their
long-form is showing what to do for a mon, not an osd.
At the bottom of step 11, instead of
sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/sysvinit
It should read
sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sys
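The line above is truncated; by analogy with the mon example, the intended command is presumably:
  sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit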
Here's the doc I used to get the info:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
On Feb 25, 2015 5:55 PM, "Thomas Foster" wrote:
> I am using the long form and have it working. The one thing that I saw
> was to change from osd_host to just host. See if that works.
> O
I am using the long form and have it working. The one thing that I saw was
to change from osd_host to just host. See if that works.
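Put differently, a minimal sketch of the ceph.conf change being suggested (OSD id and hostname are placeholders):
  [osd.16]
      # was: osd_host = mynode
      host = mynode        # use plain "host" instead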
On Feb 25, 2015 5:44 PM, "Kyle Hutson" wrote:
> I just tried it, and that does indeed get the OSD to start.
>
> However, it doesn't add it to the appropriate place
I just tried it, and that does indeed get the OSD to start.
However, it doesn't add it to the appropriate place for it to survive a
reboot. In my case, running 'service ceph status osd.16' still results in
the same line I posted above.
There's still something broken such that 'ceph-disk activat
It'd be nice to see a standard/recommended LB and HA approach for RGW
with supporting documentation too.
On 26 February 2015 at 06:31, Sage Weil wrote:
> Hey,
>
> We are considering switching to civetweb (the embedded/standalone rgw web
> server) as the primary supported RGW frontend instead of t
So I issue it twice? e.g.
ceph-osd -i X --mkfs --mkkey
...other commands...
ceph-osd -i X
?
On Wed, Feb 25, 2015 at 4:03 PM, Robert LeBlanc
wrote:
> Step #6 in
> http://ceph.com/docs/master/install/manual-deployment/#long-form
> only sets up the file structure for the OSD; it doesn't start the
Step #6 in http://ceph.com/docs/master/install/manual-deployment/#long-form
only sets up the file structure for the OSD; it doesn't start the long
running process.
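In other words, a minimal sketch of the distinction, with a placeholder OSD id:
  ceph-osd -i {osd-num} --mkfs --mkkey    # step 6: only prepares the data directory and key
  ceph-osd -i {osd-num}                   # run again without --mkfs to start the long-running daemon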
On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson wrote:
> But I already issued that command (back in step 6).
>
> The interesting part is
I tried finding an answer to this on Google, but couldn't find it.
Since BTRFS can parallelize the journal with the write, does it make
sense to have the journal on the SSD (because then we are forcing two
writes instead of one)?
Our plan is to have a caching tier of SSDs in front of our rotational
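For reference, a minimal sketch of pointing an OSD's journal at the SSD in ceph.conf (device path is hypothetical):
  [osd.0]
      osd journal = /dev/ssd0p1      # journal on the SSD rather than the btrfs data disk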
But I already issued that command (back in step 6).
The interesting part is that "ceph-disk activate" apparently does it
correctly. Even after reboot, the services start as they should.
On Wed, Feb 25, 2015 at 3:54 PM, Robert LeBlanc
wrote:
> I think that your problem lies with systemd (even th
Cool, I'll see if we have some cycles to look at it.
On Wed, Feb 25, 2015 at 2:49 PM, Sage Weil wrote:
> On Wed, 25 Feb 2015, Robert LeBlanc wrote:
>> We tried to get radosgw working with Apache + mod_fastcgi, but due to
>> the changes in radosgw, Apache, mod_*cgi, etc. and the documentation
>> l
I think that your problem lies with systemd (even though you are using
SysV syntax, systemd is really doing the work). Systemd does not like
multiple arguments and I think this is why it is failing. There is
supposed to be some work done to get systemd working ok, but I think
it has the limitation
On Wed, 25 Feb 2015, Robert LeBlanc wrote:
> We tried to get radosgw working with Apache + mod_fastcgi, but due to
> the changes in radosgw, Apache, mod_*cgi, etc. and the documentation
> lagging and not having a lot of time to devote to it, we abandoned it.
> Where is the documentation for civetwe
We tried to get radosgw working with Apache + mod_fastcgi, but due to
the changes in radosgw, Apache, mod_*cgi, etc. and the documentation
lagging and not having a lot of time to devote to it, we abandoned it.
Where is the documentation for civetweb? If it is appliance-like and
easy to set-up, we w
I'm having a similar issue.
I'm following http://ceph.com/docs/master/install/manual-deployment/ to a T.
I have OSDs on the same host deployed with the short-form and they work
fine. I am trying to deploy some more via the long form (because I want
them to appear in a different location in the cr
Hey,
We are considering switching to civetweb (the embedded/standalone rgw web
server) as the primary supported RGW frontend instead of the current
apache + mod-fastcgi or mod-proxy-fcgi approach. "Supported" here means
both the primary platform the upstream development focuses on and what the
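For anyone wanting to experiment, a minimal sketch of how the civetweb frontend is typically enabled in the gateway's ceph.conf section (section name and port are illustrative; 7480 is the usual default):
  [client.radosgw.gateway]
      rgw frontends = "civetweb port=7480"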
On 25/02/2015 17:31, shylesh kumar wrote:
I am trying to use ceph-deploy to deploy my cluster, but it tries to
download the rpms, which I don't want; what I want is for ceph-deploy
to use my local git repo.
`ceph-deploy install` is specifically for configuring package repos and
installi
IIRC these global values for total size and available are just summations
of the (programmatic equivalent of) running df on each machine locally,
but the used values are based on actual space used by each PG. That has
occasionally produced some odd results depending on how you've configured
your
Also, did you successfully start your monitor(s), and define/create the
OSDs within the Ceph cluster itself?
There are several steps to creating a Ceph cluster manually. I'm unsure if
you have done the steps to actually create and register the OSDs with the
cluster.
- Travis
On Wed, Feb 25, 20
Check firewall rules and selinux. It sometimes is a pain in the ... :)
On 25 Feb 2015 at 01:46, "Barclay Jameson" wrote:
> I have tried to install ceph using ceph-deploy but sgdisk seems to
> have too many issues so I did a manual install. After mkfs.btrfs on
> the disks and journals and mounted th
We use ceph-disk without any issues on CentOS7. If you want to do a
manual deployment, verify you aren't missing any steps in
http://ceph.com/docs/master/install/manual-deployment/#long-form.
On Tue, Feb 24, 2015 at 5:46 PM, Barclay Jameson
wrote:
> I have tried to install ceph using ceph-deploy
Hi All,
I am trying to use ceph-deploy to deploy my cluster, but it tries to
download the rpms, which I don't want; what I want is for ceph-deploy
to use my local git repo.
I have compiled the code and did "make install", but starting the monitor from
ceph-deploy fails with
Traceback (most r
Yes. :)
-Greg
On Wed, Feb 25, 2015 at 8:33 AM Jordan A Eliseo wrote:
> Hi all,
>
> Quick question, does the CRUSH map always strive for proportionality when
> rebalancing a cluster? i.e. Say I have 8 OSDs (with a two node cluster - 4
> OSDs per host - at ~90% utilization (which I know is bad, this
Thank you John.
I've solved the issue.
# ceph mds dump
dumped mdsmap epoch 128
epoch 128
flags 0
created 2015-02-24 15:55:10.631958
modified    2015-02-25 17:22:20.946910
tableserver 0
root    0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure
Hi all,
Quick question, does the CRUSH map always strive for proportionality when
rebalancing a cluster? i.e. Say I have 8 OSDs (with a two node cluster - 4
OSDs per host - at ~90% utilization (which I know is bad, this is just
hypothetical). Now if I add a total of 8 OSDs - 4 new OSDs for each h
On 25/02/2015 15:23, ceph-users wrote:
# ceph mds rm 23432 mds.'192.168.0.1'
Traceback (most recent call last):
File "/bin/ceph", line 862, in
sys.exit(main())
File "/bin/ceph", line 805, in main
sigdict, inbuf, verbose)
File "/bin/ceph", line 405, in new_style_command
valid_d
Ok John.
Recap:
If I have this situation:
# ceph mds dump
dumped mdsmap epoch 84
epoch 84
flags 0
created 2015-02-24 15:55:10.631958
modified    2015-02-25 16:18:23.019144
tableserver 0
root    0
session_timeout 60
session_autoclose 300
max_file_size 1099511627776
last_failure
Can't figure out why this can happen:
Got a HEALTH_OK cluster. ceph version 0.87, all nodes are Debian Wheezy
with a stable kernel 3.2.65-1+deb7u1. ceph df shows me this:
$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    242T     221T      8519G        3.43
POOLS:
NA
On 25/02/2015 14:21, ceph-users wrote:
Hi John,
Question: how do I retrieve the GID number?
Ah, I thought I had mentioned that in the previous email, but now I
realise I left that important detail out! Here's what I meant to write:
When you do "ceph mds dump", if there are any up daemons, yo
Hi Stéphane,
don't know what I did, but I can't reproduce this faulty behaviour any more.
I will purge my complete Cluster and try it again.
I'll tell you, when there are any news.
Hi John,
Question: how do I retrieve the GID number?
Thank you,
Gian
On 24/02/2015 09:58, ceph-users wrote:
Hi all,
I've set up a ceph cluster using this playbook:
https://github.com/ceph/ceph-ansible
I've configured in my hosts list
[mdss]
hostname1
hostname2
I now need to remove thi
Hello!
I am trying to execute these commands:
rbd create web-services/rbd1 --size 4096 -c /etc/ceph/ceph.conf -m
192.168.10.211:6789 -k /etc/ceph/ceph.client.admin.keyring
or
rados -p web-services ls
But when the command is invoked it doesn't return anything and just
hangs forever.
I'm beginner
Hi all
Context : Firefly 0.80.8, Ubuntu 14.04 LTS
I tried to change the debug level of a rados gateway "live" using: ceph daemon
/var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok config set debug_rgw
20. The response is { "success": ""} but it has no effect.
Is there another para
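One way to double-check whether the setting actually took effect (a sketch, assuming the same admin socket as above):
  ceph daemon /var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok config show | grep debug_rgw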
Hi all,
Context : Firefly 0.80.8, Ubuntu 14.04 LTS, Lab cluster
Yesterday, I successfully deleted an s3 bucket "Bucket001ghis" after removing
the contents that were in it.
Today, as I was browsing the radosgw system metadata, I discovered a
difference between the bucket metadata and the b
Thanks Greg
After seeing some recommendations I found in another thread, my impatience got
the better of me, and I've started the process again, but there is some logic, I
promise :-)
I've copied the process from Michael Kidd, I believe, and it goes along the
lines of:
setting noup, noin, noscru
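For reference, a sketch of how those flags are set and later cleared (only the flags named above; the rest of the quoted procedure is truncated here):
  ceph osd set noup
  ceph osd set noin
  ceph osd set noscrub
  # ...do the maintenance/restart work...
  ceph osd unset noscrub
  ceph osd unset noin
  ceph osd unset noup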
On 24/02/2015 22:32, ceph-users wrote:
# ceph mds rm mds.-1.0
Invalid command: mds.-1.0 doesn't represent an int
mds rm : remove nonactive mds
Error EINVAL: invalid command
Any clue?
Thanks
Gian
See my previous message about use of "mds rm": you need to pass it a GID.
However, in this
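As a hedged sketch of what that looks like (placeholder GID; the exact dump format and argument list vary by version, and some releases also expect the daemon name):
  ceph mds dump        # the numeric GID appears on the per-daemon "up" lines
  ceph mds rm <gid>    # pass the GID rather than a name like mds.-1.0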
On Wed, 25 Feb 2015 16:24:24 +0530,
khyati joshi wrote:
> ,KEYBOARD IS NOT
> RESPONDING, OTHERWISE IT IS OK.
Obviously your keyboard is broken, you're writing in all caps.
Joking aside,
1) you're SCREAMING, which is rude and bad netiquette. At
this point you've probably been banned as rude/ob
Hi,
I would like to invite you to our next MeetUp in Berlin on March 23:
http://www.meetup.com/Ceph-Berlin/events/219958751/
Stephan Seitz will talk about HA-iSCSI with Ceph.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 0
I WANT TO DEPLOY CEPH INSIDE VIRTUALMACHINES USING VIRTUALBOX
.I HAVE INSTALLED CENTOS-5.11 USING VIRTUAL DISK IMAGE.
OS IS INSTALLED AND WHEN PROMPTED FOR LOCALHOST LOGINNAME, I ENTERED
ROOT AND TRIED ALSO WITH CENTOS.BUT WHEN PROMPTED FOR PASSWORD,
KEYBOARD DIDN'T WORKING. NOT ABLE TO TYPE ANY