Hey folks. Running RHEL7.1 with stock 3.10.0 kernel and trying to deploy
Infernalis. Haven't done this since Firefly but I used to know what I was
doing. My problem is "ceph-deploy new" and "ceph-deploy install" seem to go
well but "ceph-deploy mon create-initial" reliably fails when starting
Augh, never mind, firewall problem. Thanks anyway.
From: Gruher, Joseph R
Sent: Thursday, June 11, 2015 10:55 PM
To: ceph-users@lists.ceph.com
Cc: Gruher, Joseph R
Subject: MONs not forming quorum
Hi folks-
I'm trying to deploy 0.94.2 (Hammer) onto CentOS7. I used to be pretty good at
Hi folks-
I'm trying to deploy 0.94.2 (Hammer) onto CentOS7. I used to be pretty good at
this on Ubuntu but it has been a while. Anyway, my monitors are not forming
quorum, and I'm not sure why. They can definitely all ping each other and
such. Any thoughts on specific problems in the outpu
Hi all-
On this page:
http://ceph.com/dev-notes/updates-to-ceph-tgt-iscsi-support/
There's a mention (last comment) about booting off an RBD through PXE. I was
wondering if anyone here has done this, and how it worked out for you, and if
you might have a more detailed example of how to impleme
Aha – upgrade of kernel from 3.13 to 3.14 appears to have resolved the problem.
Thanks,
Joe
From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:48 AM
To: Ирек Фасихов; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?
Meant
osdmap e216: 18 osds: 18 up, 18 in
flags noscrub,nodeep-scrub
pgmap v202112: 2784 pgs, 10 pools, 1637 GB data, 427 kobjects
2439 GB used, 12643 GB / 15083 GB avail
2784 active+clean
From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:44 AM
To:
Hi folks-
Was this ever resolved? I’m not finding a resolution in the email chain,
apologies if I am missing it. I am experiencing this same problem. Cluster
works fine for object traffic, can’t seem to get rbd to work in 0.78. Worked
fine in 0.72.2 for me. Running Ubuntu 13.04 with 3.12 k
Actually, I have to revise this, Ceph _is_ freeing capacity, but very slowly,
roughly 150G every 5 minutes. Is that normal? I feel like capacity is
generally freed almost immediately when I've previously deleted pools.
Thanks!
-Joe
From: Gruher, Joseph R
Sent: Thursday, April 03, 2014
Hi all-
I am testing on Ceph 0.78 running on Ubuntu 13.04 with 3.13 kernel. I had two
replication pools and five erasure code pools. Cluster was getting full so I
deleted all the EC pools. However, Ceph is not freeing the capacity. Note
below there is only 1636G in the two pools but the glo
You should refer to
>
> https://ceph.com/docs/v0.78/dev/erasure-coded-pool/
>
>Your diagnostic of the problem seems correct :-)
>
>Cheers
>
>On 24/03/2014 21:01, Gruher, Joseph R wrote:> Hi Folks-
>>
>>
>>
>> Having a bit of trouble with EC setup
ns and the solution is to change the failure domains to OSDs instead of
the default of hosts?
2. If so, how would you make such a change for an erasure code pool /
ruleset, in the 0.78 branch?
Thanks!
-Joe
From: Gruher, Joseph R
Sent: Monday, March 24, 2014 1:01 PM
To: ceph-users@lists.c
Hi Folks-
Having a bit of trouble with EC setup on 0.78. Hoping someone can help me out.
I've got most of the pieces in place, I think I'm just having a problem with
the ruleset.
I am running 0.78:
ceph --version
ceph version 0.78 (f6c746c314d7b87b8419b6e584c94bfe4511dbd4)
I created a new ru
Great, thanks! I'll watch (hope) for an update later this week. Appreciate
the rapid response.
-Joe
From: Ian Colle [mailto:ian.co...@inktank.com]
Sent: Sunday, March 16, 2014 7:22 PM
To: Gruher, Joseph R; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] erasure coding testing
Joe,
Hey all-
Can anyone tell me, if I install the latest development release (looks like it
is 0.77) can I enable and test erasure coding? Or do I have to wait for the
actual Firefly release? I don't want to deploy anything for production,
basically I just want to do some lab testing to see what
>> Ultimately this seems to be an FIO issue. If I use "--iodepth X" or "--
>iodepth=X" on the FIO command line I always get queue depth 1. After
>switching to specifying "iodepth=X" in the body of the FIO workload file I do
>get the desired queue depth and I can immediately see performance is muc
>-Original Message-
>From: Gregory Farnum [mailto:g...@inktank.com]
>Sent: Tuesday, February 04, 2014 9:46 AM
>To: Gruher, Joseph R
>Cc: Mark Nelson; ceph-users@lists.ceph.com; Ilya Dryomov
>Subject: Re: [ceph-users] Low RBD Performance
>
>On Tue, Feb 4, 2014 at 9
03/2014 07:29 PM, Gruher, Joseph R wrote:
>> Hi folks-
>>
>> I'm having trouble demonstrating reasonable performance of RBDs. I'm
>> running Ceph 0.72.2 on Ubuntu 13.04 with the 3.12 kernel. I have four
>> dual-Xeon servers, each with 24GB RAM, and an Intel 320
Hi folks-
I'm having trouble demonstrating reasonable performance of RBDs. I'm running
Ceph 0.72.2 on Ubuntu 13.04 with the 3.12 kernel. I have four dual-Xeon
servers, each with 24GB RAM, and an Intel 320 SSD for journals and four WD 10K
RPM SAS drives for OSDs, all connected with an LSI 1078
Hi all-
I'm creating some scripted performance testing for my Ceph cluster. The part
relevant to my questions works like this:
1. Create some pools
2. Create and map some RBDs
3. Write-in the RBDs using DD or FIO
4. Run FIO testing on the RBDs (small block random and
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Gregory Farnum
>Sent: Thursday, December 19, 2013 7:20 AM
>To: Christian Balzer
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Failure probability with largish d
>I don't know how rbd works inside, but i think ceph rbd here returns zeros
>without real osd disk read if the block/sector of the rbd-disk is unused. That
>would explain the graph you see. You can try adding a second rbd image and
>not format/use it and benchmark this disk, then make a filesystem
>For ~$67 you get a mini-itx motherboard with a soldered on 17W dual core
>1.8GHz ivy-bridge based Celeron (supports SSE4.2 CRC32 instructions!).
>It has 2 standard dimm slots so no compromising on memory, on-board gigabit
>eithernet, 3 3Gb/s + 1 6Gb/s SATA, and a single PCIE slot for an additional
Hi Alfredo-
Have you looked at adding the ability to specify a proxy on the ceph-deploy
command line? Something like:
ceph-deploy install --proxy {http_proxy}
That would then need to run all the remote commands (rpm, curl, wget, etc) with
the proxy. Not sure how complex that would be
Those aren't really errors, when ceph-deploy runs commands on the host anything
that gets printed to stderr as a result is relayed back through ceph-deploy
with the [ERROR] tag. If you look at the content of the errors it just has the
output of the commands that were run in the step beforehand.
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Wednesday, November 20, 2013 7:17 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry
>
>On Mon,
>-Original Message-
>From: Gruher, Joseph R
>Sent: Tuesday, November 19, 2013 12:24 PM
>To: 'Wolfgang Hennerbichler'; Bernhard Glomm
>Cc: ceph-users@lists.ceph.com
>Subject: RE: [ceph-users] Size of RBD images
>
>So is there any size limit on RBD ima
So is there any size limit on RBD images? I had a failure this morning
mounting 1TB RBD. Deleting now (why does it take so long to delete if it was
never even mapped, much less written to?) and will retry with smaller images.
See output below. This is 0.72 on Ubuntu 13.04 with 3.12 kernel.
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Monday, November 18, 2013 6:34 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry
>
>I went ahead and c
Using ceph-deploy 1.3.2 with ceph 0.72.1. Ceph-deploy disk zap will fail and
exit with error, but then on retry will succeed. This is repeatable as I go
through each of the OSD disks in my cluster. See output below.
I am guessing the first attempt to run changes something about the initial
s
I didn't think you could specify the journal in this manner (just pointing
multiple OSDs on the same host all to journal /dev/sda). Don't you either need
to partition the SSD and point each SSD to a separate partition, or format and
mount the SSD and each OSD will use a unique file on the mount
>-Original Message-
>From: Dinu Vlad [mailto:dinuvla...@gmail.com]
>Sent: Thursday, November 07, 2013 10:37 AM
>To: ja...@peacon.co.uk; Gruher, Joseph R; ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph cluster performance
>
>I was under the same impression -
Is there any plan to implement some kind of QoS in Ceph? Say I want to provide
service level assurance to my OpenStack VMs and I might have to throttle
bandwidth to some to provide adequate bandwidth to others - is anything like
that planned for Ceph? Generally with regard to block storage (rb
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of ??
>Sent: Wednesday, November 06, 2013 10:04 PM
>To: ceph-users
>Subject: [ceph-users] please help me.problem with my ceph
>
>1. I have installed ceph with one mon/mds and one osd.When i use 'ceph -
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Dinu Vlad
>Sent: Thursday, November 07, 2013 3:30 AM
>To: ja...@peacon.co.uk; ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph cluster performance
>In this case h
>-Original Message-
>From: Yehuda Sadeh [mailto:yeh...@inktank.com]
>Sent: Monday, November 04, 2013 12:40 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] radosgw fails to start
>
>Not sure why you're able to run the '
why can't
radosgw start? Details below.
Thanks!
>-Original Message-
>From: Gruher, Joseph R
>Sent: Friday, November 01, 2013 11:50 AM
>To: Gruher, Joseph R
>Subject: RE: radosgw fails to start
>
>>Adding some debug arguments has generated output which I belie
>-Original Message-
>From: Derek Yarnell [mailto:de...@umiacs.umd.edu]
>Sent: Friday, November 01, 2013 12:20 PM
>To: Gruher, Joseph R; ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] radosgw fails to start
>
>On 11/1/13, 2:07 PM, Gruher, Joseph R wrote:
>>
.145,10.23.37.161,10.23.37.165
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03, joceph04
fsid = 74d808db-aaa7-41d2-8a84-7d590327a3c7
From: Gruher, Joseph R
Sent: Wednesday, October 30, 2013 12:24 PM
To: ceph-users@lists.ceph.com
Subject: radosgw fails to start, leaves no
I have CentOS 6.4 running with the 3.11.6 kernel from elrepo and it includes
the rbd module. I think you could make the same update on RHEL 6.4 and get
rbd. From there it is very simple to mount an rbd device. Here are a few
notes on what I did.
Update kernel:
sudo rpm --import http://elrepo
Hi all-
Trying to set up object storage on CentOS. I've done this successfully on
Ubuntu but I'm having some trouble on CentOS. I think I have everything
configured but when I try to start the radosgw service it reports starting, but
then the status is not running, with no helpful output as t
If you are behind a proxy try configuring the wget proxy through /etc/wgetrc.
I had a similar problem where I could complete wget commands manually but they
would fail in ceph-deploy until I configured the wget proxy in that manner.
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-bo
Try configuring the curl proxy in /root/.curlrc. I had a similar problem
earlier this week.
Overall I have to be sure to set all these proxies individually for ceph-deploy
to work on CentOS (Ubuntu is easier):
Curl: /root/.curlrc
rpm: /root/.rpmmacros
wget: /etc/wgetrc
yum: /etc/yum.conf
-Joe
>-Original Message-
>From: Tyler Brekke [mailto:tyler.bre...@inktank.com]
>Sent: Thursday, October 24, 2013 4:36 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Default PGs
>
>You have to do this before creating your first moni
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Thursday, October 24, 2013 5:24 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph-deploy hang on CentOS 6.4
>
>On Wed, Oct 23, 2013 at 12:43 PM,
Speculating, but it seems possible that the ':' in the path is problematic,
since that is also the separator between disk and journal (HOST:DISK:JOURNAL)?
Perhaps if you enclose in ''s or or use /dev/disk/by-id?
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-us
Hi all,
I have CentOS 6.4 with 3.11.6 kernel running (built from latest stable on
kernel.org) and I cannot load the rbd client module. Should I have to do
anything to enable/install it? Shouldn't it be present in this kernel?
[ceph@joceph05 /]$ cat /etc/centos-release
CentOS release 6.4 (Fina
Should osd_pool_default_pg_num and osd_pool_default_pgp_num apply to the
default pools? I put them in ceph.conf before creating any OSDs but after
bringing up the OSDs the default pools are using a value of 64.
Ceph.conf contains these lines in [global]:
osd_pool_default_pgp_num = 800
osd_pool_
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>
>Did you tried working with the `--no-adjust-repos` flag in ceph-deploy ? It
>will
>allow you to tell ceph-deploy to just go and install ceph without attempting to
>import keys or doing anything with your repos.
er proxies
(environment variable and/or bashrc) didn't seem to work for me with CentOS, I
had to set each proxy individually.
-Joe
From: Gruher, Joseph R
Sent: Tuesday, October 22, 2013 1:20 PM
To: ceph-users@lists.ceph.com
Subject: ceph-deploy hang on CentOS 6.4
Hi all-
Ceph-Deploy 1.2.7
Hi all-
Ceph-Deploy 1.2.7 is hanging for me on CentOS 6.4 at this step:
[joceph01][INFO ] Running command: rpm -Uvh --replacepkgs
http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
The command runs fine if I execute it myself via SSH with sudo to the target
system:
[ceph
..@inktank.com]
>Sent: Monday, October 07, 2013 1:27 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Client Timeout on Rados Gateway
>
>The ping tests you're running are connecting to different interfaces
>(10.23.37.175) than those you specify
t;From: Gregory Farnum [mailto:g...@inktank.com]
>Sent: Monday, October 07, 2013 1:27 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Client Timeout on Rados Gateway
>
>The ping tests you're running are connecting to different interfaces
&
>> In our small test deployments (160 HDs and OSDs across 20 machines)
>> our performance is quickly bounded by CPU and memory overhead. These
>> are 2U machines with 2x 6-core Nehalem; and running 8 OSDs consumed
>> 25% of the total CPU time. This was a cuttlefish deployment.
>
>You might be inte
Question about the ideal number of PGs. This is the advice I've read for a
single pool:
50-100 PGs per OSD
or
total_PGs = (OSDs * 100) / Replicas
What happens as the number of pools increases? Should each pool have that same
number of PGs, or do I need to increase or decrease the number of PG
tname match in
ceph.conf" or something along those lines.
>-Original Message-
>From: Fuchs, Andreas (SwissTXT) [mailto:andreas.fu...@swisstxt.ch]
>Sent: Thursday, October 03, 2013 12:57 AM
>To: myk...@gmail.com; Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject
Can anyone provide me a sample ceph.conf with multiple rados gateways? I must
not be configuring it correctly and I can't seem to Google up an example or
find one in the docs. Thanks!
-Joe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://
On my system my user is named "ceph" so I modified /home/ceph/.ssh/config.
That seemed to work fine for me. ~/ is shorthand for your user's home folder.
I think SSH will default to the current username so if you just use the same
username everywhere this may not even be necessary.
My file:
c
Along the lines of this thread, if I have OSD(s) on rotational HDD(s), but have
the journal(s) going to an SSD, I am curious about the best procedure for
replacing the SSD should it fail.
-Joe
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Scott
Hello-
I've set up a rados gateway but I'm having trouble accessing it from clients.
I can access it using rados command line just fine from any system in my ceph
deployment, including my monitors and OSDs, the gateway system, and even the
admin system I used to run ceph-deploy. However, when
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Gruher, Joseph R
>Sent: Monday, September 30, 2013 10:27 AM
>To: Yehuda Sadeh
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] failure s
>-Original Message-
>From: Yehuda Sadeh [mailto:yeh...@inktank.com]
>Sent: Friday, September 27, 2013 9:30 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] failure starting radosgw after setting up object
>storage
>
>On W
Hi all-
I am following the object storage quick start guide. I have a cluster with two
OSDs and have followed the steps on both. Both are failing to start radosgw
but each in a different manner. All the previous steps in the quick start
guide appeared to complete successfully. Any tips on h
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Monday, September 23, 2013 5:45 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] monitor deployment during quick start
>
>On Fri, Sep 20, 2013 at 3
debug monitor level (e.g. 10)
[cephtest02][ERROR ] --mkfs
[cephtest02][ERROR ] build fresh monitor fs
[ceph_deploy.mon][ERROR ] Failed to execute command: ceph-mon --cluster ceph
--mkfs -i cephtest02 --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
[ceph_deploy][ERROR ] Gener
Could someone make a quick clarification on the quick start guide for me? On
this page: http://ceph.com/docs/next/start/quick-ceph-deploy/. After I do
"ceph-deploy new" to a system is that system then a monitor from that point
forward? Or do I then have to do "ceph-deploy mon create" to that
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>
>Can you try running ceph-deploy *without* sudo ?
>
Ah, OK, sure. Without sudo I end up hung here again:
ceph@cephtest01:~$ ceph-deploy install cephtest03 cephtest04 cephtest05
cephtest06
[cephtest03][INFO ] R
Using latest ceph-deploy:
ceph@cephtest01:/my-cluster$ sudo ceph-deploy --version
1.2.6
I get this failure:
ceph@cephtest01:/my-cluster$ sudo ceph-deploy install cephtest03 cephtest04
cephtest05 cephtest06
[ceph_deploy.install][DEBUG ] Installing stable version dumpling on cluster
ceph hosts ce
>>-Original Message-
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>
>>Again, in this next coming release, you will be able to tell
>>ceph-deploy to just install the packages without mangling your repos
>>(or installing keys)
>
Updated to new ceph-deploy release 1.2.6 today but I
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Mike Dawson
>
> you need to understand losing an SSD will cause
>the loss of ALL of the OSDs which had their journal on the failed SSD.
>
>First, you probably don't want
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>
>I was about to ask if you had tried running that command through SSH, but
>you did and had correct behavior. This is puzzling for me because that is
>exactly what ceph-deploy does :/
>
>When you say 'via SSH comman
>-Original Message-
>From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
>boun...@lists.ceph.com] On Behalf Of Gilles Mocellin
>
>So you can add something like this in all ceph nodes' /etc/sudoers (use
>visudo) :
>
>Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Subject: Re: [ceph-users] problem with ceph-deploy hanging
>
>ceph-deploy will use the user as you are currently executing. That is why, if
>you are calling ceph-deploy as root, it will log in remotely as root.
>
>So
>But certainly, I am worried about why is it hanging for you here, this is a
>problem and I really want to make sure this is either fixed or confirmed it was
>some kind of misconfiguration.
>
>I believe that the problem is coming from using `sudo` + `root`. This is a
>problem that is certainly fixe
>From: Gruher, Joseph R
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>On Fri, Sep 13, 2013 at 5:06 PM, Gruher, Joseph R
>> wrote:
>>
>>> root@cephtest01:~# ssh cephtest02 wget -q -O-
>>> 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Friday, September 13, 2013 3:17 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] problem with ceph-deploy hanging
>
>On Fri, Sep 13, 2013 at 5
>-Original Message-
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Friday, September 13, 2013 3:17 PM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] problem with ceph-deploy hanging
>
>On Fri, Sep 13, 2013 at 5
wip-6284 shortlog | log | tree
...
ceph.git
RSS Atom
root@cephtest01:~#
Is this URL wrong, or is the data at the URL incorrect?
Thanks,
Joe
From: Gruher, Joseph R
Sent: Friday, September 13, 2013 1:17 PM
To: ceph-users@lists.ceph.com
Cc: Gruher, Joseph R
Subject: problem with ceph us
Hello all-
I'm setting up a new Ceph cluster (my first time - just a lab experiment, not
for production) by following the docs on the ceph.com website. The preflight
checklist went fine, I installed and updated Ubuntu 12.04.2, set up my user and
set up passwordless SSH, etc. I ran "ceph-deplo
78 matches
Mail list logo