On Thu, Oct 9, 2014 at 9:32 AM, Ramakrishnan Periyasamy
wrote:
> Hi,
>
> Thanks, Ilya, for the reply. I need some more clarification; correct me if
> I am wrong somewhere.
>
> I am able to map the rbd with the --read-only option using a user-specific keyring for
> pool3, since it has "rwx", but unable to
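For reference, the map command being described is roughly of this shape (the
image and user names here are only placeholders):
  rbd map pool3/image1 --read-only --id pool3user \
      --keyring /etc/ceph/ceph.client.pool3user.keyring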
I ran into this - I needed to actually be root via sudo -i or similar,
*then* it worked. The unhelpful error message is, I think, referring to there
being no initialized db.
On 09/10/14 16:36, lakshmi k s wrote:
Good workaround. But it did not work. Not sure what this error is all
about now.
gateway@gateway:~$ op
Hi guys, thanks for the hints...
I was able to fix it by adding the following line to nginx.conf (or the fastcgi_params file):
fastcgi_param SERVER_PORT_SECURE $server_port;
Thank you so much!
Marco Garcês
#sysadmin
Maputo - Mozambique
On Wed, Oct 8, 2014 at 6:25 PM, Yehuda Sadeh wrote:
> On Wed, Oct 8,
I spoke too soon...
Now if I use HTTP I get errors!
Let me try to debug, and post back.
Thanks,
Marco Garcês
#sysadmin
Maputo - Mozambique
[Phone] +258 84 4105579
[Skype] marcogarces
On Thu, Oct 9, 2014 at 10:38 AM, Marco Garcês wrote:
> Hi guys, thanks for the hints...
> I was able to fix it,
Hi All,
Does anybody know how to fix a ceph-deploy problem like this?
[root@ceph01 ceph-new-2]# ceph-deploy osd activate
ceph03:/var/local/osd0 ceph04:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.17): /usr/bin/ce
Fixed. I attach the server part of the nginx/tengine config file:
server {
listen 80;
server_name gateway.local;
error_log logs/error_http.log debug;
client_max_body_size 100m;
fastcgi_request_buffering off;
location / {
fastcgi_pass
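For reference, a typical radosgw location block in nginx looks roughly like the
sketch below; the socket path and extra fastcgi directives are assumptions, not
taken from the actual config above:
  location / {
    fastcgi_pass_header Authorization;
    fastcgi_pass_request_headers on;
    include fastcgi_params;
    fastcgi_param SERVER_PORT_SECURE $server_port;
    fastcgi_pass unix:/var/run/ceph/radosgw.sock;
  }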
On Wed, Oct 8, 2014 at 9:13 PM, Shawn Edwards wrote:
> On Wed, Oct 8, 2014 at 2:35 AM, Ilya Dryomov
> wrote:
>>
>> On Wed, Oct 8, 2014 at 2:19 AM, Shawn Edwards
>> wrote:
>> > Are there any docs on what is possible by writing/reading from the rbd
>> > driver's sysfs paths? Is it documented anyw
Hi,
I am setting up a test Ceph cluster on decommissioned hardware (hence: not
optimal, I know).
I have installed CentOS 7, installed and set up Ceph mons and OSD machines using
puppet, and now I'm trying to add OSDs with the servers' OSD disks... and I have
issues (of course ;) )
I used the Ce
Hi Roman,
This was a recent change in ceph-deploy to enable Ceph services on
CentOS/RHEL/Fedora distros after deploying a daemon (an OSD in your
case).
There was an issue where the remote connection was closed before a
service could be enabled when creating an OSD, and this just got fixed
yest
Hello,
I'm not familiar with RHEL7 but willing to learn ;-) I recently ran into
confusing situations regarding the content of /dev/disk/by-partuuid because
partprobe was not called when it should have been (Ubuntu). On RHEL, kpartx is used
instead because partprobe reboots, apparently. What is the
Hi Loic,
With this example disk/machine that I left untouched until now:
/dev/sdb :
/dev/sdb1 ceph data, prepared, cluster ceph, osd.44, journal /dev/sdb2
/dev/sdb2 ceph journal, for /dev/sdb1
[root@ceph1 ~]# ll /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Oct 9 15:09 2c27dbda-fb
What do sgdisk --info=1 /dev/sde and sgdisk --info=2 /dev/sde print?
It looks like the journal points to an incorrect location (you should see this
by mounting /dev/sde1). Here is what I have on a cluster
root@bm0015:~# ls -l /var/lib/ceph/osd/ceph-1/
total 56
-rw-r--r-- 1 root root 19
Hi Loic,
Back on sdb, as the sde output was from another machine on which I ran partx -u
afterwards.
To reply to your last question first: I think the SG_IO error comes from the fact
that the disks are exported as single-disk RAID0 on a PERC 6/E, which does not
support JBOD - this is decommissione
On 09/10/2014 16:04, SCHAER Frederic wrote:
> Hi Loic,
>
> Back on sdb, as the sde output was from another machine on which I ran partx
> -u afterwards.
> To reply to your last question first: I think the SG_IO error comes from the
> fact that the disks are exported as single-disk RAID0 on a PERC
-Original Message-
From: Loic Dachary [mailto:l...@dachary.org]
Sent: Thursday, October 9, 2014 16:20
To: SCHAER Frederic; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-dis prepare :
UUID=----
On 09/10/2014 16:04, SCHAER Frederic wrote:
> Hi Lo
On 09/10/2014 16:29, SCHAER Frederic wrote:
>
>
> -Original Message-
> From: Loic Dachary [mailto:l...@dachary.org]
> Sent: Thursday, October 9, 2014 16:20
> To: SCHAER Frederic; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph-dis prepare :
> UUID=----0
Hey guys,
Good news!! Ilya investigated the ticket and gave me a hint as to the issue
- we need to use `--net host` on the consuming container so that the
network context is what Ceph expects. I am now running my test container
like so:
docker run -i -v /sys:/sys --net host
172.21.12.100:5000/dei
Yep, that was it. My concern though is that one node with a bad clock was able
to lock up the whole 16-node cluster; should that be the case?
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
On Wed, Oct 8, 2014 at 6:48 PM, Gregory Farnum wrote:
> Check your
Hi All,
I'm trying to add a crush rule to my map, which looks like this:
rule rack_ruleset {
ruleset 1
type replicated
min_size 1
max_size 10
step take default
step choose firstn 2 type rack
step chooseleaf firstn 2 type host
step emit
}
I'm not configuring any pools to use the ruleset at this t
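For context, the usual decompile/edit/recompile cycle for getting such a rule
into the map looks something like this (file names are arbitrary):
  ceph osd getcrushmap -o crushmap.bin        # grab the current map
  crushtool -d crushmap.bin -o crushmap.txt   # decompile, then add the rule above
  crushtool -c crushmap.txt -o crushmap.new   # recompile
  crushtool --test -i crushmap.new --rule 1 --num-rep 3 --show-utilization
  ceph osd setcrushmap -i crushmap.new        # inject into the cluster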
I have a trivial fix for the issue that I'd like to check and get this
one cleared, but never got to it due to some difficulties with a
proper keystone setup in my environment. If you can and would like to
test it so that we can get it merged, that would be great.
Thanks,
Yehuda
On Wed, Oct 8, 201
On Thu, Oct 9, 2014 at 8:34 PM, Christopher Armstrong
wrote:
> Hey guys,
>
> Good news!! Ilya investigated the ticket and gave me a hint as to the issue
> - we need to use `--net host` on the consuming container so that the network
> context is what Ceph expects. I am now running my test container
Good point. I'll have to play around with it - I was just excited to get past
the blocking map issue.
*Chris Armstrong* | Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
On Thu, Oct 9, 2014 at 11:20 AM, Ilya Dryomov
wrote:
> On Thu, Oct 9, 20
Hi Yehuda,
Please share the fix/patch; we can test and confirm the fix status.
Thanks
Swami
On Thu, Oct 9, 2014 at 10:42 PM, Yehuda Sadeh wrote:
> I have a trivial fix for the issue that I'd like to check and get this
> one cleared, but never got to it due to some difficulties with a
> proper
On Thu, Oct 9, 2014 at 9:23 PM, Christopher Armstrong
wrote:
> Good point. I'll have to play around with it - was just excited to get past
> the blocking map issue.
This could be a docker bug - my understanding is that all devices have
to show up if running with --privileged, which I do on my tes
Adding `-v /dev:/dev` works as expected - after mapping, the device shows
up as /dev/rbd0. Agreed, though - I thought --privileged should do this.
*Chris Armstrong* | Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
On Thu, Oct 9, 2014 at 11:3
Hi All,
I have a few questions regarding primary affinity. In the original
blueprint
(https://wiki.ceph.com/Planning/Blueprints/Firefly/osdmap%3A_primary_role_affinity
), one example has been given.
For PG x, CRUSH returns [a, b, c]
If a has primary_affinity of .5, b and c have 1 ,
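For reference, primary affinity is set per OSD with something like the command
below (the osd id and value are just examples); on firefly the monitors also
need "mon osd allow primary affinity = true" before they will accept it:
  ceph osd primary-affinity osd.0 0.5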
Hi Stephen,
It looks like you're hitting http://tracker.ceph.com/issues/9492, which has been
fixed but is not yet available in firefly. The simplest workaround is to set
min_size to 4 in this case.
Cheers
On 09/10/2014 19:31, Stephen Jahl wrote:
> Hi All,
>
> I'm trying to add a crush rule to my map,
Thanks Loic,
In my case, I actually only have three replicas for my pools -- with this
rule, I'm trying to ensure that OSDs in at least two racks are selected.
Since the replica size is only 3, I think I'm still affected by the bug
(unless of course I set my replica size to 4).
Is there a bett
Hi All,
There is a new release of ceph-deploy that includes a fix where
enabling the OSD service would fail on certain distros.
There is also a new improvement for creating a monitor keyring if not
found when deploying monitors.
The full changelog can be seen here:
http://ceph.com/ceph-deploy/do
Here's the fix, let me know if you need any help with that.
Thanks,
Yehuda
diff --git a/src/rgw/rgw_swift.cc b/src/rgw/rgw_swift.cc
index d9654a7..2445e17 100644
--- a/src/rgw/rgw_swift.cc
+++ b/src/rgw/rgw_swift.cc
@@ -505,6 +505,8 @@ int RGWSwift::validate_keystone_token(RGWRados
*store, const
The patch is already in the firefly maintenance branch:
https://github.com/ceph/ceph/commits/firefly
https://github.com/ceph/ceph/commit/548be0b2aea18ed3196ef8f0ab5f58a66e3a9af4
but I'm not sure when the 0.80.7 release will be published.
http://ceph.com/releases/v0-80-6-firefly-released/ was onl
Stephen,
You are right. A crash can happen if the replica size doesn't match
the number of OSDs. I am not sure if there exists any other solution for your
problem of "choose the first 2 replicas from a rack and choose the third replica from
any other rack different from that one".
Some different thoug
So, I _do_ have three racks, but unfortunately, one of them has fewer OSDs
in it. Weighting takes care of a little bit of that, but I do end up with
an uneven distribution (according to the utilization numbers from crushtool
--test). Because of that, I ended up going down the "at least two
r
Thanks Mark. I got past this error by being root. So essentially, I copied the
certs from the openstack controller node to the gateway node, did the conversion using
certutil, and copied the files back to the controller node under the /var/lib/ceph/nss
directory. Is this the correct directory? The Ceph doc says /var/ce
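For reference, the certutil conversion being described follows the pattern in
the Ceph keystone docs, roughly as below (paths will vary per setup):
  mkdir /var/ceph/nss
  openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \
      certutil -d /var/ceph/nss -A -n ca -t "TCu,Cu,Tuw"
  openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \
      certutil -A -d /var/ceph/nss -n signing_cert -t "P,P,P"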
So I can successfully map within the container, but when I try to
`mkfs.ext4 -m0 /dev/rbd0` I get:
Oct 09 19:31:03 deis-2 sh[1569]: mke2fs 1.42.9 (4-Feb-2014)
Oct 09 19:31:03 deis-2 sh[1569]: mkfs.ext4: Operation not permitted while
trying to determine filesystem size
Once the device is mapped wi
Turns out we need to explicitly pass --privileged in addition to the other
flags. Here's how it runs now:
docker run --name deis-store-volume --rm -e HOST=$COREOS_PRIVATE_IPV4 --net
host --privileged -v /dev:/dev -v /sys:/sys -v /data:/data $IMAGE
*Chris Armstrong* | Head of Services
OpDemand / Dei
Hello Ceph Users:
A Ceph bare-metal client attempting to map a device volume via the kernel RBD driver
is unable to map the volume and outputs an I/O error.
This is a Ceph client only, no MDS, OSD or MON running…see the I/O error output below.
Client Host Linux Kernel Version :
[root@root ceph]# una
On Thu, Oct 9, 2014 at 10:55 AM, Johnu George (johnugeo)
wrote:
> Hi All,
> I have few questions regarding the Primary affinity. In the
> original blueprint
> (https://wiki.ceph.com/Planning/Blueprints/Firefly/osdmap%3A_primary_role_affinity
> ), one example has been given.
>
> For PG x
Almost - the converted certs need to be saved on your *rgw* host in
nss_db_path (default is /var/ceph/nss but wherever you have it
configured should be ok). Then restart the gateway.
What is happening is that the rgw needs these certs to speak with
encryption to the keystone server (the latter
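For reference, the rgw-side keystone settings involved look roughly like this
(the section name and values are placeholders; adjust to your setup):
  [client.radosgw.gateway]
  rgw keystone url = http://<keystone-host>:35357
  rgw keystone admin token = <admin-token>
  rgw keystone accepted roles = Member, admin
  nss db path = /var/ceph/nss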
Right, I have these certs on both nodes - keystone node and rgw gateway node.
Not sure where I am going wrong. And what about SSL? Should the following be in
rgw.conf on the gateway node? I am not using this, as it was optional.
SSLEngine on
SSLCertificateFile /etc/apache2/ssl/apache.crt
SSLCertifica
That certainly fixes the issue for me. Removing the WSGIChunkedRequest
On directive from my keystone config and restarting brought back the
original error. Installing a newly patched radosgw binary and restarting
got a working swift back.
Cheers
Mark
On 10/10/14 07:19, Yehuda Sadeh wrote:
Her
Great, I'll prepare it upstream.
Thanks,
Yehuda
On Thu, Oct 9, 2014 at 3:39 PM, Mark Kirkwood
wrote:
> That certainly fixes the issue for me. Removing the WSGIChunkedRequest On
> directive from my keystone config and restarting brought back the original
> error. Installing a new patched radosgw
No, I don't have any explicit ssl enabled in the rgw site.
Now you might be running into http://tracker.ceph.com/issues/7796 . So
check whether you have enabled
WSGIChunkedRequest On
in your keystone virtualhost setup (explained in the issue).
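i.e. something along these lines in the keystone virtualhost (the surrounding
directives are whatever you already have there):
  <VirtualHost *:35357>
      WSGIChunkedRequest On
      # existing WSGIScriptAlias / WSGIDaemonProcess directives stay as-is
  </VirtualHost>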
Cheers
Mark
On 10/10/14 11:03, lakshmi k s wrote:
I have a question regarding submitting blueprints. Should only people who
intend to do the work of adding/changing features of Ceph submit
blueprints? I'm not primarily a programmer (but can do programming if
needed), but have a feature request for Ceph.
Thanks,
Robert LeBlanc
Hi Greg,
Thanks for your extremely informative post. My related questions
are posted inline
On 10/9/14, 2:21 PM, "Gregory Farnum" wrote:
>On Thu, Oct 9, 2014 at 10:55 AM, Johnu George (johnugeo)
> wrote:
>> Hi All,
>> I have few questions regarding the Primary affinity. In th
I have done this too, but in vain. I made changes to horizon.conf as shown below,
but I do not see the user being validated in the radosgw log at all.
root@overcloud-controller0-fjvtpqjip2hl:/etc/apache2/sites-available# ls
000-default.conf default-ssl.conf horizon.conf
-
Hmm - It looks to me like you added the chunked request into Horizon
instead of Keystone. You want virtual host *:35357
On 10/10/14 12:32, lakshmi k s wrote:
I have done this too, but in vain. I made changes to horizon.conf as shown
below, but I do not see the user being validated in rado
Yes Mark, but there is no keystone.conf in this modified OpenStack code. There
is only horizon.conf under the /etc/apache2/sites-available folder, and that has
a virtual host for port 80 only. Should I simply add :35357?
root@overcloud-controller0-fjvtpqjip2hl:/etc/apache2/sites-available# ls
000-default.conf
On Thu, Oct 9, 2014 at 4:24 PM, Johnu George (johnugeo)
wrote:
> Hi Greg,
> Thanks for your extremely informative post. My related questions
> are posted inline
>
> On 10/9/14, 2:21 PM, "Gregory Farnum" wrote:
>
>>On Thu, Oct 9, 2014 at 10:55 AM, Johnu George (johnugeo)
>> wrote:
>>> Hi
On Thu, Oct 9, 2014 at 4:01 PM, Robert LeBlanc wrote:
> I have a question regarding submitting blueprints. Should only people who
> intend to do the work of adding/changing features of Ceph submit blueprints?
> I'm not primarily a programmer (but can do programming if needed), but have
> a feature
Thanks :)
Just curious, what kind of applications use RBD? It can't be
applications which need high-speed SAN storage performance
characteristics, can it?
For VMs, I am trying to visualize how the RBD device would be exposed.
Where does the driver live exactly? If it's exposed via libvirt and
QEMU, does the
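For reference, when an image is exposed via libvirt/QEMU the librbd driver
lives inside the QEMU process (userspace), and the guest just sees an ordinary
virtio disk. A hypothetical libvirt disk stanza, with pool/image/monitor names
as placeholders:
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbd/myimage'>
      <host name='ceph-mon1' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>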
Dear ceph,
# ceph -s
cluster e1f18421-5d20-4c3e-83be-a74b77468d61
health HEALTH_ERR 4 pgs inconsistent; 4 scrub errors
monmap e2: 3 mons at
{storage-1-213=10.1.0.213:6789/0,storage-1-214=10.1.0.214:6789/0,storage-1-215=10.1.0.215:6789/0},
election epoch 16, quorum 0,1,2 storage-1-213,storage-1-
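For reference, the usual first steps for inconsistent PGs are along these lines
(the pg id below is only a placeholder):
  ceph health detail      # lists which PGs are inconsistent
  ceph pg repair <pgid>   # e.g. ceph pg repair 3.1f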
Oh, I see. That complicates it a wee bit (looks back at your messages).
I see you have:
rgw_keystone_url = http://192.0.8.2:5000
So you'll need to amend/create a virtualhost for that port
and put it in there. I suspect you might be better off changing your rgw
keystone url to use port 35357 (the admin one). How
Given that your setup appears to be non-standard, it might be useful to see
the output of the two commands below:
$ keystone service-list
$ keystone endpoint-list
So we can avoid advising you incorrectly.
Regards
Mark
On 10/10/14 18:46, Mark Kirkwood wrote:
Also just to double check - 192.0.8.2 *i