@1(electing) e2 ms_get_authorizer for mon
Any idea?
Thanks,
Steffen Thorhauer
--
______
Steffen Thorhauer
Department of Technical and Business Information Systems (ITI)
Faculty of Computer Science (FIN)
Otto von Guericke
On 06/03/2014 09:19 AM, Steffen Thorhauer wrote:
Hi,
I'm in the process of upgrading my ceph cluster from emperor to firefly.
After upgrading my 3 mons, one of them is out of quorum.
ceph health detail
HEALTH_WARN 1 mons down, quorum 0,2 u124-11,u124-13
mon.u124-12 (rank 1) addr 10.37.124.12
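For reference, this is roughly how the stuck mon can be inspected (assuming the
default admin socket and log paths, and that the mon id is the host name):

ceph mon stat                                    # quorum view from a healthy node
ceph daemon mon.u124-12 mon_status               # run on the mon host itself
tail -n 50 /var/log/ceph/ceph-mon.u124-12.log    # election/auth errors end up here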
dist-packages/oslo/config/cfg.py:1485
The glance-api service did not use my rbd_store_user = images option!
Then I configured auth for client.glance and it worked with the
"implicit" glance user.
Now my question: am I the only one with this problem?
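For comparison, the client.glance auth I ended up with is roughly what the
ceph.com rbd-openstack page suggests (the caps string is copied from that doc,
the pool name images is an assumption):

# cephx user that glance connects as; keyring must be readable by the glance service user
ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
    -o /etc/ceph/ceph.client.glance.keyring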
Regards,
Steffen Thorhau
On 08/08/2013 06:01 AM, Steffen Thorhauer wrote:
Hi,
recently I had a problem with openstack glance and ceph.
I used the
http://ceph.com/docs/master/rbd/rbd-openstack/#configuring-glance
documentation and
http://docs.openstack.org/developer/glance/configuring.html
documentation
I'm using u
-4991c6864dc7
ceph-secret-of.client-volumes
Regards,
Steffen Thorhauer
On 11/21/2013 03:05 PM, Jens-Christian Fischer wrote:
Hi all
I'm playing with the boot from volume options in Havana and have run
into problems:
(Openstack Havana, Ceph Dumpling (0.67.4), rbd for glance, cinder and
experim
cache size = 1073741824
rbd cache max dirty = 536870912
rbd default format = 2
admin socket = /var/run/ceph/rbd-$pid.asok
rbd cache writethrough until flush = true
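With those options in a [client] section, the values the running librbd client
actually picked up can be checked through the admin socket (qemu pid and socket
path as configured above; the perf counters can be useful too):

ceph --admin-daemon /var/run/ceph/rbd-<pid>.asok config show | grep rbd_cache
ceph --admin-daemon /var/run/ceph/rbd-<pid>.asok perf dump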
I guess I misunderstood some configuration options.
Has anybody seen similar performance problems?
Regards,
Steffen Thor
:~$ dd if=zerofile-2 of=/dev/null bs=1G count=8
8+0 records in
8+0 records out
8589934592 bytes (8.6 GB) copied, 429.528 s, 20.0 MB/s
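For comparison, a baseline straight against RADOS would show whether this is an
rbd/VM-level problem or the cluster itself (assuming a pool named rbd that test
objects may be written into):

rados bench -p rbd 60 write --no-cleanup   # leave the bench objects in place ...
rados bench -p rbd 60 seq                  # ... so the sequential read has data to read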
Thanks,
Steffen
--
__
Steffen Thorhauer
Department of Technical and Business Information Systems
On 01/10/2014 02:35 PM, Wido den Hollander wrote:
Op 10 jan. 2014 om 18:49 heeft "Steffen Thorhauer"
<mailto:steffen.thorha...@iti.cs.uni-magdeburg.de>> het volgende
geschreven:
On 01/10/2014 01:21 PM, Ирек Фасихов wrote:
You need to use VirtIO with this param:
0b7ba6282
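For reference, a virtio rbd disk in the libvirt domain XML looks roughly like
this (pool/image name, auth user and secret uuid below are only placeholders,
not taken from this thread):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <auth username='volumes'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='volumes/volume-XXXX'/>
  <target dev='vda' bus='virtio'/>
</disk>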
osd.15 up in weight 1 up_from 7778 up_thru 7832 down_at 7776
last_clean_interval [6535,7775) 10.37.124.153:6804/13043
20.20.20.153:6802/13043 20.20.20.153:6803/13043 10.37.124.153:6805/13043
exists,up fcfcb094-cf6e-4bd4-af36-8681076f9c64
--
______
Steffen Thorhauer
Department of Technical and Business Information Systems (ITI)
Faculty of Computer Science (
> On Wed, Feb 20, 2013 at 11:12 PM, Steffen Thorhauer
> wrote:
> > Hello,
> > I have a test ceph cluster on ubuntu 12.04 and yesterday made an upgrade
> > to 0.57.
> > But after the upgrade the mds dies.
> > ceph -s says
> > health HEALTH
Hi,
I just upgraded one node of my ceph "cluster". I wanted to upgrade node
by node.
The osd on this node has no problem, but the mon (mon.4) has authorization
problems.
I didn't change any config, just did an apt-get upgrade.
ceph -s
health HEALTH_WARN 1 mons down, quorum 0,1,2,3 0,1,2,3
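For a cephx error like this, the first thing to compare is probably the mon's
own keyring against what the rest of the cluster has stored for mon. (default
mon data directory assumed):

cat /var/lib/ceph/mon/ceph-4/keyring     # key on the broken node
ceph auth get mon.                       # key the quorum knows about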
failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i 0 --pid-file
/var/run/ceph/mon.0.pid -c /etc/ceph/ceph.conf '
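When the init script only reports 'failed', running the mon by hand in the
foreground usually shows the actual error (standard ceph-mon flags; the debug
levels are arbitrary):

/usr/bin/ceph-mon -i 0 -c /etc/ceph/ceph.conf -d --debug-mon 10 --debug-auth 10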
Steffen
On 03/21/2013 02:22 PM, Steffen Thorhauer wrote:
Hi,
I just upgraded one node of my ceph "cluster". I wanted to upgrade node
by node.
The osd on this node
stall the test "cluster".
-Steffen
Btw., ceph rbd and adding/removing osds work great.
On Fri, Mar 22, 2013 at 10:01:10AM +, Joao Eduardo Luis wrote:
> On 03/21/2013 03:47 PM, Steffen Thorhauer wrote:
> > I think I was impatient and should have waited for the v0.59 announcement. It
>