Hello guys,
We have a fresh 'luminous' (12.2.0)
(32ce2a3ae5239ee33d6150705cdb24d43bab910c)
luminous (rc) cluster, installed using ceph-ansible.
The cluster has 6 nodes (Intel server board S2600WTTR), 96 OSDs and 3
mons; each node has 64G of memory and CPU -> I
Has anyone used this new dashboard already, and can you share your experience?
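In case it saves anyone a step: as far as I know, the luminous dashboard
is an mgr module and is enabled like this (a minimal sketch; it listens
on port 7000 by default):

    ceph mgr module enable dashboard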
On Thu, Jul 20, 2017 at 12:11 PM, Christian Wuerdig <
christian.wuer...@gmail.com> wrote:
> Judging by the github repo, development on it has all but stalled; the
> last commit was more than 3 months ago (https://github
Hello Cephers,
It looks like we have a problem attaching a volume to an instance when
using 'scsi' as the value for the 'hw_disk_bus' property (actually,
booting from a new volume and an image which has the hw_disk_bus = scsi
property).
The error that we get in nova's logs is -->
2017-06-15
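For context, this is roughly how we set the image properties (standard
openstack CLI; the image name, and pairing hw_disk_bus with
hw_scsi_model, are our choices, not something taken from the error above):

    # illustrative image name; these two Glance properties select the
    # virtio-scsi bus for instances booted from this image
    openstack image set \
        --property hw_disk_bus=scsi \
        --property hw_scsi_model=virtio-scsi \
        my-image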
From our experience even 80% is a 'dangerous zone' (unfortunately
this is how it goes... and it is quite wasteful compared to other
solutions).
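For anyone tuning this, the warning threshold itself is a mon option; a
minimal sketch of lowering it in ceph.conf (the 0.65 value is
illustrative, matching the 65% mentioned below):

    [mon]
    # warn earlier than the stock 0.85 nearfull default
    mon_osd_nearfull_ratio = 0.65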
On Thu, May 4, 2017 at 3:57 PM, David Turner wrote:
> The Ceph Enterprise default is 65% nearfull. Do not go above 85% nearfull
> unless y
Hello,
Is there a way to use cinder and glance without cephx authentication?
(As far as I understand we must create the keys and add them to libvirt,
even though cephx is disabled in our cluster.)
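For reference, this is what "cephx is disabled" means in our ceph.conf (a
minimal sketch; all daemons and clients must agree on these settings):

    [global]
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none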
Thanks
Hello Pankaj,
- Do you use the default port (7480)?
- Do you use cephx?
- I assume you use the default Civetweb (embedded already).
If you wish to use another port, you should modify your conf file and
add the line below (see the sketch after this list):
rgw_frontends = "civetweb port=80" (this is for port 80)
- Now
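Spelled out a bit more, the stanza would look something like this (a
hedged sketch; the section name is illustrative and must match the
instance your radosgw process runs as, and binding a port below 1024
needs sufficient privileges):

    [client.rgw.your-gateway-node]
    rgw_frontends = "civetweb port=80"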
ph/nss/
> rgw_keystone_verify_ssl = true
>
> Regards, I
>
> 2017-03-13 17:28 GMT+01:00 Yair Magnezi :
>
>> But per the doc the client stanza should include
>> client.radosgw.instance_name
>>
>>
>> [client.rgw.ceph-rgw-02]
>> host = cep
nd can share his configuration?
Thanks
On Mon, Mar 13, 2017 at 5:28 PM, Abhishek Lekshmanan wrote:
>
>
> On 03/13/2017 04:06 PM, Yair Magnezi wrote:
>
>> Thank you Abhishek
>>
>> But still ...
>>
>> root@ceph-rgw-02:/var/log/ceph# ps -ef | grep
Any more ideas?
Thanks
On Mon, Mar 13, 2017 at 4:34 PM, Abhishek Lekshmanan wrote:
>
>
> On 03/13/2017 03:26 PM, Yair Magnezi wrote:
>
>> Hello Wido
>>
>> yes, this is my /etc/ceph/ceph.conf
>>
>> and yes, radosgw.ceph-rgw-02 is the running instance.
>
Thanks
Yair Magnezi
Storage & Data Protection TL // Kenshoo
Office +972 7 32862423 // Mobile +972 50 575-2955
On Mon, Mar 13, 2017 at 4:06 PM, Wido den Hollander wrote:
>
> > On 13 March 2017 at 15:03, … wrote:
Hello Cephers,
I'm trying to modify the civetweb default port to 80, but for some
reason it insists on listening on the default 7480 port.
My configuration is quite simple (experimental) and looks like this:
[global]
fsid = 00c167db-aea1-41b4-903b-69b0c86b6a0f
mon_initial_members = ceph
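As the replies above suggest, the piece that usually matters here is an
rgw stanza named after the running instance, plus a gateway restart; a
hedged sketch using the instance name from this thread (unit name as in
the jewel packaging):

    [client.rgw.ceph-rgw-02]
    host = ceph-rgw-02
    rgw_frontends = "civetweb port=80"

    sudo systemctl restart ceph-radosgw@rgw.ceph-rgw-02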
Hello guys,
I'm new to RGW and need some clarification (I'm running 10.2.5).
As far as I understand, 'jewel' uses Civetweb instead of Apache and FastCGI,
but the configuration guide (just the next step in the install
guide) says "Configuring a Ceph Object Gateway requires a running
x7f1a9ef06d60).accept connect_seq 11 vs existing 11 state standby
2017-01-03 08:32:39.573457 7f1a3cd25700 0 -- 10.63.4.18:6838/84978 >>
10.63.4.18:6842/85573 pipe(0x7f1a9606f000 sd=475 :6838 s=0 pgs=0 cs=0 l=0
c=0x7f1a9ef06d60).accept connect_seq 12 vs existing 11 state standby
Thanks
on but it
doesn't seem to work so well.
Is there any known bug with our version? Will a restart of the OSDs
solve this issue? (It was mentioned in one of the forum's threads but it
was related to firefly.)
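If anyone wants to try the restart workaround, a minimal sketch for a
single OSD (systemd-era packaging; the OSD id is illustrative):

    sudo systemctl restart ceph-osd@16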
Many Thanks .
Yair Magnezi
Storage &
           889G  763G  127G  86%  /var/lib/ceph/osd/ceph-16
/dev/sdj1  889G  732G  158G  83%  /var/lib/ceph/osd/ceph-18
/dev/sdd1  889G  756G  134G  86%  /var/lib/ceph/osd/ceph-29
root@ecprdbcph03-opens:/var/log/ceph#
Thanks
Yair Magnezi
Thank you Jason.
Are RBD volume consistency groups supported in Jewel? Can we take
consistent snapshots of a volume consistency group?
Implementation is for openstack -->
http://docs.openstack.org/admin-guide/blockstorage-consistency-groups.html
Thanks Again .
Yair Magnezi
Stor
Hello guys,
I'm a little bit confused about Ceph's capability to take consistent
snapshots (of more than one RBD image).
Is there a way to do this? (We're running hammer right now.)
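For what it's worth, the closest we found on hammer is per-image
snapshots, which are not atomic across images, so the application or VM
has to be quiesced first; a hedged sketch (pool, image and snapshot
names are illustrative):

    # quiesce writers first (e.g. fsfreeze inside the guest), then:
    rbd snap create rbd/vol-data@backup-1
    rbd snap create rbd/vol-logs@backup-1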
Thanks
Hi guys,
On a full-SSD cluster, is it meaningful to put the journal on a
different drive? Does it have any impact on performance?
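For comparison, this is the filestore-era way we would split data and
journal when testing (ceph-disk syntax; the device names are
illustrative):

    # data on one SSD, journal on another
    sudo ceph-disk prepare /dev/sdb /dev/sdc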
Thanks
Yair Magnezi
Storage & Data Protection // Kenshoo
Office +972 7 32862423 // Mobile +972 50
 | 10.00th=[  414], 20.00th=[  454], 30.00th=[  494], 40.00th=[  540],
 | 50.00th=[  612], 60.00th=[  732], 70.00th=[ 1064], 80.00th=[10304],
 | 90.00th=[37632], 95.00th=[38656], 99.00th=[40192], 99.50th=[41216],
 | 99.90th=[43264], 99.95th=[43776]
Thanks
Yair Magnezi
On Fri, Mar 11, 2016 at 2:01 AM, Christian Balzer wrote:
>
> Hello,
>
> As always there are many similar threads in here; googling and reading up
> stuff are good for you.
>
> On Thu, 10 Mar 2016 16:55:03 +0200 Yair Magnezi wrote:
>
> > Hello Cephers .
> >
> > I wonder if anyone has some experience with full ssd cluster .
> > We're testing ceph ( "firefly" ) with 4 nodes ( s
is much appreciated (I especially want to know which parameters are
crucial for read performance in a full-SSD cluster).
Thanks in advance
Yair Magnezi
Storage & Data Protection TL // Kenshoo
Office +972 7 32862423 // Mobile +972 50 575-2955