[ceph-users] Re: Ssd cache question

2019-11-18 Thread Wesley Peng
For an RBD setup it doesn't help. Regards, Manuel  From: Wesley Peng <wes...@160mail.com> Sent: Monday, November 18, 2019 14:54 To: ceph-users@ceph.io Subject: [ceph-users] Ssd cache question  Hello, for today's Ceph deployments, is an SSD cache pool a must for performance? Thank you.

[ceph-users] Ssd cache question

2019-11-18 Thread Wesley Peng
Hello, for today's Ceph deployments, is an SSD cache pool a must for performance? Thank you. Regards
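For reference, setting up a cache tier in front of a replicated pool looks roughly like the sketch below (pool names rbd-hdd and rbd-cache are placeholders, the cache pool must sit on a CRUSH rule that targets the SSDs, and the hit_set/target values have to be tuned per cluster):

    # create the SSD-backed pool that will act as the cache tier
    ceph osd pool create rbd-cache 64 64
    # attach it to the base pool and enable writeback caching
    ceph osd tier add rbd-hdd rbd-cache
    ceph osd tier cache-mode rbd-cache writeback
    ceph osd tier set-overlay rbd-hdd rbd-cache
    # a writeback tier needs hit_set and sizing parameters before it behaves sanely
    ceph osd pool set rbd-cache hit_set_type bloom
    ceph osd pool set rbd-cache hit_set_count 12
    ceph osd pool set rbd-cache hit_set_period 14400
    ceph osd pool set rbd-cache target_max_bytes 100000000000

As the reply above notes, for RBD workloads a cache tier often doesn't pay off; putting OSD WAL/DB on SSD or running an all-flash pool tends to be the simpler route.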

[ceph-users] Re: unsubscribe

2019-09-24 Thread Wesley Peng
As the signature shows, please send an email to ceph-users-le...@ceph.io to unsubscribe. hou guanghua wrote: To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Unsubscribe

2019-09-17 Thread Wesley Peng
On 2019/9/17 20:34, Rimma Iontel wrote: To unsubscribe send an email to ceph-users-le...@ceph.io  As the signature in your sent mail shows, send a message to ceph-users-le...@ceph.io to leave the list. Regards.

[ceph-users] Re: KRBD use Luminous upmap feature. Which version of the kernel should I use?

2019-09-16 Thread Wesley Peng
Hi, on 2019/9/16 20:19, 潘东元 wrote: my ceph cluster version is Luminous, running kernel version Linux 3.10. Please refer to this page: https://docs.ceph.com/docs/master/start/os-recommendations/ and see the [LUMINOUS] section. Regards.
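If it helps, checking whether the cluster and its clients can actually use upmap goes roughly like this (a sketch; the exact kernel version needed for krbd upmap support should be verified against the OS recommendations page above, since a 3.10 kernel will not advertise the luminous feature bits):

    # see which feature bits the currently connected clients report
    ceph features
    # upmap requires every client to be at least luminous-capable;
    # this command refuses if older clients are still connected
    ceph osd set-require-min-compat-client luminous
    # then the balancer can use upmap mode
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on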

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Wesley Peng
Hello, on 2019/9/16 17:36, Eikermann, Robert wrote: Should it be possible to do that on a running pool? I tried to do so, and immediately all VMs (Ubuntu Linux) running on Ceph disks got read-only filesystems. No errors were shown in Ceph (but also no traffic arrived after enabling the cache t
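If a freshly enabled tier has to be backed out of a live pool, the usual sequence is to stop caching, flush, and then detach, roughly as sketched below (assuming the cache pool is called cache and the base pool rbd; some versions want --yes-i-really-mean-it on the cache-mode change):

    # stop caching new writes; requests are proxied to the base pool
    ceph osd tier cache-mode cache proxy
    # flush and evict everything still held in the cache pool
    rados -p cache cache-flush-evict-all
    # detach the tier from the base pool
    ceph osd tier remove-overlay rbd
    ceph osd tier remove rbd cache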

[ceph-users] Re: unsubscribe

2019-09-11 Thread Wesley Peng
Hi, on 2019/9/11 15:14, Gökhan Kocak wrote: ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io  The signature of the message you just sent has the info you need to leave the list. Regards.

[ceph-users] Re: 2 OpenStack environment, 1 Ceph cluster

2019-09-10 Thread Wesley Peng
On 2019/9/10 17:14, vladimir franciz blando wrote: I have 2 OpenStack environments that I want to integrate with an existing Ceph cluster. I know technically it can be done, but has anyone tried this?  Sure you can. Ceph can be deployed as a separate storage service; OpenStack is just its cust
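One common pattern for sharing a cluster between two clouds is separate pools plus separate cephx users per environment, so neither OpenStack can touch the other's images or volumes. A rough sketch with made-up pool and client names:

    # pools for the first OpenStack environment
    ceph osd pool create env1-volumes 128
    ceph osd pool create env1-images 64
    # a cinder/glance user restricted to those pools
    ceph auth get-or-create client.env1-cinder \
        mon 'profile rbd' \
        osd 'profile rbd pool=env1-volumes, profile rbd pool=env1-images'
    # repeat with env2-* pools and a client.env2-cinder key for the second cloud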

[ceph-users] Re: v14.2.3 Nautilus released

2019-09-04 Thread Wesley Peng
Thanks for your work. We are using Ceph happily. Abhishek Lekshmanan wrote on Wednesday, September 4, 2019 at 21:48: > This is the third bug fix release of the Ceph Nautilus release series. This > release fixes a security issue. We recommend all Nautilus users upgrade > to this release. For upgrading from older releases of
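For anyone doing the point upgrade, the usual rolling pattern within a release series looks roughly like this (a sketch only; read the release notes first, and upgrade mons before OSDs):

    # confirm what every daemon is currently running
    ceph versions
    # avoid rebalancing while daemons restart
    ceph osd set noout
    # upgrade packages and restart daemons one host at a time, then:
    ceph osd unset noout
    ceph health detail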

[ceph-users] Re: CEPH 14.2.3

2019-09-04 Thread Wesley Peng
On 2019/9/4 18:18, Fyodor Ustinov wrote: Please make an announcement about the new version and prepare the documentation before posting the new version to the repository. It is very, very, very necessary.  +1 from me. Regards.

[ceph-users] Re: slow requests with the ceph osd deadlock?

2019-09-04 Thread Wesley Peng
Hi, on 2019/9/4 15:00, linghucongsong wrote: { "time": "2019-09-04 13:38:54.343921", "event": "reached_pg" }, { "time": "2019-09-04 13:38:54.343938",
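The JSON above looks like the event timeline from an OSD op dump; the usual way to pull that trace and see where requests are stuck is something like the following (run on the node hosting the OSD, with the OSD id substituted for osd.12):

    # requests currently stuck in flight on a given OSD, with per-event timestamps
    ceph daemon osd.12 dump_ops_in_flight
    # recently completed (including recently slow) ops and their timelines
    ceph daemon osd.12 dump_historic_ops
    # cluster-wide view of which OSDs are reporting slow requests
    ceph health detail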

[ceph-users] Re: rgw auth error with self region name

2019-09-03 Thread Wesley Peng
Hi, on 2019/9/4 11:40, 黄明友 wrote: I use the AWS S3 Java SDK; when I make a new bucket with the hostname "s3.my-self.mydomain.com" I get an auth error, but when I use the hostname "s3.us-east-1.mydomian.com" it works. Why?  Can both domains be resolved by DNS? Regards.
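A hedged guess at the cause: with virtual-hosted-style requests the SDK derives the signing region from the hostname, and radosgw only accepts bucket-style hostnames it has been told about. Worth checking is whether the gateway knows the custom DNS name, roughly along these lines (zonegroup name and hostname are examples):

    # see which api_name and hostnames the zonegroup advertises
    radosgw-admin zonegroup get --rgw-zonegroup=default
    # radosgw must know the DNS name used for bucket-style requests,
    # e.g. in ceph.conf for the rgw instance:
    #   rgw_dns_name = s3.my-self.mydomain.com
    # then restart the rgw daemon and retry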

[ceph-users] Re: ceph's replicas question

2019-08-25 Thread Wesley Peng
…have a drive failure and you have any other error? A node failure? Another disk failure? A disk read error? All of these could mean data loss. How important is the data you are storing, and do you have a backup of it? You will need that backup at some point.

[ceph-users] ceph's replicas question

2019-08-24 Thread Wesley Peng
Hi, we have all-SSD disks as Ceph's backend storage. Considering the cost factor, can we set up the cluster to have only two replicas for objects? Thanks & regards, Wesley
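For reference, the replica count is a per-pool setting, and min_size is what decides when I/O stops; a two-replica pool is usually run with min_size 1, which is exactly where the failure scenarios quoted in the reply above come from. A sketch (pool name rbd is an example):

    # show the current replication settings
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    # drop to two copies; I/O continues on a single surviving copy
    # only if min_size is also lowered to 1, which is the risky part
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1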