RBD setup doesn't help.

Regards,
Manuel

From: Wesley Peng <wes...@160mail.com>
Sent: Monday, November 18, 2019 14:54
To: ceph-users@ceph.io
Subject: [ceph-users] SSD cache question

Hello,

For today's Ceph deployments, is an SSD cache pool a must for performance? Thank you.

Regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
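For context, attaching an SSD cache tier to a backing pool goes roughly like this (the pool names `cold-storage` and `hot-cache` are hypothetical, and whether the tier actually helps depends heavily on the workload):

```shell
# Attach a (hypothetical) SSD pool as a writeback cache tier
ceph osd tier add cold-storage hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-storage hot-cache

# Bound the cache so it flushes/evicts instead of filling up
ceph osd pool set hot-cache target_max_bytes 100000000000
```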
As the signature shows, please send an email to ceph-users-le...@ceph.io
to unsubscribe.
hou guanghua wrote:
To unsubscribe send an email to ceph-users-le...@ceph.io
on 2019/9/17 20:34, Rimma Iontel wrote:
To unsubscribe send an email to ceph-users-le...@ceph.io
As the signature in the sent mail shows, you can send a message to
ceph-users-le...@ceph.io to leave the list.
regards.
Hi
on 2019/9/16 20:19, 潘东元 wrote:
my ceph cluster version is Luminous, running on kernel Linux 3.10
Please refer to this page:
https://docs.ceph.com/docs/master/start/os-recommendations/
and see the [LUMINOUS] section.
regards.
Hello
on 2019/9/16 17:36, Eikermann, Robert wrote:
Should it be possible to do that on a running pool? I tried to do so, and
immediately all VMs (Ubuntu Linux) running on Ceph disks got read-only
filesystems. No errors were shown in Ceph (but also no traffic arrived
after enabling the cache tier).
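For the record, backing out a misbehaving cache tier is usually done by switching the cache mode, flushing, and removing the overlay; a sketch with hypothetical pool names:

```shell
# Stop new writes landing in the cache and flush everything to the base pool
ceph osd tier cache-mode hot-cache forward --yes-i-really-mean-it
rados -p hot-cache cache-flush-evict-all

# Detach the cache from the base pool
ceph osd tier remove-overlay cold-storage
ceph osd tier remove cold-storage hot-cache
```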
Hi
on 2019/9/11 15:14, Gökhan Kocak wrote:
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
The signature of the message you just sent has the info on how to leave the list.
regards.
on 2019/9/10 17:14, vladimir franciz blando wrote:
I have 2 OpenStack environments that I want to integrate with an existing
Ceph cluster. I know technically it can be done, but has anyone tried this?
Sure you can. Ceph can be deployed as a separate storage service;
OpenStack is just its customer.
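One common way to let two environments share one cluster while staying isolated is a separate pool and cephx key per environment; a sketch (pool and client names are made up):

```shell
# A dedicated pool for the first OpenStack environment (hypothetical names)
ceph osd pool create volumes-env1 128

# A cephx key that can only touch that pool, for e.g. Cinder in env1
ceph auth get-or-create client.cinder-env1 \
    mon 'profile rbd' osd 'profile rbd pool=volumes-env1'
```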
Thanks for your work. We are using Ceph happily.
Abhishek Lekshmanan wrote on Wednesday, September 4, 2019, 9:48 PM:
>
> This is the third bug fix release of Ceph Nautilus release series. This
> release fixes a security issue. We recommend all Nautilus users upgrade
> to this release. For upgrading from older releases of
on 2019/9/4 18:18, Fyodor Ustinov wrote:
Please, make an announcement about the new version and prepare the
documentation before posting the new version to the repository.
It is very, very, very necessary.
me +1
regards.
Hi
on 2019/9/4 15:00, linghucongsong wrote:
    {
        "time": "2019-09-04 13:38:54.343921",
        "event": "reached_pg"
    },
    {
        "time": "2019-09-04 13:38:54.343938",
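An event list like this typically comes from the OSD's op tracker; it can be pulled from the admin socket of a specific OSD (osd.0 here is just an example):

```shell
# Dump recent historic ops, including the per-op event timeline
# (entries such as "reached_pg" show where the op spent its time)
ceph daemon osd.0 dump_historic_ops
```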
Hi
on 2019/9/4 11:40, 黄明友 wrote:
I use the AWS S3 Java SDK. When I make a new bucket with
the hostname "s3.my-self.mydomain.com", I get an auth error,
but when I use the hostname "s3.us-east-1.mydomian.com" it is OK.
Why?
Can both domains be resolved by DNS?

regards.
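If DNS resolves fine, another thing worth checking is that RGW only accepts virtual-hosted-style bucket requests for the base domain it has been configured with; a possible ceph.conf fragment (the section name is hypothetical):

```ini
[client.rgw.gateway]
rgw dns name = s3.my-self.mydomain.com
```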
> ...ave a drive failure and you have any other error? A node failure?
> Another disk failure? A disk read error? All of these could mean data
> loss.
>
> How important is the data you are storing, and do you have a backup of
> it, as you will need that backup at some point.
Hi,
We have all-SSD disks as Ceph's backend storage.
Considering the cost factor, can we set up the cluster with only two
replicas for objects?
thanks & regards
Wesley
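If you do run two replicas, it is set per pool; a sketch with a hypothetical pool name (note the warnings earlier in the thread about what a second failure means):

```shell
# Two copies of each object instead of the default three
ceph osd pool set mypool size 2

# min_size 1 keeps I/O flowing with a single surviving copy, which is
# exactly the data-loss window the thread warns about; min_size 2
# refuses writes instead when only one copy is up.
ceph osd pool set mypool min_size 2
```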