http://docs.ceph.com/docs/master/radosgw/adminops/
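(For anyone searching the archives later: a rough sketch of the kind of setup the admin ops API needs; the uid and caps below are placeholders, not taken from this thread:)

    # hypothetical example: give an existing RGW user the caps required by the admin ops API
    radosgw-admin caps add --uid=admin --caps="users=*;buckets=*;usage=read"
    # requests such as GET /admin/user?uid=... are then sent to the RGW endpoint,
    # signed with that user's S3 access/secret keys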
Regards,
Horace Ng
Hi,
Same here. I've read a blog post saying that VMware frequently verifies the locking on VMFS over iSCSI, hence it has much slower performance than NFS (which uses a different locking mechanism).
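(If anyone wants to check their own hosts, these ESXi commands should show whether ATS / hardware-assisted locking is actually in use; I'm writing them from memory, so double-check against the VMware docs:)

    # per-device VAAI status, including ATS (hardware-assisted locking) support
    esxcli storage core device vaai status get
    # whether ATS is used for VMFS5 heartbeats
    esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5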
Regards,
Horace Ng
- Original Message -
From: w...@globe.de
To: ceph-
Thanks for your help! That's what I needed.
Regards,
Horace Ng
- Original Message -
From: "Daniel Gryniewicz"
To: ceph-users@lists.ceph.com
Sent: Thursday, July 21, 2016 10:34:14 PM
Subject: Re: [ceph-users] Radosgw admin ops API command question
On 07/21/2016 05:04 AM,
sdd | busy 3% | read 0 | write 1195 | KiB/w 57 | MBr/s 0.0 | MBw/s 6.7 | avio 0.24 ms |
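(That line is from atop's system-level DSK stats; for anyone who wants to reproduce it, running atop with a short interval on the OSD host is enough:)

    # refresh system and per-disk counters every 2 seconds on the OSD host
    atop 2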
Regards,
Horace Ng
I'm using FileStore with an SSD journal and 3x replication. I've only noticed the low client I/O after the Luminous upgrade; the actual traffic should be much higher. It had never been that low since my Giant deployment (yup, it is a very old cluster).
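(In case it helps anyone comparing, these are roughly the places I'm reading the numbers from; a sketch, not exact commands from my shell history:)

    # cluster-wide client throughput/IOPS as reported by the cluster
    ceph -s
    # per-pool client I/O rates
    ceph osd pool stats
    # cross-check against what the disks are actually doing on an OSD host
    iostat -xm 2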
Regards,
Horace Ng
- Original Message -
[23,1,11] 23 27953'182582 2018-05-23 06:20:56.088172 27843'162478 2018-05-20 18:28:20.118632
With osd.23 and osd.11 assigned to the same host.
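(For reference, this is roughly how I'm checking which host each OSD in the acting set lives on; adjust the ids and pg id for your own cluster:)

    # where does each OSD of the acting set live?
    ceph osd find 23
    ceph osd find 11
    # or look at the whole tree / the pg's mapping directly
    ceph osd tree
    ceph pg map <pgid>   # substitute the pg id from the dump above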
Regards,
Horace Ng
step take default class hdd
step chooseleaf firstn -1 type host
step emit
}
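(For context, the quoted lines are only the tail of the rule; a full ssd-primary rule in the device-class style presumably looks something like this, with placeholder id/min_size/max_size values rather than anything from the docs:)

    rule ssd-primary {
            id 5
            type replicated
            min_size 1
            max_size 10
            step take default class ssd
            step chooseleaf firstn 1 type host
            step emit
            step take default class hdd
            step chooseleaf firstn -1 type host
            step emit
    }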
Regards,
Horace Ng
- Original Message -
From: "horace"
To: "ceph-users"
Sent: Wednesday, May 23, 2018 3:56:20 PM
Subject: [ceph-users] SSD-primary crush rule doesn't work as intended
Oh, so it's not working as intended even though the ssd-primary rule is officially listed in the Ceph documentation. Should I file a feature request or a bug report for it?
Regards,
Horace Ng
From: "Paul Emmerich"
To: "horace"
Cc: "ceph-users"
Sent: Wednesday, M
-sata-and-ssd-within-the-same-box/
Regards,
Horace Ng
From: "Peter Linder"
To: "Paul Emmerich" , "horace"
Cc: "ceph-users"
Sent: Thursday, May 24, 2018 3:46:59 PM
Subject: Re: [ceph-users] SSD-primary crush rule doesn't work as intended
Seems there's no plan for that, and VMware will only share the kernel documentation with partners. You would be better off using iSCSI. By the way, I found that performance is much better with SCST than with ceph-iscsi. I don't think ceph-iscsi is production-ready?
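(Rough idea of what I mean by SCST, in case it helps: an /etc/scst.conf along these lines exports an RBD image that has already been mapped with rbd map; device and IQN names are made up for the example, and the syntax is from memory, so check the SCST docs:)

    HANDLER vdisk_blockio {
            DEVICE vmware01 {
                    # the kernel-mapped RBD image backing the LUN
                    filename /dev/rbd/rbd/vmware01
            }
    }

    TARGET_DRIVER iscsi {
            enabled 1
            TARGET iqn.2018-05.net.example:vmware01 {
                    LUN 0 vmware01
                    enabled 1
            }
    }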
Regards,
Horace
You need 1 core per SATA disk; otherwise your load average will skyrocket when the system is under full load and render the cluster unstable, i.e. ceph-mon unreachable, slow requests, etc.
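(A quick way to sanity-check that ratio on an OSD host, nothing fancy:)

    # cores available vs. ceph-osd daemons running on this host
    nproc
    pgrep -c ceph-osd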
Regards,
Horace Ng
- Original Message -
From: "Brian :"
To: "Wladimir Mutel
Dear all,
Has anybody got any experience with this product? It is a BBU-backed NVRAM cache; I think it is a very good fit for Ceph.
https://www.microsemi.com/products/storage/flashtec-nvram-drives/nv1616
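(For example, one way to use such a card with Ceph would be as a FileStore journal device; a sketch assuming a Luminous-era ceph-volume setup and that the card shows up as a block device, with /dev/nvram0 being a placeholder name:)

    # hypothetical: put the FileStore journal on the NVRAM card
    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvram0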
Regards,
Horace Ng
those servers last year; I don't know if it is on the market yet.
http://searchsolidstatestorage.techtarget.com/news/450280262/HPE-launches-NVDIMM-NVMe-persistent-memory-products
Regards,
Horace Ng
-
ISL HK Limited
E-mail: hor...@hkisl.net
Tel: +852 27109880
Fax: +852 27704631
- Original Message -
Oh, the NVDIMM technology is almost ready for the mass market; the Linux kernel will fully support it starting with 4.6 (not so far away). I think the NVDIMM hardware is much cheaper than a RAID card, right? :P
http://www.admin-magazine.com/HPC/Articles/NVDIMM-Persistent-Memory
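(For the curious, once the kernel support is there the device shows up as /dev/pmem*, and a rough way to put it to use looks like this, assuming ndctl is installed and a DAX-capable filesystem such as XFS:)

    # list NVDIMM namespaces known to the kernel
    ndctl list
    # create a filesystem on the pmem device and mount it with DAX (direct access, bypassing the page cache)
    mkfs.xfs /dev/pmem0
    mount -o dax /dev/pmem0 /mnt/pmem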