Hi:
Recently I was trying to find a way to map an RBD device that can
talk to the backend over RDMA. There are three ways to export an RBD
device: krbd, nbd, and iSCSI. It seems that only iSCSI may give a
chance. Has anyone tried to configure this and can give some advice?
__
Hi all:
I found that when I set a bucket expiration rule, then after the
expiration date upload a new object, the new object gets deleted too.
The related code looks like the following:
if (prefix_iter->second.expiration_date != boost::none) {
//we have checked it before
Why should this be true?
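A sketch of why this happens, per the S3 lifecycle semantics that RGW follows: a rule with an absolute expiration Date applies to every matching object once that date has passed, regardless of when the object was uploaded, so the next lifecycle pass deletes even freshly uploaded objects. A minimal model of that behavior (function name is mine, not RGW's):

```python
from datetime import datetime

def date_rule_expires(rule_date, now, object_mtime):
    """Date-based lifecycle rule: once 'now' has passed the rule's
    expiration date, every matching object is eligible for deletion.
    The object's own mtime is deliberately ignored here (only
    Days-based rules consult it)."""
    return now >= rule_date

# An object uploaded *after* the expiration date is still expired:
assert date_rule_expires(rule_date=datetime(2018, 1, 1),
                         now=datetime(2018, 7, 1),
                         object_mtime=datetime(2018, 6, 1))
```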
Can I use a python34 package in a python36 environment? If not, what
should I do to use a python34 package with python36?
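Generally: a pure-Python distribution (wheel tag `py3-none-any`) installs under both 3.4 and 3.6, but a C-extension wheel built for CPython 3.4 (`cp34` tag) will not install under 3.6 and has to be rebuilt or reinstalled for the target interpreter. A much-simplified sketch of the wheel-tag check pip performs (the real logic lives in pip/`packaging`):

```python
def wheel_matches_interpreter(wheel_name: str, py_major: int, py_minor: int) -> bool:
    """Very simplified wheel-compatibility check based on the python
    tag embedded in the wheel filename:
    name-version(-build)?-pytag-abitag-platform.whl"""
    py_tag = wheel_name[:-len(".whl")].split("-")[-3]
    if py_tag.startswith("py3"):                 # pure Python, any 3.x
        return True
    return py_tag == f"cp{py_major}{py_minor}"   # exact CPython build only

assert wheel_matches_interpreter("foo-1.0-py3-none-any.whl", 3, 6)
assert not wheel_matches_interpreter("foo-1.0-cp34-cp34m-linux_x86_64.whl", 3, 6)
```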
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi all:
extern "C" int rbd_discard(rbd_image_t image, uint64_t ofs, uint64_t len)
{
  librbd::ImageCtx *ictx = (librbd::ImageCtx *)image;
  tracepoint(librbd, discard_enter, ictx, ictx->name.c_str(),
             ictx->snap_name.c_str(), ictx->read_only, ofs, len);
  if (len > std::numeric_limits<int>::max()) {
    tracepoint
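My reading of the guard above: `rbd_discard` reports the result through a C `int`, so a length above `INT_MAX` could never be reported back correctly and would alias negative error codes; it is therefore rejected up front with `-EINVAL`. A quick illustration of the wrap-around the check prevents (Python's ctypes stands in for the C `int` return type):

```python
import ctypes

INT_MAX = 2**31 - 1

# A discard length that fits in int survives the round-trip...
assert ctypes.c_int(INT_MAX).value == INT_MAX

# ...but one byte more wraps to a negative number, which callers
# would misread as an error code.
assert ctypes.c_int(INT_MAX + 1).value < 0
```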
Hi:
I want to use Ceph RBD because it shows better performance. But I don't
like the kernel module and the iSCSI target process. So here are my
requirements: I don't want to map it and mount it, but I still want to
use some filesystem-like API, or at least be able to write multiple
files to the RBD volume.
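Note that librbd itself has no filesystem-as-a-library API: an RBD image is a flat block device, so holding multiple named files without mapping it means either layering a userspace filesystem on top or managing offsets yourself. A toy sketch of the latter idea only (a BytesIO stands in for the RBD image; class and layout are mine, not a Ceph API):

```python
import io

class FlatVolumeStore:
    """Toy append-only 'many files on one block volume' layout:
    each named blob is written at the current end offset and an
    in-memory index remembers (offset, length) per name."""

    def __init__(self, volume):
        self.volume = volume   # any seekable byte stream
        self.index = {}        # name -> (offset, length)
        self.end = 0

    def write_file(self, name, data: bytes):
        self.volume.seek(self.end)
        self.volume.write(data)
        self.index[name] = (self.end, len(data))
        self.end += len(data)

    def read_file(self, name) -> bytes:
        offset, length = self.index[name]
        self.volume.seek(offset)
        return self.volume.read(length)

store = FlatVolumeStore(io.BytesIO())
store.write_file("a.txt", b"hello")
store.write_file("b.txt", b"world")
assert store.read_file("a.txt") == b"hello"
```

A real deployment would of course need a persistent index and allocation strategy; this only shows why "multiple files on one volume" and "no filesystem" pull in opposite directions.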
contact Mellanox support over this one. They are really
> good guys.
>
>
>
> On 23 July 2018 at 08:14, Will Zhao wrote:
>
>> Hi John:
>>this is the information ibv_devinfo gives .
>>
>> hca_id: mlx4_0
>> transport: InfiniBand (0)
>> fw_ver: 2
>
> Do you have dual port cards?
>
>
> On 19 July 2018 at 11:25, Will Zhao wrote:
>
>> Hi all:
>> Has anyone successfully set up ceph with rdma over IB ?
>>
>> By following the instructions:
>>
>> (https://community.mellanox.com/doc
Hi all:
Has anyone successfully set up Ceph with RDMA over IB?
By following the instructions:
(https://community.mellanox.com/docs/DOC-2721)
(https://community.mellanox.com/docs/DOC-2693)
(http://hwchiu.com/2017-05-03-ceph-with-rdma.html)
I'm trying to configure Ceph with the RDMA feature
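For reference, the ceph.conf fragment those Mellanox guides revolve around looks roughly like this (the device name is an example; check yours with `ibv_devinfo`):

```ini
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx4_0
```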
Hi all:
By following the instructions:
(https://community.mellanox.com/docs/DOC-2721)
(https://community.mellanox.com/docs/DOC-2693)
(http://hwchiu.com/2017-05-03-ceph-with-rdma.html)
I'm trying to configure Ceph with the RDMA feature in the following environment:
CentOS Linux release 7.2.151
Hi:
I use libs3 to run tests. The network is IB.
The error in libcurl is the following:
== Info: Operation too slow. Less than 1 bytes/sec transferred the last 15
seconds
== Info: Closing connection 766
and the corresponding request error in rgw is the following:
2018-07-12 15:42:30.501074 7fe8bc83f7
Hi:
I see that civetweb still uses poll and multithreading; compared with
fastcgi, which one should I use? Which one has better performance?
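For reference, fastcgi requires an external web server in front of radosgw, while civetweb is embedded in the radosgw process and is the simpler setup on recent releases; it is selected with a line like the following (section name and port are examples):

```ini
[client.rgw.gateway]
rgw_frontends = civetweb port=7480
```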
___
Hi:
We are using Ceph on InfiniBand and configured it with the default
configuration. The ms_type is async+posix. I see there are three kinds of
types. Which one is the most stable and has the best performance? Which one
do you suggest I use in production?
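For reference, the messenger type is chosen in ceph.conf; async+posix is the default and the most widely tested of the three, with the other two considered experimental as of this writing (a sketch, one line active at a time):

```ini
[global]
ms_type = async+posix
# experimental alternatives:
# ms_type = async+rdma
# ms_type = async+dpdk
```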
__
met.
So if I create the sixth pool, the total number of PGs will increase,
and then the PGs per OSD will be more than 100.
Will this not violate the rule?
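The sizing rule is usually worked out across all pools together, using the commonly cited rule of thumb total_PGs ≈ (OSDs × 100) / replica_count, rounded up to a power of two, and then divided among the pools by their expected data share. A quick sketch of that arithmetic (helper name is mine):

```python
def suggested_total_pgs(num_osds, pgs_per_osd=100, replica_count=3):
    """Classic pgcalc rule of thumb: aim for ~100 PG replicas per OSD
    across the whole cluster, rounded up to the next power of two."""
    target = num_osds * pgs_per_osd / replica_count
    power = 1
    while power < target:
        power *= 2
    return power

assert suggested_total_pgs(100) == 4096   # 100*100/3 = 3333 -> 4096
```

The budget is per cluster, not per pool: creating a sixth pool does not get its own extra 100 PGs per OSD; its pg_num should come out of the cluster-wide total.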
On Fri, Mar 9, 2018 at 5:40 PM, Janne Johansson wrote:
>
>
> 2018-03-09 10:27 GMT+01:00 Will Zhao :
>>
>> Hi all:
>>
>>
Hi all:
I have a tiny question. I have read the documents, and they
recommend approximately 100 placement groups per OSD for normal usage.
Because the pg num cannot be decreased, if the pg num in the current
cluster already meets this rule, what pg num should I set when I
create a new pool?
d when adding (rather than replacing), it does not
> do minimal PG re-assignments. But in terms of overall efficiency of
> adding/removing buckets at the end and in the middle of the hierarchy, it is
> the best overall compared with the other algorithms, as seen in chart 5 and
> table 2.
>
> On 2017-09-22 08:3
Hi Sage and all:
I am trying to understand CRUSH more deeply. I have tried to read
the code and the paper, and to search the mailing list archives, but I
still have some questions and can't understand it well.
If I have 100 OSDs, then when I add an OSD the osdmap changes,
and how is the PG mapping recalculated?
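One intuition that may help: CRUSH's straw2 buckets share their core idea with rendezvous (highest-random-weight) hashing. Each PG independently computes a pseudo-random draw per device and maps to the device with the highest draw, so adding an OSD only moves the PGs that the newcomer "wins"; every other mapping is unchanged. A toy demonstration of that stability property (plain rendezvous hashing, not CRUSH's actual straw2 math or hierarchy):

```python
import hashlib

def place(pg, osds):
    """Rendezvous hashing: map pg to the osd with the highest draw."""
    def draw(osd):
        return hashlib.sha256(f"{pg}:{osd}".encode()).hexdigest()
    return max(osds, key=draw)

before = {pg: place(pg, range(10)) for pg in range(1024)}
after = {pg: place(pg, range(11)) for pg in range(1024)}  # add osd 10

# Only PGs won by the new osd move, roughly 1/11 of them; the rest
# keep their old placement, which is why adding an osd triggers
# limited rather than wholesale data movement.
moved = [pg for pg in before if before[pg] != after[pg]]
assert all(after[pg] == 10 for pg in moved)
```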