When I execute a put-file operation at 17:10 local time, which converts to 09:10 UTC, and then run "radosgw-admin usage show --uid=test1 --show-log-entries=true --start-date="2015-04-27 09:00:00"", it does not seem to show anything.
When I check the code, I find funct
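For reference, a minimal sketch of querying the usage log with the window given in UTC (the uid and dates are just the ones from the example above; the conversion assumes local time is UTC+8):

    # 17:10 local (UTC+8) is 09:10 UTC, so the window is given in UTC here
    date -u '+%Y-%m-%d %H:%M:%S'
    radosgw-admin usage show --uid=test1 --show-log-entries=true \
        --start-date="2015-04-27 09:00:00" --end-date="2015-04-27 10:00:00"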
When I create a bucket, why does RGW create two objects in the domain-root pool, one storing struct RGWBucketInfo and the other storing struct RGWBucketEntryPoint?
And when I delete the bucket, why does RGW delete only one of the objects?
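In case it helps to see the two objects, a quick sketch of listing them (.rgw is the default domain-root pool, and "mybucket" / <bucket_id> are placeholders):

    # the entry-point object is named after the bucket; the instance object is .bucket.meta.<bucket>:<bucket_id>
    rados -p .rgw ls | grep mybucket
    radosgw-admin metadata get bucket:mybucket
    radosgw-admin metadata get bucket.instance:mybucket:<bucket_id>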
baijia...@126.com
mplete_op with CLS_RGW_OP_ADD, and the file size must be 0; so at this moment the bucket index records the file size as zero. I think this is not right.
baijia...@126.com
From: Yehuda Sadeh-Weinraub
Date: 2015-02-05 12:06
To: baijiaruo
CC: ceph-users
Subject: Re: [ceph-users] RGW put file question
When a put-file operation fails and the function "RGWRados::cls_obj_complete_cancel" runs, why do we use CLS_RGW_OP_ADD and not CLS_RGW_OP_CANCEL?
And why do we set poolid to -1 and epoch to 0?
baijia...@126.com
When I write a file named "1234%" in the master region, the rgw-agent sends a copy-obj request containing "x-amz-copy-source: nofilter_bucket_1/1234%" to the replica region, and it fails with a 404 error.
My analysis is that the rgw-agent does not URL-encode "x-amz-copy-source: nofilter_bucket_1/1234%", but rgw could decode
I know a single bucket has a performance problem, from http://tracker.ceph.com/issues/8473
I attempted to modify the CRUSH map to put the bucket.index pool on SSDs, but performance is still not good, and the SSDs are never fully utilized.
This is the op description; can you give me some suggestions to improve performance:
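One possible mitigation, only an assumption about your setup: if you are on a release that already has bucket index sharding (Hammer or later), spreading each bucket index over several shard objects can help, although it only applies to newly created buckets. A sketch of the config (section name and shard count are examples):

    # ceph.conf on the RGW host
    [client.radosgw.gateway]
        rgw override bucket index max shards = 16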
I applied the patch from http://tracker.ceph.com/issues/8452,
ran the s3 test suite, and it still fails;
err log: ERROR: failed to get obj attrs,
obj=test-client.0-31zepqoawd8dxfa-212:_multipart_mymultipart.2/0IQGoJ7hG8ZtTyfAnglChBO79HUsjeC.meta
ret=-2
I found code that may have a problem:
when the function executes "re
Can I use the librgw APIs like librados? If I can, how do I do it?
baijia...@126.com
When I start all the OSDs, I find that many of them fail to start. Logs as follows:
osd/SnapMapper.cc: 270: FAILED assert(check(oid))
ceph version ()
1: ceph-osd() [0x5e61c8]
2: (remove_dir(CephContext*, ObjectStore*, SnapMapper*, OSDriver*,
ObjectStore::Sequencer*, coll_t, std::tr1::shared_ptr,
Thre
When I read the RGW code, I can't understand master_ver inside struct rgw_bucket_dir_header.
Can someone explain this struct, especially master_ver and stats? Thanks.
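If it helps to look at a live header, it can be pulled off the index object and decoded (a sketch; pool and object names are the defaults plus a placeholder bucket id, and it assumes your ceph-dencoder build includes the cls_rgw types):

    # the bucket index header lives in the omap header of the .dir.<bucket_id> object
    rados -p .rgw.buckets.index getomapheader .dir.<bucket_id> /tmp/hdr.bin
    ceph-dencoder type rgw_bucket_dir_header import /tmp/hdr.bin decode dump_json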
baijia...@126.com
min_wait 600 seconds, rgw_gc_processor_max_time 300 seconds, rgw_gc_processor_period 300 seconds.
After ten minutes I see that "default.4804.1__shadow" is deleted,
but when does ceph delete ".bucket.meta.:default.4804.1" and ".dir.default.4804.1"?
baijia...@126.com
I created a bucket and put some objects in it. But after I delete all the objects and the bucket, why do the bucket.meta object and the bucket index object still exist? When does ceph recycle them?
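The shadow (tail) data objects are handled by the RGW garbage collector, which can be inspected and triggered by hand; a sketch:

    radosgw-admin gc list --include-all   # show pending gc entries, including ones not yet due
    radosgw-admin gc process              # run a gc cycle now instead of waiting for the timer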
baijia...@126.com
hi, everyone!
I am testing RGW get-obj ops. When I use 100 threads to get one and the same object, I find that performance is very good; the mean response time is 0.1s.
But when I use 150 threads to get one and the same object, performance is very bad; the mean response time is 1s.
And I observe the osd log and rgw
I find the osd log contains "fault with nothing to send, going to standby"; what happened?
baijia...@126.com
I put the .rgw.buckets.index pool on SSD OSDs, so the bucket index objects must be written to the SSDs, and disk utilization is less than 50%, so I don't think the disks are the bottleneck.
baijia...@126.com
From: baijia...@126.com
Date: 2014-07-04 01:29
To: Gregory Farnum
CC: ceph-users
Subject: Re: Re: [ceph-users] RGW perfor
Here it takes the pg lock.
Where is the pg lock held for such a long time?
thanks
baijia...@126.com
From: Gregory Farnum
Date: 2014-07-04 01:02
To: baijia...@126.com
CC: ceph-users
Subject: Re: [ceph-users] RGW performance test , put 30 thousands objects to
one bucket, average latency 3 seconds
It looks like you're ju
When I look at the function "OSD::OpWQ::_process", I find that the pg lock is held across the whole function. So when I use multiple threads to write the same object, must they be serialized from the OSD handling thread all the way to the journal write thread?
baijia...@126.com
hi, everyone
When I test RGW with rest-bench, using the cmd: rest-bench --access-key=ak --secret=sk --bucket=bucket --seconds=360 -t 200 -b 524288 --no-cleanup write
I found that when RGW calls the method "bucket_prepare_op" it is very slow, so I looked at 'dump_historic_ops' and saw:
{ "descript
costs from 0.5 to 1 second, so the whole ondisk_finisher must wait 1 second. How can I avoid taking the pg lock in ReplicatedPG::op_commit?
thanks
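For reference, a sketch of pulling those slow ops from an OSD admin socket (osd.0 and the socket path are placeholders for whichever OSD is slow):

    ceph daemon osd.0 dump_historic_ops
    # or via the socket file directly:
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops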
baijia...@126.com
From: Guang Yang
Date: 2014-07-01 11:39
To: baijiaruo
CC: ceph-users
Subject: Re: [ceph-users] Ask a performance question for the RGW
On Jun
nish" to "op_commit" costs 3.6 seconds, so I can't understand this; what happened?
thanks
baijia...@126.com
From: Guang Yang
Date: 2014-06-30 14:57
To: baijiaruo
CC: ceph-users
Subject: Re: [ceph-users] Ask a performance question for the RGW
Hello,
There is a known limitation of
hello, everyone!
When I test RGW performance with rest-bench, the cmd is:
./rest-bench --access-key=ak --secret=sk --bucket=bucket_name --seconds=600 -t 200 -b 524288 --no-cleanup write
test result:
Total time run:     362.962324
Total writes made:  48189
Write size:         524288
Bandwidth (MB/sec):
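(The bandwidth value is cut off above; working it out from the other numbers gives roughly 48189 writes / 363 s ≈ 133 writes/s, which at 512 KB per object is about 66 MB/s.)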