changes the sst files when compaction is triggered. The additional
improvement is Snappy compression. We rebuilt ceph with support for it.
I can create a PR with it, if you want :)
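
If anyone wants to try it, I believe the relevant knob is
bluestore_rocksdb_options in ceph.conf, roughly like below (a sketch only:
keep the rest of your release's default option string, I am only showing
the compression part, and the OSDs need a restart and a compaction before
existing sst files are rewritten):

[osd]
bluestore_rocksdb_options = compression=kSnappyCompression,<rest of the default options>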
Best Regards,
Rafał Wądołowski
Cloud & Security Engineer
On 25.06.2019 22:16, Christian Wuerdig wrote:
Why did you select these specific sizes? Are there any tests/research
behind them?
Best Regards,
Rafał Wądołowski
On 24.06.2019 13:05, Konstantin Shalygin wrote:
>
>> Hi
>>
>> Have been thinking a bit about rocksdb and EC pools:
>>
>> Since a RADOS object writte
Yes, but look into the pgs array. It shouldn't be empty.
That should be addressed by this PR: https://tracker.ceph.com/issues/40377
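
Side note: the 300s mentioned below is osd_beacon_report_interval. If you
want to verify what an OSD is actually using, something like this works
(osd.0 is just an example), assuming you have access to its admin socket:

ceph daemon osd.0 config get osd_beacon_report_interval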
Best Regards,
Rafał Wądołowski
On 16.06.2019 07:06, huang jun wrote:
> osds send beacons every 300s, and that's used to let the mon know that
> the osd
"event": "forward_request_leader"
},
{
"time": "2019-06-14 06:39:37.973064",
"event": "forwarded"
}
],
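
For anyone who wants to look at the full trace, an event list like the one
above can be dumped from the monitor's op tracker, e.g. (mon.a is a
placeholder for one of your mon IDs):

ceph daemon mon.a ops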
Best Regards,
Rafał Wądołowski
Hi,
We have some buckets with ~25M files inside.
We're also using bucket index sharding. The performance is good; we are
focused on reads.
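
For anyone setting this up, checking and adjusting the shard count is
roughly the following (a sketch: the bucket name and shard count below are
placeholders, and online resharding needs a recent Luminous build):

radosgw-admin bucket limit check
radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=128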
BR,
Rafal Wadolowski
On 20.04.2018 00:57, Robert Stanford wrote:
>
> The rule of thumb is not to have tens of millions of objects in a
> radosgw bucket,
flags hashpspool
stripe_width 0 application rgw
pool 8 'default.rgw.buckets.non-ec' replicated size 3 min_size 2
crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 125197
flags hashpspool stripe_width 0 application rgw
Thank you for your help
--
BR,
Rafał Wądołowski
(http://ceph.com/geen-categorie/quick-analysis-of-the-ceph-io-layer/)
and it would be good to have an explanation of it.
@Mark, could you tell us (the community) whether this is normal behaviour
for these tests? What is the difference?
BR,
Rafał Wądołowski
On 05.01.2018 19:29, Christian Wuerdig wrote:
They are configured with bluestore.
The network, CPU and disks are doing nothing; I was observing with atop,
iostat and top.
I have a similar hardware configuration on jewel (with filestore), and it
performs well there.
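
For completeness, typical invocations would be something like this (the
flags are just what I would normally reach for, nothing special):

iostat -xm 1
atop 1
ceph osd perf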
Cheers,
Rafał Wądołowski
On 04.01.2018 17:05, Luis Periquito wrote:
I have a size of 2.
We know about this risk and we accept it, but we still don't know why the
performance is so bad.
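
For reference, checking or changing that per pool is just the following
(the pool name below is a placeholder):

ceph osd pool get <pool> size
ceph osd pool set <pool> size 3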
Cheers,
Rafał Wądołowski
On 04.01.2018 16:51, c...@elchaka.de wrote:
I assume you have a size of 3; then divide your expected 400 by 3 and
you are not far away from what yo
Hi folks,
I am currently benchmarking my cluster because of a performance issue, and
I have no idea what is going on. I am using these devices in qemu.
Ceph version 12.2.2
Infrastructure:
3 x Ceph-mon
11 x Ceph-osd
Each Ceph-osd node has 22x Samsung SSD 850 EVO 1TB
96GB RAM
2x E5-2650 v4
4x10G network
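
For comparison, a quick baseline outside of qemu can be taken with rados
bench against a throwaway pool, something like this (the pool name and
runtime are just examples):

rados bench -p bench 60 write --no-cleanup
rados bench -p bench 60 seq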
http_status=416 ==
2017-12-18 15:44:42.642716 7f065499c700 20 process_request() returned -34
During debugging I see that the "int
RGWRados::init_bucket_index(RGWBucketInfo& bucket_info, int num_shards)"
function returns -34.
Is it a known bug, or do I have some wrong parameter?
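
For context, -34 is ERANGE, so my guess (only a guess) is that a
shard-count setting is out of range. The first thing I would compare is
rgw_override_bucket_index_max_shards, e.g. via the RGW admin socket (the
daemon name below is a placeholder):

ceph daemon client.rgw.gateway1 config get rgw_override_bucket_index_max_shards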
Hi,
Is there any known fast procedure for deleting objects in large buckets? I
have about 40 million objects. I used:
radosgw-admin bucket rm --bucket=bucket-3 --purge-objects
but it is very slow. I am using Ceph Luminous (12.2.1).
Does it delete in parallel?
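
What I would like to end up with is something roughly like this; the
--max-concurrent-ios flag is taken from a newer radosgw-admin man page and
I am not sure 12.2.1 already has it:

radosgw-admin bucket rm --bucket=bucket-3 --purge-objects --max-concurrent-ios=128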
--
BR,
Rafał Wądołowski
Finally, I've found the command:
ceph daemon osd.1 perf dump | grep bluestore
and there you can see the compressed data.
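
The counters to look at (names as I understand the bluestore perf
counters, so double-check on your version) are bluestore_compressed,
bluestore_compressed_original and bluestore_compressed_allocated, so a
slightly more targeted version is:

ceph daemon osd.1 perf dump | grep bluestore_compressed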
Regards,
Rafał Wądołowski
http://cloudferro.com/
On 04.12.2017 14:17, Rafał Wądołowski wrote:
Hi,
Is there any command or t
Hi,
Is there any command or tool to show the effectiveness of bluestore
compression?
I can see the difference (in ceph osd df tree) while uploading an object to
ceph, but maybe there is a more friendly method to do it.
--
Regards,
Rafał Wądołowski