Hi, we are using Ceph with RadosGW and the S3 API. As more and more objects
accumulate in the storage, the write speed slows down significantly. With 5 million
objects in the storage we had a write speed of 10 MB/s. With 10 million objects
in the storage it is only 5 MB/s. Is this a common issue? Is the
Hi,
you can (and may need to) configure the cluster network and the public network separately.
In ceph.conf that means:
public_network = 1.2.3.0/24
cluster_network = 2.3.4.0/24
as an example.
So for every node in your Ceph cluster you would use:
1x 10G port for the public network IP
1x 10G port for the cluster network
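Put together, the relevant ceph.conf section would look roughly like this (the subnets are example values, same as above):

[global]
# client and monitor traffic goes over the public network
public_network = 1.2.3.0/24
# OSD replication and recovery traffic goes over the cluster network
cluster_network = 2.3.4.0/24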
Hello,
I have 2 pools as below:
1. pool1 with erasure type
2. pool2 with replicated type
I ran the "rados bench" with above 2 pool and the results came as below:
- Read performance - around 60% better for replicated type pool ie pool2
- Write performance - around 50 % better for erasure type poo
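(For reference, a comparison like this is usually run along the following lines; the exact invocation below is an assumption, since it was not shown here:)

# write test, keeping the objects so they can be read back afterwards
rados bench -p pool1 60 write --no-cleanup
rados bench -p pool2 60 write --no-cleanup
# sequential read test against the objects written above
rados bench -p pool1 60 seq
rados bench -p pool2 60 seq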
Yes that seems about right.
Erasure coding works similarly to RAID 5/6, so data is striped, whereas
replicated pools are simple copies of the data. When you write to a 3x
replicated pool you have to write 3 times as much data, so performance
is lower. When writing to an erasure coded pool (k=8 m=
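To put rough numbers on the write amplification (the m value below is assumed purely for illustration, since the original figure is cut off):

replicated pool, size 3:      3 bytes written to disk per byte of client data
erasure coded pool, k=8 m=2:  (k+m)/k = 10/8 = 1.25 bytes per byte of client data

So the raw amount of data written is much lower for EC, at the cost of extra CPU for encoding and more OSDs touched per operation.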
If you want redundancy and link aggregation then you would use LACP (or one
of the Linux pseudo modes) with bonding in Linux. This is common and not
Ceph specific.
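As a sketch only (the interface names and address are made up, and the exact file differs per distro), an LACP bond on a Debian-style system looks something like:

# /etc/network/interfaces (ifenslave installed)
auto bond0
iface bond0 inet static
    address 1.2.3.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
# bond-mode 802.3ad is LACP; the switch ports must be configured for LACP as well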
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On Jan 23, 2016 6:39 PM, "名花" wrote:
>
> Hi, I have a 4-port 10Gb
Thanks for the details.
What is the most suitable pool type for object storage?
Thanks
Swami
On Sun, Jan 24, 2016 at 8:58 PM, Nick Fisk wrote:
> Yes that seems about right.
>
> Erasure coding works similar to Raid5/6 so data will be striped, whereas
> replicated pools are simple copies of data. When
Did it work for you to just change 'straw' to 'straw2' in your crushmap?
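(For reference, the usual round trip for that kind of crush map edit is roughly the following; this is just a sketch of the standard crushtool workflow, not what was actually run:)

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt and change "alg straw" to "alg straw2" in the buckets
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
# note: straw2 requires Hammer-or-newer clients and kernels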
Thanks,
Shinobu
On Mon, Sep 21, 2015 at 9:39 PM, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:
>
> Am 21.09.2015 um 13:47 schrieb Wido den Hollander:
> >
> >
> > On 21-09-15 13:18, Dan van der Ster wrote:
> >
Hi All,
I used the 'ceph tell' command to inject the argument mon_osd_full_ratio with a
certain value like 10%, and found that 'rados bench' can write more than 10% (it may
slow down when approaching 10%; you can interrupt the process and restart it
with 'rados bench'). Only 'ceph pg set_full_ratio
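(Roughly the commands in question, with a placeholder pool name and value:)

# inject the mon-side ratio at runtime
ceph tell mon.* injectargs '--mon_osd_full_ratio 0.10'
# the PG-level full ratio is set separately
ceph pg set_full_ratio 0.10
# then fill the cluster with benchmark data
rados bench -p testpool 3600 write --no-cleanup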
Hi,
there's a rogue file in our CephFS that we are unable to remove. Access
to the file (removal, move, copy, open etc.) results in the MDS starting
to spill out the following message to its log file:
2016-01-25 08:39:09.623398 7f472a0ee700 0 mds.0.cache
open_remote_dentry_finish bad remote
Hi,
is there a guide or recommendation for optimized SSD settings for Hammer?
We have:
CPU E5-1650 v3 @ 3.50GHz (12 core incl. HT)
10x SSD per node, journal and filesystem on the same SSD
currently we're running:
- with auth disabled
- all debug settings to 0
and
ms_nocrc = true
osd_op_num_threads_per_shar
The ms_nocrc option has been replaced by the following in Hammer:
ms_crc_data = false
ms_crc_header = false
The rest looks good; you need to tweak the shard/thread settings based on your CPU complex
and the total number of OSDs running on a box.
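For example, the knobs in question go in ceph.conf roughly like this (the values are placeholders to be sized against your core count and OSDs per box, not recommendations):

[osd]
osd_op_num_shards = 10
osd_op_num_threads_per_shard = 2
# shards * threads_per_shard should fit within the cores available per OSD host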
BTW, with the latest Intel instruction sets the CRC overhead is re