Jonathan,
Yes, we have 10,000 containers, and we're using COSBench to run the tests.
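To give a sense of the shape of the workload file, here is a rough sketch of a COSBench workload definition; the auth details, worker counts, runtime, and ranges are illustrative, not our exact configuration:

<?xml version="1.0" encoding="UTF-8"?>
<workload name="swift-10k-containers" description="illustrative sketch only">
  <auth type="swauth" config="..."/>
  <storage type="swift" config="..."/>
  <workflow>
    <workstage name="init">
      <!-- create the 10,000 containers up front -->
      <work type="init" workers="8" config="containers=r(1,10000)"/>
    </workstage>
    <workstage name="main">
      <work name="rw" workers="64" runtime="600">
        <operation type="read" ratio="80" config="containers=u(1,10000);objects=u(1,10000)"/>
        <operation type="write" ratio="20" config="containers=u(1,10000);objects=u(1,10000);sizes=c(64)KB"/>
      </work>
    </workstage>
  </workflow>
</workload>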
Sincerely, Yuan
On Wed, Jun 19, 2013 at 9:24 AM, Jonathan Lu wrote:
> Hi, Zhou,
> BTW, in test case 2, is the number of containers 10,000 or just 10?
>
>
> Jonathan Lu
>
On 2013/6/18 19:18, ZHOU Yuan wrote:
Jonathan, we happen to use SNs similar to yours, and I can share some
performance testing data here:
1. 100 containers with 1M objects (base)
The performance is quite good and can hit the HW bottleneck.
2. 10k containers with 100M objects
The performance is not so good; it dropped 80% compared with the base case.
Hi, Huang
Thanks a lot. I will try this test.
One more question:
In the following 3 situations, will the baseline performance be
quite different?
1. only 1 container with 10M objects;
2. 100,000 objects per container at 100 containers;
3. 1,000 objects per container at 10,000 containers?
On Tue, Jun 18, 2013 at 12:35 PM, Jonathan Lu wrote:
> Hi, Huang
> Thanks for your explanation. Does it mean that a storage cluster of
> a given processing ability will get slower and slower with more and more
> objects? Is there any test of the rate of decline, or is there any
> lower limit?
Hi, Huang
Thanks for your explanation. Does it mean that a storage cluster of
a given processing ability will get slower and slower with more and
more objects? Is there any test of the rate of decline, or is
there any lower limit?
For example, my environment is:
Swift
Hi Huang,
Storage nodes will run out of memory for caching inodes eventually, right?
Have you ever measured the upper limit of the caching capability of your
storage nodes?
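As a back-of-envelope figure (assuming roughly 1 KB per cached XFS inode; the real objsize varies by kernel and can be read from slabinfo), 100M objects would need on the order of 100 GB of RAM to keep every inode cached:

# print objsize (bytes) and active object count for the xfs inode slab
$ sudo awk '$1 == "xfs_inode" {print "objsize:", $4, "B  active:", $2}' /proc/slabinfo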
Hi Jonathan,
The default reclaiming time is 7 days. Did you wait for 7 days, or just
change the setting to 1 min in the conf file?
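For reference, the knob I mean; a minimal sketch assuming the stock object-server.conf layout, where the default 604800 s is the 7 days above:

[object-replicator]
reclaim_age = 60    # 1 minute, for testing only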
Hi Hugo,
I know the tombstone mechanism. In my opinion, after the reclaim
time the xxx.tombstone object will be deleted entirely. Is that right?
Maybe I misunderstand the doc :( ...
We tried to "cool down" the Swift system (just waiting for the
reclaiming) and then ran the test again, but the result is not satisfactory.
Hi Jonathan,
How did you perform "delete all the objects in the storage"? Those
deleted objects still consume inodes in tombstone status until the reclaim
time.
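On disk those tombstones are *.ts files under the object directories, so you can count what is still waiting for reclaim (assuming the default /srv/node mount point):

$ sudo find /srv/node -name '*.ts' | wc -l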
Would you mind comparing the result of $> sudo cat /proc/slabinfo | grep
xfs before/after setting vfs_cache_pressure?
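I.e. something along these lines (the value 50 is just an example):

$ sudo grep xfs /proc/slabinfo              # before
$ sudo sysctl -w vm.vfs_cache_pressure=50
# ... rerun the workload ...
$ sudo grep xfs /proc/slabinfo              # after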
spongebob@
On Tue, Jun 18, 2013 at 10:42 AM, Jonathan Lu wrote:
> On 2013/6/17 18:59, Robert van Leeuwen wrote:
>
>> I'm facing an issue with performance degradation, and I came across a
>> hint that changing the value in /proc/sys/vm/vfs_cache_pressure might
>> help. Can anyone explain to me whether and why it is useful?
On 2013/6/17 18:59, Robert van Leeuwen wrote:
I'm facing an issue with performance degradation, and I came across a hint
that changing the value in /proc/sys/vm/vfs_cache_pressure might help.
Can anyone explain to me whether and why it is useful?
Hi,
When this is set to a lower value, the kernel will try to keep the
inode/dentry caches in memory longer.
> I'm facing an issue with performance degradation, and I came across a
> hint that changing the value in /proc/sys/vm/vfs_cache_pressure might
> help. Can anyone explain to me whether and why it is useful?
Hi,
When this is set to a lower value, the kernel will try to keep the
inode/dentry caches in memory longer.
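Concretely, it can be inspected and lowered at runtime (50 is just an example value):

$ cat /proc/sys/vm/vfs_cache_pressure        # kernel default is 100
$ echo 50 | sudo tee /proc/sys/vm/vfs_cache_pressure
$ echo 'vm.vfs_cache_pressure = 50' | sudo tee -a /etc/sysctl.conf    # persist across reboots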
Can you say more about your environment and setup, and then about the
measurements you are taking to categorize the performance issue?
On Mon, Jun 17, 2013 at 3:25 AM, Jonathan Lu wrote:
> Hi, all,
> I'm facing an issue with performance degradation, and I came across a
> hint that changing the value in /proc/sys/vm/vfs_cache_pressure might help.
Checking the slab usage of xfs_* might help:
$ sudo slabtop
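For a one-shot snapshot sorted by cache size, something like:

$ sudo slabtop -o -s c | grep -i xfs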
Hope it helps.
Sent from my iPhone
On 2013/6/17 3:25 PM, Jonathan Lu wrote:
> Hi, all,
> I'm facing an issue with performance degradation, and I came across a
> hint that changing the value in /proc/sys/vm/vfs_cache_pressure might
> help.