Ah, totally forgot the additional details :)
OS is SUSE Linux Enterprise 12 with all patches,
Ceph version 0.94.3
4 node cluster with 2x 10GbE networking, one for the cluster network and one
for the public network, plus 1 additional server purely as an admin server.
The test machine is also connected at 10GbE.
ceph.conf
On Wed, Dec 23, 2015 at 5:20 AM, HEWLETT, Paul (Paul)
wrote:
> Seasons Greetings Cephers..
>
> Can I assume that http://tracker.ceph.com/issues/12200 is fixed in
> Infernalis?
>
> Any chance that it can be backported to Hammer? (I don’t see it planned)
>
> We are hitting this bug more frequently
Hi all, after basically throwing away the SSDs we had because of very poor
journal write performance, I tested our test systems with spindle drives only.
The results are quite horrifying and I get the distinct feeling that I am doing
something wrong somewhere.
So read performance is great, giving
As another data point, I recently bought a few 240GB SM863s, and found I
was getting 79 MB/s on the single job test.
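(For reference, the "single job test" here is presumably the usual O_DSYNC
journal write benchmark; a minimal fio sketch, assuming /dev/sdX is the journal
SSD (note this writes directly to the device and destroys data on it):)
$ fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
      --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test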
In my case the SSDs are running off the onboard Intel C204 chipset's
SATA controllers on a couple of systems with single Xeon E3-1240v2 CPUs.
Alex
On 23/12/2015 6:39 PM, Lione
On 23/12/2015 18:37, Mart van Santen wrote:
> So, maybe you are right and the HBA is the bottleneck (LSI Logic /
> Symbios Logic MegaRAID SAS 2108). In any case, I do not get
> close to the numbers for the PM863 quoted by Sebastien. But his site
> does not state what kind of HBA he is
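(Side note, in case it helps to compare setups: the exact HBA model can usually
be read straight from lspci; a sketch:)
$ lspci | grep -i -E 'raid|sas|scsi'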
Hello,
On 12/23/2015 04:38 PM, Lionel Bouton wrote:
> On 23/12/2015 16:18, Mart van Santen wrote:
>> Hi all,
>>
>>
>> On 12/22/2015 01:55 PM, Wido den Hollander wrote:
>>> On 22-12-15 13:43, Andrei Mikhailovsky wrote:
Hello guys,
Was wondering if anyone has done testing on Samsu
On 23/12/2015 16:18, Mart van Santen wrote:
> Hi all,
>
>
> On 12/22/2015 01:55 PM, Wido den Hollander wrote:
>> On 22-12-15 13:43, Andrei Mikhailovsky wrote:
>>> Hello guys,
>>>
>>> Was wondering if anyone has done testing on Samsung PM863 120 GB version to
>>> see how it performs? IMHO the 48
Hi all,
On 12/22/2015 01:55 PM, Wido den Hollander wrote:
> On 22-12-15 13:43, Andrei Mikhailovsky wrote:
>> Hello guys,
>>
>> Was wondering if anyone has done testing on Samsung PM863 120 GB version to
>> see how it performs? IMHO the 480GB version seems like a waste for the
>> journal as you
>Thanks for your quick reply. Yeah, the number of files really will be the
>potential problem. But if it is just a memory problem, we could use more memory
>in our OSD servers.
Adding more memory might not be a viable solution:
Ceph does not say how much data is stored in an inode, but the docs say the
xa
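(For illustration, the xattrs FileStore keeps on each object file can be dumped
directly with getfattr; a sketch with a hypothetical OSD path and object file:)
$ getfattr -d -m '.*' /var/lib/ceph/osd/ceph-0/current/0.1_head/<object file>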
Hi, Robert
Thanks for your quick reply. Yeah, the number of files really will be the
potential problem. But if it is just a memory problem, we could use more memory
in our OSD servers.
Also, I tested it on XFS using mdtest; here is the result:
$ sudo ~/wulb/bin/mdtest -I 1 -z 1 -b 1024 -R -F
---
>In order to reduce the enlarge impact, we want to change the default object
>size from 4M to 32k.
>
>We know that this will increase the number of objects per OSD and make the
>remove process take longer.
>
>Hmm, here I want to ask you guys: are there any other potential problems that
>will 3
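(If these are RBD-backed objects, the per-image object size can be chosen at
creation time via the order parameter, where order 15 means 2^15 = 32k objects;
a sketch with a hypothetical pool and image name:)
$ rbd create testpool/testimage --size 10240 --order 15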
Hi cephers, Sage and Haomai,
Recently we got stuck on a performance drop problem during recovery. The
scenario is simple:
1. run fio with rand write (bs=4k)
2. stop one osd; sleep 10; start the osd
3. the IOPS drops from 6K to about 200
We now know the SSD which that OSD is on is the bottleneck when reco
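(Not a fix for the SSD itself, but the usual mitigation for client IOPS
collapsing during recovery is to throttle recovery and backfill; a sketch of
runtime injection on Hammer, with the values only as examples:)
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'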
Hi,
On which operating system are you running this?
Cheers
On 23/12/2015 08:50, gongfengguang wrote:
> Hi all,
>
>When I exec ./install-deps.sh, there are some errors:
>
>
>
>--> Already installed : junit-4.11-8.el7.noarch
>
> No uninstalled build requires
>
> Running virtualenv
Hi,
We did a PoC at Orange and encountered some difficulties in configuring
federation.
Can you check that placement targets are identical on each zone?
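(One way to compare is to dump each zone's configuration and diff the placement
sections; a sketch, zone names are placeholders:)
$ radosgw-admin zone get --rgw-zone=us-east > zone-east.json
$ radosgw-admin zone get --rgw-zone=us-west > zone-west.json
$ diff zone-east.json zone-west.json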
brgds
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
wd_hw...@wistron.com
Sent: Friday 6 November 2015 03
Seasons Greetings Cephers..
Can I assume that http://tracker.ceph.com/issues/12200 is fixed in Infernalis?
Any chance that it can be backported to Hammer? (I don’t see it planned)
We are hitting this bug more frequently than desired so would be keen to see it
fixed in Hammer
Regards
Paul
On 22/12/2015 20:03, koukou73gr wrote:
Even the cheapest stuff nowadays has some more or less decent wear
leveling algorithm built into its controller, so this won't be a
problem. Wear leveling algorithms cycle the blocks internally so wear
evens out across the whole disk.
But it would wear out
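(If wear is the concern, most SSDs expose an endurance/wear indicator over
SMART; a sketch, the device name is only an example:)
$ smartctl -a /dev/sdb | grep -i -E 'wear|life'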
On Tue, Nov 24, 2015 at 8:48 PM, Somnath Roy wrote:
> Hi Yehuda/RGW experts,
>
> I have one cluster with RGW up and running in the customer site.
>
> I did some heavy performance testing on that with CosBench and as a result
> have written a significant amount of data to showcase performance.
>
>
After messing up some of my data in the past (my own doing, playing with
BTRFS in old kernels), I've been extra cautious and now run a ZFS mirror
across multiple RBD images. It's led me to believe that I have a faulty
SSD in one of my hosts:
sdb without a journal - fine (but slow)
sdc without a jo
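(For what it's worth, ZFS keeps per-device read/write/checksum error counters,
which is a quick way to see which side of the mirror is misbehaving; a sketch:)
$ zpool status -v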