On 1/15/20 11:58 PM, Paul Emmerich wrote:
we ran some benchmarks with a few samples of Seagate's new HDDs that
some of you might find interesting:
Blog post:
https://croit.io/2020/01/06/2020-01-06-benchmark-mach2
GitHub repo with scripts and raw data:
https://github.com/croit/benchmarks/tree
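For anyone who wants to run a comparable raw-device test themselves, a
typical fio invocation for this kind of HDD benchmark might look like the
following. This is a sketch, not necessarily what the scripts in the repo
use; /dev/sdX is a placeholder and the parameters are illustrative:

# DESTRUCTIVE: writes directly to the raw device
$ fio --name=randwrite-4k --filename=/dev/sdX --direct=1 \
      --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=300 --time_based --group_reporting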
Hi,
Interesting technology!
It seems they have only one capacity: 14TB? Or are they planning
different sizes as well? Also the linked PDF mentions just this one disk.
And obviously the price would be interesting to know...
MJ
On 1/16/20 9:51 AM, Konstantin Shalygin wrote:
On 1/15/20 11:58 PM, Paul Emmerich wrote:
More details, different capacities etc:
https://www.seagate.com/nl/nl/support/internal-hard-drives/enterprise-hard-drives/exos-X/
MJ
On 1/16/20 9:51 AM, Konstantin Shalygin wrote:
On 1/15/20 11:58 PM, Paul Emmerich wrote:
we ran some benchmarks with a few samples of Seagate's new HDDs that
Hello,
according to the prices we have heard so far, the Seagate dual-actuator
HDD will cost around 15-20% more than a single-actuator drive.
We can help with a good hardware selection if interested.
--
Martin Verges
Managing director
Anybody upgraded to 14.2.6 yet?
On an 1800-OSD cluster I see that ceph-mgr is consuming 200 to 450% CPU
on a 4C/8T system (Intel Xeon E3-1230 3.3 GHz CPU).
The logs don't show anything special; the mgr is just
very busy.
I noticed this when I executed:
$ ceph balancer status
That
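For anyone who wants to check whether their mgr is affected in the same
way, a quick sketch using standard tools (nothing cluster-specific
assumed):

# which mgr is active right now
$ ceph mgr dump | grep active_name
# one-shot CPU snapshot of the mgr process
$ top -b -n 1 -p $(pgrep -f ceph-mgr)
# time the call that hangs when the mgr is overloaded
$ time ceph balancer status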
Hi,
I've experienced exactly the same with 14.2.5 and upgraded to 14.2.6.
I'm running a 7-node cluster with ~500 OSDs.
Since the upgrade, ceph-mgr CPU usage is back to normal, and ceph
balancer status is responsive.
However, balancing is still not working... but this is another issue.
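For anyone following along, these are the standard balancer module
commands (the upmap mode assumes all clients are Luminous or newer):

$ ceph balancer mode upmap
$ ceph balancer on
# lower scores are better; compares actual vs. optimal PG distribution
$ ceph balancer eval
$ ceph balancer status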
Thomas
Am
Hey Wido,
We upgraded a 550-osd cluster from 14.2.4 to 14.2.6 and everything seems to
be working fine. Here's top:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
1432693 ceph      20   0 3246580   2.0g  18260 S  78.4 13.9   2760:58 ceph-mgr
2075038 ceph      20   0
Hi,
The results look strange to me...
To begin with, it's strange that read and write performance differ. But
the thing is that many (if not most) large Seagate EXOS drives have an
internal SSD cache (~8 GB of it). I suspect that the new EXOS does too, and
I'm not sure whether Toshiba has it. It could
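One way to take such a cache out of the picture is to write more data
than it can absorb. A sketch (illustrative parameters only; /dev/sdX is a
placeholder and the 32G size is simply chosen to exceed an ~8 GB cache):

# DESTRUCTIVE: sustained direct writes well beyond the cache size,
# so the later part of the run has to hit the platters
$ fio --name=beyond-cache --filename=/dev/sdX --direct=1 \
      --ioengine=libaio --rw=write --bs=1M --size=32G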
Hello, Cephers,
I have a small 6-node cluster with 36 OSDs. When running
benchmark/torture tests I noticed that some nodes, usually storage2n6-la
but sometimes others, are utilized much more. I see some OSDs at 100%
utilization and the load average goes up to 21, while on the others the load average
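In case it helps with narrowing this down, two standard commands that
usually make such imbalances visible:

# per-OSD utilization and PG count, grouped by host
$ ceph osd df tree
# per-OSD commit/apply latency - a few slow disks stand out here
$ ceph osd perf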
Hi,
The command "ceph daemon mds.$mds perf dump" no longer returns the
collections with MDS-specific data. In Mimic I get the following
MDS-specific collections:
- mds
- mds_cache
- mds_log
- mds_mem
- mds_server
- mds_sessions
But those are not available in Nautilus anymore (14.2.4). Also no
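For reference, this is how I'm listing the top-level collections on both
versions (assumes jq is installed; mds.$mds as above):

# list only the collection names, for easy diffing between versions
$ ceph daemon mds.$mds perf dump | jq 'keys'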
Chiming in to echo this.
250 OSDs here, and after 14.2.6 CPU usage is much lower on the mgr, and the
balancer no longer hangs, which was the main thing that would stall previously.
Reed
> On Jan 16, 2020, at 4:30 AM, Dan van der Ster wrote:
>
> Hey Wido,
> We upgraded a 550-osd cluster from 14.2.4 to 14.
Sorry, we no longer have these test drives :(
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Thu, Jan 16, 2020 at 1:48 PM wrote:
> Hi,
>
> The results look stra