I'm contemplating the same thing as well. Or rather, I'm actually doing some
testing. I have a Netlist EV3 and have seen ~6GB/s read and write for any
block size larger than 16k or so, IIRC.
Sebastien Han has a blog page with journal benchmarks; I've added the specifics
there.
This week, I e
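In case anyone wants to reproduce that sort of block-size sweep, something
along these lines with fio should do it. The device path is a placeholder for
whatever the EV3 presents as a block device, and the run will overwrite
whatever is on it:

    for bs in 4k 16k 64k 256k 1M; do
        fio --name=ev3-write-$bs --filename=/dev/XXX --direct=1 \
            --ioengine=libaio --rw=write --bs=$bs --iodepth=32 \
            --runtime=30 --time_based --group_reporting
    done

Swapping --rw=write for --rw=read covers the read side.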
Having some issues with blocked ops on a small cluster. Running
0.94.5 with cache tiering. 3 cache nodes with 8 SSDs each and 3
spinning nodes with 12 spinning disks and journals. All the pools are
3x replicas.
Started experiencing problems with OSDs in the cold tier consuming the
entirety of th
Having some problems with my cluster. Wondering if I could get some
troubleshooting tips:
Running hammer 0.94.5. Small cluster with cache tiering. 3 spinning
nodes and 3 SSD nodes.
Lots of blocked ops. OSDs are consuming the entirety of the system
memory (128GB) and then falling over. Lots o
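Unless there's a smarter approach, the next thing I plan to try is the usual
round of op and heap dumps (the OSD id is a placeholder; substitute whichever
OSD 'ceph health detail' points at):

    ceph health detail | grep blocked
    ceph daemon osd.NN dump_ops_in_flight
    ceph daemon osd.NN dump_historic_ops
    ceph tell osd.NN heap stats          # needs the tcmalloc build

dump_historic_ops includes per-event timestamps for each slow op, which is
usually the quickest way to see where an op is stalling.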
73/ceph/pg_9.21f.txt
When I query pg 12.258 it just hangs.
pg 11.58 query:
https://dl.dropboxusercontent.com/u/90634073/ceph/pg_11.58.txt
Not sure where to go from here.
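One thing I may try next is wrapping the query in a timeout and falling back
to the summary views, along these lines (pg ids as above):

    timeout 60 ceph pg 12.258 query > pg_12.258.txt
    ceph pg map 12.258
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean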
-H
On Tue, May 24, 2016 at 5:47 PM, Christian Balzer wrote:
>
> Hello,
>
> On Tue, 24 May
I fear I've hit a bug as well. Considering an upgrade to the latest release of
hammer. Somewhat concerned that I may lose those PGs.
-H
> On May 25, 2016, at 07:42, Gregory Farnum wrote:
>
>> On Tue, May 24, 2016 at 11:19 PM, Heath Albritton wrote:
>> Not going t
Separate OSPF areas would make this unnecessarily complex. In a world where
(some) routers are built to carry over half a million Internet prefixes, your
few hundred or few thousand /32s represent very little load to a modern
network element.
The number of links will hav
I'm wondering if anyone has some tips for managing different types of
pools, each of which fall on a different type of OSD.
Right now, I have a small cluster running with two kinds of OSD nodes,
ones with spinning disks (and SSD journals) and another with all SATA
SSD. I'm currently running cache
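To make the question concrete, the kind of layout I mean is the standard
split-root CRUSH arrangement on hammer: one root per media type, a rule per
root, and each pool pinned to a rule. Roughly (bucket, host, and pool names
below are placeholders):

    ceph osd crush add-bucket ssd root
    ceph osd crush move ssd-node1 root=ssd
    ceph osd crush rule create-simple ssd_rule ssd host
    # rule id comes from 'ceph osd crush rule dump'
    ceph osd pool set cachepool crush_ruleset 1

What I'm after is tips on managing this sanely as the number of pool and
OSD-type combinations grows.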
I've used the 400GB unit extensively for almost 18 months, one per six drives.
They've performed flawlessly.
In practice, journals will typically be quite small relative to the total
capacity of the SSD. As such, there will be plenty of room for wear leveling.
If there was some concern, one
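For a sense of scale: the stock sizing guidance is journal size = 2 x
(expected throughput x filestore max sync interval), so at ~180MB/s per
spinner and the default 5s sync interval that works out to well under 2GB per
journal. Even with six 10GB journals on one 400GB device, the vast majority of
the flash stays free for wear leveling. In ceph.conf terms (the figure is
illustrative, not a recommendation):

    [osd]
    # 2 * 180 MB/s * 5 s ~= 1800 MB, rounded up generously
    osd journal size = 10240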
I'm not sure what's normal, but I'm on OpenStack Juno with Ceph 0.94.5 using
separate pools for nova, glance, and cinder. Takes 16 seconds to start an
instance (el7 minimal).
Everything is on 10GE and I'm using cache tiering, which I'm sure speeds
things up. Can personally verify that COW is work
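For anyone chasing the same setup, the COW cloning path mostly comes down to
two settings plus keeping images in raw format (a sketch of the relevant bits,
not a complete config):

    # glance-api.conf -- expose the RBD location so cinder/nova can clone it
    show_image_direct_url = True

    # nova.conf, [libvirt] section -- ephemeral disks as RBD clones
    images_type = rbd

Images uploaded as qcow2 get fully copied instead of cloned, so raw is worth
the extra upload time.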
I've done a bit of testing with the Intel units: S3600, S3700, S3710, and
P3700. I've also tested the Samsung 850 Pro, 845DC Pro, and SM863.
All of my testing was "worst case IOPS" as described here:
http://www.anandtech.com/show/8319/samsung-ssd-845dc-evopro-preview-exploring-worstcase-iops/6
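The short version of that methodology, for anyone who wants to repeat it:
precondition the drive, then run 4k random writes long enough to reach steady
state while logging IOPS over time. With fio that's roughly (device path is a
placeholder and the run is destructive):

    fio --name=steady-state --filename=/dev/sdX --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
        --numjobs=4 --runtime=1800 --time_based --group_reporting \
        --log_avg_msec=1000 --write_iops_log=steady-state

The interesting number is where the IOPS curve flattens out, not the first few
minutes.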
> Did you just do these tests or did you also do the "suitable for Ceph"
> song and dance, as in sync write speed?
These were done with libaio, so async. I can do a sync test if that
helps. My goal for testing wasn't specifically suitability with ceph,
but overall suitability in my environment,
> On Thu, Mar 3, 2016 at 10:17 PM, Christian Balzer wrote:
> Fair enough.
> Sync tests would be nice, if nothing else to confirm that the Samsung DC
> level SSDs are suitable and how they compare in that respect to the Intels.
I'll do some sync testing next week and maybe gather my other results
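For the sync runs the plan is the usual single-job O_SYNC 4k write test, along
these lines (/dev/sdX is a placeholder and the run is destructive):

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting

That's the test that separates drives with proper power-loss protection from
the ones that fall off a cliff on sync writes.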
Neither of these file systems is recommended for production use underlying an
OSD. The general direction for ceph is to move away from having a file system
at all.
That effort is called "bluestore" and is supposed to show up in the jewel
release.
-H
> On Mar 18, 2016, at 11:15, Schlacta, Chr
The rule of thumb is to match the journal throughput to the OSD throughput.
I'm seeing ~180MB/s sequential write on my OSDs and I'm using one of the P3700
400GB units per six OSDs. The 400GB P3700 yields around 1200MB/s* and has
around 1/10th the latency of any SATA SSD I've tested.
I put a p
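Spelling out the arithmetic behind that ratio: six OSDs x ~180MB/s is roughly
1080MB/s of journal writes, which fits just under the ~1200MB/s the 400GB
P3700 will sustain, so the journal card stops short of becoming the bottleneck
at 6:1.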
If you google "ceph bluestore" you'll be able to find a couple slide decks on
the topic. One of them by Sage is easy to follow without the benefit of the
presentation. There's also the "Red Hat Ceph Storage Roadmap 2016" deck.
In any case, bluestore is not intended to address bitrot. Given th