On Sat, Apr 25, 2015 at 9:56 AM, Nikola Ciprich
wrote:
>>
>> It seems you just grepped for ceph-osd - that doesn't include sockets
>> opened by the kernel client, which is what I was after. Paste the
>> entire netstat?
> ouch, bummer! Here are the full netstats, sorry about the delay:
>
> http://nik.lb
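(For anyone following along: kernel-client connections don't belong to a ceph-osd process, so grepping for ceph-osd hides them. A minimal sketch of what was being asked for, assuming the default ports, 6789 for the mons and 68xx for the OSDs of a small cluster:

    # list every TCP connection on the usual Ceph ports
    netstat -tnp | grep -E ':(6789|68[0-9][0-9])'
    # or, with ss:
    ss -tnp | grep -E ':(6789|68[0-9][0-9])'

Kernel-client sockets show up here with no program name attached.)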
Hi,
Gregory Farnum wrote:
> The MDS will run in 1GB, but the more RAM it has the more of the metadata
> you can cache in memory. The faster single-threaded performance your CPU
> has, the more metadata IOPS you'll get. We haven't done much work
> characterizing it, though.
Ok, thanks for the answer.
Thanks Greg and Steffen for your answer. I will make some tests.
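As an illustration of that RAM/metadata-cache trade-off: on releases of this era the MDS cache is sized in inodes via mds cache size (default 100000), so more RAM simply means a larger cache. A hedged ceph.conf sketch, the value below being purely illustrative:

    [mds]
        ; very roughly a few KB of MDS memory per cached inode -- size to available RAM
        mds cache size = 1000000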
Gregory Farnum wrote:
> Yeah. The metadata pool will contain:
> 1) MDS logs, which I think by default will take up to 200MB per
> logical MDS. (You should have only one logical MDS.)
> 2) directory metadata objects, which contain th
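(A hedged way to spot-check that MDS log footprint, assuming the metadata pool is named "metadata" as later in this thread: rank 0's journal is striped into objects named after inode 0x200, so counting them and multiplying by the default 4MB object size gives a rough upper bound:

    rados -p metadata ls | grep '^200\.' | wc -l
)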
Yeah, that's definitely something we'd like to address soon.
Yehuda
- Original Message -
> From: "Ben"
> To: "Ben Hines" , "Yehuda Sadeh-Weinraub"
>
> Cc: "ceph-users"
> Sent: Friday, April 24, 2015 5:14:11 PM
> Subject: Re: [ceph-users] Shadow Files
>
> Definitely need something to he
We're currently putting data into our cephfs pool (cachepool in front
of it as a caching tier), but the metadata pool contains ~50MB of data
for 36 million files. If that 4KB-per-file estimate were accurate, we'd have
a metadata pool closer to ~140GB. Here is a ceph df detail:
http://people.beocat.cis.ksu
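For anyone reproducing those numbers, the per-pool figures come from commands along these lines (note they report object data only, which turns out to matter later in this thread):

    ceph df detail
    rados df
    rados -p metadata ls | wc -l    # object count in the metadata pool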
I'm able to reach around 20,000-25,000 IOPS with 4k blocks on an S3500 (with
o_dsync), so yes, around 80-100MB/s.
I'll bench the new S3610 soon to compare.
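For reference, the kind of journal-style test meant here is roughly the following; o_dsync forces a flush on every 4k write, which is what limits these drives. The target path is a placeholder -- point it at a scratch file or device you can overwrite:

    dd if=/dev/zero of=/path/to/scratch bs=4k count=100000 oflag=direct,dsync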
- Original Message -
From: "Anthony Levesque"
To: "Christian Balzer"
Cc: "ceph-users"
Sent: Friday, April 24, 2015 22:00:44
Subject: Re: [ceph-
That doesn't make sense -- 50MB for 36 million files is <1.5 bytes each.
How do you have things configured, exactly?
On Sat, Apr 25, 2015 at 9:32 AM Adam Tygart wrote:
> We're currently putting data into our cephfs pool (cachepool in front
> of it as a caching tier), but the metadata pool contain
cephfs (really ec84pool) is an EC pool (k=8, m=4); cachepool is a
writeback cache tier in front of ec84pool. As far as I know, we've not
done any strange configuration.
Potentially relevant configuration details:
ceph osd crush dump >
http://people.beocat.cis.ksu.edu/~mozes/ceph/crush_dump.txt
ceph
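For context, a layout like ec84pool + cachepool is typically built with commands along these lines (the profile name and PG counts here are placeholders, not our actual values):

    ceph osd erasure-code-profile set ec84profile k=8 m=4
    ceph osd pool create ec84pool 1024 1024 erasure ec84profile
    ceph osd pool create cachepool 1024
    ceph osd tier add ec84pool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ec84pool cachepool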
That's odd -- I almost want to think the pg statistics reporting is going
wrong somehow.
...I bet the leveldb/omap stuff isn't being included in the statistics.
That could be why and would make sense with what you've got here. :)
-Greg
On Sat, Apr 25, 2015 at 10:32 AM Adam Tygart wrote:
> ceph
Probably the case. I've checked about 10% of the objects in the metadata
pool (rados -p metadata stat $objname). They've all been 0-byte objects.
Most of them have 1-10 omap vals, usually 408 bytes each.
Based on the usage of the other pools on the SSDs, that comes out to
about 46GB of omap/leveldb stuff.
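The spot-check itself was along these lines (pool name as above; the loop and sample size are just a sketch):

    # sample some metadata-pool objects and compare data size vs. omap entries
    for obj in $(rados -p metadata ls | head -n 100); do
        rados -p metadata stat "$obj"                 # data size (0 bytes here)
        rados -p metadata listomapkeys "$obj" | wc -l # number of omap entries
    done

For scale, ~46GB of omap over ~36 million files works out to roughly 1.3KB per file.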
On Sat, 25 Apr 2015, Gregory Farnum wrote:
> That's odd -- I almost want to think the pg statistics reporting is going
> wrong somehow.
> ...I bet the leveldb/omap stuff isn't being included in the statistics.
> That could be why and would make sense with what you've got here. :)
Yeah, the pool
Yeah -- as I said, 4KB was a generous number. It's going to vary some
though, based on the actual length of the names you're using, whether you
have symlinks or hard links, snapshots, etc.
-Greg
On Sat, Apr 25, 2015 at 11:34 AM Adam Tygart wrote:
> Probably the case. I've check a 10% of the objec
Hello,
I think the dd test isn't a 100% replica of what Ceph actually does, then.
My suspicion would be the 4k blocks, since when people test maximum
bandwidth they do it with rados bench or other tools that write the
optimally sized "blocks" for Ceph, 4MB ones.
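For comparison, the large-object benchmark usually referred to is something like this (pool name, duration and thread count are placeholders):

    # 60-second write test with Ceph's default 4MB objects, 16 concurrent ops
    rados bench -p testpool 60 write -b 4194304 -t 16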
I currently have no unused
Hi
I was doing some testing on an erasure-coded CephFS cluster. The cluster is
running the Giant 0.87.1 release.
Cluster info:
15 * 36-drive nodes (journal on the same OSD)
3 * 4-drive SSD cache nodes (Intel DC S3500)
3 * MON/MDS
EC 10+3
10G Ethernet for the public and cluster networks
We got app
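For reference, an EC 10+3 data pool of this sort is typically defined roughly as below (profile name, PG count and failure domain are placeholders); on giant, CephFS can't use the EC pool directly, which is what the replicated SSD cache tier in front of it is for:

    ceph osd erasure-code-profile set ec103profile k=10 m=3 ruleset-failure-domain=host
    ceph osd pool create cephfs_data 4096 4096 erasure ec103profile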
Hi,
With inspiration from all the other performance threads going on here, I
started to investigate on my own as well.
I’m seeing a lot of iowait on the OSDs, and the journal utilised at 2-7%, with
about 8-30MB/s (mostly around 8MB/s write). This is a dumpling cluster. The
goal here is to increase
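A minimal way to watch that iowait and the journal/OSD device utilisation (device names are examples):

    iostat -xm 5                      # extended per-device stats every 5 seconds
    iostat -xm 5 /dev/sdb /dev/sdc    # narrowed to the journal and data devices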