Hi,
for me it's unclear what 100 TPS means in that particular case. Such a low 
number doesn't make sense, and I don't see anything like it in the postmark 
output here.

I get around 4690 ± 435 transactions per second (mean ± stddev from the 
ministat run below).

The guest (the actual test system) is FreeBSD 9.1/amd64 inside VirtualBox.
The host is Mac OS X on a four-year-old MacBook.
Storage is a VDI file backed by an SSD (OCZ Vertex 2), holding a 2 GB ZFS pool.

When I run postmark (http://fsbench.filesystems.org/bench/postmark-1_5.c) 
with 25K transactions, I get output like this:

pm>run
Creating files...Done
Performing transactions..........Done
Deleting files...Done
Time:
        6 seconds total
        5 seconds of transactions (5000 per second)

Files:
        13067 created (2177 per second)
                Creation alone: 500 files (500 per second)
                Mixed with transactions: 12567 files (2513 per second)
        12420 read (2484 per second)
        12469 appended (2493 per second)
        13067 deleted (2177 per second)
                Deletion alone: 634 files (634 per second)
                Mixed with transactions: 12433 files (2486 per second)

Data:
        80.71 megabytes read (13.45 megabytes per second)
        84.59 megabytes written (14.10 megabytes per second)
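
For reference, a postmark config along these lines should reproduce that run. 
This is a sketch: only "set transactions 25000" and "set number 500" follow 
from the numbers above ("Creation alone: 500 files" implies an initial set of 
500 files); the pool location and the file-size range are guesses.

# hypothetical config; postmark reads commands from a file given as argv[1]
cat > pmrc <<'EOF'
set location /pool/nase
set number 500
set transactions 25000
set size 500 10000
run
quit
EOF
postmark pmrc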

I ran this 100 times on my notebook and summarized the results with ministat.

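The per-category files can be collected with a small loop, roughly like this 
(a sketch: the awk patterns are assumptions based on the postmark output 
format above, and pmrc is the hypothetical config from before):

#!/bin/sh
# Run postmark 100 times and collect the per-second rates into one
# file per category for ministat. seq(1) exists in FreeBSD >= 9.0.
for i in $(seq 1 100); do
    postmark pmrc > out.$i
    awk '/seconds of transactions/ {print $5}' out.$i | tr -d '(' >> alltransactions.txt
    awk '$2 == "appended" {print $3}' out.$i | tr -d '(' >> appended-no.txt
    awk '$2 == "created"  {print $3}' out.$i | tr -d '(' >> created-no.txt
    awk '$2 == "deleted"  {print $3}' out.$i | tr -d '(' >> deleted-no.txt
    awk '$2 == "read"     {print $3}' out.$i | tr -d '(' >> reed-no.txt   # (sic)
done
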
root@freedb:/pool/nase # ministat -n *.txt
x alltransactions.txt
+ appended-no.txt
* created-no.txt
% deleted-no.txt
# reed-no.txt
    N           Min           Max        Median           Avg        Stddev
x 100          3571          5000          5000       4690.25     435.65125
+ 100          1781          2493          2493       2338.84      216.8531
* 100          1633          2613          2613       2396.59     256.53752
% 100          1633          2613          2613       2396.59     256.53752
# 100          1774          2484          2484       2330.22      216.3084


When I check "zpool iostat 1", I see:

root@freedb:/pool/nase # zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool        10.6M  1.97G      0      8     28   312K
----------  -----  -----  -----  -----  -----  -----
pool        10.6M  1.97G      0     33      0  4.09M
----------  -----  -----  -----  -----  -----  -----
pool        10.6M  1.97G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool        10.6M  1.97G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool        10.6M  1.97G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
pool        19.6M  1.97G      0     89      0  4.52M
----------  -----  -----  -----  -----  -----  -----


i.e., bursts of around 30-90 write ops per second at the pool.
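
That would fit ZFS transaction-group behavior: async writes accumulate in 
memory and are flushed to the pool in bulk every few seconds, so zpool iostat 
sees a few large bursts rather than one disk op per postmark transaction. The 
flush interval is a sysctl on FreeBSD (the 5-second default is my assumption 
for 9.x):

# ZFS batches async writes into transaction groups and flushes them
# periodically; this prints the interval in seconds (assumed default: 5)
sysctl vfs.zfs.txg.timeout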

Did they count this instead?


-dennis

On 28.05.2013 at 15:02, Paul Pathiakis wrote:

> Outperform at "out of the box" testing. ;-)
> 
> So, if I have a "desktop" distro like PCBSD, the only thing of relevance is 
> putting up my own web server???? (Yes, the benchmark showed PCBSD seriously 
> kicking butt with Apache on static pages.... but why would I care on a 
> desktop OS?)
> 
> Personally, I found the whole thing lacking coherence and relevance on just 
> about anything.  
> 
> Don't get me wrong, I do like the fact that this was done.  However, there 
> are compiler differences (it was noted many times that Clang was used and 
> may have been a detriment, but the article doesn't go into how or why) and 
> other issues.
> 
> There was a benchmark on PostgreSQL, but I didn't see any *BSD results.
> 
> Transactions to a disk?  Does this measure the "bundling" effect of ZFS 
> transaction groups?  That results in far fewer transactions being sent to 
> disk.  (Does anyone know where this is documented?  That is, how does the 
> whole "bundling of disk I/O" go from writing to memory, locking those 
> writes, then sending all the info to the disk in one shot?  This helps: 
> http://blog.delphix.com/ahl/2012/zfs-fundamentals-transaction-groups/ )
> 
> I was working at a company that had the intention of doing "electronic asset 
> ingestion and tagging".  Basically, take anything moved to the front-end web 
> servers, copy it to disk, replicate it to other machines, etc. (maybe not 
> in that order).  The whole system was Java-based.
> 
> This was 3 years ago.  I believe I was using Debian 4 (it had just come 
> out; I don't recall the codenames - etch, etc.), and I took a single machine 
> and rebuilt it 12 times: openSUSE with ext2, ext3, xfs; Debian with ext2, 
> ext3, xfs; CentOS with ext2, ext3, xfs; FreeBSD 8.1 with ZFS and UFS2 w/ SU.
> 
> Well, the numbers came in, and this was all done on the same HP 180 1U 
> server, rebuilt that many times.  I withheld the FreeBSD results, as the 
> development was done on Debian and people were "Linux inclined".  The 
> requirement was 15,000 tpm per machine for I/O.  Linux could only get to 
> 3,500.  People were pissed, and they were looking at 5 years and $20M in 
> time and development.  That's when I put the FreeBSD results in front of 
> them: 75,200 tpm.  Now, these were THEIR measurements and THEIR benchmarks 
> (the engineering team's).  The machine was doing nothing but running flat 
> out on a horrible method of using directory structure to organize the asset 
> tags (yeah, ugly).  However, ZFS almost didn't care, compared to a 
> traditional filesystem.  
> 
> So, what it comes down to is simple: you can benchmark anything you want 
> with various "authoritative" benchmarks, but in the end, your benchmark on 
> your data set (aka the real world in your world) is the only thing that 
> matters.
> 
> BTW, what happened in the situation I described?  Despite a huge cost 
> savings and incredible performance: "We have to use Debian, as we never put 
> any type of automation in place that would allow us to move from one OS to 
> another."  Yeah, I guess a Systems Architect (like me) is something that 
> people tend to overlook.  System automation to allow nimble transitions 
> like that is totally overlooked.
> 
> Benchmarks are "nice".  However, tuning and understanding the underlying 
> tech and what it's good for is priceless.  Knowing there are memory-management 
> issues, scheduling issues, and certain types of I/O on certain filesystems 
> that cause them to sing or sob - these are the things that will make someone 
> invaluable.  No one should be a tech bigot.  The mantra should be: "The best 
> tech for the situation."  No one should care if it's BSD, Linux, or Windoze 
> if it's what works best in the situation.
> 
> P
> 
> PS - When I see how many people are clueless about how much tech is ripped 
> off from BSD to make other vendors' products just work, and then they slap 
> at BSD... it's pretty bad.  GPLv3?  Thank you - there are so many people 
> going to a "no GPL products in house" policy that there is a steady increase 
> in BSD and ZFS.  I can only hope GPLv4 becomes "if you use our stuff, we own 
> all the machines and code that our stuff coexists on" :-)
> 
> ________________________________
> From: Adrian Chadd <adr...@freebsd.org>
> To: O. Hartmann <ohart...@zedat.fu-berlin.de> 
> Cc: freebsd-performance@freebsd.org 
> Sent: Tuesday, May 28, 2013 5:03 AM
> Subject: Re: New Phoronix performance benchmarks between some Linuxes and 
> *BSDs
> 
> 
> outperform at what?
> 
> 
> 
> adrian
> 
> On 28 May 2013 00:08, O. Hartmann <ohart...@zedat.fu-berlin.de> wrote:
>> Phoronix has emitted another of its "famous" performance tests
>> comparing different flavours of Linux (their obvious favorite OS):
>> 
>> http://www.phoronix.com/scan.php?page=article&item=bsd_linux_8way&num=1
>> 
>> It is "impressive", too, to see that Phoronix did not benchmark gaming
>> performance - that is done exclusively on the Linux distributions, I guess
>> for lack of suitable graphics cards at Phoronix (although it should be
>> possible to compare the nVidia blob performance on each system).
>> 
>> Although I'm not much impressed by the way the benchmarks are
>> orchestrated, Phoronix is the only platform known to me that provides such
>> benchmarks on the most recent operating systems from time to time.
>> 
>> Also, the bad performance of ZFS compared to UFS2 seems to have a very
>> harsh impact on systems where that memory and performance hog ZFS isn't
>> really needed.
>> 
>> Surprising and really disappointing (especially for me personally) is the
>> poor performance of the Rodinia benchmark on the BSDs, which I will try to
>> look into more deeply to understand the circumstances of the setup and
>> what this scientific benchmark is supposed to do and measure.
>> 
>> But the overall conclusion shown on Phoronix matches what I see at our
>> department, which uses several Linux flavours - Ubuntu 12.01 or SuSE and,
>> in the majority, CentOS (older versions) - all of which outperform the
>> several FreeBSD servers I maintain (FreeBSD 9.1-STABLE and FreeBSD
>> 10.0-CURRENT, i.e. recent software compared to some older Linux kernels).


_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "freebsd-performance-unsubscr...@freebsd.org"
