Just for fun, because I know this is a very contrived test, I wrote a C program that writes and reads back blocks to a zFS file on a z114 z/OS system connected via FICON to an HDS SAN, and ran the same program on an Ubuntu server on a Dell PowerEdge blade writing to SAS disks in the rack. Of course there are latency differences, and my program is probably quite lame. Both the z/OS and Linux systems were idle at the time; I imagine the results would be very different if both were running at full capacity with sustained high-throughput I/O.

#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    size_t numBlocks = 100000;
    const char * filename = "./io.temp";

#define BLOCK_SIZE 1024

    FILE * fp = fopen( filename, "w+" );
    if ( fp == NULL )
    {
        perror( "fopen" );
        exit( 8 );
    }

    /* Write 100,000 zero-filled 1 KiB blocks. */
    char buffer[BLOCK_SIZE] = {0};
    for ( size_t j = 0; j < numBlocks; j++ )
    {
        if ( fwrite( buffer, sizeof buffer, 1, fp ) != 1 )
        {
            perror( "fwrite" );
            exit( 8 );
        }
    }

    /* Read the whole file back from the start. */
    rewind( fp );
    while ( fread( buffer, 1, sizeof buffer, fp ) > 0 )
    {
        continue;
    }

    fclose( fp );
    remove( filename );
    return 0;
}


z/OS:

DOC:/u/doc/src: >time iospeed

real    0m 1.15s
user    0m 0.62s
sys     0m 0.20s

Dell:

davcra01@cervidae:~$ time ./iospeed

real    0m0.254s
user    0m0.048s
sys     0m0.199s


On 21/03/2016 10:43 PM, Steve Thompson wrote:
A few years ago, IBM took a Power system and a z/Architecture system and configured them as closely as they could.

As I recall, they both had the same amount of C-Store available to the operating system, and they both had the same number of channels (8 if I remember correctly), and they ran to equivalently sized RAID boxes.

And, if I remember correctly, they were using programs written in C, that were ported from the one to the other.

The object was to check out how efficiently I/O was prosecuted (done).

The z/Architecture machine finished long before the Power system.

I thought I had a copy of that comparison, but I just can't find it so I can give a link to it.

Regards,
Steve Thompson


On 03/21/2016 09:40 AM, David Crayford wrote:
On 21/03/2016 9:14 PM, R.S. wrote:
Well,
I observed 1.3M IOPS on an EC12 or z196 machine during a WAS
installation, with minimal CPU utilisation (I mean regular CPUs;
I haven't checked the SAPs).
IMNSHO a PC server with a collection of shiny new Emulex cards
has waaaay worse I/O capabilities.
We did some tests of database operations on a PC. The results
were unequivocal.


I admire your honesty :) What class of PC server? Was it connected
to a SAN, and did it offload I/O to a peripheral device?

BTW: Typical z/OS I/O workload is very different from PC
workload. Much less IOPS, much more data, much less CPU%.


My wife used to work for HDS and I had some interesting
conversations with some of the engineers she worked with.
That was a few years ago, but in their opinion the high-end
*nix servers could match a mainframe for I/O throughput.
PC commodity hardware is different, but racked up with enterprise
kit I would be interested to know how they would shape up in a
drag race. How would a Dell blade with InfiniBand, a SAN and
enterprise-class HBAs compare?

----------------------------------------------------------------------

For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN

