I'm something of a newbie at interpreting the .f files, but I'm on a bit of a short leash, so I have a question about some of the workloads and how their loops work...
My goal is to use about 2TB of space I have sitting around ;-) but I have been blowing WAY past that 2TB. In a workload like bringover.f (copied below for convenience), with something like nthreads=40, nfiles=1000, filesize=1g, is the result going to be 40*1000*1g = 40TB? Or is it 40 threads sharing the work on 1000 1g files, so about 1TB? Based on the way the process is defined, I have a feeling I end up with 40 threads each doing 1000 1g files. Any help would be GREATLY appreciated.

define fileset name=srcfiles,path=$dir,size=$filesize,entries=$nfiles,dirwidth=$dirwidth,prealloc
define fileset name=destfiles,path=$dir,size=$filesize,entries=$nfiles,dirwidth=$dirwidth

define process name=filereader,instances=1
{
  thread name=filereaderthread,memsize=10m,instances=$nthreads
  {
    flowop openfile name=openfile1,filesetname=srcfiles,fd=1
    flowop readwholefile name=readfile1,fd=1
    flowop createfile name=createfile2,filesetname=destfiles,fd=2
    flowop writewholefile name=writefile2,filesetname=destfiles,fd=2,srcfd=1
    flowop closefile name=closefile1,fd=1
    flowop closefile name=closefile2,fd=2
  }
}
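
In case it helps, this is roughly the parameter block I have in front of those definitions. The $dir and $dirwidth values below are just placeholders from my test setup, not part of the question; the other values are the ones mentioned above.

  set $dir=/testpool/fb          # placeholder path from my setup
  set $nfiles=1000
  set $filesize=1g
  set $nthreads=40
  set $dirwidth=20               # placeholder value from my setup

  run 600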