I turned to pp_kernel (in kstat output and available to an unprivileged user)
and memstat (now 'wickedly' fast and still root-only) to help me accurately
determine kernel memory usage. What I see is that pp_kernel ALWAYS reports a
higher kernel memory utilization, sometimes as much as 800%(!) higher.
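For reference, a minimal sketch of reading pp_kernel as an unprivileged user with kstat(1M). It assumes the statistic lives under the usual unix:0:system_pages kstat (verify the module/name on your system, e.g. with `kstat -m unix`); multiply the page count by the output of `pagesize` to get bytes:

```shell
# Sketch: read the pp_kernel page count without root privileges.
# Assumption: the statistic is unix:0:system_pages:pp_kernel (Solaris).
if command -v kstat >/dev/null 2>&1; then
    msg=$(kstat -p unix:0:system_pages:pp_kernel)   # "name<TAB>pages"
else
    msg="kstat not available on this system"        # e.g. on Linux
fi
echo "$msg"
```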
This is great to see. I've put a Linux-port section on the main FileBench
wiki page.
We should try to get this integrated back into a new public source repo.
The current one is on SourceForge, but it is very out of date.
Do you know what Sun version you forked from? Does it have all the latest
fixes th
You can try this port: http://www.fsl.cs.sunysb.edu/~vass/filebench/
--
This message posted from opensolaris.org
___
perf-discuss mailing list
perf-discuss@opensolaris.org
If you want, you can also try the Filebench Linux port:
http://www.fsl.cs.sunysb.edu/~vass/filebench/
which should work without problems.
We had similar problems and we think we have fixed them:
http://www.fsl.cs.sunysb.edu/~vass/filebench/
Vasily
If somebody still needs Filebench for Linux (or FreeBSD), take a look at
http://www.fsl.cs.sunysb.edu/~vass/filebench/
This is the latest Filebench 1.4.8 that compiles on Linux/FreeBSD/Solaris.
Vasily
We have a working FileBench 1.4.8 at this address:
http://www.opensolaris.org/jive/thread.jspa?messageID=390830
Vasily
They should be comparable. Don't forget to drop caches after each run, and
make each run at least ten minutes long. Sometimes even longer runs are
required,
depending on the working set size
and your RAM size.
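On Linux, dropping the caches between runs can be sketched as below (this flushes the page cache, dentries, and inodes; writing to /proc/sys/vm/drop_caches requires root, so the script only reports what it would do when run unprivileged):

```shell
# Sketch: flush Linux caches between benchmark runs.
sync                                     # write dirty pages to disk first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches    # 3 = page cache + dentries + inodes
    status="caches dropped"
else
    status="skipped: need root to write /proc/sys/vm/drop_caches"
fi
echo "$status"
```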
Vasily
Look at our Filebench port, where we have fixed these problems:
http://www.fsl.cs.sunysb.edu/~vass/filebench/
Vasily
Just look at the oltp.f file: check the file size and the number of files.
Multiplying these numbers gives you the average starting size of the dataset.
This size can grow if you have appends in your workloads, but that's not the
case for OLTP.
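As a quick sketch of that multiplication (the numbers below are made up for illustration; read the real file count and file size from your own oltp.f):

```shell
# Hypothetical values -- substitute the ones from your oltp.f.
nfiles=10
filesize=$((10 * 1024 * 1024))    # 10 MB per file, in bytes
dataset=$((nfiles * filesize))    # average starting dataset size
echo "dataset: $dataset bytes"    # 104857600 bytes = 100 MB
```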
Vasily
Yes, writes are normally asynchronous. They become synchronous only if the
sync/directio attributes are specified (or there is high memory pressure).
Vasily
You can use the 'reuse' attribute only for read-only workloads. Workloads that
have deletes won't work properly with the 'reuse' attribute. In our port
(http://www.fsl.cs.sunysb.edu/~vass/filebench/) we are working on support for
the 'reuse' attribute in workloads that have deletes.
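For illustration, a workload-file fragment showing where 'reuse' goes; the fileset name, path, and sizes here are hypothetical, and the exact attribute set may differ between Filebench versions:

```
# Hypothetical fileset for a read-only workload; 'reuse' keeps existing
# files from a previous run instead of re-creating them.
set $dir=/tmp
define fileset name=rofiles,path=$dir,size=16k,entries=1000,prealloc=100,reuse
```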
Vasily
This is not true. Filebench does not take creation time into account. What
most likely happens is that after the first run your caches (page/buffer
caches) are warm (i.e., all the files are already in RAM), so Filebench runs
much faster. On Linux you need to run sync and
echo 3 > /proc/sys/
You can find the version that compiles for Solaris/OpenSolaris/Linux/FreeBSD
here:
http://www.fsl.cs.sunysb.edu/~vass/filebench/
Vasily
Hi Jin Yao,
1. Regarding your RMA example, wouldn't you be able to use the DTrace
cpc provider to get this information?
2. How do you propose handling the case where both the proposed
per-hardware thread data and overflow profiling are enabled?
Thanks
/kuriakose