Solaris Next; we've
been using it for OpenSolaris for a couple of years.
- Bart
--
Bart Smaalders Solaris Kernel Performance
bart.smaald...@oracle.com http://blogs.sun.com/barts
"You will contribute more with mercurial than with thunderbird."
important for file systems with
millions of files and relatively few changes
(say, keeping the file index on your desktop up to date).
This gives everyone a way to access the changes in a filesystem in
order(number of files changed) instead of order(number of files extant).
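A minimal sketch of that access pattern, assuming a dataset named
tank/home and a build recent enough to have zfs diff:

  # snapshot before and after the changes of interest
  zfs snapshot tank/home@before
  # ... files change ...
  zfs snapshot tank/home@after
  # visit only what changed: order(files changed), not order(files extant)
  zfs diff tank/home@before tank/home@after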
- Bart
As I understand it, the fix is expected "very soon"; your
performance is getting killed by the over-aggressive use of
bounce buffers...
- Bart
Cheers,
Chris
Is this using the mpt driver? There's an issue w/ the fix for
6863127 that causes performance problems on larger-memory
machines, filed as 6908360.
- Bart
AFAIK, this is being actively developed, w/ a prototype working...
- Bart
> It shouldn't be possible
> to corrupt a file system.
Please try the following: boot the install CD and,
in a terminal, as root or pfexec'd, type
# zpool import rpool
# reboot
and see if this doesn't get you going. There are some issues
w/ grub & sd on some drives, sigh.
- Bart
An application that contains a simple transaction facility
might use fsync() to ensure that all changes to a file or
files caused by a given transaction were recorded on a
storage medium.
- Bart
This depends entirely on the amount of disk & CPU on the fileserver...
A Thumper w/ 48 TB of disk and two dual-core CPUs is prob. somewhat
under-provisioned w/ 8 GB of RAM.
- Bart
Mirror if you can; this helps a lot: not only do you get more
IOPS because you have more vdevs, but each half of the mirror can
satisfy independent read requests.
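A minimal sketch, assuming four spare drives (device names are
illustrative); two mirrored pairs give two top-level vdevs:

  # stripe across two mirror vdevs; each half can serve reads independently
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  zpool status tank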
- Bart
What the OP was saying is that he somehow knows that an
unallocated block on the disk is bad, and he'd like to tell ZFS about it
ahead of time.
But repair implies there's data to read on the disk; ZFS won't read disk
blocks it didn't write.
- Bart
Set up w/ RAID-Z, my 2.6 GHz dual-core AMD box sustains
100+ MB/sec read or write; it happily saturates a GbE NIC w/ multiple
concurrent reads over Samba.
W/ 16 drives direct-attached you should see close to 500 MB/sec sustained
IO throughput.
- Bart
and give ZFS slice 2, or 2) don't have the BIOS do auto-detect on those
drives. Many BIOSes let you select None for the disk type; this will
allow the system to boot. Solaris has no problem finding the
drives even w/o the BIOS's help...
- Bart
Brian D. Horn wrote:
> Take a look at CR 6634371. It's worse than you probably thought.
>
Actually, almost all of the problems noted in that bug are statistics.
- Bart
Marcus Sundman wrote:
> Bart Smaalders wrote:
>> UTF8 is the answer here. If you care about anything more than simple
>> ascii and you work in more than a single locale/encoding, use UTF8.
>> You may not understand the meaning of a filename, but at l
Did you pull out the old drive and add a new one in its place hot?
What does cfgadm -al report? Your drives should look like this:
sata0/0::dsk/c7t0d0   disk   connected   configured   ok
sata0/1::dsk/c7t1d0   disk   connected   configured   ok
sata1/0::dsk/c8t0d0
Marcus Sundman wrote:
> Bart Smaalders wrote:
>>> I'm unable to find more info about this. E.g., what does "reject
>>> file names" mean in practice? E.g., if a program tries to create a
>>> file using an utf8-incompatible filen
and zeroing RAM on shutdown would seem simple-to-implement
safeguards. Yes, if someone steals the laptop while you're using
it you've got problems :-)
- Bart
228G  136G  48G  74%  /export/home/cyber
- Bart
slice.
So these new zvol-like things don't support snapshots, etc, right?
I take it they work by allowing overwriting of the data, correct?
Are these a zslice?
For those of us who've been swapping to zvols for some time, can
you describe the failure modes?
- Bart
cp opens the source file, mmaps it, opens the target file, and
does a single write of the entire file contents. /dev/null's
write routine doesn't actually do a copy into the kernel, it just
returns success. As a result, the source file is never paged into
memory.
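A quick way to see this for yourself (a sketch; the file name is
illustrative): cp returns without the data ever being read, while dd
forces real reads:

  # cp mmaps the source and issues one write(); /dev/null discards it unread
  ptime cp /export/big.file /dev/null
  # dd read()s the data, actually paging the file in
  ptime dd if=/export/big.file of=/dev/null bs=1024k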
Ian Collins wrote:
Bart Smaalders wrote:
michael T sedwick wrote:
Given a 1.6TB ZFS Z-Raid consisting of 6 disks,
and a system that does an extreme amount of small (<20K) random
reads (more than twice as many reads as writes):
1) What performance gains, if any, does Z-Raid offer over ot
(approx) total parallel random read throughput.
- Bart
pretty painful... for a speedup, consider:
tar cf - . | (cd <server>; tar xf -)
- Bart
Parallel C++ compilations can use up a
lot of RAM.
- Bart
contain all the data. I ran out of drive
bays, so I used one of those 5 1/4" -> 3.5" adaptor brackets
to hang the boot drive where a second DVD drive would go...
- Bart
Would putting them into a different form
factor make them faster?
Well, if I were doing that I'd use DRAM and provide
enough on-board capacitance and a small processor to copy
the contents of the DRAM to flash on power failure.
- Bart
This would imply that rewriting
a zvol would be limited to below 50% of disk bandwidth, not
something I'm seeing at all.
- Bart
CPU (dual-core 2.6 GHz). It saturates a GbE net w/ 4 drives & Samba,
not working hard at all. A Thumper does 2 GB/sec w/ 2 dual-core CPUs.
Do you have compression enabled? This can be a choke point
for weak CPUs.
- Bart
Adam
It was pointed out by Jürgen Keil that using ZFS compression
submits a lot of prio 60 tasks to the system task queues;
this would clobber interactive performance.
- Bart
selves, or similar hacks?
Thanks,
-mg
This requires rewriting the block pointers; it's the same
problem as supporting vdev removal. I would guess that
they'll be solved at the same time.
- Bart
The box is a maxed-out AMD QuadFX, so it should have plenty
of grunt for this.
Comments?
Ian
How big were the files, what OS build are you running and how
much memory on the system? Were you copying in parallel?
- Bart
Adam Lindsay wrote:
Bart Smaalders wrote:
Adam Lindsay wrote:
Okay, the way you say it, it sounds like a good thing. I
misunderstood the performance ramifications of COW and ZFS's
opportunistic write locations, and came up with a much more pessimistic
guess that it would approach random w
available disk
space.
Are you reading and writing the same file at the same time? Your cache
hit rate will be much better then.
- Bart
(the files are all being read sequentially). If that's
the case, ZFS can do lots of clever prefetching. On
the write side, ZFS, due to its COW behavior, will just
handle both random and sequential writes pretty
much the same way.
- Bart
you don't need to share the source for
your proprietary source files.
- Bart
Regards.
Are you using nfsv4 for the mount? Or nfsv3?
Some idea of the failing app's system calls just prior to failure
may yield the answer as to what's causing the problem. These
problems are usually mishandled error conditions...
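One way to capture those calls (a sketch; the program name is a
placeholder) is truss, following children and keeping the log:

  truss -f -o /tmp/app.truss ./failing-app
  # the last few lines usually show the error the app mishandled
  tail -50 /tmp/app.truss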
- Bart
- Bart
It takes just under 1 second to create 50,000 hardlinks to a file; it
takes just under 2 seconds to delete 'em w/ rm. It would prob. be
faster to use a program to delete them.
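A sketch of that experiment (file names are illustrative); note that
the fork-per-link shell loop below is far slower than calling
link(2)/unlink(2) from a single program, which is the point:

  touch base
  # create 50,000 hardlinks to one file
  i=0
  while [ $i -lt 50000 ]; do ln base link.$i; i=$((i+1)); done
  # remove them (use find | xargs rm if the argument list is too long)
  ptime rm link.*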
- Bart
I may be wrong here.
Actually, what has to happen is that we stop using the SATA chipset
in IDE compat mode and write proper SATA drivers for it... and
manage the upgrade issues, driver name changes, etc.
- Bart
of the root pool. Yes, there'd be
reservations/allocations, etc. All we need then is a way
to have a dedicated dump device in the same pool...
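For what it's worth, a sketch of how a zvol-backed dump device is
configured where that support exists (names are illustrative):

  zfs create -V 2g rpool/dump
  dumpadm -d /dev/zvol/dsk/rpool/dump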
- Bart
scales pretty simply; for small configs (where
a single CPU can saturate all the drives) the net throughput
of the drives doesn't vary significantly if one is reading a
single file or reading 10 files in parallel.
- Bart
resilvering. The system was pretty sluggish during this
operation, but it only had 1 GB of RAM, half of which
Firefox wanted :-/.
This was build 55 of Nevada.
- Bart
replacing it w/ a bigger drive,
rebooting, typing zpool status, finding the name of the missing/
faulted drive, and using that as the disk argument to zpool replace.
When the 4th resilver finished, I had lots more disk space
all of a sudden.
- Bart
after reboot if any subsequent writes are visible.
- Bart
from the raw device to /dev/null?
Roughly 230Mb/s
Do you mean ~28MB/sec?
Something is definitely bogus. What happens when you do dd
from both drives at once?
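A sketch of that test (device names are illustrative), one dd per
drive in parallel:

  dd if=/dev/rdsk/c7t0d0s0 of=/dev/null bs=1024k count=1000 &
  dd if=/dev/rdsk/c7t1d0s0 of=/dev/null bs=1024k count=1000 &
  wait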
- Bart
Best regards,
Constantin
Brilliant video, guys. I particularly liked the fellow
in the background with the hardhat and snow shovel :-).
The USB stick machinations were pretty cool, too.
- Bart
These machines do have very limited memory
bandwidth, so the checksumming will cost more here than on faster
CPUs.
How fast can you DD from the raw device to /dev/null?
- Bart
feasible from both a usability and performance standpoint?
That's exactly how I'm running my Ferrari laptop. Works
like a charm.
- Bart
d boxes becomes important.
Also, of course, SATA is still relatively new and we don't yet
have extensive controller support (understatement).
- Bart
the buffer, causing
additional performance and scalability issues.
- Bart
Switching drivers & BIOS configs during upgrade is a non-trivial
exercise.
- Bart
For example, I could get
3 more 320 GB SATA II drives and fill all the SATA ports, and hook up an
IDE drive as the system boot drive.
Sincerely,
You may wish to take a look at my latest blog post:
http://blogs.sun.com/barts
- Bart
k bc=2
You can also do this test to a file to see what the peak is going to be...
What kind of write performance do people get out of those honkin' big x4500s?
~2GB/sec locally, 1 GB/sec over the network.
This requires multiple writing threads; a single CPU just isn'
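A sketch of a multi-writer test along those lines (pool path and
sizes are illustrative):

  # four concurrent 2 GB writers
  for f in 1 2 3 4; do
      dd if=/dev/zero of=/tank/test.$f bs=1024k count=2000 &
  done
  wait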
This works fine
between different Solaris versions; if the Mac folks
didn't change their on-disk format it might just work
between OS X and Solaris as well.
- Bart
MB/sec on reads and writes on single
or multiple streams. I'm running build 55; the box has an SI controller
running in PATA compat mode.
One of the challenging aspects of performance work on these sorts of
things is separating out drivers vs CPUs vs memory bandwidth vs disk
behavior vs intrinsi
Are you doing random IO? Appending or overwriting?
- Bart
a paper on this topic; there's a copy here:
http://infohost.nmt.edu/~val/review/hash.pdf
- Bart
compare that with the checksum stored in
the block pointer and then use the parity data to
reconstruct the block if the checksums don't match.
- Bart
Jason J. W. Williams wrote:
Not sure. I don't see an advantage to moving off UFS for boot pools. :-)
-J
Except of course that snapshots & clones will surely be a nicer
way of recovering from "adverse administrative events"...
- Bart
If you're doing small random reads or writes, you'll be much more
limited by the number of spindles and the way you configure them.
- Bart
Shut down the machine, replace the drive, reboot
and type:
zpool replace mypool2 c3t6d0
On earlier versions of ZFS I found it useful to do this
at the login prompt; it seemed fairly memory intensive.
- Bart
the system's data
to prevent problems with losing mail, log file data, etc., when either
changing boot environments or pivoting root boot environments.
- Bart
modified file directory depth.
- Bart
single disk for /, since I'm worried about safeguarding data, not
making sure I have max availability.
- Bart
data.
JBODs are simple, easy, and relatively foolproof when used
w/ ZFS.
- Bart
then power-cycle (NOT reset) the box and
it should boot w/o further problems.
- Bart (who ran into this on his home server).
directories that have more than one copy in case of a problem down the
road.
Actually, this is a perfect use case for setting the copies=2
property after installation. The original binaries are
quite replaceable; the customizations and personal files
created later on are not.
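A minimal sketch, assuming the home dataset is rpool/export/home;
copies=2 only applies to blocks written after the property is set:

  zfs set copies=2 rpool/export/home
  zfs get copies rpool/export/home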
when not using ZFS?
Keep in mind that Solaris doesn't always use the most efficient
strategies for paging applications... this is something we're actively
working on fixing as part of the VM work going on...
- Bart
control over ATA/SATA drives? :-)
A method of controlling write cache independent of drive
type, color, or flavor is being developed; I'll ping
the responsible parties (bcc'd).
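In the meantime, one knob that already exists on some drives (a
sketch; whether it works depends on the driver and controller) is
format's expert mode:

  format -e          # select the disk, then:
  format> cache
  cache> write_cache
  write_cache> display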
- Bart
Joseph Mocker wrote:
Bart Smaalders wrote:
How much swap space is configured on this machine?
Zero. Is there any reason I would want to configure any swap space?
--joe
Well, if you want to allocate 500 MB in /tmp, and your machine
has no swap, you need 500M of physical memory or the
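If you do decide to add some, a sketch of a zvol-backed swap device
(dataset name and size are illustrative):

  zfs create -V 2g rpool/swap2
  swap -a /dev/zvol/dsk/rpool/swap2
  swap -l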
Here's the ::memstat output for the whole process.
Before first "mkfile 512m"
How much swap space is configured on this machine?
- Bart
Matthew Ahrens wrote:
On Mon, Jul 17, 2006 at 10:00:44AM -0700, Bart Smaalders wrote:
So as administrator what do I need to do to set
/export/home up for users to be able to create their own
snapshots, create dependent filesystems (but still mounted
underneath their /export/home/usrname)?
In
Matthew Ahrens wrote:
On Mon, Jul 17, 2006 at 09:44:28AM -0700, Bart Smaalders wrote:
Mark Shellenbaum wrote:
PERMISSION GRANTING
zfs allow -c <perm>[,<perm>...] <filesystem>
-c "Create" means that the permission will be granted (Locally) to the
creator on any newly-created descendant filesyste
So as administrator what do I need to do to set
/export/home up for users to be able to create their own
snapshots, create dependent filesystems (but still mounted
underneath their /export/home/usrname)?
In other words, is there a way to specify the rights of the
owner of a filesystem rather than the individual - eg, delayed
evaluation of the owner?
- Bart
If not, there's something else going on.
- Bart
in the wrong place (the RAID array),
all ZFS can do is tell you that your data is gone. With luck,
subsequent reads _might_ get the right data, but maybe not.
- Bart
writes on different filesystems may not complete
correctly during a power failure.
ZFS enables the write cache and flushes it when committing transaction
groups; this ensures that all of a transaction group appears or does
not appear on disk.
- Bart
for
general purpose file systems is strongly discouraged,
and may adversely affect performance.
- Bart
me to market) here think of it, but if we could just stick
to proper interfaces, that would be best.
Nico
Perhaps an fadvise call is in order?
- Bart