de 1 PB storage.
What does one do for power? What are the power requirements when the
system is first powered on? Can drive spin-up be staggered between
JBOD chassis? Does the server need to be powered up last so that it
does not time out on the zfs import?
Bob
OpenIndiana and it should be able to work
without VT extensions.
Bob
used to exhibit this problem so I opened Illumos issue 2998
(https://www.illumos.org/issues/2998). The weird thing is that the
problem went away and has not returned.
Bob
rld,
data may be sourced from many types of systems and filesystems.
Bob
evious day's run. In most
cases only a small subset of the total files are updated (at least on
my systems) so the caching requirements are small. Files updated on
one day are more likely to be the ones updated on subsequent days.
Bob
emental stream sizes).
That is what I used to do before I learned better.
Bob
ocks in place rather than
writing to a new temporary file first. As a result, zfs COW produces
primitive "deduplication" of at least the unchanged blocks (by writing
nothing) while writing new COW blocks for the changed blocks.
Bob
or example:
I am finding that rsync with the right options (to directly
block-overwrite) plus zfs snapshots is providing me with pretty
amazing "deduplication" for backups without even enabling
deduplication in zfs. Now backup storage goes a very long way.
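For illustration, a rough sketch of that kind of backup run (the paths, pool,
and dataset names here are only placeholders):

  # --inplace makes rsync overwrite changed blocks within the existing
  # destination file instead of writing a new temporary file, so zfs COW
  # only allocates new blocks for the blocks that actually changed.
  rsync -a --inplace --no-whole-file /data/ /backup/data/

  # Snapshot the backup filesystem after each run; unchanged blocks are
  # shared with older snapshots, so each run consumes only the changed blocks.
  zfs snapshot backuppool/backup@`date +%Y%m%d`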
Bob
"everything ZFS running on illumos-based
distributions."
Even FreeBSD's zfs is now based on zfs from Illumos. FreeBSD and
Linux zfs developers contribute fixes back to zfs in Illumos.
Bob
users/bfriesen/zfs-discuss/zfs-cache-test.ksh".
The script will exercise an initial uncached read from disks, and then
a (hopefully) cached re-read from disks. I think that it serves as a
useful benchmark.
Bob
renaming them.
You have this reversed. The older data is served from fewer spindles
than data written after the new vdev is added. Performance with the
newer data should be improved.
Bob
been some cases where people said unfavorable things about Oracle on
this list. Oracle needs to control its message and the principal form
of communication will be via private support calls authorized by
service contracts and authorized corporate publications.
Bob
nnounced and invited people to join their
discussion forums, which are web-based and virtually dead.
Bob
y also offer rather profound performance improvements.
Bob
hs,
or is only one switch used?
Bob
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that (verification) would be scrubbing ;)
I don't
heck for messages in /var/adm/messages which might indicate
when and how FC connectivity has been lost?
Bob
On Thu, 17 Jan 2013, Bob Friesenhahn wrote:
For NFS you should disable atime on the NFS client mounts.
This advice was wrong. It needs to be done on the server side.
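A minimal sketch of the server-side setting (the dataset name is only a
placeholder):

  # On the NFS server, for the exported dataset:
  zfs set atime=off tank/export/home
  zfs get atime tank/export/home     # verify the setting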
Bob
ot true for NFS), Zfs is lazy about updating atime
on disk, so it may not be updated on disk until the next
transaction group is written (e.g. up to 5 seconds), and so it does not
represent much actual load. Without this behavior, the system could
become unusable.
For NFS you should disable atime on the NFS client mounts.
is is going continuously, then it may be causing
more fragmentation in conjunction with your snapshots.
See "http://www.brendangregg.com/dtrace.html";.
Bob
e to their filesystem configuration) should improve performance
during normal operations and should reduce the number of blocks which
need to be sent in the backup by reducing write amplification due to
"overlap" blocks..
Bob
further?
Do some filesystems contain many snapshots? Do some filesystems use
small zfs block sizes? Have the servers been used in the same way?
Bob
-0412 | zfs recv -o version=4 repo/test
cannot receive: cannot override received version
You can send a version 6 file system into a version 28 pool, but it will still
be a version 6 file system.
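For example, the filesystem and pool versions can be inspected separately, and
the received filesystem upgraded afterwards if desired (names here are
placeholders):

  zfs get version repo/test      # filesystem version (e.g. 4 or 6)
  zpool get version repo         # pool version (e.g. 28)
  zfs upgrade repo/test          # upgrade the filesystem version in place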
Bob
Perhaps slightly more elegant: you can do the new pool/rsync thing on the 11.1 live
CD so you don't actually have to stand up a new system to do this. Assuming
this is x86 and VirtualBox works on Illumos, you could fire up a VM to do this
as well.
Bob
at's what I did yesterday :)
Bob
On Dec 13, 2012, at 12:54 PM, Jan Owoc wrote:
> On Thu, Dec 13, 2012 at 11:44 AM, Bob Netherton
> wrote:
>> On Dec 13, 2012, at 10:47 AM, Jan Owoc wrote:
>>> Yes, that is correct. The last version of Solaris with sourc
other day and made this
mistake between 11 GA and 11.1.
Watch out with the ZFS send approach because you might be sending a newer file system
version than is supported. Yes, I've done that too :)
Bob
On Dec 13, 2012, at 10:47 AM, Jan Owoc wrote:
> Hi,
>
> On Thu, Dec
Illumos or OpenIndiana mailing lists and I don't recall seeing this
issue in the bug trackers.
Illumos is not so good at dealing with huge memory systems but
perhaps it is also more stable.
Bob
with no success
(sata_func_enable=0x5, ahci_msi_enabled=0, sata_max_queue_depth=1)
Is there anything else I can try?
If the SATA card you are using is a JBOD-style card (i.e. disks are
portable to a different controller), are you able/willing to swap it
for one that Solaris is known to supp
ndiana oi_151a7 instead? You could experiment by booting
from the live CD and seeing if your disks show up.
Bob
and new pool with the new chassis and use 'zfs
send' to send a full snapshot of each filesystem to the new pool.
After the bulk of the data has been transferred, take new snapshots
and send the remainder. This expects that both pools can be available
at once.
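A sketch of that two-pass migration (pool, filesystem, and snapshot names are
placeholders):

  # Pass 1: bulk copy while the old pool is still in service
  zfs snapshot -r oldpool/fs@migrate1
  zfs send -R oldpool/fs@migrate1 | zfs recv -F newpool/fs

  # Pass 2: quiesce writes, then send only the changes since pass 1
  zfs snapshot -r oldpool/fs@migrate2
  zfs send -R -i @migrate1 oldpool/fs@migrate2 | zfs recv -F newpool/fs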
Bob
behavior and without rebooting. It is possible that my recollection
is wrong though. If my recollection is correct, then it is not so
important to know what is "good enough" before starting to put your
database in service.
Bob
ion #2. Quite
often, option #3 is effective because problems just go away once
enough resources are available.
Bob
ely that SATA boards would
work if they support the standard AHCI interface. I would not take
any chance with unknown SAS.
Bob
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
On 10/25/2012 04:09 PM, Bob Friesenhahn wrote:
On Thu, 25 Oct 2012, Sašo Kiselkov wrote:
Look for Dell's "6Gbps SAS HBA" cards. They can be had new for <$100 and
are essentially rebranded LSI 9200-8e cards. Always try to look for
support eSATA? It seems unlikely.
I purchased an eSATA card (from SIIG, http://www.siig.com/) with the
intention to try it with Solaris 10 to see if it would work but have
not tried plugging it in yet.
It seems likely that a number of cheap eSATA cards may work.
Bob
n snapshots, or might not ever
appear in any snapshot.
Bob
ste a
minimum of 4k. There might be more space consumed by the metadata
than the actual data.
Bob
RC I
was told that I should be able to get 12k no problem. We are running
NFS in a heavily used environment with millions of very small files,
so low latency counts.
Your test method is not valid.
Bob
"drive" to come back on
line.
Quite a lot of product would need to be sold in order to pay for both
re-engineering and the cost of running a business.
Regardless, continual product re-development is necessary or else it
will surely die.
Bob
A battery-backed RAM cache with Flash backup can be a whole lot faster
and still satisfy many users.
Bob
Is your DDRDrive product still supported and moving? Is it well
supported for Illumos?
Bob
flush requests.
Bob
ite caches which can lose even more data if there is a power
failure.
Bob
come up immediately, or be slow to come up when
recovering from a power failure.
Bob
e ZIL should
not improve the pool storage layout because the pool already had a
ZIL.
Bob
copies feature should be pretty effective.
Would the use of several copies cripple the write speeds?
It would reduce the write rate to 1/2, or divide it by however many
copies you have requested.
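For reference, a sketch of enabling it on a dataset (the name is a
placeholder); copies only applies to data written after the setting is
changed:

  zfs set copies=2 tank/important
  zfs get copies,used tank/important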
Bob
.
Verify that the zfs checksum algorithm you are using is a low-cost one
and that you have not enabled compression or deduplication.
You did not tell us how your zfs pool is organized so it is impossible
to comment more.
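A quick way to check those settings (the pool/dataset name is a placeholder):

  zfs get checksum,compression,dedup tank/data
  # fletcher4 is the inexpensive default; sha256 costs noticeably more CPU
  zfs set checksum=fletcher4 tank/data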
Bob
will work with drives with 4k
sectors so Solaris 10 users will not be stuck.
Bob
on top to request it. The closest
equivalent in a POSIX filesystem would be if a previously-null block
in a sparse file is updated to hold content.
Bob
ation, etc., are used.
Bob
r the product. It would no longer be an
"appliance".
No doubt, Nexenta has developed new cool stuff for NexentaStor.
As others have said, only Oracle is capable of supporting the system
as the original product. It could be re-installed to become something
else.
Bob
her than a multiple of 64k.
Bob
On Tue, 17 Jul 2012, Roberto Scudeller wrote:
Hi Bob,
Thanks for the answers.
How do I test your theory?
I would use 'dd' to see if it is possible to transfer data from one of
the problem devices. Gain physical access to the system and check the
signal and power cables to the
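The 'dd' check could look something like this (the device name is only a
placeholder for one of the problem disks):

  # Read a chunk from the raw device and watch for errors or stalls
  dd if=/dev/rdsk/c3t2d0s0 of=/dev/null bs=1024k count=1000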
134217728 512 408114 403473 761683 766615
268435456 64 418910 55239 768042 768498
268435456 128 408990 399732 763279 766882
268435456 256 413919 399386 760800 764468
268435456 512 410246 403019 766627 768739
Bob
th my boxes. All sata disks?
Unfortunately, I already put my pool into use and cannot conveniently
destroy it now.
The disks I am using are SAS (7200 RPM, 1 GB) but return similar
per-disk data rates as the SATA disks I use for the boot pool.
Bob
o show hardware
information? Like 'lshw' in linux but for opensolaris.
cfgadm, prtconf, prtpicl, prtdiag
zpool status
fmadm faulty
It sounds like you may have a broken cable or power supply failure to
some disks.
Bob
The storage stopped working, but ping responds. SSH and NFS is out.
nt=2
2+0 records in
2+0 records out
262144 bytes (2.6 GB) copied, 0.379147 s, 6.9 GB/s
Bob
file (containing random
data such as returned from /dev/urandom) to a zfs filesystem, unmount
the filesystem, remount the filesystem, and then time how long it
takes to read the file once. This works because remounting the
filesystem empties the filesystem cache.
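A minimal sketch of that procedure (pool, filesystem, and file names are
placeholders):

  dd if=/dev/urandom of=/tank/test/bigfile bs=1024k count=2048  # write ~2 GB
  zfs unmount tank/test
  zfs mount tank/test                                   # empties the cache
  time dd if=/tank/test/bigfile of=/dev/null bs=1024k   # uncached read
  time dd if=/tank/test/bigfile of=/dev/null bs=1024k   # cached re-read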
Bob
es were used as IDE disks. If not that,
then there must be a bottleneck in your hardware somewhere.
Bob
has requested it.
Bob
ormal for reads from mirrors to be faster than for a single
disk because reads can be scheduled from either disk, with different
I/Os being handled in parallel.
Bob
a day by day basis.
Bob
designed to perform copyright violations.
Bob
. Oracle rescinded (or lost) the special Studio releases
needed to build the OpenSolaris kernel. The only way I can see to
obtain these releases is illegally.
However, Studio 12.3 (free download) produces user-space executables
which run fine under Illumos.
Bob
algorithm needs to assure that. Having an excellent random
distribution property is not sufficient if it is relatively easy to
compute some other block producing the same hash. It may be useful to
compromise a known block even if the compromised result is complete
garbage.
Bob
has a future,
particularly once it frees itself from all Sun-derived binary
components.
Oracle continues with Solaris 11 and does seem to be funding necessary
driver and platform support. User access to Solaris 11 may be
arbitrarily limited.
Bob
cal to take a well-known block
and compute a collision block.
For example, the well-known block might be part of a Windows
anti-virus package, or a Windows firewall configuration, and
corrupting it might leave a Windows VM open to malware attack.
Bob
On Wed, 11 Jul 2012, Joerg Schilling wrote:
Bob Friesenhahn wrote:
On Tue, 10 Jul 2012, Edward Ned Harvey wrote:
CPU's are not getting much faster. But IO is definitely getting faster. It's
best to keep ahead of that curve.
It seems that per-socket CPU performance is doub
o cause intentional harm by writing the magic
data block before some other known block (which produces the same
hash) is written. This allows one block to substitute for another.
It does seem that security is important because with a human element,
data is not necessarily ran
sets offer acceleration for some types of standard
encryption, then that needs to be considered. The CPU might not need
to do the encryption the hard way.
Bob
ystem
Bob
nd VM page).
Bob
On Tue, 3 Jul 2012, James Litchfield wrote:
Agreed - msync/munmap is the only guarantee.
I don't see that the munmap definition assures that anything is
written to "disk". The system is free to buffer the data in RAM as
long as it likes without writing anything at al
2) is used on the mapping with the MS_SYNC option.
Bob
, even if I had to drop to 3TB density.
Why would you want native 4k drives right now? Not much would work
with such drives.
Maybe in a dedicated chassis (e.g. the JBOD) they could be of some
use.
Bob
look more like an end of the road to me.
Bob
sures. But I will
keep looking since sooner or later they will provide it.
I browsed the site and saw many 6GBit enclosures. I also saw one with
Nexenta (Solaris/zfs appliance) inside.
Bob
given there
is no shortage of physical ram ?
Absent memory pressure, pages that are no longer referenced will stay
in memory forever. They can then be re-referenced in memory.
Bob
operation
for the specific disk would likely hasten progress.
Bob
data layout would result in better
performance.
It seems safest to upgrade the OS before moving a lot of data. Leave
a fallback path in case the OS upgrade does not work as expected.
Bob
I'll agree with Bob on this. A specific use case is a VirtualBox server
hosting lots of guests. I even made a point of mentioning this tunable in the
Solaris 10 Virtualization Essentials section on vbox :)
There are several other use cases as well.
Bob
applications.
Bob
orthy old disks
will be replaced by newer ones).
I like this idea since it allows running two complete pools on the
same disks without using files. Due to using partitions, the disk
write cache will be disabled unless you specifically enable it.
Bob
initially additional risk
due to raidz1 in the pool since the drives will be about as full as
before.
I am not sure what additional risks are involved due to using files.
Bob
start if a snapshot was taken. What sort of zfs is being used here?
Bob
any reads. Are
you SURE that deduplication was not enabled for this pool? This is
the sort of behavior that one might expect if deduplication was
enabled without enough RAM or L2 read cache.
Bob
r
to test raidz
(http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf).
Most common benchmarking is sequential read/write and rarely
read-file/write-file where 'file' is a megabyte or two and the file is
different for each iteration.
Bob
perational advantages obtained from simple
mirroring (duplex mirroring) with zfs.
Bob
is good enough for me.
Bob
ps://sourceforge.net/projects/filebench/".
Zfs is all about caching so the cache really does need to be included
(and not intentionally broken) in any realistic measurement of how the
system will behave.
Bob
ehave like it is short on memory only tests how the system will
behave when it is short on memory.
Testing multi-threaded synchronous writes with IOzone might actually
mean something if it is representative of your work-load.
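A possible IOzone invocation for that case (sizes, thread count, and file
paths are placeholders): throughput mode with 4 threads, 2 GB per thread,
128k records, and -o so writes are O_SYNC and exercise the ZIL:

  iozone -i 0 -o -t 4 -s 2g -r 128k \
    -F /tank/test/f1 /tank/test/f2 /tank/test/f3 /tank/test/f4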
Bob
ssary.
Bob
l disks seem to be failing at once.
Bob
ee with all you said except that Solaris 10 has support for pool
version 29.
Bob
at
"http://www.youtube.com/user/deirdres"; and elsewhere on Youtube.
Bob
which might be caused by
dynamic load-balancing. That is what I did for my storage here, but
the preferences needed to be configured on the remote end.
It is likely possible to configure everything on the host end but
Solaris has special support for my drive array so it used the drive
array
or to posting. :-(
Bob
before there was anything like
SEEK_HOLE.
If a file's space usage is less than the size listed in the directory
then it must contain a hole. Even for compressed files, I am pretty sure that
Solaris reports the uncompressed space usage.
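A rough check from the shell (the file name is a placeholder): compare the
blocks actually allocated with the size listed in the directory.

  ls -l /tank/data/somefile   # logical size in bytes
  du -k /tank/data/somefile   # kilobytes actually allocated
  # If du reports noticeably less than the ls size, the file contains holes.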
Bob
crummy fans
that Intel provided with the CPUs. After replacing the Intel fans with
better-quality fans, the system is now whisper quiet.
My system has two 6-core Xeons (E5649) with 48GB of RAM.
It is able to run OpenIndiana quite well but is being used to run
Linux as a desktop system.
Bob
er for Oracle. There may be a chicken-and-egg problem
since Oracle might not want to answer speculative questions but might
be more concrete if you have a system in hand.
Bob