James Lever wrote:
>
> On 07/07/2009, at 8:20 PM, James Andrewartha wrote:
>
>> Have you tried putting the slog on this controller, either as an SSD or
>> regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
>
> What exactly are you suggesting here? Configure one disk on thi
> "dt" == Don Turnbull writes:
dt> Any idea why this is?
maybe prefetch?
WAG, though.
dt> I work with Greenplum which is essentially a number of
dt> Postgres database instances clustered together.
haha, yeah I know who you are. Too bad the open source postgres can't
do that.
I am having trouble with a RAID-Z zpool "bigtank" of 5x 750GB drives that will
not import.
After running into problems with this pool, I exported it and attempted a
re-import, only to discover this issue:
I can see the pool by running zpool import, and the devices are all online;
however, running "zp
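Not part of the original message, but for context, a hedged sketch of the usual commands involved at this point ("bigtank" is the pool name from the post; the -d directory shown is just the default device path):
# List importable pools, optionally scanning an explicit device directory
zpool import
zpool import -d /dev/dsk
# Try the import by name; -f forces it if the pool still looks "in use"
zpool import -f bigtank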
> Sorry, don't have a thread reference
> to hand just now.
http://www.opensolaris.org/jive/thread.jspa?threadID=100296
Note that there's little empirical evidence that this is directly applicable to
the kinds of errors (single bit, or otherwise) that a single failing disk
medium would produce.
> Do you have data to back this up?
It's more of a logical observation. The random data corruption I've had over
the years has generally involved either a single sector or two, or a full disk
failure. 5% parity on a 128KB block size would allow you to lose 6.4KB, or ~12
512-byte sectors.
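For reference, a quick shell check of that arithmetic (5% of a 128KB block, expressed in 512-byte sectors):
echo $(( 128 * 1024 * 5 / 100 ))        # 6553 bytes, ~6.4KB of parity budget
echo $(( 128 * 1024 * 5 / 100 / 512 ))  # 12 whole 512-byte sectors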
> You are describing the copies parameter. It really helps to describe it in
> pictures, rather than words. So I did that.
> http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection
> -- richard
It's not quite like copies as it's not actually a copy of the data I'm talking
about.
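For readers who do want block-level redundancy on a single disk today, the copies property Richard points to is set per dataset; a minimal sketch with a hypothetical dataset name:
# Store two copies of every block in this dataset (applies to newly written data only)
zfs set copies=2 tank/important
zfs get copies tank/important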
Thanks for the suggestion!
We've fiddled with this in the past. Our app uses 32k blocks instead of 8k,
and it's data warehousing, so the I/O model is generally long sequential
reads. Changing the blocksize has very little effect on us. I'll have to look
at fsync; hadn't considered t
Have you set the recordsize for the filesystem to the blocksize Postgres is
using (8K)? Note this has to be done before any files are created.
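A hedged sketch of what that looks like in practice, with a hypothetical dataset name; the property has to be in place before the database files are written:
# Match the ZFS recordsize to the 8K Postgres block size before loading any data
zfs create -o recordsize=8k tank/pgdata
zfs get recordsize tank/pgdata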
Other thoughts: disable Postgres's fsync, and enable filesystem compression if
disk I/O is your bottleneck rather than CPU. I do this with MySQL and it has
p
There was a discussion in zfs-code around error-correcting (rather than just
-detecting) properties of the checksums currently kept, and of potential
additional checksum methods with stronger properties.
It came out of another discussion about fletcher2 being both weaker than
desired, and flawed
I work with Greenplum which is essentially a number of Postgres database
instances clustered together. Being Postgres, the data is held in a lot of
individual files, each of which can be fairly big (hundreds of MB or several
GB) or very small (50MB or less). We've noticed a performance difference
On Tue, 2009-07-07 at 17:42 -0700, Richard Elling wrote:
> Christian Auby wrote:
> > ZFS is able to detect corruption thanks to checksumming, but for single
> > drives (regular folks' PCs) it doesn't help much unless it can correct them.
> > I've been searching and can't find anything on the topic,
Without any tuning, the default TCP window size and send buffer size for NFS
connections is around 48KB, which is not optimal for bulk transfer. However,
the 1.4MB/s write seems to indicate something else is seriously wrong.
iSCSI performance was good, so the network connection seems to be O
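For what it's worth, those TCP buffer defaults can be inspected and raised with ndd on Solaris; a hedged sketch, with illustrative values rather than a recommendation:
# Check the current send/receive buffer defaults
ndd -get /dev/tcp tcp_xmit_hiwat
ndd -get /dev/tcp tcp_recv_hiwat
# Raise the ceiling first, then the defaults (affects new connections; not persistent across reboot)
ndd -set /dev/tcp tcp_max_buf 2097152
ndd -set /dev/tcp tcp_xmit_hiwat 1048576
ndd -set /dev/tcp tcp_recv_hiwat 1048576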
Interesting; but presumably the ZIL/fsflush is not the reason for the
associated poor *read* performance?
Where does latencytop point the finger in that case?
cheers,
calum.
Christian Auby wrote:
ZFS is able to detect corruption thanks to checksumming, but for single drives
(regular folks' PCs) it doesn't help much unless it can correct them. I've been
searching and can't find anything on the topic, so here goes:
1. Can ZFS do parity data on a single drive? e.g. x%
ZFS is able to detect corruption thanks to checksumming, but for single drives
(regular folks' PCs) it doesn't help much unless it can correct them. I've been
searching and can't find anything on the topic, so here goes:
1. Can ZFS do parity data on a single drive? e.g. x% parity for all writes,
On 07/07/2009, at 8:20 PM, James Andrewartha wrote:
Have you tried putting the slog on this controller, either as an SSD
or
regular disk? It's supported by the mega_sas driver, x86 and amd64
only.
What exactly are you suggesting here? Configure one disk on this
array as a dedicated ZIL?
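For context, adding a dedicated log (slog) device to an existing pool is a one-liner; a hedged sketch with hypothetical pool and device names:
# Add a dedicated ZIL device to the pool and confirm it shows up under "logs"
zpool add tank log c3t0d0
zpool status tank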
I've changed zilstat to have the option of recording changes
between txg commits in addition to the current time-based
changes. Before I turn it loose in the wild, I'd like a few
testers who have interesting sync workloads to try it out.
If you are interested, sign up by responding to my blog
ent
FYI...
The -u option is described in the ZFS admin guide and the ZFS
troubleshooting wiki in the areas of restoring root pool snapshots.
The -u option is described in the zfs.1m man page starting in the
b115 release:
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
Cindy
Lori Alt wrote:
T
None of the file recovery tools work with ZFS. TestDisk is the most advanced,
and its author is looking at incorporating ZFS, but nobody knows when that
will happen.
I want to try with dd.
Can anybody give me an example of how to read bytes cylinder by cylinder?
Filtering the output is easy and I wi
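Not an answer from the thread, but a hedged sketch of reading the raw device one cylinder's worth of sectors at a time; the device name and cylinder size are hypothetical, and the real sectors/cylinder figure comes from prtvtoc:
# Print the label, including the sectors/cylinder value for this disk
prtvtoc /dev/rdsk/c1t0d0s2
# Read cylinder N (here N=100) as raw bytes and dump it for inspection
CYL=4096   # sectors per cylinder, taken from the prtvtoc output above
N=100
dd if=/dev/rdsk/c1t0d0s2 bs=512 count=$CYL skip=$(( N * CYL )) 2>/dev/null | od -c | less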
bump
To elaborate, the -u option to zfs receive suppresses all mounts. The
datasets you extract will STILL have mountpoints that might not work on
the local system, but at least you can unpack the entire hierarchy of
datasets and then modify mountpoints as needed to arrange to make the
file syste
You need the zfs receive -u option.
-- richard
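A hedged sketch of the workflow being described, with hypothetical pool and dataset names:
# Replicate the whole hierarchy without mounting anything on the receiving side
zfs snapshot -r oldpool/data@migrate
zfs send -R oldpool/data@migrate | zfs receive -u -d newpool
# Fix up mountpoints afterwards, then mount everything
zfs set mountpoint=/export/data newpool/data
zfs mount -a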
Andrew Daugherity wrote:
I attempted to migrate data from one zfs pool to another, larger one (both
pools are currently mounted on the same host) using the snapshot send/receive
functionality. Of course, I could use something like rsync/cpio/tar
I'm no expert, but I think you need to export a ZFS pool before you remove it
or it'll complain when you try to import it on another system.
(ZFS admin guide, pg. 89)
zpool export poolName
You can do a "zpool import -f" to import it anyway.
Hua-Ying
On Tue, Jul 7, 2009 at 4:34 PM, Karl Dalen wrote:
Interesting... I wonder what differs between your system and mine. With my
dirt-simple stress-test:
server1# zpool create X25E c1t15d0
server1# zfs set sharenfs=rw X25E
server1# chmod a+w /X25E
server2# cd /net/server1/X25E
server2# gtar zxf /var/tmp/emacs-22.3.tar.gz
and a fully patched X4242
Ian Collins wrote:
Brent Jones wrote:
On Fri, Jul 3, 2009 at 8:31 PM, Ian Collins wrote:
Ian Collins wrote:
I was doing an incremental send between pools; the receive side is locked up
and no zfs/zpool commands work on that pool.
The stacks look different from those reported in the ear
I attempted to migrate data from one zfs pool to another, larger one (both
pools are currently mounted on the same host) using the snapshot send/receive
functionality. Of course, I could use something like rsync/cpio/tar instead,
but I'd have to first manually create all the target FSes, and se
I'm a new user of ZFS and I have an external USB drive which contains a ZFS
pool with a file system. It seems that it does not get auto-mounted when I
plug in the drive. I'm running osol-0811.
How can I manually mount this drive? It has a pool named rpool on it.
Are there any diagnostic commands tha
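Not from the thread, but a hedged sketch of importing it by hand; because the external pool is also called rpool, it can be imported under a different name and an alternate root (the new name and mount root are hypothetical):
# See which pools are visible on attached devices
zpool import
# Import the USB pool as "usbpool", mounted beneath /mnt/usb
# (add -f if it was not cleanly exported on the other system)
zpool import -R /mnt/usb rpool usbpool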
On Mon, 2009-07-06 at 10:00 +0200, Juergen Nickelsen wrote:
> DL Consulting writes:
> Do not use the snapshots made for the time slider feature. These are
> under the control of the auto-snapshot service, which exists exactly for the
> time slider and not for anything else.
- or you could use the auto-snapshot:
Hello,
Is anybody using this controller with OpenSolaris/snv?
http://www.adaptec.com/de-DE/products/Controllers/Hardware/sas/entry/SAS-2405/
Does it run out of the box?
How does it perform with ZFS (especially when using it for a ZFS/NFS ESX setup)?
The driver from Adaptec is for Solaris 10 u
On Tue, 7 Jul 2009, Joerg Schilling wrote:
Based on the prior discussions of using mmap() with ZFS and the way
ZFS likes to work, my guess is that POSIX_FADV_NOREUSE does nothing at
all and POSIX_FADV_DONTNEED probably does not work either. These are
pretty straightforward to implement with UFS
On 7 July 2009 at 15:54, Darren J Moffat wrote:
What compression algorithm are you using? The default "on" value
of lzjb, or are you doing something like gzip-9?
gzip-6. There is no speed problem with lzjb, but also not the same
compression ratio :-)
What build of OpenSolaris are you ru
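For reference, the setting in question is per dataset and the achieved ratio can be checked afterwards; a hedged sketch with a hypothetical dataset name:
# Use gzip-6 on this dataset (only affects data written from now on)
zfs set compression=gzip-6 tank/share
# lzjb is the lighter-weight alternative: zfs set compression=lzjb tank/share
zfs get compression,compressratio tank/share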
Bob Friesenhahn wrote:
> On Tue, 7 Jul 2009, Joerg Schilling wrote:
> >
> > posix_fadvise seems to be _very_ new for Solaris and even though I am
> > frequently reading/writing the POSIX standards mailing list, I was not
> > aware of
> > it.
> >
> > From my tests with star, I cannot see a signif
On Tue, 7 Jul 2009, Joerg Schilling wrote:
posix_fadvise seems to be _very_ new for Solaris and even though I am
frequently reading/writing the POSIX standards mailing list, I was not aware of
it.
From my tests with star, I cannot see a significant performance increase but it
may have a 3% effe
Joerg Schilling wrote:
Alexander Skwar wrote:
Hi.
I've got a fully patched Solaris 10 U7 Sparc system, on which
I enabled SNMP disk monitoring by adding those lines to the
/etc/sma/snmp/snmpd.conf configuration file:
This is an OpenSolaris-related list. Please repeat your tests on t
erik.ableson wrote:
OK - I'm at my wit's end here as I've looked everywhere to find some
means of tuning NFS performance with ESX into returning something
acceptable using osol 2008.11. I've eliminated everything but the NFS
portion of the equation and am looking for some pointers in the right
Tom Bird wrote:
Hi guys,
I've been having trouble with my archival kit, in the performance
department rather than data loss this time (phew!).
At the point when I took these stats there was about 250 Mbit of
traffic outbound on an ixgb NIC on the thing, and also about 100 Mbit of
new stuff inco
On Mon, Jul 06, 2009 at 04:54:16PM +0100, Andrew Gabriel wrote:
> Andre van Eyssen wrote:
> >On Mon, 6 Jul 2009, Gary Mills wrote:
> >
> >>As for a business case, we just had an extended and catastrophic
> >>performance degradation that was the result of two ZFS bugs. If we
> >>have another one li
James Andrewartha wrote:
> Joerg Schilling wrote:
> > I would be interested to see an open(2) flag that tells the system that I
> > will read a file that I opened exactly once in native order. This could
> > tell the system to do read ahead and to later mark the pages as immediately
> > reus
Gaëtan Lehmann wrote:
On 7 July 2009 at 15:21, Darren J Moffat wrote:
Gaëtan Lehmann wrote:
There will be two kinds of transfer protocol once in production - CIFS
and one specific to the application.
But for a quick test, the test was made with scp.
CIFS and scp are very different protoc
The destroy process must have hit a point in the FS with a hundred thousand
files. The destroy completed relatively quickly after passing that point.
Please disregard this post.
On 7 July 2009 at 15:21, Darren J Moffat wrote:
Gaëtan Lehmann wrote:
There will be two kinds of transfer protocol once in production - CIFS
and one specific to the application.
But for a quick test, the test was made with scp.
CIFS and scp are very different protocols with very differen
Gaëtan Lehmann wrote:
There will be two kinds of transfer protocol once in production - CIFS
and one specific to the application.
But for a quick test, the test was made with scp.
CIFS and scp are very different protocols with very different
performance characteristics.
Also really importa
Hi Darren,
On 7 July 2009 at 13:41, Darren J Moffat wrote:
Gaëtan Lehmann wrote:
I'd like to compress data that compresses quite well (~4x) on a file
server using ZFS compression, and still get good transfer speed.
The users are transferring several GB of data (typically, 8-10 GB).
The hos
Hi all!
I got a short question regarding data migration:
I want to copy my data (~2TB) from an old machine to a new machine with a
new raidz1 (6 x 1.5TB disks). Unfortunately this is not working properly
over the network due to various (driver) problems on the old machine.
So my idea was:
to b
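For context, a hedged sketch of creating the new 6-disk raidz1 pool (pool and device names are hypothetical):
# One raidz1 vdev across the six 1.5TB disks
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool status tank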
Hello Jörg!
On Tue, Jul 7, 2009 at 13:53, Joerg Schilling wrote:
> Alexander Skwar wrote:
>
>> Hi.
>>
>> I've got a fully patched Solaris 10 U7 Sparc system, on which
>> I enabled SNMP disk monitoring by adding those lines to the
>> /etc/sma/snmp/snmpd.conf configuration file:
>
> This is an Open
Alexander Skwar wrote:
> Hi.
>
> I've got a fully patched Solaris 10 U7 Sparc system, on which
> I enabled SNMP disk monitoring by adding those lines to the
> /etc/sma/snmp/snmpd.conf configuration file:
This is an OpenSolaris-related list. Please repeat your tests on the current
development pl
Hi.
I've got a fully patched Solaris 10 U7 Sparc system, on which
I enabled SNMP disk monitoring by adding those lines to the
/etc/sma/snmp/snmpd.conf configuration file:
disk / 5%
disk /tmp 10%
disk /data 5%
That's supposed to mean that I consider <5% available on / to be
critical, <10% on /tmp and
Gaëtan Lehmann wrote:
I'd like to compress data that compresses quite well (~4x) on a file server
using ZFS compression, and still get good transfer speed. The users are
transferring several GB of data (typically, 8-10 GB). The host is an
X4150 with 16 GB of RAM.
What protocol is being used for f
What is your NFS window size? 32kb * 120 * 7 should get you 25MB/s. Have you
considered getting an Intel X25-E? Going from 840 sync NFS IOPS to 3-5k+
IOPS is not overkill for an SSD slog device.
In fact it's probably cheaper to have one or two fewer vdevs and a single slog
device.
Nicholas
On Tue, Jul 7,
James Lever wrote:
> We also have a PERC 6/E w/512MB BBWC to test with or fall back to if we
> go with a Linux solution.
Have you tried putting the slog on this controller, either as an SSD or
regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
--
James Andrewartha | Sysadmi
OK - I'm at my wit's end here as I've looked everywhere to find some
means of tuning NFS performance with ESX into returning something
acceptable using osol 2008.11. I've eliminated everything but the NFS
portion of the equation and am looking for some pointers in the right
direction.
Co
Hi guys,
I've been having trouble with my archival kit, in the performance
department rather than data loss this time (phew!).
At the point when I took these stats there was about 250 Mbit of traffic
outbound on an ixgb NIC on the thing, and also about 100 Mbit of new stuff
incoming.
As you ca
Joerg Schilling wrote:
> I would be interested to see an open(2) flag that tells the system that I will
> read a file that I opened exactly once in native order. This could tell the
> system to do read ahead and to later mark the pages as immediately reusable.
> This would make star even faster tha
I've got the same problem. Did you find any solution?