The i7 and Xeon 3300 m/b that say they have ECC support have exactly this
problem as well.
On Wed, Jul 22, 2009 at 4:53 PM, Nicholas Lee wrote:
>
>
> On Tue, Jul 21, 2009 at 4:20 PM, chris wrote:
>
>> Thanks for your reply.
>> What if I wrap the ram in a sheet of lead?
On Tue, Jul 21, 2009 at 4:20 PM, chris wrote:
> Thanks for your reply.
> What if I wrap the ram in a sheet of lead? ;-)
> (hopefully the lead itself won't be radioactive)
>
> I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
> know whether this means ECC works, or ECC
What is your NFS window size? 32kb * 120 * 7 should get you 25MB/s. Have you
considered getting an Intel X25-E? Going from 840 sync NFS iops to 3-5k+
iops is not overkill for an SSD slog device.
In fact it's probably cheaper to have one or two fewer vdevs and a single
slog device.
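Roughly, adding one would look something like this (pool name and device
paths are placeholders; mirror it if you care about slog failure):

  zpool add tank log c2t0d0
  # or
  zpool add tank log mirror c2t0d0 c2t1d0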
Nicholas
On Tue, Jul 7,
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama
wrote:
> True. In $ per sequential GB/s, rotating rust still wins by far.
> However, your comment about all flash being slower than rotating at
> sequential writes was mistaken. Even at 10x the price, if you're
> working with a dataset that needs r
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
wrote:
>
> However, it is a bit disconcerting to have to run with reduced data
> protection for an entire week. While I am certainly not going back to
> UFS, it seems like it should be at least theoretically possible to do this
> several orders of m
With XenServer 4 and NFS you had to "grow" the disks (manually convert them
from thin to fat provisioning) in order to get decent performance.
On Fri, Jun 19, 2009 at 7:06 AM, lawrence ho wrote:
> We have a 7110 on try and buy program.
>
> We tried using the 7110 with XEN Server 5 over iSCSI and NFS. Nothing see
IDE flash DOM?
On Tue, Jun 2, 2009 at 8:46 AM, Ray Van Dolson wrote:
>
> Obviously we could throw in a couple smaller drives internally, or
> elsewhere... but are there any other options here?
>
Not sure if this is a wacky question.
Given that a slog device does not really need much more than 10 GB: if I were
to use a pair of X25-E (or STEC devices or whatever) in a mirror as the boot
device and then either 1. created a loopback file vdev or 2. a separate
mirrored slice for the slog, would this ca
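As a sketch of what I have in mind (slice numbers and file paths are just
placeholders, not a tested config):

  # option 1: a file vdev on the boot pool
  mkfile 10g /rpool/slogfile
  zpool add tank log /rpool/slogfile
  # option 2: a spare slice on each boot SSD
  zpool add tank log mirror c0t0d0s4 c0t1d0s4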
I guess this also means the relative value of a slog is limited by the
amount of memory that can be allocated to the txg.
On Wed, May 20, 2009 at 4:03 PM, Eric Schrock wrote:
>
> Yes, that is correct. It is best to think of the ZIL and the txg sync
> process as orthogonal - data goes to bot
So the txg is synced to the slog device but retained in memory, and then
rather than reading it back from the slog into memory, it is copied to the
pool from the in-memory copy?
With the txg being the working set of the active commit, it might then be a
set of NFS iops?
On Wed, May 20, 2009 at 3:43 PM, Eric Schrock
Does Solaris flush a slog device before it powers down? If so, removal
during a shutdown cycle wouldn't lose any data.
On Wed, May 20, 2009 at 7:57 AM, Dave wrote:
> If you don't have mirrored slogs and the slog fails, you may lose any data
> that was in a txg group waiting to be committed to
I've gotten Nexenta installed onto a USB stick on a SS4200-E. Getting it
installed required a PCI-E flex adapter. If you can reconfigure EON to boot
from a USB stick with a serial console it might be possible. I've got two
SS4200s and I might try EON on the second.
Nicholas
On Mon, Apr 20, 2009 at 8:39 PM,
On Thu, Apr 16, 2009 at 12:11 PM, Nicholas Lee wrote:
>
> Let me see if I understand this: An SSD slog can handle, say, 5000 (4k)
> transactions in a sec (20M/s) vs maybe 300 (4k) iops for a single HDD. The
> slog can then batch and dump say 30s worth of transactions - 600M as
>
On Thu, Apr 16, 2009 at 11:28 AM, Richard Elling
wrote:
> As for space, 18GBytes is much, much larger than 99.9+% of workloads
> require for slog space. Most measurements I've seen indicate that 100
> MBytes
> will be quite satisfactory for most folks. Unfortunately, there is no
> market
> for s
On Thu, Apr 16, 2009 at 3:32 AM, Greg Mason wrote:
>
> And it looks like the Intel fragmentation issue is fixed as well:
>>> http://techreport.com/discussions.x/16739
>>>
>>
>> FYI, Intel recently had a new firmware release. IMHO, odds are that
>> this will be as common as HDD firmware releases,
On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane wrote:
>
> > Has anyone done any specific testing with SSD devices and solaris other than
> > the FISHWORKS stuff? Which is better for what - SLC and MLC?
> My impression is that the flash controllers make a much bigger
> difference than the type of
2009/4/14 Miles Nordin
>
> well that's not what I meant though. The battery RAM cache's behavior
> can't be determined by RTFS whether you use ZFS or not, and the
> behavior matters to both ZFS users and non ZFS users. The advantage I
> saw to ZFS slogs, is that you can inspect the source (and b
On Mon, Apr 13, 2009 at 3:27 PM, Miles Nordin wrote:
> >>>>> "nl" == Nicholas Lee writes:
>
> nl> 1. Is the cache only used for RAID modes and not in JBOD
> nl> mode?
>
> well, there are different LSI cards and firmwares and drivers,
On Sun, Apr 12, 2009 at 7:24 PM, Miles Nordin wrote:
>
> nl> Supermicro have several LSI controllers. AOC-USASLP-L8i with
> nl> the LSI 1068E
>
> That's what I'm using. It uses the proprietary mpt driver.
>
> nl> and AOC-USASLP-H8iR with the LSI 1078.
>
> I'm not using this.
>
> nl>
Forgot to include links. See below.
Thanks.
On Sat, Apr 11, 2009 at 8:35 PM, Nicholas Lee wrote:
>
> Supermicro have several LSI controllers. AOC-USASLP-L8i with the LSI 1068E
> and AOC-USASLP-H8iR with the LSI 1078.
>
http://www.supermicro.com/products/accessories/addon/AOC-US
The standard controller that has been recommended in the past is the
AOC-SAT2-MV8 - an 8-port card with a Marvell chipset. There have been several
mentions of LSI-based controllers on the mailing lists and I'm wondering
about them.
One obvious difference is that the Marvell controller is PCI-X and the LS
On Mon, Feb 23, 2009 at 11:33 AM, Blake wrote:
> I think that's legitimate so long as you don't change ZFS versions.
>
> Personally, I'm more comfortable doing a 'zfs send | zfs recv' than I
> am storing the send stream itself. The problem I have with the stream
> is that I may not be able to r
A useful article about long-term use of the Intel SSD X25-M:
http://www.pcper.com/article.php?aid=669 - Long-term performance analysis
of Intel Mainstream SSDs.
Would a zfs cache (ZIL or ARC) based on an SSD device see this kind of issue?
Maybe a periodic scrub via a full disk erase would be a use
Is there an issue with having additional resources that support each
other?
If information is well documented, then it will be easy to tell if it is out
of date.
Regardless does the current HCL answer the questions I posed?
On Fri, Feb 6, 2009 at 2:26 PM, Richard Elling wrote:
> Nicho
On Fri, Feb 6, 2009 at 11:29 AM, Richard Elling
wrote:
>
> Seriously, is it so complicated that a best practice page is needed?
While you might be right about that, I think there is a need for a good
shared experiences site, howtos, etc.
For example, I want to put a new 2U 12 disk storage syst
Not sure where is best to put something like this.
There are wikis like
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://wiki.genunix.org/wiki/index.php/WhiteBox_ZFSStorageServer
But I haven't seen anything which has an active community like
http://www.thinkwik
Is it possible for someone to put up a wiki page somewhere with the various
SSD, ZIL and L2ARC options, with pros, cons and benchmarks?
Especially with notes like the below.
Given this is a key area of interest for zfs at the moment, it seems like it
would be a useful resource.
On Wed, Feb 4, 2009 at 11
Another option to look at is:
set zfs:zfs_nocacheflush=1
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
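For reference, it is applied either via /etc/system or live with mdb (the
mdb line assumes you are comfortable poking a running kernel):

  # /etc/system - takes effect at the next reboot
  set zfs:zfs_nocacheflush = 1

  # or on a live system
  echo zfs_nocacheflush/W0t1 | mdb -kw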
The best option is to get a fast ZIL log device.
Depends on your pool as well. NFS+ZFS means zfs will wait for write
completion before responding to sync NFS write ops. I
I've got mine sitting on the floor at the moment. Need to find the time to
try out the install.
Do you know why it would not work with the DOM? I'm planning to use a spare
4GB DOM and keep the EMC one for backup if nothing works.
Did you use a video card to install?
On Fri, Jan 9, 2009 at 10:46 A
Since zfs is so smart in other areas, is there a particular reason why a
high-water mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but did
not have corruption as it ran out of space.
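A rough workaround I have in mind (dataset name and size are placeholders) is
to park a reservation on an empty dataset, so the rest of the pool can never
actually fill:

  zfs create tank/headroom
  zfs set reservation=100g tank/headroom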
Nicholas
Has anyone tried running zfs on the Intel SS4200-E [1],[2]?
Doesn't have a video port, but you could replace the IDE flash DOM with a
pre-installed system.
I'm interested in this as a smallish (34x41x12) four-disk portable ZFS
appliance.
Seems that people have got it running with Linux/Openfiler
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
> In short, separate logs with rotating rust may reduce sync write latency by
> perhaps 2-10x on an otherwise busy system. Using write optimized SSDs
> will reduce sync write latency by perhaps 10x in all cases. This is on
On 4/19/07, Adam Lindsay <[EMAIL PROTECTED]> wrote:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID drivers
2x AMD Opteron 275 CPUs (2.2GHz, dual core)
8 GiB RAM
The supplier is used
On 4/17/07, Krzys <[EMAIL PROTECTED]> wrote:
and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root > zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been mo
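For what it's worth, the usual fix for "destination has been modified" is to
let receive roll the destination back first - a generic sketch only, with
placeholder dataset names:

  zfs send -i mypool/fs@snap1 mypool/fs@snap2 | zfs receive -F mypool2/fs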
On 4/15/07, Chris Gerhard <[EMAIL PROTECTED]> wrote:
While I would really like to see a zpool dump and zpool restore so that I
could throw a whole pool to tape it is not hard to script the recursive zfs
send / zfs receive. I had to when I had to recover my laptop.
http://blogs.sun.com/chrisg/e
On 4/13/07, Eric Schrock <[EMAIL PROTECTED]> wrote:
You want:
6421958 want recursive zfs send ('zfs send -r')
Which is actively being worked on.
Exactly. :D
"Perhaps they all have to have the same snapnames (which will be easier with
'zfs snapshot -r')."
Maybe just assume that anyone who
Rather than having to write something like:
#!/bin/bash
TIME=`date '+%Y-%m-%d-%H:%M:%S'`
zfs snapshot -r [EMAIL PROTECTED]
for i in `zfs list -H | grep $TIME | cut -f1` ; do
  zfs send $i | ssh ihstore zfs receive -d tank/sstore-ztank ;
done
That is just a first run; I'll need to add a touch /zta
On 4/13/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Only if you turn compression on in ZFS.
Other than that 0s are stored as any other data.
There is some difference, but it's marginal as the files get larger. The
disks in mtank are SATA2 ES 500Gb Seagates in an Intel V5000 system. The
s
On 3/30/07, Atul Vidwansa <[EMAIL PROTECTED]> wrote:
Lets say I reorganized my zpools. Now there are 2 pools:
Pool1:
Production data, combination of binary and text files. Only few files
change at a time. Average file sizes are around 1MB. Does it make
sense to take zfs snapshots of the pool? Wi
On 3/30/07, Shawn Walker <[EMAIL PROTECTED]> wrote:
Maybe, but they're far better at doing versioning and providing a
history of changes.
I'd have to agree. I track 6000 blobs (OOo gzip files, pdfs and other stuff)
in svn; even with 1300 changesets over 3 years there is a marginal disk cost
on
On 3/30/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
> Careful consideration of the layout of your file
> system applies regardless of which type of file system it is (zfs,
> ufs, etc.).
True. ZFS does open up a whole new can of worms/flexibility.
How do hard-links work across zfs mount/files
On 3/29/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
BFU - just for testing I guess. I would rather propose waiting for SXCE
b62.
Is there a release date for this? I note that the install iso for b60 seems
to have only been released in the last week.
Nicholas
On 3/29/07, Malachi de Ælfweald <[EMAIL PROTECTED]> wrote:
Could I get your opinion then? I have just downloaded and burnt the b60
ISO. I was just getting ready to follow Tabriz and Tim's instructions from
last year in order to get the ZFS root boot. Seeing the Heads Up, it says
that the old me
On 3/29/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
1. Instructions for Manual set up:
http://fs.central/projects/zfsboot/zfsboot_manual_setup.html
2. Instructions for Netinstall set up:
http://fs.central/projects/zfsboot/how_to_netinstall_zfsboot
I think those documents should be
On 3/23/07, John-Paul Drawneek <[EMAIL PROTECTED]> wrote:
Can I do a Raidz2 over 5 and a Raidz2 over 4 with a spare for them all?
Or two Raidz2 over 4 with 2 spares?
This is a question I was planning to ask as well.
Does zfs allow a hot spare to be allocated to multiple pools or as a system
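If it is allowed, I imagine it would look something like this (pool and
device names are placeholders):

  zpool add tank spare c3t0d0
  zpool add dozer spare c3t0d0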
On 3/23/07, John-Paul Drawneek <[EMAIL PROTECTED]> wrote:
I've got the same consideration at the moment.
Should I do a 9 disk raidz2 with a spare, or could I do two raidz2 to get a
bit of performance?
I've only done tests with striped mirrors, which seem to give a boost, so is
it worth it with a r
Has anyone run Solaris on one of these:
http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4
2U with 12 hotswap SATA disks. Supermicro motherboard; one would have to add
a second Supermicro SATA2 controller to cover all the disks, as the onboard
Intel controller can only handle 6.
Nicholas
On 2/25/07, Ian Collins <[EMAIL PROTECTED]> wrote:
Interesting, 'cat /etc/driver_aliases | grep 373' shows nothing!
Have you tried the hardware detection tool on this system?
http://www.sun.com/bigadmin/hcl/hcts/device_detect.html
[EMAIL PROTECTED]:~$ cat /etc/driver_aliases | grep 373
nge
Note also I have the BIOS set to AHCI mode for the SATA controllers, not
IDE.
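Another quick check is prtconf -D, which lists the driver bound to each
device node (the grep pattern is just a guess at what the controller entry
looks like on this board):

  prtconf -D | grep -i sata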
Nicholas
Attached.
Had to install xserver-xorg-core, but thanks to apt it was relatively easy.
Bit of interest is probably:
pci bus 0x cardnum 0x0d function 0x00: vendor 0x10de device 0x037f
nVidia Corporation MCP55 SATA Controller
CardVendor 0x3458 card 0xb002 (Card unknown)
STATUS 0x00b0 COMM
On 2/25/07, Ian Collins <[EMAIL PROTECTED]> wrote:
Is the Gigabyte SATA2 controller recognised by Solaris?
Nexenta v6 seems to work. Based on the Nforce 55 chipset I believe. I
assume Opensolaris will work since it is based on that.
I couldn't tell you if NCQ works, as Solaris is pretty new
I just built a system with a Gigabyte GA-M59SLI-S5 and 6 SATA2 drives: one
Seagate ES 250Gb system disk and 5 Seagate ES 500Gb disks. 2.2Tb with raidz.
Seems to work well with Nexenta. I could have put 5 ES 750Gb drives instead
and had another TB. All in a midi-tower with an Athlon 3800+. This
mother
Just installed Nexenta and I've been playing around with zfs.
[EMAIL PROTECTED]:/tank# uname -a
SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris
[EMAIL PROTECTED]:/tank# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
home  89.5K   219G    32K
On 2/22/07, Gregory Shaw <[EMAIL PROTECTED]> wrote:
I was thinking of something similar to a scrub. An ongoing process
seemed too intrusive. I'd envisioned a cron job similar to a scrub (or
defrag) that could be run periodically to show any differences between disk
performance over time.
On 2/22/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
and you want to move huge amount of data from /tank/foo to /tank/bar.
If you use mv/tar/dump it will copy entire data. Much faster will be to
'zfs join tank tank/foo && zfs join tank tank/bar' then just mv the data
and 'zfs split' them b
On 2/19/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
5. there's no simple answer to this question as it greatly depends on
workload and data.
One thing you should keep in mind - Solaris *has* to boot in a 64bit
mode if you want to
use all that memory as a cache for zfs, so old x86 32bi
On 2/20/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
Ah. We looked at them for some Windows DR. They do have a nice product.
Just waiting for them to get iscsi and vlan support. Supposedly sometime in
the next couple of months. Combined with zfs/iscsi it will make a very nice
small data
On 2/18/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
If by VI you are referring to VMware Infrastructure...you won't get
any support from VMware if you're using the iSCSI target on Solaris as
it's not approved by them. Not that this is really a problem in my
experience as VMware tech suppo
Is there a best practice guide for using zfs as a basic rackable small
storage solution?
I'm considering zfs with a 2U 12 disk Xeon based server system vs
something like a second hand FAS250.
Target environment is a mixture of Xen or VI hosts via iSCSI and nfs/cifs.
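As a sketch of the sort of thing I mean (pool and dataset names are
placeholders):

  zfs create -V 100g tank/xen01     # zvol to export as an iSCSI LUN
  zfs set shareiscsi=on tank/xen01
  zfs create tank/exports
  zfs set sharenfs=on tank/exports  # NFS export for the VI/Xen hosts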
Being able to take snapshots o