Re: [zfs-discuss] New SSD options

2010-05-19 Thread Ragnar Sundblad
On 20 May 2010, at 00.20, Don wrote: > "You can lose all writes from the last committed transaction (i.e., the one before the currently open transaction)." > And I don't think that bothers me. As long as the array itself doesn't go belly up- then a few seconds of lost transactions are lar

Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-19 Thread Marc Bevand
Deon Cui writes: > So I had a bunch of them lying around. We've bought a 16x SAS hotswap case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as the mobo. > In the two 16x PCI-E slots I've put in the 1068E controllers I had lying around. Everything is still being

[zfs-discuss] mpt hotswap procedure

2010-05-19 Thread Russ Price
I'm not having any luck hotswapping a drive attached to my Intel SASUC8I (LSI-based) controller. The commands which work for the AMD AHCI ports don't work for the LSI. Here's what "cfgadm -a" reports with all drives installed and operational: Ap_Id Type Recepta
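For context, a minimal sketch of the cfgadm replacement sequence that works on many Solaris controllers (and that the poster reports failing on the mpt); the attachment-point Ap_Id shown is hypothetical and will differ per system:

    # Locate the attachment point of the failed drive (Ap_Id is hypothetical)
    cfgadm -a
    # Take the drive offline before physically removing it
    cfgadm -c unconfigure c2::dsk/c2t5d0
    # After inserting the replacement, bring it back online
    cfgadm -c configure c2::dsk/c2t5d0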

[zfs-discuss] vibrations and consumer drives

2010-05-19 Thread David Magda
A recent post on StorageMojo has some interesting numbers on how vibrations can affect disks, especially consumer drives: http://storagemojo.com/2010/05/19/shock-vibe-and-awe/ He mentions a 2005 study that I wasn't aware of. In its conclusion it states: Based on the results of thes

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Don
"You can lose all writes from the last committed transaction (i.e., the one before the currently open transaction)." I'll pick one- performance :) Honestly- I wish I had a better grasp on the real world performance of these drives. 50k IOPS is nice- and considering the incredible likelihood of d

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Lori Alt
First, I suggest you open a bug at https://defect.opensolaris.org/bz and get a bug number. Then, name your core dump something like "bug." and upload it using the instructions here: http://supportfiles.sun.com/upload Update the bug once you've uploaded the core and supply the name of th

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Don
"You can lose all writes from the last committed transaction (i.e., the one before the currently open transaction)." And I don't think that bothers me. As long as the array itself doesn't go belly up- then a few seconds of lost transactions are largely irrelevant- all of the QA virtual machines

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Nicolas Williams
On Wed, May 19, 2010 at 02:29:24PM -0700, Don wrote: > "Since it ignores the Cache Flush command and it doesn't have any persistent buffer storage, disabling the write cache is the best you can do." > This actually brings up another question I had: What is the risk, beyond a few seconds of los

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Richard Elling
On May 19, 2010, at 2:29 PM, Don wrote: > "Since it ignores the Cache Flush command and it doesn't have any persistent buffer storage, disabling the write cache is the best you can do." > This actually brings up another question I had: What is the risk, beyond a few seconds of lost writes, if

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Don
"Since it ignores Cache Flush command and it doesn't have any persistant buffer storage, disabling the write cache is the best you can do." This actually brings up another question I had: What is the risk, beyond a few seconds of lost writes, if I lose power, there is no capacitor and the cache

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Don
Well, the larger size of the Vertex, coupled with their smaller claimed write amplification, should result in sufficient service life for my needs. Their claimed MTBF also matches the Intel X25-E's.

[zfs-discuss] zpool import On Fail Over Server Using Shared SAS zpool Storage But Not Shared cache SSD Devices

2010-05-19 Thread Preston Connors
Hello and good day, I will have two OpenSolaris snv_134 storage servers both connected to a SAS chassis with SAS disks used to store zpool data. One storage server will be the active storage server and the other will be the passive fail over storage server. Both servers will be able to access the
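A minimal sketch of the failover step under discussion, assuming a hypothetical pool name (tank) and hypothetical device names; a pool can be imported even when its cache (L2ARC) devices are absent, so the missing SSDs can be removed and the passive head's local ones added:

    # Force-import the shared pool on the surviving head
    zpool import -f tank
    # Remove the cache device that physically lives in the failed head
    zpool remove tank c1t2d0
    # Add this head's own SSD as the new L2ARC device
    zpool add tank cache c3t0d0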

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread John Andrunas
OK, I got a core dump, what do I do with it now? It is 1.2G in size. On Wed, May 19, 2010 at 10:54 AM, John Andrunas wrote: > Hmmm... no coredump even though I configured it. > Here is the trace though. I will see what I can do about the coredump > r...@cluster:/export/home/admin# zfs mount

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Erik Trimble
Miles Nordin wrote: "et" == Erik Trimble writes: et> frequently-accessed files from multiple VMs are in fact et> identical, and thus with dedup, you'd only need to store one et> copy in the cache. although counterintuitive I thought this wasn't part of the initial rel

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Miles Nordin
> "et" == Erik Trimble writes: et> frequently-accessed files from multiple VMs are in fact et> identical, and thus with dedup, you'd only need to store one et> copy in the cache. although counterintuitive I thought this wasn't part of the initial release. Maybe I'm wrong altoget

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread John Andrunas
Hmmm... no coredump even though I configured it. Here is the trace though. I will see what I can do about the coredump r...@cluster:/export/home/admin# zfs mount vol2/vm2 panic[cpu3]/thread=ff001f45ec60: BAD TRAP: type=e (#pf Page fault) rp=ff001f45e950 addr=30 occurred in module "zfs" d

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Erik Trimble
Bob Friesenhahn wrote: On Wed, 19 May 2010, Deon Cui wrote: http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance It recommends that for every TB of storage you have you want 1GB of RAM just for the metadata. Interesting conclusion. Is

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Roy Sigurd Karlsbakk
- "Deon Cui" skrev: > I am currently doing research on how much memory ZFS should have for a > storage server. > > I came across this blog > > http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance > > It recommends that for every TB of sto

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread Nicolas Williams
On Wed, May 19, 2010 at 07:50:13AM -0700, John Hoogerdijk wrote: > Think about the potential problems if I don't mirror the log devices across the WAN. If you don't mirror the log devices then your disaster recovery semantics will be that you'll miss any transactions that hadn't been committed t
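A sketch of the mirrored-slog layout being discussed, with hypothetical device names (one local SSD, one LUN exported from the remote site over the WAN):

    # Attach a mirrored log device: local SSD plus the remote LUN
    zpool add tank log mirror c2t0d0 c5t1d0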

Re: [zfs-discuss] inodes in snapshots

2010-05-19 Thread Nicolas Williams
On Wed, May 19, 2010 at 05:33:05AM -0700, Chris Gerhard wrote: > The reason for wanting to know is to try to find versions of a file. No, there's no such guarantee. The same inode and generation number pair is extremely unlikely to be re-used, but the inode number itself is likely to be re-used.

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Michael Schuster
On 19.05.10 17:53, John Andrunas wrote: Not to my knowledge, how would I go about getting one? (CC'ing discuss) man savecore and dumpadm. Michael On Wed, May 19, 2010 at 8:46 AM, Mark J Musante wrote: Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010, John And
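For context, a minimal sketch of the dumpadm/savecore setup those man pages describe (the savecore directory here is an assumption):

    # Show the current crash dump configuration
    dumpadm
    # Have savecore run automatically on reboot, saving into /var/crash/<host>
    dumpadm -y -s /var/crash/$(hostname)
    # After the next panic and reboot, retrieve the dump from the dump device
    savecore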

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread John Andrunas
Not to my knowledge, how would I go about getting one? (CC'ing discuss) On Wed, May 19, 2010 at 8:46 AM, Mark J Musante wrote: > Do you have a coredump? Or a stack trace of the panic? > On Wed, 19 May 2010, John Andrunas wrote: >> Running ZFS on a Nexenta box, I had a mirror get broken a

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Mark J Musante
Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010, John Andrunas wrote: Running ZFS on a Nexenta box, I had a mirror get broken and apparently the metadata is corrupt now. If I try to mount vol2 it works, but if I try mount -a or mount vol2/vm2 it instantly kerne

[zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread John Andrunas
Running ZFS on a Nexenta box, I had a mirror get broken and apparently the metadata is corrupt now. If I try to mount vol2 it works, but if I try mount -a or mount vol2/vm2 it instantly kernel panics and reboots. Is it possible to recover from this? I don't care if I lose the file listed bel
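One avenue worth noting: builds from snv_128 onward ship pool recovery mode, which can discard the last few transactions to get back to a consistent state; a hedged sketch, assuming the pool can be exported first:

    # The pool must be exported before a recovery-mode import
    zpool export vol2
    # Dry run: report whether discarding recent transactions would allow import
    zpool import -Fn vol2
    # If acceptable, actually perform the rewind (recent writes are lost)
    zpool import -F vol2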

Re: [zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Bob Friesenhahn
On Wed, 19 May 2010, Deon Cui wrote: http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance It recommends that for every TB of storage you have you want 1GB of RAM just for the metadata. Interesting conclusion. Is this really the case that

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread Richard Elling
comment below... On May 19, 2010, at 7:50 AM, John Hoogerdijk wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John Hoogerdijk >> I'm building a campus cluster with identical storage in two locations with ZFS mi

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread John Hoogerdijk
> On Tue, May 18, 2010 20:45, Edward Ned Harvey wrote: >> The whole point of a log device is to accelerate sync writes, by providing nonvolatile storage which is faster than the primary storage. You're not going to get this if any part of the log device is at the other side of a

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread John Hoogerdijk
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John Hoogerdijk >> I'm building a campus cluster with identical storage in two locations with ZFS mirrors spanning both storage frames. Data will be mirrored using zfs.

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread Bob Friesenhahn
On Tue, 18 May 2010, Edward Ned Harvey wrote: Either I'm crazy, or I completely miss what you're asking. You want to have one side of a mirror attached locally, and the other side of the mirror attached ... via iscsi or something ... across the WAN? Even if you have a really fast WAN (1Gb or s

Re: [zfs-discuss] New SSD options

2010-05-19 Thread David Magda
On Wed, May 19, 2010 02:09, thomas wrote: > Is it even possible to buy a Zeus IOPS anywhere? I haven't been able to find one. I get the impression they mostly sell to other vendors like Sun? I'd be curious what the price on a 9GB Zeus IOPS is these days? Correct, their Zeus products are on

[zfs-discuss] Review: SuperMicro's SC847 (SC847A) 4U chassis with 36 drive bays

2010-05-19 Thread Eugen Leitl
http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/ Review: SuperMicro's SC847 (SC847A) 4U chassis with 36 drive bays May 7, 2010 · 9 comments in Geek Stuff, Linux, Storage, Virtualization, Work Stuff SuperMicro SC847 Thumbnail [Or "my quest for th

Re: [zfs-discuss] ZFS in campus clusters

2010-05-19 Thread David Magda
On Tue, May 18, 2010 20:45, Edward Ned Harvey wrote: > The whole point of a log device is to accelerate sync writes, by providing > nonvolatile storage which is faster than the primary storage. You're not > going to get this if any part of the log device is at the other side of a > WAN. So eithe

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Yuri Vorobyev
As for the Vertex drives- if they are within +-10% of the Intel they're still doing it for half of what the Intel drive costs- so it's an option- not a great option- but still an option. Yes, but the Intel is SLC. Much more endurance.

Re: [zfs-discuss] inodes in snapshots

2010-05-19 Thread Chris Gerhard
The reason for wanting to know is to try to find versions of a file. If a file is renamed, then the only way to know that the renamed file is the same as a file in a snapshot would be if the inode numbers matched. However, for that to be reliable, it would require that inode numbers are not reused. If
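A quick illustration of the comparison in question, with hypothetical paths; snapshots are reachable under the filesystem's .zfs/snapshot directory:

    # Inode number of the live (possibly renamed) file
    ls -i /tank/home/report-new.txt
    # Inode number of the candidate file inside a snapshot
    ls -i /tank/home/.zfs/snapshot/monday/report.txt
    # Equal inode numbers suggest, but do not prove, the same file: inode
    # numbers can be reused after deletion; per the reply above, only the
    # (inode, generation) pair is effectively unique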

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Don
Well- 40k IOPS is the current claim from ZEUS- and they're the benchmark. They used to be 17k IOPS. How real any of these numbers are from any manufacturer is a guess. Given Intel's refusal to honor a cache flush, and their performance problems with the cache disabled- I don't trust them any

Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-19 Thread Deon Cui
My work has bought a bunch of IBM servers recently as ESX hosts. They all come with LSI SAS1068E controllers as standard, which we remove and upgrade to a RAID 5 controller. So I had a bunch of them lying around. We've bought a 16x SAS hotswap case and I've put in an AMD X4 955 BE with an ASUS

[zfs-discuss] ZFS memory recommendations

2010-05-19 Thread Deon Cui
I am currently doing research on how much memory ZFS should have for a storage server. I came across this blog http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance It recommends that for every TB of storage you have you want 1GB of RAM just f
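Taken at face value, the article's rule of thumb works out as follows (a rough planning figure from the linked post, not a hard requirement):

    # ~1 GB RAM per 1 TB of pool capacity, for metadata caching alone
    # e.g. a 24 TB pool -> ~24 GB RAM for metadata,
    # plus whatever the ARC needs for cached data and the OS itself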

Re: [zfs-discuss] inodes in snapshots

2010-05-19 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> If I create a file in a file system and then snapshot the file system, then delete the file: is it guaranteed that while the snapshot exists no new file will be created with the same inode number as the deleted file?

[zfs-discuss] inodes in snapshots

2010-05-19 Thread Chris Gerhard
If I create a file in a file system and then snapshot the file system, then delete the file: is it guaranteed that while the snapshot exists no new file will be created with the same inode number as the deleted file? --chris

Re: [zfs-discuss] Very serious performance degradation

2010-05-19 Thread Philippe
> it looks like your 'sd5' disk is performing horribly badly, and except for the horrible performance of 'sd5' (which bottlenecks the I/O), 'sd4' would look just as bad. Regardless, the first step would be to investigate 'sd5'. Hi Bob! I've already tried the pool without the sd5 dis

Re: [zfs-discuss] Very serious performance degradation

2010-05-19 Thread Ian Collins
On 05/19/10 09:34 PM, Philippe wrote: Hi! It is strange because I've checked the SMART data of the 4 disks, and everything seems really OK! (on another hardware/controller, because I needed Windows to check it). Maybe it's a problem with the SAS/SATA controller?! One question: if I halt t

Re: [zfs-discuss] Very serious performance degradation

2010-05-19 Thread Philippe
> How full is your filesystem? Give us the output of "zfs list". > You might be having a hardware problem, or maybe it's extremely full. Hi Edward, The "_db" filesystems have a recordsize of 16K (the others have the default 128K): NAME USED AVAIL REFER MOUNTPOIN
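For reference, the recordsize tuning mentioned here is set per dataset; a minimal sketch with a hypothetical dataset name (the setting only affects files written after the change):

    # Create a database dataset with 16K records to match the DB I/O size
    zfs create -o recordsize=16k tank/db
    # Or change an existing dataset; existing files keep their old record size
    zfs set recordsize=16k tank/db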

Re: [zfs-discuss] Very serious performance degradation

2010-05-19 Thread Philippe
> mm.. Service times of sd3..5 are waay too high to be good working disks. > 21 writes shouldn't take 1.3 seconds. > Some of your disks are not feeling well, possibly doing block-reallocation like mad all the time, or block recovery of some form. Service times should be closer to what s
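The service times being discussed are the asvc_t column in extended iostat output; a minimal way to watch them:

    # Sample extended device statistics every 5 seconds; sustained asvc_t
    # values in the hundreds of milliseconds point at a sick disk
    iostat -xn 5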

Re: [zfs-discuss] scsi messages and mpt warning in log - harmless, or indicating a problem?

2010-05-19 Thread Carson Gaspar
Willard Korfhage wrote: This afternoon, messages like the following started appearing in /var/adm/messages: May 18 13:46:37 fs8 scsi: [ID 365881 kern.info] /p...@0,0/pci8086,2...@1/pci15d9,a...@0 (mpt0): May 18 13:46:37 fs8 Log info 0x3108 received for target 5. May 18 13:46:37 fs8

Re: [zfs-discuss] New SSD options

2010-05-19 Thread Ragnar Sundblad
On 2010-05-19 08.32, sensille wrote: Don wrote: With that in mind- Is anyone using the new OCZ Vertex 2 SSD's as a ZIL? They're claiming 50k IOPS (4k Write- Aligned), 2 million hour MTBF, TRIM support, etc. That's more write IOPS than the ZEUS (40k IOPS, $) but at half the price of an In