Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread Fred Liu
Yeah. It is also not easy to capture the possible data loss when it happens; there is no reliable way to figure it out. Thanks. Fred. -Original Message- From: rwali...@washdcmail.com [mailto:rwali...@washdcmail.com] Sent: Tuesday, May 25, 2010 11:42 To: Erik Trimble Cc: Fred Liu; ZFS Discussions S

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread rwalists
On May 24, 2010, at 4:28 AM, Erik Trimble wrote: > yes, both the X25-M (both G1 and G2) plus the X25-E have a DRAM buffer on the > controller, and neither has a supercapacitor (or other battery) to back it > up, so there is the potential for data loss (but /not/ data corruption) in a > power-lo

[zfs-discuss] alias for MPxIO path

2010-05-24 Thread Fred Liu
Hi, 1): Is it possible to set an alias for an MPxIO path? 2): What is the backplane hardware requirement for "luxadm led_blink" to work, i.e. to put a disk LED into blink mode? Thanks. Fred
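For reference, a minimal sketch of how luxadm is typically driven for the LED part of the question; whether it works depends on the backplane/enclosure providing SES-style enclosure services, and the device paths below are hypothetical:

    # Probe for enclosures/FC devices that luxadm can address
    luxadm probe

    # Blink the locate LED for a specific disk (placeholder path)
    luxadm led_blink /dev/rdsk/c2t16d0s2

    # Turn the LED back off when done
    luxadm led_off /dev/rdsk/c2t16d0s2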

Re: [zfs-discuss] New SSD options

2010-05-24 Thread Thomas Burgess
> From earlier in the thread, it sounds like none of the SF-1500 based > drives even have a supercap, so it doesn't seem that they'd necessarily > be a better choice than the SLC-based X25-E at this point unless you > need more write IOPS... > > Ray I think the upcoming OCZ Vertex 2 Pro wi

Re: [zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
> > > Not familiar with that model It's a SandForce SF-1500 based model but without a supercap. Here's some info on it: Maximum Performance - Max Read: up to 270MB/s - Max Write: up to 250MB/s - Sustained Write: up to 235MB/s - Random Write 4k: 15,000 IOPS - Max 4k IOPS: 50,00

Re: [zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
> > > ZFS is always consistent on-disk, by design. Loss of the ZIL will result > in loss of the data in the ZIL which hasn't been flushed out to the hard > drives, but otherwise, the data on the hard drives is consistent and > uncorrupted. > > > > This is what I thought. I have read this list on

Re: [zfs-discuss] questions about zil

2010-05-24 Thread Garrett D'Amore
On 5/24/2010 2:48 PM, Thomas Burgess wrote: I recently got a new SSD (OCZ Vertex LE 50GB) Not familiar with that model It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs...is it just

Re: [zfs-discuss] questions about zil

2010-05-24 Thread Erik Trimble
On 5/24/2010 2:48 PM, Thomas Burgess wrote: I recently got a new SSD (OCZ Vertex LE 50GB) It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs...is it just data loss or is it pool loss? ZFS is

Re: [zfs-discuss] questions about zil

2010-05-24 Thread Nicolas Williams
On Mon, May 24, 2010 at 05:48:56PM -0400, Thomas Burgess wrote: > I recently got a new SSD (OCZ Vertex LE 50GB) > > It seems to work really well as a ZIL, performance-wise. My question is, how > safe is it? I know it doesn't have a supercap, so let's say data loss > occurs...is it just data loss or

[zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
I recently got a new SSD (OCZ Vertex LE 50GB). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs...is it just data loss or is it pool loss? Also, does the fact that I have a UPS matter? The nu
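One common way to reduce the exposure of a single non-supercap SSD, regardless of the pool-loss question, is to mirror the log device. A minimal sketch, assuming a hypothetical pool name and device names:

    # Add a mirrored log (slog) so one failed SSD doesn't take the intent log with it
    zpool add tank log mirror c5t0d0 c5t1d0

    # Confirm the log vdev shows up healthy
    zpool status tank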

Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-24 Thread Andrew Daugherity
I had a similar problem with a RAID shelf (switched to JBOD mode, with each physical disk presented as a LUN) connected via FC (qlc driver, but no MPIO). Running a scrub would eventually generate I/O errors and many messages like this: Sep 6 15:12:53 imsfs scsi: [ID 107833 kern.warning] WARNI

[zfs-discuss] zfs/lofi/share panic

2010-05-24 Thread Frank Middleton
Many many moons ago, I submitted a CR into bugs about a highly reproducible panic that occurs if you try to re-share a lofi mounted image. That CR has AFAIK long since disappeared - I even forget what it was called. This server is used for doing network installs. Let's say you have a 64 bit iso
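The exact sequence from the lost CR isn't visible here, but a sketch of the kind of lofi-backed NFS install setup being described looks roughly like this (paths are hypothetical); per the post, the panic shows up when the same lofi-mounted image is shared again:

    # Attach the ISO to a lofi device (prints e.g. /dev/lofi/1)
    lofiadm -a /export/install/osol.iso

    # Mount it read-only and share it out for network installs
    mount -F hsfs -o ro /dev/lofi/1 /mnt/iso
    share -F nfs -o ro /mnt/iso

    # Re-sharing the same lofi-mounted image is the step reported to panic
    unshare /mnt/iso
    share -F nfs -o ro /mnt/iso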

[zfs-discuss] zpool export takes too long time in build-134

2010-05-24 Thread autumn Wang
Hi, I did zpool import/export performance testing on OpenSolaris build 134: 1) Create 100 ZFS filesystems and 100 snapshots, then do zpool export/import: export takes about 5 seconds, import takes about 5 seconds. 2) Create 200 ZFS filesystems and 200 snapshots, then do zpool export/import: export takes a
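A minimal sketch of how such a measurement could be scripted; the pool name and counts below are placeholders, not the poster's actual setup:

    #!/bin/sh
    # Create N filesystems, each with one snapshot, then time export/import
    POOL=testpool
    N=100
    i=1
    while [ $i -le $N ]; do
        zfs create $POOL/fs$i
        zfs snapshot $POOL/fs$i@snap1
        i=`expr $i + 1`
    done
    time zpool export $POOL
    time zpool import $POOL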

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread Carson Gaspar
Forrest Aldrich wrote: I've seen this product mentioned before - the problem is, we use Veritas heavily on a public network and adding yet another software dependency would be a hard sell. :( Be very certain that you need synchronous replication before you do this. For some ACID systems it re

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread iMx
> > Thanks for the pointer, I will look into it. > > > > The first thing that comes to mind is a possible performance hit, > > somewhere with the VxFS code. I could be wrong, tho. No worries, certainly worth looking into though - if performance is acceptable, it could be a good solution. Let

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread Forrest Aldrich
I've seen this product mentioned before - the problem is, we use Veritas heavily on a public network and adding yet another software dependency would be a hard sell. :(

Re: [zfs-discuss] New SSD options

2010-05-24 Thread Ray Van Dolson
On Mon, May 24, 2010 at 11:30:20AM -0700, Ray Van Dolson wrote: > This thread has grown giant, so apologies for screwing up threading > with an out of place reply. :) > > So, as far as SF-1500 based SSD's, the only ones currently in existence > are the Vertex 2 LE and Vertex 2 EX, correct (I under

Re: [zfs-discuss] New SSD options

2010-05-24 Thread Ray Van Dolson
This thread has grown giant, so apologies for screwing up threading with an out of place reply. :) So, as far as SF-1500 based SSD's, the only ones currently in existence are the Vertex 2 LE and Vertex 2 EX, correct (I understand the Vertex 2 Pro was never mass produced)? Both of these are based

Re: [zfs-discuss] iSCSI confusion

2010-05-24 Thread Scott Meilicke
VMware will properly handle sharing a single iSCSI volume across multiple ESX hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems. -Scott

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread iMx
> > Can you elaborate? > > > > Veritas has its own filesystem -- we need the block-level > > replication functionality to back up our data (live) over the WAN to > > a disaster > > recovery location. Therefore, you wouldn't be able to use Veritas > > with the ZFS filesystem. zfs create -V 10G test/
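Presumably the suggestion behind the truncated command is that a zvol hands Veritas a raw block device to replicate, even though VxFS wouldn't sit on top of ZFS itself. A rough sketch with hypothetical names:

    # Carve a 10G zvol out of the pool; it appears as an ordinary block device
    zfs create -V 10G tank/replvol

    # These are the device nodes a volume manager or replicator would be pointed at
    ls -l /dev/zvol/dsk/tank/replvol /dev/zvol/rdsk/tank/replvol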

Re: [zfs-discuss] New SSD options

2010-05-24 Thread Miles Nordin
> "d" == Don writes: > "hk" == Haudy Kazemi writes: d> You could literally split a sata cable and add in some d> capacitors for just the cost of the caps themselves. no, this is no good. The energy only flows in and out of the capacitor when the voltage across it changes. I

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread Richard Elling
On May 24, 2010, at 10:47 AM, Forrest Aldrich wrote: > We have a Sun thumper 34 terabyte, with 24T free. I've been asked to find > out whether we can remove some disks from the zpool/ZRAID config (say about > 10T) and install Veritas volumes on those, then migrate some data to it for > block-

Re: [zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread iMx
- Original Message - > From: "Forrest Aldrich" > To: zfs-discuss@opensolaris.org > Sent: Monday, 24 May, 2010 6:47:40 PM > Subject: [zfs-discuss] Removing disks from a ZRAID config? > We have a Sun thumper 34 terabyte, with 24T free. I've been asked to > find out whether we can remove s

[zfs-discuss] Removing disks from a ZRAID config?

2010-05-24 Thread Forrest Aldrich
We have a 34-terabyte Sun Thumper with 24T free. I've been asked to find out whether we can remove some disks from the zpool/ZRAID config (say about 10T) and install Veritas volumes on those, then migrate some data to it for block-level replication over a WAN. I know, horrifying - but the pr

Re: [zfs-discuss] zfs recordsize change improves performance

2010-05-24 Thread Miles Nordin
> "ai" == Asif Iqbal writes: >> If you disable the ZIL for locally run Oracle and you have an >> unscheduled outage, then it is highly probable that you will >> lose data. ai> yep. that is why I am not doing it until we replace the ai> battery no, wait please, you st
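Since the thread subject is about recordsize: the usual tuning for an Oracle data filesystem is to match the dataset recordsize to the database block size before the datafiles are created. A sketch with hypothetical names (8k assumes the common db_block_size):

    # Match recordsize to Oracle's block size; only newly written files are affected,
    # so set this before loading data
    zfs set recordsize=8k tank/oradata
    zfs get recordsize tank/oradata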

Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread Mark J Musante
On Mon, 24 May 2010, h wrote: but...wait...that can't be. I disconnected the 1TB drives and plugged in the 2TB's before doing the replace command. No information could be written to the 1TB drives at all since they were physically offline. Do the labels still exist? What does 'zdb -l /dev/rds
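The truncated command is presumably asking for a label dump; a sketch of the invocation, run against the raw slice holding the vdev (the device path here is hypothetical):

    # Dump the four ZFS labels from the disk; all four should be present and agree
    # if the label is still intact
    zdb -l /dev/rdsk/c1t0d0s0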

Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread hmmmm
But...wait...that can't be. I disconnected the 1TB drives and plugged in the 2TB's before doing the replace command. No information could be written to the 1TB drives at all since they were physically offline.

[zfs-discuss] hybrid drive: flash and platters

2010-05-24 Thread David Magda
Seagate is planning on releasing a disk that's part spinning rust and part flash: http://www.theregister.co.uk/2010/05/21/seagate_momentus_xt/ The design will have the flash be transparent to the operating system, but I wish they would have some way to access the two components sep

Re: [zfs-discuss] can you recover a pool if you lose the zil (b134+)

2010-05-24 Thread R. Eulenberg
I even have this problem on my (production) backup server. I lost my system HDD and my separate ZIL device when the system crashed, and now I'm in trouble. The old system was running the latest version of osol/dev with zfs v22. 10 days ago, after the server's crash, I was very optimistic of so

Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-24 Thread Richard Elling
On May 24, 2010, at 4:06 AM, Demian Phillips wrote: > On Sun, May 23, 2010 at 12:02 PM, Torrey McMahon wrote: >> On 5/23/2010 11:49 AM, Richard Elling wrote: >>> >>> FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life >>> (EOSL) in 2006. Personally, I hate them with a passion

Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread hmmmm
Yes, I used "zpool replace". Why is one drive recognized? Shouldn't the labels be wiped on all of them? Am I screwed?

Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread Mark J Musante
On Mon, 24 May 2010, h wrote: I had 6 disks in a raidz1 pool whose 1TB drives I replaced with 2TB drives. I have installed the older 1TB drives in another system and would like to import the old pool to access some files I accidentally deleted from the new pool. Did you use the 'zpool

[zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread hmmmm
Hi! I had 6 disks in a raidz1 pool whose 1TB drives I replaced with 2TB drives. I have installed the older 1TB drives in another system and would like to import the old pool to access some files I accidentally deleted from the new pool. The first system (with the 2TB's) is an OpenSolaris system a
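For the import itself, a sketch of what would typically be tried on the second system (the pool name below is hypothetical):

    # Scan attached devices for importable pools and show what's found
    zpool import

    # Import by name (or by the numeric pool id shown above); -f forces the import
    # if the pool still looks in use by the old host, -d points at a device directory
    zpool import -f oldpool
    zpool import -d /dev/dsk -f oldpool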

Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-24 Thread Demian Phillips
On Sun, May 23, 2010 at 12:02 PM, Torrey McMahon wrote: >  On 5/23/2010 11:49 AM, Richard Elling wrote: >> >> FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life >> (EOSL) in 2006. Personally, I  hate them with a passion and would like to >> extend an offer to use my tractor to

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread Fred Liu
Yes. I mentioned this in my thread, and I also contacted Chris. ;-) -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of J.P. King Sent: Monday, May 24, 2010 18:41 To: Andrew Gabriel Cc: ZFS Discussions Subject: Re: [zfs-discus

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread Fred Liu
Yeah, if it also had the ability to back up the data to the BIOS/EPROM on the motherboard, that would be the ultimate solution... From: Andrew Gabriel [mailto:andrew.gabr...@oracle.com] Sent: Monday, May 24, 2010 18:37 To: Erik Trimble Cc: Fred Liu; ZFS Discussions Subject: Re: [zfs-discuss] [ZIL device brains

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread J.P. King
What you probably want is a motherboard which has a small area of main memory protected by battery, and a ramdisk driver which knows how to use it. Then you'd get the 1,000,000 IOPS. No idea if anyone makes such a thing. You are correct that ZFS gets an enormous benefit from even tiny amounts i

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread Andrew Gabriel
Erik Trimble wrote: Frankly, I'm really surprised that there's no solution, given that the *amount* of NVRAM needed for ZIL (or similar usage) is really quite small. A dozen GB is more than sufficient, and really, most systems do fine with just a couple of GB (3-4 or so). Producing a small

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread Fred Liu
From: Erik Trimble [mailto:erik.trim...@oracle.com] Sent: Monday, May 24, 2010 16:28 To: Fred Liu Cc: ZFS Discussions Subject: Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache? On 5/23/2010 11:30 PM, Fred Liu wrote: Hi, I have hit the synchronous NFS writing wall just like man

Re: [zfs-discuss] zfs replace multiple drives

2010-05-24 Thread Ragnar Sundblad
On 24 May 2010, at 10.26, Brandon High wrote: > On Mon, May 24, 2010 at 1:02 AM, Ragnar Sundblad wrote: >> Is that really true if you use the "zpool replace" command with both >> the old and the new drive online? > > Yes. (Don't you mean "no" then? :-) > zpool replace [-f] pool old_device

Re: [zfs-discuss] [ZIL device brainstorm] intel x25-M G2 has ram cache?

2010-05-24 Thread Erik Trimble
On 5/23/2010 11:30 PM, Fred Liu wrote: Hi, I have hit the synchronous NFS writing wall just like many people do. There has also been lots of discussion about the solutions here. I want to post all of the exploring and fighting I've done recently, to discuss and share: 1): using the normal SATA SSDs (Intel
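For context, the usual way a SATA SSD is pressed into this role is as a dedicated log vdev, so synchronous NFS writes land on it instead of the main disks. A minimal sketch, with hypothetical pool and device names:

    # Dedicate the SSD to the intent log
    zpool add tank log c5t0d0

    # Watch whether the log device is actually absorbing the sync write load
    zpool iostat -v tank 5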

Re: [zfs-discuss] zfs replace multiple drives

2010-05-24 Thread Brandon High
On Mon, May 24, 2010 at 1:02 AM, Ragnar Sundblad wrote: > Is that really true if you use the "zpool replace" command with both > the old and the new drive online? Yes. zpool replace [-f] pool old_device [new_device] Replaces old_device with new_device. This is equivalent
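On the original question of replacing several drives at once: multiple replace commands can be outstanding at the same time, and zpool status will show all of them resilvering together, though whether that beats doing them serially depends on the hardware. A sketch with hypothetical device names:

    # Kick off two replacements; both show up in zpool status as resilvering
    zpool replace tank c1t2d0 c1t6d0
    zpool replace tank c1t3d0 c1t7d0
    zpool status tank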

Re: [zfs-discuss] zfs replace multiple drives

2010-05-24 Thread Ragnar Sundblad
On 24 May 2010, at 02.44, Erik Trimble wrote: > On 5/23/2010 5:00 PM, Andreas Iannou wrote: >> Is it safe or possible to do a zpool replace for multiple drives at once? I >> think I have one of the troublesome WD Green drives, as replacing it has >> taken 39hrs and only resilvered 58GB. I have a