Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread Khyron
This is how rumors get started. From reading that thread, the OP didn't seem to know much of anything about... anything. Even less so about Solaris and OpenSolaris. I'd advise not to get your news from mailing lists, especially not mailing lists for people who don't use the product you're inter

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread Michael Schuster
On 20.04.10 07:52, Ken Gunderson wrote: On Tue, 2010-04-20 at 12:27 +0700, "C. Bergström" wrote: Ken Gunderson wrote: Greetings All: Granted there has been much fear, uncertainty, and doubt following Oracle's take over of Sun, but I ran across this on a FreeBSD mailing list post dated 4/20/20

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Ian Collins
On 04/20/10 05:32 PM, Sunil wrote: ouch! My apologies! I did not understand what you were trying to say. I was gearing towards: 1. Using the newer 1TB in the eventual RAIDZ. Newer hardware typically means (slightly) faster access times and sequential throughput. Using a slice on a newer 1T

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 12:44 PM, Miles Nordin wrote: >> "dm" == David Magda writes: > >dm> Given that ZFS is always consistent on-disk, why would you >dm> lose a pool if you lose the ZIL and/or cache file? > > because of lazy assertions inside 'zpool import'. you are right there > is

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread Ken Gunderson
On Tue, 2010-04-20 at 12:27 +0700, "C. Bergström" wrote: > Ken Gunderson wrote: > > Greetings All: > > > > Granted there has been much fear, uncertainty, and doubt following > > Oracle's take over of Sun, but I ran across this on a FreeBSD mailing > > list post dated 4/20/2010" > > > > "...Seems t

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Edho P Arief
On Tue, Apr 20, 2010 at 12:32 PM, Sunil wrote: > ouch! My apologies! I did not understand what you were trying to say. > > I was gearing towards: > > 1. Using the newer 1TB in the eventual RAIDZ. Newer hardware typically means > (slightly) faster access times and sequential throughput. > 2. Getti

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 7:11 PM, Bob Friesenhahn wrote: > On Mon, 19 Apr 2010, Edward Ned Harvey wrote: >> Improbability assessment aside, suppose you use something like the DDRDrive >> X1 ... Which might be more like 4G instead of 32G ... Is it even physically >> possible to write 4G to any device in

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Sunil
ouch! My apologies! I did not understand what you were trying to say. I was gearing towards: 1. Using the newer 1TB in the eventual RAIDZ. Newer hardware typically means (slightly) faster access times and sequential throughput. 2. Getting the RAIDZ serviceable quick. Your method will cause two f

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Edho P Arief
On Tue, Apr 20, 2010 at 12:07 PM, Ian Collins wrote: >> And lose my existing data on those 2 500GB disks? >> >> > > Copy it back from the temporary pool, you are replacing your existing pool, > aren't you?  So you'll lose the data on it regardless. > >> Please, at least read the post before reply

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread C. Bergström
Ken Gunderson wrote: Greetings All: Granted there has been much fear, uncertainty, and doubt following Oracle's take over of Sun, but I ran across this on a FreeBSD mailing list post dated 4/20/2010" "...Seems that Oracle won't offer support for ZFS on opensolaris" This guy probably 1)

Re: [zfs-discuss] Making an rpool smaller?

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 4:33 PM, Brandon High wrote: > On Mon, Apr 19, 2010 at 4:21 PM, Brandon High wrote: >> I think I remember someone posting a method to copy the boot drive's layout >> with prtvtoc and fmthard, but I don't remember the exact syntax. > > Apparently Google and the man pages know

[zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-19 Thread Ken Gunderson
Greetings All: Granted there has been much fear, uncertainty, and doubt following Oracle's take over of Sun, but I ran across this on a FreeBSD mailing list post dated 4/20/2010" "...Seems that Oracle won't offer support for ZFS on opensolaris" Link here to full post here:

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-19 Thread Richard Elling
Hi Geoff, The Canucks have already won their last game of the season :-) more below... On Apr 18, 2010, at 11:21 PM, Geoff Nordli wrote: >> On Apr 13, 2010, at 5:22 AM, Tony MacDoodle wrote: >> >>> I was wondering if any data was lost while doing a snapshot on a >> running system? >> >> ZFS wil

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 7:02 PM, Bob Friesenhahn wrote: > On Mon, 19 Apr 2010, Don wrote: > >> Continuing on the best practices theme- how big should the ZIL slog disk be? >> >> The ZFS evil tuning guide suggests enough space for 10 seconds of my >> synchronous write load- even assuming I could cram

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Ian Collins
On 04/20/10 05:00 PM, Sunil wrote: On 04/20/10 04:13 PM, Sunil wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is w

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Sunil
> On 04/20/10 04:13 PM, Sunil wrote: > > Hi, > > > > I have a strange requirement. My pool consists of 2 > 500GB disks in stripe which I am trying to convert > into a RAIDZ setup without data loss but I have only > two additional disks: 750GB and 1TB. So, here is what > I thought: > > > > 1. Carve

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Ian Collins
On 04/20/10 04:13 PM, Sunil wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is what I thought: 1. Carve a 500GB slice (A) in 750

[zfs-discuss] Can RAIDZ disks be slices ?

2010-04-19 Thread Sunil
Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is what I thought: 1. Carve a 500GB slice (A) in 750GB and 2 500GB slices (B,C) in 1TB. 2
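To answer the subject line directly: yes, slices are valid raidz members. A minimal sketch with hypothetical pool and device names (not Sunil's exact layout; slices would be created with format(1M) first):

```shell
# ZFS accepts a mix of slices and whole disks as raidz members:
zpool create demopool raidz c1t0d0s0 c1t1d0s0 c1t2d0
# A slice can later be swapped for a whole disk; the vdev resilvers onto it:
zpool replace demopool c1t0d0s0 c3t0d0
```

One caveat worth noting: putting two slices of the same physical disk into one raidz defeats the redundancy, since a single disk failure then takes out two members at once.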

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
> A STEC Zeus IOPS SSD (45K IOPS) will behave quite differently than an Intel > X-25E (~3.3K IOPS). Where can you even get the Zeus drives? I thought they were only in the OEM market and last time I checked they were ludicrously expensive. I'm looking for between 5k and 10k IOPS using up to 4 dr

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Don wrote: I'm curious if anyone knows how ZIL slog performance scales. For example- how much benefit would you expect from 2 SSD slogs over 1? Would there be a significant benefit to 3 over 2 or does it begin to taper off? I'm sure a lot of this is dependent on the enviro

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I always try to plan for the worst case- I just wasn't sure how to arrive at the worst case. Thanks for providing the information- and I will definitely checkout the dtrace zilstat script. Considering the smallest SSD I can buy from a manufacturer that I trust seems to be 32GB- that's probably

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Edward Ned Harvey wrote: Improbability assessment aside, suppose you use something like the DDRDrive X1 ... Which might be more like 4G instead of 32G ... Is it even physically possible to write 4G to any device in less than 10 seconds? Remember, to achieve worst case, highe

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Don wrote: Continuing on the best practices theme- how big should the ZIL slog disk be? The ZFS evil tuning guide suggests enough space for 10 seconds of my synchronous write load- even assuming I could cram 20 gigabits/sec into the host (2 10 gigE NICs) That only comes o

Re: [zfs-discuss] upgrade zfs stripe

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Edward Ned Harvey wrote: Just be aware that if *any* of your devices fail, all is lost. (Because you've said it's configured as a nonredundant stripe.) The good news is that it is easy to convert any single-disk vdev into a mirror vdev. It is also easy to convert a mirr
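The conversion Bob describes can be sketched as follows (hypothetical pool and device names; an untested outline, not a definitive procedure):

```shell
# Attach a second device to an existing single-disk vdev; ZFS resilvers
# and the vdev becomes a two-way mirror:
zpool attach tank c0t1d0 c0t2d0
zpool status tank      # watch the resilver complete
# Detaching one side turns the mirror back into a single-disk vdev:
zpool detach tank c0t2d0
```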

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
> I think the size of the ZIL log is basically irrelevant That was the understanding I got from reading the various blog posts and tuning guide. > only a single SSD, just due to the fact that you've probably got dozens of > disks attached, and you'll probably use multiple log devices striped jus

Re: [zfs-discuss] upgrade zfs stripe

2010-04-19 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Albert Frenz > > since i am really new to zfs, i got 2 important questions for starting. > i got a nas up and running zfs in stripe mode with 2x 1,5tb hdd. my > question for future proof would

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Don > > Continuing on the best practices theme- how big should the ZIL slog > disk be? > > The ZFS evil tuning guide suggests enough space for 10 seconds of my > synchronous write load- even a

Re: [zfs-discuss] Making an rpool smaller?

2010-04-19 Thread Brandon High
On Mon, Apr 19, 2010 at 4:21 PM, Brandon High wrote: > I think I remember someone posting a method to copy the boot drive's layout > with prtvtoc and fmthard, but I don't remember the exact syntax. Apparently Google and the man pages know the answer. prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /d
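For context, the label-copying step Brandon found looks like this (the target disk name here is hypothetical, and the command overwrites the target's label, so double-check device names):

```shell
# Read the VTOC from the current boot disk and stamp it onto the new one:
prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /dev/rdsk/c5t1d0s2
```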

[zfs-discuss] upgrade zfs stripe

2010-04-19 Thread Albert Frenz
Hi there, since I am really new to ZFS, I have 2 important questions for starting. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My question, for future-proofing, is whether I could add just another drive to the pool and ZFS can integrate it flawlessly? And second, if this hdd cou

Re: [zfs-discuss] Making an rpool smaller?

2010-04-19 Thread Brandon High
On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen wrote: > I don't think LU cares that the disks in the new pool are smaller, > obviously they need to be large enough to contain the BE. It doesn't look like OpenSolaris includes LU, at least on x86-64. Anyhow, wouldn't the method you mention fail

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Christopher George
> I think the DDR drive has a battery and can dump to a cf card. The DDRdrive X1's automatic backup/restore feature utilizes on-board SLC NAND (high quality Flash) and is completely self-contained. Neither the backup nor restore feature involves data transfer over the PCIe bus or to/from remo

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Continuing on the best practices theme- how big should the ZIL slog disk be? The ZFS evil tuning guide suggests enough space for 10 seconds of my synchronous write load- even assuming I could cram 20 gigabits/sec into the host (2 10 gigE NICs) That only comes out to 200 Gigabits which = 25 Gigab
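The sizing arithmetic above checks out; a quick sketch (20 Gbit/s is the assumed aggregate rate of the two 10 GigE NICs, 10 s the window from the tuning guide):

```shell
# Worst case: both 10GbE links saturated with synchronous writes for 10s.
rate_gbit_per_s=20
window_s=10
total_gbit=$((rate_gbit_per_s * window_s))   # 200 Gbit
total_gbyte=$((total_gbit / 8))              # 25 GB
echo "worst-case slog footprint: ${total_gbyte} GB"
```

This prints "worst-case slog footprint: 25 GB", which is why even a small SSD is far larger than the slog strictly needs.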

Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-19 Thread Cindy Swearingen
Hi Harry, Both du and df are pre-ZFS commands and don't really understand ZFS space issues, which are described in the ZFS FAQ here: http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq Why does du(1) report different file sizes for ZFS and UFS? Why doesn't the space consumption that is
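For the ZFS-aware view of space that du/df cannot give, the native commands look like this (dataset name is hypothetical):

```shell
# Break usage down into the dataset itself, snapshots, and descendants:
zfs list -o space tank/home
# Or query individual properties directly:
zfs get used,referenced,usedbysnapshots tank/home
```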

Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-19 Thread Harry Putnam
Will Murnane writes: > It's important to consider what you want this data for. Considering > upgrading your storage to get more room? Check out "zpool list". > Need to know whether accounting or engineering is using more space? > Look at "zfs list". Looking at a sparse or compressed file, and

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-19 Thread Edward Ned Harvey
> From: Kyle McDonald [mailto:kmcdon...@egenera.com] > > I think I saw an ARC case go by recently for anew 'zfs diff' command. I > think it allows you compare 2 snapshots, or maybe the live filesystem > and a snapshot and see what's changed. > > It sounds really useful, Hopefully it will integrate

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Brandon High
I think the DDR drive has a battery and can dump to a cf card. -B Sent from my Nexus One. On Apr 19, 2010 10:41 AM, "Carson Gaspar" wrote: Edward Ned Harvey wrote: > I'm saying that even a single pair of disks (maybe 4 disks if you're usi... And you are confusing throughput with latency (in a

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Miles Nordin
> "dm" == David Magda writes: dm> Given that ZFS is always consistent on-disk, why would you dm> lose a pool if you lose the ZIL and/or cache file? because of lazy assertions inside 'zpool import'. you are right there is no fundamental reason for it---it's just code that doesn't exi

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Christopher George
To clarify, the DDRdrive X1 is not an option for OpenSolaris today, irrespective of specific features, because the driver is not yet available. When our OpenSolaris device driver is released, later this quarter, the X1 will have updated firmware to automatically provide backup/restore based on

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I understand that important bit about having the cachefile is the GUID's (although the disk record is, I believe, helpful in improving import speeds) so we can recover in certain oddball cases. As such- I'm still confused why you say it's unimportant. Is it enough to simply copy the /etc/cluste

Re: [zfs-discuss] SSD sale on newegg

2010-04-19 Thread Carson Gaspar
Bob Friesenhahn wrote: On Sun, 18 Apr 2010, Carson Gaspar wrote: Before (Mac OS 10.6.3 NFS client over GigE, local subnet, source file in RAM): carson:arthas 0 $ time tar jxf /Volumes/RamDisk/gcc-4.4.3.tar.bz2 real 92m33.698s user 0m20.291s sys 0m37.978s That's awful! ... tar j

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Carson Gaspar
Edward Ned Harvey wrote: I'm saying that even a single pair of disks (maybe 4 disks if you're using cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck is the 1Gb Ethernet, you won't gain anything (significant) by accelerating the stuff that isn't the bottleneck. And you a

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Ross Walker
On Apr 19, 2010, at 12:50 PM, Don wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache (or wherever you p

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I apologize- I didn't mean to come across as rude- I'm just not sure if I'm asking the right question. I'm not ready to use the ha-cluster software yet as I haven't finished testing it. For now I'm manually failing over from the primary to the backup node. That will change- but I'm not ready to

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Richard Elling
On Apr 19, 2010, at 9:50 AM, Don wrote: > Now I'm simply confused. In one sentence, the cachefile keeps track of what is currently imported. > Do you mean one cachefile shared between the two nodes for this zpool? How, > may I ask, would this work? Each OS instance has a default cachefile. >

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Darren J Moffat
On 19/04/2010 17:50, Don wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? Either that or a way for the nodes to update each other's copy very quickly. Such as a parallel filesystem. It is the job of the

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache (or wherever you prefer to put it) so it won't come up on system s

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Darren J Moffat
On 19/04/2010 17:13, Don wrote: That section of the man page is actually helpful- as I wasn't sure what I was going to do to ensure the nodes didn't try to bring up the zpool on their own- outside of clustering software or my own intervention. That said- it still doesn't explain how I would ke

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
That section of the man page is actually helpful- as I wasn't sure what I was going to do to ensure the nodes didn't try to bring up the zpool on their own- outside of clustering software or my own intervention. That said- it still doesn't explain how I would keep the secondary nodes zpool.cach

Re: [zfs-discuss] ZFS for ISCSI ntfs backing store.

2010-04-19 Thread Katzke, Karl
We're using a x4250 with a J4400 attached for a similar configuration. However, it's running Solaris 10u8. We have 16 disks in the x4250, 10 of which make up 2x raidz (4-disk each) groups, with 2 available hot spares. These are 300gb disks, so I'm less afraid of data loss from a parity failure

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Edward Ned Harvey wrote: There's no point trying to accelerate your disks if you're only going to use a single client over gigabit. This is a really strange statement. It does not make any sense. I'm saying that even a single pair of disks (maybe 4 disks if you're using

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Darren J Moffat
On 19/04/2010 16:46, Don wrote: I want to know if there is a way for a second node- connected to a set of shared disks- to keep its zpool.cache up to date _without_ actually importing the ZFS pool. See zpool(1M): cachefile=path | none Controls the location of where the pool
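A sketch of how the cachefile property is typically used for a failover pair (pool name, devices, and path here are hypothetical):

```shell
# Keep the shared pool out of /etc/zfs/zpool.cache so neither node
# auto-imports it at boot:
zpool create -o cachefile=none sharedpool mirror c2t0d0 c2t1d0
# The node that currently owns the pool imports it with an alternate cache:
zpool import -o cachefile=/etc/cluster/zpool.cache sharedpool
```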

Re: [zfs-discuss] Best way to expand a raidz pool

2010-04-19 Thread Freddie Cash
On Mon, Apr 19, 2010 at 1:42 AM, Ian Garbutt wrote: > Having looked through the forum I gather that you cannot just add an > additional device to a raidz pool. This being the case, what are the > alternatives I could use to expand a raidz pool? > > You can't expand the number of disks in a r

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Ok- I think perhaps I'm failing to explain myself. I want to know if there is a way for a second node- connected to a set of shared disks- to keep its zpool.cache up to date _without_ actually importing the ZFS pool. As I understand it- keeping the zpool up to date on the second node would pro

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Bob Friesenhahn
On Mon, 19 Apr 2010, Don wrote: If the zpool.cache file differs between the two heads for some reason- how do I ensure that the second head has an accurate copy without importing the ZFS pool? The zpool.cache file can only be valid for one system at a time. If the pool is imported to a diff

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-19 Thread Kyle McDonald
On 4/17/2010 9:03 AM, Edward Ned Harvey wrote: > It would be cool to only list files which are different. >>> Know of any way to do that? >>> >> cmp >> > Oh, no. Because cmp and diff require reading both files, it could take > forever, especially if you have a lot of

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread David Magda
On Mon, April 19, 2010 06:26, Michael DeMan wrote: > B. The current implementation stores that cache file on the zil device, > so if for some reason, that device is totally lost (along with said .cache > file), it is nigh impossible to recover the entire pool it correlates > with. Given that ZFS

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread David Magda
On Mon, April 19, 2010 07:32, Edward Ned Harvey wrote: > I'm saying that even a single pair of disks (maybe 4 disks if you're using > cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck > is the 1Gb Ethernet, you won't gain anything (significant) by accelerating > the stuff th

Re: [zfs-discuss] Making an rpool smaller?

2010-04-19 Thread Cindy Swearingen
Hi Brandon, I think I've done a similar migration before by creating a second root pool, and then create a new BE in the new root pool, like this: # zpool create rpool2 mirror disk-1 disk2 # lucreate -n newzfsBE -p rpool2 # luactivate newzfsBE # installgrub ... I don't think LU cares that the

[zfs-discuss] ZFS destroy snapshot stall writes? (snv_130).

2010-04-19 Thread Ricardo Junior
I'm using EON 0.60.0 based on snv_130. I populated a 13.5 TB mirrored ZFS volume with several large files (dd from /dev/zero, ~100GB) and created a handful of snapshots. NO DEDUPE. Having deleted some of the files, the snapshots grew considerably: tank/t...@140410075500 1.12T - 8.17T -

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-19 Thread Tim Cook
On Monday, April 19, 2010, Roy Sigurd Karlsbakk wrote: > - "Harry Putnam" skrev: > >> Erik Trimble writes: >> >> >> Do you think it would be a problem having a second sata card in a >> PCI >> >> slot?  That would be 8 sata ports in all, since the A-open AK86 >> >> motherboard has 2 built in.

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-04-19 Thread Richard Skelton
> On 18/03/10 08:36 PM, Kashif Mumtaz wrote: > > Hi, > > I did another test on both machine. And write > performance on ZFS extraordinary slow. > > Which build are you running? > > On snv_134, 2x dual-core cpus @ 3GHz and 8Gb ram (my > desktop), I > see these results: > > > $ time dd if=/dev/ze

Re: [zfs-discuss] Best way to expand a raidz pool

2010-04-19 Thread Ian Garbutt
Get a life -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] newbie, WAS: Re: SSD best practices

2010-04-19 Thread Michael DeMan (OA)
By the way, I would like to chip in about how informative this thread has been, at least for me, despite (and actually because of) the strong opinions on some of the posts about the issues involved. From what I gather, there is still an interesting failure possibility with ZFS, although prob

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-19 Thread Erik Trimble
Harry Putnam wrote: Erik Trimble writes: Do you think it would be a problem having a second sata card in a PCI slot? That would be 8 sata ports in all, since the A-open AK86 motherboard has 2 built in. Or should I swap out the 2prt for the 4 prt. I really only need 2 more prts currently,

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-19 Thread Roy Sigurd Karlsbakk
- "Harry Putnam" skrev: > Erik Trimble writes: > > >> Do you think it would be a problem having a second sata card in a > PCI > >> slot? That would be 8 sata ports in all, since the A-open AK86 > >> motherboard has 2 built in. Or should I swap out the 2prt for the > 4 > >> prt. I really

Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-19 Thread Harry Putnam
Will Murnane writes: > In short, there are many commands because there are many answers, and > many questions. No single tool has all the information available to > it. Thanks for such a complete answer... and nicely put too.

Re: [zfs-discuss] recomend sata controller 4 Home server with zfs raidz2 and 8x1tb hd

2010-04-19 Thread Harry Putnam
Erik Trimble writes: >> Do you think it would be a problem having a second sata card in a PCI >> slot? That would be 8 sata ports in all, since the A-open AK86 >> motherboard has 2 built in. Or should I swap out the 2prt for the 4 >> prt. I really only need 2 more prts currently, but would be

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I'm not certain if I'm misunderstanding you- or if you didn't read my post carefully. Why would the zpool.cache file be current on the _second_ node? The first node is where I've added my zpools and so on. The second node isn't going to have an updated cache file until I export the zpool from t

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
I must note that you haven't answered my question... If the zpool.cache file differs between the two heads for some reason- how do I ensure that the second head has an accurate copy without importing the ZFS pool?

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Don
Yes yes- /etc/zfs/zpool.cache - we all hate typos :)

Re: [zfs-discuss] newbie, WAS: Re: SSD best practices

2010-04-19 Thread Michael DeMan
In all honesty, I haven't done much at sysadmin level with Solaris since it was SunOS 5.2. I found ZFS after becoming concerned with reliability of traditional RAID5 and RAID6 systems once drives exceeded 500GB. I have a few months running ZFS on FreeBSD lately on a test/augmentation basis wit

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Edward Ned Harvey
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] > Sent: Sunday, April 18, 2010 11:34 PM > To: Edward Ned Harvey > Cc: Christopher George; zfs-discuss@opensolaris.org > Subject: RE: [zfs-discuss] SSD best practices > > On Sun, 18 Apr 2010, Edward Ned Harvey wrote: > >> This seems to b

Re: [zfs-discuss] newbie, WAS: Re: SSD best practices

2010-04-19 Thread Khyron
I would advise getting familiar with the basic terminology and vocabulary of ZFS first. Start with the Solaris 10 ZFS Administration Guide. It's a bit more complete for a newbie. http://docs.sun.com/app/docs/doc/819-5461?l=en You can then move on to the Best Practices Guide, Configuration Guide

[zfs-discuss] newbie, WAS: Re: SSD best practices

2010-04-19 Thread Michael DeMan
Also, pardon my typos, and my lack of re-titling my subject to note that it is a fork from the original topic. Corrections in text that I noticed after finally sorting out getting on the mailing list are below... On Apr 19, 2010, at 3:26 AM, Michael DeMan wrote: > By the way, > > I would like

Re: [zfs-discuss] zpool lists 2 controllers the same, how do I replace one?

2010-04-19 Thread Mark J Musante
On Sun, 18 Apr 2010, Michelle Bhaal wrote: zpool lists my pool as having 2 disks which have identical names. One is offline, the other is online. How do I tell zpool to replace the offline one? If you're lucky, the device will be marked as not being present, and then you can use the GUID.

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Michael DeMan
By the way, I would like to chip in about how informative this thread has been, at least for me, despite (and actually because of) the strong opinions on some of the posts about the issues involved. From what I gather, there is still an interesting failure possibility with ZFS, although prob

Re: [zfs-discuss] Best way to expand a raidz pool

2010-04-19 Thread Ian Garbutt
That's not easy for me. I have all the storage split up into same-size LUNs, so I can't allocate larger LUNs, and the vdev (looking at previous posts) won't give the RAID protection.

Re: [zfs-discuss] crypted zvol bandwith => lofidevice=`pfexec lofiadm -a /dev/zvol/rdsk/$volumepath -c aes-256-cbc`

2010-04-19 Thread Darren J Moffat
On 17/04/2010 10:53, Mickael Lambert wrote: My mean is about bandwidth. what I see is that if I write xMb/s to zfs fs then zfs write nearly xMb/s to the pool and that's attended. This pool write nearly xMb/s to lofi and that's attended. iostat lofi seems showing xMb/s input attended also. I am n

Re: [zfs-discuss] Best way to expand a raidz pool

2010-04-19 Thread Ian Collins
On 04/19/10 08:42 PM, Ian Garbutt wrote: Having looked through the forum I gather that you cannot just add an additional device to a raidz pool. This being the case, what are the alternatives I could use to expand a raidz pool? Either replace *all* the drives with bigger ones, or add
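The replace-all-drives route Ian describes can be sketched as follows (hypothetical names; repeat for every raidz member, letting each resilver finish before the next swap):

```shell
zpool replace tank c1t0d0 c2t0d0   # swap in a larger disk
zpool status tank                  # wait for the resilver to complete
# After the last member is replaced, expose the extra capacity
# (on builds that have the autoexpand pool property):
zpool set autoexpand=on tank
```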

[zfs-discuss] Best way to expand a raidz pool

2010-04-19 Thread Ian Garbutt
Having looked through the forum I gather that you cannot just add an additional device to a raidz pool. This being the case, what are the alternatives I could use to expand a raidz pool? Thanks Ian

Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-04-19 Thread fred pam
Hi Max, Thanks, that's what I was looking for. So, after reading it I come to the conclusion that it's actually the fact I've lost my MOS that makes it 'impossible' to retrieve the data. My understanding of it all (growing yet still meager ;-): Uberblocks do not point to different MOS-es but