Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-02 Thread Tim Cook
't get a single response, I have a hard time recommending ANYONE go to Nexenta. It's great they're employing you now, but the community edition has an extremely long way to go before it comes close to touching the community that still hangs around here, despite Oracle's lack of care and feeding. http://www.nexenta.org/boards/1/topics/211 --Tim

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-02 Thread Tim Cook
On Fri, Jul 2, 2010 at 9:25 PM, Richard Elling wrote: > On Jul 2, 2010, at 6:48 PM, Tim Cook wrote: > > Given that the most basic of functionality was broken in Nexenta, and not > Opensolaris, and I couldn't get a single response, I have a hard time > recommending ANYONE

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-02 Thread Tim Cook
On Fri, Jul 2, 2010 at 9:55 PM, James C. McPherson wrote: > On 3/07/10 12:25 PM, Richard Elling wrote: > >> On Jul 2, 2010, at 6:48 PM, Tim Cook wrote: >> >>> Given that the most basic of functionality was broken in Nexenta, and not >>> Opensolaris, and I coul

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-11 Thread Tim Cook
rown jewels, then runs off to a new company and creates a filesystem that looks and feels so similar. Of course, taking stabs in the dark on this mailing list without having access to all of the court documents isn't really constructive in the first place. Then again, neither are people trying

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-12 Thread Tim Cook
On Mon, Jul 12, 2010 at 8:32 AM, Edward Ned Harvey wrote: > > From: Tim Cook [mailto:t...@cook.ms] > > > > Because VSS isn't doing anything remotely close to what WAFL is doing > > when it takes snapshots. > > It may not do what you want it to do, but it'

[zfs-discuss] invalid vdev configuration meltdown

2010-07-14 Thread Tim Castle
*                          First        Sector       Last
*                          Sector       Count        Sector
*                          34           1953525100   1953525133
*
*                          First        Sector       Last
* Partition  Tag  Flags    Sector       Count        Sector     Mount Directory
j...@opensolaris:~#
OK. There it is. Should I carefully dd label 0 and 1 to the label 2 and 3 place on each drive? What about the strange prtvtoc statuses? Please help me: How can I import my pool? What should I do? Tim
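Before rewriting anything with dd, a safer first step is to dump all four vdev labels on each disk with zdb and compare them; a minimal read-only sketch (the device names here are hypothetical - substitute what format(1M) reports):

# zdb -l /dev/rdsk/c7t0d0s0      # prints labels 0-3; a healthy disk shows four matching labels
# zdb -l /dev/rdsk/c7t1d0s0

If labels 0 and 1 are intact but 2 and 3 are missing, that at least confirms the diagnosis before attempting any destructive copy.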

Re: [zfs-discuss] Solaris Filesystem

2010-07-14 Thread Tim Cook
on that controller) Disk 0 (first disk at the end of that target) http://www.idevelopment.info/data/Unix/Solaris/SOLARIS_UnderstandingDiskDeviceFiles.shtml --Tim

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-14 Thread Tim Cook
out there. Unless they're google, and they can leave a dead server in a rack for years, it's an unsustainable plan. Out of the fortune 500, I'd be willing to bet there are exactly zero companies that use whitebox systems, and for a reason. --Tim

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Tim Castle
j...@opensolaris:~# zpool import -f files
internal error: Value too large for defined data type
Abort (core dumped)
j...@opensolaris:~# zpool import -d /dev
...shows nothing after 20 minutes
Tim

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Tim Castle
Alright, I created the links
# ln -s /dev/ad6 /mydev/ad6
...
# ln -s /dev/ad10 /mydev/ad10
and ran 'zpool import -d /mydev'. Nothing - the links in /mydev are all broken. Thanks again, Tim
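A likely culprit: ad6/ad10 are FreeBSD device names, which don't exist under OpenSolaris, so symlinks pointing at /dev/ad* dangle. A minimal sketch of the usual approach instead (the c-t-d names are hypothetical; use whatever format(1M) shows for these disks):

# mkdir /mydev
# ln -s /dev/dsk/c7t0d0s0 /mydev/
# ln -s /dev/dsk/c7t1d0s0 /mydev/
# zpool import -d /mydev

zpool import -d only needs a directory of valid device nodes (or working links to them) to scan; broken links scan as nothing, which matches the symptom above.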

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-15 Thread Tim Cook
On Thu, Jul 15, 2010 at 1:50 AM, BM wrote: > On Thu, Jul 15, 2010 at 1:51 PM, Tim Cook wrote: > > Not to mention you've then got full-time staff on-hand to constantly be > replacing > > parts. > > Maybe I don't understand something, but we also had on-ha

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-15 Thread Tim Cook
On Thu, Jul 15, 2010 at 9:09 AM, David Dyer-Bennet wrote: > > On Wed, July 14, 2010 23:51, Tim Cook wrote: > > On Wed, Jul 14, 2010 at 9:27 PM, BM wrote: > > > >> On Thu, Jul 15, 2010 at 12:49 AM, Edward Ned Harvey > >> wrote: > >> > I'll s

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
're simply using ZFS, there is no VMFS to worry about. You don't have to have another ESX box if something goes wrong, any client with an nfs client can mount the share and diagnose the VMDK. --Tim

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
hort of a piss-poor NFS server implementation, I've never once seen iSCSI beat out NFS in a VMware environment. I have however seen countless examples of their "clustered filesystem" causing permanent SCSI locks on a LUN that result in an entire datastore going offline. --Tim

Re: [zfs-discuss] ZFS and VMware

2010-08-11 Thread Tim Cook
ng VMFS > resignaturing, which is also irritating. > > I don't want to argue with you about the other stuff. > > Which is why block with vmware blows :) --Tim

[zfs-discuss] Opensolaris is apparently dead

2010-08-13 Thread Tim Cook
s in its tracks before it really even got started (perhaps that explains the timing of this press release) as well as killed the Opensolaris community. Quite frankly, I think there will be an even faster decline of Solaris installed base after this move. I know I have no interest in pushing it anywher

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-13 Thread Tim Cook
On Fri, Aug 13, 2010 at 3:54 PM, Erast wrote: > > > On 08/13/2010 01:39 PM, Tim Cook wrote: > >> http://www.theregister.co.uk/2010/08/13/opensolaris_is_dead/ >> >> I'm a bit surprised at this development... Oracle really just doesn't >> get it. The

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-15 Thread Tim Cook
unning. > > The previous Sun software support pricing model was completely bogus. The > Oracle model is also bogus, but at least it provides a means for an > entry-level user to be able to afford support. > > Bob > > The cost discussion is ridiculous, period. $400 is a ste

Re: [zfs-discuss] Help! Dedup delete FS advice needed!!

2010-08-15 Thread Tim Cook
ermine > how much longer left? > > I'd appreciate any advice, cheers > > It would be extremely beneficial for you to switch off and upgrade to 8GB. --Tim

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Tim Cook
ute it due to the GPL. The original author is free to license the code as many times under as many conditions as they like, and release or not release subsequent changes they make to their own code. I absolutely guarantee Oracle can and likely already has dual-licensed BTRFS. --Tim

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Tim Cook
On Mon, Aug 16, 2010 at 10:40 AM, Ray Van Dolson wrote: > On Mon, Aug 16, 2010 at 08:35:05AM -0700, Tim Cook wrote: > > No, no they don't. You're under the misconception that they no > > longer own the code just because they released a copy as GPL. That > > is

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Tim Cook
BTRFS+Oracle-license troll-ml > Before making yourself look like a fool, I suggest you look at the BTRFS commits. Can you find a commit submitted by anyone BUT Oracle employees? I've yet to see any significant contribution from anyone outside the walls of Oracle to the project. --Tim

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Tim Cook
egated itself to a non-player in the Linux filesystem > space... > > So, yes, they can do it if they want, I just think they're not THAT > stupid. :) > > > Or, for all you know, Chris Mason's contract has a non-compete that states if he leaves Oracle he's not al

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Tim Cook
2010/8/16 "C. Bergström" > Tim Cook wrote: > >> >> >> 2010/8/16 "C. Bergström" > codest...@osunix.org>> >> >> >>Joerg Schilling wrote: >> >>"C. Bergström" ><mailto:codest.

Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Tim Cook
e, the duplicate metadata copy might be corrupt but the problem >>> is not detected since it did not happen to be used. >>> >> >> Too bad we cannot scrub a dataset/object. >> > > Can you provide a use case? I don't see why scrub couldn't start and > stop at specific txgs for instance. That won't necessarily get you to a > specific file, though. > -- richard > I get the impression he just wants to check a single file in a pool without waiting for it to check the entire pool. --Tim

Re: [zfs-discuss] "Hot Space" vs. hot spares

2009-09-30 Thread Tim Cook
the face of a drive failure. BTW, you shouldn't need one disk per tray of 14 disks. Unless you've got some known bad disks/environmental issues, one spare for every 2-3 trays should be fine. Quite frankly, if you're doing raid-z3, I'd feel comfortable with one per thumper. --Tim

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2009-10-13 Thread Tim Cook
FS > utilities (zfs list, zpool list, zpool status) causes a hang until I replace > the disk. > -- > Did you set your failmode to continue? --Tim

Re: [zfs-discuss] SSD over 10gbe not any faster than 10K SAS over GigE

2009-10-13 Thread Tim Cook
rt is I cannot estimate how much of the old disks have life > is left because in a few months, I am going to have a handful of the fastest > SSD's around and not sure if I would trust them for much of anything. > > Am I really that wrong? > > Derek > I'll take them whe

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2009-10-13 Thread Tim Cook
On Tue, Oct 13, 2009 at 9:42 AM, Aaron Brady wrote: > I did, but as tcook suggests running a later build, I'll try an > image-update (though, 111 > 2008.11, right?) > It should be, yes. b111 was released in April of 2009. --Tim

Re: [zfs-discuss] fishworks on x4275?

2009-10-16 Thread Tim Cook
run on systems purchased as a 7000 series, Sun will not support it on anything else. --Tim

Re: [zfs-discuss] fishworks on x4275?

2009-10-16 Thread Tim Cook
On Fri, Oct 16, 2009 at 1:14 PM, Frank Cusack wrote: > On October 16, 2009 1:08:17 PM -0500 Tim Cook wrote: > >> On Fri, Oct 16, 2009 at 1:05 PM, Frank Cusack >> wrote: >> >>> Can the software which runs on the 7000 series servers be installed >>> on

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-20 Thread Tim Cook
r problem with its latency? Assuming you aren't using absurdly large block sizes, it would appear to fly. 0.15ms is bad? http://blogs.sun.com/BestPerf/entry/1_6_million_4k_iops --Tim

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-20 Thread Tim Cook
ch a workload in the real world. It sounds like you're comparing paper numbers for the sake of comparison, rather than to solve a real-world problem... BTW, latency does not give you "# of random access per second". 5 microsecond latency for one access != # of random accesses per second, sorry. --Tim

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Tim Cook
that can somehow take in 1 billion IO requests, process them, have a memory back end that can return them, but does absolutely nothing with them for a full minute. Even if you scale those numbers down, your theory is absolutely ridiculous. Of course, you also failed to address the other issue. H

Re: [zfs-discuss] new google group for ZFS on OSX

2009-10-23 Thread Tim Cook
t and > repository will >also be removed shortly. > > The community is migrating to a new google group: >http://groups.google.com/group/zfs-macos > > -- richard > Any official word from Apple on the abandonment? --Tim

Re: [zfs-discuss] zpool with very different sized vdevs?

2009-10-23 Thread Tim Cook
a > while without losing anything? I would expect the system to resilver the > data onto the remaining vdevs, or tell me to go jump off a pier. :) > -- > Jump off a pier. Removing devices is not currently supported but it is in the works. --Tim

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
expect. http://www.sun.com/servers/x64/x4540/server_architecture.pdf One drive per channel, 6 channels total. I also wouldn't be surprised to find out that they found this the optimal configuration from a performance/throughput/IOPS perspective as well. Can't seem to find those numbers publ

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
t once. You're talking (VERY conservatively) 2800 IOPS. Even ignoring that, I know for a fact that the chip can't handle raw throughput numbers on 46 disks unless you've got some very severe raid overhead. That chip is good for roughly 2GB/sec each direction. 46 7200RPM drive

[zfs-discuss] Checksums

2009-10-23 Thread Tim Cook
zfs and then upgraded to the latest? Second, would all of the blocks be re-checksummed with a zfs send/receive on the receiving side? --Tim
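For what it's worth, the active checksum algorithm is just a dataset property, so it's easy to inspect on both ends; a minimal sketch (pool/dataset names are hypothetical):

$ zfs get checksum tank/data
NAME       PROPERTY  VALUE  SOURCE
tank/data  checksum  on     default

"on" means the current default algorithm; it can be set explicitly with 'zfs set checksum=sha256 tank/data', which affects newly written blocks only - existing blocks keep the checksum they were written with. And since zfs receive writes the stream out as ordinary new writes, the received copy is checksummed with the receiving dataset's current setting.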

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
's backup. My assumption would be it's something coming in over the network, in which case I'd say you're far, far better off throttling at the network stack. --Tim

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 7:17 PM, Richard Elling wrote: > > Tim has a valid point. By default, ZFS will queue 35 commands per disk. > For 46 disks that is 1,610 concurrent I/Os. Historically, it has proven to > be > relatively easy to crater performance or cause problems with ver

Re: [zfs-discuss] Checksums

2009-10-23 Thread Tim Cook
On Fri, Oct 23, 2009 at 7:19 PM, Adam Leventhal wrote: > On Fri, Oct 23, 2009 at 06:55:41PM -0500, Tim Cook wrote: > > So, from what I gather, even though the documentation appears to state > > otherwise, default checksums have been changed to SHA256. Making that > > a

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Tim Cook
On Sat, Oct 24, 2009 at 4:49 AM, Adam Cheal wrote: > The iostat I posted previously was from a system we had already tuned the > zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10 > in actv per disk). > > I reset this value in /etc/system to 7, rebooted, and started a sc
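For reference, that tunable can be set persistently in /etc/system or adjusted on a live system with mdb; a minimal sketch (the value 7 mirrors the experiment described above):

/etc/system:
set zfs:zfs_vdev_max_pending = 7

live, no reboot required (applies to subsequent I/O):
# echo zfs_vdev_max_pending/W0t7 | mdb -kw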

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Tim Cook
On Sat, Oct 24, 2009 at 11:20 AM, Tim Cook wrote: > > > On Sat, Oct 24, 2009 at 4:49 AM, Adam Cheal wrote: > >> The iostat I posted previously was from a system we had already tuned the >> zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10 >&

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-24 Thread Tim Cook
On Sat, Oct 24, 2009 at 12:30 PM, Carson Gaspar wrote: > > I saw this with my WD 500GB SATA disks (HDS725050KLA360) and LSI firmware > 1.28.02.00 in IT mode, but I (almost?) always had exactly 1 "stuck" I/O. > Note that my disks were one per channel, no expanders. I have _not_ seen it > since rep

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-27 Thread Tim Cook
MORE if they're forced into having to deal with third party vendors that are pointing fingers at software problems vs. hardware problems and wasting Sun support engineers' valuable time. I think you'd find yourself unpleasantly surprised at the end price tag. --Tim

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-27 Thread Tim Cook
XYZ isn't working is because their hardware isn't supported... oh, and they have no plans to ever add support either. I honestly can't believe this is even a discussion. What next, are you going to ask NetApp to support ONTAP on Dell systems, and EMC to support Enginuity on

Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread Tim Cook
n old version of zfs. Grab a new iso. How would you expect a system that shipped with version 10 of zfs to know what to do with version 15? --Tim

Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread Tim Cook
On Tue, Oct 27, 2009 at 4:59 PM, dick hoogendijk wrote: > Tim Cook wrote: > >> >> >> On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons > paulrly...@gmail.com>> wrote: >> >>When I boot off Solaris 10 U8 I get the error that pool is >>forma

Re: [zfs-discuss] zpool failmode

2009-10-27 Thread Tim Cook
" will cause the system to panic and core dump. The only real advantage I see in wait is that it will alert the admin to a failure rather quickly if you aren't checking the health of the system on a regular basis. --Tim

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-27 Thread Tim Cook
's a step up from a whitebox 2-disk mirror from some no-name > vendor who won't exist in 6 months. > > --eric > > PS: Not having enough engineers to support a growing and paying > customer base is a *good* problem to have. The opposite is much, much > worse. > So use Nexenta? --Tim

Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-28 Thread Tim Cook
2009/10/28 Eric D. Mudama > On Wed, Oct 28 at 13:40, "C. Bergström" wrote: > >> Tim Cook wrote: >> >>> >>> >>> PS: Not having enough engineers to support a growing and paying >>> customer base is a *good* problem to have.

Re: [zfs-discuss] zfs-discuss gone from web?

2009-10-28 Thread Tim Cook
> -Kyle > > Either they don't like you, or you don't read your emails :) It's now hub.opensolaris.org for the main page. The forums can be found at: http://opensolaris.org/jive/index.jspa?categoryID=1 Although they appear to be having technical difficulties with the forum at the moment. --Tim

[zfs-discuss] marvell88sx2 driver build126

2009-11-01 Thread Tim Cook
I've sent this to the driver list as well, but since the zfs folks tend to be intimately involved with the marvell driver stack, I figured I'd give you guys a shot too. Does anyone happen to know if there was a driver change with build 126? I had a pool that was 2x5+1 raidz vdev's. I moved all

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-03 Thread Tim Cook
you Sun folks comment on this? --Tim

Re: [zfs-discuss] Where is green-bytes dedup code?

2009-11-03 Thread Tim Cook
m Sun. It seems the conflicts from the lawsuit may or > may not be resolved, but still.. > > Where's the code? I highly doubt you're going to get any commentary from Sun engineers on pending litigation. --Tim

Re: [zfs-discuss] ZFS + fsck

2009-11-04 Thread Tim Haley
d, in most cases ZFS currently provides a *much* better solution to random data corruption than any other filesystem+fsck in the market. The code for the putback of 2009/479 allows reverting to an earlier uberblock AND defers the re-use of blocks for a short time to make this

Re: [zfs-discuss] ZFS + fsck

2009-11-05 Thread Tim Haley
Orvar Korvar wrote: Does this putback mean that I have to upgrade my zpool, or is it a zfs tool? If I missed upgrading my zpool I am smoked? The putback did not bump zpool or zfs versions. You shouldn't have to upgrade your pool. -tim

Re: [zfs-discuss] ZFS + fsck

2009-11-05 Thread Tim Haley
The current build in-process is 128 and that's the build into which the changes were pushed. -tim

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-07 Thread Tim Cook
ivers causing the problem or not. It's tough to say what exactly is causing the problems. I would imagine ripping something like sd from the older version would break more than it would fix. --Tim

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-07 Thread Tim Cook
On Sat, Nov 7, 2009 at 12:02 PM, Cindy Swearingen wrote: > Hi Tim and all, > > I believe you are saying that marvell88sx2 driver error messages started > in build 126, along with new disk errors in RAIDZ pools. > > Is this correct? If so, please send me the following informati

Re: [zfs-discuss] Accidentally mixed-up disks in RAIDZ

2009-11-07 Thread Tim Cook
le > personal information. > > Thanks in advance. > > Leandro. > > -- > Of course, it doesn't matter which drive is plugged in where. When you import a pool, zfs scans the headers of each disk to verify if they're part of a pool or not.
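A minimal sketch of why the cabling order is safe (pool name hypothetical): ZFS identifies member disks by the GUIDs in their on-disk labels, not by controller/target position.

# zpool export tank     # before re-shuffling the drives
  (move the drives around)
# zpool import tank     # devices are rediscovered by label, whatever port they're on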

Re: [zfs-discuss] Quick dedup question

2009-11-07 Thread Tim Haley
Rich Teer wrote: Congrats for integrating dedup! Quick question: in what build of Nevada will dedup first be found? b126 is the current one presently. Cheers, 128 -tim

Re: [zfs-discuss] dedupe question

2009-11-07 Thread Tim Haley
ratio from 1.16x to 1.11x which seems to indicate that dedupe does not detect the english text is identical in every file. Theory: Your files may end up being in one large 128K block or maybe a couple of 64K blocks where there isn't much redundancy to de-dup.

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-08 Thread Tim Cook
nstall b125? Like "0.5.12-0.125"? > No. That's the SunOS version number, and you should always use 0.5.11- for anything in opensolaris today. Solaris 10= "5.10". Opensolaris="5.11". 9=5.9 etc. etc. etc. http://en.wikipedia.org/wiki/Solaris_%28operating

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
his VM, I'm prepared to do that. > > > Is this idea retarded? Something you would recommend or do yourself? All of > this convenience is pointless if there will be significant problems, I would > like to eventually serve production serve

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 11:20 AM, Joe Auty wrote: > Tim Cook wrote: > > On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote: > >> I'm entertaining something which might be a little wacky, I'm wondering >> what your general reaction to this scheme might be :) >&

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 11:37 AM, Joe Auty wrote: > Tim Cook wrote: > > On Sun, Nov 8, 2009 at 11:20 AM, Joe Auty wrote: > >> Tim Cook wrote: >> >> On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote: >> >>> I'm entertaining something which mi

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
On Sun, Nov 8, 2009 at 11:48 AM, Joe Auty wrote: > Tim Cook wrote: > > > >> It appears that one can get more in the way of features out of VMWare >> Server for free than with ESX, which is seemingly a hook into buying more >> VMWare stuff. >> >> I'

Re: [zfs-discuss] RAID-Z and virtualization

2009-11-08 Thread Tim Cook
the entire product suite. vCenter is only required for advanced functionality like HA/DPM/DRS that you don't have with VMware server either. Are you just throwing out buzzwords, or do you actually know what they do? --Tim

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Tim Haley
'd probably lose a lot of other data at the same time. We don't offer the ability to rollback if the pool can be opened/imported successfully anyway. -tim

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Tim Cook
> > driver change with build 126? > not for the SATA framework, but for HBAs there is: > http://hub.opensolaris.org/bin/view/Community+Group+on/2009093001 > > I will find a thumper, load build 125, create a raidz pool, and > upgrade to b126. > > I'll also send the error

[zfs-discuss] Odd sparing problem

2009-11-10 Thread Tim Cook
Anyone have any thoughts? I'm trying to figure out how to get c7t6d0 back to being a hotspare since c7t5d0 is installed, there, and happy. It's almost as if it's using both disks for "spare-11" right now. --Tim

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread Tim Cook
d to corrupt blocks that are part of an existing snapshot though, as they'd be read-only. The only way that should even be able to happen is if you took a snapshot after the blocks were already corrupted. Any new writes would be allocated from new blocks. --Tim

Re: [zfs-discuss] PSARC recover files?

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 3:19 PM, A Darren Dunham wrote: > On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote: > > No. The whole point of a snapshot is to keep a consistent on-disk state > > from a certain point in time. I'm not entirely sure how you managed to > &g

Re: [zfs-discuss] Odd sparing problem

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 4:38 PM, Cindy Swearingen wrote: > Hi Tim, > > I'm not sure I understand this output completely, but have you > tried detaching the spare? > > Cindy > > Hey Cindy, Detaching did in fact solve the issue. During my previous issues when the
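For anyone hitting the same stuck "spare-11" state, the command that cleared it here was detach; a minimal sketch (pool name hypothetical, device name from the thread):

# zpool detach tank c7t6d0    # ends the spare pairing; the disk returns to the spares list

zpool detach is the documented way to drop one side of a mirror or an in-use spare pairing once the original device is healthy again.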

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Tim Cook
            c7t3d0  ONLINE       0     0     0  2.05G resilvered
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
        spares
          c7t6d0    AVAIL
errors: No known data errors

Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-11-10 Thread Tim Cook
e same boat, it should constantly be filling and emptying as new data comes in. I'd imagine the TRIM would just add unnecessary overhead. It could in theory help there by zeroing out blocks ahead of time before a new batch of writes come in if you have a period of little I/O. My thou

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Tim Cook
On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook wrote: > > > On Tue, Nov 10, 2009 at 10:55 AM, Richard Elling > wrote: > >> >> On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote: >> >> Does this mean that there are no driver changes in marvell88sx2, between >&g

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-11 Thread Tim Cook
gelog-b126.html --Tim

Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-11-11 Thread Tim Cook
On Wed, Nov 11, 2009 at 11:51 AM, Bob Friesenhahn < bfrie...@simple.dallas.tx.us> wrote: > On Tue, 10 Nov 2009, Tim Cook wrote: > >> >> My personal thought would be that it doesn't really make sense to even >> have it, at least for readzilla. In theory, you al

[zfs-discuss] Manual drive failure?

2009-11-11 Thread Tim Cook
p? Am I just missing something obvious? Detach seems to only apply to mirrors and hot spares. --Tim
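If the goal is to manually take a raidz member out of service, offline and replace are the operations that apply; a minimal sketch (pool name hypothetical):

# zpool offline tank c7t5d0          # take the disk out of service; pool runs degraded
# zpool replace tank c7t5d0 c7t6d0   # or explicitly swap a spare/new disk in

zpool detach is indeed limited to mirrors and spare pairings; offline/replace are the tools for raidz vdev members.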

Re: [zfs-discuss] zfs eradication

2009-11-11 Thread Tim Cook
On Wed, Nov 11, 2009 at 12:29 PM, Darren J Moffat wrote: > Joerg Moellenkamp wrote: > >> Hi, >> >> Well ... i think Darren should implement this as a part of zfs-crypto. >> Secure Delete on SSD looks like quite challenge, when wear leveling and bad >> block relocation kicks in ;) >> > > No I won't

[zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Tim Cook
I've tried exporting and importing the pool, and it doesn't make a difference.
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
fserv  3.25T  2.73T   532G   84%  ONLINE  -
--Tim

Re: [zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Tim Cook
On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen wrote: > Hi Tim, > > In a pool with mixed disk sizes, ZFS can use only the amount of disk > space that is equal to the smallest disk and spares aren't included in > pool size until they are used. > > In your RAIDZ-2 pool
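One more thing worth checking on builds that have the autoexpand pool property (device name hypothetical): by default a pool won't grow into newly enlarged disks until told to.

# zpool set autoexpand=on fserv    # grow automatically once every device in the vdev is bigger
# zpool online -e fserv c7t0d0     # or expand a single device explicitly

On builds that predate autoexpand, an export/import after replacing the last small disk was the usual trigger.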

Re: [zfs-discuss] dedupe question

2009-11-13 Thread Tim Cook
previous thread, Adam had said that it automatically keeps more copies of a block based on how many references there are to that block. IE: If there's 20 references it would keep 2 copies, whereas if there's 20,000 it would keep 5. I'll have to see if I can dig up th

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Tim Cook
problem with the SCSI bus > termination or a bad cable? > > > Bob > SCSI? Try PATA ;) --Tim

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Tim Cook
es scrub finish in 8h, and > then rearranging the SATA cables, it takes 15h - with the same data? > > What's the motherboard model? --Tim

Re: [zfs-discuss] Disk I/O in RAID-Z as new disks are added/removed

2009-11-15 Thread Tim Cook
sponse because the first result on google should have the answer you're looking for. In any case, if memory serves correctly, Jeff's blog should have all the info you need: http://blogs.sun.com/bonwick/entry/raid_z --Tim

Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Tim Cook
t space possible out of them. > So have two raidsets. One with the 1TB drives, and one with the 300's. --Tim
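A minimal sketch of that layout as a single pool with two raidz vdevs (device names hypothetical):

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

where the first raidz holds the 1TB drives and the second the 300GB drives. Each vdev computes parity within its own set of same-sized disks, so no capacity is lost rounding down to the smallest disk across the whole pool.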

Re: [zfs-discuss] Best config for different sized disks

2009-11-15 Thread Tim Cook
ll just stripe across all the drives. You're taking a performance penalty for a setup that essentially has 0 redundancy. You lose a 500gb drive, you lose everything. --Tim

Re: [zfs-discuss] Old zfs version with OpenSolaris 2009.06 JeOS ??

2009-11-16 Thread Tim Cook
2008/07/19/opensolaris-upgrade-instructions/ If you want the latest development build, which would be required to get to a version 21 zpool, you'd need to change your repository. http://pkg.opensolaris.org/dev/en/index.shtml --Tim
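A minimal sketch of the repository switch (publisher name as shipped with 2009.06):

# pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
# pkg image-update

After rebooting into the new boot environment, 'zpool upgrade -v' lists the pool versions the new bits support.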

Re: [zfs-discuss] Best config for different sized disks

2009-11-16 Thread Tim Cook
On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn < bfrie...@simple.dallas.tx.us> wrote: > On Sun, 15 Nov 2009, Tim Cook wrote: > >> >> Once again I question why you're wasting your time with raid-z. You might >> as well just stripe across all the drives. You

Re: [zfs-discuss] hung pool on iscsi

2009-11-16 Thread Tim Cook
On Mon, Nov 16, 2009 at 2:10 PM, Martin Vool wrote: > I encountered the same problem...like I said in the first post...zpool > command freezes. Anyone know how to make it respond again? > -- > > Is your failmode set to wait? --Tim

Re: [zfs-discuss] hung pool on iscsi

2009-11-16 Thread Tim Cook
On Mon, Nov 16, 2009 at 4:00 PM, Martin Vool wrote: > I already got my files back acctuay and the disc contains already new > pools, so i have no idea how it was set. > > I have to make a virtualbox installation and test it. > Can you please tell me how-to set the failmode? > > > http://prefetch
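Setting it is a one-liner; a minimal sketch (pool name hypothetical):

# zpool get failmode tank
# zpool set failmode=continue tank

failmode accepts wait (the default - I/O blocks until the device returns), continue (return EIO to new writes), or panic.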

[zfs-discuss] [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128]

2009-11-17 Thread Tim Foster
ocess that stream (sshing to a remote server and doing a zfs recv, for example) If you do use that functionality, it'd be good to drop a mail to the thread[1] on the zfs-auto-snapshot alias. It's been a wild ride, but my work on zfs-auto-snapshot is done I think :-) cheers

Re: [zfs-discuss] hung pool on iscsi

2009-11-18 Thread Tim Cook
Also, I never said anything about setting it to panic. I'm not sure why you can't set it to continue while alerting you that a vdev has failed? -- --Tim

Re: [zfs-discuss] upgrading to the latest zfs version

2009-11-18 Thread Tim Cook
ions > on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to > date, however, the system is reporting no updates are available and stays at > zfs v19, any ideas? > > v21 isn't included in b127. As far as I know, the only way to get to 21 is to buil
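Once running bits that support it, the pool and filesystem upgrades themselves are (pool name hypothetical):

# zpool upgrade -v        # list the versions the running bits support
# zpool upgrade tank      # upgrade the pool to the newest supported version
# zfs upgrade tank        # likewise for the filesystem version

The usual caveat applies: upgraded pools can't be imported by older bits, so upgrade only once the new build is known-good.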

Re: [zfs-discuss] hung pool on iscsi

2009-11-18 Thread Tim Cook
On Wed, Nov 18, 2009 at 12:49 PM, Jacob Ritorto wrote: > Tim Cook wrote: > > > Also, I never said anything about setting it to panic. I'm not sure why > > you can't set it to continue while alerting you that a vdev has failed? > > > Ah, right, thanks for the

Re: [zfs-discuss] CIFS shares being lost

2009-11-20 Thread Tim Cook
ay be going awry, could > anyone tell me or point me in the right direction? > > Thanks, > Emily > > -- > Emily > CIFS information generally gets dumped into /var/adm/messages. What do you mean by "it stops working"? You have to remount the shar
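A couple of first checks for the in-kernel CIFS service, as a hedged sketch (service name per OpenSolaris; whether it pinpoints the cause depends on what's actually failing):

# svcs -xv network/smb/server     # is smbd healthy, and if not, why
# tail -f /var/adm/messages       # watch for smb errors as a share drops
# sharemgr show -vp               # confirm the share is still published

If the share property itself is disappearing, comparing 'zfs get sharesmb' before and after would narrow it down.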
