Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-16 Thread Brian Wilson
property, so the SMF service doesn't constantly scan all the filesystems and volumes for their zfs properties. It just checks the conf file and knows instantly which ones need to be chown'd.
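
A minimal sketch of what such an SMF start method might look like, assuming a hypothetical conf file /etc/zvol-owners.conf with one 'pool/volume owner[:group]' entry per line (all names invented for illustration, not from the thread):

  #!/bin/sh
  # Hypothetical start method: chown zvol device nodes listed in a conf file.
  CONF=/etc/zvol-owners.conf
  [ -f "$CONF" ] || exit 0
  while read vol owner; do
      case "$vol" in ''|\#*) continue ;; esac   # skip blanks and comments
      # zvols surface under /dev/zvol/dsk and /dev/zvol/rdsk
      chown "$owner" "/dev/zvol/dsk/$vol" "/dev/zvol/rdsk/$vol"
  done < "$CONF"
  exit 0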

Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Brian Wilson
ing that are - from what Karl said about balancing the data out as one example. Cheers, Brian

Re: [zfs-discuss] LUN sizes

2012-10-29 Thread Brian Wilson
your SAN HBA. Summary - my experience on FC SANs (previous and ongoing) is that ZFS is great in that it doesn't tell me what LUN sizes are the best to use. It's a combination of what my storage array limitations and strengths are, as well as my OS configuration and application workload t

Re: [zfs-discuss] Scenario sanity check

2012-07-11 Thread Brian Wilson
On 07/ 9/12 04:36 PM, Ian Collins wrote: On 07/10/12 05:26 AM, Brian Wilson wrote: Yep, thanks, and to answer Ian with more detail on what TruCopy does. TruCopy mirrors between the two storage arrays, with software running on the arrays, and keeps a list of dirty/changed 'tracks'

Re: [zfs-discuss] Scenario sanity check

2012-07-09 Thread Brian Wilson
On 07/06/12, Richard Elling wrote: First things first, the panic is a bug. Please file one with your OS supplier. More below... Thanks! It helps that it recurred a second night in a row. On Jul 6, 2012, at 4:55 PM, Ian Collins wrote: On 07/ 7/12 11:29 AM, Brian Wilson wr

Re: [zfs-discuss] Scenario sanity check

2012-07-06 Thread Brian Wilson
On 07/ 6/12 04:17 PM, Ian Collins wrote: On 07/ 7/12 08:34 AM, Brian Wilson wrote: Hello, I'd like a sanity check from people more knowledgeable than myself. I'm managing backups on a production system. Previously I was using another volume manager and filesystem on Solaris, and

[zfs-discuss] Scenario sanity check

2012-07-06 Thread Brian Wilson
uns go read-only, but I could be wrong. Anyway, am I off my rocker? This should work with ZFS, right? Thanks! Brian

Re: [zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Brian Wilson

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-27 Thread Brian Wilson
n straight sequential IO, where on something more random I would bet they won't perform as well as they do in this test. The tool I've seen used for that sort of testing is iozone - I'm sure there are others as well, and I can't attest to which is better or worse. cheers, B

Re: [zfs-discuss] about btrfs and zfs

2011-10-19 Thread Brian Wilson
's redundancy), and in every case I've had it repair data automatically via a scrub. The one case where it didn't was when the disk controller that both drives happened to share (bad design, yes) started erroring and corrupting writes to both disks in parallel, so there was no good data

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Brian Wilson
On 10/18/11 11:46 AM, Mark Sandrock wrote: On Oct 18, 2011, at 11:09 AM, Nico Williams wrote: On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote: I just wanted to add something on fsck on ZFS - because for me that used to make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime en

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Brian Wilson
issing aren't required for my 24x7 5+ 9s application to run (e.g. log files), I can get it rolling again without them quickly, and then get those files recovered from backup afterwards as needed, without having to recover the entire pool from backup. cheers, Brian

Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-05 Thread Brian Wilson
e all my drives available. I cannot move these drives to any other box because they are consumer drives and my servers all have ultras. Most modern boards will boot from a live USB stick.

Re: [zfs-discuss] Have my RMA... Now what??

2011-05-28 Thread Brian O'Connell
Thanks for the input. On Sat, May 28, 2011 at 1:35 PM, Richard Elling wrote: On May 28, 2011, at 10:15 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org On Behalf Of Brian

Re: [zfs-discuss] Have my RMA... Now what??

2011-05-28 Thread Brian O'Connell
On Sat, May 28, 2011 at 1:15 PM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org On Behalf Of Brian: I have a raidz2 pool with one disk that seems to be going bad, several errors ar

[zfs-discuss] Have my RMA... Now what??

2011-05-28 Thread Brian
I have a raidz2 pool with one disk that seems to be going bad, several errors are noted in iostat. I have an RMA for the drive, however now I am wondering how I proceed. I need to send the drive in and then they will send me one back. If I had the drive on hand, I could do a zpool replace.
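
For reference, the sequence usually suggested in this situation looks roughly like the following, with pool and device names invented for illustration:

  # Take the failing disk offline while it is away for RMA
  zpool offline tank c8t5d0
  # After the replacement drive is installed in the same slot
  zpool replace tank c8t5d0
  # Watch the resilver progress
  zpool status -v tank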

[zfs-discuss] Migrating iSCSI volumes between pools

2011-01-13 Thread Brian
I have a situation coming up soon in which I will have to migrate some iSCSI backing stores set up with COMSTAR. Are there steps published anywhere on how to move these between pools? Does one still use send/receive or do I somehow just move the backing store? I have moved filesystems before us
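
One sketch of the send/receive approach for a zvol backing store, with hypothetical dataset names; the COMSTAR logical unit would still need to be re-registered against the new /dev/zvol path afterwards (e.g. via stmfadm), which this sketch doesn't cover:

  zfs snapshot oldpool/iscsi/lun0@migrate
  zfs send oldpool/iscsi/lun0@migrate | zfs receive newpool/iscsi/lun0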

[zfs-discuss] Understanding Disk Errors

2010-12-23 Thread Brian
I am trying to understand the various error conditions reported by iostat. I noticed during a recent scrub that my transport errors were increasing. However, after a fair amount of searching I am unsure if that indicates a drive failure or not. I also have a lot of illegal request errors. No
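
The usual first stop for decoding those counters is iostat's per-device error report, which breaks out the soft, hard, transport, and illegal-request counts:

  # -E: device error statistics; -n: descriptive device names
  iostat -En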

Re: [zfs-discuss] Large Drives

2010-12-10 Thread Brian
I had not really considered that. I was going under the assumption that 2TB drives are still "too big" for a single vdev in terms of resilver times if there is a failure. I also have a 20 bay case, so I have plenty of room to expand. So I would keep my 1TB drives around anyhow. Thanks for the

Re: [zfs-discuss] Large Drives

2010-12-10 Thread Brian
Thanks. I hadn't come across the Hitachis. They certainly seem to have a price premium associated with them - but I suppose that is to be expected. I was sort of looking towards 'greener' drives since performance wasn't a large factor for either of these vdevs. Seems too bad all the others

[zfs-discuss] Large Drives

2010-12-10 Thread Brian
The time has come to expand my OpenSolaris NAS. Right now I have 6 1TB Samsung Spinpoints in a Raidz2 configuration. I also have a mirrored root pool. The Raidz2 configuration should be for my most critical data - but right now it is holding everything so I need to add some more pools and mo

Re: [zfs-discuss] Running on Dell hardware?

2010-11-01 Thread Brian Kolaci
I've been having the same problems, and it appears to be from a remote monitoring app that calls zpool status and/or zfs list. I've also found problems with PERC and I'm finally replacing the PERC cards with SAS5/E controllers (which are much cheaper anyway). Every time I reboot, the PERC tel

Re: [zfs-discuss] hot spare remains in use

2010-10-04 Thread Brian Kolaci
Thanks, that did it. I thought "detach" was only for mirrors and I have a raidz2, so I didn't think to use that there. I tried replace/remove. I guess the "spare" is actually a mirror of the disk and the spare disk and is treated as such. Thanks again, Brian
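
The fix that worked, spelled out (pool name invented; c10t22d0 is the spare named in the original post below):

  # Detach the spare from the implicit mirror it forms with the replaced disk
  zpool detach tank c10t22d0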

[zfs-discuss] hot spare remains in use

2010-10-04 Thread Brian Kolaci
c10t22d0    INUSE     currently in use
errors: No known data errors
How can I get the spare out of the pool? Thanks, Brian

Re: [zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread Brian
That seems to have done the trick. I was worried because in the past I've had problems importing faulted file systems.

[zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread Brian
"missing". What is the proper procedure to deal with this? -brian -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] pool scrub clean, filesystem broken

2010-08-30 Thread Brian
I've posted a post-mortem followup thread: http://opensolaris.org/jive/thread.jspa?threadID=133472

Re: [zfs-discuss] Postmortem - file system recovered [SEC=UNCLASSIFIED]

2010-08-30 Thread Brian
I am afraid I can't describe the exact procedure that eventually fixed the file system as I merely observed it while Victor was logged into my system. I am quoting from the explanation he provided but if he reads this perhaps he could add whatever details seem pertinent.

[zfs-discuss] Postmortem - file system recovered

2010-08-29 Thread Brian
was. 3) Could this error be recovered from automatically? This was the root of a zfs file system and regardless of the mode bits it was probably clear that it should be treated as a directory. Thanks for everyone's help with diagnosing this. -brian

Re: [zfs-discuss] pool scrub clean, filesystem broken

2010-08-26 Thread Brian Merrell
error they somehow introduced or perhaps I've found a unique codepath that is also relevant pre-134 as well. Earlier today I was able to send some zdb dump information to Cindy which hopefully will shed some light on the situation (I would be happy to send to you as well) -brian

Re: [zfs-discuss] ZFS with EMC PowerPath

2010-08-10 Thread Brian Kolaci
On Aug 10, 2010, at 4:07 PM, Cindy Swearingen wrote: Hi Brian, Is the pool exported before the update/upgrade of PowerPath software? Yes, that's the standard procedure. This recommended practice might help the resulting devices to be more coherent. If t

[zfs-discuss] ZFS with EMC PowerPath

2010-08-09 Thread Brian Kolaci
On some machines running PowerPath, there are sometimes issues after an update/upgrade of the PowerPath software. Sometimes the pseudo devices get remapped and change names. ZFS appears to handle it OK, however sometimes it then references half native device names and half the emcpower pseudo d
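
A common way to avoid mixed device names after a PowerPath change is to export the pool before the upgrade and re-import it afterwards so ZFS re-resolves every path; a hedged sketch with an invented pool name:

  # Before the PowerPath update/upgrade
  zpool export tank
  # Afterwards, re-import; -d limits the scan to one device directory
  zpool import -d /dev/dsk tank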

Re: [zfs-discuss] pool scrub clean, filesystem broken

2010-08-03 Thread Brian Merrell
t on the filesystem and it was working well when I gracefully shutdown (to physically move the computer). I am a bit at a loss. With copy-on-write and a clean pool how can I have corruption? -brian

Re: [zfs-discuss] pool scrub clean, filesystem broken

2010-08-02 Thread Brian
Thanks Preston. I am actually using ZFS locally, connected directly to 3 sata drives in a raid-z pool. The filesystem is ZFS and it mounts without complaint and the pool is clean. I am at a loss as to what is happening. -brian

[zfs-discuss] pool scrub clean, filesystem broken

2010-08-02 Thread Brian
h the filesystem (cd, chown, etc) -brian

Re: [zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-13 Thread Brian Leonard
ill any of these processes. Time for hard-reboot. /Brian

Re: [zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-13 Thread Brian Leonard
recognizable until I restart the enclosure. This same demo works fine when using USB sticks, and maybe that's because each USB stick has its own controller. Thanks for your help, Brian

[zfs-discuss] Recovering from an apparent ZFS Hang

2010-07-12 Thread Brian Leonard
Hi, I'm currently trying to work with a quad-bay USB drive enclosure. I've created a raidz pool as follows:

bleon...@opensolaris:~# zpool status r5pool
  pool: r5pool
 state: ONLINE
 scrub: none requested
config:
        NAME       STATE     READ WRITE CKSUM
        r5pool     ONLINE

Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-07 Thread Brian Kolaci
On 7/6/2010 10:37 AM, Victor Latushkin wrote: On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote: Well, I see no takers or even a hint... I've been playing with zdb to try to examine the pool, but I get: # zdb -b pool4_green zdb: can't open pool4_green: Bad exchange descriptor

Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-06 Thread Brian Kolaci
s in the logs and it just "disappeared" without a trace. The only logs are from subsequent reboots where it says a ZFS pool failed to open. It does not give me a warm & fuzzy about using ZFS as I've relied on it heavily in the past 5 years. Any advice would be well appreciated

[zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-02 Thread Brian Kolaci
type='disk'
id=6
guid=14740659507803921957
path='/dev/dsk/c10t6d0s0'
devid='id1,s...@n60026b9040e26100139d854a09957d56/a'
phys_path='/p...@0

Re: [zfs-discuss] Permanet errors detected in :<0x13>

2010-06-30 Thread W Brian Leonard
below - but the backup did complete as the pool remained online. Thanks for your help Cindy, Brian Cindy Swearingen wrote: I reviewed the zpool clear syntax (looking at my own docs) and didn't remember that a one-device pool probably doesn't need the device specified. For pools with ma

Re: [zfs-discuss] Permanet errors detected in :<0x13>

2010-06-30 Thread W Brian Leonard
Interesting, this time it worked! Does specifying the device to clear cause the command to behave differently? I had assumed w/out the device specification, the clear would just apply to all devices in the pool (which are just the one). Thanks, Brian Cindy Swearingen wrote: Hi Brian

Re: [zfs-discuss] Permanet errors detected in :<0x13>

2010-06-30 Thread W Brian Leonard
Hi Cindy, The scrub didn't help and yes, this is an external USB device. Thanks, Brian Cindy Swearingen wrote: Hi Brian, You might try running a scrub on this pool. Is this an external USB device? Thanks, Cindy On 06/29/10 09:16, Brian Leonard wrote: Hi, I have a zpool whi

[zfs-discuss] Permanet errors detected in :<0x13>

2010-06-29 Thread Brian Leonard
r to destroy and recreate the pool? Thanks, Brian

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Brian Kolaci
never had to restore a whole file system. I get requests for a few files, or somebody's mailbox or somebody's http document root. You can directly install it from CSW (or blastwave). Thanks for your comments, Brian. I should look at Bacula i

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Brian Kolaci
I use Bacula which works very well (much better than Amanda did). You may be able to customize it to do direct zfs send/receive, however I find that although they are great for copying file systems to other machines, they are inadequate for backups unless you always intend to restore the whole f

Re: [zfs-discuss] c5->c9 device name change prevents beadm activate

2010-06-24 Thread Brian Nitz
device names from the sending hardware? On 06/23/10 18:15, Lori Alt wrote: Cindy Swearingen wrote: On 06/23/10 10:40, Evan Layton wrote: On 6/23/10 4:29 AM, Brian Nitz wrote: I saw a problem while upgrading from build 140 to 141 where beadm activate {build141BE} failed because installgrub

Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Brian
Ok - So I unmounted all the directories, and then deleted them from /media, then I rebooted and everything remounted correctly and the system is functioning again. OK, time for a zpool scrub, then I will try my export and import.. whew :-)
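
The verification sequence described there, written out with an invented pool name:

  zpool scrub tank
  zpool status -v tank    # wait until the scrub completes with no errors
  zpool export tank
  zpool import tank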

Re: [zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Brian
Did some more reading.. Should have exported first... gulp... So, I powered down and moved the drives around until the system came back up and zpool status is clean.. However, now I can't seem to boot. During boot it finds all 17 ZFS filesystems and starts mounting them. I have several file

[zfs-discuss] Steps to Recover a ZFS pool.

2010-06-22 Thread Brian
Did a search, but could not find the info I am looking for. I built out my OSOL system about a month ago and have been gradually making changes before I move it into production. I have setup a mirrored rpool and a 6 drive raidz2 pool for data. In my system I have 2 8-port SAS cards and 6 port

Re: [zfs-discuss] Solaris 10U8, Sun Cluster, and SSD issues.

2010-06-01 Thread Brian Wilson
On Jun 1, 2010, at 2:43 PM, Steve D. Jost wrote: Definitely not a silly question. And no, we create the pool on node1 then set up the cluster resources. Once setup, sun cluster manages importing/exporting the pool into only the active cluster node. Sorry for the lack of clarity.. not

Re: [zfs-discuss] Solaris 10U8, Sun Cluster, and SSD issues.

2010-06-01 Thread Brian Wilson
Silly question - you're not trying to have the ZFS pool imported on both hosts at the same time, are you? Maybe I misread, had a hard time following the full description of what exact configuration caused the scsi resets. On Jun 1, 2010, at 2:22 PM, Steve Jost wrote: Hello All, We are

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Ok. What worked for me was booting with the live CD and doing:

pfexec zpool import -f rpool
reboot

After that I was able to boot with AHCI enabled. The performance issues I was seeing are now also gone. I am getting around 100 to 110 MB/s during a scrub. Scrubs are completing in 20 minutes

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Not completely. I noticed my performance problem in my "tank" rather than my rpool. But my rpool was sharing a controller (the motherboard controller) with some devices in both the rpool and tank. -- This message posted from opensolaris.org ___ zfs-d

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Sometimes when it hangs on boot hitting space bar or any key won't bring it back to the command line. That is why I was wondering if there was a way to not show the splashscreen at all, and rather show what it was trying to load when it hangs.

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Thanks - I can give reinstalling a shot. Is there anything else I should do first? Should I export my tank pool?

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
I am not sure I fully understand the question... It is setup as raidz2 - is that what you wanted to know?

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Is there a way within opensolaris to detect if AHCI is being used by various controllers? I suspect you may be right that AHCI is not turned on. The BIOS for this particular motherboard is fairly confusing on the AHCI settings. The only setting I have is actually in the raid section, and it
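
One quick check (an assumption on my part, not from the thread): ports running in AHCI mode bind to the ahci driver, which prtconf's driver-binding output can show:

  # If nothing matches, the controller is likely in IDE-compatibility mode
  prtconf -D | grep -i ahci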

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Following up with some more information here: This is the output of "iostat -xen 30":

                   extended device statistics             ---- errors ----
    r/s    w/s    kr/s  kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  296.8    2.9 36640.2   7.5  7.8  2.0   26.1

[zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
I am new to OSOL/ZFS but have just finished building my first system. I detailed the system setup here: http://opensolaris.org/jive/thread.jspa?threadID=128986&tstart=15 I ended up having to add an additional controller card as two ports on the motherboard did not work as standard SATA ports.

Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-15 Thread Brian
Very helpful. I just started to set up my system and have run into a problem where my SATA ports 7/8 aren't really SATA ports; they are behind an unsupported RAID controller, so I am in the market for a compatible controller. Very helpful post.

Re: [zfs-discuss] Using WD Green drives?

2010-05-13 Thread Brian
(3) Was more about the size than the Green vs. Black issue. This is all assuming most people are looking at green drives for the cost benefits associated with their large sizes. You are correct, Green and Black would most likely have the same number of platters per size.

Re: [zfs-discuss] Using WD Green drives?

2010-05-13 Thread Brian
I am new to OSOL/ZFS myself -- just placed an order for my first system last week. However, I have been reading these forums for a while - a lot of the data seems to be anecdotal, but here is what I have gathered as to why the WD green drives are not a good fit for a RAIDZ(n) system. (1) They s

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-18 Thread Brian H. Nelson
could be a cause of the problem you are describing. This doc from VMware is aimed at block-based storage but it has some concepts that might be helpful as well as info on aligning guest OS partitions: http://www.vmware.com/pdf/esx3_partition_align.pdf -Brian Chris Murray wrote: Good evenin

Re: [zfs-discuss] ZFS Large scale deployment model

2010-03-02 Thread Brian Kolaci
On Mar 2, 2010, at 11:09 AM, Bob Friesenhahn wrote: On Tue, 2 Mar 2010, Brian Kolaci wrote: What is the probability of corruption with ZFS in Solaris 10 U6 and up in a SAN environment? Have people successfully recovered? The probability of corruption in

[zfs-discuss] ZFS Large scale deployment model

2010-03-02 Thread Brian Kolaci
s, they require redundancy at the hardware level, and they won't budge on that and won't do additional redundancy at the ZFS level. So given the environment, would it be better to have lots of small pools, or a large shared pool? Thanks, Brian

[zfs-discuss] panic: assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs, &dbp), file: ../../common/fs/zfs/dmu.c, line: 591

2010-02-23 Thread Brian Kolaci
re RAIDs. I'm not too sure what to do with zdb to see anything. Any ideas as to what I can do to recover the rest of the data? There are still some database files on there I need. Thanks, Brian

Re: [zfs-discuss] [storage-discuss] Disk Issues

2010-02-20 Thread Brian McKerr
Thanks everyone who has tried to help. This has gotten a bit crazier: I removed the 'faulty' drive and let the pool run in degraded mode. It would appear that now another drive has decided to play up;

bash-4.0# zpool status
  pool: data
 state: DEGRADED
status: One or more devices has b

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-16 Thread Brian E. Imhoff
Some more back story. I initially started with Solaris 10 u8, and was getting 40ish MB/s reads, and 65-70MB/s writes, which was still a far cry from the performance I was getting with OpenFiler. I decided to try Opensolaris 2009.06, thinking that since it was more "state of the art & up to dat

[zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Brian E. Imhoff
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN box, and am experiencing absolutely poor / unusable performance. Where to begin... The hardware setup:
Supermicro 4U 24 Drive Bay Chassis
Supermicro X8DT3 Server Motherboard
2x Xeon E5520 Nehalem 2.26 Quad Core CPUs
4

Re: [zfs-discuss] Disk Issues

2010-02-08 Thread Brian McKerr
Ok, I changed the cable and also tried swapping the port on the motherboard. The drive continued to have huge asvc_t and also started to have huge wsvc_t. I unplugged it and the 'pool' is now operating as per expected performance wise. See the 'storage' forum for any further updates as I am now

Re: [zfs-discuss] Disk Issues

2010-02-07 Thread Brian McKerr
"I'd say your easiest two options are swap ports and see if the problem follows the drive. If it does, swap the drive out. --Tim" Yep, that sounds like a plan. Thanks for your suggestion.

[zfs-discuss] Disk Issues

2010-02-07 Thread Brian McKerr
While not strictly a ZFS issue as such I thought I'd post here as this and the storage forums are my best bet in terms of getting some help. I have a machine that I recently set up with b130, b131 and b132. With each build I have been playing around with ZFS raidz2 and mirroring to do a little

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
Interesting comments.. But I am confused. Performance for my backups (compression/deduplication) would most likely not be #1 priority. I want my VMs to run fast - so is it deduplication that really slows things down? Are you saying raidz2 would overwhelm current I/O controllers to where I cou

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
It sounds like the consensus is more cores over clock speed. Surprising to me since the difference in clocks speed was over 1Ghz. So, I will go with a quad core. I was leaning towards 4GB of ram - which hopefully should be enough for dedup as I am only planning on dedupping my smaller file sy

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
Thanks for the reply. Are cores better because the compression/deduplication is multi-threaded or because of multiple streams? It is a pretty big difference in clock speed - so curious as to why more cores would be better. Glad to see your 4 core system is working well for you - so seems like

[zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Brian
I am starting to put together a home NAS server that will have the following roles: (1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 HD streams at a time. These will be streamed live to the NAS box during recording. (2) Playback TV (could be stream being recorded, co

Re: [zfs-discuss] Root Mirror - Permission Denied

2010-01-17 Thread Brian Fitzhugh
Got an answer emailed to me that said, "you need to use a second pfexec after the | like this: pfexec prtvtoc /dev/rdsk/c7d0s2 | pfexec fmthard -s - /dev/rdsk/c7d1s2" Thanks for the quick response, emailer.
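
For completeness, after copying the label the mirror is typically finished along these lines (assuming the root pool is named rpool and lives on slice 0; the disk names are from the thread):

  pfexec zpool attach rpool c7d0s0 c7d1s0
  # Once the resilver finishes, make the second disk bootable (x86)
  pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0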

[zfs-discuss] Root Mirror - Permission Denied

2010-01-17 Thread Brian Fitzhugh
/rdsk/c7d1s2 I get: fmthard: Cannot open device /dev/rdsk/c7d1s2 - Permission denied Any ideas as to what I might be doing wrong here? Thanks, Brian

Re: [zfs-discuss] adpu320 scsi timeouts only with ZFS

2010-01-14 Thread Brian Kolaci
I was frustrated with this problem for months. I've tried different disks, cables, even disk cabinets. The driver hasn't been updated in a long time. When the timeouts occurred, they would freeze for about a minute or two (showing the 100% busy). I even had the problem with less than 8 L

Re: [zfs-discuss] ZFS upgrade.

2010-01-07 Thread Brian H. Nelson
, but I cannot tell you if any newer versions support later zfs versions. John, You are already running the Update 8 kernel (141444-09). That is the latest version of ZFS that is available for Solaris 10. -Brian

Re: [zfs-discuss] best way to configure raidz groups

2009-12-30 Thread Brian
I can't answer your question - but I would like to see more details about the system you are building (sorry if off topic here). What motherboard and what compact flash adapters are you using?

[zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Brian McKerr
Hi all, I have a home server based on SNV_127 with 8 disks:
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions:
NFS: for several 'lab' ESX virtual machines
NFS: mythtv storage (videos, music, recordings etc)
Samba: for home directories for all networke

[zfs-discuss] adpu320 scsi timeouts only with ZFS

2009-11-22 Thread Brian Kolaci
d UFS on the disks. I had planned on making this system a master database server, however I'm still getting timeouts with it running as a slave, so I don't have the comfort to promote this system to the master. Any suggestions? Thanks, Brian

Re: [zfs-discuss] Backing up ZVOLs

2009-11-14 Thread Brian McKerr
Thanks for the help. I was curious whether the zfs send|receive was considered suitable given a few things I've read which said something along the lines of "don't count on being able to restore this stuff". Ideally that is what I would use with the 'incremental' option so as to only backup ch

[zfs-discuss] Backing up ZVOLs

2009-11-14 Thread Brian McKerr
Hello all, Are there any best practices / recommendations for ways of doing this? In this case the ZVOLs would be iSCSI LUNs containing ESX VMs. I am aware of the need for the VMs to be quiesced for the backups to be useful. Cheers.
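
A rough sketch of the snapshot-then-send pattern for a zvol, with invented dataset and host names; the VMs should be quiesced before the snapshot is taken:

  zfs snapshot tank/esx/lun0@backup-20091114
  zfs send tank/esx/lun0@backup-20091114 | ssh backuphost zfs receive backup/esx/lun0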

Re: [zfs-discuss] zfs/io performance on Netra X1

2009-11-13 Thread Brian H. Nelson
machine is basically a desktop machine in a rack mount case (similar to a Blade 100) and is also vintage 2001. I wouldn't expect much performance out of it regardless. -Brian

Re: [zfs-discuss] zfs eradication

2009-11-11 Thread Brian Kolaci
Thanks all, It was a government customer that I was talking to and it sounded like a good idea, however with the certification paper trails required today, I don't think it would be of such a benefit after all. It may be useful on the disk evacuation, but they're still going to need their pa

[zfs-discuss] zfs eradication

2009-11-10 Thread Brian Kolaci
eradication patterns back to the removed blocks. By any chance, has this been discussed or considered before? Thanks, Brian

[zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Brian
Why does resilvering an entire disk yield a different amount of resilvered data each time? I have read that ZFS only resilvers what it needs to, but in the case of replacing an entire disk with another formatted clean disk, you would think the amount of data would be the same each time

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Brian Hechinger
Please don't feed the troll. :) -brian On Wed, Oct 21, 2009 at 06:32:42AM -0700, Robert Dupuy wrote: > There is a debate tactic known as complex argument, where so many false and > misleading statements are made at once, that it overwhelms the respondent. > > I'm just

Re: [zfs-discuss] Stupid to have 2 disk raidz?

2009-10-15 Thread Brian Hechinger
On Thu, Oct 15, 2009 at 11:09:32AM -0600, Cindy Swearingen wrote: "Hi Greg, with two disks, I would start with a mirror. Then, you could add..." Additionally, with a two-disk RAIDZ1 you are doing parity calculations for no good reason. I would recommend a mirror. -brian
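
The mirror setup being recommended, as a sketch with invented pool and device names:

  # Two-disk mirror: no parity math, and either side can service reads
  zpool create tank mirror c1t0d0 c1t1d0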

[zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-14 Thread Brian
I am having a strange problem with liveupgrade of a ZFS boot environment. I found a similar discussion on zones-discuss, but this happens for me on installs with and without zones, so I do not think it is related to zones. I have been able to reproduce this on both sparc (ldom) and x86 (phsy

Re: [zfs-discuss] NFS sgid directory interoperability with Linux

2009-10-13 Thread Brian De Wolf
On 10/12/2009 04:38 PM, Paul B. Henson wrote: I only have ZFS filesystems exported right now, but I assume it would behave the same for ufs. The underlying issue seems to be the Sun NFS server expects the NFS client to apply the sgid bit itself and create the new directory with the parent directo

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Brian Hubbleday
I had a 50mb zfs volume that was an iscsi target. This was mounted into a Windows system (ntfs) and shared on the network. I used notepad.exe on a remote system to add/remove a few bytes at the end of a 25mb file.

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Brian Hubbleday
Just realised I missed a rather important word out there, that could confuse. So the conclusion I draw from this is that the --incremental-- snapshot simply contains every written block since the last snapshot regardless of whether the data in the block has changed or not.

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Brian Hubbleday
I took binary dumps of the snapshots taken in between the edits and this showed that there was actually very little change in the block structure, however the incremental snapshots were very large. So the conclusion I draw from this is that the snapshot simply contains every written block since
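
That behavior follows from how incremental sends work: the stream carries every block written after the base snapshot, whether or not its contents differ. A sketch of the incremental send itself, with invented names:

  zfs snapshot tank/iscsivol@snap2
  zfs send -i tank/iscsivol@snap1 tank/iscsivol@snap2 | \
      ssh remotehost zfs receive backup/iscsivol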

[zfs-discuss] Incremental snapshot size

2009-09-30 Thread Brian Hubbleday
I am looking to use Opensolaris/ZFS to create an iscsi SAN to provide storage for a collection of virtual systems and replicate to an offsite device. While testing the environment I was surprised to see the size of the incremental snapshots, which I need to send/receive over a WAN connection, c

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-09-24 Thread Brian H. Nelson
e the other day, my first thought was "Oh cool, they reinvented Prestoserve!" -Brian
