> From: Richard Elling [mailto:rich...@nexenta.com]
> >
> > Regardless of multithreading, multiprocessing, it's absolutely
> possible to
> > have contiguous files, and/or file fragmentation. That's not a
> > characteristic which depends on the threading model.
>
> Possible, yes. Probable, no. C
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brad
>
> Hi! I'd been scouring the forums and web for admins/users who deployed
> zfs with compression enabled on Oracle backed by storage array luns.
> Any problems with cpu/memory overhead?
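For reference, compression is a per-dataset property and the payoff can be checked afterwards. A minimal sketch, using a hypothetical tank/oradata dataset:
  zfs set compression=on tank/oradata        # lzjb; gzip-N trades more CPU for a better ratio
  zfs get compression,compressratio tank/oradata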
> From: Haudy Kazemi [mailto:kaze0...@umn.edu]
>
> With regard to multiuser systems and how that negates the need to
> defragment, I think that is only partially true. As long as the files
> are defragmented enough so that each particular read request only
> requires one seek before it is time to
> From: Richard Elling [mailto:rich...@nexenta.com]
> > With appropriate write caching and grouping or re-ordering of writes
> > algorithms, it should be possible to minimize the amount of file
> > interleaving and fragmentation on write that takes place.
>
> To some degree, ZFS already does this. Th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wolfraider
>
> We are looking into the possibility of adding a dedicated ZIL and/or
> L2ARC devices to our pool. We are looking into getting 4 – 32GB Intel
> X25-E SSD drives. Would this be a
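As a hedged sketch of what adding those devices looks like (device names made up), a slog is usually mirrored while L2ARC need not be:
  zpool add tank log mirror c2t0d0 c2t1d0
  zpool add tank cache c2t2d0 c2t3d0
Note that on pool versions before 19 a log device cannot be removed again, as discussed further down.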
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ramesh Babu
>
> I would like to know if I can create ZFS file system without ZFS
> storage pool. Also I would like to know if I can create ZFS pool/ZFS
> pool on Veritas Volume.
Unless I'm mi
> From: Richard Elling [mailto:rich...@nexenta.com]
>
> > Suppose you want to ensure at least 99% efficiency of the drive. At
> > most 1% time wasted by seeking.
>
> This is practically impossible on a HDD. If you need this, use SSD.
Lately, Richard, you're saying some of the craziest illogic
> From: Richard Elling [mailto:rich...@nexenta.com]
>
> It is practically impossible to keep a drive from seeking. It is also
The first time somebody (Richard) said "you can't prevent a drive from
seeking," I just decided to ignore it. But then it was said twice. (Ian.)
I don't get why anybod
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wolfraider
>
> target mode, using both ports. We have 1 zvol connected to 1 windows
> server and the other zvol connected to another windows server with both
> windows servers having a qlogic 2
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
>
> > For example, if you start with an empty drive, and you write a large
> > amount of data to it, you will have no fragmentation. (At least, no
> > significant fragme
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marty Scholes
>
> What appears to be missing from this discussion is any shred of
> scientific evidence that fragmentation is good or bad and by how much.
> We also lack any detail on how much
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
>
> The ability to remove the slogs isn't really the win here, it's import
> -F. The
Disagree.
Although I agree the -F is important and good, I think the log device
remov
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> > you lose information. Not your whole pool. You lose up to
> > 30 sec of writes
>
> The default is now 5 seconds (zfs_txg_timeout).
When did that become the default? Should I *ever* say 30 sec anymore?
In my world, the oldest machine is
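For what it's worth, on Solaris builds of that era the live value can be read with mdb, and a persistent override goes in /etc/system; a sketch:
  echo zfs_txg_timeout/D | mdb -k            # print the current value, in seconds
  # /etc/system:  set zfs:zfs_txg_timeout = 5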
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tom Bird
>
We recently had a long discussion in this list, about resilver times versus
raid types. In the end, the conclusion was: resilver code is very
inefficient for raidzN. Someday it m
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> It is very unusual to obtain the same number of errors (probably same
> errors) from two devices in a pair. This should indicate a common
> symptom such as a memory error (
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> The dedup property is set on a filesystem, not on the pool.
>
> However, the dedup ratio is reported on the pool and not on the
> filesystem.
As with most other ZFS concepts, t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> The following works well:
> dd if=/dev/random of=/dev/disk-node bs=1M count=1 seek=whatever
>
> If you have long enough cables, you can move a disk outside the case
> and ru
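If you deliberately corrupt a disk that way, the hedged follow-up is to let a scrub find the damage (pool name hypothetical):
  zpool scrub tank
  zpool status -v tank      # shows checksum errors and, where known, the affected files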
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brad Stone
>
> For de-duplication to perform well you need to be able to fit the de-
> dup table in memory. Is a good rule-of-thumb for needed RAM Size=(pool
> capacity/avg block size)*270 byt
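A rough worked example of that rule of thumb, assuming a 10 TB pool with a 64 KB average block size:
  10 TB / 64 KB ≈ 150 million blocks
  150 million blocks x 270 bytes ≈ 40 GB of dedup table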
> From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris-
> discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> I'm using a custom snapshot scheme which snapshots every hour, day,
> week and month, rotating 24h, 7d, 4w and so on. What would be the best
> way to z
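Assuming the goal is to replicate those snapshots elsewhere, a minimal sketch with incremental sends (names hypothetical):
  zfs send -I tank/data@2010-10-01 tank/data@2010-10-14 | ssh backuphost zfs receive -F backup/data
-I sends all the intermediate snapshots between the two, and -F lets the receiving side roll back to the last common snapshot first.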
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
>
> > For now, the rule of thumb is 3G ram for every 1TB of unique data,
> > including
> > snapshots and vdev's.
>
> 3 gigs? Last I checked it was a little more than 1GB, perhaps 2 if you
> have small files.
http://opensolaris.org/jive/thr
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> It is relatively easy to find the latest, common snapshot on two file
> systems.
> Once you know the latest, common snapshot, you can send the
> incrementals
> up to the latest.
I've always relied on the snapshot names matching. Is the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jason J. W. Williams
>
> I just witnessed a resilver that took 4h for 27gb of data. Setup is 3x
> raid-z2 stripes with 6 disks per raid-z2. Disks are 500gb in size. No
> checksum errors.
27G o
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> extended device statistics
> device    r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
> sd1       0.5  140.3    0.3  2426.3   0.0   1.0    7.2   0  14
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> As I understand, the hash generated by sha256 is "almost" guaranteed
> not to collide. I am thinking it is okay to turn off "verify" property
> on the zpool. However, if there is
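For reference, verify is part of the dedup property value itself, so the choice is made per dataset (name hypothetical):
  zfs set dedup=sha256,verify tank/data      # a hash match is confirmed byte-for-byte
  zfs set dedup=sha256 tank/data             # trust the hash alone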
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Scott Meilicke
>
> Why do you want to turn verify off? If performance is the reason, is it
> significant, on and off?
Under most circumstances, verify won't hurt performance. It won't hurt
re
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tony MacDoodle
>
> Is it possible to add 2 disks to increase the size of the pool below?
>
> NAME          STATE   READ WRITE CKSUM
> testpool      ONLINE     0     0     0
>   mirror-0    ONLINE     0     0     0
>     c1t2d0    ONLINE     0     0     0
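The usual way to grow a mirrored pool like that is to add another mirror vdev (device names hypothetical):
  zpool add testpool mirror c1t3d0 c1t4d0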
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Ian,
>
> yes, although these vdevs are FC raids themselves, so the risk is… uhm…
> calculated.
Whenever possible, you should always JBOD the storage and let ZFS manage the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Now, scrub would reveal corrupted blocks on the devices, but is there a
> way to identify damaged files as well?
I saw a lot of people offering the same knee-jerk reaction t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> I
> conducted a couple of tests, where I configured my raids as jbods and
> mapped each drive out as a separate LUN and I couldn't notice a
> difference in performance in any
> From: edmud...@mail.bounceswoosh.org
> [mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
>
> On Wed, Oct 6 at 22:04, Edward Ned Harvey wrote:
> > * Because ZFS automatically buffers writes in ram in order to
> > aggregate as previously mentioned, the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Kevin Walker
>
> We are a running a Solaris 10 production server being used for backup
> services within our DC. We have 8 500GB drives in a zpool and we wish
> to swap them out 1 by 1 for 1TB
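A sketch of the usual one-at-a-time swap (device name hypothetical); the extra space only shows up once every drive in the vdev has been replaced:
  zpool replace tank c0t1d0                  # after physically swapping in the 1TB drive
  zpool set autoexpand=on tank               # on newer pool versions; older ones grow after an export/import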
> From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
>
> I would not discount the performance issue...
>
> Depending on your workload, you might find that performance increases
> with ZFS on your hardware RAID in JBOD mode.
Depends on the raid card you're comparing to. I've certainly s
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> I would seriously consider raidz3, given I typically see 80-100 hour
> resilver times for 500G drives in raidz2 vdevs. If you haven't
> already,
If you're going raidz3, with 7
Is there a ZFS equivalent (or alternative) of inotify?
You have some thing, which wants to be notified whenever a specific file or
directory changes. For example, a "live" sync application of some kind...
> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
> Sent: Thursday, October 07, 2010 10:02 PM
>
> On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey
> wrote:
> >If you're going raidz3, with 7 disks, then you might as well just make
> >mirrors instead,
> From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf
> Of casper@sun.com
>
> >Is there a ZFS equivalent (or alternative) of inotify?
>
> Have you looked at port_associate and ilk?
port_associate looks promising. But google is less than useful on "ilk."
Got any pointers,
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> In addition to this comes another aspect. What if one drive fails and
> you find bad data on another in the same VDEV while resilvering. This
> is quite common these da
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian D
>
> the help to community can provide. We're running the latest version of
> Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x
> 100GB Samsung SSDs for the cache, 50GB S
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Harry Putnam
>
> beep beep beep beep beep beep
>
> I'm kind of having a brain freeze about this:
> So what are the standard tests or cmds to run to collect enough data
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
>
> I must say that this concept of scrub running w/o error when corrupted
> files, detectable to zfs send, apparently exist, is very disturbing.
As previously mentioned, the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> You are implying that the issues resulted from the H/W raid(s) and I
> don't think that this is appropriate.
Please quote originals when you reply. If you don't - then it's
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> c3t211378AC0253d0 ONLINE 0 0 0
How many disks are there inside of c3t211378AC0253d0?
How are they configured? Hardware raid 5? A mirror of
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> I now also got what you meant by "good half" but I don't dare to say
> whether or not this is also the case in a raid6 setup.
The same concept applies to raid5 or raid6. When you read the device, you
never know if you're actually reading
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ray Van Dolson
>
> I have a pool with a single SLOG device rated at Y iops.
>
> If I add a second (non-mirrored) SLOG device also rated at Y iops will
> my zpool now theoretically be able to h
I have a Dell R710 which has been flaky for some time. It crashes about
once per week. I have literally replaced every piece of hardware in it, and
reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are using Dell hardware, with what
degree of success, and in wha
> From: Markus Kovero [mailto:markus.kov...@nebula.fi]
> Sent: Wednesday, October 13, 2010 10:43 AM
>
> Hi, we've been running opensolaris on Dell R710 with mixed results,
> some work better than others and we've been struggling with same issue
> as you are with latest servers.
> I suspect somekin
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Steve Radich, BitShop, Inc.
>
> Do you have dedup on? Removing large files, zfs destroy a snapshot, or
> a zvol and you'll see hangs like you are describing.
Thank you, but no.
I'm running so
> From: edmud...@mail.bounceswoosh.org
> [mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
>
> Out of curiosity, did you run into this:
> http://blogs.everycity.co.uk/alasdair/2010/06/broadcom-nics-dropping-
> out-on-solaris-10/
I personally haven't had the broadcom problem. Wh
Dell R710 ... Solaris 10u9 ... With stability problems ...
Notice that I have several CPU's whose current_cstate is higher than the
supported_max_cstate.
Logically, that sounds like a bad thing. But I can't seem to find
documentation that defines the meaning of supported_max_cstates, to verify
th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Dell R710 ... Solaris 10u9 ... With stability problems ...
> Notice that I have several CPU's whose current_cstate is higher than
> the
> suppo
> From: Henrik Johansen [mailto:hen...@scannet.dk]
>
> The 10g models are stable - especially the R905's are real workhorses.
You would generally consider all your machines stable now?
Can you easily pdsh to all those machines?
kstat | grep current_cstate ; kstat | grep supported_max_cstates
I'
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of dirk schelfhout
>
> Wanted to test the zfs diff command and ran into this.
What's zfs diff? I know it's been requested, but AFAIK, not implemented
yet. Is that new feature being developed no
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> If I have 20 disks to build a raidz3 pool, do I create one big raidz
> vdev or do I create multiple raidz3 vdevs? Is there any advantage of
> having multiple raidz3 vdevs in a si
> From: David Magda [mailto:dma...@ee.ryerson.ca]
>
> On Wed, October 13, 2010 21:26, Edward Ned Harvey wrote:
>
> > I highly endorse mirrors for nearly all purposes.
>
> Are you a member of BAARF?
>
> http://www.miracleas.com/BAARF/BAARF2.html
Never hear
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Toby Thain
>
> > I don't want to heat up the discussion about ZFS managed discs vs.
> > HW raids, but if RAID5/6 would be that bad, no one would use it
> > anymore.
>
> It is. And there's no r
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian D
>
> ok... we're making progress. After swapping the LSI HBA for a Dell
> H800 the issue disappeared. Now, I'd rather not use those controllers
> because they don't have a JBOD mode. We
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wilkinson, Alex
>
> can you paste them anyway ?
Note: If you have more than one adapter, I believe you can specify -aALL in
the commands below, instead of -a0
I have 2 disks (slots 4 & 5) th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Derek G Nokes
>
> r...@dnokes.homeip.net:~# zpool create marketData raidz2
> c0t5000C5001A6B9C5Ed0 c0t5000C5001A81E100d0 c0t5000C500268C0576d0
> c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0 c0t5
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Phil Harman
>
> I'm wondering whether your HBA has a write through or write back cache
> enabled? The latter might make things very fast, but could put data at
> risk if not sufficiently non-vo
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> Point taken!
>
> So, what would you suggest, if I wanted to create really big pools? Say
> in the 100 TB range? That would be quite a number of single drives
> then, especially when you want to go with zpool raid-1.
You have a lot of disk
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cassandra Pugh
>
> I would like to know how to replace a failed vdev in a non redundant
> pool?
Non redundant ... Failed ... What do you expect? This seems like a really
simple answer... You
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> > raidzN takes a really long time to resilver (code written inefficiently,
> > it's a known problem.) If you had a huge raidz3, it would literally never
> > finish, because it couldn't resilver as fast as new data appears. A
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> > The vdev only.
Right on.
Furthermore, as shown in the "zpool status," a 7-disk raidz2 is certainly a
reasonable vdev configuration.
> scrub: resilver in progres
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If scrub is operating at a block-level (and I think it is), then how
> can
> checksum failures be mapped to file names? For example, this is a
> l
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one
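To make that arithmetic concrete for the 7-disk raidz2 mentioned above (M = 5 data disks):
  128K / 5 = ~25.6K written to each data disk per 128K block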
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Freddie Cash
>
> If you lose 1 vdev, you lose the pool.
As long as 1 vdev is striped and not mirrored, that's true.
You can only afford to lose a vdev, if your vdev itself is mirrored.
You co
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Habony, Zsolt
>
> If I use a zpool which is one LUN from the SAN, and when
> it becomes full I add a new LUN to it.
> But I cannot guarantee that the LUN will not come from the s
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg41998.html
>
> Slabs don't matter. So the rest of this argument is moot.
Tell it to Erik. He might want to know. Or maybe he knows better than you.
> 2. Each slab is sprea
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote:
>
> >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
> >>
>
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > This is one of the reasons the raidzN resilver code is inefficient.
> > Since you end up waiting for the slowest seek time of any one disk in
> > the vdev, and when that's done, the amount of data you were able to
> > process was at mo
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Marty Scholes
>
> Would it make sense for scrub/resilver to be more aware of operating in
> disk order instead of zfs order?
It would certainly make sense. As mentioned, even if you do the en
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Simon Breden
>
> So are we all agreed then, that a vdev failure will cause pool loss ?
Yes. When I said you could mirror a raidzN vdev, it was based on nothing
more credible than assumption b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> Last I checked, you lose the pool if you lose the slog on zpool
> versions < 19. I don't think there is a trivial way around this.
You should plan for this to be true
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gil Vidals
>
> What would the performance impact be of splitting up a 64 GB SSD into
> four partitions of 16 GB each versus having the entire SSD dedicated to
> each pool?
This is a common que
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Trond Michelsen
>
> Hi.
I think everything you said sounds perfectly right.
As for estimating the time required to "zfs send" ... I don't know how badly
"zfs send" gets hurt by the on-disk or
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> Ignore Edward Ned Harvey's response because he answered the wrong
> question.
Indeed.
Although, now that I go back and actually read the question correctly, I
wonder why n
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> > Just in case this wasn't already clear.
> >
> > After scrub sees read or checksum errors, zpool status -v will list
> > filenames that are affected. At least in my experience.
> > --
> > - Tuomas
>
> That didn't do it for me. I used scru
> From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
>
> Let's crunch some really quick numbers here. Suppose a 6Gbit/sec
> sas/sata bus, with 6 disks in a raid-5. Each disk is 1TB, 1000G, and
> each disk is capable of sustaining 1 Gbit/sec sequential operations.
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like "Don't put more than ___ disks into a single
vdev." At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on ass
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> 4. Guess what happens if you have 2 or 3 failed disks in your raidz3,
> and
> they're trying to resilver at the same time. Does the system ignor
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> Although, I have to say that I do have exactly 3 files that are corrupt
> in each snapshot until I finally deleted them and restored them from
> their original source.
>
> zfs send will abort when trying to send them, while scrub doesn't
>
> From: Darren J Moffat [mailto:darr...@opensolaris.org]
> > It's one of the big selling points, reasons for ZFS to exist. You should
> > always give ZFS JBOD devices to work on, so ZFS is able to scrub both of the
> > redundant sides of the data, and when a checks
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Kyle McDonald
>
> I'm currently considering purchasing 1 or 2 Dell R515's.
>
> With up to 14 drives, and up to 64GB of RAM, it seems like it's well
> suited
> for a low-end ZFS server.
>
> I
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave
>
> I have a 14 drive pool, in a 2x 7 drive raidz2, with l2arc and slog
> devices attached.
> I had a port go bad on one of my controllers (both are sat2-mv8), so I
> need to replace it (I
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> I actually have three Dell R610 boxes running OSol snv134 and since I
> switched from the internal Broadcom NICs to Intel ones, I didn't have
> any issue with them.
I am sti
> From: Stephan Budach [mailto:stephan.bud...@jvm.de]
>
> > What sort of problems did you have with the bcom NICs in your R610?
>
> Well, basically the boxes would hang themselves up, after a week or so.
> And by hanging up, I mean becoming inaccessible by either the network
> via ssh or the loca
> From: Markus Kovero [mailto:markus.kov...@nebula.fi]
>
> Any other feasible alternatives for Dell hardware? Wondering, are these
> issues mostly related to Nehalem-architectural problems, eg. c-states.
> So is there anything good in switching hw vendor? HP anyone?
In googling around etc ... Ma
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> Sun hardware? Then you get all your support from one vendor.
+1
Sun hardware costs more, but it's worth it, if you want to simply assume
your stuff will work. In my case, I'
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian D
>
> I get that multi-core doesn't necessarily mean better performance, but I
> doubt that both the latest AMD CPUs (the Magny-Cours) and the latest
> Intel CPUs (the Beckton) suffer from inc
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mark Sandrock
>
> I'm working with someone who replaced a failed 1TB drive (50%
> utilized),
> on an X4540 running OS build 134, and I think something must be wrong.
>
> Last Tuesday aft
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chad Leigh -- Shire.Net LLC
>
> >> 1) The ZFS box offers a single iSCSI target that exposes all the
> >> zvols as individual disks. When the FreeBSD initiator finds it, it
> >> creates a sepa
Since combining ZFS storage backend, via nfs or iscsi, with ESXi heads, I'm
in love. But for one thing. The interconnect between the head & storage.
1G Ether is so cheap, but not as fast as desired. 10G ether is fast enough,
but it's overkill and why is it so bloody expensive? Why is there
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alexander Skwar
>
> I've got a Solaris 10 10/08 Sparc system and use ZFS pool version 15. I'm
> playing around a bit to make it break.
>
> Now I write some garbage to one of the log mirror dev
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of VO
>
> The server hardware is pretty ghetto with whitebox components such as
> non-ECC RAM (cause of the pool loss). I know the hardware sucks but
> sometimes non-technical people don't underst
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
> |
> | I am a newbie on Solaris.
> | We recently purchased a Sun Sparc M3000 server. It comes with 2 identical
> | hard drives. I want to setup a raid 1. After searching on
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Toby Thain
>
> The corruption will at least be detected by a scrub, even in cases where it
> cannot be repaired.
Not necessarily. Let's suppose you have some bad memory, and no ECC. Your
app
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> SAS Controller
> and all ZFS Disks/Pools are passed-through to Nexenta to have full ZFS-Disk
> control like on real hardware.
This is precisely the thing I'm interested in. How do you do that? On my
ESXi (test) server, I hav
> From: Saxon, Will [mailto:will.sa...@sage.com]
>
> In order to do this, you need to configure passthrough for the device at the
> host level (host -> configuration -> hardware -> advanced settings). This
Awesome. :-)
The only problem is that once a device is configured to pass-thru to the
gues