my desktop to recognize this drive?
> >
> >
> "zpool import" will tell you which pools are available.
>
> "zpool import wd149" will import your pool.
>
> --
> Ian.
>
And to further that point, ideally you'd do a "zpool export wd149"
Here's hoping they can come in with a realistic price for
home enthusiasts... I highly doubt that will ever happen though.
--Tim
ture of COW. Enabling
real-time compression with an 800MHz P3? Kiss any performance, however poor
it was, goodbye.
--Tim
On Sat, Nov 29, 2008 at 12:02 PM, Ray Clark <[EMAIL PROTECTED]> wrote:
> tcook,
>
> You bring up a good point. exponentially slow is very different from
> crashed, though they may have the same net effect. Also that other factors
> like timeouts would come into play.
>
> Regarding services, I am
both drives hanging off a single IDE bus, that can
further hurt performance.
--Tim
At the very least, dedicate a channel to each disk and just disconnect the CD-ROM
drive if you have one in the system, or spend $2 on eBay for a PCI add-on
controller.
--Tim
this, I guess my question is: Can you shut down the
linux box and throw the ram from it into this box and see what kind of
performance you are getting? I believe you'll see far, far better results
with 1.5G in the system.
--Tim
> before finding it! You seem to say it is easy to buy a PCI add-in and have
> it work under Solaris - what card are you thinking of, and where did you
> find it?
>
Every one of the Promise IDE (non-RAID) cards works just fine. Worst case
scenario yo
workload. Claiming it's "only 3MB/sec" and downplaying all the bad design
decisions you've made so far isn't helping the situation at all.
--Tim
On Sat, Nov 29, 2008 at 6:31 PM, Ray Clark <[EMAIL PROTECTED]> wrote:
> Tim,
>
> I don't think we would really disagree if we were in the same room. I
> think in the process of the threaded communication that a few things got
> overlooked, or the wrong thing attribut
.
> --Ray
> --
>
I don't believe either are bundled. Search google for arcstat.pl and
nicstat.pl
--Tim
job in the performance
> arena, stability is
> definitely higher on the list of what's really important to me.
>
> Thanks,
>
> -brian
>
I believe the issue you're running into is the failmode you currently have
set. Take a look at this:
http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs-to-gracefully-deal-with-failures/
--Tim
Nevada
> builds (b91 uptime=132 days now with no problems), I'd not be sleeping
> much at night.. Imagine my embarrassment had I taken the high road and
> spent the $$$ for a Thumper for this purpose..
>
Can't you just run opensolaris? They've got support contracts fo
o have different performance based on
what exactly it is you're testing. Similar is probably accurate for a lot
of things, but not everything.
--Tim
On Wed, Dec 3, 2008 at 3:11 PM, Joseph Zhou <[EMAIL PROTECTED]> wrote:
> Thanks Tim,
> At this moment, I am looking into OpenStorage as NAS (file serving) vs.
> Linux NAS (Samba) vs. Win2008 NAS vs. NetApp (ONTAP, not GX) performance.
>
> I am also interested in block-based
On Wed, Dec 3, 2008 at 3:36 PM, Joseph Zhou <[EMAIL PROTECTED]> wrote:
> Thanks Ian, Tim,
> Ok, let me really hit one topic instead of trying to see in general what
> data are out there...
>
> Let's say OpenSolaris doing Samba vs. Linux doing Samba, in CIFS
> perform
On Wed, Dec 3, 2008 at 3:51 PM, Joseph Zhou <[EMAIL PROTECTED]> wrote:
> Ok, thanks Tim, which SPC are you talking about?
>
> SPC-1 and SPC-2 don't test NAS, those are block perf.
> SPECsfs97 v2/v3 and sfs2008 have no OpenStorage results.
>
> If there are standard st
On Wed, Dec 3, 2008 at 4:15 PM, Joseph Zhou <[EMAIL PROTECTED]> wrote:
> haha, Tim, yes, I see the Open spirit in this reply! ;-)
>
> As I said, I am just exploring data.
>
> The Sun J4000 SPC1 and SPC2 benchmark results were nice, just lacking other
> published resul
n, but allocate a large dataset > physical memory, and
> I've seen very similar stalls. There is 0 IO, but the application is
> blocked on something. I guess I should try to insert to some debug
> code, or use dtruss to see if the application is waiting on a syscall.
>
Are you leav
or address space). Maybe it would run well under 32-bit Linux? I can't
> speak to that as I refuse to run Linux.
>
> -brian
>
"Solaris + ZFS and this is a concern"
Sounds to me like they want to try out solaris + zfs, not "zfs on fuse".
--Tim
.
> --
>
Why can't you just upgrade zfs and change the pool setting? Why would you
need to copy the data?
--Tim
> be available until all the disks have been swapped."
>
> Is this correct?
>
As far as I know it should be as simple as (where newdisk and olddisk are
the actual cXtXdX of your drives):
zpool attach rpool olddisk-s0 newdisk-s0
let it completely resilver
Regardless of the merits of their
patents and prior art, etc., this is not something revolutionarily new. It
may be "revolution" in the sense that it's the first time it's come to open
source software and been given away, but it's hardly "revolutionary" in file
systems as a
e shops that are moving to a model of "all
internal disk with applications running on them". The sun box will just be
"a box at the end of the wire", a-la storage 7000 when it's an
nfs/cifs/iscsi target. Centralized storage is a *good thing*.
--Tim
of
performance... so you're trading one high priced storage for another. Your
snapshot creation and deletion is identical. Your incremental generation
is identical. End-to-end checksums? Yup.
Let's see... they don't have block-level compression, they chose dedup
instead which n
d properly, although if you managed to get
Solaris installed on it already from this system, that's probably a moot
point.
--Tim
et go.
There shouldn't ever be an instance where zfs would report a checksum error
when the drive really didn't return one. If there were, I'd consider that a
serious flaw.
--Tim
systems... so
that is an interesting question.
As for the hanging, and forgive me if he said this as I've not read the OP's
post, but couldn't you simply do a detach before removing the disk, and do a
re-attach every time you wanted to re-mirror? Then there'd be no hanging
involv
>
> I hope my explanation is clear in that obviously the data would have to be
> copied, possibly to the new drive I've added, as I want to remove the old
> one.
>
vdev evacuation has been talked about, but there's still no plans for anyone
fro
may apply to SPARC-based systems as
well since they're also running OBP (like Apple).
Thoughts?
--Tim
ted" file system!!!
>
> --
> Any sufficiently advanced technology is indistinguishable from magic.
>Arthur C. Clarke
>
> My blog: http://initialprogramload.blogspot.com
>
I would expect there to be some sort of clean-up process that either runs
automatically or that c
fmthard -s - /dev/rdsk/c1t3d0s2
Where the cXtXdXs2 relate to your disk IDs. You only do it for s2. After
that you should have no issues. In your case I believe it would be:
prtvtoc /dev/dsk/c1d0s2 | fmthard -s - /dev/rdsk/c2d0s2
--Tim
yte drive?
> Will ZFS try to repair ZFSraid1 with the terabyte drive because it was
> inserted into slot 3? Will I be able to create ZFSraid2 with slots 3-7?
>
Not unless you told zfs that one of the TB drives was a hot spare for
raid1.
Just a thought/question, but don't you have any
On Sat, Dec 27, 2008 at 3:24 PM, Miles Nordin wrote:
> >>>>> "t" == Tim writes:
>
> t> couldn't you simply do a detach before removing the disk, and
> t> do a re-attach everytime you wanted to re-mirror?
>
> no, for two reasons. Fi
like to
> bother the storage guys" or, "We thin provision everything no matter the
> app/fs/os" or .
>
>
Assign your database admin who swears he needs 2TB day one a 2TB LUN. And 6
months from now when he's really only using 200G
oot, or a zpool export/import, but that's been fixed in the latest
versions of opensolaris.
--Tim
On Mon, Dec 29, 2008 at 8:52 PM, Torrey McMahon wrote:
> On 12/29/2008 8:20 PM, Tim wrote:
>
> I run into the same thing but once I say, "I can add more space without
> downtime" they tend to smarten up. Also, ZFS will not reuse blocks in a, for
> lack of better words, e
's part of the iSCSI standard. It was more of a corner case for iSCSI to try
to say "look, I'm as good as Fibre Channel" than anything else (IMO).
Although that opinion may very well be inaccurate :)
--Tim
unlikely, when data is written in
> a redundant way to two different disks, that both disks lose or
> misdirect the same writes.
>
> Maybe ZFS could have an option to enable instant readback of written
> blocks, if one wants to be absolutely sure, data is written correctly to
> disk.
>
it raidz2?
> --
>
Google is your friend ;)
http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel2-c.html
--Tim
sk will look identical to a SCSI disk plugged
directly into the motherboard. That's not entirely accurate, but close
enough for you to get an idea.
--Tim
On Tue, Jan 6, 2009 at 6:19 PM, Sam wrote:
> I was hoping that this was the problem (because just buying more discs is
> the cheapest solution given time=$$) but running it by somebody at work they
> said going over 90% can cause decreased performance but is unlikely to cause
> the strange errors
It's NEVER a good idea to put a default limitation in place to protect a
*regular user*. If they can't RTFM from front cover to back they don't
deserve to use a computer.
--Tim
On Wed, Jan 7, 2009 at 11:45 AM, Tim wrote:
>
>
>
>> >> Decision #2: 1.5TB Seagate vs. 1TB WD (or someone else)
>> The 1.5TB drives have a sketchy reputation as compared to any other
>> Seagate drives. The rumor is that reliability was not high enough for
>
ave the problem. Things like change rate, how the data was laid down in
the first place, etc play a pretty big role. Fragmentation anyone?
90% full completely fragmented is a world of difference from 90% full
optimally laid out.
--Tim
arty apps that work universally with
any storage system.
--Tim
I'm a geek, my VMware farm
needs its NFS mounts on some solid, high-performing gear.
--Tim
IFS and Networking lists to prevent anyone
> else from wasting time writing a reply, as the appropriate place for this
> thread is now confirmed to be zfs-discuss.
>
> -g.
> --
>
That seems really, really low. What are your sustained read speeds?
--Tim
> Thanks.
> --
>
Since you're going to have to power down to install the card anyway, I'd
just do a "touch /reconfigure" when you shut down. On reboot it should do a
reconfigure boot and pick up the card.
--Tim
s that zfs will
not use the drive cache if it doesn't own the whole disk since it won't know
whether or not it should be flushing cache at any given point in time.
It could cause corruption if you had UFS and zfs on the same disk.
--Tim
ou've dedicated a disk to ZFS, you have to turn the
> write cache off yourself somehow using 'format -e' if you are no
> longer using a disk for ZFS only. Or am I remembering wrong?
>
ZFS does turn it off if it doesn't have the whole disk. Th
On Wed, Jan 14, 2009 at 2:40 PM, Mattias Pantzare wrote:
> On Wed, Jan 14, 2009 at 20:03, Tim wrote:
> >
> >
> > On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson
> > wrote:
> >>
> >> Does creating ZFS pools on multiple partitions on the same physical
>
e irrelevant
> > ramblings, need I go on?
>
> The ZFS discussion list has produced its first candidate for the
> rubber room that I mentioned here previously. A reduction in crystal
> meth intake could have a profound effect though.
>
> Bob
Just the product of Engli
bry
>
>
From what I understand:
zpool list shows total capacity of all the drives in the pool. df shows
usable capacity after parity.
I wouldn't really call that retarded; it allows you to see what kind of
space you're chewing up with parity fairly easily.
--Tim
I guess most documentation I've seen officially addresses
them as "raidz" or "raidz2"; there is no "raidz1".
--Tim
You're saying zfs does absolutely no right-sizing? That sounds like a
bad idea all around...
--Tim
ks
> --
> Tom
>
> // www.portfast.co.uk -- internet services and consultancy
> // hosting from 1.65 per domain
>
Those are supposedly the two inodes that are corrupt. The 0x0 is a bit
scary... you should be able to find out what file(s) they're ti
seagate. I can't replace the drive anymore?
*GREAT*.
--Tim
>
> What exactly does "right size" drives mean? They don't use all of the
> disk?
>
> Casper
>
"right-sizing" is when the volume manager short strokes the drive
intentionally because not all vendors 500GB is the same size. Hence the
OP's problem.
Ho
's done in software by HDS, NetApp, and EMC, that's complete
bullshit. Forcing people to spend 3x the money for a "Sun" drive that's
identical to the Seagate OEM version is also bullshit and a piss-poor
answer.
--Tim
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>
>
Take a look at drives on the market, figure out a percentage, and call it a
day. If there's a significant issue with "20TB" drives of the future, issue
a bug report and a fix, just like every other issue that comes up.
--Tim
rocket science and impossible, but the
fact remains the rest of the industry has managed to make it work. I have a
REAL tough time believing that Sun and/or zfs is so deficient it's an
insurmountable obstacle for them.
--Tim
On Sun, Jan 18, 2009 at 1:56 PM, Eric D. Mudama
wrote:
> On Sun, Jan 18 at 13:43, Tim wrote:
>
>> You look at the size of the drive and you take a set percentage off...
>> If
>> it's a "LUN" and it's so far off it still can't be added with t
On Sun, Jan 18, 2009 at 1:57 PM, Louis Hoefler
wrote:
> But what is the recommended way to share a directory?
> --
>
I don't know that there currently is a good way to just share a directory
with the built-in cifs server. I'd imagine your best bet would be to
Because the 1TB drive I can buy from Sun today is in no way, shape, or form
able to store 1TB of data. You use the same *fuzzy math* the rest of the
industry does.
--Tim
have to disable smb/server.
According to Mark below, you should be able to share just a folder
leveraging sharemgr though, so maybe look into that first.
Here's a start:
http://blogs.sun.com/dougm/entry/sharemgr_and_zfs
--Tim
On Sun, Jan 18, 2009 at 3:39 PM, Richard Elling wrote:
> Tim wrote:
> It is naive to think that different storage array vendors
> would care about people trying to use another array vendors
> disks in their arrays. In fact, you should get a flat,
impersonal, "not supported"
On Sun, Jan 18, 2009 at 4:36 PM, Timothy Renner wrote:
> A few questions on data replication:
> Assuming I've created a pool named zfspool containing two unmirrored
> disks and I create:
>
> zfs create zfspool/test2
> zfs set copies=2 zfspool/test2
>
> Will data copied in there be guaranteed to be
nd_data_protection
Honestly, I believe this list... when other people have asked if they can
use the copies= to avoid mirroring everything. I can't say I've saved any
of the threads because they didn't seem of any particular importance to me
at the time.
Perhaps if I get motivat
> Thanks.
>
> Adam
>
So because an enterprise vendor requires you to use their drives in their
array, suddenly zfs can't right-size? Vendor requirements have absolutely
nothing to do with their right-sizing, and everything to do with them
wanting your money.
Are you telling me zfs i
we'll do that."
Remember that one time when I talked about limiting snapshots to protect a
user from themselves, and you joined into the fray of people calling me a
troll? Can you feel the irony oozing out between your lips, or are you
completely oblivious to it?
--Tim
't screwed. It's a design choice to
be both sane, and to make the end-users life easier. You know, sort of like
you not letting people choose their raid layout...
--Tim
why Sun currently sells arrays that do JUST THAT.
I'd wager fishworks does just that as well. Why don't you open source that
code and prove me wrong ;)
I'm wondering why they don't come right out with it and say "we want to
intentionally make this painful to our end us
r is "right-size by default, give admins the option to skip
it if they really want". Sort of like I'd argue the right answer on the
7000 is to give users the raid options you do today by default, and allow
them to lay it out themselves from some sort of advanced *at your own risk*
On Mon, Jan 19, 2009 at 5:39 PM, Adam Leventhal wrote:
> > And again, I say take a look at the market today, figure out a
> percentage,
> > and call it done. I don't think you'll find a lot of users crying foul
> over
> > losing 1% of their drive space when they don't already cry foul over the
>
On Tue, Jan 20, 2009 at 2:26 PM, Moore, Joe wrote:
>
> Other storage vendors have specific compatibility requirements for the
> disks you are "allowed" to install in their chassis.
>
And again, the reason for those requirements is 99% about making money, not
a technical one. If you go back far
going to hurt anything doing so. If you're really paranoid, issue a
snapshot before you pull the drives and power down the system.
--Tim
On Tue, Jan 27, 2009 at 9:28 PM, Jorgen Lundman wrote:
>
> Thanks for your reply,
>
> While the savecore is working its way up the chain to (hopefully) Sun,
> the vendor asked us not to use it, so we moved x4500-02 to use x4500-04
> and x4500-05. But perhaps moving to Sol 10 10/08 on x4500-02 whe
the various
projects full of people who don't subscribe or really care about the
advocacy list but are directly affected by their decisions. It's far easier
for someone to say "here, look at this discussion" than to say "I promise,
there's a bunch of pe
ers.
>
What type of spindles were in the FC attached disk?
--Tim
On Sun, Feb 1, 2009 at 10:30 AM, Frank Cusack wrote:
> >> nevermind, i will just get a Promise array.
> >
> > Don't. I don't normally like to badmouth vendors, but my experience
> > with Promise was one of the worst in my career, for reasons that should
> > be relevant other ZFS-oriented custome
elling drive trays is crap. You already
claimed in the other thread that Sun has contracts for custom disks so they
don't have to worry about short stroking/right-sizing so it should be pretty
frigging obvious to support if the disks in the JBOD are Sun or "some random
crap purchased f
4652
I wouldn't think grabbing 8GB memory would be a big deal after dropping that
much on the controller??
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134652
--Tim
is a *WORSE THING*. I don't
think zfs should brag about anything if my pool can be down for hours or
days because I'm not given the option to roll back to a consistent state
when I *KNOW* it's what I want to do.
Of course, making that easy wouldn't sell support contracts, wou
ly mentioning ZFS in the context of Snow
> Leopard *Server*, so that's probably enterprise-type disks again.
>
> Cheers,
>
> Chris
>
You apparently have not used Apple's disk. It's nothing remotely resembling
"enterprise-type" disk.
--Tim
d go, and I've never once lost data, or had
it become unrecoverable or even corrupted.
And yes, I do keep checksums of all the data sitting on them and
periodically check it. So, for all of your ranting and raving, the fact
remains even a *crappy* filesystem like fat32 manages to handle a hot unplug
On Wed, Feb 11, 2009 at 10:33 AM, Steven Sim wrote:
> Tim;
>
> The proper procedure for ejecting a USB drive in Windows is to right click
> the device icon and eject the appropriate listed device.
>
I'm well aware of what the proper procedure is. My point is, I've
thing to write to.
I don't know what exactly it is you put on your USB drives, but I'm
certainly aware of whether or not things on mine are in use before pulling
the drive out. If a picture is open and in an editor, I'm obviously not
going to save it then pull the drive mid-sa
, if it means I have to restore hundreds of terabytes if
not petabytes from tape instead of just restoring the files that were
corrupted or running an fsck, we've got issues.
--Tim
R (if you don't mind that it only has 4
> internal ports). It's available on eBay for $80 right now.
>
> Will
There's several people on this list who have already stated it does just
that. As have Supermicro support. I'd write off the hardocp post (which
you apparen
if he'll enlighten us as to what sort of slot he stuck it in.
Brandon? Did you stick the card into a "supermicro approved UIO slot" or
just a standard PCI-E slot?
--Tim
on,
> and increases the risk of finding faults like this. While they will be
> rare, they should be expected, and ZFS should be designed to handle them.
>
I'd imagine for the exact same reason short-stroking/right-sizing isn't a
concern.
"We don't have this pro
g correctly by issuing a commit and
immediately reading back to see if it was indeed committed or not. Like a
"zfs test cXtX". Of course, then you can't just blame the hardware
every time something in zfs breaks ;)
--Tim
On Thu, Feb 12, 2009 at 1:22 PM, Will Murnane wrote:
> On Thu, Feb 12, 2009 at 19:02, Brandon High wrote:
> > There's a post there from a guy using two of the AOC-USAS-L8i in his
> > system here:
> > http://hardforum.com/showthread.php?p=1033321345
> Read again---he's using the AOC-SAT2-MV8, whic
olid. The newer x64 has been leaving a bad
taste in my mouth TBQH. The engineering behind the systems when I open them
up is absolutely phenomenal. The failure rate, however, is downright scary.
--Tim
e weighed down didn't think of. I don't think it hurts in the least
to throw out some ideas. If they aren't valid, it's not hard to ignore them
and move on. It surely isn't a waste of anyone's time to spend 5 minutes
reading a response and weighing if the idea is vali
redundancy at ZFS level
> and the answer is yes.
>
> Thanks and regards,
> Sanjeev
>
>
Uhhh, an S10 box that provides zfs-backed iSCSI is NOT fine. Cite the plethora
of examples on this list of how the fault management stack takes so long to
respond it's basically unusable as it stands today.
--Tim
be noticeably faster than USB.
>
> So I've stopped buying firewire.
>
> --
>
Odd, my firewire enclosure transfers are north of 50MB/sec, while the same
drive in a USB enclosure is lucky to break 25MB/sec. You sure your local
disk isn't just dog slow?
--Tim
redict the future. Don't be a dick. He's
asking if they can share some of their intentions based on their current
internal roadmap. If you're telling me Sun doesn't have a 1yr/2yr/3yr
roadmap for ZFS I'd say we're all in
ight call bullshit.
I was trying to be nice about it. If you're making stuff up as you go
along that's likely why you're struggling. Modifying plans is one thing.
Not having any is another thing entirely.
--Tim