Russ Price fubegra.net> writes:
>
> I had recently started setting up a homegrown OpenSolaris NAS with
> a large RAIDZ2 pool, and had found its RAIDZ2 performance severely
> lacking - more like downright atrocious. As originally set up:
>
> * Asus M4A785-M motherboard
> * Phenom II X2 550 Black
Russ Price fubegra.net> writes:
>
> > Did you enable AHCI mode on _every_ SATA controller?
> >
> > I have the exact opposite experience with 2 of your 3
> > types of controllers.
>
> It wasn't possible to do so, and that also made me think that a real HBA would work better. First off, with the
Oliver Seidel os1.net> writes:
>
> Hello,
>
> I'm a grown-up and willing to read, but I can't find where to read.
> Please point me to the place that explains how I can diagnose this
> situation: adding a mirror to a disk fills the mirror with an
> apparent rate of 500k per second.
I don't
I have done quite some research over the past few years on the best (i.e.
simple, robust, inexpensive, and performant) SATA/SAS controllers for ZFS.
Especially in terms of throughput analysis (many of them are designed with an
insufficient PCIe link width). I have seen many questions on this list
The LSI SAS1064E slipped through the cracks when I built the list.
This is a 4-port PCIe x8 HBA with very good Solaris (and Linux)
support. I don't remember having seen it mentioned on zfs-discuss@
before, even though many were looking for 4-port controllers. Perhaps
the fact it is priced too clos
Marc Nicholas gmail.com> writes:
>
> Nice write-up, Marc. Aren't the SuperMicro cards their funny "UIO" form
> factor? Wouldn't want someone buying a card that won't work in a standard
> chassis.
Yes, 4 of the 6 Supermicro cards are UIO cards. I added a warning about it.
Thanks.
-mrb
Thomas Burgess gmail.com> writes:
>
> A really great alternative to the UIO cards for those who don't want the
> headache of modifying the brackets or cases is the Intel SASUC8I
>
> This is a rebranded LSI SAS3081E-R
>
> It can be flashed with the LSI IT firmware from the LSI website and
> is p
Deon Cui gmail.com> writes:
>
> So I had a bunch of them lying around. We've bought a 16x SAS hotswap
> case and I've put in an AMD X4 955 BE with an ASUS M4A89GTD Pro as
> the mobo.
>
> In the two 16x PCI-E slots I've put in the 1068E controllers I had
> lying around. Everything is still being
Hi,
Brandon High freaks.com> writes:
>
> I only looked at the Megaraid that he mentioned, which has a PCIe
> 1.0 4x interface, or 1000MB/s.
You mean x8 interface (theoretically plugged into that x4 slot below...)
> The board also has a PCIe 1.0 4x electrical slot, which is 8x
> physical.
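(For reference, a quick back-of-the-envelope: each PCIe 1.0 lane runs at
2.5 GT/s, which after 8b/10b encoding leaves about 250 MB/s of payload per
direction, so an x4 link tops out around 1000 MB/s and an x8 link around
2000 MB/s.)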
On Wed, May 26, 2010 at 6:09 PM, Giovanni Tirloni wrote:
> On Wed, May 26, 2010 at 9:22 PM, Brandon High wrote:
>>
>> I'd wager it's the PCIe x4. That's about 1000MB/s raw bandwidth, about
>> 800MB/s after overhead.
>
> Makes perfect sense. I was calculating the bottlenecks using the
> full-duple
Giovanni Tirloni sysdroid.com> writes:
>
> The chassis has 4 columns of 6 disks. The 18 disks I was testing were
> all on columns #1 #2 #3.
Good, so this confirms my estimations. I know you said the current
~810 MB/s are amply sufficient for your needs. Spreading the 18 drives
across all 4 port
Graham McArdle ccfe.ac.uk> writes:
>
> This thread from Marc Bevand and his blog linked therein might have some useful alternative suggestions.
> http://opensolaris.org/jive/thread.jspa?messageID=480925
> I've bookmarked it because it's quite a handy summary and I h
Richard Connamacher indieimage.com> writes:
>
> I was thinking of custom building a server, which I think I can do for
> around $10,000 of hardware (using 45 SATA drives and a custom enclosure),
> and putting OpenSolaris on it. It's a bit of a risk compared to buying a
> $30,000 server, but would
Richard Connamacher indieimage.com> writes:
>
> Also, one of those drives will need to be the boot drive.
> (Even if it's possible I don't want to boot from the
> data dive, need to keep it focused on video storage.)
But why?
By allocating 11 drives instead of 12 to your data pool, you will re
Frank Middleton apogeect.com> writes:
>
> As noted in another thread, 6GB is way too small. Based on
> actual experience, an upgradable rpool must be more than
> 20GB.
It depends on how minimal your install is.
The OpenSolaris install instructions recommend 8GB minimum, I have
one OpenSolaris 2
Bob Friesenhahn simple.dallas.tx.us> writes:
> [...]
> X25-E's write cache is volatile), the X25-E has been found to offer a
> bit more than 1000 write IOPS.
I think this is incorrect. On paper, the X25-E offers 3300 random write
4kB IOPS (and Intel is known to be very conservative about the
Bob Friesenhahn simple.dallas.tx.us> writes:
>
> The Intel specified random write IOPS are with the cache enabled and
> without cache flushing.
For random write I/O, caching improves I/O latency, not sustained I/O
throughput (which is what random write IOPS usually refer to). So Intel can't
ch
Ross Walker gmail.com> writes:
>
> Scrubbing on a routine basis is good for detecting problems early, but
> it doesn't solve the problem of a double failure during resilver.
Scrubbing doesn't solve double failures, but it significantly decreases their
likelihood. The assumption here is that t
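For anyone who wants to automate routine scrubs, a minimal sketch of a root
crontab entry (the pool name "tank" is just a placeholder):
  # scrub the pool every Sunday at 03:00
  0 3 * * 0 /usr/sbin/zpool scrub tank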
(I am aware I am replying to an old post...)
Arne Jansen gmx.net> writes:
>
> Now the test for the Vertex 2 Pro. This was fun.
> For more explanation please see the thread "Crucial RealSSD C300 and cache
> flush?"
> This time I made sure the device is attached via 3GBit SATA. This is also
> only
Marc Bevand gmail.com> writes:
>
> This discrepancy between tests with random data and zero data is puzzling
> to me. Does this suggest that the SSD does transparent compression between
> its Sandforce SF-1500 controller and the NAND flash chips?
Replying to myself: ye
Richard Jacobsen unixboxen.net> writes:
>
> Hi all,
>
> I'm getting a very strange problem with a recent OpenSolaris b134 install.
>
> System is:
> Supermicro X5DP8-G2 BIOS 1.6a
> 2x Supermicro AOC-SAT2-MV8 1.0b
As Richard pointed out this is a bug in the AOC-SAT2-MV8 firmware 1.0b.
It incorre
It looks like you *think* you are trying to add the new drive, when you are in
fact re-adding the old (failing) one. A new drive should never show up as
ONLINE in a pool with no action on your part, if only because it contains no
partition and no vdev label with the right pool GUID.
If I am r
I noticed some errors in ls(1), acl(5) and the ZFS Admin Guide about ZFS/NFSv4
ACLs:
ls(1): "read_acl (r) Permission to read the ACL of a file." The compact
representation of read_acl is "c", not "r".
ls(1): "-c | -v   The same as -l, and in addition displays the [...]" The
options are in
Vanja gmail.com> writes:
>
> And finally, if this is the case, is it possible to make an array with
> 3 drives, and then add the mirror later?
I assume you are asking if it is possible to create a temporary 3-way raidz,
then transfer your data to it, then convert it to a 4-way raidz ? No it is
Alan peak.org> writes:
>
> I was just thinking of a similar "feature request": one of the things
> I'm doing is hosting vm's. I build a base vm with standard setup in a
> dedicated filesystem, then when I need a new instance "zfs clone" and voila!
> ready to start tweaking for the needs of the n
Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a
well known problem. They are caused by memory contention in the kernel heap.
Check 'kstat vmem::heap'. The usual recommendation is to change the
kernelbase. It worked for me. See:
http://mail.opensolaris.org/pipermai
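For reference, a minimal sketch of checking heap usage and raising kernelbase
on 32-bit x86 (the value shown is only an example, and the change takes effect
at the next reboot):
  # kstat -p vmem::heap:mem_inuse vmem::heap:mem_total
  # eeprom kernelbase=0x80000000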
Borys Saulyak eumetsat.int> writes:
> root@omases11:~[8]#zpool import
> [...]
> pool: private
> id: 3180576189687249855
> state: ONLINE
> action: The pool can be imported using its name or numeric identifier.
> config:
>
> private ONLINE
> c7t60060160CBA21000A6D22553CA91DC11d0 ONLIN
Borys Saulyak eumetsat.int> writes:
>
> > Your pools have no redundancy...
>
> Box is connected to two fabric switches via different HBAs, storage is
> RAID5, MPxIO is ON, and all after that my pools have no redundancy?!?!
As Darren said: no, there is no redundancy that ZFS can use. It is impor
Tim tcsac.net> writes:
>
> That's because the faster SATA drives cost just as much money as
> their SAS counterparts for less performance and none of the
> advantages SAS brings such as dual ports.
SAS drives are far from always being the best choice, because absolute IOPS or
throughput numbers
Erik Trimble Sun.COM> writes:
> Marc Bevand wrote:
> > 7500rpm (SATA) drives clearly provide the best TB/$, throughput/$, IOPS/$.
> > You can't argue against that. To paraphrase what was said earlier in this
> > thread, to get the best IOPS out of $1000, s
Marc Bevand gmail.com> writes:
>
> Well let's look at a concrete example:
> - cheapest 15k SAS drive (73GB): $180 [1]
> - cheapest 7.2k SATA drive (160GB): $40 [2] (not counting a 80GB at $37)
> The SAS drive most likely offers 2x-3x the IOPS/$. Certainly not 180/4
Erik Trimble Sun.COM> writes:
>
> Bottom line here is that when it comes to making statements about SATA
> vs SAS, there are ONLY two statements which are currently absolute:
>
> (1) a SATA drive has better GB/$ than a SAS drive
> (2) a SAS drive has better throughput and IOPs than a SATA driv
About 2 years ago I used to run snv_55b with a raidz on top of 5 500GB SATA
drives. After 10 months I ran out of space and added a mirror of 2 250GB
drives to my pool with "zpool add". No problem. I scrubbed it weekly. I only saw 1
CKSUM error one day (ZFS self-healed itself automatically of course).
Charles Menser gmail.com> writes:
>
> Nearly every time I scrub a pool I get small numbers of checksum
> errors on random drives on either controller.
These are the typical symptoms of bad RAM/CPU/Mobo. Run memtest for 24h+.
-marc
Ross googlemail.com> writes:
> Now this is risky if you don't have backups, but one possible approach might be:
> - Take one of the 1TB drives off your raid-z pool
> - Use your 3 1TB drives, plus two sparse 1TB files and create a 5 drive raid-z2
> - disconnect the sparse files. You now have a 3
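A rough sketch of that recipe, with hypothetical device names and 1TB drives
(the sparse files must live outside the new pool, and this is every bit as
risky as Ross says):
  # mkfile -n 1000g /var/tmp/fake0 /var/tmp/fake1
  # zpool create tank2 raidz2 c1t0d0 c1t1d0 c1t2d0 /var/tmp/fake0 /var/tmp/fake1
  # zpool offline tank2 /var/tmp/fake0 /var/tmp/fake1
  # rm /var/tmp/fake0 /var/tmp/fake1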
Robert Rodriguez comcast.net> writes:
>
> A couple of follow up question, have you done anything similar before?
I have done similar manipulations to experiment with ZFS
(using files instead of drives).
> Can you assess the risk involved here?
If any one of your 8 drives die during the procedu
Carsten Aulbert aei.mpg.de> writes:
>
> Put some stress on the system with bonnie and other tools and try to
> find slow disks
Just run "iostat -Mnx 2" (not zpool iostat) while ls is slow to find the slow
disks. Look at the %b (busy) values.
-marc
Aaron Blew gmail.com> writes:
>
> I've done some basic testing with a X4150 machine using 6 disks in a
> RAID 5 and RAID Z configuration. They perform very similarly, but RAIDZ
> definitely has more system overhead.
Since hardware RAID 5 implementations usually do not checksum data (they only
Carsten Aulbert aei.mpg.de> writes:
>
> In RAID6 you have redundant parity, thus the controller can find out
> if the parity was correct or not. At least I think that to be true
> for Areca controllers :)
Are you sure about that? The latest research I know of [1] says that
although an algorith
Carsten Aulbert aei.mpg.de> writes:
>
> Well, I probably need to wade through the paper (and recall Galois field
> theory) before answering this. We did a few tests in a 16 disk RAID6
> where we wrote data to the RAID, powered the system down, pulled out one
> disk, inserted it into another comput
Mattias Pantzare gmail.com> writes:
>
> He was talking about errors that the disk can't detect (errors
> introduced by other parts of the system, writes to the wrong sector or
> very bad luck). You can simulate that by writing different data to the
> sector,
Well yes you can. Carsten and I are bo
Mattias Pantzare gmail.com> writes:
> On Tue, Dec 30, 2008 at 11:30, Carsten Aulbert wrote:
> > [...]
> > where we wrote data to the RAID, powered the system down, pulled out one
> > disk, inserted it into another computer and changed the sector checksum
> > of a few sectors (using hdparm's utilit
The copy operation will make all the disks start seeking at the same time and
will make your CPU activity jump to a significant percentage to compute the
ZFS checksum and RAIDZ parity. I think you could be overloading your PSU
because of the sudden increase in power consumption...
However if yo
dick hoogendijk nagual.nl> writes:
>
> I live in Holland and it is not easy to find motherboards that (a)
> truly support ECC ram and (b) are (Open)Solaris compatible.
Virtually all motherboards for AMD processors support ECC RAM because the
memory controller is in the CPU and all AMD CPUs supp
dick hoogendijk nagual.nl> writes:
>
> Than why is it that most AMD MoBo's in the shops clearly state that ECC
> Ram is not supported on the MoBo?
To restate what Erik explained: *all* AMD CPUs support ECC RAM, however poorly
written motherboard specs often make the mistake of confusing "non-EC
Bill Moore sun.com> writes:
>
> Moving on, modern high-capacity SATA drives are in the 100-120MB/s
> range. Let's call it 125MB/s for easier math. A 5-port port multiplier
> (PM) has 5 links to the drives, and 1 uplink. SATA-II speed is 3Gb/s,
> which after all the framing overhead, can get yo
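(Rough math, for reference: 3 Gb/s minus 8b/10b encoding overhead leaves about
300 MB/s of payload on the single uplink, while five drives at ~125 MB/s could
source ~625 MB/s, so a fully busy 5-port PM is capped at roughly half its
aggregate platter speed.)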
Marc Bevand gmail.com> writes:
>
> So in conclusion, my SBNSWAG (scientific but not so wild-ass guess)
> is that the max I/O throughput when reading from all the disks on
> 1 of their storage pod is about 1000MB/s.
Correction: the SiI3132 are on x1 (not x2) links, so my
Tim Cook cook.ms> writes:
>
> What's the point of arguing what the back-end can do anyways? This is bulk
> data storage. Their MAX input is ~100MB/sec. The backend can more than
> satisfy that. Who cares at that point whether it can push 500MB/s or
> 5000MB/s? It's not a database processing tran
Neal Pollack Sun.COM> writes:
>
> Pliant Technologies just released two "Lightning" high performance
> enterprise SSDs that threaten to blow away the competition.
One can build an SSD-based storage device that gives you:
o 320GB of storage capacity (2.1x better than their 2.5" model: 150GB)
o 10
Joe S gmail.com> writes:
>
> I'm going to create 3x 2-way mirrors. I guess I don't really *need* the
> raidz at this point. My biggest concern with raidz is getting locked into
> a configuration i can't grow out of. I like the idea of adding more
> 2 way mirrors to a pool.
The raidz2 option will
It occurred to me that there are scenarios where it would be useful to be
able to "zfs send -i A B" where B is a snapshot older than A. I am
trying to design an encrypted disk-based off-site backup solution on top
of ZFS, where budget is the primary constraint, and I wish zfs send/recv
would allow m
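For context, the supported incremental form requires the first snapshot to be
the older one; a minimal sketch with placeholder dataset and host names:
  # zfs send -i tank/fs@old tank/fs@new | ssh backuphost zfs recv -d backuppool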
Matthew Ahrens sun.com> writes:
>
> True, but presumably restoring the snapshots is a rare event.
You are right, this would only happen in case of disaster and total
loss of the backup server.
> I thought that your onsite and offsite pools were the same size? If so then
> you should be able to
Matthew Ahrens sun.com> writes:
>
> So the errors on the raidz2 vdev indeed indicate that at least 3 disks below
> it gave the wrong data for those 2 blocks; we just couldn't tell which 3+
> disks they were.
Something must be seriously wrong with this server. This is the first time I
see an
MC eastlink.ca> writes:
>
> Obviously 7zip is far more CPU-intensive than anything in use with ZFS
> today. But maybe with all these processor cores coming down the road,
> a high-end compression system is just the thing for ZFS to use.
I am not sure you realize the scale of things here. Assumi
Pawel Jakub Dawidek FreeBSD.org> writes:
>
> This is how RAIDZ fills the disks (follow the numbers):
>
> Disk0 Disk1 Disk2 Disk3
>
> D0 D1 D2 P3
> D4 D5 D6 P7
> D8 D9 D10 P11
> D12 D13 D14 P15
> D1
David Runyon sun.com> writes:
>
> I'm trying to get maybe 200 MB/sec over NFS for large movie files (need
(I assume you meant 200 Mb/sec with a lower case "b".)
> large capacity to hold all of them). Are there any rules of thumb on how
> much RAM is needed to handle this (probably RAIDZ for all
I would like to test ZFS boot on my home server, but according to bug
6486493 ZFS boot cannot be used if the disks are attached to a SATA
controller handled by a driver using the new SATA framework (which
is my case: driver si3124). I have never heard of someone having
successfully used ZFS boot w
Michael bigfoot.com> writes:
>
> Excellent.
>
> Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING:
> /pci@2,0/pci1022,7458@8/pci11ab,11ab@1/disk@2,0 (sd13):
> Oct 9 13:36:01 zeta1    Error for Command: read    Error Level: Retryable
>
> Scrubbing now.
This is on
ool.
Of course there are other cases where neither ZFS nor any other checksumming
filesystem is capable of detecting anything (e.g. the sequence of events: data
is corrupted, checksummed, written to disk).
--
Marc Bevand
Robert telia.com> writes:
>
> I simply need to rename/remove one of the erroneous c2d0 entries/disks in
> the pool so that I can use it in full again, since at this time I can't
> reconnect the 10th disk in my raid and if one more disk fails all my
> data would be lost (4 TB is a lot of disk to wa
William Fretts-Saxton sun.com> writes:
>
> Some more information about the system. NOTE: Cpu utilization never
> goes above 10%.
>
> Sun Fire v40z
> 4 x 2.4 GHz proc
> 8 GB memory
> 3 x 146 GB Seagate Drives (10k RPM)
> 1 x 146 GB Fujitsu Drive (10k RPM)
And what version of Solaris or what bui
William Fretts-Saxton sun.com> writes:
>
> I disabled file prefetch and there was no effect.
>
> Here are some performance numbers. Note that, when the application server
> used a ZFS file system to save its data, the transaction took TWICE as long.
> For some reason, though, iostat is showing
Neil Perrin Sun.COM> writes:
>
> The ZIL doesn't do a lot of extra IO. It usually just does one write per
> synchronous request and will batch up multiple writes into the same log
> block if possible.
Ok. I was wrong then. Well, William, I think Marion Hakanson has the
most plausible explanatio
aris 10U4 install on a Thumper is affected by:
http://bugs.opensolaris.org/view_bug.do?bug_id=6587133
Which was discussed here:
http://opensolaris.org/jive/thread.jspa?messageID=189256
http://opensolaris.org/jive/thread.jspa?messageID=163460
Apply T-PATCH 127871-02, or up
To answer Paul's question about how to upgrade to snv_73 (if you
still want to upgrade for another reason): actually I would recommend
you the latest SXDE (Solaris Express Developer Edition 1/08, based
on build 79). Boot from the install disc, and choose the "Upgrade
Install"
I figured the following ZFS 'success story' may interest some readers here.
I was interested to see how much sequential read/write performance it would be
possible to obtain from ZFS running on commodity hardware with modern features
such as PCI-E busses, SATA disks, well-designed SATA controlle
Anton B. Rang acm.org> writes:
>
> Be careful of changing the Max_Payload_Size parameter. It needs to match,
> and be supported, between all PCI-E components which might communicate with
> each other. You can tell what values are supported by reading the Device
> Capabilities Register and checkin
usage even though my stress
tests were all successful (aggregate data rate of 610 MB/s
generated by reading the disks for 24+ hours, 6 million head
seeks performed by each disk, etc).
Thanks for your much appreciated comments.
--
Marc Bevand
Brandon High freaks.com> writes:
> Do you have access to a Sil3726 port multiplier?
Nope. But AFAIK OpenSolaris doesn't support port multipliers yet. Maybe
FreeBSD does.
Keep in mind that three modern drives (334GB/platter) are all it takes to
saturate a SATA 3.0Gbps link.
> It's also easier to
Brandon High freaks.com> writes:
> [...]
> The lack of documentation for supported devices is a general complaint
> of mine with Solaris x86, perhaps better taken to the opensolaris-discuss
> list however.
I replied to all your questions in opensolaris-discuss.
-marc
Mark Shellenbaum Sun.COM> writes:
> # ls -V a
> -rw-r--r--+ 1 root root 0 Mar 19 13:04 a
> owner@:--:--I:allow
> group@:--:--I:allow
> everyone@:--:--I:allow
The ls(1) manpage (as of snv_82
Sachin Palav indiatimes.com> writes:
>
> 3. Currently there is no command that prints the entire configuration of ZFS.
Well there _is_ a command to show all (and only) the dataset properties
that have been manually "zfs set":
$ zfs get -s local all
For the pool properties, zpool has no "-s loca
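The closest thing for pool properties is probably just listing them all and
eyeballing the SOURCE column (pool name is a placeholder):
  $ zpool get all tank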
(Keywords: solaris hang zfs scrub heap space kernelbase marvell 88sx6081)
I am experiencing system hangs on a 32-bit x86 box with 1.5 GB RAM
running Solaris 10 Update 4 (with only patch 125205-07) during ZFS
scrubs of an almost full 3 TB zpool (6 disks on a AOC-SAT2-MV8
controller). I found out th
For the record a parallel install of snv_83 on the same machine allows me to
set kernelbase to 0x8000 with no problem, no init crash. This increased the
kernel heap size to 1912 MB (up from 632 MB with kernelbase=0xd000 in
sol10u4) and the system doesn't hang anymore. The max heap usage I have s
Pascal Vandeputte hotmail.com> writes:
>
> I'm at a loss, I'm thinking about just settling for the 20MB/s write
> speeds with a 3-drive raidz and enjoy life...
As Richard Elling pointed out, the ~10ms per IO operation implies
seeking, or hardware/firmware problems. The mere fact you observed
a l
Rustam code.az> writes:
>
> Didn't help. Keeps crashing.
> The worst thing is that I don't know where's the problem. More ideas on
> how to find problem?
Lots of CKSUM errors like you see is often indicative of bad hardware. Run
memtest for 24-48 hours.
-marc
Tim tcsac.net> writes:
>
> So we're still stuck the same place we were a year ago. No high port
> count pci-E compatible non-raid sata cards. You'd think with all the
> demand SOMEONE would've stepped up to the plate by now. Marvell, cmon ;)
Here is a 6-port SATA PCI-Express x1 controller for
Kyle McDonald Egenera.COM> writes:
> Marc Bevand wrote:
> >
> > Overall, like you I am frustrated by the lack of non-RAID inexpensive
> > native PCI-E SATA controllers.
>
> Why non-raid? Is it cost?
Primarily cost, reliability (less complex hw = less hw that can
Brandon High freaks.com> writes:
>
> I'm going to be putting together a home NAS
> based on OpenSolaris using the following:
> 1 SUPERMICRO CSE-743T-645B Black Chassis
> 1 ASUS M2N-LR AM2 NVIDIA nForce Professional 3600 ATX Server Motherboard
> 1 SUPERMICRO AOC-SAT2-MV8 64-bit PCI
Marc Bevand gmail.com> writes:
>
> What I hate about mobos with no onboard video is that these days it is
> impossible to find cheap fanless video cards. So usually I just go headless.
Didn't finish my sentence: ...fanless and *power-efficient*.
Most cards consume 20+W when id
So you are experiencing slow I/O which is making the deletion of this clone
and the replay of the ZIL take forever. It could be because of random I/O ops,
or one of your disks which is dying (not reporting any errors, but very slow
to execute every single ATA command). You provided the output of
Hernan Freschi hjf.com.ar> writes:
>
> Here's the output. Numbers may be a little off because I'm doing a nightly
> build and compressing a crashdump with bzip2 at the same time.
Thanks. Your disks look healthy. But one question: why is
c5t0/c5t1/c6t0/c6t1 when in another post you referred to
Ben Middleton drn.org> writes:
>
> [...]
> But that simply had the effect of transferring the issue to the new drive:
When you see this behavior, it most likely means it's not your drive
which is failing, but instead it indicates a bad SATA/SAS cable, or
port on the disk controller.
PS: have yo
Buy a 2-port SATA II PCI-E x1 SiI3132 controller ($20). The solaris driver is
very stable.
Or, a solution I would personally prefer, don't use a 7th disk. Partition
each of your 6 disks with a small ~7-GB slice at the beginning and the rest of
the disk for ZFS. Install the OS in one of the sma
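A minimal sketch of that layout, assuming you have already used format(1M) to
create a small slice for the OS and an s7 slice covering the rest of each disk
(device names and the raidz2 layout are just examples):
  # zpool create tank raidz2 c0t0d0s7 c0t1d0s7 c0t2d0s7 c0t3d0s7 c0t4d0s7 c0t5d0s7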
Richard L. Hamilton smart.net> writes:
> But I suspect to some extent you get what you pay for; the throughput on the
> higher-end boards may well be a good bit higher.
Not really. Nowadays, even the cheapest controllers, processors & mobos are
EASILY capable of handling the platter-speed throug
Weird. I have no idea how you could remove that file (beside destroying the
entire filesystem)...
One other thing I noticed:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     8
          raidz1    ONLINE       0     0     8
            c0t7d0  ONLINE
Ben Middleton drn.org> writes:
>
> Today's update:
> - I ran a memtest a few times - no errors.
Just making sure you know about it: memtest should run for at _least_ a couple
hours, and should complete at least 1 pass.
Also, after the scrub completes, any permanent errors you see (so far you on
I remember a similar problem with an AOC-SAT2-MV8 controller in a system of mine:
Solaris rebooted each time the marvell88sx driver tried to detect the disks
attached to it. I don't remember if it happened during installation, or during
the first boot after a successful install. I ended up spending a ni
Erik Trimble Sun.COM> writes:
>
> * Huge RAM drive in a 1U small case (ala Cisco 2500-series routers),
> with SAS or FC attachment.
Almost what you want:
http://www.superssd.com/products/ramsan-400/
128 GB RAM-based device, 3U chassis, FC and Infiniband connectivity.
However as a commenter poi
Marc Bevand gmail.com> writes:
>
> I have recently had to replace this AOC-SAT2-MV8 controller with another one
> (we accidentally broke a SATA connector during a maintenance operation). Its
> firmware version is using a totally different numbering scheme (it's probably
Chris Cosby gmail.com> writes:
>
>
> You're backing up 40TB+ of data, increasing at 20-25% per year.
> That's insane.
Over time, backing up his data will require _fewer_ and fewer disks.
Disk sizes increase by about 40% every year.
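(Quick sanity check on that claim: if the data grows ~25%/year while per-drive
capacity grows ~40%/year, the number of drives needed scales roughly as
(1.25/1.40)^n, i.e. it shrinks by about 10% a year.)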
-marc
Matt Harrison genestate.com> writes:
>
> Aah, excellent, just did an export/import and its now showing the
> expected capacity increase. Thanks for that, I should've at least tried
> a reboot :)
More recent OpenSolaris builds don't even need the export/import anymore when
expanding a raidz thi
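For reference, on builds that have the autoexpand pool property the whole dance
reduces to something like (pool name is a placeholder):
  # zpool set autoexpand=on tank
while older builds need the export/import (or a reboot) to pick up the larger
vdev size.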