rocess?
Thanks - Adam...
I have a system running Solaris 10 Update 3 TX with 1 zpool and 5 zones. Everything on it is running fine. I take the drive to my disk duplicator, dupe it bit by bit to another drive, put the newly duped drive in the same machine, and boot it up; everything boots up fine. Then I do a zp
Just to let everyone know what I did to 'fix' the problem: by halting the zones and then exporting the zpool, I was able to duplicate the drive without issue. I just had to import the zpool after booting and then boot the zones. Although my setup uses slices for the zpool (this is not supported by Sun), I retracted that statement in the above edit.
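Roughly, the sequence was along these lines (zone and pool names below are placeholders; adjust for your own setup):

# halt each zone and export the pool before imaging the disk
zoneadm -z zone1 halt          # repeat for each zone
zpool export mypool
# ...duplicate the drive, boot from the duped disk, then:
zpool import mypool
zoneadm -z zone1 boot          # repeat for each zone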
s to
ensure that their deployments are successful, and fixing problems as they come
up.
> The hardware on the other hand is incredible in terms of resilience and
> performance, no doubt. Which makes me think the pretty interface becomes an
> annoyance sometimes. Let's wait for
Hey Karsten,
Very interesting data. Your test is inherently single-threaded so I'm not
surprised that the benefits aren't more impressive -- the flash modules on the
F20 card are optimized more for concurrent IOPS than single-threaded latency.
Adam
On Mar 30, 2010, at 3:30 AM, Kar
Hey Robert,
How big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
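If you want to see how that plays out on disk, a rough way to poke at it (pool and device names here are made up, and zdb output varies by build) is to write files of different sizes and then dump the block pointers:

zpool create test raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
dd if=/dev/urandom of=/test/small.dat bs=8k count=1
dd if=/dev/urandom of=/test/big.dat bs=128k count=100
# the DVA offsets in the block pointers show how each block's
# columns land across the vdev
zdb -ddddd test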
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
> Hi,
>
>
> zpool create test
istribute parity.
What is the total width of your raidz1 stripe?
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hey Robert,
I've filed a bug to track this issue. We'll try to reproduce the problem and
evaluate the cause. Thanks for bringing this to our attention.
Adam
On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote:
> On 23/06/2010 18:50, Adam Leventhal wrote:
>>> Does it mean
aults to lzjb which is fast; but gzip-9
> can be twice as good. (I've just done some tests on the MacZFS port on my
> blog for more info)
Here's a good blog comparing some ZFS compression modes in the context of the
Sun Storage 7000:
http://blogs.sun.com/dap/en
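For a quick apples-to-apples check on your own data, something like this works (pool/dataset names are just examples):

zfs create -o compression=lzjb tank/lzjb-test
zfs create -o compression=gzip-9 tank/gzip9-test
cp -r /some/sample/data /tank/lzjb-test/
cp -r /some/sample/data /tank/gzip9-test/
# compare the ratios (and note how much longer the gzip-9 copy took)
zfs get compressratio tank/lzjb-test tank/gzip9-test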
> In my case, it gives an error that I need at least 11 disks (which I don't)
> but the point is that raidz parity does not seem to be limited to 3. Is this
> not true?
RAID-Z is limited to 3 parity disks. The error message is giving you false hope
and that's a bug. If you had plugged in 11 dis
I too have seen this problem.
I had done a zfs send from my main pool "terra" (a 6-disk raidz on Seagate 1TB drives) to a mirrored pair of WD Green 1TB drives.
The zfs send was successful; however, I noticed the pool was degraded after a while (~1 week) with one of the mirror disks constantly re-silverin
Cindy: How can I view the bug report you referenced? Standard methods show me that the bug number is valid (6694909) but no content or notes. We are seeing similar messages with snv_118 and a busy LSI controller, especially during scrubbing, and I'd be interested to see what they mentioned in
James: We are running Phase 16 on our LSISAS3801E's, and have also tried the
recently released Phase 17 but it didn't help. All firmware NVRAM settings are
default. Basically, when we put the disks behind this controller under load
(e.g. scrubbing, recursive ls on large ZFS filesystem) we get th
I've filed the bug, but was unable to include the "prtconf -v" output as the
comments field only accepted 15000 chars total. Let me know if there is
anything else I can provide/do to help figure this problem out as it is
essentially preventing us from doing any kind of heavy IO to these pools,
Our config is:
OpenSolaris snv_118 x64
1 x LSISAS3801E controller
2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives)
Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise
we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). Each zpool has
one ZFS filesyst
I just submitted the bug yesterday, under the advice of James, so I don't have a number you can refer to... the "change request" number is 6894775 if that helps or is directly related to the future bug ID.
From what I've seen/read, this problem has been around for a while but only rears
its ugly head
I don't think there was any intention on Sun's part to ignore the
problem...obviously their target market wants a performance-oriented box and
the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY channels
= 1 channel per drive = no contention for channels. The x4540 is a monste
LSI's sales literature on that card specs "128 devices" which I take with a few
hearty grains of salt. I agree that with all 46 drives pumping out streamed
data, the controller would be overworked BUT the drives will only deliver data
as fast as the OS tells them to. Just because the speedometer
l of the blocks be re-checksummed with a zfs
> send/receive on the receiving side?
As with all property changes, new writes get the new properties. Old data
is not rewritten.
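If you do want existing data rewritten under the new settings, one option is to send it into a dataset that already has the property set; the receive path writes fresh blocks, so they pick up the destination's properties. A rough sketch (names are placeholders):

zfs set checksum=sha256 tank/archive
zfs snapshot sourcepool/data@migrate
zfs send sourcepool/data@migrate | zfs recv tank/archive/data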
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
And therein lies the issue. The excessive load that causes the IO issues is
almost always generated locally from a scrub or a local recursive "ls" used to
warm up the SSD-based zpool cache with metadata. The regular network IO to the
box is minimal and is very read-centric; once we load the box
Here is an example of the pool config we use:
# zpool status
  pool: pool002
 state: ONLINE
 scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool002     ONLINE       0     0     0
          raidz2    ONLINE
The iostat I posted previously was from a system on which we had already tuned the zfs:zfs_vdev_max_pending depth down to 10 (as visible from the max of about 10 in actv per disk).
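For anyone wanting to try the same tuning, the knob in question can be set two ways (the value shown is just the one from this thread):

In /etc/system (takes effect after a reboot):
  set zfs:zfs_vdev_max_pending = 7
Or on a live system via mdb (use with care):
  echo zfs_vdev_max_pending/W0t7 | mdb -kw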
I reset this value in /etc/system to 7, rebooted, and started a scrub. iostat
output showed busier disks (%b is higher, which
The controller connects to two disk shelves (expanders), one per port on the
card. If you look back in the thread, you'll see our zpool config has one vdev
per shelf. All of the disks are Western Digital (model WD1002FBYS-18A6B0) 1TB
7.2K, firmware rev. 03.00C06. Without actually matching up the
ake a substantial hit in throughput moving from one to the
other.
Tim,
That all really depends on your specific system and workload. As with any performance-related matter, experimentation is vital for making your final decision.
Adam
--
Adam Leventhal, Fishworks
So, while we are working on resolving this issue with Sun, let me approach this
from another perspective: what kind of controller/drive ratio would be the
minimum recommended to support a functional OpenSolaris-based archival
solution? Given the following:
- the vast majority of IO to the s
grated
to ON as you can see from the consistent work of Eric Schrock.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
occurring?
The process is currently:
zfs_send -> mbuffer -> LAN -> mbuffer -> zfs_recv
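Concretely, the two ends look roughly like this (host name, port, buffer sizes, and dataset names are placeholders, and mbuffer options can vary by version):

# on the receiving host
mbuffer -s 128k -m 1G -I 9090 | zfs recv backuppool/fs
# on the sending host
zfs send sourcepool/fs@snap | mbuffer -s 128k -m 1G -O backuphost:9090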
--
Adam
would have been with it enabled but I wasn't
about to find out.
Thanks
On 20-Nov-09, at 11:48 AM, Richard Elling wrote:
On Nov 20, 2009, at 11:27 AM, Adam Serediuk wrote:
I have several X4540 Thor systems with one large zpool that
replicate data to a backup host via zfs send/recv.
> Hi Adam,
> thanks for this info. I've talked with my colleagues
> in Beijing (since
> I'm in Beijing this week) and we'd like you to try
> disabling MSI/MSI-X
> for your mpt instances. In /etc/system, add
>
> set mpt:mpt_enable_msi = 0
>
> then
>
> I thought you had just set
>
> set xpv_psm:xen_support_msi = -1
>
> which is different, because that sets the
> xen_support_msi variable
> which lives inside the xpv_psm module.
>
> Setting mptsas:* will have no effect on your system
> if you do not
> have an mptsas card installed. The mpts
l disks in the JBOD(s), not specific ones. Usually one or two disks start to time out, which snowballs into all of them when the bus resets. We have 15 of these systems running, all with the same config using 2-foot external cables... changing cables doesn't help. We have no
he problem occurs on all
of them.
- Adam
--
t when you have the new bits.
Adam
On Dec 9, 2009, at 3:40 AM, Kjetil Torgrim Homme wrote:
> I'm planning to try out deduplication in the near future, but started
> wondering if I can prepare for it on my servers. one thing which struck
> me was that I should change the checksum alg
> The host pool I assume, because
> clone contents are (in this scenario) "just some new data"?
The dedup property applies to all writes so the settings for the pool of origin
don't matter, just those on the destination pool.
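In practice that just means making sure the property is set on the receiving side before the data lands, e.g. (pool/dataset names are made up):

zfs set dedup=on destpool/backups
zfs send srcpool/fs@snap | zfs recv destpool/backups/fs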
Adam
--
Adam Leventhal, Fishworks
Hi Giridhar,
The size reported by ls can include things like holes in the file. What space
usage does the zfs(1M) command report for the filesystem?
Adam
On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
> Hi,
>
> Reposting as I have not gotten any response.
>
> Here is the i
> Thanks for the response Adam.
>
> Are you talking about ZFS list?
>
> It displays 19.6 as allocated space.
>
> What does ZFS treat as hole and how does it identify?
ZFS will compress blocks of zeros down to nothing and treat them like
sparse files. 19.6 is pretty close t
ly wrote an article for ACM Queue that examines recent trends in hard
drives and makes the case for triple-parity RAID. It's at least peripherally
relevant to this conversation:
http://blogs.sun.com/ahl/entry/acm_triple_parity_raid
Adam
--
Adam Leventhal, Fishworks
in other words care about perf) why would
> you ever use raidz instead of throwing more drives at the problem and doing
> mirroring with identical parity?
You're right that a mirror is a degenerate form of raidz1, for example, but
mirrors allow for specific optimizations. While the redundan
1K per device with an additional 1K for parity.
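Spelling out the arithmetic (my reconstruction, assuming 512-byte sectors):

  8K block = 16 x 512-byte sectors (D0 .. D15)
  16 data sectors spread over 8 data devices = 2 sectors (1K) per device
  parity = 2 x 512-byte sectors (1K) on the remaining device
  total allocation = 8K of data + 1K of parity across all 9 devices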
Adam
On Jan 4, 2010, at 3:17 PM, Brad wrote:
> If a 8K file system block is written on a 9 disk raidz vdev, how is the data
> distributed (writtened) between all devices in the vdev since a zfs write is
> one continuously IO
n with the product
> designers.
Congratulations! This is great news for ZFS. I'll be very interested to
see the results members of the community can get with your device as part
of their pool. COMSTAR iSCSI performance should be dramatically improved
in particular.
Adam
--
Adam Leventhal,
e notion of a hybrid drive is nothing new. As
with any block-based caching, this device has no notion of the semantic
meaning of a given block so there's only so much intelligence it can bring
to bear on the problem.
Adam
--
Adam Leventhal, Fishworks
Hey Bob,
> My own conclusions (supported by Adam Leventhal's excellent paper) are that
>
> - maximum device size should be constrained based on its time to
> resilver.
>
> - devices are growing too large and it is about time to transition to
> the next small
> Hi Any idea why zfs does not dedup files with this format ?
> file /opt/XXX/XXX/data
> VAX COFF executable - version 7926
With dedup enabled, ZFS will identify and remove duplicates regardless of the data format.
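If you want to verify what's actually being deduplicated, the pool-wide numbers are easy to check (pool name is a placeholder):

zpool list tank        # the DEDUP column is the pool-wide dedup ratio
zdb -DD tank           # dumps DDT histograms for a closer look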
Adam
--
Adam Leventhal, Fishworks http://blog
l gigE line speed even on fully random workloads. Your mileage may vary, but for now I am finally very happy with the systems (and rightfully so given their performance potential!)
--
Adam Serediuk
, etc. all make a large difference when dealing with very large data sets.
On 24-Feb-10, at 2:05 PM, Adam Serediuk wrote:
I manage several systems with nearly a billion objects each (the largest is currently 800M) and have also discovered slowness over time. This is on X4540 systems with average file
ng on the progress of Illumos and
others but for now things are still too uncertain to make the financial
commitment.
- Adam
oth iostat and zpool iostat show very little to zero load on the devices even
while blocking.
Any suggestions on avenues of approach for troubleshooting?
Thanks,
Adam
free for all devices
On May 4, 2011, at 12:28 PM, Michael Schuster wrote:
> On Wed, May 4, 2011 at 21:21, Adam Serediuk wrote:
>> We have an X4540 running Solaris 11 Express snv_151a that has developed an
>> issue where its write performance is absolutely abysmal. Even touching a
dedup enabled and the DDT no
> longer fits in RAM? That would create a huge performance cliff.
>
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org on behalf of Eric D. Mudama
> Sent: Wed 5/4/2011 12:55 PM
> To: Adam Serediuk
> Cc: zfs-discuss@opens
e introspection into the zpool thread that is
using cpu but not having much luck finding anything meaningful. Occasionally
the cpu usage for that thread will drop, and when it does performance of the
filesystem increases.
> On Wed, 2011-05-04 at 15:40 -0700, Adam Serediuk wrote:
>> Dedu
2 for writes and not at all for reads.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
doesn't matter much for a root device. Performance is typically a bit
better with SLC -- especially on the write side -- but it's not such a
huge difference.
The reason you'd use a flash SSD for a boot device is power (with
maybe a dash of performance), and either SLC or MLC
> apps point of view)?
You would lose transactions, but the pool would still reflect a
consistent
state.
> So is this idea completely crazy?
On the contrary; it's very clever.
Adam
--
Adam Leventhal, Fishworksh
block in our new line of storage
appliances.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
de path from the
x4500 to the x4540 so that would be required before any upgrade to the
equivalent of the Sun Storage 7210.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
On Nov 11, 2008, at 10:41 AM, Brent Jones wrote:
> Wish I could get my hands on a beta of this GUI...
Take a look at the VMware version that you can run on any machine:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
Adam
--
Adam Leventhal, Fishwo
> Is this software available for people who already have thumpers?
We're considering offering an upgrade path for people with existing
thumpers. Given the feedback we've been hearing, it seems very likely
that we will. No word yet on pricing or availability.
Adam
--
Adam Leventh
ccurately be described as
dual active-passive.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
un Storage 7000 Series uses the same ZFS that's in OpenSolaris
today. A pool created on the appliance could potentially be imported on an
OpenSolaris system; that is, of course, not explicitly supported in the
service contract.
Adam
--
Adam Leventhal, Fishworks
the Fishworks team. Keep an eye on blogs.sun.com/fishworks.
> A little off topic: Do you know when the SSDs used in the Storage 7000 are
> available for the rest of us?
I don't think they will be, but it will be possible to purchase them as
replacement parts.
Adam
--
On Tue, Nov 18, 2008 at 09:09:07AM -0800, Andre Lue wrote:
> Is the web interface on the appliance available for download or will it make
> it to opensolaris sometime in the near future?
It's not, and it's unlikely to make it to OpenSolaris.
Adam
--
Adam Leve
The Intel part does about a fourth as many synchronous write IOPS at
best.
Adam
On Jan 16, 2009, at 5:34 PM, Erik Trimble wrote:
> I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and
> they're
> outrageously priced.
>
> http://www.stec-inc.com/produ
drives that fell off the back of some truck, you may not have
that assurance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
. Other than
that you're micro-optimizing for gains that would hardly be measurable given
the architecture of the Hybrid Storage Pool. Recall that unlike other
products in the same space, we get our IOPS from flash rather than from
a bazillion spindles spinning at 15,000 RPM.
Adam
--
Adam
NetApp, and EMC all allow users to replace their
drives with stuff they've bought at Fry's. Is this still covered by their
service plan or would this only be in an unsupported config?
Thanks.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
u were saying that
HDS, NetApp, and EMC had a different model. Were you merely saying that the
software in those vendors' products operates differently than ZFS?
> Are you telling me zfs is deficient to the point it can't handle basic
> right-sizing like a 15$ sata raid adapter?
H
n the other hand I think that users don't
need to tweak settings that add complexity and little to no value? They seem
very different to me, so I suppose the answer to your question is: no I cannot
feel the irony oozing out between my lips, and yes I'm oblivious to the same.
Adam
id layout...
Yes, I'm not saying it shouldn't be done. I'm asking what the right answer
might be.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
the advantage of being far
more dynamic and of only applying the space tax in situations where it actually
applies.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
This is correct, and you can read about it here:
http://blogs.sun.com/ahl/entry/fishworks_launch
Adam
On Fri, Jan 23, 2009 at 05:03:57PM +, Ross Smith wrote:
> That's my understanding too. One (STEC?) drive as a write cache,
> basically a write optimised SSD. And cheaper, l
ting our use of SSDs with ZFS as a ZIL device, an L2ARC device,
and eventually as primary storage. We'll first focus on the specific
SSDs we certify for use in our general purpose servers and the Sun
Storage 7000 series, and help influence the industry to move to
standards that we
utely right. The L2ARC is for accelerating reads only and will
not affect write performance.
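In zpool terms the split looks like this (device names are examples): a 'cache' vdev feeds the L2ARC and only helps reads, while synchronous write latency is what a separate 'log' (slog) device helps with.

zpool add tank cache c3t0d0    # L2ARC: read caching only
zpool add tank log c3t1d0      # separate intent log: synchronous writes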
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
10; it has plenty of PCI slots.
Ditto.
> finally, one question - I presume that I need to devote a pair of disks
> to the OS, so I really only get 14 disks for data. Correct?
That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB =
300G
Hey Lawrence,
Make sure you're running the latest software update. Note that this forum
is not the appropriate place to discuss support issues. Please contact your
official Sun support channel.
Adam
On Thu, Jun 18, 2009 at 12:06:02PM -0700, lawrence ho wrote:
> We have a 7110 on try
s for pointing to relevant documentation.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
are you using against it?
Thanks for pointing to relevant documentation.
The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options. See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.
I'll read through that, thanks fo
Another thought in the same vein, I notice many of these systems
support "SES-2" for management. Does this do anything useful under
Solaris?
Sorry for these questions, I seem to be having a tough time locating
relevant information on the web.
Thanks,
A.
--
Adam Sherman
CTO,
ly interested in wrt management
uses of SES?
I'm really just exploring. Where can I read about how FMA is going to
help with failures in my setup?
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
utions I should have a look at to get
>=12 SATA disks externally attached to my systems?
Thanks!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
s
for my X4100s:
http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA
$280 or so, looks like. Might be overkill for me though.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
, why is that one cheaper than:
http://www.provantage.com/lsi-logic-lsi00124~7LSIG03W.htm
Just newer?
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
to me when putting a mess of them into a SAS JBOD with
an expander?
Thanks for everyone's great feedback; this thread has been highly educational.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
make sure that the parts already developed are truly enterprise-
grade.
While I don't disagree that the focus for ZFS should be ensuring
enterprise-class reliability and performance, let me assure you that
requirements are driven by the market and not by marketing.
Adam
--
Adam
rm of 'optional' I/Os purely for
the purpose of coalescing writes into larger chunks.
I hope that's clear; if it's not, stay tuned for the aforementioned
blog post.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message:
6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/
0
ion; raidz2, quadratic; now raidz3 is N-cubed. There's really no way around it. Fortunately, with proper scrubbing, encountering data corruption in one stripe on three different drives is highly unlikely.
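A regular scrub is cheap insurance on that front; e.g. (pool name is a placeholder):

zpool scrub tank
zpool status -v tank    # shows scrub progress and any checksum errors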
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
A.
--
Adam Sherman
+1.613.797.6819
On 2009-07-23, at 14:09, Greg Mason wrote:
I think it is a great idea, assuming the SSD has good write
performance.
This one claims up to 230MB/s read and
> it is also much slower under other.
> IIRC some builds ago there were some fixes integrated so maybe it is
> different now.
Absolutely. I was talking more or less about optimal timing. I realize that due to the priorities within ZFS and real-world loads it can take far longer.
A
suspicion they all behave similarly dangerously, but actual
data would be useful.
Also, I think it may have already been posted, but I haven't found the
option to disable VirtualBox' disk cache. Anyone have the incantation
handy?
Thanks,
A
--
Adam Sherman
CTO, Versature Corp.
to ignore the sync/flush command. Caching is still
enabled
(it wasn't the problem).
Thanks!
As Russell points out in the last post to that thread, it doesn't seem possible to do this with virtual SATA disks. Odd.
A.
--
Adam Sherman
CTO, Versature Cor
My test setup of 8 x 2G virtual disks under VirtualBox on top of Mac
OS X is running nicely! I haven't lost a *single* byte of data.
;)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
, Joyent Inc.
I believe I have about a TB of data on at least one of Jason's pools
and it seems to still be around. ;)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
oader on the CF
card in order to have root on the raidz2 tank
5.5. Figure out how to have the kernel and bootloader on the CF card
in order to have 4 pairs of mirrored drives in a tank, supposing #2
doesn't work
Comments, suggestions, questions, criticism?
Thanks,
A.
--
Adam Sh
On 4-Aug-09, at 16:08 , Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Adam Sherman wrote:
4. Use a CompactFlash card (the board has a slot) for root, 8
drives in raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing up root should be
$350 CDN for the 500GB model, would have put this
system way over budget.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
be golden there.
You are suggesting booting from a mirrored pair of CF cards? I'll have
to wait until I see the system to know if I have room, but that's a
good idea.
I've got lots of unused SATA ports.
Thanks,
A.
--
Adam Sherman
CTO, Versature C
500GB-7200-ST9500430SS-602367/?matched_search=ST9500430SS
Which retailer is that?
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
hot. I've used them on a few machines, OpenSolaris and FreeBSD. I'm a big fan of CompactFlash.
What about USB sticks? Is there a difference in practice?
Thanks for the advice,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
d idea. Of course, my system only has a single x16
PCI-E slot in it. :)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113