rocess?
Thanks - Adam...
e introspection into the zpool thread that is
using cpu but not having much luck finding anything meaningful. Occasionally
the cpu usage for that thread will drop, and when it does performance of the
filesystem increases.
> On Wed, 2011-05-04 at 15:40 -0700, Adam Serediuk wrote:
>> Dedu
> dedup enabled and the DDT no
> longer fits in RAM? That would create a huge performance cliff.
>
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org on behalf of Eric D. Mudama
> Sent: Wed 5/4/2011 12:55 PM
> To: Adam Serediuk
> Cc: zfs-discuss@opens
free for all devices
On May 4, 2011, at 12:28 PM, Michael Schuster wrote:
> On Wed, May 4, 2011 at 21:21, Adam Serediuk wrote:
>> We have an X4540 running Solaris 11 Express snv_151a that has developed an
>> issue where its write performance is absolutely abysmal. Even touching a
Both iostat and zpool iostat show very little to zero load on the devices even
while blocking.
Any suggestions on avenues of approach for troubleshooting?
Thanks,
Adam
ng on the progress of Illumos and
others but for now things are still too uncertain to make the financial
commitment.
- Adam
> In my case, it gives an error that I need at least 11 disks (which I don't)
> but the point is that raidz parity does not seem to be limited to 3. Is this
> not true?
RAID-Z is limited to 3 parity disks. The error message is giving you false hope
and that's a bug. If you had plugged in 11 dis
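For illustration, a minimal sketch of the three supported parity levels,
assuming a build that includes raidz3 (snv_120 or later); the device names are
placeholders:

  # placeholder devices; RAID-Z supports exactly 1, 2, or 3 parity disks
  zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
  zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0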
> defaults to lzjb which is fast; but gzip-9
> can be twice as good. (I've just done some tests on the MacZFS port; see my
> blog for more info)
Here's a good blog comparing some ZFS compression modes in the context of the
Sun Storage 7000:
http://blogs.sun.com/dap/en
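As a hedged example of trying the heavier algorithm on a dataset ("tank/data"
is a placeholder name):

  zfs set compression=gzip-9 tank/data   # better ratio, more CPU
  zfs set compression=lzjb tank/data     # the lightweight default algorithm
  zfs get compression,compressratio tank/data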
Hey Robert,
I've filed a bug to track this issue. We'll try to reproduce the problem and
evaluate the cause. Thanks for bringing this to our attention.
Adam
On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote:
> On 23/06/2010 18:50, Adam Leventhal wrote:
>>> Does it mean
istribute parity.
What is the total width of your raidz1 stripe?
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hey Robert,
How big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
> Hi,
>
>
> zpool create test
Hey Karsten,
Very interesting data. Your test is inherently single-threaded so I'm not
surprised that the benefits aren't more impressive -- the flash modules on the
F20 card are optimized more for concurrent IOPS than single-threaded latency.
Adam
On Mar 30, 2010, at 3:30 AM, Kar
s to
ensure that their deployments are successful, and fixing problems as they come
up.
> The hardware on the other hand is incredible in terms of resilience and
> performance, no doubt. Which makes me think the pretty interface becomes an
> annoyance sometimes. Let's wait for
, etc all make a large difference when dealing with very
large data sets.
On 24-Feb-10, at 2:05 PM, Adam Serediuk wrote:
I manage several systems with near a billion objects (largest is
currently 800M) on each and also discovered slowness over time. This
is on X4540 systems with average file
l gigE line speed even on fully
random workloads. Your mileage may vary but for now I am very happy
with the systems finally (and rightfully so given their performance
potential!)
--
Adam Serediuk
> Hi Any idea why zfs does not dedup files with this format ?
> file /opt/XXX/XXX/data
> VAX COFF executable - version 7926
With dedup enabled, ZFS will identify and remove duplicates regardless of the
data format.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
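A minimal sketch of turning dedup on and checking the result, assuming a pool
named "tank" on bits new enough to support it:

  zfs set dedup=on tank     # dedup works on blocks, independent of file format
  zpool list tank           # the DEDUP column reports the achieved ratio
  zdb -DD tank              # optional: dump dedup table (DDT) statistics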
Hey Bob,
> My own conclusions (supported by Adam Leventhal's excellent paper) are that
>
> - maximum device size should be constrained based on its time to
> resilver.
>
> - devices are growing too large and it is about time to transition to
> the next small
The notion of a hybrid drive is nothing new. As
with any block-based caching, this device has no notion of the semantic
meaning of a given block so there's only so much intelligence it can bring
to bear on the problem.
Adam
--
Adam Leventhal, Fishworks
n with the product
> designers.
Congratulations! This is great news for ZFS. I'll be very interested to
see the results members of the community can get with your device as part
of their pool. COMSTAR iSCSI performance should be dramatically improved
in particular.
Adam
--
Adam Leventhal,
| D15 |
1K per device with an additional 1K for parity.
Adam
On Jan 4, 2010, at 3:17 PM, Brad wrote:
> If an 8K file system block is written on a 9 disk raidz vdev, how is the data
> distributed (written) between all devices in the vdev since a zfs write is
> one continuous IO
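An illustrative sketch of the layout being described (not the original poster's
diagram, which was truncated): the 8K block is split into 1K per data device,
and one device in that stripe holds 1K of parity:

  #   parity device:   P0  P1     (1K of parity)
  #   data device 1:   D0  D1     (1K of data)
  #   ...
  #   data device 8:   D14 D15    (1K of data)
  # total on disk: 8K of data + 1K of parity = 9K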
> in other words care about perf) why would
> you ever use raidz instead of throwing more drives at the problem and doing
> mirroring with identical parity?
You're right that a mirror is a degenerate form of raidz1, for example, but
mirrors allow for specific optimizations. While the redundan
ly wrote an article for ACM Queue that examines recent trends in hard
drives and makes the case for triple-parity RAID. It's at least peripherally
relevant to this conversation:
http://blogs.sun.com/ahl/entry/acm_triple_parity_raid
Adam
--
Adam Leventhal, Fishworks
> Thanks for the response Adam.
>
> Are you talking about ZFS list?
>
> It displays 19.6 as allocated space.
>
> What does ZFS treat as hole and how does it identify?
ZFS will compress blocks of zeros down to nothing and treat them like
sparse files. 19.6 is pretty close t
Hi Giridhar,
The size reported by ls can include things like holes in the file. What space
usage does the zfs(1M) command report for the filesystem?
Adam
On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
> Hi,
>
> Reposting as I have not gotten any response.
>
> Here is the i
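A hedged way to compare the two numbers ("tank/fs" and the path are
placeholders):

  ls -l /tank/fs/file      # apparent size, holes included
  du -h /tank/fs/file      # blocks actually allocated
  zfs list tank/fs         # USED/REFER as accounted by ZFS for the dataset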
The host pool I assume, because
> clone contents are (in this scenario) "just some new data"?
The dedup property applies to all writes so the settings for the pool of origin
don't matter, just those on the destination pool.
Adam
--
Adam Leventhal, Fishworks
t when you have the new bits.
Adam
On Dec 9, 2009, at 3:40 AM, Kjetil Torgrim Homme wrote:
> I'm planning to try out deduplication in the near future, but started
> wondering if I can prepare for it on my servers. one thing which struck
> me was that I should change the checksum alg
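A minimal sketch of the preparation being discussed ("tank/data" is a
placeholder); dedup keys off the block checksum, and property changes only
affect newly written blocks:

  zfs set checksum=sha256 tank/data
  # later, once running bits that support dedup:
  zfs set dedup=on tank/data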
The problem occurs on all
of them.
- Adam
all disks in the JBOD(s), not specific ones. Usually one or two
disks start to time out, which snowballs into all of them when the bus resets. We
have 15 of these systems running, all with the same config using 2-foot
external cables...changing cables doesn't help. We have no
>
> I thought you had just set
>
> set xpv_psm:xen_support_msi = -1
>
> which is different, because that sets the
> xen_support_msi variable
> which lives inside the xpv_psm module.
>
> Setting mptsas:* will have no effect on your system
> if you do not
> have an mptsas card installed. The mpts
> Hi Adam,
> thanks for this info. I've talked with my colleagues
> in Beijing (since
> I'm in Beijing this week) and we'd like you to try
> disabling MSI/MSI-X
> for your mpt instances. In /etc/system, add
>
> set mpt:mpt_enable_msi = 0
>
> then
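For reference, a sketch of how that suggestion is applied: the line below goes
in /etc/system, and the truncated "then" presumably continues with a reboot,
which /etc/system changes require to take effect:

  set mpt:mpt_enable_msi = 0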
would have been with it enabled but I wasn't
about to find out.
Thanks
On 20-Nov-09, at 11:48 AM, Richard Elling wrote:
On Nov 20, 2009, at 11:27 AM, Adam Serediuk wrote:
I have several X4540 Thor systems with one large zpool that
replicate data to a backup host via zfs send/recv.
occurring?
The process is currently:
zfs_send -> mbuffer -> LAN -> mbuffer -> zfs_recv
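A rough sketch of such a pipeline; the snapshot, pool names, port, and buffer
sizes are placeholders, not the actual values used here:

  # on the receiving host
  mbuffer -I 9090 -s 128k -m 1G | zfs recv -F backup/tank
  # on the sending host
  zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O backuphost:9090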
--
Adam
grated
to ON as you can see from the consistent work of Eric Schrock.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
So, while we are working on resolving this issue with Sun, let me approach this
from another perspective: what kind of controller/drive ratio would be the
minimum recommended to support a functional OpenSolaris-based archival
solution? Given the following:
- the vast majority of IO to the s
take a substantial hit in throughput moving from one to the
other.
Tim,
That all really depends on your specific system and workload. As with any
performance-related matter, experimentation is vital for making your final
decision.
Adam
--
Adam Leventhal, Fishworks
The controller connects to two disk shelves (expanders), one per port on the
card. If you look back in the thread, you'll see our zpool config has one vdev
per shelf. All of the disks are Western Digital (model WD1002FBYS-18A6B0) 1TB
7.2K, firmware rev. 03.00C06. Without actually matching up the
The iostat I posted previously was from a system where we had already tuned the
zfs:zfs_vdev_max_pending depth down to 10 (as visible from the max of about 10 in
actv per disk).
I reset this value in /etc/system to 7, rebooted, and started a scrub. iostat
output showed busier disks (%b is higher, which
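For reference, the two usual ways to apply that tunable (7 being the value
chosen above): the first line below goes in /etc/system and takes effect after
a reboot; the second changes the running kernel, assuming live tuning with mdb
is acceptable:

  set zfs:zfs_vdev_max_pending = 7
  echo zfs_vdev_max_pending/W0t7 | mdb -kw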
Here is an example of the pool config we use:
# zpool status
  pool: pool002
 state: ONLINE
 scrub: scrub stopped after 0h1m with 0 errors on Fri Oct 23 23:07:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool002     ONLINE       0     0     0
          raidz2    ONLINE
And therein lies the issue. The excessive load that causes the IO issues is
almost always generated locally from a scrub or a local recursive "ls" used to
warm up the SSD-based zpool cache with metadata. The regular network IO to the
box is minimal and is very read-centric; once we load the box
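A sketch of that kind of warm-up pass (the mount point is a placeholder);
walking the whole tree pulls the metadata into the ARC/L2ARC:

  ls -lR /pool002 > /dev/null 2>&1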
l of the blocks be re-checksummed with a zfs
> send/receive on the receiving side?
As with all property changes, new writes get the new properties. Old data
is not rewritten.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
LSI's sales literature on that card specs "128 devices" which I take with a few
hearty grains of salt. I agree that with all 46 drives pumping out streamed
data, the controller would be overworked BUT the drives will only deliver data
as fast as the OS tells them to. Just because the speedometer
I don't think there was any intention on Sun's part to ignore the
problem...obviously their target market wants a performance-oriented box and
the x4540 delivers that. Each 1068E controller chip supports 8 SAS PHY channels
= 1 channel per drive = no contention for channels. The x4540 is a monste
Just submitted the bug yesterday, on the advice of James, so I don't have a
number you can refer to yet...the "change request" number is 6894775 if that
helps or is directly related to the future bugid.
From what I've seen/read this problem has been around for a while but only rears
its ugly head
Our config is:
OpenSolaris snv_118 x64
1 x LSISAS3801E controller
2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives)
Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise
we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). Each zpool has
one ZFS filesyst
I've filed the bug, but was unable to include the "prtconf -v" output as the
comments field only accepted 15000 chars total. Let me know if there is
anything else I can provide/do to help figure this problem out as it is
essentially preventing us from doing any kind of heavy IO to these pools,
James: We are running Phase 16 on our LSISAS3801E's, and have also tried the
recently released Phase 17 but it didn't help. All firmware NVRAM settings are
default. Basically, when we put the disks behind this controller under load
(e.g. scrubbing, recursive ls on large ZFS filesystem) we get th
Cindy: How can I view the bug report you referenced? Standard methods show me
the bug number is valid (6694909) but no content or notes. We are having
similar messages appear with snv_118 with a busy LSI controller, especially
during scrubbing, and I'd be interested to see what they mentioned in
I too have seen this problem.
I had done a zfs send from my main pool "terra" (6 disk raidz on Seagate 1TB
drives) to a mirror pair of WD Green 1TB drives.
The zfs send was successful, however I noticed the pool was degraded after a while
(~1 week) with one of the mirror disks constantly re-silverin
block-interleaved parity
RAID-5  block-interleaved distributed parity
RAID-6  block-interleaved double distributed parity
raidz1 is most like RAID-5; raidz2 is most like RAID-6. There's no RAID
level that covers more than two parity disks, but raidz3 is most like RAID-6,
but wi
> errors that I should take a look at?
Absolutely not. That is an unrelated issue. This problem is isolated to
RAID-Z.
> And good luck with the fix for build 124. Are we talking days or weeks for the
> fix to be available, do you think? :)
Days or hours.
Adam
--
Adam Le
description of the two issues. This is for interest only and does not contain
additional discussion of symptoms or prescriptive action.
Adam
---8<---
1. In situations where a block read from a RAID-Z vdev fails to checksum
but there were no errors from any of the child vdevs (e.g. hard
driv
Hi Trevor,
We intentionally install the system pool with an old ZFS version and don't
provide the ability to upgrade. We don't need or use (or even expose) any
of the features of the newer versions, so using a newer version would only
create problems rolling back to earlier relea
hope to have
an update to the list either later today or tomorrow.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hi James,
After investigating this problem a bit I'd suggest avoiding deploying RAID-Z
until this issue is resolved. I anticipate having it fixed in build 124.
Apologies for the inconvenience.
Adam
On Aug 28, 2009, at 8:20 PM, James Lever wrote:
On 28/08/2009, at 3:23 AM, Adam Leve
But while it might be satisfying to add another request for it, Matt is
already cranking on it as fast as he can and more requests for it are likely
to have the opposite of the intended effect.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And
what is the status of BP rewrite? Far away? Not started yet? Planning?
BP rewrite is an important component technology, but there's a bunch beyond
that. It's not a high priority right now for us at Sun.
Adam
--
Adam
Hey Gary,
There appears to be a bug in the RAID-Z code that can generate
spurious checksum errors. I'm looking into it now and hope to have it
fixed in build 123 or 124. Apologies for the inconvenience.
Adam
On Aug 25, 2009, at 5:29 AM, Gary Gendel wrote:
I have a 5-500GB disk R
But the real question is whether the "enterprise" drives would have
avoided your problem.
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-26, at 11:38, Troels Nørgaard Nielsen wrote:
Hi Tim Cook.
If I was building my own system again, I would prefer not to go with
consumer har
dz2 vdevs, then you can even do better with copies=3 ;-)
Maybe this is noted somewhere, but I did not realize that "copies"
invoked logic that distributed the copies among vdevs? Can you please
provide some pointers about this?
Thanks,
A.
--
Adam Sherman
CTO, V
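A minimal example of the property in question ("tank/important" is a
placeholder); the extra copies are spread across different vdevs when possible,
on top of any mirror/raidz redundancy:

  zfs set copies=3 tank/important
  zfs get copies tank/important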
I believe you will get .5 TB in this example, no?
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-12, at 16:44, Erik Trimble wrote:
Eric D. Mudama wrote:
On Wed, Aug 12 at 12:11, Erik Trimble wrote:
Anyways, if I have a bunch of different size disks (1.5 TB, 1.0 TB,
500 GB, etc), can I
2ARC. Save your money.
That's our assessment, but it's highly dependent on the specific
characteristics of the MLC NAND itself, the SSD controller, and, of
course, the workload.
Adam
--
Adam Leventhal, Fishworks
oming in, so space won't be an issue. I'd
like to have the CF cards as read-only as possible though.
By sharable, what do you mean exactly?
Thanks a lot for the advice,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
d the new-style ZFS-based "boot
environments"?
Is there going to be a difference for me? I plan to run OSOL, latest.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
Thanks for everyone's input!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
Excellent advice, thanks Ian.
A.
--
Adam Sherman
+1.613.797.6819
On 2009-08-06, at 15:16, Ian Collins wrote:
Adam Sherman wrote:
On 4-Aug-09, at 16:54 , Ian Collins wrote:
Use a CompactFlash card (the board has a slot) for root, 8
drives in raidz2 tank, backup the root regularly
If
e a cheaper one that
takes only 1 CF card?
I just ordered a pair of the Syba units, cheap enough to test out
anyway.
Now to find some reasonably priced 8GB CompactFlash cards…
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
this.
This product looks really interesting:
http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp
But I can't confirm it will show both cards as separate disks…
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
don't think you
can move the bulk - /usr.
See:
http://docs.sun.com/source/820-4893-13/compact_flash.html#50589713_78631
Good link.
So I suppose I can move /var out and that would deal with most (all?)
of the writes.
Good plan!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498
d idea. Of course, my system only has a single x16
PCI-E slot in it. :)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
hot. I've
used them on a few machines, opensolaris and freebsd. I'm a big
fan of compact flash.
What about USB sticks? Is there a difference in practice?
Thanks for the advice,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
500GB-7200-ST9500430SS-602367/?matched_search=ST9500430SS
Which retailer is that?
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
be golden there.
You are suggesting booting from a mirrored pair of CF cards? I'll have
to wait until I see the system to know if I have room, but that's a
good idea.
I've got lots of unused SATA ports.
Thanks,
A.
--
Adam Sherman
CTO, Versature C
$350 CDN for the 500GB model, would have put this
system way over budget.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
On 4-Aug-09, at 16:08 , Bob Friesenhahn wrote:
On Tue, 4 Aug 2009, Adam Sherman wrote:
4. Use a CompactFlash card (the board has a slot) for root, 8
drives in raidz2 tank, backup the root regularly
If booting/running from CompactFlash works, then I like this one.
Backing up root should be
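One hedged way to do that backup, assuming the usual "rpool" root pool name and
a data pool called "tank"; received datasets keep their mountpoint properties,
so -u keeps them from being mounted over the live system:

  zfs create tank/rootbackup
  zfs snapshot -r rpool@backup1
  zfs send -R rpool@backup1 | zfs recv -Fdu tank/rootbackup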
bootloader on the CF
card in order to have root on the raidz2 tank
5.5. Figure out how to have the kernel and bootloader on the CF card
in order to have 4 pairs of mirrored drives in a tank, supposing #2
doesn't work
Comments, suggestions, questions, criticism?
Thanks,
A.
--
Adam Sh
, Joyent Inc.
I believe I have about a TB of data on at least one of Jason's pools
and it seems to still be around. ;)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
My test setup of 8 x 2G virtual disks under VirtualBox on top of Mac
OS X is running nicely! I haven't lost a *single* byte of data.
;)
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
to ignore the sync/flush command. Caching is still enabled
(it wasn't the problem).
Thanks!
As Russell points out in the last post to that thread, it doesn't seem
possible to do this with virtual SATA disks? Odd.
A.
--
Adam Sherman
CTO, Versature Cor
suspicion they all behave similarly dangerously, but actual
data would be useful.
Also, I think it may have already been posted, but I haven't found the
option to disable VirtualBox's disk cache. Anyone have the incantation
handy?
Thanks,
A
--
Adam Sherman
CTO, Versature Corp.
> it is also much slower under other.
> IIRC some builds ago there were some fixes integrated so maybe it is
> different now.
Absolutely. I was talking more or less about optimal timing. I realize that
due to the priorities within ZFS and real-world loads it can take far
longer.
A
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
A.
--
Adam Sherman
+1.613.797.6819
On 2009-07-23, at 14:09, Greg Mason wrote:
I think it is a great idea, assuming the SSD has good write
performance.
This one claims up to 230MB/s read and
ion; raidz2, quadratic; now raidz3 is N-cubed. There's really no way around
it. Fortunately with proper scrubbing, encountering data corruption in one
stripe on three different drives is highly unlikely.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message:
6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/
0
form of 'optional' I/Os purely for the purpose of coalescing writes into
larger chunks.
I hope that's clear; if it's not, stay tuned for the aforementioned blog post.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
make sure that the parts already developed are truly enterprise-grade.
While I don't disagree that the focus for ZFS should be ensuring
enterprise-class reliability and performance, let me assure you that
requirements are driven by the market and not by marketing.
Adam
--
Adam
to me when putting a mess of them into a SAS JBOD with
an expander?
Thanks for everyone's great feedback, this thread has been highly
educating.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
, why is that one cheaper than:
http://www.provantage.com/lsi-logic-lsi00124~7LSIG03W.htm
Just newer?
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
s
for my X4100s:
http://sunsolve.sun.com/handbook_private/validateUser.do?target=Devices/SCSI/SCSI_PCIX_SAS_SATA_HBA
$280 or so, looks like. Might be overkill for me though.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
solutions I should have a look at to get
>=12 SATA disks externally attached to my systems?
Thanks!
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
ly interested in wrt management
uses of SES?
I'm really just exploring. Where can I read about how FMA is going to
help with failures in my setup?
Thanks,
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
Another thought in the same vein, I notice many of these systems
support "SES-2" for management. Does this do anything useful under
Solaris?
Sorry for these questions, I seem to be having a tough time locating
relevant information on the web.
Thanks,
A.
--
Adam Sherman
CTO,
are you using against it?
Thanks for pointing to relevant documentation.
The manual for the Supermicro cases [1, 2] does a pretty good job IMO
explaining the different options. See page D-14 and on in the 826
manual, or page D-11 and on in the 846 manual.
I'll read though that, thanks fo
Thanks for pointing to relevant documentation.
A.
--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113
Hey Lawrence,
Make sure you're running the latest software update. Note that this forum
is not the appropriate place to discuss support issues. Please contact your
official Sun support channel.
Adam
On Thu, Jun 18, 2009 at 12:06:02PM -0700, lawrence ho wrote:
> We have a 7110 on try
10; it has plenty of PCI slots.
Ditto.
> finally, one question - I presume that I need to devote a pair of disks
> to the OS, so I really only get 14 disks for data. Correct?
That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB =
300G
absolutely right. The L2ARC is for accelerating reads only and will
not affect write performance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
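A minimal sketch of what that looks like ("tank" and the device name are
placeholders); a cache (L2ARC) device only ever helps reads:

  zpool add tank cache c3t0d0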
ting our use of SSDs with ZFS as a ZIL device, an L2ARC device,
and eventually as primary storage. We'll first focus on the specific
SSDs we certify for use in our general purpose servers and the Sun
Storage 7000 series, and help influence the industry to move to
standards that we
This is correct, and you can read about it here:
http://blogs.sun.com/ahl/entry/fishworks_launch
Adam
On Fri, Jan 23, 2009 at 05:03:57PM +, Ross Smith wrote:
> That's my understanding too. One (STEC?) drive as a write cache,
> basically a write optimised SSD. And cheaper, l
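And the write-side counterpart, a separate log (slog) device for the ZIL, again
with placeholder names:

  zpool add tank log c3t1d0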
the advantage of being far
more dynamic and of only applying the space tax in situations where it actually
applies.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
id layout...
Yes, I'm not saying it shouldn't be done. I'm asking what the right answer
might be.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl