Here my existing zpool is called "tank" and the new disk is c4t0d0; would the
command be something like:
zpool create newtank raidz tank c4t0d0?
Many thanks,
Chris
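For reference, the two usual ways of doing this look roughly as follows. A pool name cannot be used as a vdev, so the "zpool create newtank raidz tank c4t0d0" form won't work; the existing disk name in the second example is made up:

# grow the existing pool by adding c4t0d0 as a new top-level vdev
# (zpool will warn and want -f if tank is raidz/mirror, since the new vdev has no redundancy)
zpool add tank c4t0d0

# or turn an existing single disk (hypothetical c3t0d0) into a mirror with the new one
zpool attach tank c3t0d0 c4t0d0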
I have old pool skeletons with vdevs that no longer exist. Can't import them,
can't destroy them, can't even rename them to something obvious like junk1.
What do I do to clean up?
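In case it's useful, the two workarounds I know of look roughly like this; the device name is made up, and zpool labelclear may not exist on older builds:

# if the skeleton comes from leftover ZFS labels on a disk that has been reused
zpool labelclear -f /dev/dsk/c1t2d0s0

# if it only lingers in the cache file, move the cache aside and re-import the pools
# that really exist
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.stale
zpool import -a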
Is it best to manage the pool in the global zone and make use of the -R
option with "relative zone mountpoints" set?
zfs set mountpoint=/path/in/zone poolname
zpool import -R /path/to/zone/roo poolname
Feedback is appreciated,
Chris
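For what it's worth, the two approaches in sketch form; the zone name, dataset and paths below are all made up. Either import the pool under the zone's root as you describe, or keep the pool in the global zone and delegate a dataset so the zone manages its own mountpoints:

# approach 1: global zone owns the pool, mountpoints land under the zone root
zpool import -R /zones/web01/root tank
zfs set mountpoint=/export/data tank/data

# approach 2: delegate a dataset to the zone
zonecfg -z web01
zonecfg:web01> add dataset
zonecfg:web01:dataset> set name=tank/data
zonecfg:web01:dataset> end
zonecfg:web01> commit
zonecfg:web01> exit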
Hi guys,
I recently was adding and removing some devices on a ZFS mirror, and now
the format command seems to be a bit confused (or is being given
erroneous information).
This happened under
Solaris Express Community Edition snv_81 X86
I have 3 disks in a pool
Hello Everyone,
I have recently jumped onto the OpenSolaris bandwagon coming from FreeBSD,
mainly because FreeBSD's ZFS stability is pretty bad. So a few weeks ago I
rebuilt my BSD NAS to OpenSolaris using ZFS and CIFS. Everything has been
working fine and I'm loving OpenSolaris. I haven't ha
zfs to reconstruct
the data on the new drive), and when you have replaced the final drive the
zpool will "magically" increase in size. i.e replace the 500GB drives, 1 by 1,
with 750GB drives, and when you finish the zpool effective storage will jump to
2250GB.
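Roughly, with made-up device names:

# replace each 500GB drive in turn with a 750GB drive, and let the resilver
# finish before touching the next one
zpool replace tank c0t1d0 c0t5d0
zpool status tank                  # wait until the resilver completes

# builds that have the autoexpand property only grow the pool once it is set;
# older builds pick up the new size after an export/import
zpool set autoexpand=on tank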
==
tha
Did you ever find a solution to this? I have a similar problem, where my ZFS
snapshot was sent through gzip out to a file. I tried to gzcat the file out to
a new ZFS, but got the "cannot receive new filesystem stream: invalid backup
stream" error. At that point I just ran gunzip on the file a
>That would be nice. Before developers worry about such exotic
>features, I would rather that they attend to the gross performance
>issues so that zfs performs at least as well as Windows NTFS or Linux
>XFS in all common cases.
To each their own.
A FS that calculates and writes parity onto dis
Ok, so the choice for a MB boils down to:
- Intel desktop MB, no ECC support
- Intel server MB, ECC support, expensive (requires a Xeon for speedstep
support). It is a shame to waste top kit doing nothing 24/7.
- AMD K8: ECC support (right?), no Cool'n'quiet support (but maybe still cool
enough w
Thanks for your reply.
What if I wrap the ram in a sheet of lead?;-)
(hopefully the lead itself won't be radioactive)
I found these 4 AM3 motherboards with "optional" ECC memory support. I don't
know whether this means ECC works, or that ECC memory can be used but ECC will not be active.
Do you?
Asus M4N7
Good news; the manual for the M4N78-VM mentions ECC and gives the following
BIOS options: disabled/basic/good/super/maxi/user.
Unsure what these mean but that's a start.
Found this:
ECC Mode [Disabled]
Disables or sets the DRAM ECC mode that allows the hardware to report and
correct memory errors. Set this item to [Basic] [Good] or [Max] to allow ECC
mode
auto-adjustment. Set this item to [Super] to adjust the DRAM BG Scrub sub-item
manually. You may also adjust
Thanks for this, good news!
Yes, I would try to use onboard video.
> Please note that frequency scaling is only supported
> on the K10 architecture. But don't expect too much
> power saving from it. A lower voltage yields far
> greater savings than a lower frequency.
Doesn't Cool'n'quiet step th
The Asus M4N78-VM uses a Nvidia GeForce 8200 Chipset (This board only has 1
PCIe-16 slot though, I should look at those that have 2 slots).
Oh, and another unrelated question:
Would I be better off using OpenSolaris or Solaris Community Edition?
I suspect SCE has more drivers (though maybe in a more beta state?), but its
huge download size (several days in backward New Zealand, thanks Telecom NZ!)
means I would only try if there is
Cheers Miles, and thanks also for the tip to look in the BIOS options to see if
ECC is actually used.
Which mode would you use? Max seems the most appealing; why would anyone use
something called basic? But there must be a catch if they provided several ECC
support modes.
I am glad this thread
More choice is good!
It seems Intel's server boards sometimes accept desktop CPUs, but don't support
SpeedStep. Is all OK with those?
>Note that the 'ecccheck.pl' script depends on the 'pcitweak' utility
>which is no longer present in OpenSolaris 2009.06 and Ubuntu 8.10
>because of Xorg changes.
This is exactly the kind of hidden trap I fear. One does everything right and
then discovers that xx is missing or has been changed
Ok, I am ready to try.
Two last questions before I go for it:
- which version of (Open)Solaris for ECC support (which seems to have been
dropped from 2009.06) and general as-few-headaches-as-possible installation?
- do you think this issue with the AMD Athlon II X2 250
http://www.anandtech.com/cpu
How do I get this in OpenSolaris 2009.06?
http://www.alobbs.com/albums/albun26/ZFS_acl_dialog1.jpg
thanks.
I too am having the same issues. I started out using Solaris 10 8/07 release.
I could create all the filesystems, 47,000 filesystems, but if you needed to
reboot, patch, or shut down... Very bad. So then I read about sharemgr and how
it was supposed to mitigate these issues. Well, after runnin
Ok... So I was wrong. I was informed I had this backwards. It seems that this
NFS4.1 mirror mounts thing is really only nice for getting rid of a lot of
automount maps. You still have to share each filesystem :-( I hate it when I
think there is hope just to have it taken away. Sigh...
T
I did a little bit more digging and found some interesting things. NFS4 Mirror
mounts. This would seem to be the most logical option. In this scenario the
client would connect to a single mount /tank/users but would be able to move
through the individual user file systems underneath that moun
I too am having a similar issue. It seems to increase as I add more
filesystems. When I had fewer than ten it was 0.3 secs per filesystem. Now,
at filesystem 1,040, it is:
real    5.3
user    4.4
sys     0.5
This is much slower than in Solaris 10 08/07. Why is it so slow? I nee
same
time we built the server) and plan to replace that tonight. Does that seem like
the correct course of action? Are there any steps I can take beforehand to zero
in on the problem? Any words of encouragement or wisdom?
Regards,
Chris Dunbar
Eart
Assuming no snapshots, do full backups (i.e. tar or cpio) eliminate the need
for a scrub?
Thanks,
Chris
reatest place to look when utilizing zfs.
Thanks,
Chris
> Basically, it boils down to this: upgrade your pools ONLY when you are sure
> the new BE is stable and working for you, and you have no desire to revert to
> the old pool. I run a 'zpool upgrade' right after I do a 'beadm destroy
> '
I'd also add that for disas
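The order the quoted advice describes, in sketch form (the BE name is made up):

# only once the new boot environment has proven itself and the old one is gone
beadm destroy oldBE
zpool upgrade rpool     # after this, an older BE's ZFS code can no longer read the pool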
OK I have a very large zfs snapshot I want to destroy. When I do this, the
system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with
128GB of memory. Now this may be more of a function of the IO device, but let's
say I don't care that this zfs destroy finishes quickly. I actual
ZFS blocks for this VM would be " CC", "CCAA", "AABB" etc. So, no overlap
between virtual machines, and no benefit from dedup.
I may have it wrong, and there are indeed 30,785,627 unique blocks in my setup,
but if there's a mechanism for checking align
Please excuse my pitiful example. :-)
I meant to say "*less* overlap between virtual machines", as clearly
block "AABB" occurs in both.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Murray
>
> I'll say it again: neither 'zfs send' nor (s)tar is an
> enterprise (or
> even home) backup system on their own; one or both can
> be components of
> the full solution.
>
Up to a point. zfs send | zfs receive does make a very good backup scheme for
the home user with a moderate amount of s
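For the home-user case the basic loop looks something like this; pool and snapshot names are made up, and the receive flags assume a reasonably recent build:

# first run: full replication of the whole hierarchy to a second pool
zfs snapshot -r tank@backup-1
zfs send -R tank@backup-1 | zfs receive -Fdu backup

# later runs only ship the delta between the last two snapshots
zfs snapshot -r tank@backup-2
zfs send -R -i tank@backup-1 tank@backup-2 | zfs receive -Fdu backup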
the process in the following link:
http://www.tuxyturvy.com/blog/index.php?/archives/59-Aligning-Windows-Partitions-Without-Losing-Data.html
With any luck I'll then see a smaller dedup table, and better performance!
Thanks to those who gave feedback,
Chris
ll be doing the same thing. I think
the 6 x 2-way mirror configuration gives me the best mix of performance and
fault tolerance.
Regards,
Chris Dunbar
On Mar 19, 2010, at 5:44 PM, Erik Trimble wrote:
> Chris Dunbar - Earthside, LLC wrote:
> > Hello,
> >
> > After being imme
e a snapshot of tank/nfs, does it include the data in foo1 and foo2
or are they excluded since they are separate ZFS file systems?
Thanks for your help.
Regards,
Chris Dunbar
Brandon,
Thank you for the explanation. It looks like I will have to share out each file
system. I was trying to keep the number of shares manageable, but it sounds
like that won't work.
Regards,
Chris
On Mar 24, 2010, at 9:36 PM, Brandon High wrote:
> 2010/3/24 Chris Dunbar
> I
se copy" for SDN members; the
$1015 you quote is for the standard Sun Software service plan. Is a service
plan now *required*, a la Solaris 10?
Cheers,
Chris
On 31 Mar 2010, at 17:50, Bob Friesenhahn wrote:
> On Wed, 31 Mar 2010, Chris Ridd wrote:
>
>>> Yesterday I noticed that the Sun Studio 12 compiler (used to build
>>> OpenSolaris) now costs a minimum of $1,015/year. The "Premium" service
>>> plan
can recreate the pool,
> but it's going to take me several days to get all the data back. Is there any
> known workaround?
Charles,
Can you 'zpool export' and 'zpool import' the pool, and then
try destroying the snapshot again?
-Chris
this case, I realize
that Jason also needs to maximize the space he has in order to store all of
those legitimately copied Blu-Ray movies. ;-)
Regards,
Chris
On Apr 7, 2010, at 3:09 PM, Jason S wrote:
> Thank you for the replies guys!
>
> I was actually already planning to get another
fixes in build 132 related to destroying
snapshots while sending replication streams. I'm unable to reproduce
the 'zfs holds -r' issue on build 133. I'll try build 134, but I'm
not aware of any changes in that area.
-Chris
> One of my pools (backup pool) has a disk which I
> suspect may be going south. I have a replacement disk
> of the same size. The original pool was using one of
> the partitions towards the end of the disk. I want to
> move the partition to the beginning of the disk on
> the new disk.
>
> Does ZF
ut I don't think Mac OS comes with that!
>
> Use Wireshark (formerly Ethereal); works great for me. It does require X11
> on your machine.
Macs come with the command-line tcpdump tool. Wireshark (recommended anyway!)
can read files saved by tcpdump and snoop.
Cheers,
Chris
SAS: full duplex
SATA: half duplex
SAS: dual port
SATA: single port (some enterprise SATA has dual port)
SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write and 1 read
SATA: 1 active channel - 1 read or 1 write
SAS: Full error detection and recovery on both read and write
SATA: err
If I create a file in a file system, then snapshot the file system, then
delete the file:
is it guaranteed that while the snapshot exists no new file will be created
with the same inode number as the deleted file?
--chris
If they are able to be reused, then when an inode number matches I would also
have to compare the real creation time, which requires looking at the extended
attributes.
--chris
Just to close this. It turns out you can't get the crtime over NFS so without
access to the NFS server there is only limited checking that can be done.
I filed
CR 6956379 Unable to open extended attributes or get the crtime of files in
snapshots over NFS.
--chris
called on the new SATA controller)
3. Run zpool import against "preserve", copy over data that should be migrated.
4. Rebuild the mirror by destroying the "preserve" pool and attaching c7d0s0 to
the rpool mirror.
Am I missing anything?
--
Chris
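For the record, steps 3 and 4 in command form as I understand them; the disk that stays in rpool is a stand-in name:

# import the preserved half under an alternate root so its mountpoints don't
# collide with the running system
zpool import -R /mnt preserve

# after copying the data off, destroy it and put the disk back into the mirror
zpool destroy preserve
zpool attach rpool c7d1s0 c7d0s0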
>
>
> So, after rebuilding, you don't want to restore the
> same OS that you're
> currently running. But there are some files you'd
> like to save for after
> you reinstall. Why not just copy them off somewhere,
> in a tarball or
> something like that?
It's about 200+ gigs of files. If I had a
> You can also use the "zpool split" command and save
> yourself having to do the zfs send|zfs recv step -
> all the data will be preserved.
>
> "zpool split rpool preserve" does essentially
> everything up to and including the "zpool export
> preserve" commands you listed in your original email.
I have three zpools on a server and want to add a mirrored pair of ssd's for
the ZIL. Can the same pair of SSDs be used for the ZIL of all three zpools or
is it one ZIL SLOG device per zpool?
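A log vdev belongs to exactly one pool, so the usual trick is to slice the SSDs (with format) and give every pool its own mirrored pair of slices; the layout below is only an example:

# one slice per pool on each SSD, mirrored across the two SSDs
zpool add tank1 log mirror c1t0d0s0 c1t1d0s0
zpool add tank2 log mirror c1t0d0s1 c1t1d0s1
zpool add tank3 log mirror c1t0d0s3 c1t1d0s3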
Thank you everyone for your answers.
Cost is a factor, but the main obstacle is that the chassis will only support
four SSDs (and that's with using the spare 5.25 bay for a 4x2.5 hotswap bay).
My plan now is to buy the SSDs and do extensive testing. I want to focus my
performance efforts on
Alas, you need the fix for:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Until that arrives, mirror the disk or rebuild the pool.
--chris
etup, however, as I have upgraded my zpool to the latest version, and it can't
be read using the CD now.
Thanks,
Chris
I knew it would be something simple!! :-)
Now 3.63TB, as expected, and no need to export and import either! Thanks
Richard, that's done the trick.
Chris
Not that I have seen. I use them, they work.
--chris
I don't think so. 4.0.1 would have been the first release that actually had it.
--chris
On Sep 28, 2009, at 6:58 PM, Albert Chin wrote:
> Any reason the refreservation and usedbyrefreservation properties are
> not sent?
I believe this was CR 6853862, fixed in snv_121.
-Chris
not be beneficial to
increase the relative "necessary" size calculation of the arc even if the
extra cache isn't likely to get hit often? When an L2ARC is attached does it
get used if there is no memory pressure?
Thanks,
Chris
t version).
Out of curiosity, is there an easy way to find such a file?
Cheers,
Chris
to where it is being used. Expect the L2ARC
> to contain ARC evictions.
>
If c is much smaller than zfs_arc_max and there is no memory pressure can we
reasonably expect that the L2ARC is not likely to be used often? Do items
get evicted from the L2ARC before the L2ARC is full?
Thanks,
Chris
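One way to watch this from the outside, with the kstat names as I remember them:

# current ARC size and targets
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max

# L2ARC counters, to see whether the cache device is being filled and hit
kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses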
On Sat, Oct 3, 2009 at 11:33 AM, Richard Elling wrote:
> On Oct 3, 2009, at 10:26 AM, Chris Banal wrote:
>
> On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling
>> wrote:
>>
>> c is the current size the ARC. c will change dynamically, as memory
>> pressure
>&
Yes, Victor is amazing. He has also helped us to recover a lot of data we
did not have backed up; I am forever grateful for his skills and
willingness to help!
On Fri, Oct 9, 2009 at 4:58 AM, Ross wrote:
> Good news, great to hear you got your data back.
>
> Victor is a legend, I for one am very glad he
I think the RAID card is a re-branded LSI SCSI RAID. I have an LSI 21320-4x and
am having the same problem with ZFS.
Do you have a BBU on the card? You may want to disable cache flush and the ZIL
and see how it works. I tried passthrough and basically the result is the same.
I gave up on tuning this card with ZFS
apshot/snap
is equivalent to this:
# zfs destroy pool@snap
Similarly, this:
# mkdir /pool/.zfs/snapshot/snap
is equivalent to this:
# zfs snapshot pool@snap
This can be very handy if you want to create or destroy
a snapshot from an NFS client, for e
Sorry, do you mean luupgrade from previous versions or from 125 to future
versions?
I luupgraded from 124 to 125 with a mirrored root pool and everything is working
fine.
Which luupgrade procedure do you use?
I uninstall the lu package in the current build first, then install the new lu
package from the version I am upgrading to.
I think I finally see what you mean.
# luactivate b126
System has findroot enabled GRUB
ERROR: Unable to determine the configuration of the current boot environment
.
I just finished the upgrade:
detach one disk from the mirror, then luactivate b126 and init 6; after it
reboots, attach the disk to the mirror again. All went smoothly.
Thanks a lot.
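For anyone searching later, the sequence in command form; device names are made up:

# keep one half of the root mirror untouched as a fallback
zpool detach rpool c0t1d0s0
luactivate b126
init 6

# once the new build boots cleanly, rejoin the disk and let it resilver
zpool attach rpool c0t0d0s0 c0t1d0s0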
Seems like upgrading from b126 to b127 will have the same problem.
You can use VCB to back up.
In my test lab, I use VCB integrated with Bacula to backup all the VMs.
You can get the E2 version of the chassis that supports multipathing, but you
have to use dual-port SAS disks. Or you can use separate SAS HBAs to connect to
separate JBOD chassis and mirror across the two chassis. The backplane is just a
pass-through fabric which is very unlikely to die.
Then like ot
know, unless you have grown the LUN since the pool was created or somehow
the host bus adapter driver has been downgraded since the pool was created.
--chris
? Or, instead, is it guaranteed to be committed in a
single transaction, and so committed atomically?
thanks!
--
Chris Frost
http://www.frostnet.net/chris/
nswers my exact question; thanks!
And Richard, thanks, too. Sorry that my question wasn't stated clearly
enough to avoid causing confusion about whether I asked about the timing
of durability vs. the atomicity of writes with respect to failures.
--
Chris Frost
http://www.fros
On Mon, Nov 30, 2009 at 10:23:06PM -0800, Chris Frost wrote:
> On Mon, Nov 30, 2009 at 11:03:07PM -0700, Neil Perrin wrote:
> > A write made through the ZPL (zfs_write()) will be broken into transactions
> > that contain at most 128KB user data. So a large write could well be s
uld be great on why I'm having these
> problems.
You may be better off talking to the folks at
<https://groups.google.com/group/zfs-macos> who are actively using and working
on the Mac port of ZFS.
Cheers,
Chris
b.opensolaris.org/bin/view/Community+Group+zfs/N>
Cheers,
Chris
ould it be that my performance troubles are due to the calculation of
two different checksums?
Thanks,
Chris
-Original Message-
From: cyril.pli...@gmail.com [mailto:cyril.pli...@gmail.com] On Behalf
Of Cyril Plisko
Sent: 16 December 2009 17:09
To: Andrey Kuzmin
Cc: Chris Murray; zfs-discuss@ope
I've been using OpenSolaris for my home file server for a little over a year
now. For most of that time I have used smb to share files out to my other
systems. I also have a Windows Server 2003 DC and all my client systems are
joined to the domain. Most of that time was a permissions nightmar
Cool thx, sounds like exactly what I'm looking for.
I did a bit of reading on the subject and to my understanding I should...
Create a volume of a size as large as I could possibly need. So, erring on the
optimistic side, "zfs create -s -V 4000G tank/iscsi1". Then in Windows initialize
and quick
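For reference, the server-side half in sketch form. The COMSTAR commands assume the stmf and iSCSI target services are already enabled, and the GUID must be taken from sbdadm's output:

# sparse (thin-provisioned) 4 TB volume; blocks are only allocated as Windows writes
zfs create -s -V 4000G tank/iscsi1

# expose it over iSCSI via COMSTAR
sbdadm create-lu /dev/zvol/rdsk/tank/iscsi1
stmfadm add-view <GUID-printed-by-sbdadm>
itadm create-target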
disk IO increased while dedup was on, although it didn't from the ESXi
side. Could it be that dedup tables don't fit in memory? I don't have a
great deal - 3GB. Is there a measure of how large the tables are in
bytes, rather than number of entries?
Chris
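If your build's zdb supports the -D options, something like this gives the table size in bytes as well as the entry count:

# dedup table statistics: entry counts plus on-disk and in-core bytes per entry
zdb -DD tank

# multiplying the in-core bytes per entry by the number of entries gives a rough
# idea of how much RAM the DDT wants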
-Original Message-
Fr
), is there any
way to get rid of this error? It doesn't look like any new errors are
occurring, just the original damaged files from when the system died. I
thought about running a scrub but I don't know if that will do much since it
isn't a mirror or raidz.
Any ideas would
You need an SLC SSD for the ZIL. The only SLC SSD I'd recommend is the Intel X25-E.
Others are either too expensive or much slower than Intel's.
They are fast when they are new. Once all the blocks are written, performance
degrades significantly. SLC will also degrade over time, but when it needs to
erase blocks and rewrite, it is much faster than MLC. That's why for the ZIL, an SLC
SSD is preferred.
It's possible to remove MLC ZIL and use wipe
You can use the utility to erase all blocks and regain performance, but it's a
manual process and quite complex. Windows 7 supports TRIM; if the SSD firmware also
supports it, the process runs in the background so you will not notice the
performance degradation. I don't think any other OS supports TRIM.
I
On 7 Jan 2010, at 23:52, Ian Collins wrote:
> http://www.opensolaris.org/os/community/zfs/version/
>
> No longer exists. Is there a bug for this yet?
I don't think so. But
<http://hub.opensolaris.org/bin/view/Community+Group+zfs/VERSION/> is where
they've
I have just installed EON .599 on a machine with a 6 disk raidz2 configuration.
I run updimg after creating a zpool. When I reboot, and attempt to run 'zpool
list' it returns 'no pools configured'.
I've checked /etc/zfs/zpool.cache, and it appears to have configuration
information about the
> Is it possible to extend boot-archive in such a way that it includes most of
> the files necessary for mounting /etc from a separate pool? Has someone tried
> such configurations?
What does the live CD do?
Cheers,
Chris
___
zfs-discus
I just tried to create a new share and got the same error.
I have a Supermicro 936E1 (X28 expander chip) and an LSI 1068 HBA. I never got
the timeout issue, but I'm using Seagate 15K.7 SAS. SATA might be different as it
handles errors and I/O timeouts differently.
If you can wait, better to wait for a 6Gb SAS expander based product.
BTW. I'd get Supermicro X8DTH-6F moth
That must be a combination of many things to make it happen,
i.e. expander revision, SAS HBA revision, firmware, disk model, firmware, etc.
I didn't see the problem on my system but I haven't used SATA disks with it so
I can't say.
rt to look for answers?
The virtual machine is running on ESXi 4, with two virtual CPUs and 3GB RAM.
Thanks in advance,
Chris
zpool configuration.
We need to improve the metadata performance with little to no money. Does
anyone have any suggestions? Is there such a thing as a Sun supported NVRAM
PCI-X card compatible with the X4500 which can be used as an L2ARC?
Thanks,
Chris
; conflict or some other type of conflict that leads to this high load? I also
> noticed some messages about acpi... can this acpi also affect the performance
> of the system?
To see what interrupts are being shared:
# echo "::interrupts -d" | mdb -k
st be Blakes 7 fans in Oracle
> Corp.?
You can see all the working bits courtesy of dtrace...
>> I am glad to be able to contribute positively and constructively to this
>> discussion.
>
> Metoo ;-) ... Sean.
I'll get my coat.
Cheers,
Chris
lem?
We've got an HP D2700 JBOD attached to an LSI SAS 9208 controller in a DL360G7,
and I'm keen on getting a ZIL into the mix somewhere - either into the JBOD or
the spare bays in the DL360.
Chris
On 17 Dec 2011, at 19:35, Edmund White wrote:
> On 12/17/11 8:27 PM, "Chris Ridd" wrote:
>
>
>>
>> Can you explain how you got the SSDs into the HP sleds? Did you buy blank
>> sleds from somewhere, or cannibalise some "cheap" HP drives?
>
ement (?) are planning such a thing, although I have no idea
> on their pricing. The software is still in development.
They have announced pricing for 2 of their 4 ZFS products: see
<http://tenscomplement.com/our-products>.
Chris
n certainly edit ZFS ACLs when they're exposed to
it over CIFS.
;-)
Chris