On 2/27/2013 2:05 PM, Tim Cook wrote:
On Wed, Feb 27, 2013 at 2:57 AM, Dan Swartzendruber
<dswa...@druber.com> wrote:
I've been using it since rc13. It's been stable for me as long as
you don't
get into things like zvols and such...
Then it de
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Sašo Kiselkov
Sent: Wednesday, February 27, 2013 6:37
ZFS on Linux (ZoL) has made some pretty impressive strides over the last
year or so...
Did you set the autoexpand property?
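For reference, a minimal sketch of checking and enabling that property (the pool name "tank" and the device name are just examples, not from the original thread):
    # show the current setting
    zpool get autoexpand tank
    # grow the pool automatically once every device in a vdev has been replaced with a larger one
    zpool set autoexpand=on tank
    # for a disk that was already swapped, expansion can also be triggered explicitly
    zpool online -e tank c0t0d0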
On 11/14/2012 9:44 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
Well, I think I give up for now. I spent quite a few hours over the last
couple of days
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
Sent: Tuesday, November 13, 2012 10:08 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?
On 2012-11-14 03:20, Dan Swartzendruber wrote:
> Well, I think I give up for now. I spent quite a few hours over the
> last couple of days trying to get gnome desktop working on bare-metal
> OI, followed by virtualbox. Supposedly that works in headless mode
> with RDP fo
Well, I think I give up for now. I spent quite a few hours over the last
couple of days trying to get gnome desktop working on bare-metal OI,
followed by virtualbox. Supposedly that works in headless mode with RDP for
management, but nothing but fail for me. Found quite a few posts on various
f
Dan,
If you are going to do the all in one with vbox, you probably want to look
at:
http://sourceforge.net/projects/vboxsvc/
It manages the starting/stopping of vbox vms via smf.
Kudos to Jim Klimov for creating and maintaining it.
Geoff
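A rough sketch of what the SMF side looks like once vboxsvc is installed (the exact FMRI layout depends on the vboxsvc version, and the instance name "myvm" is hypothetical):
    # list the VirtualBox service instances vboxsvc created
    svcs -a | grep -i vbox
    # let SMF start and stop a VM along with the host (FMRI shown is only an example layout)
    svcadm enable svc:/site/xvm/vbox:myvm
    svcadm disable svc:/site/xvm/vbox:myvm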
On Thu, Nov 8, 2012 at 7:32 PM, Dan
I have to admit Ned's (what do I call you?) idea is interesting. I may give
it a try...
wait my brain caught up with my fingers :) the guest is running on the
same host, so there is no virtual switch in this setup. i'm still going
to try the vmxnet3 and see what difference it makes...
On 11/8/2012 1:41 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Dan Swartzendruber [mailto:dswa...@druber.com]
Now you have me totally confused. How does your setup get data from the
guest to the OI box? If thru a wire, if it's gig-e, it's going to be
1
On 11/8/2012 12:35 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
the VM running "a ZFS OS" enjoys PCI-pass-through, so it gets dedicated
hardware access to the HB
-Original Message-
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Sent: Wednesday, November 07, 2012 11:44 PM
To: Dan Swartzendruber; Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
Cc: Tiernan OToole
On 11/7/2012 10:53 AM, Edmund White wrote:
Same thing here. With the right setup, an all-in-one system based on
VMWare can be very solid and perform well.
I've documented my process here: http://serverfault.com/a/398579/13325
But I'm surprised at the negative comments about VMWare in this contex
On 11/7/2012 10:02 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
I formerly did exactly the same thing. Of course performance is abysmal
because you're booting a guest VM to share storage back to the host where the
actual VMs run. Not to mention, there's the startup de
On 10/25/2012 11:44 AM, Sašo Kiselkov wrote:
It may be that you'll get reduced cabling range (only up to SATA
lengths, obviously), but it works. The voltage differences are very
small and should only come into play when you're pushing the envelope of
the cable length.
I have a two-drive esat
On 10/4/2012 1:56 PM, Jim Klimov wrote:
What if the backup host is down (i.e. the ex-master after the failover)?
Will your failed-over pool accept no writes until both storage machines
are working?
What if internetworking between these two heads has a glitch, and as
a result both of them become
On 10/4/2012 12:19 PM, Richard Elling wrote:
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber <dswa...@druber.com> wrote:
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber <dswa...@druber.com> wrote:
This who
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber <dswa...@druber.com> wrote:
This whole thread has been fascinating. I really wish we (OI) had
the two following things that freebsd supports:
1. HAST - provides a block-level dr
Forgot to mention: my interest in doing this was so I could have my ESXi
host point at a CARP-backed IP address for the datastore, and I would
have no single point of failure at the storage level.
This whole thread has been fascinating. I really wish we (OI) had the
two following things that freebsd supports:
1. HAST - provides a block-level driver that mirrors a local disk to a
network "disk" presenting the result as a block device using the GEOM API.
2. CARP.
I have a prototype w
Matt, how about running the same disk benchmark(s) with sync=disabled vs.
sync=enabled and the ZIL accelerator in place?
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Matt Van Mater
Sent: Monday, October 01, 2012 9:19 AM
To: zfs-disc
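In case it helps anyone reproducing this, a minimal sketch of that comparison (the dataset name is just an example; remember to put sync back afterwards):
    # run the benchmark with synchronous write semantics ignored
    zfs set sync=disabled tank/vmstore
    # ... run the benchmark ...
    # run it again honoring sync, with the SLOG device in the pool
    zfs set sync=standard tank/vmstore
    # ... run the benchmark ...
    zfs get sync tank/vmstore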
On 9/26/2012 11:18 AM, Matt Van Mater wrote:
If the added device is slower, you will experience a slight drop in
per-op performance, however, if your working set needs another SSD,
overall it might improve your throughput (as the cache hit ratio will
increase).
Thanks for your
On 9/25/2012 3:38 PM, Jim Klimov wrote:
2012-09-11 16:29, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
My first thought was everything is
hitting in ARC
On 9/18/2012 10:31 AM, Eugen Leitl wrote:
I'm currently thinking about rolling a variant of
http://www.napp-it.org/napp-it/all-in-one/index_en.html
with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) zfs boxes for production in
our organisation. The systems themselves would
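The snapshot-and-send replication mentioned there boils down to something like the following (hostnames, pool, dataset and snapshot names are all examples, not from the original message):
    # take a new snapshot on the all-in-one box
    zfs snapshot tank/data@2012-09-18
    # send only the delta since the previously replicated snapshot to a backup N40L
    zfs send -i tank/data@2012-09-17 tank/data@2012-09-18 | \
        ssh backup1 zfs receive -F backup/data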
f the datastore
and back (to get new smaller recordsize.) I wonder if that has an effect?
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 10:12 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensolaris.org
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 10:12 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Interesting question about L2ARC
On 09/11/2012 04:06 PM, Dan Swar
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 9:52 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Interesting question about L2ARC
On 09/11/2012 03:41 PM, Dan Swar
like
160GB (thin provisioning in action), so it seems to me, I should be able to
fit the entire thing in L2ARC?
-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Tuesday, September 11, 2012 9:35 AM
To: Dan Swartzendruber
Cc: 'James H'; zfs-discuss@opensola
g] On Behalf Of James H
Sent: Tuesday, September 11, 2012 5:09 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Interesting question about L2ARC
Dan,
If you're not already familiar with it, I find the following command useful.
It shows the realtime total read commands, number
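The command itself is cut off in the archived preview, so it is not reproduced here; one tool that fits the description (realtime read totals and hit rates, including L2ARC columns) is arcstat.pl, used roughly like this (field names may vary between arcstat versions):
    # print ARC/L2ARC read statistics every 5 seconds
    arcstat.pl -f time,read,hits,hit%,miss,l2hits,l2miss,l2size 5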
Hmmm, but the "real hit ratio" was 68%?
-Original Message-
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Sent: Tuesday, September 11, 2012 8:30 AM
To: Dan Swartzendruber; zfs-discuss@opensolaris.org
S
I got a 256GB Crucial M4 to use for L2ARC for my OpenIndiana box. I added
it to the tank pool and let it warm for a day or so. By that point, 'zpool
iostat -v' said the cache device had about 9GB of data, but (and this is
what has me puzzled) kstat showed ZERO l2_hits. That's right, zero.
kst
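For anyone checking the same thing, a minimal sketch of where those counters live ("tank" is an example pool name):
    # raw L2ARC hit/miss/size counters from the zfs arcstats kstat
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses zfs:0:arcstats:l2_size
    # how much data zpool thinks is on the cache device
    zpool iostat -v tank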
limited to zvol_volsize_to_reservation(volsize, nvl) for
ZFS_PROP_REFRESERVATION (when type == ZFS_TYPE_VOLUME).
Dan
PS: sorry if this message is a duplicate (I sent the original one from the
wrong account).
On 6 Jul 2012, at 0:00, Stefan Ring wrote:
>> Actually, a write to memo
Is anyone aware of any freeware program that can speed up copying tons
of data (2 TB) from UFS to ZFS on the same server?
Thanks.
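One common approach that needs no extra software is a plain recursive copy with tools already on the system (the mount points below are examples; rsync is another option if it is installed):
    # preserve permissions, links and mtimes while copying from the mounted UFS filesystem into the ZFS dataset
    cd /ufs_data && find . -print | cpio -pdmu /tank/data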
then you're OK.
> > Anything else is going to suck.
thanks for pointing out the obvious. :)
Still, though, this is basically true for ANY drive.
It's worse for slower RPM drives, but it's not like resilvers will
exactly be fast with 7200rpm drives, either.
danno
--
Dan Prit
On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> Resilver does a whole lot of random io itself, not bulk reads.. It reads
> the filesystem tree, not "block 0, block 1, block 2..". You won't get
> 60MB/s sustained, not even close.
Even with large, unfragmente
turns into
random i/o. Which is slow on these drives.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
Thanks for your suggestions.
In the meantime I had found this case and PSU - what do folks think?
Antec Twelve Hundred Gaming Case -
http://wiki.dandascalescu.com/reviews/gadgets/computers/cases#Antec_Twelve_Hundred_Gaming_Case_.E2.98.85
+ 12 5.25" externally-accessible bays, in which you can e
tes to swap the disks out.
I did something very similar but with over 1000 CDs. If you can scare
up an external DVD drive, use it too - that way you'll have to change
half as many times.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-722
ly only #1 (reliable) and #2 (quiet) matter most. I've been mulling over
this server for too long and want to get it over with.
Looking forward to your recommendations,
Dan
ul tales to tell about promise
FC arrays. They were clearly not ready for prime time.
OTOH a SAS jbod is a lot less complicated.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
t possible that there is only 106M left?!?
I also know that once there isn't enough space on the device, it should start
to delete the old snapshot, if I'm right. Therefore I should never really run
out of space unless I really fill it up with files.
Thanks,
Dan
drwxrwxr-x 12 admin root 18 2010-01-12 15:40 zfs-auto-snap:hourly-2010-01-12-21:00
drwxrwxr-x 12 admin root 18 2010-01-12 15:40 zfs-auto-snap:monthly-2010-01-12-19:37
drwxrwxr-x 12 admin root 18 2010-01-12 15:40 zfs-auto-snap:weekly-2010-01-12-19:37
So what can I do to make all auto-snapshots available in zones?
the drive, especially for the price
paid.
I agree with Al that it probably isn't suitable as a ZIL. Maybe as a
read cache though.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
> Hi Dan,
>
> Can you describe what you are trying to recover from
> with more details
> because we can't quite follow what steps might have
> lead to this
> scenario.
Sorry.
I was running Nevada 103 with a root zpool called "hdc" with c1t0d0s0 and
c1t1d0
now the (wiped clean)
c8t0d0s0. Any clues are, as always, welcome. I'd prefer not to restore my
saved zfs-send streams, so I'd like to get the import of the old root pool
(hdc) to work.
Thanks!
Dan McD.
On Nov 4, 2009, at 6:02 PM, Jim Klimov wrote:
> Thanks for the link, but the main concern in spinning down drives of a ZFS
> pool
> is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a
> transaction
> group (TXG) which requires a synchronous write of metadata to disk.
I'm r
Does the same thing apply for a "failing" drive? I have a drive that
has not failed but by all indications, it's about to. Can I do the
same thing here?
-dan
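For the record, zpool replace does not require the disk to have faulted first; a proactive swap looks like this (pool and device names are examples):
    # attach the new disk, resilver onto it, then the suspect disk is detached automatically
    zpool replace tank c1t2d0 c1t3d0
    # watch the resilver progress
    zpool status tank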
Jeff Bonwick wrote:
Yep, you got it.
Jeff
On Fri, Jun 19, 2009 at 04:15:41PM -0700, Simon Breden wrote:
Hi
king it up to
the 7110; it has plenty of PCI slots.
finally, one question - I presume that I need to devote a pair of disks
to the OS, so I really only get 14 disks for data. Correct?
thanks!
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
I have a situation where the first part of the first label on the vdev is
corrupted (from 8K to 16K on the slice).
Can I use a zfs/zpool command to rewrite that label from one of the other three?
So far I've tried
zpool update
zpool export
zpool import
Some zdb magic?
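For inspecting which of the four labels are still readable, zdb can dump them directly (the device path is an example; whether a damaged label can then be rewritten short of an export/import or re-attach is a separate question):
    # print all four vdev labels on the slice; compare the txg and guid fields across them
    zdb -l /dev/rdsk/c0t0d0s0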
vices explicitly depend
on svc:/system/filesystem/local:default that don't already, but I get the
feeling there's more to it than that.
Any recent insights into /var/log being its own filesystem?
Thanks!
Dan
It would seem that the ZFS Web UI lacks a few requisite classes to support the
updated ZFS features in snv_b89+.
> Yeah. The command line works fine. Thought it to be a
> bit curious that there was an issue with the HTTP
> interface. It's low priority I guess because it
> doesn't impact the functionality really.
>
> Thanks for the responses.
I was receiving the same stacktrace:
[b]No enum const class
com.s
Bill Shannon wrote:
> I just wanted to follow up on this issue I raised a few weeks ago.
>
> With help from several of you, I had all the information and tools
> I needed to start debugging my problem. Which of course meant that
> my problem disappeared!
>
> At one point my theory was that ksh93
the grep output or
> use the -q flag for grep (caveat: I don't know which grep you
> prefer to use, so check the flags for your version)
/usr/xpg4/bin/grep has -q, that's easy enough to code-in, as we do that for
punchctl already.
Thanks!
Dan
ee no documentation mentioning how to scrub, then wait-until-completed. I'm
happy to be pointed at any such documentation. I'm also happy to be otherwise
clued-in if no such documentation exists, or if no such feature exists.
Thanks!
Dan McD.
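As far as I know there was no built-in wait at the time; the usual workaround is to poll zpool status (the pool name is an example, and the exact progress string varies between releases):
    zpool scrub tank
    # block until the scrub-in-progress line disappears from the status output
    while zpool status tank | grep -q "scrub in progress"; do
        sleep 60
    done
    zpool status tank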
stec (Promise) or zfs.
the way i'd try to do this would be to use the same box under solaris
software RAID, or better yet linux or windows software RAID (to make
sure it's not a solaris device driver problem).
does pulling the disk then get noticed? If so, it's a zfs bug
a little pucker for my colleagues when it happened while i
was on vacation. The support guy at the reseller we were working with
(NOT Western Scientific) told them the raid was hosed and they should
rebuild from scratch, hope you had a backup.
danno
--
Dan Pritts, System Administrator
Interne
pull a disk and goes on and does the right thing.
I wonder if you've got a scsi card/driver problem. We tried using
an Adaptec card with solaris with poor results; switched to LSI,
it "just works".
danno
--
Dan Pritts, System Administrator
Internet
windows
> machines... is there any similar solution on the win machines?
none that i'm aware of; windows does have software mirroring, of
course. Make lots of backups :).
danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
[1]
http://www
I've just discovered patch 125205-07, which wasn't installed on our system
because we don't have SUNWhea..
Has anyone with problems tried this patch, and has it helped at all?
That is interesting; again, we're having the same problem with our X4500s.
I am trying to work out what is causing the problem with NFS; restarting the
service causes it to try to stop and then not come back up.
Rebooting the whole box fails and it just hangs until a hard reset.
ere i've heard about it)
wants to be awful sure that the drive actually flushes its write cache
when you ask for it.
Regardless, the speed difference is marginal.
danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
have NCQ, you'd lose on some random i/o workloads by
adding the PATA disk. But, i think that you need SATA300 to support
that feature.
danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
I had a pool, p, with a filesystem p/local, mounted at /local. In that are
several workspace filesystems, including "/local/ws/install-nv".
I used "mv" from /local/ws to change install-nv to nv-install.
it worked. nothing seemed wrong, except the output of "zfs list".
Then I remembered there w
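The supported way to rename a child filesystem is zfs rename rather than mv on its mountpoint; a minimal sketch, assuming the dataset names mirror the mountpoints described above:
    # rename the dataset; its default mountpoint follows the new name
    zfs rename p/local/ws/install-nv p/local/ws/nv-install
    zfs list -r p/local/ws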
Does anyone have a customer
using IBM Tivoli Storage Manager (TSM) with ZFS? I see that IBM has a
client for Solaris 10, but does it work with ZFS?
--
Dan Christensen
System Engineer
Sun Microsystems, Inc.
Des Moines, IA 50266 US
877-263-2204
I have 8 SATA on the motherboard, 4 PCI cards with 4 SATA each, one
PCIe 4x sata card with two, and one PCIe 1x with two. The operating
system itself will be on a hard drive attached to one ATA 100
connector.
Kind of like a "poor man's" data centre, except not that cheap... It
still is estimated
I care more about data integrity than performance. Of course, if
performance is so bad that one would not be able to, say, stream a
video off of it, that wouldn't be acceptable.
On 6/22/07, Richard Elling <[EMAIL PROTECTED]> wrote:
Dan Saul wrote:
> Good day ZFS-Discuss,
>
> I
redundancy but
still keeping as much disk space open for my uses as possible.
I don't want to mirror 15 drives to 15 drives as that would
drastically affect my storage capacity.
Thank you for your time,
Dan
[EMAIL PROTECTED] wrote:
it's been assigned CR 6566207 by Linda Bernal. Basically, if you look
at si_intr and read the comments in the code, the bug is pretty
obvious.
si3124 driver's interrupt routine is incorrectly coded. The ddi_put32
that clears the interrupts should be enclosed in an "
Robert Milkowski wrote:
Hello Dan,
Tuesday, April 17, 2007, 9:44:45 PM, you wrote:
How can this work? With compressed data, its hard to predict its
final size before compression.
Because you are NOT compressing the file only compressing the blocks as
they get written to disk.
DM> I gu
How can this work? With compressed data, it's hard to predict its
final size before compression.
Because you are NOT compressing the file, only compressing the blocks as
they get written to disk.
I guess this implies that the compression can only save integral numbers of
blocks.
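A quick way to see the per-block nature of this is to compare a file's logical length with its allocated size on a compressed dataset (dataset and file names are examples):
    zfs set compression=on tank/data
    cp /var/adm/messages /tank/data/sample
    ls -l /tank/data/sample       # logical size is unchanged
    du -k /tank/data/sample       # allocated (on-disk) size reflects per-record compression
    zfs get compressratio tank/data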
Sigh. We have devolved. Every thread on OpenSolaris discuss lists
seems to devolve into a license discussion.
:0 B:
* GPL
/dev/null
Luke Scharf wrote:
This is also OT -- but what is the boot-archive, really?
Is it analogous to the initrd on Linux?
precisely.
So, that would be an "error", and, other than reporting it accurately, what
would you want ZFS to do to "support" it?
dudekula mastan wrote:
If a write call attempted to write X bytes of data, and if the write call
writes only x (where x
-Masthan
> Please let me know the ZFS support for s
Ian Collins wrote:
Rayson Ho wrote:
Interesting...
http://www.rhic.bnl.gov/RCF/LiaisonMeeting/20070118/Other/thumper-eval.pdf
I wonder where they got the information that "Solaris 10 doesn't support
dual-core Intel" from?
probably from evaluating Solaris 8 or something.
Bryan Cantrill wrote:
well, "Thumper" is actually a reference to Bambi
You'd have to ask Fowler, but certainly when he coined it, "Bambi" was the
last thing on anyone's mind. I believe Fowler's intention was "one that
thumps" (or, in the unique parlance of a certain Commander-in-Chief,
"one th
Frank Cusack wrote:
On January 19, 2007 10:01:43 PM -0800 Dan Mick <[EMAIL PROTECTED]> wrote:
Scouting around a bit, I see SIIG has a 3132 chip, for which they make a
card, eSATA II, available in PCIe and PCIe ExpressCard formfactors. I
can't promise, but chances seem good that it
[EMAIL PROTECTED] wrote:
That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as "will not fix." :-(
In the context of a hardware platform it makes little sense to
distinguish be
David J. Orman wrote:
Hi,
I'm looking at Sun's 1U x64 server line, and at most they support two drives.
This is fine for the root OS install, but obviously not sufficient for many
users.
Specifically, I am looking at the: http://www.sun.com/servers/x64/x2200/
X2200M2.
It only has "Riser car
Is it possible to convert/upgrade a file system that is currently under the
control of Solaris Volume Manager to ZFS?
Thanks
Tonight I've been moving some of my personal data around on my
desktop system and have hit some on-disk corruption. As you may
know, I'm cursed, and so this had a high probability of ending badly.
I have two SCSI disks and use live upgrade, and I have a partition,
/aux0, where I tend to keep pers