performance goes.
Thanks
Tim
___
'kill' the offline node.
Perhaps those things could be made to run on Solaris if they don't already.
-tim
___
exactly this functionality -
it'll start deleting snapshots that it has taken when the filesystem
reaches a certain threshold.
More details at:
http://opensolaris.org/os/community/arc/caselog/2008/571/mail
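For the curious, the cleanup logic boils down to something like this (just a
sketch, not the actual time-slider-cleanup code; the pool name and the 80%
threshold here are made-up examples):
# destroy the oldest auto-snapshot once the pool crosses the threshold
CAP=$(zpool list -H -o capacity tank | tr -d '%')
if [ "$CAP" -gt 80 ]; then
  OLDEST=$(zfs list -H -t snapshot -o name -s creation -r tank | grep zfs-auto-snap | head -1)
  [ -n "$OLDEST" ] && zfs destroy "$OLDEST"
fi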
cheers,
tim
[1] actually while I'm here, quick po
oy!
cheers,
tim
___
where the
best place to retrieve data from. I suspect it'll take more real-user
testing to determine what's the best balance between data availability
and disk space.
cheers,
tim
___
I'm not sure baking this support into the zfs utilities is the right way
to go. None of the other Solaris commands do this sort of
auto-completion, do they?
cheers,
tim
___
As for the service breaking for datasets with spaces in their names,
I've got an ugly fix, but want to have a go at doing a better job of it.
cheers,
tim
___
to taking snapshots based solely on
statically listed filesystems (see the service manifest or README file
and check for the "zfs/fs-name" SMF property)
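(For example, pointing the daily instance at a single filesystem would look
roughly like this - the instance and dataset names are just examples, and the
README has the exact property details:
# svccfg -s svc:/system/filesystem/zfs/auto-snapshot:daily setprop zfs/fs-name = astring: "tank/home"
# svcadm refresh svc:/system/filesystem/zfs/auto-snapshot:daily
)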
> Alas I've had to downgrade as Nautilus is not usable:
Yow.
cheers,
tim
___
ces, but not proto areas or places where ISO images get built)
cheers,
tim
___
tly importing brand new pools, then yes, you've got a
point.
cheers,
tim
___
The time-slider service just looks for snapshots of given names.
We could mark those snapshots with another zfs user property, but that'd
break backwards compatibility with earlier versions of Solaris that
don't have snapshot property support, so I'd rather not do that if possible.
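(Just to illustrate what that would look like if we did it - the property name
here is made up, and it only works on builds with user-property support on
snapshots:
# zfs set org.example:auto-snap=true tank/home@mysnap
# zfs get -H -o value org.example:auto-snap tank/home@mysnap
)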
s as
I didn't have a chance to work out what was going on. Getting ZFS plug
'n' play on USB disks would be much, much cooler though[1].
cheers,
tim
[1] and I reckon that by relying on the 'zfs/interval' 'none' setting
for the auto-sna
571
I wonder, is there a build problem with 2008.11? I'm image-updating my
desktop at the moment and will check it out. Thanks for the heads-up!
cheers,
tim
___
the "zfs/period" SMF property
to set how many intervals you want to wait between snapshots.
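(You can check what an instance currently has set with something along the
lines of - the instance name is just an example:
$ svcprop -p zfs svc:/system/filesystem/zfs/auto-snapshot:frequent
and change it with svccfg setprop followed by svcadm refresh.)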
cheers,
tim
___
to the contrary. I think it's a bug; we
should either promote immediately on creation, or perhaps beadm destroy
could do the promotion behind the covers.
-tim
___
Kyle McDonald wrote:
> Tim Haley wrote:
>> Ross wrote:
>>
>>> While it's good that this is at least possible, that looks horribly
>>> complicated to me.
>>> Does anybody know if there's any work being done on making it easy to
>>
/appl
> cp: cannot create /datapool/appl/ISO8859-K?ln.url: Operation not supported
> # /usr/bin/cp UTF8-Köln.txt /datapool/appl/
> #
>
> Kristof/
What is the output from:
zfs get utf8only datapool/appl
?
thanks,
-tim
___
sensitive, all of a sudden you've got two files that
can no longer be looked up.
The other reason is performance. Knowing beforehand that we need to
track or disallow case conflicts helps us to optimize to keep lookups fast.
-tim
> Nico
___
Kristof Van Damme wrote:
> Hi Tim,
> Thanks for having a look.
> The 'utf8only' setting is set to off.
>
> Important bit of additional information:
> We only seem to have this problem when copying to a zfs filesystem with the
> casesensitivity=mixed property. We n
Kristof Van Damme wrote:
> Hi Tim,
> That's splendid!
>
> In case other people want to reproduce the issue themselves, here is how.
> In attach is a tar which contains the 2 files (UTF8 and ISO8859) like the
> ones I used in my first post to demonstrate the problem. Here
ur,
http://bugs.opensolaris.org/view_bug.do?bug_id=6462803
and a workaround you can use in the meantime.
cheers,
tim
___
$ pfexec zfs set mountpoint=legacy rpool/ROOT/opensolaris
$ mkdir /tmp/a
$ pfexec mount -F zfs rpool/ROOT/opensolaris /tmp/a
$ pfexec umount /tmp/a
$ svcadm clear frequent daily hourly
cheers,
tim
___
imits like this can take a long time (even with
the massive zfs list performance improvements :-)
[ hacks around listing the contents of .zfs/snapshot/ only work when
filesystems are mounted unfortunately, so I'd been avoiding doing that
in the zfs-auto-snapshot
m/zfs/auto-snapshot:weekly
There's documentation on the SMF properties for the core service at
/var/svc/manifest/system/filesystem/auto-snapshot.xml
http://blogs.sun.com/timf/resource/README.zfs-auto-snapshot.txt
cheers,
tim
___
This is a known bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6749498
- being a duplicate of an existing bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6462803
cheers,
tim
On Mon, 2009-01-12 at 14:07 -0800, Robert Bauer wrote:
> Time slider is
'/nas1/backups/': invalid dataset name
>
There's a typo there; you would have to do
zfs destroy nas1/backups
Unfortunately, you can't use the mountpoint; you have to name the dataset.
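If you're not sure what the dataset is called, something like this will map
mountpoints back to dataset names (then destroy by the name in the first column):
# zfs list -o name,mountpoint -r nas1
# zfs destroy nas1/backups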
-tim
>> r...@bitchko:/nas1# rm -Rf backups/
>> rm: cannot remove directory `bac
before I have a chance to play with b105.
>
> Does anyone know specifically if b105 has ZFS encryption?
>
It does not.
-tim
> Thanks,
>
> Jerry
>
>
> Original Message
> Subject: [osol-announce] SXCE Build 105 available
> Date: Fri, 09 Jan 2
this is an awful idea... in which case I am happy to hear
that as well and will feed that back to the customer.
Thanks
Tim
___
Does a by-hand share succeed?
I.e., share -F nfs -o sec=krb5,rw /home
?
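(If the by-hand share works, the zfs-managed equivalent would be along the
lines of the following - the dataset name is just a guess:
# zfs set sharenfs=sec=krb5,rw rpool/export/home
)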
-tim
___
actually in the works. There is a functioning prototype.
-tim
___
liable to be destroyed
when the service recycles old snapshots, so take care!
[ we could potentially fix that, now that snapshots have their own
user-properties, by adding a user-property to every auto-snapshot that
was taken, and taking care to only destroy those ones, but we're not
there yet ]
ck receives would be highly
useful, if you can get it. Reboot -d would be best, but it might just hang.
You can try savecore -L.
-tim
If I boot to my snv_106 BE, everything works fine; this issue has
never occurred on that version.
Any thoughts?
# zfs snapshot -r mydataset@t1
# for ds in $(zfs list -H -t filesystem,volume -o name -r mydataset)
> do
> echo sending $ds@t1
> zfs send $ds@t1 | ssh remote-host zfs recv -d foo
> done
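(On builds that have recursive send, the same thing should collapse to a
one-liner - untested sketch, same host and dataset names assumed as above:
# zfs send -R mydataset@t1 | ssh remote-host zfs recv -d foo
)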
cheers,
tim
___
e - it'd be worth testing it, logging a
bug against "send -R" if that's the case.
cheers,
tim
___
snapshots.
http://defect.opensolaris.org/bz/show_bug.cgi?id=8683
That's gone back already, I don't know if it made 2009.06 though
( It was a Time Slider bug, not one in the core auto-snapshot services
http://src.opensolaris.org/source/history/jds/time-slider/ )
cheers,
ng the dump we got from you (thanks again), we're relatively sure
you are hitting
6826836 Deadlock possible in dmu_object_reclaim()
This was introduced in snv_111 and fixed in snv_113.
Sorry for the trouble.
-tim
___
uss/2009-January/025601.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027629.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027365.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027257.html
--Tim
___
would have the same reboot problem.
>
> Any clues?
>
> Thanks,
> TJ
>
Moving normal drives in a pool around isn't a problem. If you move a boot
drive, you need to update grub. That has nothing to do with ZFS though,
that would occur on
The problem isn't well-isolated yet.
In my notes: 6565042, 6749630
The first of which is marked as fixed in snv_77, 19 months ago.
The second is marked as a duplicate of 6784395, fixed in snv_107, 20 weeks ago.
-tim
but as I said before, I've found the information on the mailing list
more
' in snapshot names now)
More at:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_12
cheers,
tim
___
Hi Ross,
On Thu, 2009-06-25 at 04:24 -0700, Ross wrote:
> Thanks Tim, do you know which build this is going to appear in?
I've actually no idea - SUNWzfs-auto-snapshot gets delivered by the
Desktop consolidation, not me. I'm checking in with them to see what the
story is.
That said
; SMF property ]
time-slider-cleanup is the thing that deletes snapshots iff you're
running low on disk space. The auto-snapshot service runs all of its
cron jobs from the 'zfssnap' role.
cheers,
tim
___
What's the deal with the mailing list? I've unsubscribed an old email address,
and attempted to sign up the new one 4 times now over the last month, and have
yet to receive any updates/have it approved. Are the admins asleep at the helm
for zfs-discuss or what?
___
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can actually do something?
___
Bump? I watched the stream for several hours and never heard a word about
dedupe. The blogs also seem to be completely bare of any mention. What's the
deal?
___
likely I have to set it with rge.conf, and reboot, but I would need to
> rebuild my USB image for that. (unplumb, modunload, modload, plumb did not
> seem to enable it either).
>
>
Your NIC may not support it. Realtek and Broadcom both make cheap, cheap
chipsets that are
ensitive. Good luck; I'd be happy to be
proven wrong. Every test I've ever done has shown you need SAS/FC for
VMware workloads though.
--Tim
___
On Mon, Aug 3, 2009 at 10:18 PM, Tim Cook wrote:
>
>
> On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik
> wrote:
>
>> I am looking at a nas software from nexenta, and after some initial
>> testing i like what i see. So i think we will find in funding the budget for
>
Better response time = a much happier virtualized platform.
Not to mention, in my experience, the 7.2k drives fall off a cliff when you
overwork them. 10k/15k drives tend to have a more linear degradation in
performance.
--Tim
___
On Fri, Aug 7, 2009 at 8:49 AM, Dick Hoogendijk wrote:
> I've a new MB (the same as before but this one works...) and I want to
> change the way my SATA drives were connected. I had a ZFS boot mirror
> connected to SATA3 and 4 and I want those drives to be on SATA1 and 2 now.
>
> Question: will ZFS
You can size DNLC
by tuning the ncsize parameter, but it requires a reboot. See the
Solaris Tunable Parameters Guide for details.
http://docs.sun.com/app/docs/doc/817-0404/chapter2-35?a=view
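(Roughly: check the current value, then bump it in /etc/system and reboot -
the number below is only an example, size it to your workload:
# echo ncsize/D | mdb -k
* then in /etc/system, e.g.:
set ncsize = 500000
)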
Ye
show anything?
What about
zfs get refquota,refreservation,quota,reservation zp/fs/esx_tmp
-tim
Thanks,
Chris
___
I'm wondering if anyone from Sun has any updated info on the
bug?
I was unable to locate the bug in the bugs database.
--Tim
___
e as much protection. Raidz21, you can lose any 4
drives, and up to 14 if it's the right 14. Raid10, if you lose the wrong
two drives, you're done.
--Tim
___
On Fri, Aug 21, 2009 at 5:52 PM, Ross Walker wrote:
> On Aug 21, 2009, at 6:34 PM, Tim Cook wrote:
>
>
>
> On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker <
> rswwal...@gmail.com> wrote:
>
>> On Aug 21, 2009, at 5:46 PM, Ron Mexico <
>> no-re...@opensol
On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling wrote:
> On Aug 21, 2009, at 3:34 PM, Tim Cook wrote:
>
> On Fri, Aug 21, 2009 at 5:26 PM, Ross Walker wrote:
>> On Aug 21, 2009, at 5:46 PM, Ron Mexico wrote:
>>
>> I'm in the process of setting up a NAS for my co
On Fri, Aug 21, 2009 at 8:04 PM, Richard Elling wrote:
> On Aug 21, 2009, at 5:55 PM, Tim Cook wrote:
>
>> On Fri, Aug 21, 2009 at 7:41 PM, Richard Elling
>> wrote:
>>
>> My vote is with Ross. KISS wins :-)
>> Disclaimer: I'm also a member of BAARF.
(although I'm not sure how baked FCoE is
at this point).
http://www.opensolaris.org/os/project/comstar/;jsessionid=507478D1B2496DCEA0A764D4C8A63131
http://wikis.sun.com/display/OpenSolarisInfo/comstar+Administration
--Tim
___
ev/zvol/rdsk/storagepool/backups/macbook_dg
> sbdadm: could not create meta file
>
I'm not entirely sure what you're trying to do here. Is
/dev/zvol/rdsk/storagepool/backups/macbook_dg a zfs snapshot?
--Tim
___
e of the enterprise grade. And guess what... none of the
drives in my array are less than 5 years old, so even if they did die, and I
had bought the enterprise versions, they'd be covered.
--Tim
___
create your LUN, and typed the path to the file as
/storagepool/backups/iscsi/macbook_dg.
Reference:
http://de.opensolaris.org/os/project/comstar/COMSTAR_Admin-FC-iSCSI.pdf;jsessionid=2C549A4253A0B211ED9DABBF66EF1495
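(For comparison, the zvol-backed flavour would be roughly the following - the
size and names are just examples:
# zfs create -V 100g storagepool/backups/macbook_dg
# sbdadm create-lu /dev/zvol/rdsk/storagepool/backups/macbook_dg
versus pointing create-lu at a plain file path like
/storagepool/backups/iscsi/macbook_dg.)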
--Tim (four jamesons later)
___
do a raid-z2/3, and you won't have to worry
about it. The odds of 4 drives not returning valid data are so slim (even
among RE drives) that you might as well stop working and live in a hole (as your
odds are better of being hit by a meteor or winning the lottery by osmosis).
I KIIID.
--Tim
___
project, I am currently lacking horror stories.
When it comes to "what the hell, this drive literally failed a week after
the warranty was up", I unfortunately PERSONALLY have 3 examples. I'm
guessing (hoping) it's just bad luck. Perhaps the luck wasn't SO bad
though, as I
Why would raidz + an SSD ARC not
meet both financial and performance requirements? It would literally be a
first for me.
--Tim
___
> http://www.cuddletech.com/blog/pivot/entry.php?id=968
> --
>
So the typo fixed it?
--Tim
___
nd who got at least another 3 years out of them (heck, he might still be
using them for all I know). Those Maxtors weren't worth the packaging they
came in. I wasn't sad to see them bought up.
--Tim
___
Seriously? It's considered "works as designed" for a system to take 5+
hours to boot? Wow.
--Tim
___
likely to have the opposite of the intended effect.
>
> Adam
>
>
> --
> Adam Leventhal, Fishworks  http://blogs.sun.com/ahl
>
>
Adam/David,
I believe this is the one you're looking for:
http://bugs.opensolaris.org/view_bug.do?bug_id=67182
(2:1). I also want to be able to add and remove storage
> dynamically. You know, champagne on a beer budget. :)
>
>
Any particular reason you want to present block storage to VMware? It works
as well, if not better, over NFS, and saves a LOT of headaches.
--Tim
___
We've got MASSIVE deployments of VMware on NFS over 10g that achieve stellar
performance (admittedly, it isn't on zfs).
--Tim
___
about
> the same. It's a ULI, so the components are on the "wrong" side of the
> board, but it's still just PCIe electrically.
>
> -B
>
>
The mv8 is a Marvell-based chipset, and it appears there are no Solaris
drivers for it.
uya-ku, Tokyo| +81 (0)90-5578-8500 (cell)
> Japan| +81 (0)3 -3375-1767 (home)
>
>
Interesting, there was a big thread about this card over at HardOCP,
and they said it didn't work with 2009.06.
--Tim
___
I don't understand why you need this two-layer architecture. Just add
a server to the mix, and add the new storage to VMware. If you're doing
iSCSI, you'll hit the LUN size limitations long before you'll need a second
box.
--Tim
___
ote:
I see this issue on each of my X4540's, 64GB of ECC memory, 1TB drives.
Rolling back to snv_118 does not reveal any checksum errors, only snv_121
So, the commodity hardware here doesn't hold up, unless Sun isn't
validating their equipment (not likely, as these servers have had no
hardware issues prior to this build)
--
Brent Jones
br...@servuhome.net
--Tim
___
> [quoted appliance shell warning banner trimmed]
>
> Trevor
Wow, that prompt is all official. You should see what happens when you try
to get into the shell prompt of the beta systems. Far less professional,
far more entertaining. Thanks again Adam, you know I got a kick out of it
:)
--Tim
___
m quite happy with, it's just
> storing just a snapshot file that makes me nervous.
>
The correct answer is NDMP. Whether Sun will ever add it to OpenSolaris is
another subject entirely though.
--Tim
___
B/sec. The backend can more than
satisfy that. Who cares at that point whether it can push 500MB/s or
5000MB/s? It's not a database processing transactions. It only needs to be
able to push as fast as the front-end can go.
--Tim
___
On Thu, Sep 3, 2009 at 4:57 AM, Karel Gardas wrote:
> Hello,
> your "(open)solaris for Ecc support (which seems to have been dropped from
> 200906)" is misunderstanding. OS 2009.06 also supports ECC as 2005 did. Just
> install it and use my updated ecccheck.pl script to get informed about
> errors
On Sat, Sep 5, 2009 at 12:30 AM, Marc Bevand wrote:
> Tim Cook cook.ms> writes:
> >
> > Whats the point of arguing what the back-end can do anyways? This is
> bulk
> data storage. Their MAX input is ~100MB/sec. The backend can more than
> satisfy that. Who cares a
On Mon, Sep 7, 2009 at 2:01 AM, Karel Gardas wrote:
> What's your uptime? Usually it scrubs memory during the idle time and
> usually waits quite a long nearly till the deadline -- which is IIRC 12
> hours. So do you have more than 12 hours of uptime?
> --
>
10:43am up 30 days 6:47, 1 user,
gs the scrub
> > Interesting. Note my crontab entry doesn't have any protection
> > against this, so perhaps this bug is back in different form now.
> >
> > Will
> >
>
Might wanna be careful with b122. There are issues with raid-z raidsets
producing phantom checksum errors.
--Tim
___
tem as it gets too complicated and way too
> expensive.
>
Better IOPS? Do you have some numbers to back that claim up? I've never
heard of anyone getting "much better" IOPS out of a drive by simply changing
the interface from SATA to SAS. Or SAT
On Fri, Sep 11, 2009 at 3:20 PM, Eric D. Mudama
wrote:
> On Fri, Sep 11 at 13:14, Tim Cook wrote:
>
>> Better IOPS? Do you have some numbers to back that claim up? I've never
>> heard of anyone getting "much better" IOPS out of a drive by simply
>>
There's a pretty well-known range of IOPS provided for 7200, 10K, and 15K
drives respectively, regardless of interface. You appear to be saying this
isn't the case, so I'd like to know what data you're using as a reference
point.
--Tim
___
On Sat, Sep 12, 2009 at 10:17 AM, Damjan Perenic <
damjan.pere...@guest.arnes.si> wrote:
> On Sat, Sep 12, 2009 at 7:25 AM, Tim Cook wrote:
> >
> >
> > On Fri, Sep 11, 2009 at 4:46 PM, Chris Du wrote:
> >>
> >> You can optimize for better IOPS or fo
e from Sun going
>> to tell you a word until it is possible to tell things. At which point
>> they will probably tell everything + source.
>>
>> My own opinion of course...
>>
>> --
>> Regards,
>> Cyril
>>
>>
As we should. Did the
On Thursday, September 17, 2009, Nilsen, Vidar wrote:
> Hi,
>
> I'm trying to move disks in a zpool from one SATA controller to another.
> It's 16 disks in 4x4 raidz.
> Just to see if it could be done, I moved one disk from one raidz over to
> the new controller. Server was powered off.
> After boo
e/hazz  41.3G  5.65G  41.3G  /export/home/hazz/
> rpool/swap  1.50G  5.86G  1.29G  -
> Any clue to get on the rescue?
> --
>
>
What does the grub.conf look like now that you've re-installed grub?
--Tim
___
Did you do a zpool scrub after you replaced the drive? How would zfs know
what you wanted done with the drive if you didn't tell it?
--Tim
___
period. I hope I heard wrong; otherwise the whole
announcement feels like a bit of a joke to me.
--Tim
___
From time to time, I do think about
upgrading my system at home, and would really appreciate a
zfs-community-recommended configuration to use.
Any takers?
cheers,
tim
___
Hi Guys,
I completely forgot to unsubscribe from the zfs list before changing email
addresses, and no longer have access to the old one. Is there someone I can
contact about manually removing my old address, or updating it with my new one?
Thanks!
--Tim
___
I think this will be a hard sell internally, given that it would eat into their
own StorageTek line.
___
Just want to verify: if I have, say, one 160GB disk, can I format it so that the
first, say, 40GB is my main UFS partition with the base OS install, and then make
the rest of the disk zfs? Or even better yet, for testing purposes, make two
60GB partitions out of the rest of it and make them a *mirror*
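(The mirror bit itself is trivial once the slices exist - the device names
below are made up, yours will differ:
# zpool create tank mirror c0t0d0s5 c0t0d0s6
though note that mirroring two slices of the same physical disk only guards
against localized errors, not the whole drive dying.)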
Well, the system can only have one disk, so giving it the full disk isn't
really an option unless they've finally gotten the whole boot from a zfs disk
figured out.
___
I guess I should clarify what I'm doing.
Essentially I'd like to have the / and swap on the first 60GB of the disk.
Then use the remaining 100GB as a zfs partition to set up zones on. Obviously
the snapshots are extremely useful in such a setup :)
Does my plan sound feasible from both a usabil
When you do the initial install, how do you do the slicing?
Just create something like:
/ 10G
swap 2G
/altroot 10G
/zfs restofdisk
Or do you just create the first three slices and leave the rest of the disk
untouched? I understand the concept at this point, just trying to explain to a
third party exactl
It's a third party host, and I've been informed the cases they use only
have room available for one hard drive. It's definitely not my first
choice, but it's the only option I have at this point.
Tim Cook
I'm thinking that if that is the case, I'll just be dd'ing to a new disk and
continuing on with it. Obviously this is not the preferred solution, but
unless they're willing to let me send my own hardware, I don't have much of a
choice.
___
Does Live Upgrade work fine if the zones are on a UFS partition?
___