Thank you Lori! Guess I'll take that route :) The final solution is actually
for a production environment. For now I was going to stage it, but it looks
like that's not possible at this point in time.
more useful in the future.
Hope this helps ?
cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
___
zfs-discuss mailing
the case where those blocks aren't referenced by a
previous snapshot, in which case the data isn't unreferenced.
hope this helps,
cheers,
tim
available
* ability to use free space on the root pool, making it
available for other uses (by setting a reservation on the root
filesystem, you can ensure that / always has sufficient available
space)
- am I missing any others ?
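On the reservation point above, a minimal sketch (dataset name and size are
assumptions, not from the original mail):
# zfs set reservation=1G rpool/ROOT
# zfs get reservation rpool/ROOT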
cheers,
tim
Hi all,
On Wed, 2007-03-28 at 14:23 -0700, Lin Ling wrote:
> We will make the manual and netinstall instructions available to
> non-SWAN folks shortly.
>
> Tim Foster also has a script to do the set up, wait for his blog.
Just put that blog post up - you can find it at
http://b
hey folks,
Lori Alt wrote:
See Tim Foster's blog for some procedures for doing
some LU-like management of bootable datasets:
http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
Yep - I'll be working this weekend on updating the previous
"mountrootadm.sh
Hi All,
On Fri, 2007-03-30 at 19:33 +0100, Tim Foster wrote:
(replying to myself, I know, I know)
> Yep - I'll be working this weekend on updating the previous
> "mountrootadm.sh" script that I wrote when the previous ZFS Mountroot
> (ufs boot + a quick switcheroo to zfs
LINE       0     0     0
  /tmp/1   ONLINE       0     0     0
  /tmp/2   ONLINE       0     0     0
  /tmp/3   ONLINE       0     0     0
  /tmp/4   ONLINE       0     0     0
errors: No known data errors
#
cheers,
tim
On Tue, 2007-04-03 at 10:54 -0400, Luke Scharf wrote:
> Tim Foster wrote:
> > You can add a disk to a raidz configuration, but then that makes a pool
> > containing 1 raidz + 1 additional disk in a dynamic stripe configuration
> > (which ZFS will warn you about, since you
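Concretely, the warning referred to looks something like this (pool and
device names are illustrative):
# zpool add tank c1t5d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk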
archiving application.
Rgds
Tim
Bill Sprouse said the following :
On Apr 18, 2007, at 12:47 PM, Dennis Clarke wrote:
Maybe with a definition of what a "backup" is and then some way to
achieve it. As far as I know the only real backup is one that can be
tossed into a vault and lo
backup
catalog, which can be a drag, but the procedures are well defined.
There is other stuff that SAM-FS can do, such as shared Global File
System support in SANs etc., but I have gone on enough!!
Rgds
Tim
Robert Milkowski said the following :
Hello Tim,
Thursday, April 19, 200
the cluster failed over!
I need to go read some white papers on this...but I assume that
something like direct I/O (which UFS, VxFS and QFS all have) is in the
plans for ZFS so we don't end up double buffering data for apps like
databases ? - th
My initial reaction is that the world has got by without
[email|cellphone|other technology] for a long time ... so not a big deal.
Well, I did say I viewed it as an indefensible position :-)
Now shall we debate if the world is a better place because of cell
phones :-P
Nigel,
Was the iSCSI target daemon running but the targets gone, or
did the daemon dump core repeatedly?
How did you create the targets?
-tim
eric kustarz wrote:
Hi Tim,
Is the iSCSI target not coming back up after a reboot a known problem?
Can you take a look?
eric
Begin
tely - would love to have
the time to play about more with upgrade hacks.
cheers,
tim
On 5/31/07, Lori Alt <[EMAIL PROTECTED]> wrote:
zfs-boot crowd:
I said I'd try to come up with a procedure for liveup
setup your root pool. Not ideal, but it should
work.
cheers,
tim
steps.
After that, could you verify that by changing the grub menu entry
in /boot/grub/menu.lst (eg. change the "title" line in the ZFS
boot entry, adding some random text) you see those changes
reflected in the menu that grub actually displays ?
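For example, an entry edit of the sort meant here (the entry contents are
illustrative; yours will differ):
title Solaris ZFS boot -- test edit
kernel$ /platform/i86pc/kernel/$ISADIR/unix
module$ /platform/i86pc/$ISADIR/boot_archive
- only the "title" line needs to change for this test.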
Let me know if any of these s
Hi Doug,
On Tue, 2007-06-05 at 06:45 -0700, Douglas Atique wrote:
> Hi, Tim. Thanks for your hints.
No problem
> Comments on each one follow (marked with "Doug:" and in blue).
html mail :-/
> Tim Foster <[EMAIL PROTECTED]> wrote:
> There's a
So I just imported an old zpool onto this new system. The problem is that one
drive (c4d0) is showing up twice: first it's displayed as ONLINE, then it's
displayed as "UNAVAIL". This is obviously causing a problem, as the zpool now
thinks it's in a degraded state, even though all drives are t
changing the pool state nvpair in
the vdev_label (if I'm not mistaken - section 1.3.3 in the on-disk
format document covers this.)
cheers,
tim
[1]
http://www.opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
If you do "zpool create pool <7th slice>"
where the slice has a UFS filesystem and is mounted, it should print an
error message, saying that
a) there's a filesystem present
b) it's mounted
- so you need to unmount it, and use the -f flag.
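i.e. something like this (mount point and device name are illustrative):
# umount /mnt/someufs
# zpool create -f pool c0t0d0s6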
cheers,
tim
In m
zpool list does not seem to be listing those,
"zpool import" should show the pools that are available for import -
does this help ?
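Roughly what you'd expect to see (pool name and id are illustrative):
# zpool import
  pool: tank
    id: 6889349819229866355
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.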
cheers,
tim
laris.org/os/community/arc/caselog/2007/342/
http://www.opensolaris.org/os/community/arc/caselog/2007/328/ )
cheers,
tim
r we go on the zpool.cache
file. If that file isn't present, you'll need to manually zpool import
pools in order for the system to see them.
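i.e. something like (pool name assumed):
# zpool import        (lists importable pools found on attached devices)
# zpool import tank   (re-imports a pool, recreating its zpool.cache entry)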
cheers,
tim
ume across multiple
h/ware RAID-5 stripes.
Thanks
Tim
server. This is done over the network, and the transfer of the
actual data blocks is done over the SAN.
HTH
Tim
Rainer J.H. Brandt said the following :
Sorry, this is a bit off-topic, but anyway:
Ronald Kuehn writes:
No. You can neither access ZFS nor UFS in that way. Only one
host can
--
Tim Thomas
ly make a T1000 (Sol10) kernel panic when imported.
It will also make an x4100 panic (osol).
Any ideas?
Thanks in advance.
-Tim
Neil Perrin wrote:
>
>
> Tim Spriggs wrote:
>> Hello,
>>
>> I think I have gained "sufficient fool" status for testing the
>> fool-proof-ness of zfs. I have a cluster of T1000 servers running
>> Solaris 10 and two x4100's running an Open
_massacre
and I've another one that will also do this for you:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_8
I'm sure there are other scripts floating around which may be more suited
to your requirements, but these are a start at least.
cheers,
Won't come cheap, but this mobo comes with 6x pci-x slots... should get the job
done :)
http://www.supermicro.com/products/motherboard/Xeon1333/5000P/X7DBE-X.cfm
What led you to the assumption that it's ONLY those switches? Just because the
patch is ONLY for those switches doesn't mean that the bug is only for them.
The reason you only see the patch for 3xxx and newer is because the 2xxx was
EOL before the patch was released...
FabOS is FabOS, the nature
I'm far from an expert, but my understanding is that the ZIL is spread
across the whole pool by default, so in theory the one drive could slow
everything down. I don't know what it would mean in this respect to keep
the PATA drive as a hot spare though.
-Tim
Christopher Gibbs wrote
zfs get creation pool|filesystem|snapshot
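For example (dataset and snapshot names illustrative, output abbreviated):
# zfs get creation tank/home@monday
NAME              PROPERTY  VALUE                  SOURCE
tank/home@monday  creation  Mon Sep 17 12:00 2007  -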
Poulos, Joe wrote:
>
> Hello,
>
>
>
> Is there a way to find out what the timestamp is of a specific
> snapshot? Currently, I have a system with 5 snapshots, and would like
> to know the timestamp as to when it was created. Thanks, Joe
any way to determine which snapshot was created
> earlier?
>
> This would be helpful to know in order to predict the effect of a
> rollback or promote command.
>
> Fred Oliver
>
>
> Tim Spriggs wrote:
>
>> zfs get creation pool|filesystem|snapshot
>>
Andy Lubel wrote:
> On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
>
>
>> On Thu, 20 Sep 2007, Richard Elling wrote:
>>
>>
>> That would also be my preference, but if I were forced to use hardware
>> RAID, the additional loss of storage for ZFS redundancy would be painful.
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>
>> We are in a similar situation. It turns out that buying two thumpers is
>> cheaper per TB than buying more shelves for an IBM N7600. I don't know
>> about power/cooling considerations yet t
as long as the pool is safe... but we've lost multiple pools.
-Tim
really limits how much faith we can put
>> into our data on ZFS.
>> It's safe as long as the pool is safe... but we've
>> lost multiple pools.
>>
>
> Hello Tim,
> did you try SNV60+ or S10U4 ?
>
> Gino
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>
>> The x4500 is very sweet and the only thing stopping us from buying two
>> instead of another shelf is the fact that we have lost pools on Sol10u3
>> servers and there is no easy way of making
eric kustarz wrote:
>
> On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
>
>> m2# zpool create test mirror iscsi_lun1 iscsi_lun2
>> m2# zpool export test
>> m1# zpool import -f test
>> m1# reboot
>> m2# reboot
>
> Since I haven't actually looke
that information?
> How would you ensure that it stayed accurate in
> a hotplug world?
>
If it is stored on the device itself it would keep the description with
the same device.
In the case of iSCSI, it would be nice to keep LUN info instead of
having to correlate the drive id to the IQN to the LUN, especially when
working with LUNs in one place and drive ids in another.
-Tim
zdb?
Damon Atkins wrote:
> ZFS should allow 31+NULL chars for a comment against each disk.
> This would work well with the host name string (I assume is max_hostname
> 255+NULL)
> If a disk fails it should report c6t4908029d0 failed "comment from
> disk", it should also remember the comment unt
Hi all,
I just posted some stuff about a simple ZFS automatic backup service to
my blog:
http://blogs.sun.com/timf/entry/zfs_automatic_backup_0_1
- all thoughts/comments (and bug reports!) welcome
cheers,
tim
Nicolas Williams wrote:
> On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
>
>> I can envision a highly optimized, pipelined system, where writes and
>> reads pass through checksum, compression, encryption ASICs, that also
>> locate data properly on disk. ...
>>
>
> I've a
will persist across reboots:
# zfs set mountpoint=/ftp/information tank/information
Does this do the trick ?
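(You can confirm the property stuck with, using the names from the example
above: # zfs get mountpoint tank/information )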
cheers,
tim
[1] though you can still use them if you really want to. This is well
documented in the ZFS Administration guide at
(pointing to the google c
Hi
this may be of interest:
http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
I appreciate that this is not a frightfully clever set of tests, but I
needed some throughput numbers and the easiest way to share the
results is to blog.
Rgds
Tim
--
Tim Thomas
Storage
://blogs.sun.com/timthomas/entry/another_samba_test_on_sun
What I find nice about Thumper/X4500's is that they behave very
predictably... in my experience anyway.
Rgds
Tim
Would the bootloader have issues here? On x86 I would imagine that you
would have to reload grub; would a similar thing need to be done on SPARC?
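For reference, re-laying the boot blocks on x86 is typically done with
(device name illustrative):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
and on SPARC the analogue is installboot with the appropriate bootblk, e.g.
(assuming a UFS bootblk here; adjust for your setup):
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0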
Ivan Wang wrote:
>>> Erik Trimble wrote:
>>> After both drives are replaced, you will automatically see the
>>> additional space.
>>>
>> I be
Yeah, that would have saved me several weeks ago.
Samuel Borgman wrote:
> Hi,
>
> Having my 700GB one-disk ZFS pool crash on me created ample need for a recovery
> tool.
>
> So I spent the weekend creating a tool that lets you list directories and
> copy files from any pool on a one disk ZFS fil
Jonathan Loran wrote:
> Richard Elling wrote:
>
>> Jonathan Loran wrote:
>>
> ...
>
>
>> Do not assume that a compressed file system will send compressed.
>> IIRC, it
>> does not.
>>
> Let's say, if it were possible to detect the remote compression support,
> couldn't we send it
Hey Kugutsumen,
Kugutsumen wrote:
> Ref: http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
> Ref: http://mediacast.sun.com/share/timf/zfs-actual-root-install.sh
>
> This is my errata for Tim Foster's zfs root install script:
Thanks for the edits, much app
cheers,
tim
you want to
be. An example is that Nexenta packs the Sun ssh build but also allows
installation of the Debian/Ubuntu build of the openssh package. The Sun
ssh is exactly what you expect. One thing that is difficult and not
entirely dealt with is upgrading zones to stay in sync with the glo
Chill. It's a filesystem. If you don't like it, don't use it.
Sincere Regards,
-Tim
can you guess? wrote:
>> can you guess? wrote:
>>
>
> ...
>
>
>>> Most of the balance of your post isn't addressed in
>>>
No, you aren't cool, and no it isn't about zfs or your interest in it. It was
clear from the get-go that NetApp was paying you to troll any discussion on it,
and to that end you've succeeded. Unfortunately you've done nothing but make
yourself look like a pompous arrogant ass in every forum yo
In the previous and current responses, you seem quite convinced of
others' misconceptions. Given that fact and the first paragraph of your
response below, I think you can figure out why nobody on this list will
reply to you again.
can you guess? wrote:
>> No, you aren't cool, and no it isn't a
Cyril Plisko wrote:
> On Nov 12, 2007 5:51 PM, Neelakanth Nadgir <[EMAIL PROTECTED]> wrote:
>
>> You could always replace this device by another one of same, or
>> bigger size using zpool replace.
>>
>
> Indeed. Provided that I always have an unused device of same or
> bigger size, which i
or all you really need to know to use them. Once you have found the new
disk you can simply:
zpool create pool c1t0d1
Let me know if you still find trouble.
Thanks,
-Tim
Boris Derzhavets wrote:
> I was able to create second Solaris partition by running
>
> #fdisk /dev/rdsk/c1t0d0p0
>
Rich Teer wrote:
> I should know better than to reply to a troll, but I can't let this
> personal attack stand. I know Al, and I can tell you for a fact that
> he is *far* from "technically incompetent".
>
> Judging from the length of your diatribe (which I didn't bother reading),
> you seem to
You've been trolling from the get-go and continue to do so. First it's "I have
the magical fix", which wasn't a fix at all. You claim to want to better the
project, then claim you can't be bothered because you don't really care.
You rant and rave about how this is so much like WAFL from a tech
Which would be great if there were any merit to what he spews. It's
unfortunate if you're wasting your time reading the rants; you'd be much better
off reading the ZFS manual if you need a more in-depth explanation of the
technology...
The only sad part is it's clear one or two people were fooled into believing
there's any merit to your trolling.
Grow up.
Big talk from someone who seems so intent on hiding their credentials.
So... issues with resilvering yet again. This is a ~3TB pool. I have one raid-z
of 5 500GB disks, and a second pool of 3 300GB disks. One of the 300GB disks
failed, so I have replaced the drive. After doing the resilver, it takes
approximately 5 minutes for it to complete 68.05% of the resilver
After messing around... who knows what's going on with it now. Finally
rebooted because I was sick of it hanging. After that, this is what it came
back with:
root:=> zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
c
That locked up pretty quickly as well; one more reboot and this is what I'm
seeing now:
root:=> zpool status
pool: fserv
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the re
ctory is being backed up and/or included in snapshots.
cheers,
tim
So I have 8 drives total.
5x500GB seagate 7200.10
3x300GB seagate 7200.10
I'm trying to decide, would I be better off just creating two separate pools?
pool1 = 5x500GB raidz
pool2 = 3x300GB raidz
or would I be better off creating one large pool, with two raid sets? I'm
trying to figure out if
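For comparison, the two layouts would be created roughly like this (device
names are illustrative):
# two separate pools:
zpool create pool1 raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
zpool create pool2 raidz c0t5d0 c0t6d0 c0t7d0
# one pool with two raidz vdevs:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
  raidz c0t5d0 c0t6d0 c0t7d0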
ng to find a cleaner way than
http://blogs.sun.com/timf/entry/zfs_on_your_desktop
to tie the client and server sides together.
cheers,
tim
So now that cifs has finally been released in b77, anyone happen to have any
documentation on setup. I know the initial share is relatively simple... but
what is the process after that for actually getting users authenticated? I see
in the idmap service there's some configurations for authenti
so apparently you need to use smbadm, but when I go to create the group:
smbadm create wheel
failed to create the group (NOT_SUPPORTED)
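(For anyone else trying this, the rough sequence I've seen described for the
basic CIFS setup - illustrative only, and b77-era details may differ:
# svcadm enable -r smb/server
# zfs set sharesmb=on tank/share
plus adding pam_smb_passwd.so.1 to /etc/pam.conf and re-setting user
passwords so that SMB-style password hashes get generated.)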
can you guess? wrote:
> he isn't being
>
>> paid by NetApp.. think bigger
>>
>
> O frabjous day! Yet *another* self-professed psychic, but one whose internal
> voices offer different counsel.
>
> While I don't have to be psychic myself to know that they're *all* wrong
> (that's an adva
That would require coming up with something solid. Much like his
generalization that snapshotting and checksumming already exist
for Linux; yet when he was called out, he responded with a 20-page rant
because no such solution exists. It's far easier to condescend
wh
Literacy has nothing to do with the glaringly obvious BS you keep spewing.
Rather than answer a question (which couldn't be answered, because you were
full of it), you tried to convince us all he really didn't know what he wanted.
The assumption sure made an a$$ out of someone, but you should
what firmware revision are you at?
> Actually, it's central to the issue: if you were
> capable of understanding what I've been talking about
> (or at least sufficiently humble to recognize the
> depths of your ignorance), you'd stop polluting this
> forum with posts lacking any technical content
> whatsoever.
I don't speak "full
Whoever coined that phrase must've been wrong; it should definitely be "By
billtodd you've got it".
For the same reason he won't respond to Jone, and can't answer the original
question. He's not trying to help this list out at all, or come up with any
real answers. He's just here to troll.
> As I explained, there are eminently acceptable
> alternatives to ZFS from any objective standpoint.
>
So name these mystery alternatives that come anywhere close to the protection,
functionality, and ease of use that zfs provides. You keep talking about how
they exist, yet can't seem to come
STILL haven't given us a list of these filesystems you say match what zfs does.
STILL coming back with long-winded responses with no content whatsoever to try
to divert the topic at hand. And STILL making incorrect assumptions.
> You have me at a disadvantage here, because I'm not
> even a Unix (let alone Solaris and Linux) aficionado.
> But don't Linux snapshots in conjunction with rsync
> (leaving aside other possibilities that I've never
> heard of) provide rather similar capabilities (e.g.,
> incremental backup or re-
> If you ever progress beyond counting on your fingers
> you might (with a lot of coaching from someone who
> actually cares about your intellectual development)
> be able to follow Anton's recent explanation of this
> (given that the higher-level overviews which I've
> provided apparently flew com
Yet another prime example.
can you guess? wrote:
>> Please see below for an example.
>>
>
> Ah - I see that you'd rather be part of the problem than part of the
> solution. Perhaps you're also one of those knuckle-draggers who believes
> that a woman with the temerity to leave her home af
>
> http://www.itovernight.com/store/comersus_viewItem.asp?idProduct=866720
>
Fly by night from the looks of it.
http://www.resellerratings.com/store/IToverNight
$140 looks like bottom dollar from anywhere reputable (which is more in line
with what I would expect).
http://castle.pricew
Look, it's obvious this guy talks about himself as if he is the person
he is addressing. Please stop taking this personally and feeding the troll.
can you guess? wrote:
>> Bill - I don't think there's a point in continuing
>> that discussion.
>>
>
> I think you've finally found something u
Hi there,
On Thu, 2007-12-13 at 02:17 -0800, Ross wrote:
> This may not be the best place to ask this, but I'm so new to Solaris
> I really don't know anywhere better. If anybody can suggest a better
> forum I'm all ears :)
You could have just mailed me :-)
>
ing anything useful when I insert a
disk with a pool on it. Does anyone know whether these should be working
now ? I'm not a hal expert...
> I've glanced at Tim Foster's autobackup and related scripts, and they're
> all about being triggered by the plug connection being
http://www.ewiz.com/detail.php?p=AOC-SAT2MV&c=fr&pid=84b59337aa4414aa488fdf95dfd0de1a1e2a21528d6d2fbf89732c9ed77b72a4
^^that was the best price I could find when looking 6 months ago. Dunno if
that's changed since.
www.mozy.com appears to have unlimited backups for $4.95 a month. Hard to beat
that. And they're owned by EMC now so you know they aren't going anywhere
anytime soon.
http://rsync.net/ $1.60 per month per GB (no experience)
^^ how does that compete with $4.95/month for all you can store? At $1.60/GB, I
dunno about most people here, but I'd be broke real quick :D
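(For instance, even a fairly modest 100GB works out to $160/month at that
rate, versus the $4.95 flat rate.)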
As for personal, mine's all 4+1. I have the luxury of working for a storage
reseller so backups
Another free.99 option if you have the extra hardware lying around is boxbackup.
http://www.boxbackup.org/
I haven't used it personally, but heard good things.
Speaking of which, I'm somewhat surprised Sun hasn't done something similar with
ZFS and Thumpers. You would think they would want some sort of ultimate showcase
that way :D Drinking the koolaid and such :)
al (and
perhaps integrates more neatly with the rest of ZFS) ?
Otherwise, should I start filling in an ARC one-pager template or is
this sort of utility something that's better left to sysadmins to
implement themselves, rather than baking it into the OS ?
cheers,
Marcus:
I'm currently running the Asus K8N-LR, and it works wonderfully. Not only do
the onboard ports work, but it also has multiple PCI-X slots. I'm running an
Opteron 165 (dual core) CPU with it. It's cheap and fast.
http://usa.asus.com/products.aspx?l1=9&l2=39&l3=263&l4=0&model=1023&mode
Oh, one thing. The only downside is that the onboard gigE interfaces are the
Broadcom PCIe-based NICs. They unfortunately do not support jumbo frames. I
doubt this will be an issue for you if it's just a home NAS. In my setup I've
pushed 50MB/sec over NFS and the server was barely breathing.
status of this ?
Thanks
Tim
--
Tim Thomas
Staff Engineer
Storage
Systems Product Group
Sun Microsystems, Inc.
Internal Extension: x(70)18097
Office Direct Dial: +44-161-905-8097
Mobile: +44-7802-212-209
Email: [EMAIL PROTECTED]
Blog: http://blogs.sun.com
Mike Gerdts wrote:
> On Jan 29, 2008 5:55 PM, Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>
>> Having attached new bigger disks to a mirror, and detached all the older
>> smaller disks, how to I tell ZFS to expand the size of the mirror to
>> match that of the bigger disks? I had a look through th
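(A hedged sketch of one approach from that era - pool name illustrative -
is to export and re-import so the vdev sizes get re-read:
# zpool export tank
# zpool import tank )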
by me.
[ just thought I'd ask ]
cheers,
tim
Template Version: @(#)onepager.txt 1.31 07/08/08 SMI
[ timf note: this is still a Draft, last updated 02/04/2008
using the templ
get rid of this pool?
Yep, here's one way: zpool export the other pools on the system, then
delete /etc/zfs/zpool.cache, reboot the machine, then do a zpool import
for each of the other pools you want to keep.
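Illustratively (pool name assumed):
# zpool export keep1
# rm /etc/zfs/zpool.cache
# init 6
# zpool import keep1
- the unwanted pool simply never gets imported again.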
cheers,
tim