On Apr 25, 2007, at 11:17 AM, cedric briner wrote:
hello the list,
After reading the _excellent_ ZFS Best Practices Guide, I've seen
in the section: ZFS and Complex Storage Consideration that we
should configure the storage system to ignore commands which will
flush the memory into the disk.
Brian Gupta wrote:
Yes, dump on ZVOL isn't currently supported, so a dump slice is still
needed.
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
IMHO, only a few people in the world care about dumps at all (and you
kn
You've delivered us to awesometown, Brain.
> zfsboot.tar.bz2 is a vmware image made on a VMWare Server 1.0.1
machine.
But oops, what is the root login password?! :)
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@op
On 4/24/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
> Is it expected that if I have filesystem tank/foo and tank/foo/bar
> (mounted under /tank) then in order to be able to browse via
> /net down into tank/foo/bar I need to have group/other permissions
> on /tank/foo open?
>
You are running
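The traversal requirement can be demonstrated with ordinary directories (paths under /tmp are stand-ins for the real datasets, not the actual pool layout):

```shell
# To reach tank/foo/bar through /tank/foo, non-owners (including the
# /net automounter path) need search (x) permission on foo itself.
mkdir -p /tmp/tank/foo/bar
chmod 700 /tmp/tank/foo   # group/other: no traversal, browsing fails
chmod 711 /tmp/tank/foo   # o+x restored: traversal works, listing still denied
stat -c '%a' /tmp/tank/foo   # prints 711
```

Note that o+x grants only directory search, not the ability to list /tank/foo's contents, so it is a fairly minimal opening.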
If I recall, the dump partition needed to be at least as large as RAM.
In Solaris 8(?) this changed, in that crashdump streams were
compressed as they were written out to disk. Although I've never read
this anywhere, I assumed the reasons this was done are as follows:
1) Large enterprise system
cadaver decided to complain that the file was "too large".
This means that the DVD ISO won't get uploaded until I get to work
tomorrow and can use something other than cadaver.
Sorry for the delay.
-brian
--
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of
On Wed, Apr 25, 2007 at 09:05:09PM -0400, Brian Gupta wrote:
>
> I do understand the reasons why you would want to dump to a virtual
> construct. I am just not very comfortable with the concept.
>
> My instinct is that you want the fewest layers of software involved in
> the event of a system crash.
Please bear with me, as I am not very familiar with ZFS. (And
unfortunately probably won't have time to be until ZFS supports root
boot and clustering in a named release).
I do understand the reasons why you would want to dump to a virtual
construct. I am just not very comfortable with the concept.
On 4/25/07, Brian Gupta <[EMAIL PROTECTED]> wrote:
>
>> Yes, dump on ZVOL isn't currently supported, so a dump slice is still
>needed.
>
>Maybe a dumb question, but why would anyone ever want to dump to an
>actual filesystem? (Or is my head thinking too Solaris)
>
>Actually I could see why, but I don't think it is a good idea.
Maybe so that it can grow rather than being tied to a specific piece of
hardware?
Malachi
On 4/25/07, Brian Gupta <[EMAIL PROTECTED]> wrote:
> Yes, dump on ZVOL isn't currently supported, so a dump slice is still
needed.
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
Yes, dump on ZVOL isn't currently supported, so a dump slice is still needed.
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
Actually I could see why, but I don't think it is a good idea.
-brian
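For reference, the dedicated dump slice itself is managed with dumpadm(1M). A rough transcript — the device name is hypothetical, and the exact options may vary by Solaris release:

```
# Point the dump subsystem at a dedicated slice (hypothetical device)
dumpadm -d /dev/dsk/c0t0d0s1
# Dump kernel pages only, so the slice can be smaller than RAM
dumpadm -c kernel
# Show the current configuration
dumpadm
```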
On Wed, Apr 25, 2007 at 04:36:46PM -0700, Malachi de Ælfweald wrote:
> That's awesome!
>
> So, to clarify... If I burn the b62_zfsboot.iso to a DVD and boot from it
> with the intention of doing a fresh install
Yes. Upgrade/Flash installs are not available with zfs boot yet.
> What are the
That's awesome!
So, to clarify... If I burn the b62_zfsboot.iso to a DVD and boot from it
with the intention of doing a fresh install
What are the correct steps to do the setup with a zfs mirrored boot? Follow
the installer? Follow the netinstall or Tom's instructions?
Thanks,
Malachi
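One possible sequence for the mirroring step — a sketch, not the installer's official procedure, with hypothetical pool and device names: after the fresh install onto one disk, attach the second disk and reinstall the boot blocks.

```
# Attach a second disk to the root pool to form a mirror
zpool attach rootpool c0d0s0 c1d0s0
# On x86, put GRUB boot blocks on the new disk as well
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
# Wait for the resilver to finish before relying on the mirror
zpool status rootpool
```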
On 4/
As promised, here it is:
https://jeffshare.jefferson.edu/users/blh008/Public/Solaris/
b62_zfsboot.iso.bz2 is a bootable patched b62 DVD.
b62_zfsboot_cd1.iso.bz2 is a bootable patched b62 CD1. I'm not
sure how useful this is unless you know how to tell pfinstall how
to look to an NFS server for
They do need to start on the "next" filesystem, and ZFS seems very
ideal for Apple. If they didn't, Apple would be making a huge mistake,
because ZFS has already pretty much trumped every existing filesystem
on almost every level except maturity.
I'm expecting ZFS and ISCSI(initiator and ta
On 4/25/07, shay <[EMAIL PROTECTED]> wrote:
Those are the results of my performance test:
Conclusions:
1. The dual HBA qla2342 is not parallel at all, so it is better to use 2
single HBAs than one dual HBA.
That would surprise me. Can it be that you are saturating the PCI slot
your 2342
Mike Dotson wrote:
On Wed, 2007-04-25 at 14:45 -0600, Lori Alt wrote:
oliver soell wrote:
I just installed a mirrored root system last night, but using Tim Foster's
zfs-actual-root-install.sh script on a clean install of b62
(http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).
On Wed, 2007-04-25 at 14:45 -0600, Lori Alt wrote:
> oliver soell wrote:
> > I just installed a mirrored root system last night, but using Tim Foster's
> > zfs-actual-root-install.sh script on a clean install of b62
> > (http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).
> >
oliver soell wrote:
I just installed a mirrored root system last night, but using Tim Foster's
zfs-actual-root-install.sh script on a clean install of b62
(http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).
You mention that no UFS slices are necessary using the patched DVD
Those are the results of my performance test:
time -p "dd if=/dev/zero of=/ZFS_pool/testfile bs=1024k count=20480 &&
/bin/sync"
(I also changed the kernel parameter on the server -> ssd_max_throttle=32)
seconds to write the file on a zpool built from
Sorry I didn't respond to this earlier. Due to a really dumb mistake
on my part (long embarrassing story omitted), I lost access to
my mail for two days.
Answers below:
Malachi de Ælfweald wrote:
I currently have b60 kinda installed (there, but still having some
issues learning my way around
On Apr 25, 2007, at 10:54 AM, Andy Lubel wrote:
The Xraid is a very well thought of storage device with a heck of a
price point. Attached is an image of the "Settings"/"Performance"
screen where you see "Allow Host Cache Flushing".
I think when you use ZFS, it would be best to uncheck that
Toby Thain wrote:
On 25-Apr-07, at 12:17 PM, cedric briner wrote:
hello the list,
After reading the _excellent_ ZFS Best Practices Guide, I've seen in
the section: ZFS and Complex Storage Consideration that we should
configure the storage system to ignore commands which will flush the
memory into the disk.
On 25-Apr-07, at 12:17 PM, cedric briner wrote:
hello the list,
After reading the _excellent_ ZFS Best Practices Guide, I've seen
in the section: ZFS and Complex Storage Consideration that we
should configure the storage system to ignore commands which will
flush the memory into the disk.
On Apr 25, 2007, at 9:16 AM, Oliver Gould wrote:
Hello-
I was planning on sending out a more formal sort of introduction in a
few weeks, but.. hey- it came up.
I will be porting ZFS to NetBSD this summer. Some info on this
project
can be found at:
http://www.olix0r.net/bitbucket/index.cgi/netbsd/zfs
Hello-
I was planning on sending out a more formal sort of introduction in a
few weeks, but.. hey- it came up.
I will be porting ZFS to NetBSD this summer. Some info on this project
can be found at:
http://www.olix0r.net/bitbucket/index.cgi/netbsd/zfs
and
http://netbsd-soc.sf.ne
NetBSD could reuse Pawel's work on FreeBSD but it sits on top of GEOM,
something NetBSD does not have. It greatly helped the porting work as Pawel is
already a GEOM wizard (gmirror, gjournal and all that).
[EMAIL PROTECTED] wrote on 04/25/2007 10:17:50 AM:
> hello the list,
>
> After reading the _excellent_ ZFS Best Practices Guide, I've seen in the
> section: ZFS and Complex Storage Consideration that we should configure
> the storage system to ignore commands which will flush the memory into
>
hello the list,
After reading the _excellent_ ZFS Best Practices Guide, I've seen in the
section: ZFS and Complex Storage Consideration that we should configure
the storage system to ignore commands which will flush the memory into
the disk.
So do some of you know how to tell the Xserve RAID t
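On the host side, later Solaris builds expose a tunable to stop ZFS from issuing the flush commands itself, as an alternative to configuring the array. A sketch of the /etc/system fragment — verify the tunable exists in your build, and only use it when the array's cache is battery-backed:

```
* /etc/system fragment: tell ZFS not to issue cache-flush commands.
* Only safe when every device in the pool has non-volatile
* (battery-backed) write cache.
set zfs:zfs_nocacheflush = 1
```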
On 4/24/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
Also keep in mind that FUSE has the disadvantage of data needing to
pass the userland/kernel barrier typically at least twice, so there
will be some slowdown there most likely.
I think it depends on how the Linux kernel handles the interface
Brian Hechinger wrote:
> I think someone needs to port ZFS to NetBSD, they are in dire need of
> a better filesystem. ;)
Already being done:
http://code.google.com/soc/netbsd/about.html
Isn't the Google SoC great? :)