There are a few things to keep in mind while configuring ZFS.
1. As far as possible, give ZFS entire disks, not slices. If you
have to use slices, at least use slices from different disks. Using
slices from the same disk is a bad idea, as it forces extra seeking and
rotational delay on that one disk (see the sketch after this list).
2. If you are using
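A minimal sketch of point 1, with invented device names:

  # zpool create tank mirror c1t0d0 c1t1d0
    (whole disks: ZFS labels them itself and can safely enable the write cache)
  # zpool create tank mirror c1t0d0s0 c1t2d0s0
    (if you must use slices, at least take them from two different disks, as here)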
> With zfs, file systems are in many ways more like directories than what
we used to call file systems. They draw from pooled storage. They
have low overhead and are easy to create and destroy. File systems
are sort of like super-functional directories, with quality-of-service
control and cloning a
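A small sketch of what "super-functional directories" means in practice
(pool and user names are invented):

  # zfs create tank/home
  # zfs create tank/home/alice
    (creating a file system is about as cheap as mkdir)
  # zfs set quota=10g tank/home/alice
    (per-file-system quality-of-service control)
  # zfs snapshot tank/home/alice@monday
  # zfs clone tank/home/alice@monday tank/home/alice-clone
  # zfs destroy tank/home/alice-clone
    (and they are just as easy to destroy)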
Brian Hechinger wrote:
After having set my desktop to install (to a pair of 140G SATA disks
that zfs is mirroring) at work, I was trying to skip the dump slice
since in this case, no, I don't really want it. ;)
Don't underestimate the usefulness of a dump device. You might run into
a panic so
So you think that if I move the QLA2342 from the 33MHz slot to the 66MHz slot, it
will be faster and will do truly parallel work on each port?
> I wonder if anyone has an idea about the performance loss caused by COW
> in ZFS? If you have to read the old data out before writing it to some other
> place, that involves a disk seek.
Since all I/O in ZFS is cached, this actually isn't that bad; the seek happens
eventually, but it's not an "extra" seek.
Anyone who has an Xraid should have one (or 2) of these BBC modules.
Good mojo.
http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore.woa/wa/RSLID?mco=6C04E0D7&nplm=M8941G/B
Can you tell I <3 Apple?
On 4/24/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Other than /var/tmp, my short list of candidates for separate ZFS datasets
is (a sketch follows below):
/var/crash - because it can be big and we might want quotas.
/var/core [ which we don't yet have by default but I'm considering
submitting an ARC case for this. ]
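A hedged sketch of what those two could look like (pool name, mountpoints
and quotas are placeholders, not recommendations):

  # zfs create rpool/crash
  # zfs set mountpoint=/var/crash rpool/crash
  # zfs set quota=8g rpool/crash
  # zfs create rpool/core
  # zfs set mountpoint=/var/core rpool/core
  # zfs set quota=2g rpool/core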
After having set my desktop to install (to a pair of 140G SATA disks
that zfs is mirroring) at work, I was trying to skip the dump slice
since in this case, no, I don't really want it. ;)
pfinstall refuses to install without a dump slice, so I went ahead
and set the last 500MB of the disks to be
On Fri, Apr 27, 2007 at 09:09:21AM +1200, Ian Collins wrote:
>
> Or an alternative CD1, for those of us who do network installs after
> booting from CD.
I have a question about that. This is how I do all my installs, and
in fact I just did this on a machine at work (to be a temporary NFS
server
Sun Manager wrote:
> Hi,
>
> I am pretty new to Solaris 10 and am slowly getting used to some of
> its niceties - ZFS being one of them. On my first venture with it, I
> have configured a storage pool containing a few slices from a single
> disk. This disk is under hardware RAID control, the ser
On Thu, Apr 26, 2007 at 02:51:24PM -0700, MC wrote:
> I think Benjamin was referring to the image Brian promised to upload, which,
> I see now, is up on his web space.
Which is now complete, and the sum file is also uploaded.
> Doing a zpool scrub after booting up causes Solaris to restart about
Cedric,
On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
>> okay, let's say that it is not. :)
>> Imagine that I setup a box:
>> - with Solaris
>> - with many HDs (directly attached).
>> - use ZFS as the FS
>> - export the Data with NFS
>> - on an UPS.
>>
>> Then after reading the :
>
Robert,
On 4/27/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT> On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
>> okay, let's say that it is not. :)
>> Imagine that I setup a box:
>> - with Solaris
>> - with many HDs (dire
Hi,
I am pretty new to Solaris 10 and am slowly getting used to some of its
niceties - ZFS being one of them. On my first venture with it, I have
configured a storage pool containing a few slices from a single disk. This
disk is under hardware RAID control, the server in question being a
Sunfir
On Fri, 2007-04-27 at 07:53 +1000, James C. McPherson wrote:
> Ming Zhang wrote:
> > On Fri, 2007-04-27 at 09:25 +1200, Ian Collins wrote:
> >> Claus Guttesen wrote:
> >>
> >>> Hi.
> >>>
> >>> If I create a zpool with the following command:
> >>>
> >>> zpool create tank raidz2 da0 da1 da2 da3 da4 d
On 4/27/07, Erblichs <[EMAIL PROTECTED]> wrote:
Ming Zhang wrote:
>
> Hi All
>
> I wonder if anyone has an idea about the performance loss caused by COW
> in ZFS? If you have to read the old data out before writing it to some other
> place, that involves a disk seek.
>
Ming,
Let's take a pro example
On Thu, 2007-04-26 at 14:50 -0700, Eric Schrock wrote:
> On Thu, Apr 26, 2007 at 05:46:36PM -0400, Ming Zhang wrote:
> >
> > one thing I would guess is that the device ID will stay the same and be
> > used as the final proof, while the path name is only used as a clue.
> >
>
> The devid is the preferred
Hello Cindy,
Friday, April 27, 2007, 1:28:05 AM, you wrote:
CSSC> Hi Robert,
CSSC> I just want to be clear that you can't just remove a disk from an
CSSC> exported pool without penalty upon import:
CSSC> - If the underlying redundancy of the original pool doesn't support
CSSC> it and you lose d
Hi Robert,
I just want to be clear that you can't just remove a disk from an
exported pool without penalty upon import:
- If the underlying redundancy of the original pool doesn't support
it and you lose data
- Some penalty exists even for redundant pools, which is running
in DEGRADED mode until
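A quick way to see that penalty for yourself (pool name is just an example):

  # zpool import mypool
  # zpool status -x mypool
    (the pool reports DEGRADED until the removed disk is returned or
    replaced and resilvering finishes)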
Hello Cindy,
Thursday, April 26, 2007, 8:57:54 PM, you wrote:
CSSC> Nenad,
CSSC> I've seen this solution offered before, but I would not recommend this
CSSC> except as a last resort, unless you didn't care about the health of
CSSC> the original pool.
CSSC> Removing a device from an exported poo
Actually, I think that one might not be ready quite yet. Per:
ps: the ISOs will be up shortly; they are zipping now. You will be able
to tell they are done because I won't upload the cksum files until after
the ISO image uploads complete. When you see .sum files, you know they
are done.
Ming Zhang wrote:
On Fri, 2007-04-27 at 09:25 +1200, Ian Collins wrote:
Claus Guttesen wrote:
Hi.
If I create a zpool with the following command:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
and after a reboot the device names for some reason are changed so da2
and da5 are swapp
> zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
>
> and after a reboot the device names for some reason are changed so da2
> and da5 are swapped, either by altering the LUN setting on the storage
> or by switching cables/swapping disks etc.?
>
> How will zfs handle that? Will it simply
I think Benjamin was referring to the image Brian promised to upload, which, I
see now, is up on his web space.
My experience with the vmware image is as follows:
Doing a zpool scrub after booting up causes Solaris to restart about half way
through. After the crash, a zpool status says there i
On Thu, Apr 26, 2007 at 05:46:36PM -0400, Ming Zhang wrote:
>
> one thing I would guess is that the device ID will stay the same and be
> used as the final proof, while the path name is only used as a clue.
>
The devid is the preferred method of finding devices. The path is used
as a secondary measure.
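So, for Claus's example, a sketch of what should happen (an illustration,
not a transcript):

  # zpool export tank
    (swap cables, renumber LUNs, move disks around)
  # zpool import tank
    (members are matched by their on-disk labels and devids, not by the
    da2/da5 path names)
  # zpool status tank
    (shows whatever device paths the disks have now)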
On Fri, 2007-04-27 at 09:25 +1200, Ian Collins wrote:
> Claus Guttesen wrote:
>
> > Hi.
> >
> > If I create a zpool with the following command:
> >
> > zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
> >
> > and after a reboot the device names for some reason are changed so da2
> > and da
Claus Guttesen wrote:
> Hi.
>
> If I create a zpool with the following command:
>
> zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
>
> and after a reboot the device names for some reason are changed so da2
> and da5 are swapped, either by altering the LUN setting on the storage
> or by s
Of course. But in the case of syslog you write it to local disk and
send it to your central syslog server.
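For reference, the classic /etc/syslog.conf way of doing both at once (the
selector is only an example, the two fields must be tab-separated, and
"loghost" has to resolve to your central server):

  *.notice        /var/adm/messages
  *.notice        @loghost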
Speaking of syslog, where is the appropriate community to discuss syslog-ng?
Thanks,
Brian
On 4/26/07, Malachi de Ælfweald <[EMAIL PROTECTED]> wrote:
Just an interesting side note net
Hi.
If I create a zpool with the following command:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
and after a reboot the device names for some reason are changed so da2
and da5 are swapped, either by altering the LUN setting on the storage
or by switching cables/swapping disks etc.?
Just an interesting side note: network-based logging isn't always a bad
thing. I'll give you an example. My Netgear router will crash within 1/2
hour if I turn local logging on. However, it has no problems sending the
logs via syslog to another machine.
Just a thought.
Mal
On 4/26/07, Ada
Ming,
Let's take a pro example with a minimal performance
tradeoff.
All FSs that modify a disk block, IMO, do a full
disk block read before anything.
If doing an extended write and moving to a
larger block size with COW you give yourself
the
Lori Alt wrote:
> Benjamin Perrault wrote:
>
>> Don't mean to be a pest - but is there an eta on when the
>> b62_zfsboot.iso will be posted?
>> I'm really looking forward to ZFS root, but I'd rather download a
>> working dvd image than attempt to patch the image myself :-)
>
> Actually, we hadn't
> Peter Tribble wrote:
> > On 4/24/07, Darren J Moffat <[EMAIL PROTECTED]>
> wrote:
> >> With reference to Lori's blog posting[1] I'd like
> to throw out a few of
> >> my thoughts on splitting up the namespace.
> >
> > Just a plea with my sysadmin hat on - please don't
> go overboard
> > and make ne
Benjamin Perrault wrote:
Don't mean to be a pest - but is there an eta on when the b62_zfsboot.iso will be posted?
I'm really looking forward to ZFS root, but I'd rather download a working dvd
image than attempt to patch the image myself :-)
Actually, we hadn't planned to release zfsboot dvd
Peter Tribble wrote:
On 4/24/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
With reference to Lori's blog posting[1] I'd like to throw out a few of
my thoughts on splitting up the namespace.
Just a plea with my sysadmin hat on - please don't go overboard
and make new filesystems just because we
Hi All
I wonder if anyone has an idea about the performance loss caused by COW
in ZFS? If you have to read the old data out before writing it to some other
place, that involves a disk seek.
Thanks
Ming
There are 3 slots in a V240:
1 x 64-bit @ 33/66MHz
2 x 64-bit @ 33MHz
His suggestion was that you might be saturating the PCI slot, since
their respective theoretical throughputs are 528MB/s and 264MB/s.
A 2342 should (again, in theory) do 256MB/s per port ... so slotting
the card into the 33MHz slots
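The theoretical numbers above are just bus width times clock divided by
eight; a quick check with bc:

  $ echo "64 * 66 / 8" | bc
  528
  $ echo "64 * 33 / 8" | bc
  264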
On 4/26/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Or maybe even restructure the filesystem layout so that directories
> with common properties could live under a common parent that could
> be a separate filesystem rather than creating separate filesystems
> for each?
Hmm, we have that alr
Adam Leventhal wrote:
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
IMHO, only a few people in the world care about dumps at all (and you
know who you are :-). If you care, setup dump to an NFS server somewhere,
no need to have it local.
Well IMHO, every Solaris customer car
Peter Tribble wrote:
In other words, let people have a system with just one filesystem.
I'm fine with that.
I think we have lots of options but it might be nice to come up with a
short list of special/important directories that we should always
recommend be separate datasets -
If there i
Hey Robert,
This is very cool. Thanks for doing the analysis. What a terrific validation
of software RAID and of RAID-Z in particular.
Adam
On Tue, Apr 24, 2007 at 11:35:32PM +0200, Robert Milkowski wrote:
> Hello zfs-discuss,
>
> http://milek.blogspot.com/2007/04/hw-raid-vs-zfs-software-raid-
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
> IMHO, only a few people in the world care about dumps at all (and you
> know who you are :-). If you care, setup dump to an NFS server somewhere,
> no need to have it local.
Well IMHO, every Solaris customer cares about crash dumps
On 4/24/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
With reference to Lori's blog posting[1] I'd like to throw out a few of
my thoughts on splitting up the namespace.
Just a plea with my sysadmin hat on - please don't go overboard
and make new filesystems just because we can. Each extra
files
Nenad,
I've seen this solution offered before, but I would not recommend this
except as a last resort, unless you didn't care about the health of
the original pool.
Removing a device from an exported pool could be very bad, depending
on the pool's redundancy. You might not get all your data bac
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT> On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
>> okay, let's say that it is not. :)
>> Imagine that I setup a box:
>> - with Solaris
>> - with many HDs (directly attached).
>> - use ZFS as the FS
>> - export the Data wit
Don't mean to be a pest - but is there an eta on when the b62_zfsboot.iso will
be posted?
I'm really looking forward to ZFS root, but I'd rather download a working dvd
image than attempt to patch the image myself :-)
cheers and thanks,
-bp
On Wed, 2007-04-25 at 21:30 -0700, Richard Elling wrote:
> Brian Gupta wrote:
> > Maybe a dumb question, but why would anyone ever want to dump to an
> > actual filesystem? (Or is my head thinking too Solaris)
>
> IMHO, only a few people in the world care about dumps at all (and you
> know who you
On 26-Apr-07, at 11:57 AM, cedric briner wrote:
okay, let's say that it is not. :)
Imagine that I setup a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
http://www.solarisinternals.com/wiki/in
cedric briner writes:
> > You might set zil_disable to 1 (_then_ mount the fs to be
> > shared). But you're still exposed to OS crashes; those would
> > still corrupt your nfs clients.
> >
> > -r
>
> hello Roch,
>
> I have a few questions
>
> 1)
> from:
>Shenanigans with ZFS flus
You can - easily:
# zpool export mypool
Then you take out one of the disks and put it into another system or a safe
place.
Afterwards you simply import the pool again:
# zpool import mypool
Note - you can NOT import both disks separately, as they are both tagged to
belong to the sam
cedric briner wrote:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would still
corrupt your nfs clients.
-r
hello Roch,
I have a few questions
1)
from:
Shenanigans with ZFS flushing and intelligent arrays...
http://blogs
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
-r
hello Roch,
I have a few questions
1)
from:
Shenanigans with ZFS flushing and intelligent arrays...
http://blogs.digitar.com/jjww/?itemid
So first of all, we're not proposing dumping to a filesystem.
We're proposing dumping to a zvol, which is a raw volume
implemented within a pool (see the -V option to the zfs
create command). As Malachi points out, the advantage
of this is that it simplifies the ongoing administration. You don't
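A hypothetical sketch of that, once dumping to a zvol is actually supported
(pool, volume name and size are made up):

  # zfs create -V 2g rpool/dump
  # dumpadm -d /dev/zvol/dsk/rpool/dump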
RP> Correction, it's now Fix Delivered build snv_56.
RP> 4894692 caching data in heap inflates crash dump
Good to know.
I hope it will make it into U4.
Yep, it will. You know it's kinda silly we don't expose that info to
the public via:
http://bugs.opensolaris.org/view_bug.do?bug_
okay, let's say that it is not. :)
Imagine that I setup a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_and_Complex_St
On 4/26/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
For the love of God do NOT do stuff like that.
Just create ZFS on a pile of disks the way t
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
-r
cedric briner writes:
> Hello,
>
> I wonder if the subject of this email is not self-explanatory?
>
>
> okay, let's say that it is no
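For completeness, the tunable Roch mentions is an /etc/system setting,
roughly like this (a sketch only; it disables the ZIL for every pool on
the system and, as he says, still leaves NFS clients exposed if the OS
crashes):

  set zfs:zil_disable = 1
    (then reboot and re-mount/share the file system)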
On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
okay, let's say that it is not. :)
Imagine that I setup a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
http://www.solarisinternals.com/wiki
> few weeks, but.. hey- it came up.
Hello!
> We will.
Good.
> I don't plan on finishing most of the work in ~10
> days like Pawel did ;)
I'd really, really advise you to try to port GEOM, which is an awesome way
of dealing with storage. You could do without it, of course, but it will be
more complicat
Hello,
I wonder if the subject of this email is not self-explanatory?
okay, let's say that it is not. :)
Imagine that I setup a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
http://www.solarisin
On Thu, 26 Apr 2007, Ben Miller wrote:
> I just rebooted this host this morning and the same thing happened again. I
> have the core file from zfs.
>
> [ Apr 26 07:47:01 Executing start method ("/lib/svc/method/nfs-server start")
> ]
> Assertion failed: pclose(fp) == 0, file ../common/libzfs_mo
I was able to duplicate this problem on a test Ultra 10. I put in a workaround
by adding a service that depends on /milestone/multi-user-server which does a
'zfs share -a'. It's strange this hasn't happened on other systems, but maybe
it's related to slower systems...
Ben
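A rough sketch of such a workaround (service and file names are invented;
the accompanying SMF manifest would declare a dependency on
svc:/milestone/multi-user-server so this only runs late in boot):

  #!/sbin/sh
  # hypothetical /lib/svc/method/zfs-reshare: re-share all ZFS file systems
  . /lib/svc/share/smf_include.sh

  /usr/sbin/zfs share -a || exit $SMF_EXIT_ERR_FATAL
  exit $SMF_EXIT_OK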
The Xraid is a very well-thought-of storage device with a heck of a price
point. Attached is an image of the "Settings"/"Performance" screen where
you see "Allow Host Cache Flushing".
I think when you use ZFS, it would be best to uncheck that box.
This is what happens when you use a GUI in your
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
>
> IMHO, only a few people in the world care about dumps at all (and you
> know who you are :-). If you care, setup dump to an NFS server somewhere,
> no need to have it local.
a) what does this entail
b) with zvols not supporting
On Wed, Apr 25, 2007 at 09:55:16PM -0400, Brian Gupta wrote:
>
> In Solaris 8(?) this changed, in that crash dump streams were
> compressed as they were written out to disk. Although I've never read
> this anywhere, I assumed the reasons this was done are as follows:
What happens if the dump slic
On Wed, Apr 25, 2007 at 07:50:16PM -0700, MC wrote:
> You've delivered us to awesometown, Brain.
>
> > zfsboot.tar.bz2 is a vmware image made on a VMWare Server 1.0.1
> machine.
>
> But oops, what is the root login password?! :)
D'Oh!
The root password is..
wait for it..
password
:)
Hello Roch,
Thursday, April 26, 2007, 12:33:00 PM, you wrote:
RP> Robert Milkowski writes:
>> Hello Brian,
>>
>> Thursday, April 26, 2007, 3:55:16 AM, you wrote:
>>
>> BG> If I recall, the dump partition needed to be at least as large as RAM.
>>
>> BG> In Solaris 8(?) this changed, in t
I got my answers as explained in the following to links
http://www.opensolaris.org/jive/thread.jspa?threadID=27277&tstart=15
http://docs.sun.com/app/docs/doc/817-1592/6mhahuork?a=view
Thanks
On 4/25/07, Asif Iqbal <[EMAIL PROTECTED]> wrote:
On 4/24/07, Asif Iqbal <[EMAIL PROTECTED]> wrote:
>
I just rebooted this host this morning and the same thing happened again. I
have the core file from zfs.
[ Apr 26 07:47:01 Executing start method ("/lib/svc/method/nfs-server start") ]
Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c, line 380, func
tion zfs_share
Abort - core du
Robert Milkowski writes:
> Hello Brian,
>
> Thursday, April 26, 2007, 3:55:16 AM, you wrote:
>
> BG> If I recall, the dump partition needed to be at least as large as RAM.
>
> BG> In Solaris 8(?) this changed, in that crash dump streams were
> BG> compressed as they were written out to
Hello Ron,
Tuesday, April 24, 2007, 4:54:52 PM, you wrote:
RH> Thanks Robert. This will be put to use.
Please let us know about the results.
--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hello Brian,
Thursday, April 26, 2007, 3:55:16 AM, you wrote:
BG> If I recall, the dump partition needed to be at least as large as RAM.
BG> In Solaris 8(?) this changed, in that crash dump streams were
BG> compressed as they were written out to disk. Although I've never read
BG> this anywhere,
On 04/24/07 01:37, Richard Elling wrote:
Leon Koll wrote:
My guess is that Yaniv assumes that 8 pools with 62.5 million files each
have a significantly lower chance of being corrupted/causing data loss
than 1 pool with 500 million files in it.
Do you agree with this?
I do not agree with this sta
On 04/24/07 17:30, Darren J Moffat wrote:
Richard Elling wrote:
/var/tm Similar to the /var/log rationale.
[assuming /var/tmp]
I intended to type /var/fm not /var/tm or /var/tmp. The FMA state data
is I believe something that you would want to share between all boot
environments on
Hello Robert,
it would be really interesting if you could add a HW RAID-10 LUN with UFS to your
comparison.
gino
>That would surprise me. Can it be that you are saturating the PCI slot
>your 2342 card sits in? IIRC not every slot on a V240 can handle a dual-port 2342
>card
>going at full rate.
I didn't understand what you mean.
There are only 3 slots on the V240; which of them cannot handle a dual HBA?
>Generalizati