> A couple more questions here.
>
> [mpstat]
>
> > CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
> >   0    0   0 3109 3616  316 196    5   17   48  45   245   0  85  0  15
> >   1    0   0 3127 3797  592 217    4   17   63  46   176   0  84  0  15
> > CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys
> With recent bits, ZFS compression is now handled concurrently, with
> many CPUs working on different records.
> So this load will burn more CPUs and achieve its result
> (compression) faster.
>
> So the observed pauses should be consistent with a load
> generating high system time.
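A quick way to see this on a running box is to check the dataset's compression setting and watch per-CPU system time while the load runs; a minimal sketch, assuming a dataset named tank/data (a placeholder, not from the original thread):

# Hypothetical illustration only: dataset name is a placeholder.
zfs get compression tank/data   # confirm compression is on for the dataset being written
mpstat 5                        # watch sys time spread across the CPUs while the load runs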
Running a recently patched S10 system (ZFS version 3), attempting to
dump the label information using zdb while the pool is online doesn't seem to
give reasonable information. Any particular reason for this?
# zpool status
  pool: blade-mirror-pool
 state: ONLINE
 scrub: none requested
con
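For reference, zdb can also be pointed directly at one of the pool's devices to dump its labels; a minimal sketch (the device path is a placeholder, not from the original post):

# Hypothetical illustration only: device path is a placeholder.
zdb -l /dev/dsk/c1t0d0s0        # prints the four ZFS labels stored on that vdev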
On May 7, 2007, at 7:11 AM, Frank Batschulat wrote:
Running a recently patched S10 system (ZFS version 3), attempting to
dump the label information using zdb while the pool is online
doesn't seem to give reasonable information. Any particular reason for this?
# zpool status
pool: blade-
Should not be a problem. I have 3 drives hooked up via eSATA myself and it
works great. I am not sure whether you will have to power the system down and
back up when connecting and disconnecting the drives, though.
Greetings, learned ZFS geeks & gurus,
Yet another question comes from my continued ZFS performance testing. This has
to do with zpool iostat, and the strangeness that I see.
I’ve created an eight (8) disk raidz pool from a Sun 3510 fibre array giving me
a 465G volume.
# zpool create tp raidz
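For context, the full command was along these lines; a minimal sketch with placeholder device names (the actual 3510 LUN names are not in the post):

# Hypothetical illustration only: device names are placeholders.
zpool create tp raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool iostat tp 5               # the per-pool numbers being discussed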
Something I was wondering about myself. What does the raidz top-level (pseudo?)
device do? Does it just indicate to the SPA, or whatever module is responsible,
that it should additionally generate parity? The thing I'd like to know is whether
variable block sizes, dynamic striping et al. still apply to a single
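For illustration, the raidz group does show up as a single top-level vdev in the pool; a minimal sketch, assuming the pool name tp from the earlier post:

# Hypothetical illustration only: shows the vdev hierarchy and per-vdev statistics.
zpool status tp                 # the raidz vdev appears as one top-level device holding the disks
zpool iostat -v tp              # statistics broken out per vdev and per disk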
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Consider this setup for your other disks, which are:
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive
250GB = disk1
200GB =
On 5/7/07, Tony Galway <[EMAIL PROTECTED]> wrote:
Greetings, learned ZFS geeks & gurus,
Yet another question comes from my continued ZFS performance testing. This has
to do with zpool iostat, and the strangeness that I see.
I've created an eight (8) disk raidz pool from a Sun 3510 fibre arra
> Given the odd sizes of your drives, there might not
> be one, unless you
> are willing to sacrifice capacity.
For the SoHo and home user scenarios, I think it would be an advantage
if the disk drivers offered unified APIs to read out and interpret disk drive
diagnostics, like SMART on AT
What are these alignment requirements?
I would have thought that at the lowest level, parity stripes would have been
allocated traditionally, while the remaining usable space is treated like a JBOD
at the level above, and thus not subject to any constraints (apart from when
getting close to the parity stripe
Cindy,
Thanks so much for the response -- this is the first one that I
consider an actual answer. :-)
I'm still unclear on exactly what I end up with. I apologize in
advance for my ignorance -- the ZFS admin guide assumes knowledge
that I don't yet have.
I assume that disk4 is a hot spa
On 7-May-07, at 3:44 PM, [EMAIL PROTECTED] wrote:
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Bearing in mind that his machine is a G4 PowerPC. When Solaris 10 is
ported to this p
Toby Thain wrote:
On 7-May-07, at 3:44 PM, [EMAIL PROTECTED] wrote:
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Bearing in mind that his machine is a G4 PowerPC. When Solaris 10 is
On 5/7/07, Chris Csanady <[EMAIL PROTECTED]> wrote:
On 5/7/07, Tony Galway <[EMAIL PROTECTED]> wrote:
> Greetings, learned ZFS geeks & gurus,
>
> Yet another question comes from my continued ZFS performance testing. This
has to do with zpool iostat, and the strangeness that I do see.
> I've crea
Lee,
Yes, the hot spare (disk4) should kick in if another disk in the pool fails,
and yes, the data is moved to disk4.
You are correct:
160 GB (the smallest disk) * 3 + raidz parity info
Here's the size of a raidz pool comprised of 3 136-GB disks:
# zpool list
NAME    SIZE    USED
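To make the arithmetic concrete, a minimal sketch of the layout being discussed (device names are placeholders, not Lee's actual disks):

# Hypothetical illustration only: 3-disk raidz (250/200/160 GB) plus disk4 as a hot spare.
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 spare c1t4d0
# Each raidz member is limited to the smallest disk (160 GB), so zpool list
# reports roughly 160 GB x 3 of raw space, of which about 160 GB x 2 is usable
# after parity.
zpool list tank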
I think it will be in the next.next (10.6) OS X; we just need to get Apple to
stop playing with their silly cell phone (which I can't help but want, damn
them!).
I have a similar situation at home, but what I do is use Solaris 10 on a
cheapish x86 box with 6 400GB IDE/SATA disks; I then make them into
I've been using long SATA cables routed out through the case to a home built
chassis with its own power supply for a year now. Not even eSATA. That part
works well.
Substitute this for USB/Firewire/SCSI/USB thumb drives. It's really the same
problem.
Ok, now you want to deal with a ZFS zpoo
Tom Buskey wrote:
How well does ZFS work on removable media? In a RAID configuration? Are there
issues with matching device names to disks?
I've had a zpool with 4 x 250GB IDE drives in three places recently:
- in an external 4-bay Firewire case, attached to a Sparc box
- inside a dual-Opter
There's a video put out by some Sun people in Germany (IIRC) in which they
made several 4-drive RAIDZs on 3 USB hubs using a total of 12 USB
thumbdrives. At one point they pulled all the USB sticks, shuffled
them, and then re-imported the pool. Worked like butter.
Corey
On May 7, 2007, at 1:30 PM, Tom Bu
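For reference, moving a pool like that is just an export on the old system and an import on the new one; a minimal sketch, with a placeholder pool name:

# Hypothetical illustration only: pool name is a placeholder.
zpool export tank               # on the old host, before pulling the drives
zpool import tank               # on the new host; ZFS finds the members by their
                                # on-disk labels, so device names and ordering don't matter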
I'm hoping that this is simpler than I think it is. :-)
We routinely clone our boot disks using a fairly simple script that:
1) Copies the source disk's partition layout to the target disk using
prtvtoc, fmthard and installboot.
2) Using a list, runs newfs against the
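For what it's worth, step 1 usually boils down to a prtvtoc/fmthard pipeline plus installboot; a minimal sketch, assuming UFS on SPARC and placeholder device names:

# Hypothetical illustration only: device names are placeholders.
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2                  # copy the VTOC
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0    # boot block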
Aaron Newcomb wrote:
Does ZFS support any type of remote mirroring? It seems at present my
only two options to achieve this would be Sun Cluster or Availability
Suite. I thought that this functionality was in the works, but I haven't
heard anything lately.
You could put something together using
Pawel Jakub Dawidek wrote:
This is what I see on Solaris (hole is 4GB):
# /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
real 23.7
# /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
real 21.2
# /usr/bin/time dd if=/ufs/hole of=/dev/null
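For anyone wanting to reproduce the test, the "hole" is simply a sparse file; a minimal sketch with placeholder paths:

# Hypothetical illustration only: paths are placeholders.
mkfile -n 4g /zfs/hole                                  # 4 GB sparse file, no blocks allocated
/usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k      # time reading the hole back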
On 7-May-07, at 5:27 PM, Andy Lubel wrote:
I think it will be in the next.next (10.6) OSX,
Well, the iPhone forced a few months schedule slip, perhaps *instead
of* dropping features?
Mind you I wouldn't be particularly surprised if ZFS wasn't in 10.5.
Just so long as we get it eventua
Mark V. Dalton wrote:
I'm hoping that this is simpler than I think it is. :-)
We routinely clone our boot disks using a fairly simple script that:
1) Copies the source disk's partition layout to the target disk using
prtvtoc, fmthard and installboot.
Danger Will Robinson!
> Pawel Jakub Dawidek wrote:
> > This is what I see on Solaris (hole is 4GB):
> >
> > # /usr/bin/time dd if=/ufs/hole of=/dev/null bs=128k
> > real 23.7
> > # /usr/bin/time dd if=/zfs/hole of=/dev/null bs=128k
> > real 21.2
> >
> > # /usr/bin/time dd if=/ufs/hole o
ZFS send/receive?? I am not familiar with this feature. Is there a doc I
can reference?
Thanks,
Aaron Newcomb
Sr. Systems Engineer
Sun Microsystems
[EMAIL PROTECTED]
Cell: 513-238-9511
Office: 513-562-4409
Matthew Ahrens wrote:
Aaron Newcomb wrote:
Does ZFS support any type of remote mirr
Have there been any new developments regarding the availability of
vfs_zfsacl.c? Jeb, were you able to get a copy of Jiri's work-in-progress? I
need this ASAP (as I'm sure most everyone watching this thread does)...
Thank you for your help.
Roger Ripley
[EMAIL PROTECTED]
Matthew Ahrens wrote:
Aaron Newcomb wrote:
Does ZFS support any type of remote mirroring? It seems at present my
only two options to achieve this would be Sun Cluster or Availability
Suite. I thought that this functionality was in the works, but I haven't
heard anything lately.
You could put s
I guess when we are defining a mirror here, are you talking about a synchronous
mirror or an asynchronous one?
As stated earlier, if you are looking for an asynchronous mirror and do not
want to use AVS, you can use zfs send and receive and craft a fairly simple
script that runs constantly and u
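For illustration, such a script might look roughly like this; a sketch only, with placeholder pool, dataset and host names, and no error handling:

#!/bin/sh
# Hypothetical sketch of an asynchronous mirror with zfs send/receive.
# Dataset, target and host names are placeholders, not from this thread.
DATASET=tank/data
TARGET=backup/data
REMOTE=backuphost
PREV=""
while true; do
        SNAP=$DATASET@mirror-`date +%Y%m%d%H%M%S`
        zfs snapshot $SNAP
        if [ -z "$PREV" ]; then
                # First pass: send the full stream to create the remote copy.
                zfs send $SNAP | ssh $REMOTE zfs receive -F $TARGET
        else
                # Later passes: send only the blocks changed since the last snapshot.
                zfs send -i $PREV $SNAP | ssh $REMOTE zfs receive $TARGET
        fi
        PREV=$SNAP
        sleep 300       # every five minutes; tune to taste
done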
Well, since we are talking about home use: I never tried this as a spare, but if
you want to get really nutty, do the setup cindys suggested but format the 600GB
drive as UFS (or some other filesystem) and then try to create a 250GB file
device as a spare on that UFS drive. It will give you redundanc
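For the curious, that would look something like the following; a minimal sketch with placeholder paths (and note that file-backed vdevs are really only meant for experiments):

# Hypothetical illustration only: paths and pool name are placeholders.
mkfile 250g /ufsdrive/sparefile                 # a 250GB file on the UFS-formatted 600GB drive
zpool add tank spare /ufsdrive/sparefile        # add the file as a hot spare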
Bryan Wagoner wrote:
Well, since we are talking about home use: I never tried this as a spare, but if
you want to get really nutty, do the setup cindys suggested but format the 600GB
drive as UFS (or some other filesystem) and then try to create a 250GB file
device as a spare on that UFS drive. It
This benchmark models the real-world workload faced by many ISPs worldwide every day:
http://untroubled.org/benchmarking/2004-04/
I would appreciate it if the ZFS team or the Performance group could take a look at
it. I've run this myself on b61 (minor mods to the driver program) but
obviously Team ZFS o
> Have there been any new developments regarding the
> availability of vfs_zfsacl.c? Jeb, were you able to
> get a copy of Jiri's work-in-progress? I need this
> ASAP (as I'm sure most everyone watching this thread
> does)...
me too... A.S.A.P.!!!
-- leon
Hello all,
Spent the last several hours perusing the ZFS forums and some of the blog
entries regarding ZFS. I have a couple of questions and am open to any hints,
tips, or things to watch out for on implementation of my home file server. I'm
building a file server consisting of an Asus P5WD2 m
John Smith wrote:
> Hello all,
>
> Spent the last several hours perusing the ZFS forums and some of the blog
> entries regarding ZFS. I have a couple of questions and am open to any hints,
> tips, or things to watch out for on implementation of my home file server.
> I'm building a file server