>The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label
>may need to be zeroed and reapplied if you set up the initial vdev on a slice.
>If you introduced the entire disk to the pool you should be fine, but I believe
>you'll still need to offline/online the pool.
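A rough sketch of what that offline/online cycle can look like, using placeholder names (pool "tank", whole disk c2t0d0, neither from the original message) and assuming the OS already sees the grown LUN:

# zpool offline tank c2t0d0
# zpool online tank c2t0d0

or, I believe, cycling the whole pool instead:

# zpool export tank
# zpool import tank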
Fin
Olaf Manczak wrote:
Eric Schrock wrote:
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote:
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of either hardware or software raid.
-Does ZFS in the current version support LUN extension? With UFS, we have to zero the VTOC and then adjust the new disk geometry. How does it look with ZFS?
The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label may need to be zeroed and reapplied if you set up the initial vdev on a slice.
Gregory Shaw wrote:
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
How would ZFS self heal in this case?
>
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of either hardware or software raid.
Eric Schrock wrote:
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote:
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of either hardware or
software raid.
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote:
>
> You're using hardware raid. The hardware raid controller will rebuild
> the volume in the event of a single drive failure. You'd need to keep
> on top of it, but that's a given in the case of either hardware or
> software raid.
T
> If you've got hardware raid-5, why not just run regular (non-raid)
> pools on top of the raid-5?
>
> I wouldn't go back to JBOD. Hardware arrays offer a number of
> advantages over JBOD:
> - disk microcode management
> - optimized access to storage
> - large write cache
Roch wrote:
And, if the load can accommodate a
reorder, to get top per-spindle read-streaming performance,
a cp(1) of the file should do wonders on the layout.
But there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility, perhaps.
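In the meantime, the hand-rolled version of such a utility is just a copy and rename, e.g. (hypothetical path, application quiesced, and enough free space for the second copy while it runs):

# cp /tank/db/table.dbf /tank/db/table.dbf.new
# mv /tank/db/table.dbf.new /tank/db/table.dbf

The cp rewrites the blocks sequentially under COW, which is what improves the read-streaming layout.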
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
> On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote:
> > On Jun 26, 2006, at 1:15 AM, Mika Borner wrote:
> >
> > > What we need, would be the feature to use JBODs.
> > >
> >
> > If you've got hardware raid-5, why not just run regular (non-raid)
> > pools on top of the raid-5?
On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote:
> On Jun 26, 2006, at 1:15 AM, Mika Borner wrote:
>
> > What we need, would be the feature to use JBODs.
> >
>
> If you've got hardware raid-5, why not just run regular (non-raid)
> pools on top of the raid-5?
>
> I wouldn't go back to JBOD. Hardware arrays offer a number of
> advantages over JBOD:
On 6/26/06, Neil Perrin <[EMAIL PROTECTED]> wrote:
Robert Milkowski wrote On 06/25/06 04:12,:
> Hello Neil,
>
> Saturday, June 24, 2006, 3:46:34 PM, you wrote:
>
> NP> Chris,
>
> NP> The data will be written twice on ZFS using NFS. This is because NFS
> NP> on closing the file internally uses fsync to cause the writes to be
> NP> committed.
I just downloaded sol-10-u2-ga-sparc-dvd-iso-a.zip. Try again.
Gary
Larry Wake wrote:
Shannon Roddy wrote:
Noel Dellofano wrote:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Shannon Roddy wrote:
Noel Dellofano wrote:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Seems the download links are dead except for x86-64. No Sparc downloads.
There were some problems getting the links set up.
> Noel Dellofano wrote:
>> Solaris 10u2 was released today. You can now download it from here:
>>
>> http://www.sun.com/software/solaris/get.jsp
>
> Seems the download links are dead except for x86-64. No Sparc downloads.
>
Everything works perfectly.
$ ls -1
sol-10-u2-ga-sparc-lang-iso.zip
so
I had the same problem.
On 6/26/06, Shannon Roddy <[EMAIL PROTECTED]> wrote:
Noel Dellofano wrote:
> Solaris 10u2 was released today. You can now download it from here:
>
> http://www.sun.com/software/solaris/get.jsp
Seems the download links are dead except for x86-64. No Sparc downloads.
Noel Dellofano wrote:
> Solaris 10u2 was released today. You can now download it from here:
>
> http://www.sun.com/software/solaris/get.jsp
Seems the download links are dead except for x86-64. No Sparc downloads.
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Noel
Joe Little wrote:
So, if I recall from this list, a mid-June release to the web was
expected for S10U2. I'm about to do some final production testing, and
I was wondering if it is available yet.
> > -Does ZFS in the current version support LUN extension? With UFS, we
> > have to zero the VTOC, and then adjust the new disk geometry. How does
> > it look with ZFS?
>
> I don't understand what you're asking. What problem is solved by
> zeroing the vtoc?
When the underlying storage in
Robert Milkowski wrote On 06/25/06 04:12,:
Hello Neil,
Saturday, June 24, 2006, 3:46:34 PM, you wrote:
NP> Chris,
NP> The data will be written twice on ZFS using NFS. This is because NFS
NP> on closing the file internally uses fsync to cause the writes to be
NP> committed. This causes the ZI
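For what it's worth, a crude way to see how much of that penalty comes from the synchronous ZIL commit is the zil_disable tunable (unsupported, and it throws away the synchronous semantics NFS relies on, so it is a measurement aid only, not a fix):

set zfs:zil_disable = 1     (in /etc/system, followed by a reboot)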
James C. McPherson wrote:
James C. McPherson wrote:
Jeff Bonwick wrote:
6420204 root filesystem's delete queue is not running
The workaround for this bug is to issue the following command...
# zfs set readonly=off /
This will cause the delete queue to start up and should flush your
queue.
Than
On Jun 26, 2006, at 1:15 AM, Mika Borner wrote:
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS.
Mike Gerdts wrote:
On 6/25/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
Now, looking forward a bit, where does the ZFS integration with zones
documentation belong?
Some of it will appear in the next update to the Sun BluePrint "Solaris Containers
Architecture Technology Guide."
How abo
A lesson we learned with Solaris Zones applies here to ZFS. Accomplishing
high-level goals, e.g. "prepare an appropriate environment for application XYZ
installation (Zones)" or "prepare an appropriate filesystem for application XYZ
data (ZFS)" is different than it was before Solaris 10. For Zo
About:
-I've read the threads about ZFS and databases. Still I'm not 100%
convinced about read performance. Doesn't the fragmentation of the
large database files (because of the concept of COW) impact
read-performance?
I do need to get back to this thread. The way I am currently
loo
I don't know how I missed it, but there are periodic commit requests by the
NFS client. These occur often enough that the data ends up being written
twice as you have suggested.
In any case, this is really annoying, as dd certainly isn't requesting this
behavior. Perhaps the clients are just st
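One rough way to confirm those COMMIT requests from the server side is to diff the NFSv3 op counters around the run (a sketch only; the exact output layout varies by release):

# nfsstat -s > /tmp/before
(run the dd over NFS from the client)
# nfsstat -s > /tmp/after
# diff /tmp/before /tmp/after

A climbing "commit" count in the Version 3 section is the client issuing those COMMITs.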
Hi
Probably been reported a while back, but 'zfs list -o' does not
list the rather useful (and obvious) 'name' property, nor does the manpage
at a quick read. snv_42.
# zfs list -o
missing argument for 'o' option
usage:
list [-rH] [-o property[,property]...] [-t type[,type]...]
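For what it's worth, 'name' is accepted even though the usage message doesn't enumerate it, e.g.:

# zfs list -o name,used,available,mountpoint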
So if you have a single thread doing open/write/close of 8K
files and get 1.25MB/sec, that tells me you have something
like a 6ms I/O latency, which also looks reasonable.
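(Working the numbers: 1.25 MB/s of 8K files is roughly 160 files per second, i.e. about 6.25 ms per open/write/close cycle, or roughly one synchronous I/O per file.)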
What does iostat -x svc_t (client side) say?
400ms seems high for the workload _and_ doesn't match my
formula, so I don't li
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS. As ZFS on Hardware RAID5