On 9/6/06, UNIX admin <[EMAIL PROTECTED]> wrote:
Yes, the man page says that. However, it is possible to mix disks of different
sizes in a RAIDZ, and this works. Why does it work? Because RAIDZ stripes are
dynamic in size. From that I infer that disks can be any size because the
stripes can be
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
Robert Milkowski wrote:
On Wed, 6 Sep 2006, Mark Maybee wrote:
Robert Milkowski wrote:
::dnlc!wc
1048545 3145811 76522461
Well, that explains half your problem... and maybe all of it:
After I reduced the vdev prefetch from 64K to 8K, for the last few hours the
system has been working properly witho
Roch - PAE wrote:
Thinking some more about this. If your requirements do
mandate some form of mirroring, then it truly seems that ZFS
should take charge of that, if only because of the
self-healing characteristics. So I feel the storage array's
job is to export low-latency LUNs to ZFS.
T
Robert Milkowski wrote:
::dnlc!wc
1048545 3145811 76522461
Well, that explains half your problem... and maybe all of it:
We have a thread that *should* be trying to free up these entries
in the DNLC, however it appears to be blocked:
stack pointer for thread 2a10014fcc0: 2a10014edd1
[
Darren Dunham wrote:
Let's say the devices are named thus (and I'm making this up):
/devices/../../SUNW,[EMAIL PROTECTED]/[EMAIL PROTECTED],0/WWN:sliceno
[EMAIL PROTECTED] denotes the FLX380 frame, [0-6]
[EMAIL PROTECTED],n denotes the virtual disk,LUN, [0-19],[0-3]
How do I know that my strip
> Let's say the devices are named thus (and I'm making this up):
>
> /devices/../../SUNW,[EMAIL PROTECTED]/[EMAIL PROTECTED],0/WWN:sliceno
>
> [EMAIL PROTECTED] denotes the FLX380 frame, [0-6]
> [EMAIL PROTECTED],n denotes the virtual disk,LUN, [0-19],[0-3]
>
> How do I know that my stripes are
> ::dnlc!wc
1048545 3145811 76522461
>
Hi All,
I just posted version 0.6 of the automatic snapshots prototype on my web
log. The new features are:
* ZFS send/receive support
* Multiple schedules per filesystem
More at:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with
Note, this is just something I've been messing
This is a most interesting thread. I'm a little befuddled, though.
How will ZFS know to select the RAID-Z2 stripes from each FLX380?
Because if it stripes the (5+2) from the LUNs within one FLX380, this
will not help if one frame goes irreplaceably out of service.
Let's say the devices are n
However, performance will be much worse, as data will be striped only to those
mirrors already available.
However, if performance isn't an issue it could be interesting.
Hmmm, interesting data. See comments in-line:
Robert Milkowski wrote:
Yes, server has 8GB of RAM.
Most of the time there's about 1GB of free RAM.
bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp
fctl qlc ssd lofs zfs random logindmux ptm cpc nfs
+5
I've been saving my +1s for a few weeks now. ;)
Richard Elling - PAE wrote:
There is another option. I'll call it "grow into your storage."
Pre-ZFS, for most systems you would need to allocate the storage well
in advance of its use. For the 7xFLX380 case using SVM and UFS, you
would typica
As user properties are coming, maybe when a fs is mounted in a local zone a
user property could be set, like zone_mounted=test1. Perhaps such a property
would be created during each mount. In the case of several zones, just put the
names one after another separated by ',' or something similar.
??
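A rough sketch of what that might look like once user properties land (the
property name 'local:zone_mounted' is made up for illustration; user property
names are expected to take a 'module:property' form containing a colon):
# Hypothetical -- user properties were not yet available when this was posted.
# Tag the filesystem with the zone that mounted it:
zfs set local:zone_mounted=test1 telecom/oracle/production
# And read it back later:
zfs get local:zone_mounted telecom/oracle/production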
On September 6, 2006 7:19:32 AM -0700 Lieven De Geyndt <[EMAIL PROTECTED]>
wrote:
sorry guys ...RTF did the job
Legacy Mount Points
That just means filesystems in the pool won't get mounted, not that the
pool won't be imported.
-frank
There is another option. I'll call it "grow into your storage."
Pre-ZFS, for most systems you would need to allocate the storage well
in advance of its use. For the 7xFLX380 case using SVM and UFS, you
would typically setup the FLX380 LUNs, merge them together using SVM,
and newfs. Growing is s
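For comparison, a minimal sketch of the ZFS version of "grow into your
storage" (device names are made up):
# Start with only the LUNs you need now:
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0
# Later, grow the pool by adding another top-level vdev; existing
# filesystems see the extra space immediately, no newfs/growfs step:
zpool add tank raidz c3t0d0 c3t1d0 c3t2d0
zpool list tank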
Lowering from the default 64K to 16K turned into about 10x less read throughput!
And a similar factor for latency for NFS clients. For now I'll probably leave it
as it is and later do some comparisons with different settings.
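For anyone wanting to try the same thing, a hedged sketch of lowering the vdev
prefetch size; the tunable name zfs_vdev_cache_bshift (log2 of the read size:
16 = 64K, 13 = 8K) is my assumption for builds of this era, so verify it
against your bits before relying on it:
# Persistent: add this line to /etc/system (takes effect after reboot):
echo "set zfs:zfs_vdev_cache_bshift = 13" >> /etc/system
# Or change it on the live kernel with mdb (0t13 is decimal 13):
echo "zfs_vdev_cache_bshift/W 0t13" | mdb -kw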
ps. very big thanks to Roch! I owe you!
On Wed, Sep 06, 2006 at 04:52:48PM +0100, Dick Davies wrote:
>
> Oh God no. That's exactly what I wanted to avoid.
> Why wouldn't you want it stored in the dataset, out of interest?
There are a couple of reasons:
- We don't want to re-create the same information in multiple places.
Keeping bot
No, remove all other datasets from the zone config and just put:
add dataset
set name=telecom/oracle/production
end
and that's it.
That way you will see all filesystems beneath telecom/oracle/production.
Additionally, in the production zone you will be able to create more file systems
inside without changing the zone configuration.
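Roughly like this, assuming the dataset and zone names used earlier in this
thread (a sketch, not verified output):
# Global zone: delegate only the parent dataset.
zonecfg -z production
add dataset
set name=telecom/oracle/production
end
commit
exit
# Inside the production zone you can then create and list children yourself:
zfs create telecom/oracle/production/oraapp
zfs create telecom/oracle/production/isapps
zfs list -r telecom/oracle/production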
On 06/09/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Wed, Sep 06, 2006 at 03:53:52PM +0100, Dick Davies wrote:
> That's a bit nicer, thanks.
> Still not that clear which zone they belong to though - would
> it be an idea to add a 'zone' property as a string == zonename?
Yes, this is possible
On Wed, Sep 06, 2006 at 08:34:26AM -0700, Eric Schrock wrote:
>
> Feel free to file an RFE.
>
Oops, found one already:
6313352 'zpool list' & 'zfs list' should add '-z' & '-Z' to identifier a zone
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
On Wed, Sep 06, 2006 at 04:23:32PM +0100, Dick Davies wrote:
>
> a) prevent attempts to create zvols in non-global zones
> b) somehow allow it (?) or
> c) Don't do That
>
> I vote for a) myself - should I raise an RFE?
Yes, that was _supposed_ to be the original behavior, and I thought we
had it
On Wed, Sep 06, 2006 at 09:01:00AM -0400, Kenneth Mikelinich wrote:
> Hi Robert -- Here are the outputs. I cannot seem to see the last isapps
> dataset via zfs list. The non-global zone will be used to host a 10G
> Oracle.
Yes, this is definitely a bug somewhere. I'll try to reproduce this on
a
On Wed, Sep 06, 2006 at 03:53:52PM +0100, Dick Davies wrote:
> That's a bit nicer, thanks.
> Still not that clear which zone they belong to though - would
> it be an idea to add a 'zone' property as a string == zonename?
Yes, this is possible, but it's annoying because the actual owning zone
isn'
A colleague just asked if zfs delegation worked with zvols too.
Thought I'd give it a go and got myself in a mess
(tank/linkfixer is the delegated dataset):
[EMAIL PROTECTED] / # zfs create -V 500M tank/linkfixer/foo
cannot create device links for 'tank/linkfixer/foo': permission denied
cannot cr
That's a bit nicer, thanks.
Still not that clear which zone they belong to though - would
it be an idea to add a 'zone' property as a string == zonename?
On 06/09/06, Kenneth Mikelinich <[EMAIL PROTECTED]> wrote:
zfs mount
should show where all your datasets are mounted.
I too was confused wi
Oatway, Ted wrote:
Thanks for the response Richard. Forgive my ignorance but the following
questions come to mind as I read your response.
I would then have to create 80 RAIDz(6+1) Volumes and the process of
creating these Volumes can be scripted. But -
1) I would then have to create 80 mount p
> I would then have to create 80 RAIDz(6+1) Volumes and the process of
> creating these Volumes can be scripted. But -
>
> 1) I would then have to create 80 mount points to mount each of these
> Volumes (?)
No. Each of the RAIDZs that you create can be combined into a single
pool. Data written
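A hedged sketch of that layout, with made-up device names standing in for one
LUN from each of the seven FLX380 frames per RAID-Z2 set:
# Each raidz2 set spans all 7 frames (5 data + 2 parity), so losing an
# entire frame costs at most one member of every set.
zpool create tank \
    raidz2 f0l0 f1l0 f2l0 f3l0 f4l0 f5l0 f6l0 \
    raidz2 f0l1 f1l1 f2l1 f3l1 f4l1 f5l1 f6l1
# ...repeat for the remaining LUNs.  Everything lives in one pool, with
# one mount point per filesystem you choose to create, not 80.
zpool status tank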
sorry guys ...RTF did the job
Legacy Mount Points
You can manage ZFS file systems with legacy tools by setting the mountpoint
property to legacy.
Legacy file systems must be managed through the mount and umount commands and the
/etc/vfstab file. ZFS does not automatically mount legacy file systems.
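In practice that looks something like this (pool/filesystem names are
placeholders):
# Stop ZFS from auto-mounting the filesystem:
zfs set mountpoint=legacy tank/export
# Mount it by hand...
mount -F zfs tank/export /export
# ...or add a line like this to /etc/vfstab so it mounts at boot:
#   tank/export  -  /export  zfs  -  yes  -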
This could still corrupt the pool.
The customer would probably have to write their own tool to import a pool using
libzfs without creating zpool.cache.
Or, just after the pool is imported, remove zpool.cache - I'm not sure, but it
should work.
Thanks.
I will try this out
Ken Mikelinich
Computer Operations Manager
Telecommunications and Client Services
University of New Hampshire
603.862.4220
-Original Message-
From: Dick Davies [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 06, 2006 9:54 AM
To: Mikelinich, Ken
Cc: zfs-
On 06/09/06, Kenneth Mikelinich <[EMAIL PROTECTED]> wrote:
Are you suggesting that I not get too granular with datasets and use a
higher level one versus several?
I think what he's saying is you should only have to
delegate one dataset (telecom/oracle/production, for example),
and all the 'chi
Lieven De Geyndt wrote:
zpool create -R did the job. Thanks for the tip.
Is there a way to disable the auto mount when you boot a system?
The customer has some kind of poor man's cluster.
2 systems have access to a SE3510 with ZFS.
System A was powered off as a test, system B did an import of
Hmmm.
I thought I was doing this via zonecfg -z production, which zonecfg is
run from the global zone.
add dataset
set name=telecom/oracle/production/oraapp
end
... repeat
add dataset
set name=telecom/oracle/production/isapps
end
commit
exit
The zone took all the datasets (shown in the earlier
Wee Yeh Tan writes:
> On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
> > This is simply not true. ZFS would protect against the same type of
> > errors seen on an individual drive as it would on a pool made of HW raid
> > LUN(s). It might be overkill to layer ZFS on top of a LUN that is
zpool create -R did the job. Thanks for the tip.
Is there a way to disable the auto mount when you boot a system?
The customer has some kind of poor man's cluster.
2 systems have access to a SE3510 with ZFS.
System A was powered off as a test, system B did an import of the pools.
When system A
Well, that's interesting. Looks like some limit/bug here. However, whatever the
limit is, have you considered adding a dataset to the zone? That way you can
actually create new file systems as needed inside the zone without changing the
zone configuration, etc. You can also utilize snapshots and clones inside the zone.
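For example, inside the non-global zone once a dataset such as
telecom/oracle/production has been delegated (a sketch using names from this
thread):
# Snapshot before a risky change:
zfs snapshot telecom/oracle/production/oraapp@before-patch
# Clone it for testing:
zfs clone telecom/oracle/production/oraapp@before-patch \
    telecom/oracle/production/oraapp-test
# Or roll back (works while @before-patch is still the newest snapshot):
zfs rollback telecom/oracle/production/oraapp@before-patch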
Hi Robert -- Here are the outputs. I cannot seem to see the last isapps
dataset via zfs list. The non-global zone will be used to host a 10G
Oracle.
//** From the global zone **
//** Pool is named telecom
# zonecfg -z production export
create -b
set zonepath=/zones/production
set autoboot=true
a
Hi.
Can you post zonecfg -z export and zfs list in that XXX zone?
It looks like I discovered a workaround. I've got another zpool within rg in
SC. The other zpool does not have production data (yet) so I can switch it
between nodes freely. By doing this every 3 minutes I can stay safe on free
memory, at least so far.
I guess it frees some ARC cache. What is n
Hi, this question is along the lines of datasets and the number of ZFS file
systems available per zone. I suspect I am missing something obvious.
We have added 8 datasets to one non-global zone. While logged in and
doing a zfs list in that zone, I am only able to see the first 7
available ZFS file sys
zfs mount
should show where all your datasets are mounted.
I too was confused with the zfs list readout.
On Wed, 2006-09-06 at 07:37, Dick Davies wrote:
> Just did my first dataset delegation, so be gentle :)
>
> Was initially terrified to see that changes to the mountpoint in the
> non-glob
Lieven De Geyndt wrote:
When a pool is in a faulted state, you can't import it. Even -f fails.
When you decide to recreate the pool, you cannot execute zpool destroy,
because it is not imported. Also -f does not work.
Any idea how to get out of this situation ?
try something like
z
Hi.
Just re-create it, or create a new pool with disks from the old one and use the
-f flag.
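Something along these lines (disk names are made up; this destroys whatever
was on the old pool):
# -f overrides the complaint that the disks belong to a faulted/exported pool:
zpool create -f tank c1t0d0 c1t1d0 c1t2d0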
Just did my first dataset delegation, so be gentle :)
Was initially terrified to see that changes to the mountpoint in the non-global
zone were visible in the global zone.
Then I realised it wasn't actually mounted (except in the delegated zone).
But I couldn't see any obvious indication that th
When a pool is in a faulted state, you can't import it. Even -f fails.
When you decide to recreate the pool, you cannot execute zpool destroy,
because it is not imported. Also -f does not work.
Any idea how to get out of this situation?
Hi.
Is there a way to safely flush the ARC cache (get most of ZFS memory back to the
system)?
ps. not by export/import