Re: [zfs-discuss] Zpools on USB

2009-03-23 Thread Rince
On Sat, Mar 14, 2009 at 8:25 PM, Richard Elling wrote:

> Florian Ermisch wrote:
>
>> Richard Elling schrieb:
>> [...]
>>
>>> ZFS maintains a cache of what pools were imported so that at boot time,
>>> it will automatically try to re-import the pool.  The file is
>>> /etc/zfs/zpool.cache
>>> and you can view its contents by using "zdb -C"
>>>
>>> If the current state of affairs does not match the cache, then you can
>>> export the pool, which will clear its entry in the cache.  Then retry the
>>> import.
>>> -- richard
>>>
>>
>> I had this problem myself with a mirrored zpool in an ICY BOX IB-3218 (2
>> HDDs which appear as different LUNs) set up for backup purposes.
>> For zpools which are intended to be disconnected (or powered off) regularly,
>> an 'autoexport' flag would be nice: if set, the system exports the pool at
>> shutdown. This would prevent problems like Stefan's on a reboot, and when a
>> zpool from a shut-down system is connected to another system (like "Hm, the
>> old slow laptop's powered off, but hey, everything I need is also on this shiny
>> 1.5TB USB-HDD-zpool with all my other important stuff/backups... *plug into
>> workstation* OMG! My backup pool is faulty!!")
>>
>
> There is a zpool parameter, "cachefile" which will effectively do this.
> Yes, I think this is a good idea for removable media.
> -- richard
>
>
>
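[Richard's "cachefile" suggestion maps to commands roughly like the following. This is a hedged sketch; the pool and device names are made up, and it needs a live ZFS system to run.]

```shell
# With cachefile=none the pool is never added to /etc/zfs/zpool.cache,
# so the system will not try to re-import it at boot.
zpool create -o cachefile=none backup c5t0d0

# Inspect what the system currently has cached:
zdb -C

# Cleanly detach the pool before unplugging the USB enclosure:
zpool export backup
# ...and bring it back later, on this or another machine:
zpool import backup
```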
I've actually experienced this bug on non-removable media - in particular,
if I have a pool exported and I reboot, then on import it will see two possible
pools: one using the slice/partition boundaries on the drive, and
one using the "whole disk" [which is what I used when I made the pool].

They have different unique IDs, and obviously attempting to import one of
them results in the report that the metadata is corrupt and cannot be
recovered...but how is it that ZFS fails to detect that one of these pools
is impossible?

[Yes, I realize that removable media isn't really a special case - as far as
ZFS is concerned, it's all disks.]

- Rich


-- 
Linux is obsolete -- Andrew Tanenbaum
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Damon Atkins
Do we still need the zpool.cache file? I believe early versions of 
zpool used the cache to remember which zpools to import at boot.
I understand newer versions of ZFS still use the cache, but also check 
whether the pool records the correct host name of the server, and will 
only import if the hostname matches.


I suggest that at boot ZFS should scan every disk for ZFS 
labels (multi-threaded), and import the ones with the correct host name 
and an import flag set, without using the cache file. Maybe just use the 
cache file for non-EFI disks/partitions, but without storing the pool 
name; you should also be able to tell ZFS to do a full scan which 
includes partitioned disks.
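[A user-level approximation of the parallel "taste" idea, using the existing label-dump command. A hedged sketch: the device glob and degree of parallelism are arbitrary, and it assumes a GNU-style xargs with -P.]

```shell
# Probe every s0 slice for a ZFS label, 8 devices at a time, instead of
# one by one. "zdb -l" dumps the ZFS label of a device if one exists.
ls /dev/dsk/*s0 2>/dev/null | xargs -n1 -P8 -I{} \
    sh -c 'zdb -l "{}" >/dev/null 2>&1 && echo "{}: ZFS label found"'
```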


Cheers

 Original Message 

ZFS maintains a cache of what pools were imported so that at boot time,
it will automatically try to re-import the pool.  The file is 
/etc/zfs/zpool.cache

and you can view its contents by using "zdb -C"

If the current state of affairs does not match the cache, then you can
export the pool, which will clear its entry in the cache.  Then retry the
import.
 -- richard




Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Richard Elling

Damon Atkins wrote:
Do we still need the zpool.cache file? I believe early versions of 
zpool used the cache to remember which zpools to import at boot.


Yes.

I understand newer versions of ZFS still use the cache, but also check 
whether the pool records the correct host name of the server, and 
will only import if the hostname matches.


This serves a different function.



I suggest that at boot ZFS should scan every disk for ZFS 
labels (multi-threaded), and import the ones with the correct host name 
and an import flag set, without using the cache file. Maybe just use the 
cache file for non-EFI disks/partitions, but without storing the pool 
name; you should also be able to tell ZFS to do a full scan which 
includes partitioned disks.


Full scans are a bad thing, because they cannot scale. This is one
good reason why zpool.cache exists.

What problem are you trying to solve?
-- richard



Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Mattias Pantzare
>> I suggest that at boot ZFS should scan every disk for ZFS
>> labels (multi-threaded), and import the ones with the correct host name and an import flag
>> set, without using the cache file. Maybe just use the cache file for non-EFI
>> disks/partitions, but without storing the pool name; you should be
>> able to tell ZFS to do a full scan which includes partitioned disks.
>
> Full scans are a bad thing, because they cannot scale. This is one
> good reason why zpool.cache exists.

What do you mean by cannot scale? Is it common to not use the majority
of disks available to a system?

If you "taste" all buses in parallel there should not be a scalability problem.

>
> What problem are you trying to solve?

It would be nice to be able to move disks around when a system is
powered off and not have to worry about a "cache" when I boot.


Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Richard Elling

Mattias Pantzare wrote:

I suggest that at boot ZFS should scan every disk for ZFS
labels (multi-threaded), and import the ones with the correct host name and an import flag
set, without using the cache file. Maybe just use the cache file for non-EFI
disks/partitions, but without storing the pool name; you should be
able to tell ZFS to do a full scan which includes partitioned disks.

Full scans are a bad thing, because they cannot scale. This is one
good reason why zpool.cache exists.



What do you mean by cannot scale? Is it common to not use the majority
of disks available to a system?
  


No, it is uncommon.


If you "taste" all buses in parallel there should not be a scalability problem.
  


Don't think "busses," think "networks."

NB, busses are on the way out; most modern designs are point-to-point
(SAS, SATA, USB) or networked (iSCSI, SAN, NAS).  Do you want to scan
the internet for LUNs? :-)



What problem are you trying to solve?



It would be nice to be able to move disks around when a system is
powered off and not have to worry about a "cache" when I boot.


Why are you worrying about it?
-- richard



Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Mattias Pantzare
On Mon, Mar 23, 2009 at 22:15, Richard Elling  wrote:
> Mattias Pantzare wrote:

 I suggest that at boot ZFS should scan every disk for ZFS
 labels (multi-threaded), and import the ones with the correct host name
 and an import flag set, without using the cache file. Maybe just use the
 cache file for non-EFI disks/partitions, but without storing the pool
 name; you should be able to tell ZFS to do a full scan which includes
 partitioned disks.

>>>
>>> Full scans are a bad thing, because they cannot scale. This is one
>>> good reason why zpool.cache exists.
>>>
>>
>> What do you mean by cannot scale? Is it common to not use the majority
>> of disks available to a system?
>>
>
> No, it is uncommon.

So, what do you mean by cannot scale?


>> If you "taste" all buses in parallel there should not be a scalability
>> problem.
>>
>
> Don't think "busses," think "networks."
>
> NB, busses are on the way out; most modern designs are point-to-point
> (SAS, SATA, USB) or networked (iSCSI, SAN, NAS).  Do you want to scan
> the internet for LUNs?

Do you know how a device is made available to ZFS, cache or no cache?

All busses have to be probed when you do a reconfigure boot or run devfsadm.

ZFS will only see the devices that you see in /dev.

If I can run zpool import in a reasonable amount of time, the cache is
not needed. Are there cases where I can't run zpool import?
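[Mattias's point about device discovery can be illustrated like this. A hedged sketch: it requires root, a live ZFS system, and the pool name is hypothetical.]

```shell
# Rebuild the /dev device tree after moving disks to another controller,
# then ask ZFS to search those devices for importable pools.
devfsadm -Cv          # clean up stale links and re-enumerate devices
zpool import          # with no arguments: list pools found on the scan
zpool import tank     # then import one of them by name
```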

>
>>> What problem are you trying to solve?
>>>
>>
>> It would be nice to be able to move disks around when a system is
>> powered off and not have to worry about a "cache" when I boot.
>
> Why are you worrying about it?

If I put my disks on a different controller, ZFS won't find them when I
boot. That is bad. It is also an extra level of complexity.


Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Tim
On Mon, Mar 23, 2009 at 4:45 PM, Mattias Pantzare wrote:

>
>
> If I put my disks on a different controller, ZFS won't find them when I
> boot. That is bad. It is also an extra level of complexity.
>

Correct me if I'm wrong, but wading through all of your comments, I believe
what you would like to see is zfs automatically scan if the cache is invalid
vs. requiring manual intervention, no?

It would seem to me this would be rather sane behavior and a legitimate
request to add this as an option.

--Tim


Re: [zfs-discuss] Zpools on USB & zpool.cache

2009-03-23 Thread Mattias Pantzare
On Tue, Mar 24, 2009 at 00:21, Tim  wrote:
>
>
> On Mon, Mar 23, 2009 at 4:45 PM, Mattias Pantzare 
> wrote:
>>
>>
>> If I put my disks on a different controller, ZFS won't find them when I
>> boot. That is bad. It is also an extra level of complexity.
>
> Correct me if I'm wrong, but wading through all of your comments, I believe
> what you would like to see is zfs automatically scan if the cache is invalid
> vs. requiring manual intervention, no?

That would be nice, but if there really is a problem with a full scan,
that would not be good, as it would trigger the very problem that the
cache is supposed to avoid.

But I don't understand why we need it in the first place, except as a
list of pools to import at boot.


Re: [zfs-discuss] Copying thousands of small files on an expanded ZFS pool crawls to poor performance - not on other pools.

2009-03-23 Thread Jim Mauro


Cross-posting to the public ZFS discussion alias.
There's nothing here that requires confidentiality, and
the public alias is a much broader audience with a larger
number of experienced ZFS users...

As to the issue - what is the free space disparity
across the pools? Is the one particular pool significantly
tighter on free space than the other pools ("zpool list")?
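[One way to eyeball the disparity Jim is asking about. A hedged sketch: the 80% threshold is a rule of thumb, not a documented limit.]

```shell
# Print each pool's percent-used and flag the tight ones, since ZFS
# allocation tends to slow down as a pool fills.
zpool list -H -o name,capacity | while read name cap; do
    pct=${cap%\%}
    if [ "$pct" -gt 80 ]; then
        echo "$name is $cap full - a likely suspect for slow copies"
    fi
done
```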

Thanks,
/jim

Nobel Shelby wrote:

Customer has many large ZFS pools. He does the same on all pools:
copying large amounts of small files (1-5K) overnight.
All but one particular pool (the one that has been expanded) give them this 
problem:
--within a few minutes the copying crawls and the zpool looks 
unresponsive.


Background:
He had to grow this particular pool twice over a period of time (it 
was 6TB and it grew by 4TB twice-now it is 14TB)

Solaris was U4 but now is U6.

They have limited the arc:
set zfs:zfs_arc_max=0x1
and
zfs:zfs_nocacheflush=1 (they have a 6540 array).

Does expanding the pool affect performance, and if so, what is the best 
way to recover

(other than rebuilding the pool)?

Thanks,
-Nobel






Re: [zfs-discuss] Zpools on USB & zpool.cache & zpool import

2009-03-23 Thread Damon Atkins

The zpool.cache file makes clustering complex. {Assume the man page is
still correct}

From the zpool man page:

cachefile=path | "none"

Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system. All
pools in this cache are automatically imported when the system boots.

Some environments, such as install and clustering, need to
cache this information in a different location so that pools are not
automatically imported.

Setting this property caches the pool configuration in a different
location that can later be imported with "zpool import -c".
... When the last pool using a cache file is exported or
destroyed, the file is removed.

zpool import [-d dir | -c cachefile] [-D]

Lists pools available to import. If the -d option is not
specified, this command searches for devices in
"/dev/dsk".
--
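[The clustering usage the man page alludes to looks roughly like this. A hedged sketch: the path and pool/device names are invented.]

```shell
# Keep the pool's configuration out of the default cache so that a
# failover node does not auto-import it at boot:
zpool create -o cachefile=/var/cluster/zpool.cache shared c6t0d0

# On takeover, the cluster framework imports from that private cache:
zpool import -c /var/cluster/zpool.cache shared
```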
A truss of zpool import indicates that it is not multi-threaded when
scanning for disks, i.e. it scans one disk at a time instead of X at a
time, so it does take a while to run. It would be nice if this were
multi-threaded.

If the cache file is to stay, it should do a scan of /dev to fix itself
at boot if something is wrong, and report to the console that it is
doing a scan, especially if it is not multi-threaded.

PS: it would be nice to have a 'zpool diskinfo' command that reports
whether the device belongs to an imported zpool or not, and all the
details about any zpool it can find on the disk, e.g. file systems (zdb
is only for ZFS "engineers", says the man page). 'zpool import' needs an
option to list the file systems of a pool which is not yet imported, and
their properties, so you can have more information about it before
importing it.
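[Until something like 'zpool diskinfo' exists, the closest existing tools are roughly these. A hedged sketch; the directory and device paths are hypothetical.]

```shell
# List pools visible on devices in /dev/dsk without importing them:
zpool import

# Search a non-default directory (e.g. a directory of vdev files):
zpool import -d /mypool_files

# Dump the raw ZFS label on a single device (the "engineers only" tool):
zdb -l /dev/dsk/c5t0d0s0
```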

Cheers
 Original Message 



On Mon, Mar 23, 2009 at 4:45 PM, Mattias Pantzare wrote:




If I put my disks on a different controller, ZFS won't find them when I
boot. That is bad. It is also an extra level of complexity.


Correct me if I'm wrong, but wading through all of your comments, I 
believe what you would like to see is zfs automatically scan if the 
cache is invalid vs. requiring manual intervention, no? 

It would seem to me this would be rather sane behavior and a 
legitimate request to add this as an option.


--Tim



