[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread Lieven De Geyndt
So I can manage the file system mounts/automounts using the legacy option, but I can't manage the auto-import of the pools. Or I should delete the zpool.cache file during boot. This message posted from opensolaris.org
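The workaround Lieven describes can be written out as follows. This is an illustrative sketch of an unsupported configuration, not a recommended procedure; the pool/dataset names and mount path are assumptions, while /etc/zfs/zpool.cache is the standard cache file location on Solaris of this era.

```shell
# Put a dataset under vfstab control instead of ZFS automounting
# ("legacy" mountpoint mode); pool and dataset names are illustrative.
zfs set mountpoint=legacy tank/data
# Corresponding /etc/vfstab line (fsck device field unused for zfs):
#   tank/data  -  /export/data  zfs  -  no  -

# Auto-import at boot is driven by the cache file, so exporting the pool
# (which removes its entry from the cache) prevents the boot-time import.
zpool export tank
# The brute-force variant discussed in the thread (unsupported):
rm /etc/zfs/zpool.cache
```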

[zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread Nicolas Dorfsman
> The hard part is getting a set of simple requirements. As you go into more complex data center environments you get hit with older Solaris revs, other OSs, SOX compliance issues, etc. etc. etc. The world where most of us seem to be playing with ZFS is on the lower end of the c

[zfs-discuss] Performance problem of ZFS ( Sol 10U2 )

2006-09-07 Thread Ivan Debnár
Hi, I deployed ZFS on our mailserver recently, hoping for eternal peace after running on UFS and moving files with each TB added. It is a mailserver - its mdirs are on a ZFS pool: capacity operations bandwidth pool used avail read w

Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread James C. McPherson
Lieven De Geyndt wrote: So I can manage the file system mounts/automounts using the legacy option, but I can't manage the auto-import of the pools. Or I should delete the zpool.cache file during boot. Doesn't this come back to the problem which is self-induced, namely that they are trying "

Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread Frank Cusack
On September 7, 2006 6:55:48 PM +1000 "James C. McPherson" <[EMAIL PROTECTED]> wrote: Doesn't this come back to the problem which is self-induced, namely that they are trying "poor man's cluster" ?? If you want cluster functionality then pay for a proper solution. If you can't afford a proper s

[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread Lieven De Geyndt
I know this is not supported. But we try to build a safe configuration, till ZFS is supported in Sun Cluster. The customer did order SunCluster, but needs a workaround till the release date. And I think it must be possible to set up.

[zfs-discuss] Re: Re: Re: ZFS forces system to paging to the point it is

2006-09-07 Thread Jürgen Keil
> We are trying to obtain a mutex that is currently held by another thread trying to get memory. Hmm, reminds me a bit of the zvol swap hang I got some time ago: http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150 I guess if the other thread is stuck trying to get memory, then

Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread James C. McPherson
Lieven De Geyndt wrote: I know this is not supported. But we try to build a safe configuration, till ZFS is supported in Sun Cluster. The customer did order SunCluster, but needs a workaround till the release date. And I think it must be possible to set up. So build them a configuration whic

Re: [zfs-discuss] ZFS with expanding LUNs

2006-09-07 Thread James C. McPherson
Eric Schrock wrote: On Thu, Aug 31, 2006 at 09:54:25AM -0700, Matthew Ahrens wrote: Theo Bongers wrote: Please can anyone tell me how to handle with a LUN that is expanded (on a RAID array or SAN storage)? and grow the filesystem without data-loss? How does ZFS looks at the volume. In other wor
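A hedged sketch of the procedure this thread converges on: at this point in time ZFS only notices a grown LUN when the pool is re-opened, so the usual sequence is to grow on the array side, relabel, and export/import. The pool name tank and the device name are assumptions.

```shell
# Grow the LUN on the array/SAN side first, then make Solaris re-read the
# label so the new capacity is visible (relabel via format -e if needed).
zpool export tank
zpool import tank   # pool re-opens and picks up the larger LUN
zpool list tank     # verify the new pool size
# On later ZFS releases this became automatic or per-device explicit:
#   zpool set autoexpand=on tank
#   zpool online -e tank c1t0d0
```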

Re[2]: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread Robert Milkowski
Hello James, Thursday, September 7, 2006, 1:44:48 PM, you wrote: JCM> Lieven De Geyndt wrote: >> I know this is not supported. But we try to build a safe configuration, >> till ZFS is supported in Sun Cluster. The customer did order SunCluster, >> but needs a workaround till the release date.

Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state

2006-09-07 Thread James C. McPherson
Robert Milkowski wrote: Hello James, Thursday, September 7, 2006, 1:44:48 PM, you wrote: JCM> Lieven De Geyndt wrote: I know this is not supported. But we try to build a safe configuration, till ZFS is supported in Sun Cluster. The customer did order SunCluster, but needs a workaround till t

Re: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-07 Thread Philippe Magerus - SUN Service - Luxembourg
Hi, This same dump has now shown up as a P1 pts-kernel esc of which I am the lucky owner. I noticed that arc.size is far smaller than the sum of all zio... caches. This of course might be caused by: 6456888 zpool scrubbing leads to memory exhaustion and system hang Except that there is no
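The numbers Philippe is comparing, ARC size versus the zio kmem caches, can be pulled from a live kernel or a crash dump with mdb. This is a rough sketch; the arc symbol and its member names are assumptions matching Nevada bits of this era.

```shell
# ARC current size and targets (struct members assumed: size, c, c_max).
echo "arc::print -d size c c_max" | mdb -k

# Per-cache kmem usage; the zio_buf_* caches are what the dump compares.
echo "::kmastat" | mdb -k | grep zio
```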

Re: [zfs-discuss] Re: Re: Re: ZFS forces system to paging to the point it is

2006-09-07 Thread Mark Maybee
Jürgen Keil wrote: We are trying to obtain a mutex that is currently held by another thread trying to get memory. Hmm, reminds me a bit of the zvol swap hang I got some time ago: http://www.opensolaris.org/jive/thread.jspa?threadID=11956&tstart=150 I guess if the other thread is stuck trying

Re[2]: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-07 Thread Robert Milkowski
Hello Mark, Thursday, September 7, 2006, 12:32:32 AM, you wrote: MM> Robert Milkowski wrote: >> On Wed, 6 Sep 2006, Mark Maybee wrote: >>> Robert Milkowski wrote: > ::dnlc!wc 1048545 3145811 76522461 >>> Well, that explains half your problem... and maybe a

Re: [zfs-discuss] Performance problem of ZFS ( Sol 10U2 )

2006-09-07 Thread Mark Maybee
Ivan, What mail clients use your mail server? You may be seeing the effects of: 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue parallel IOs when fsyncing This bug was fixed in nevada build 43, and I don't think it made it into s10 update 2. It will, of course, be in upd

RE: [zfs-discuss] Performance problem of ZFS ( Sol 10U2 )

2006-09-07 Thread Ivan Debnár
Hi, thanks for the reply. The load is like this: 20 msg/s incoming, 400 simultaneous IMAP connections (select, search, fetch-env), 60 new web sessions/s, 100 simultaneous POP3. Is there a way to get that "patch" to try? Things are really getting worse down here :-( It might make sense, since the mail server

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread Torrey McMahon
Nicolas Dorfsman wrote: The hard part is getting a set of simple requirements. As you go into more complex data center environments you get hit with older Solaris revs, other OSs, SOX compliance issues, etc. etc. etc. The world where most of us seem to be playing with ZFS is on the lower end o

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Darren Dunham
> Lieven De Geyndt wrote: So I can manage the file system mounts/automounts using the legacy option, but I can't manage the auto-import of the pools. Or I should delete the zpool.cache file during boot. Doesn't this come back to the problem which is self-induced, namely that t

Re: [zfs-discuss] Performance problem of ZFS ( Sol 10U2 )

2006-09-07 Thread eric kustarz
Ivan Debnár wrote: Hi, I deployed ZFS on our mailserver recently, hoping for eternal peace after running on UFS and moving files with each TB added. It is a mailserver - its mdirs are on a ZFS pool: capacity operations bandwidth pool us

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread Richard Elling - PAE
Torrey McMahon wrote: Raid calculations take CPU time but I haven't seen numbers on ZFS usage. SVM is known for using a fair bit of CPU when performing R5 calculations and I'm sure other OS have the same issue. EMC used to go around saying that offloading raid calculations to their storage arra

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 11:32:18AM -0700, Darren Dunham wrote: > I know that VxVM stores the "autoimport" information on the disk itself. It sounds like ZFS doesn't and it's only in the cache (is this correct?) I'm not sure what 'autoimport' is, but ZFS always stores enough information on
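Eric's point, that the on-disk labels always carry enough information to open the pool and the cache file only drives boot-time auto-import, is what the import commands rely on. A quick sketch, with the pool name tank assumed:

```shell
zpool import          # scan devices and list pools available for import
zpool import tank     # refuses if the pool appears active on another host
zpool import -f tank  # force the import, asserting the other host is down
```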

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread Peter Rival
Richard Elling - PAE wrote: Torrey McMahon wrote: Raid calculations take CPU time but I haven't seen numbers on ZFS usage. SVM is known for using a fair bit of CPU when performing R5 calculations and I'm sure other OS have the same issue. EMC used to go around saying that offloading raid calcu

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread James Dickens
On 9/7/06, Torrey McMahon <[EMAIL PROTECTED]> wrote: Nicolas Dorfsman wrote: >> The hard part is getting a set of simple requirements. As you go into more complex data center environments you get hit with older Solaris revs, other OSs, SOX compliance issues, etc. etc. etc. The worl

Re: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-07 Thread Matthew Ahrens
Philippe Magerus - SUN Service - Luxembourg wrote: there should be a tunable for max number of cached znodes/dnodes as there is in other file systems. ... As for arc.c_max, it should be settable via /etc/system. No, there should not be tunables. The system should simply work. We need to di

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320 - offtopic

2006-09-07 Thread Richard Elling - PAE
[EMAIL PROTECTED] wrote: This is the case where I don't understand Sun's politics at all: Sun doesn't offer a really cheap JBOD which can be bought just for ZFS. And don't even tell me about 3310/3320 JBODs - they are horribly expensive :-( Yep, multipacks are EOL for some time now -- killed by b

RE: [zfs-discuss] Performance problem of ZFS ( Sol 10U2 )

2006-09-07 Thread Ivan Debnár
Hi, thanks for the response. As this is a closed-source mailserver (CommuniGate Pro), I can't give a 100% answer, but the writes that I see taking too much time (15-30 secs) are writes from the temp queue to final storage, and from my understanding, they are sync so the queue manager can guarantee they are
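One way to confirm Ivan's suspicion that the slow writes are synchronous is to time fsync calls per process with DTrace. This is a generic sketch using the syscall provider, available on Solaris 10, and is not specific to CommuniGate:

```shell
# Latency distribution of fsync(2) per process name; long tails here
# would confirm that synchronous queue writes are what is stalling.
dtrace -n '
syscall::fsync:entry { self->ts = timestamp; }
syscall::fsync:return /self->ts/ {
  @lat[execname] = quantize(timestamp - self->ts);
  self->ts = 0;
}'
```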

[zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-07 Thread Anton B. Rang
The bigger problem with system utilization for software RAID is the cache, not the CPU cycles proper. Simply preparing to write 1 MB of data will flush half of a 2 MB L2 cache. This hurts overall system performance far more than the few microseconds that XORing the data takes. (A similar effect

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Frank Cusack
On September 7, 2006 11:50:43 AM -0700 Eric Schrock <[EMAIL PROTECTED]> wrote: On Thu, Sep 07, 2006 at 11:32:18AM -0700, Darren Dunham wrote: Let's imagine that I lose a motherboard on a SAN host and it crashes. To get things going I import the pool on another host and run the apps while I repa

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 01:09:47PM -0700, Frank Cusack wrote: > That zfs needs to address. > What if I simply lose power to one of the hosts, and then power is restored? Then use a layered clustering product - that's what this is for. For example, SunCluster doesn't use the cache file in th

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Darren Dunham
> I know that VxVM stores the "autoimport" information on the disk itself. It sounds like ZFS doesn't and it's only in the cache (is this correct?) I'm not sure what 'autoimport' is, but ZFS always stores enough information on the disks to open the pool, provided all the devices

Re: [zfs-discuss] Performance problem of ZFS ( Sol 10U2 )

2006-09-07 Thread Mark Maybee
Ivan Debnár wrote: Hi, thanks for the response. As this is a closed-source mailserver (CommuniGate Pro), I can't give a 100% answer, but the writes that I see that take too much time (15-30 secs) are writes from the temp queue to final storage, and from my understanding, they are sync so the queue manager c

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 01:52:33PM -0700, Darren Dunham wrote: > What are the problems that you see with that check? It appears similar to what VxVM has been using (although they do not use the `hostid` as the field), and that appears to have worked well in most cases. I don't know wha

Re: [zfs-discuss] Re: How to destroy a pool which you can't import

2006-09-07 Thread Sanjay Nadkarni
Darren Dunham wrote: I know that VxVM stores the "autoimport" information on the disk itself. It sounds like ZFS doesn't and it's only in the cache (is this correct?) I'm not sure what 'autoimport' is, but ZFS always stores enough information on the disks to open the pool, provided al

[zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Anton B. Rang
A determined administrator can always get around any checks and cause problems. We should do our very best to prevent data loss, though! This case is particularly bad since simply booting a machine can permanently damage the pool. And why would we want a pool imported on another host, or not mar

Re: [zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Darren Dunham
> And why would we want a pool imported on another host, or not marked > as belonging to this host, to show up as faulted? That seems an odd > use of the word. Unavailable, perhaps, but not faulted. It certainly changes some semantics... In a UFS/VxVM world, I still have filesystems referenced i

Re: [zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 06:07:40PM -0700, Anton B. Rang wrote: > > And why would we want a pool imported on another host, or not marked > as belonging to this host, to show up as faulted? That seems an odd > use of the word. Unavailable, perhaps, but not faulted. > That's FMA terminology, and

Re: [zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 06:31:30PM -0700, Darren Dunham wrote: > > It certainly changes some semantics... > > In a UFS/VxVM world, I still have filesystems referenced in /etc/vfstab. > I might expect (although have seen counterexamples), that if my VxVM > group doesn't autoimport, then obviously