> Hi Rainer,
>
> This is a long thread and I wasn't commenting on your previous
> replies regarding mirror manipulation. If I was, I would have done
> so directly. :-)
Yes, I realize that. I replied to your post because I was agreeing with
you. :-) I was just extending your comment by i
> Nenad,
>
> I've seen this solution offered before, but I would
> not recommend this
> except as a last resort, unless you didn't care about
> the health of
> the original pool.
This is emphatically not what I was requesting, in fact. I agree; I
would be highly suspicious of the data's
> So why don't you state the actual time it takes to "come up"?
I can't because I don't know. The DBAs have been very difficult about sharing
the information. It took several emails and a meeting before we even found out
that the 10GB SGA DB didn't start up "quick enough". We also hone
We're running Update 3. Note that the DB _does_ come up, just not in the two
minutes they were expecting. If they wait a few moments after their two-minute
start-up attempt, it comes up just fine.
I was looking at vmstat, and it seems to tell me what I need. It's just that I
need to present the
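For example, I'm thinking of something like this during their next start-up
attempt (the interval is just a guess at what's useful):

vmstat 5                   # watch the "free" and "sr" columns while the DB starts
echo ::memstat | mdb -k    # kernel vs. user memory breakdown, before and after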
We cannot go to an OpenSolaris Nevada build for political as well as support
reasons. It's not an option.
We have been running several other systems using Oracle on ZFS without issues.
The current problem we have is more about getting the DBAs to understand how
things have changed with Sol10/Z
Thanks, I'll give it a whirl.
Rainer
Thanks. As above, knowing that the ARC takes time to ramp up strongly suggests
that it won't be an issue on a normally booting system. It sounds like your
needs are much greater, and that your databases are running fine.
I can take this information to the DBAs and use it to "manage their
expecta
> After bootup, ZFS should have near zero memory in the
> ARC.
This makes sense, and I have no idea how long the server had been running
before the test. We can use the above information to help manage their
expectations; on boot-up, ARC will be low, so the de-allocation of resources
won't be a
The updated information states that the kernel setting is only for the current
Nevada build. We are not going to use the kernel debugger method to change the
setting on a live production system (and do this every time we need to reboot).
We're back to trying to set their expectations more realist
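For completeness, this is my understanding of the two approaches, with a
made-up 4GB cap as the example (we won't be doing either on production):

Live, with the kernel debugger, on every boot:

echo "arc::print -a c_max" | mdb -k
echo "<address printed above>/Z 0x100000000" | mdb -kw    # cap the ARC at 4GB

Or, in /etc/system, which is apparently only honoured on current Nevada builds:

set zfs:zfs_arc_max = 0x100000000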
Thanks for the feedback. Please see below.
> ZFS should give back memory used for cache to the system
> if applications are demanding it. Right, it should, but sometimes it won't.
>
> However, with databases there's a simple workaround - as
> you know how much RAM all databases will consume, at least you
Thanks for the links, but this is not really the kind of data I'm looking for.
These focus more on I/O. I need information on the memory caching, and so on.
Specifically, I need data that shows how starting up a 10GB SGA database on a
16GB machine will not be able to flush the ZFS cache as quickl
Greetings, all.
Does anyone have a good whitepaper or three on how ZFS uses memory and swap? I
did some Googling, but found nothing that was useful.
The reason I ask is that we have a small issue with some of our DBAs. We have
a server with 16GB of memory, and they are looking at moving over d
Hello.
> 2. Most of the cases where customers ask for "zpool
> remove" can be solved
> with zfs send/receive or with zpool replace. Think
> Pareto's 80-20 rule.
This depends on how you define "most". In the cases I am looking at, I would
have to disagree.
> 2a. The cost of doing 2., includin
Jeremy is correct. There is actually an RFE open for a "zpool split" that
would allow you to detach the second disk while keeping the vdev data
(and thus allow you to pull in the data on the detached disk using some sort
of "import"-type command).
Rainer
Al Hopper wrote:
> On Fri, 26 Jan 2007, Rainer Heilke wrote:
>
>>> So, if I was an enterprise, I'd be willing to keep
>>> enough empty LUNs
>>> available to facilitate at least the migration of
>>> one or more filesystems
>>> if not complet
Richard Elling wrote:
> Rainer Heilke wrote:
>
>>> So, if I was an enterprise, I'd be willing to keep
>>> enough empty LUNs
>>> available to facilitate at least the migration of
>>> one or more filesystems
>>> if not complete pools.
> So, if I was an enterprise, I'd be willing to keep
> enough empty LUNs
> available to facilitate at least the migration of
> one or more filesystems
> if not complete pools.
You might be, but don't be surprised when the Financials folks laugh you out of
their office. Large corporations do not
> ...such that a snapshot (cloned if need be) won't do
> what you want?
Nope. We're talking about taking a whole disk in a mirror and doing something
else with it, without touching the data on the other parts of the mirror.
Rainer
> While contemplating "zpool split" functionality, I
> wondered whether we
> really want such a feature because
>
> 1) SVM allows it and admins are used to it.
> or
> 2) We can't do what we want using zfs send |zfs recv
I don't think this is an either/or scenario. There are simply too many times
> For the "clone another system" zfs send/recv might be
> useful
Keeping in mind that you only want to send/recv one half of the ZFS mirror...
Rainer
> but the only acceptable way is the host based
> mirror with vxvm. so we can migrate manuelly in a few
> weeks but without downtime.
Detaching mirrors is actually easy with ZFS. I've done it several times. Look
at:
zpool detach pool device
The problem here is that the detached side loses all i
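A quick sketch, with made-up pool and device names, just to show the shape of
it:

zpool status tank            # identify the mirror half you want to pull
zpool detach tank c1t3d0     # the pool stays online; the detached disk's label is cleared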
If you are referring to shrinking a pool/file system, where I work this is
considered very high on the list. It isn't a truly dynamic file system if we
can't shrink it.
As a practical example, you have a test server with several projects being
worked on. When a project finishes (for whatever re
Rats, didn't proof accurately. For "UFS", I meant NFS.
Rainer
Sorry, I should have qualified that "effective" better. I was specifically
speaking in terms of Solaris and price. For companies without a SAN (especially
using Linux), something like a NetApp Filer using UFS is the way to go, I
realize. If you're running Solaris, the cost of QFS becomes a major
> If you plan on RAC, then ASM makes good sense. It is
> unclear (to me anyway)
> if ASM over a zvol is better than ASM over a raw LUN.
Hmm. I thought ASM was really the _only_ effective way to do RAC, but then, I'm
not a DBA (and don't want to be ;-) We'll just be using raw LUNs. While the
z
Thanks for the detailed explanation of the bug. This makes it clearer to us
what's happening, and why (which is something I _always_ appreciate!).
Unfortunately, U4 doesn't buy us anything for our current problem.
Rainer
> > This problem was fixed in snv_48 last September and will be
> > in S10_U4.
U4 doesn't help us any. We need the fix now. :-( By the time U4 is out, we may
even be finished with (or certainly well along on) our RAC/ASM migration, and
this whole issue will be moot.
Rainer
> Bag-o-tricks-r-us, I suggest the following in such a case:
>
> - Two ZFS pools
> - One for production
> - One for Education
The DBAs are very resistant to splitting up our environments. There are
nine on the test/devl server! So, we're going to put the DB files and redo logs
on separate
> The limit is documented as "1 million inodes per TB". So something
> must not have gone right. But many people have complained and
> you could take the newfs source and fix the limitation.
"Patching" the source ourselves would not fly very far, but thanks for the
clarification. I guess I
It turns out we're probably going to go the UFS/ZFS route, with 4 filesystems
(the DB files on UFS with directio).
It seems that the pain of moving from a single-node ASM to a RAC'd ASM is too
great, and not worth it. The DBA group decided to do the migration to UFS for
the DB files now, and then t
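If we do go that route, my assumption is that the DB filesystems get mounted
with forcedirectio, roughly like this (device and mount point made up):

mount -F ufs -o forcedirectio /dev/dsk/c3t0d0s6 /u02/oradata

or the equivalent option in the /etc/vfstab entry.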
We had a 2TB filesystem. No matter what options I set explicitly, the UFS
filesystem kept getting written with a 1 million file limit. Believe me, I
tried a lot of options, and they kept getting set back on me.
After a fair bit of poking around (Google, Sun's site, etc.) I found several
other n
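For what it's worth, this is the sort of thing I was trying (device and mount
point made up, from memory):

newfs -i 8192 /dev/rdsk/c5t0d0s0     # ask for one inode per 8KB of data
df -o i /bigfs                       # yet the inode count still came back at about a million per TB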
> Also, as a workaround, you could disable the ZIL if it's
> acceptable to you
> (in case of a system panic or hard reset you can end up
> with an
> unrecoverable database).
Again, not an option, but thanks for the pointer. I read a bit about this last
week, and it sounds way too scary.
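For the record, my understanding is that the suggested workaround is a line
like this in /etc/system, which we will not be adding:

set zfs:zil_disable = 1

A panic or hard reset with the intent log off is exactly the "unrecoverable
database" scenario described above.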
Rainer
Thanks for the feedback!
This does sound like what we're hitting. From our testing, you are absolutely
correct--separating out the parts is a major help. The big problem we still
see, though, is doing the clones/recoveries. The DBA group clones the
production environment for Education. Since b
> What do you mean by UFS wasn't an option due to
> number of files?
Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
Financials environment well exceeds this limitation.
> Also do you have any tunables in system?
> Can you send 'zpool status' output? (raidz, mirror,
> ...
The DBA team doesn't want to do another test. They have "made up their minds".
We have a meeting with them tomorrow, though, and will try to convince them to
run one more test so that we can try the mdb and fsstat tools. (The admin doing the
tests was using iostat, not fsstat.) I, at least, am inte
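If they agree, the rough plan for the re-test is (interval is a guess):

fsstat zfs 5                 # per-fstype operation counts while the DB starts
echo ::memstat | mdb -k      # kernel vs. user memory breakdown, before and after the start-up attempt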
> Rainer Heilke,
>
> You have 1/4 of the amount of memory that the 2900
> system is capable of (192GB, I think).
Yep. The server does not hold the application (three-tier architecture), so this
is the standard build we bought. The memory has not shown any problems. All
err
> What hardware is used? Sparc? x86 32-bit? x86
> 64-bit?
> How much RAM is installed?
> Which version of the OS?
Sorry, this is happening on two systems (test and production). They're both
Solaris 10, Update 2. Test is a V880 with 8 CPUs and 32GB; production is an
E2900 with 12 dual-core CPU
Greetings, everyone.
We are having issues with some Oracle databases on ZFS. We would appreciate any
useful feedback you can provide.
We are using Oracle Financials, with all databases, control files, and logs on
one big 2TB ZFS pool that is on a Hitachi SAN. (This is what the DBA group
wanted
> Seems that "break" is a more obvious thing to do with
> mirrors; does this
> allow me to peel off one bit of a three-way mirror?
>
> Casper
I would think that this makes sense, and splitting off one side of a two-way
mirror is more the edge case (though emphatically required/desired).
Rainer
Well, I haven't overwritten the disk, in the hopes that I can get the data
back. So, how do I go about copying or otherwise repairing the vdevs?
Rainer
This makes sense for the most part (and yes, I think it should be done by the
file system, not by manual grovelling through vdev labels).
The one difference I would make is that it should not fail if the pool
_requires_ a scrub (but yes, if a scrub is in progress...). I worry about this
requirem
Neither clear nor scrub cleans up the errors on the pool. I've done this about a
dozen times in the past several days, without success.
Rainer
Sorry for the delay...
No, it doesn't. The format command shows the drive, but zpool import does not
find any pools. I've also used the detached bad SATA drive for testing; no go.
Once a drive is detached, there seems to be no (or not enough?) information about
the pool left to allow an import.
I have
Replying to myself here...
ZFS is now in a totally confused state. Trying to attach SATA disk 4 to the
pool, I get an error saying a zpool exists on c4d0s0. Yet, when I export the
pool on SATA disk 5 and disconnect the drive, and try to import the pool on
disk 4, I'm told there aren't any.
zpo
After exporting the pool on the two SATA drives, shutting down and
disconnecting them, I tried importing the pool on the EIDE drive. I get the
message about there being no pools to import. This was done using both "zpool
import" and "zpool import ". So, it does seem that something gets
cleared
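The variations I can think of trying (no luck with any so far, and my
understanding is that detach clears the label anyway):

zpool import                 # scan the default /dev/dsk
zpool import -d /dev/dsk     # point the scan at a directory explicitly
zpool import -D              # only lists destroyed pools, not detached disks, but worth a look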
Nope. I get "no pools available to import". I think that detaching the drive
cleared any pool information/headers on the drive, which is why I can't figure
out a way to get the data/pool back.
There is some new data on the SATA drives, but I've also kept a copy of it
elsewhere. I don't mind los
So, from the deafening silence, am I to assume there's no way to tell ZFS that
the EIDE drive was a zpool, and pull it into a new pool in a manner that I can
(once again) see the data that's on the drive? :-(
Rainer
Greetings, all.
I put myself into a bit of a predicament, and I'm hoping there's a way out.
I had a drive (EIDE) in a ZFS mirror die on me. Not a big deal, right? Well, I
bought two SATA drives to build a new mirror. Since they were about the same
size (I wanted bigger drives, but they were out
I can't be specific with my reply to the second question, as I've never done
it, but do a search for "re-silvering". It is functionality that is supposed
to be there.
As to the first question, absolutely! I have upgraded my internal server twice,
and both times, I was able to see the old ZFS