On Tue, 2008-04-29 at 15:02 +0200, Ulrich Graef wrote:
> Hi,
>
> ZFS won't boot on my machine.
>
> I discovered that the lu manpages are there, but not
> the new binaries.
> So I tried to set up ZFS boot manually:
>
> > zpool create -f Root c0t1d0s0
> >
> > lucreate -n nv88_zfs -A "nv88 fin
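For reference, the manual ZFS-root migration being attempted above generally
follows a pattern like the sketch below; the pool and boot-environment names
are placeholders, and the exact lucreate options depend on the build:

  # create a pool for the ZFS root on a labeled slice (device name is an example)
  zpool create -f rpool c0t1d0s0

  # create a new boot environment inside that pool with Live Upgrade
  lucreate -n nv88_zfs -p rpool

  # activate the new BE and reboot into it
  luactivate nv88_zfs
  init 6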
| Still, I'm curious -- why lots of pools? Administration would be
| simpler with a single pool containing many filesystems.
The short answer is that it is politically and administratively easier
to use (at least) one pool per storage-buying group in our environment.
This got discussed in more d
Indeed, things should be simpler with fewer (generally one) pool.
That said, I suspect I know the reason for the particular problem
you're seeing: we currently do a bit too much vdev-level caching.
Each vdev can have up to 10MB of cache. With 132 pools, even if
each pool is just a single iSCSI de
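(Back-of-the-envelope: at 10MB of vdev cache per device, 132 single-vdev pools
already account for roughly 1.3GB of kernel memory, independent of the ARC.)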
A silly question: Why are you using 132 ZFS pools as opposed to a
single ZFS pool with 132 ZFS filesystems?
--Bill
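For comparison, the single-pool layout being suggested would look roughly like
this; the device, pool, and filesystem names are placeholders:

  # one pool built from all the devices
  zpool create tank c0t1d0 c0t2d0

  # one lightweight filesystem per group instead of one pool per group
  zfs create tank/group1
  zfs create tank/group2

  # per-group space limits via quotas and reservations
  zfs set quota=50G tank/group1
  zfs set reservation=20G tank/group2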
On Wed, Apr 30, 2008 at 01:53:32PM -0400, Chris Siebenmann wrote:
> I have a test system with 132 (small) ZFS pools[*], as part of our
> work to validate a new ZFS-based fileserve
I have a test system with 132 (small) ZFS pools[*], as part of our
work to validate a new ZFS-based fileserver environment. In testing,
it appears that we can produce situations that will run the kernel out
of memory, or at least out of some resource such that things start
complaining 'bash: fork:
Hi Ulrich,
The updated lucreate.1m man page was accidentally integrated into
build 88.
If you review the build 88 instructions, here:
http://opensolaris.org/os/community/zfs/boot/
You'll see that we're recommending patience until the install/upgrade
support integrates.
If you are running the tran
When we installed the Marvell driver patch 125205-07 on our X4500 a few months
ago and it started crashing, Sun support just told us to back out that patch.
The system has been stable since then.
We are still running Solaris 10 11/06 on that system. Is there an advantage to
using 125205-07 an
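For reference, backing out a Solaris patch is done with patchrm; this sketch
assumes the patch was installed with its backout data saved (i.e. without -d):

  # remove the Marvell driver patch, then reboot so the previous driver loads
  patchrm 125205-07
  init 6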
On Tue, 29 Apr 2008, Jonathan Loran wrote:
>>
> Au contraire, Bob. I'm not going to boost Linux, but in this department
> they've tried to do it right. If you use Linux autofs V4 or higher, you
> can use Sun-style maps (except there are no direct maps in V4; you need V5
> for direct maps). For our ho
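A minimal Sun-style setup under Linux autofs looks roughly like this; the
server name, paths, and mount options are placeholders:

  # /etc/auto.master -- mount point and the indirect map that serves it
  /home   /etc/auto.home

  # /etc/auto.home -- Sun-style indirect map: key, options, location
  # '&' is replaced by the key (the user name) at mount time
  *   -rw,hard,intr   fileserver:/export/home/&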
dh wrote:
> Hello eschrock,
>
> I'm a newbie on Solaris; would you tell me how I can get/install build 89 of
> Nevada?
>
> Fabrice.
Hi Fabrice,
I think a good place to start is http://www.opensolaris.org/os/newbies/ - I
don't know whether they give you access to build 89 yet, but you can
cert
On Apr 29, 2008, at 9:35 PM, Tim Wood wrote:
> Hi,
> I have a pool /zfs01 with two sub-filesystems /zfs01/rep1 and
> /zfs01/rep2. I used 'zfs share' to make all of these mountable
> over NFS, but clients have to mount either rep1 or rep2
> individually. If I try to mount /zfs01 it s
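Each ZFS filesystem is a separate NFS share, so a client that mounts only the
parent typically sees empty directories where rep1 and rep2 live. A rough
sketch of the usual workaround, assuming the pool is mounted at /zfs01 as in
the post and a Solaris client (Linux clients would use mount -t nfs):

  # server: sharenfs is inherited, so setting it on the parent shares the children too
  zfs set sharenfs=on zfs01

  # client: mount each child filesystem explicitly
  mount -F nfs server:/zfs01/rep1 /mnt/rep1
  mount -F nfs server:/zfs01/rep2 /mnt/rep2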
> Hi There,
>
> Is there any chance you could go into a little more
> detail, perhaps even document the procedure, for the
> benefit of others experiencing a similar problem?
I have some spare time this weekend and will try to give more details.
Did you see http://www.opensolaris.org/jive/thread.jspa?messageID=220125
I managed to recover my lost data with simple mdb commands.
--Lukas
Hello eschrock,
I'm a newbie on Solaris; would you tell me how I can get/install build 89 of
Nevada?
Fabrice.
Hi Rick,
OK, thanks for clarifying.
As it seems there are different devices with (1) mixed-speed NICs and
(2) mixed-category cabling in your setup, I will simplify things by saying
that if you want to get much faster speeds, I think you'll need to ensure
you (1) use at least Cat.