Spare a thought also for the remote serviceability aspects of these
systems. If customers raise calls/escalations against such systems,
then our remote support/solution centre staff would find such output
useful in identifying and verifying the config.
I don't have visibility of the Exp
Karen,
This looks like you were using the internal RAID on a T2000; is that right?
If so, is it possible that you did not relabel the drives after you
deleted the volume?
After deleting a RAID volume using the onboard controller, you must
relabel the affected drives.
The 1064 controller utilizes
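A rough sketch of that relabel step, for reference; the device name
c1t0d0 below is only a placeholder, so substitute the disks that were
members of the deleted volume (the volume itself would have been
deleted with raidctl(1M)):

    # format -e
    (select the affected disk, e.g. c1t0d0, from the menu)
    format> label
    (answer the prompts to write a standard label, then quit)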
Included below is a thread which dealt with trying to find the
packages necessary for a minimal Solaris 10 U2 install with ZFS
functionality. In addition to SUNWzfskr, SUNWzfsr and SUNWzfsu, the
SUNWsmapi package needs to be installed. The libdiskmgt.so.1 library is
required for the zpool(1M
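For anyone repeating this, a sketch of how those packages might be laid
down on an alternate root; the -R and -d paths below are placeholders,
not the exact ones used in the thread:

    # add the ZFS packages plus SUNWsmapi, which delivers libdiskmgt.so.1
    pkgadd -R /a -d /cdrom/cdrom0/Solaris_10/Product \
        SUNWsmapi SUNWzfskr SUNWzfsr SUNWzfsu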
No argument from me. For better or for worse, most of the customers I
speak with minimize their OS distributions. The more we can accurately
describe dependencies within our current methods, the better.
/jason
Jim Connors wrote:
Included below is a thread which dealt with trying to fi
On Tue, Jul 25, 2006 at 10:25:04AM -0400, Jim Connors wrote:
>
> Included below is a thread which dealt with trying to find the
> packages necessary for a minimal Solaris 10 U2 install with ZFS
> functionality. In addition to SUNWzfskr, SUNWzfsr and SUNWzfsu, the
> SUNWsmapi package needs to be
Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these
systems. If customers raise calls/escalations against such systems, then
our remote support/solution centre staff would find such output
useful in identifying and verifying the config.
I don't have vis
Guys,
Thanks for the help so far; now come the more interesting questions ...
Piggybacking off of some work being done to minimize Solaris for
embedded use, I have a version of Solaris 10 U2 with ZFS functionality
with a disk footprint of about 60MB. Creating a miniroot based upon
this im
You need the following file:
/etc/zfs/zpool.cache
This file 'knows' about all the pools on the system. These pools can
typically be discovered via 'zpool import', but we can't do this at boot
because:
a. It can be really, really expensive (tasting every disk on the system)
b. Pools can
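To illustrate the discovery path being described (the pool name 'tank'
is just an example):

    # with no /etc/zfs/zpool.cache the pools are not opened at boot;
    # 'zpool import' with no arguments tastes the disks and lists what it finds
    zpool import
    # importing a pool by name re-creates its entry in the cache file
    zpool import tank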
I understand. Thanks.
Just curious: ZFS manages NFS shares. Have you given any thought to
what might be involved for ZFS to manage SMB shares in the same manner?
This all goes towards my "stateless OS" theme.
-- Jim C
Eric Schrock wrote:
You need the following file:
/etc/zfs/z
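For context, the NFS half of Jim's question above comes down to a single
property; 'tank/home' is just an example dataset:

    # ZFS-managed NFS sharing: the filesystem is shared and unshared as
    # the property changes, with no /etc/dfs/dfstab entry needed
    zfs set sharenfs=on tank/home
    zfs get sharenfs tank/home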
On Tue, Jul 25, 2006 at 01:07:59PM -0400, Jim Connors wrote:
>
> I understand. Thanks.
>
> Just curious: ZFS manages NFS shares. Have you given any thought to
> what might be involved for ZFS to manage SMB shares in the same manner?
> This all goes towards my "stateless OS" theme.
Yep, this
I've recently started doing ON nightly builds on zfs filesystems on the
internal ATA disk of a Blade 1500 running snv_42. Unfortunately, the
builds are extremely slow compared to building on an external IEEE 1394
disk attached to the same machine:
ATA disk:
Elapsed build time (DEBUG)
I've run into this myself (I am in a university setting). After reading bug
ID 6431277 (URL below for noobs like myself who didn't know what "see 6431277"
meant):
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
...it's not clear to me how this will be resolved. What I'd r
On Tue, 2006-07-25 at 13:45, Rainer Orth wrote:
> At other times, the kernel time can be even as high as 80%. Unfortunately,
> I've not been able to investigate how usec_delay is called since there's no
> fbt provider for that function (nor for the alternative entry point
> drv_usecwait found in u
Eric Schrock wrote:
You need the following file:
/etc/zfs/zpool.cache
So as a workaround (or more appropriately, a kludge) would it be
possible to:
1. At boot time do a 'zpool import' of some pool guaranteed to exist.
For the sake of this discussion call it 'system'
2. Have /
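A minimal sketch of what step 1 of that kludge could look like as a
boot-time script; the pool name 'system' comes from the proposal above,
and the script path is hypothetical:

    #!/bin/sh
    # /etc/rc3.d/S99zfsimport (hypothetical) -- import a well-known pool
    # so that /etc/zfs/zpool.cache is regenerated on a stateless root
    if ! zpool list system > /dev/null 2>&1 ; then
            zpool import -f system
    fi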
Bill,
> In the future, you can try:
>
> # lockstat -s 10 -I sleep 10
>
> which aggregates on the full stack trace, not just the caller, during
> profiling interrupts. (-s 10 sets the stack depth; tweak up or down to
> taste).
Nice. Perhaps lockstat(1M) should be updated to include something l
On Tue, 25 Jul 2006, Brad Plecs wrote:
> I've run into this myself (I am in a university setting). After reading
> bug ID 6431277 (URL below for noobs like myself who didn't know what "see
> 6431277" meant):
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
>
> ...it's no
A couple of weeks ago, there was a discussion on the best system for ZFS
and I mentioned that AMD would reduce pricing and withdraw some of the
939-pin (non AM2) processors from the marketplace.
Update: I see a dual-core AMD X2 4400+ (1MB cache per core) processor on
www.monarchcomputers.com for
> First, ZFS allows one to take advantage of large, inexpensive Serial ATA
> disk drives. Paraphrased: "ZFS loves large, cheap SATA disk drives". So
> the first part of the solution looks (to me) as simple as adding some
> cheap SATA disk drives.
I hope not. We have quotas available for a reaso
> First, ZFS allows one to take advantage of large, inexpensive Serial ATA
> disk drives. Paraphrased: "ZFS loves large, cheap SATA disk drives". So
> the first part of the solution looks (to me) as simple as adding some
> cheap SATA disk drives.
>
> Next, after extra storage space has been adde
I would like to make a couple of additions to the proposed model.
Permission Sets.
Allow the administrator to define a named set of permissions, and then
use the name as a permission later on. Permission sets would be
evaluated dynamically, so that changing the set definition would cha
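To make the idea concrete, one purely illustrative syntax for such sets;
the '@' prefix, the set name and the command forms are invented for the
example, not part of the proposal itself:

    # define a named permission set on a filesystem
    zfs allow -s @basic snapshot,clone,mount tank/home
    # grant the set to a user; because sets are evaluated dynamically,
    # editing @basic later changes what jdoe may do everywhere it is used
    zfs allow jdoe @basic tank/home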
On Tue, 2006-07-25 at 14:36, Rainer Orth wrote:
> Perhaps lockstat(1M) should be updated to include something like
> this in the EXAMPLES section.
I filed 6452661 with this suggestion.
> Any word when this might be fixed?
I can't comment in terms of time, but the engineer working on it has a
pa
Bill,
> On Tue, 2006-07-25 at 14:36, Rainer Orth wrote:
> > Perhaps lockstat(1M) should be updated to include something like
> > this in the EXAMPLES section.
>
> I filed 6452661 with this suggestion.
excellent, thanks.
> > Any word when this might be fixed?
>
> I can't comment in terms of ti
On Tue, Jul 25, 2006 at 11:13:16AM -0700, Brad Plecs wrote:
>
> If we must contain snapshots inside a filesystem, perhaps it's
> possible to set a distinct quota for snapshot space vs. live data
> space? I could then set snapshot quotas for my filesystems
> arbitrarily large for my administrativ
On Tue, Jul 25, 2006 at 11:13:16AM -0700, Brad Plecs wrote:
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
>
> What I'd really like to see is ... the ability for the snapshot space
> to *not* impact the filesystem space).
Yep, as Eric mentioned, that is the purpose of this
Our application Canary has approx 750 clients uploading to the server
every 10 mins; that's approx 108,000 gzip tarballs per day written to
the /upload directory. The parser untars each tarball, which consists of
8 ASCII files, into the /archives directory. /app is our application and
tools (apache,
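Not knowing the exact layout, one way the directories described might be
carved up as separate ZFS filesystems; the pool name, devices and the
compression choice are purely illustrative:

    zpool create canary mirror c1t2d0 c1t3d0
    zfs create canary/upload      # ~108,000 gzip tarballs arrive here per day
    zfs create canary/archives    # parser untars 8 ASCII files per tarball here
    zfs create canary/app         # application and tools
    # the untarred ASCII files should compress well
    zfs set compression=on canary/archives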
On Tue, Jul 25, 2006 at 03:39:11PM -0700, Karen Chau wrote:
> Our application Canary has approx 750 clients uploading to the server
> every 10 mins; that's approx 108,000 gzip tarballs per day written to
> the /upload directory. The parser untars each tarball, which consists of
> 8 ASCII files, into
Given the amount of I/O, wouldn't it make sense to get more drives
involved, or something that has cache on the front end, or both? If you're
really pushing the amount of I/O you're alluding to (hard to tell
without all the details), then you're probably going to hit a limitation
on the drive IO
On 7/25/06, Brad Plecs <[EMAIL PROTECTED]> wrote:
I've run into this myself (I am in a university setting). After reading bug ID 6431277
(URL below for noobs like myself who didn't know what "see 6431277" meant):
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
...it's not
On Tue, Jul 25, 2006 at 07:24:51PM -0500, Mike Gerdts wrote:
> On 7/25/06, Brad Plecs <[EMAIL PROTECTED]> wrote:
> >What I'd really like to see is ... the ability for the snapshot space
> >to *not* impact the filesystem space).
>
> The idea is that you have two storage pools - one for live data, o
On 7/25/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
You can simplify and improve the performance of this considerably by
using 'zfs send':
for user in $allusers ; do
    zfs snapshot users/$user@$today
    zfs send -i $yesterday users/$user@$today | \
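A self-contained version of that loop for reference; the dataset layout
(users/<name>), $remotehost, the snapshot naming scheme and the receive
side are all assumptions filled in where the original script was cut off:

    #!/bin/sh
    today=`date +%Y%m%d`
    yesterday=$1                  # pass yesterday's snapshot name as an argument
    remotehost=backuphost         # placeholder
    # every child of users/ is treated as a user filesystem
    allusers=`zfs list -H -o name -r users | sed -e '1d' -e 's,^users/,,'`
    for user in $allusers ; do
            zfs snapshot users/$user@$today
            zfs send -i $yesterday users/$user@$today | \
                    ssh $remotehost zfs recv users/$user
    done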
Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
Niagara engineering after we did some performance tests for them. I am
trying to get a Thumper to run this data set. This could take up to 3-4
months. Today we are watching 750 Sun Ray servers and 30,000 employees.
Let's see:
1) Sol
Karen and Sean,
You mention ZFS version 6; do you mean that you are running s10u2_06? If
so, then definitely you want to upgrade to the RR version of s10u2 which
is s10u2_09a.
Additionally, I've just putback the latest feature set and bugfixes
which will be part of s10u3_03. There were some ad
On Tue, Jul 25, 2006 at 08:37:28PM -0500, Mike Gerdts wrote:
> On 7/25/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> >You can simplify and improve the performance of this considerably by
> >using 'zfs send':
> >
> >for user in $allusers ; do
> >zfs snapshot users/[EMAIL PR
Since ZFS is COW, can I have a read-only pool (on a central file server, or on
a DVD, etc) with a separate block-differential pool on my local hard disk to
store writes?
This way, the pool in use can be read-write, even if the main pool itself is
read-only, without having to make a full local co
Do an automatic pool snapshot (using the recursive atomic snapshot feature that
Matt Ahrens implemented recently, taking time proportional to the number of
filesystems in the pool) upon every txg commit.
Management of the trashcan snapshots could be done by some user-configurable
policy such as
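The per-txg trigger itself would need kernel support, but the
user-visible pieces (a recursive snapshot plus a count-based pruning
policy) can be sketched; pool name and retention count are placeholders:

    #!/bin/sh
    pool=tank
    keep=32
    # recursive, atomic snapshot of every filesystem in the pool
    zfs snapshot -r $pool@trash-`date +%Y%m%d%H%M%S`
    # prune: keep only the newest $keep pool-level trashcan snapshots,
    # destroying older ones recursively (names sort chronologically)
    zfs list -H -t snapshot -o name | grep "^$pool@trash-" | sort | \
        nawk -v keep=$keep '{s[NR]=$1} END {for (i = 1; i <= NR - keep; i++) print s[i]}' | \
        while read snap ; do
            zfs destroy -r "$snap"
        done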
For a synchronous write to a pool with mirrored disks, does the write unblock
after just one of the disks' write caches is flushed, or only after all of the
disks' caches are flushed?