e both @Sun.COM btw).
best regards,
James C. McPherson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
has reached integration yet. Until it does, you'll probably
have to reboot :(
cheers,
James C. McPherson
Robert Milkowski wrote:
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0 mir
to demonstrate to
yourself just how reliable zfs is :)
cheers,
James C. McPherson
(on a permanent search for more disk space )
ed using
the '-f' flag.
...
Any pointers muchly appreciated! :-|
Did you try a zpool export on either or both machines?
James C. McPherson
Stuart Low wrote:
Well I would, if it let me. :)
[EMAIL PROTECTED] ~]$ zpool export ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
By its own admission it's online, but it can't find it in its own pool list?
:-|
Darn. What about "zpool export -f ax150s"?
James
Stuart Low wrote:
Nada.
[EMAIL PROTECTED] ~]$ zpool export -f ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
I wonder if it's possible to force the pool to be marked as inactive? Ideally
all I want to do is get it back online then scrub it for errors. :-|
At this point it might be worth trying something like
zpool create -R /alternate_root newpoolname vdevlist
You might need to add a "-f", but try it without "-f" first.
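For instance (a hedged sketch only -- the pool and device names below are
placeholders, and note that "zpool create" overwrites the devices you list,
so only do this once you have given up on recovering the old pool):

```shell
# Hypothetical sketch of the suggestion above. WARNING: "zpool create"
# destroys any existing data on the listed devices; names are placeholders.
zpool create -R /alternate_root newpool mirror c2t0d0 c2t1d0

# If the devices were part of an old pool, zpool may refuse. Try without
# -f first, as noted above, and only then force it:
# zpool create -f -R /alternate_root newpool mirror c2t0d0 c2t1d0
```

The -R alternate root keeps the new pool's filesystems from mounting over
anything on the running system while you inspect the result.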
cheers,
James C. McPherson
James C. McPherson
Sun's CPRE and
PTS organisations.
Save yourself the hassle and do things right from the start.
James C. McPherson
Lieven De Geyndt wrote:
I know this is not supported. But we are trying to build a safe configuration
until ZFS is supported in Sun Cluster. The customer did order Sun Cluster,
but needs a workaround until the release date. And I think it must be
possible to set up.
So build them a configuration whic
I'm not sure if all drivers properly handle this case.
I'm not sure that sd does, at this point. This is one of the projects
that the Beijing driver group are working on. I know that they have a
plan, but how far along that plan is, I do not know.
James C. McPherson
Robert Milkowski wrote:
Hello James,
Thursday, September 7, 2006, 1:44:48 PM, you wrote:
JCM> Lieven De Geyndt wrote:
I know this is not supported. But we are trying to build a safe configuration
until ZFS is supported in Sun Cluster. The customer did order Sun Cluster,
but needs a workaround until t
make use of Brendan Gregg's
DTrace Toolkit (http://www.brendangregg.com/dtrace.html#DTraceToolkit)
James C. McPherson
Joe Little wrote:
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip
Richard Elling wrote:
Frank Cusack wrote:
It would be interesting to have a zfs enabled HBA to offload the checksum
and parity calculations. How much of zfs would such an HBA have to
understand?
[warning: chum]
Disagree. HBAs are pretty wimpy. It is much less expensive and more
efficient to
our fault alone*.
Didn't we have the PMC (poor man's cluster) talk last week as well?
James C. McPherson
ng framework.
WHY?
What valid tests do you think you are going to be able to run?
Wait for the SunCluster 3.2 release (or the beta). Don't faff around
with a data-killing test suite in an unsupported configuration.
James C. McPherson
Erik Trimble wrote:
OK, this may seem like a stupid question (and we all know that there are
such things...)
I'm considering sharing a disk array (something like a 3510FC) between
two different systems, a SPARC and an Opteron.
Will ZFS transparently work to import/export pools between the two
systems?
Please let us know
what you did to get it working, how well it copes and how you are
addressing any data corruption that might occur.
I tend to refer to SunCluster more than VCS simply because I've got
more in depth experience with Sun's offering.
James C. McPherson
springs to mind which I
could do to ensure it works?
thanks in advance,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/pub/2/1ab/967
ZFS, and use it for those same purposes.
Would you want a single checksum per file, or the list of every checksum
for every block that the file referenced?
The second option might get unwieldy.
The first option - a meta-checksum if you like - would require some
interesting design.
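The trade-off can be sketched with ordinary userland tools (sha256 and a
128 KB block size are stand-ins here, not ZFS's actual on-disk checksum
layout):

```shell
# Sketch only -- not ZFS internals. Contrast one checksum covering a
# whole file against one checksum per 128 KB block.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/file" bs=1000 count=300 2>/dev/null  # ~300 KB file

# Option 1: a single checksum for the whole file.
whole=$(sha256sum "$tmp/file" | awk '{print $1}')
echo "whole-file: $whole"

# Option 2: one checksum per block -- the list grows with the file.
split -b 131072 "$tmp/file" "$tmp/blk."
nblocks=0
for b in "$tmp"/blk.*; do
  echo "block $nblocks: $(sha256sum "$b" | awk '{print $1}')"
  nblocks=$((nblocks + 1))
done
rm -rf "$tmp"
```

On a ~300 KB file that's already three block checksums; the per-block list
scales linearly with file size, which is exactly the "unwieldy" part.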
James C
shtestmountpoint /inout/kshtest default
You should also have a look at the "legacy" option in the zfs
manpage, which provides more details on how to get zpools and
zfs integrated into your system.
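As a hedged sketch (the dataset and mountpoint names below are only
illustrative, borrowed from this thread):

```shell
# Hand mount management over to the administrator ("legacy" mode).
# Dataset and mountpoint names are illustrative only.
zfs set mountpoint=legacy inout/kshtest

# Mount it the traditional way...
mount -F zfs inout/kshtest /inout/kshtest

# ...or via an /etc/vfstab entry:
# device         device   mount            FS    fsck  mount    mount
# to mount       to fsck  point            type  pass  at boot  options
# inout/kshtest  -        /inout/kshtest   zfs   -     yes      -
```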
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
of small files?
You're not quite thinking ZFS-style yet. With ZFS you do not have to
worry about block sizes unless you want to - the filesystem handles
that for you.
cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jm
of the problem - ZFS _requires_ a complete re-working
of your understanding of how storage works, because the old limitations
are no longer valid.
If the customer actually wants to get benefit from ZFS then they have
to be prepared to undergo a paradigm shift.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
e this dependency?
What do you suggest in its place?
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
system we did have a couple of NFS-related panics, always on Fridays!
> This is the fourth panic, first time with a ZFS error. There are no
> errors in zpool status.
Without data, it is difficult to suggest what might have caused
your NFS panics.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Is this normal behavior for ZFS?
Yes. You have no redundancy (from ZFS's point of view at least),
so ZFS has no option except panicking in order to maintain the
integrity of your data.
This is inte
Douglas Denny wrote:
On 12/4/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
If you look into your /var/adm/messages file, you should see
more than a few seconds' worth of IO retries, indicating that
there was a delay before panicking while waiting for the device
to return.
My or
ally
nonconfigurable it's impossible to tell).
Ebay.se
2. The power cord to the Brocade switch came slightly loose, causing it to
reboot and sending the server into an *Instant PANIC thanks to ZFS*
Yes, as noted, this is by design in order to *protect your data*
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
the ifp
driver should be updated to support the maximum number of targets on a
loop, which might also solve your second problem.)
Your alternative option isn't going to happen. The ifp driver and
the card it supports have both been long since EOLd.
James C. McPherson
the emlxs driver for
the Emulex card?
Second question -- are you up to date on the SAN Foundation
Kit (SFK) patches? I think the current version is 4.4.11. If
you're not running that version, I strongly recommend that
you upgrade your patch levels to it. Ditto for kernel, sd
and scsi_vhci.
Ja
0 0
          mirror    ONLINE       0     0     0
            c1d0s3  ONLINE       0     0     0
            c2d0s3  ONLINE       0     0     0

errors: No known data errors
$
libdiskmgmt protects you (mostly) from using slices or
partitions which are already in use.
James C. McPherson
pool?
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
James C. McPherson wrote:
Jason J. W. Williams wrote:
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was goi
ll
be a stripe or raidz or raidz2 or whatever.
From that pool you create as many filesystems as you need.
If from those 8 disks you want two different underlying types
of storage layout, you would create two pools.
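The two approaches might look like this (a sketch only -- device and pool
names here are hypothetical):

```shell
# One pool, many filesystems: the usual ZFS way.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                  c1t4d0 c1t5d0 c1t6d0 c1t7d0
zfs create tank/home
zfs create tank/build

# Or two pools, when you want two different underlying layouts
# from the same eight disks:
zpool create fast mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
zpool create big  raidz  c1t4d0 c1t5d0 c1t6d0 c1t7d0
```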
James C. McPherson
--
Solaris kernel software engineer, system admi
relevant people who have signoff on things like this.
cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Al Hopper wrote:
On Sun, 21 Jan 2007, James C. McPherson wrote:
... snip
Would you please expand upon this, because I'm really interested
in what your thoughts are, since I work on Sun's SAS driver :)
Hi James - just the man I have a couple of questions for... :)
Will th
treat it like SCSI when instead they should
treat it like FC (or native SATA).
uhmm... SAS is serial attached SCSI, why wouldn't we treat it like SCSI?
On January 21, 2007 8:17:10 PM +1100 "James C. McPherson"
Uh ... you do know that the second "S" in SAS stands for
"SCSI"?
't know for sure. Apart from that
I think it's a good idea.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/james
Prashanth Radhakrishnan wrote:
Is there some way to synchronously mount a ZFS filesystem?
> '-o sync' does not appear to be honoured.
No there isn't. Why do you think it is necessary?
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
Frank Cusack wrote:
On January 23, 2007 8:53:30 AM +1100 "James C. McPherson"
...
Why would you start your numbering at 10?
Because you don't have a choice. It is up to the HBA and getting it
to do the right thing (ie, what you want) isn't always easy. IIRC,
the LSI Logi
ly for detailed questions about ZFS code that may
not interest the community at large. If '[EMAIL PROTECTED]' were
created, would this be beneficial to anyone?
Definitely. Please sign me up when you create it.
thanks,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
, there is no such tool right now. A few people (including myself)
have talked about how such a tool might be designed, but I don't think
there's been any activity to date.
best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
aw ("don't you know that
everything is faster on raw?!?!") then I'd carve a zvol for them.
Anything else would be carefully delineated - they stick to the rdbms
and don't tell me how to do my job, and vice versa.
cheers,
James C. McPherson
--
Solaris Datapath Engineering