I've been testing the ZFS root recovery using 10u6 and have come across a very
odd problem.
When following this procedure, the disk I am setting up my rpool on keeps
reverting to an EFI label:
http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view
Here are the exact steps I am doing:
I've discovered the source of the problem.
zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0
It seems a root pool must only be created on a slice, not a whole disk. Therefore:
zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
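For anyone hitting the same EFI-label loop: the disk first needs an SMI (VTOC)
label and a suitably sized slice 0, and only then can the root pool be created
on that slice. A rough, untested sketch (device name as above; the partition
editing inside format is abbreviated):
# relabel the disk - when prompted, pick the SMI label, then size slice 0
format -e c1t0d0
# now create the root pool on the slice, not on the whole disk
zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0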
Thanks, I don't know how I missed it.
e fixing it?" is more than
acceptable. Netiquette rules. To quote you: "Why don't you just fix the
apparently broken link to your source, then?" is _not_ forum/list material.
Thanks... Sean.
Khyron,
Finally, Michael S. made the best recommendation... talk to your sales rep
if you're a paying customer.
... but don't expect any commitments or a generic answer from them at the
moment.
I do, however, congratulate you on quoting Mr. Harman in your .sig ;-)
Regards...
This morning we got a fault management message from one of our production
servers stating that a fault in one of our pools had been detected and fixed.
Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME UUID
Thanks for this information.
We have a weekly scrub schedule, but I ran another just to be sure :-) It
completed with 0 errors.
Running fmdump -eV gives:
TIME CLASS
fmdump: /var/fm/fmd/errlog is empty
Dumping the faultlog (no -e) does give some output, but again there
ed rather than
there being a real issue with ZFS. Despite this, we're happy to know that we
can now match vdevs against physical devices using either the mdb trick or zdb.
We've followed Eric's work on ZFS device enumeration for the Fishwork project
with great interest - hopefully
We have a number of Sun J4200 SAS JBOD arrays which we have multipathed using
Sun's MPxIO facility. While this is great for reliability, it results in the
/dev/dsk device IDs changing from cXtYd0 to something virtually unreadable like
"c4t5000C5000B21AC63d0s3".
Since the entries in /dev/{rdsk,d
fmdump doesn't produce any "human readable" disk ids, only
GUIDs, which then have to be correlated via "zdb -c".
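For the record, the two correlation tricks being referred to look roughly like
this (pool name is a placeholder; output details vary by release):
# dump the cached pool config - each leaf vdev carries both its guid and its path
zdb -C rpool
# or walk the in-kernel state, which prints the vdev tree per pool
echo "::spa -v" | mdb -k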
Sean
>Date: Tue, 17 Nov 2009 16:18:52 -0700
>From: Cindy Swearingen
>Subject: Re: [zfs-discuss] building zpools on device aliases
>To: se
We recently patched our X4500 from Sol10 U6 to Sol10 U8 and have not noticed
anything like what you're seeing. We do not have any SSD devices installed.
discussion.
Metoo ;-) ... Sean.
Rainer,
devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk
and running devfsadm -Cv did help.
I am glad it helped; but removing anything from /dev/*dsk is a kludge
that cannot be accepted/condoned/supported.
Regards...
: xvm-4200m2-02 ;
I can do the echo | mdb -k. But what is that : xvm-4200 command?
My guess is that it is a very odd shell prompt ;-)
sted default partitioning did work.
>>
>
> OpenSolaris 2008.05 doesn't use Caiman?
>
Rich (the OP) was installing Nevada b95, which does not use Caiman;
whereas 2008.05 does - see http://www.opensolaris.org/os/project/caiman/
Regards... Sean.
Systems and Network Analyst | [EMAIL PROTECTED]
< California State Polytechnic University | Pomona CA 91768
whereas patches are problem-fixes, which no-one will/should
pay for... Call me pedantic ;-)
Regards... Sean.
Greetings,
I have been evaluating an X4540 server. I now have to return it. I'm
curious what all of your thoughts on the best method for securely wiping the
data might be.
Thanks.
> We require urgent help on the compliance sheet attached for
> filesystem ZFS for a USD 20 million storage tender in India.
And where I come from, companies at this stage of the tendering
process generally do not wish their details/requirements to be widely
publicised. Thus the referenc
> What is less clear is why windows write performance drops to zero.
Perhaps the tweak for Nagle's Algorithm in Windows would be in order?
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect
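For reference, the commonly cited tweak on the Windows initiator is to disable
delayed ACKs on the iSCSI interface (it is the delayed-ACK/Nagle interaction
that stalls the writes). A sketch only - the interface GUID is a placeholder
and the client needs a reboot afterwards:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<iSCSI-NIC-GUID>" /v TcpAckFrequency /t REG_DWORD /d 1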
Z,
> Beloved Tim,
> You challenged me a while ago, as a friend.
> I did what you asked me to do, in the honor of my father.
>
> Best,
> z
Please don't post personal stuff like this or links to wikipedia or
other ephemera/apocrypha to this/any list unless they are re
cept a drive is part #570-1182.
> anyone know how i could order 12 of these?
Call your local Sun Account Manager. No-one on zfs-discuss will have the
remotest clue (or be even the slightest bit interested - it's so OT).
Apols and regards... Sean.
ty/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?productref=sol-express_b114-full-x86-sp-...@cds-cds_smi
There appears to be a minor glitch where a spurious line for qlc has
appeared in /etc/driver_aliases, but I have it installed and running.
Regards...
sp-...@cds-cds_smi
This was for single-file ISO image download,
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?productref=sol-express_b114-seg-x86-sp-...@cds-cds_smi
will give it to you in two segments.
Regards...
Hello Dick,
Sean Sprague wrote:
There appears to be a minor glitch where a spurious line for qlc has
appeared in /etc/driver_aliases, but I have it installed and running.
What's a spurious line (I'm Dutch), and how did you "solve" it?
D
Orvar Korvar wrote:
In the comments there are several people complaining of losing data. That
doesn't sound too good. It takes a long time to build a good reputation, and 5
minutes to ruin it. We don't want ZFS to lose its reputation as an uber file
system.
With due respect, I recommend t
was never the place to go for accurate information about ZFS.
Many would even say:
Slashdot was never the place to go for accurate information.
Slashdot was never the place to go for information.
Slashdot was never the place to go.
Slashdot? Never.
Take your pick ;-)
Regards...
Sun X4500 (thumper) with 16 GB of memory running Solaris 10 U6 with patches
current to the end of Feb 2009.
Current ARC size is ~6 GB.
ZFS filesystem created in a ~3.2 TB pool consisting of 7 sets of mirrored 500 GB
SATA drives.
I used 4000 8 MB files for a total of 32 GB.
run 1: ~140 MB/s average a
1) Turning on write caching is potentially dangerous because the disk will
indicate that data has been written (to cache) before it has actually been
written to non-volatile storage (disk). Since the factory has no way of knowing
how you'll use your T5140, I'm guessing that they set the disk wri
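For what it's worth, the setting can be inspected and toggled per disk from
format's expert mode; a rough sketch (disk selection step omitted):
format -e
format> cache
cache> write_cache
write_cache> display      # show the current state
write_cache> enable       # or: disable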
In the good ol' days, if you were moving a file or files within the same UFS, it
was a snap, as the move was only a change at the dir/inode level.
Since ZFS encourages creating more filesystems instead of dirs, moving can be
an issue - data must be moved around instead of being re-pointed, so it takes a
long time if
Post it and forget it :-)
And sincere thanks for so many replies of "This is not right" - which is a
standard sysadmin answer - which is probably the same answer I would give out
to others as well.
Actually I was inviting some answers like "how this can be done" or "this can
be done but the cost
rror or raid functionality. Does this add unnecessary overhead at
the cost of performance when the SAN may be configured in a RAID 5 or
RAID 10 arrangement?
Many thanks!
--
Sean
the same thing up on a Sun
Fire X4200 M2 the other day and was able to create a zpool on the same
CLARiiON device.
--
Sean
ZFS is a 128 bit file system. The performance on your 32-bit CPU will
not be that good. ZFS was designed for a 64-bit CPU. Another GB of RAM
might help. There are a bunch of posts in the archive about 32-bit CPUs
and performance.
-Sean
Orvar Korvar wrote:
> I am using Solaris Expr
igate the
issue, and unfortunately I do not have any other available sparcs with
SAN connectivity.
--
Sean
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Tribble
Sent: Friday, July 13, 2007 11:18 AM
To: [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.o
There was a Sun Forums post that I referenced in that other thread that
mentioned something about mpxio working but powerpath not working. Of
course I don't know how valid those statements are/were, and I don't
recall much detail given.
--
Sean
-Original Message-
From: Pet
'm not sure why exactly they were chosen over the qlogic; some of our
admins swear by the qlogic cards, others have had bad experiences
with the qlogic cards not allowing for persistent binding on some
configurations, but from my perspective, being mostly a SAN noob, it's all
hearsay
the
same zpool information on all three devices, but I concede this is a
relatively uneducated guess.
--
Sean
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Friday, July 13, 2007 4:11 PM
To: Manoj Joseph
Cc: [EMAIL PROTECTE
cisions. I'm always looking for more ammo to debate those kind of
statements.
--
Sean
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of eric kustarz
Sent: Thursday, July 19, 2007 1:24 PM
To: ZFS Discussions
Subject: [zfs-discuss] more love for databases
0 0
errors: No known data errors
I'm hoping yes, but expecting no. :(
P.S. I'm running on a Sun Fire X4200 M2...
[EMAIL PROTECTED]:/]# uname -iprsv
SunOS 5.10 Generic_118855-36 i386 i86pc
Thanks.
--
Sean
ZFS will stripe across all Root Level VDEVs, regardless of type of VDEV
(mirror, raidz, single whole disk, disk slices, whatever).
e.g.
tank01
  mirror
    c1t0d0
    c1t1d0
  raidz
    c1t2d0
    c1t3d0
    c1t4d0
  mirror
    c0t0d0s7
    c0t1d0s7
Should give you 3 stripes.
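For reference, a single command along these lines should build exactly that
layout (a sketch only; -f is needed because the mirror and raidz replication
levels differ):
zpool create -f tank01 \
    mirror c1t0d0 c1t1d0 \
    raidz c1t2d0 c1t3d0 c1t4d0 \
    mirror c0t0d0s7 c0t1d0s7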
--
Sean
-Original
torage on all
of our new hardware going forward, but unless I can justify overruling the
Storage System's RAID 1+0 or dropping my capacity utilization from 50% to 25%,
I haven't got much ground to stand on. Is anyone else paddling in my canoe?
--
Sean
-Original Message-
From: [
tives here other than to forcefully recreate the
pool, reinstall the software and go through the process again?
--
Sean
t3d0s0 is part of an active ZFS pool,
etc. I've done this before more than once, applying config changes to
bring system into what we now know is a more stable config.
Any ideas?
--
Sean
P.S. yeah I hand typed the output of those commands due to being in
Single User Mode via the ILOM console application which doesn't allow
for anything but a screen print. I didn't figure anyone would enjoy an
attachment, so I hand typed it all.
--
Sean
-Original Message-
From: Lo
We mostly rely on AMANDA, but for a simple, compressed, encrypted,
tape-spanning alternative backup (intended for disaster recovery) we use:
tar cf - | lzf (quick compression utility) | ssl (to encrypt) | mbuffer
(which writes to tape and looks after tape changes)
Recovery is exactly the opposite.
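Roughly, the two pipelines look like this - a sketch only: the source path,
tape device, key file and buffer sizes are placeholders, and I'm assuming lzf
takes gzip-style -c/-d flags:
# backup
tar cf - /export/data | lzf -c | \
    openssl enc -aes-256-cbc -salt -pass file:/etc/backup.key | \
    mbuffer -m 1G -s 256k -o /dev/rmt/0n
# restore
mbuffer -m 1G -s 256k -i /dev/rmt/0n | \
    openssl enc -d -aes-256-cbc -pass file:/etc/backup.key | \
    lzf -d | tar xf -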
Casper,
> Do you have a reference for "all data in RAM most be held". I guess we
> need to build COW RAM as well.
Is that one of those genetic hybrids?
Regards... Sean.
BTW: I remember the days when only RAS and CAS kept your data in
nel keeps panicking.
>
> Is there any interest in it being release to the public?
Yes indeed. Please put the source up here - I am sure that you will receive
interesting feedback.
Thanks and regards... Sean.
So, if your array is something big like an HP XP12000, you wouldn't just make a
zpool of one big LUN (LUSE volume), you'd split it in two and make a mirror
when creating the zpool?
If the array has redundancy built in, you're suggesting to add another layer of
redundancy using ZFS on top of tha
I haven't used it myself, but the following blog describes an automatic
snapshot facility:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
I agree that it would be nice to have this type of functionality built into the
base product, however.
I have a server "thumper1" which exports its root (UFS) filesystem to one
specific server "hoss" via /etc/dfs/dfstab so that we can backup various system
files. When I added a ZFS pool mypool to this system, I shared it to hoss and
several other machines using the ZFS sharenfs property.
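For context, the two shares are set up along these lines (a sketch; hostnames
as above, access lists abbreviated):
# UFS root, shared the old way in /etc/dfs/dfstab on thumper1
share -F nfs -o rw=hoss,root=hoss -d "thumper1 root" /
# ZFS pool, shared via the dataset property (other clients go in the same list)
zfs set sharenfs=rw=hoss mypool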
Prior t
Some additional information: I should have noted that the client could not see
the thumper1 shares via the automounter.
I've played around with this setup a bit more and it appears that I can
manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the ZFS
and UFS volumes are bei
ifications of any tuning
that you undertake.
If this is all "master of the bleedin' obvious" to you, then please
accept my humble apologies - it is often not the case...
Regards... Sean.
zfs-dis
uggested"; and gave some very generic pointers.
Regards... Sean.
There is no fix/patch for these bugs yet. We would recommend trying
the workarounds until a fix is available for these bugs.
What is the downside of disabling logging records? This is a production
machine and we are now affecting a fairly large population.
thanks
sean
Daniel Rock wr
the fact that we went through a reboot. Whatever the root cause, we are now back to a well-behaved file system.
thanks
sean
Roch wrote:
15 minutes to do a fdsync is way outside the slowdown usually seen.
The footprint for 6413510 is that when a huge amount of
data is being written
795K
canary 42.0G 12.0G364 0 23.9M 0
canary 42.0G 12.0G387 0 25.6M 0
thanks
sean
0573 0.0005 write(1, " / u p l o a d / c a n a".., 24)
= 24
26078: 0.0576 0.0003 _exit(0)
Michael Schuster - Sun Microsystems wrote:
Sean Meighan
wrote:
I am not sure if this is a ZFS issue, a Niagara issue, or
something else? Does someone know why commands have the latency
shown bel
1t0d0
website for the canary is located at http://canary.sfbay
thanks
sean
--
Sean Meighan
Mgr ITSM Engineering
Sun Microsystems, Inc.
US
Phone x32329 / +1 408 850-9537
Mobile 303-520-2024
Fax 408 850-9537
Email [EMAIL PROTECTED]
N
week. When we get a new box with more drives, we will reconfigure.
Our graphs have 5000 data points per month, 140 data points per day. We
can stand to lose data.
My suggestion was one drive as the system volume and the remaining
three drives as one big zfs volume, probably raidz.
thanks
sean
f ASCII per day all fitting in a 2u T2000
box!
thanks
sean
George Wilson wrote:
Sean,
Sorry for the delay getting back to you.
You can do a 'zpool upgrade' to see what version of the on-disk format
your pool is currently running. The latest version is 3. You can then
issue a
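For reference, the zpool upgrade forms are roughly (pool name is a placeholder):
zpool upgrade          # report the on-disk version of every imported pool
zpool upgrade -v       # list the versions this ZFS release supports
zpool upgrade tank     # upgrade a pool to the current version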
you can scroll down this report and find
out which of the 1500 unique executables (oracle, vi, vim, emacs, mozilla,
firefox, thunderbird, opera, etc.) is causing the
most load across the world.
Basically our performance issues have been solved. Thanks, ZFS and
Niagara teams.
sean
Matthew Ahrens wrot
Neel,
Is it possible to destroy a pool by ID? I created two pools with the
same name, and want to destroy one of them.
Could you please cut and paste (i.e. not re-type) the output from the command "zpool
list | col -b" and post it here?
Thanks...
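In case it helps while that output is on its way: for an exported pool, the
usual trick is to import it by its numeric id under a new name and destroy it
from there - a sketch (the id and names below are placeholders):
zpool import                          # lists importable pools with their "id:" fields
zpool import 1234567890123456 oldpool # import the one you mean, by id, under a new name
zpool destroy oldpool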
*Synopsis*: X2100 release notes inconsistent with Sun terminology,
confuses customers
Status: 11-Closed
Substatus: Will Not Fix
<0.02>
I agree completely. The phrase "confuses customers" equates to "degrades
community relations", and thus equals "high priority".
Hi, Chris,
You may force a panic by "reboot -d".
Thanks,
Sean
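A small footnote, assuming the dump device is already configured:
reboot -d        # force a crash dump, then reboot
savecore -L      # or grab a live dump of the running system without a reboot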
On Tue, Nov 14, 2006 at 09:11:58PM -0600, Chris Csanady wrote:
> I have experienced two hangs so far with snv_51. I was running snv_46
> until recently, and it was rock solid, as were earlier builds.
>
> Is
with at least 100 million + files?
What were the performance characteristics?
Thanks!
Sean
--
<http://www.sun.com> * Sean Cochrane *
Global Storage Architect
*Sun Microsystems, Inc.*
525 South 1100 East
Salt Lake City, UT 84102 US
Phone +1877 255 5756
Mobile +1801-949-4799
Fax +1877.25
---
< Bryan Cantrill, Solaris Kernel Development. http://blogs.sun.com/bmc
Regards,
--
Sean.
.
11G28%/
#
--
Sean.
.
/archives/fedora-list/2006-June/msg04497.html
<
< --
< Darren J Moffat
--
Sean.
.
he drive.
<
< Any suggestions to fix this problem?
<
< Thanks in advance,
< Xinfeng
oes anyone think
< this is the right direction to go in ?
There's no new API to be written; it's easily extended and customisable, and it
builds on features already there.
<
< cheers,
< tim
<
< [1] I'm not yet sure if SMF instance names are allowed '/' chars