Howdy Ron,
Right, right - I know I dropped the ball on that one. Sorry, I haven't been
able to log into OpenSolaris lately, and thus haven't been able to actually do
anything useful... (lol, not to rag on OpenSolaris or anything, but it can also
freeze just by logging in... See:
http://defect.
Wayne,
It takes a significant amount of work to typeset any large document, especially
a technical document in which you have to adhere to a set of strict
typesetting guidelines. In these cases the separation of content and style is
essential and can't be stressed enough.
Word Processors h
Okay, so your AHCI hardware is not using an AHCI driver in Solaris. A crash
when pulling a cable is still not great, but it is understandable, because that
driver is old and bad and doesn't support hot swapping at all.
So there are two things to do here. File a bug about how pulling a SATA cabl
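For anyone following along, the supported way to detach a SATA disk on Solaris
before pulling it is cfgadm. A minimal sketch, assuming the controller is bound
to the AHCI/sata framework and the attachment point is sata0/1 (both names are
examples; check cfgadm -al on your own box):
# cfgadm -al                      # list attachment points and find the disk
# cfgadm -c unconfigure sata0/1   # offline the device before removing it
# cfgadm -c configure sata0/1     # bring the replacement back online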
On 8/26/08, Cyril Plisko <[EMAIL PROTECTED]> wrote:
> That's very interesting! Can you share more info on what these
> bugs/issues are? Since it is LU related I guess we'll never see these
> via opensolaris.org, right? So I would appreciate it if the community will
> be updated when these fixes will
> Pulling cables only simulates pulling cables. If you
> are having difficulty with cables falling out, then this problem cannot
> be solved with software. It *must* be solved with hardware.
I don't think anyone is asking for software to fix cables that fall out...
they're asking for the OS to no
On Wed, Aug 27, 2008 at 2:33 AM, Lori Alt <[EMAIL PROTECTED]> wrote:
> More or less. There are a number of bugs in LU
> support of zfs that we've just fixed in the final builds
> of the S10 Update 6 release, which we'll forward-port
> to Nevada as soon as we catch our breath. Most but
> not all are related to support of zones.
> James isn't being a jerk because he hates you or
> anything...
>
> Look, yanking the drives like that can seriously
> damage the drives or your motherboard. Solaris
> doesn't let you do it and assumes that something's
> gone seriously wrong if you try it. That Linux
> ignores the behavior and l
Mattias Pantzare wrote:
> 2008/8/26 Richard Elling <[EMAIL PROTECTED]>:
>
>>> Doing a good job with this error is mostly about not freezing
>>> the whole filesystem for the 30sec it takes the drive to report the
>>> error.
>>>
>> That is not a ZFS problem. Please file bugs in the appropriate category.
greg evigan wrote:
> Could anyone explain where the capacity % comes from for this df -h output
> (or where to read to find out, having scoured the man page for df and ZFS
> admin guide already)?
>
> # df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
> jira-pool/artif
On Tue, Aug 26, 2008 at 04:12:01PM -0700, Rich Teer wrote:
>
> Would I be correct in thinking that LiveUpgrade plays nicely
> with ZFS boot, now that the latter is integrated into Nevada?
Wonderfully! `lucreate' is almost instantaneous because it doesn't
do any copying. You can also put several
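For anyone new to it, a minimal sketch of the flow on a ZFS root (the BE name
zfsBE2 is made up):
# lucreate -n zfsBE2     # clones the current BE via ZFS snapshot/clone, near-instant
# luactivate zfsBE2      # make the new BE the one booted next
# init 6                 # reboot into it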
> I suspect the problem with ZFS boot from USB sticks is that the kernel
> does not create "devid" properties for the USB stick, and apparently
> those devids are now required for zfs booting.
>
> The kernel (sd driver) does not create "devid" properties for USB flash
> memory sticks, b
2008/8/26 Richard Elling <[EMAIL PROTECTED]>:
>
>> Doing a good job with this error is mostly about not freezing
>> the whole filesystem for the 30sec it takes the drive to report the
>> error.
>
> That is not a ZFS problem. Please file bugs in the appropriate category.
Whose problem is it? It ca
Could anyone explain where the capacity % comes from for this df -h output (or
where to read to find out, having scoured the man page for df and ZFS admin
guide already)?
# df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
jira-pool/artifactory
4
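(A hedged guess at the answer, since the output above is cut off: df's capacity
column appears to be used divided by (used + avail), rounded up, and for ZFS
both numbers are moving targets because every dataset draws on the shared
pool's space. Comparing against ZFS's own accounting usually makes it click:)
# zfs list jira-pool/artifactory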
> > Again, I don't see any reason why we should not consider using
> > StarOffice (BTW, it's "StarOffice"--one word, not "star office") to
> > publish the Adm Guide, as well as other Sun publications.
>
> You are saying that Sun should start over from scratch and attempt to
> use the wr
I've been playing with this, and it seems what's going on is simply
poor documentation on how snapshotting and send/recv interact.
Here's the snippet that's been posted previously; it will work once and
then fail on subsequent runs:
>>So for example, each night you could do:
>># zfs snapshot -r tank
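The usual cause of "works once, then fails" with this pattern is an incremental
receive into a filesystem that has been touched since the last receive. A
sketch of a nightly cycle that keeps working on later runs (dataset and
snapshot names are invented; the key detail is -F on the receive, which rolls
the target back to the last received snapshot so the incremental applies
cleanly):
# zfs snapshot tank/data@today
# zfs send -i tank/data@yesterday tank/data@today | zfs recv -F backup/data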
More or less. There are a number of bugs in LU
support of zfs that we've just fixed in the final builds
of the S10 Update 6 release, which we'll forward-port
to Nevada as soon as we catch our breath. Most but
not all are related to support of zones.
Lori
Rich Teer wrote:
>Hi all,
>
>Would I be
Hi all,
Would I be correct in thinking that LiveUpgrade plays nicely
with ZFS boot, now that the latter is integrated into Nevada?
TIA,
--
Rich Teer, SCSA, SCNA, SCSECA
CEO,
My Online Home Inventory
URLs: http://www.rite-group.com/rich
http://www.linkedin.com/in/richteer
http://ww
On Tue, 26 Aug 2008, W. Wayne Liauh wrote:
> Again, I don't see any reason why we should not consider using
> StarOffice (BTW, it's "StarOffice"--one word, not "star office") to
> publish the Adm Guide, as well as other Sun publications.
You are saying that Sun should start over from scratch an
W. Wayne Liauh wrote:
>> One can carve furniture with an axe, especially if it's razor-sharp,
>> but that doesn't make it a spokeshave, plane and saw.
>>
>> I love star office, and use it every day, but my publisher uses
>> Frame, so that's what I use for books.
>>
>> --dave
>
> As of
Miles Nordin wrote:
>> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
>>
>
> re> unrecoverable read as the dominant disk failure mode. [...]
> re> none of the traditional software logical volume managers nor
> re> the popular open source file systems (other than
> I think that your expectations from ZFS are
> reasonable. However, it is useful to determine if pulling the IDE drive locks
> the entire IDE channel, which serves the other disks as well. This
> could happen at a hardware level, or at a device driver level. If this
> happens, then there is nothi
Todd, 3 days ago you were asked what mode the BIOS was using, AHCI or IDE
compatibility. Which is it? Did you change it? What was the result? A few other
posters suggested the same thing but the thread went off into left field and I
believe the question / suggestions got lost in the noise.
--ro
Carson Gaspar wrote:
> Richard Elling wrote:
>
>> No snake oil. Pulling cables only simulates pulling cables. If you
>> are having difficulty with cables falling out, then this problem cannot
>> be solved with software. It *must* be solved with hardware.
>>
>> But the main problem with "simul
PS: I also think it's worth noting the level of supportive and constructive
feedback that many others have provided, and how much I appreciate it. Thanks!
Keep it coming!
> One can carve furniture with an axe, especially if it's razor-sharp,
> but that doesn't make it a spokeshave, plane and saw.
>
> I love star office, and use it every day, but my publisher uses
> Frame, so that's what I use for books.
>
> --dave
As of Build 95, I am still unable to read a g
> Since OpenSolaris is open source, perhaps some brave
> soul can investigate the issues with the IDE device driver and
> send a patch.
Fearing that other Senior Kernel Engineers, Solaris, might exhibit similar
responses, or join in and play “antagonize the noob,” I decided that I would
try to s
> The behavior of ZFS to an error reported by an underlying device
> driver is tunable by the zpool failmode property. By default, it is
> set to "wait." For root pools, the installer may change this
> to "continue." The key here is that you can argue with the choice
> of default behavior, but d
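For reference, checking and changing the property per pool looks like this
(tank is a placeholder; the valid values are wait, continue, and panic):
# zpool get failmode tank
# zpool set failmode=continue tank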
Is there any flaw with the process below? A customer asked:
Sun Cluster with each zpool composed of 1 LUN (yes, they have been
advised to use a redundant config instead). They do not export the pool to
the other host; instead they use BCV to make a mirror of the LUN. They then
split the mirror and impor
Richard Elling wrote:
>
> No snake oil. Pulling cables only simulates pulling cables. If you
> are having difficulty with cables falling out, then this problem cannot
> be solved with software. It *must* be solved with hardware.
>
> But the main problem with "simulating disk failures by pulling
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
re> unrecoverable read as the dominant disk failure mode. [...]
re> none of the traditional software logical volume managers nor
re> the popular open source file systems (other than ZFS :-)
re> address this problem.
Other LV
Miles Nordin wrote:
>> "jcm" == James C McPherson <[EMAIL PROTECTED]> writes:
>> "thp" == Todd H Poole <[EMAIL PROTECTED]> writes:
>> "mh" == Matt Harrison <[EMAIL PROTECTED]> writes:
>> "js" == John Sonnenschein <[EMAIL PROTECTED]> writes:
>> "re" == Richard Elling <[EMAIL PROT
Yes, you should be able to use it on another computer. All the ZFS information
is stored on disk. The one thing you need to be aware of is the version of ZFS
your pool is using. Systems can read versions older than the one they support
just fine, but they won't be able to mount newer ones. You ca
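A quick way to check before carrying the disk over (the pool name is an
example):
# zpool get version vault      # the version this pool is at
# zpool upgrade -v             # the versions this system understands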
After a helpful email from Miles, I destroyed all of my other opensolaris-*
filesystems (using beadm destroy), instead of his suggestion to
mount/unmount them all (easier this way). I did another scrub:
[EMAIL PROTECTED]:~$ pfexec zpool status
pool: rpool
state: ONLINE
scrub: scrub completed a
On Tue, Aug 26, 2008 at 10:11 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Keith Bierman wrote:
>> On a SPARC CMT (Niagara 1+) based system wouldn't that be likely to have a
>> large impact?
>
> UltraSPARC T1 has no hardware SHA256 so I wouldn't expect any real change
> from r
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> I've just gotten a pool back online after the server booted
r> with it unavailable, but found that NFS shares were not
r> automatically restarted when the pool came online.
``me, too.'' in b44, in b71.
for workarounds, export/impo
Oh, and one more question.
Would it be possible for me to copy data (e.g. photos) to my external USB HDD
(which is ZFS) and to go to another computer that runs Solaris and view the
data on that disk?
I know it sounds silly, but I want to know if the pool name and metadata and
such are stored on
I'm trying to get a feel on how to deal with a ZFS root filesystem when booted
off an alternate medium. For UFS, this simply meant finding the correct device
(slice on a disk) to mount and then mount it, assuming there wasn't some volume
manager in the way.
For ZFS, this is a little more comple
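A rough sketch of the ZFS equivalent, assuming the pool is rpool and the boot
environment dataset is rpool/ROOT/snv_95 (both names are examples; the root
dataset is typically canmount=noauto, so it has to be mounted by hand):
# zpool import -f -R /a rpool        # import with altroot /a
# zfs mount rpool/ROOT/snv_95        # mounts under /a thanks to the altroot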
Victor Latushkin wrote:
> Hi Tom and all,
>> [EMAIL PROTECTED]:~# uname -a
>> SunOS cs3.kw 5.10 Generic_127127-11 sun4v sparc SUNW,Sun-Fire-T200
>
> Btw, have you considered opening support call for this issue?
As a follow up to the whole story, with the fantastic help of Victor,
the failed pool
Mike Gerdts wrote:
> On Tue, Aug 26, 2008 at 10:58 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> In the interest of "full disclosure" I have changed the sha256.c in the
>> ZFS source to use the default kernel one via the crypto framework rather
>> than a private copy. I wouldn't expect that to
Keith Bierman wrote:
> On Aug 26, 2008, at 9:58 AM, Darren J Moffat wrote:
>> than a private copy. I wouldn't expect that to have too big an impact (I
>
> On a SPARC CMT (Niagara 1+) based system wouldn't that be likely to have
> a large impact?
UltraSPARC T1 has no hardware SHA256 s
On Tue, 26 Aug 2008, Darren J Moffat wrote:
> Bob Friesenhahn wrote:
>> On Tue, 26 Aug 2008, Darren J Moffat wrote:
>>>
>>> zfs set checksum=sha256
>>
>> Expect performance to really suck after setting this.
>
> Do you have evidence of that ? What kind of workload and how did you test it
I did
On Tue, Aug 26, 2008 at 10:58 AM, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> In the interest of "full disclosure" I have changed the sha256.c in the
> ZFS source to use the default kernel one via the crypto framework rather
> than a private copy. I wouldn't expect that to have too big an impact (
On Aug 26, 2008, at 9:58 AM, Darren J Moffat wrote:
>
> than a private copy. I wouldn't expect that to have too big an
> impact (I
>
On a SPARC CMT (Niagara 1+) based system wouldn't that be likely to
have a large impact?
--
Keith H. Bierman [EMAIL PROTECTED] | AIM kbiermank
5430 Na
After rebooting, I ran a zpool scrub on the root pool, to see if the issue
was resolved:
[EMAIL PROTECTED]:~$ pfexec zpool status
pool: rpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
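(For anyone repeating this, the whole cycle was roughly:)
# pfexec zpool scrub rpool     # re-read and verify every block in the pool
# pfexec zpool status rpool    # watch progress and the per-device error counts
# pfexec zpool clear rpool     # reset the counters once the cause is resolved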
Bob Friesenhahn wrote:
> On Tue, 26 Aug 2008, Darren J Moffat wrote:
>>
>> zfs set checksum=sha256
>
> Expect performance to really suck after setting this.
Do you have evidence of that ? What kind of workload and how did you
test it ?
I've recently been benchmarking using filebench filemicro
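One caveat for anyone reproducing such numbers: the checksum property only
applies to blocks written after the change, so a benchmark has to write fresh
data once it is flipped (the dataset name here is made up):
# zfs set checksum=sha256 tank/bench
# zfs get checksum tank/bench   # confirm; existing blocks keep their old checksum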
On Tue, 26 Aug 2008, Darren J Moffat wrote:
>
> zfs set checksum=sha256
Expect performance to really suck after setting this.
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.Graphic
I've rebooted the system(s), which should accomplish this. I'm not clear
which posts you are referring to; I just joined the list today. The ZFS pool
is being mounted automatically, and that is the only filesystem on my
system. I filed: http://defect.opensolaris.org/bz/show_bug.cgi?id=3079 (bug
307
[EMAIL PROTECTED] wrote:
>>> Does some script-usable ZFS API (if any) provide for fetching
>>> block/file hashes (checksums) stored in the filesystem itself? In
>>> fact, am I wrong to expect file-checksums to be readily available?
>> Yes. Files are not checksummed, blocks are checksummed.
>> -- ri
Poulos, Joe wrote:
> Hello,
>
> ZFS is working great for us, but we have seen it use all or most of
> the memory on our systems. Is there a recommended setting to put in
> /etc/system to limit the amount of RAM used for caching? Or is it
> recommended to just leave it alone and let it rele
>> Does some script-usable ZFS API (if any) provide for fetching
>> block/file hashes (checksums) stored in the filesystem itself? In
>> fact, am I wrong to expect file-checksums to be readily available?
>
> Yes. Files are not checksummed, blocks are checksummed.
> -- richard
Further, e
I apologize if this is a duplicate. Dave Bevans' notes indicate he already sent
an email for assistance to the alias, but I don't see it posted in the notes &
the cust called back in.
T5240, Sol 10, thru an FC switch (non-Sun) to an EMC array.
36 LUNs are being presented to the host; cust is trying t
Hello,
ZFS is working great for us, but we have seen it use all or most of the
memory on our systems. Is there a recommended setting to put in
/etc/system to limit the amount of RAM used for caching? Or is it recommended
to just leave it alone and let it release the memory as needed. Most
of our
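(The usual answer in this era is the zfs_arc_max tunable in /etc/system,
applied at the next boot; the 2 GB value below is only an example:)
set zfs:zfs_arc_max = 0x80000000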
Recently I managed to create a pool named 'vault' for my external usb HDD
(250G).
I generally back up my data using the zfs send and zfs receive commands.
However, I don't leave my computer or usb HDD on 24/7.
Before I poweroff the HDD, I export the pool (zpool export vault). Then I turn
off th
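(For the record, the round trip is just:)
# zpool export vault     # flush everything and mark the pool exported
# zpool import vault     # after powering the drive back on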
Jim Klimov wrote:
> Ok, thank you Nils, Wade for the concise replies.
>
> After much reading I agree that the queued ZFS features do deserve a higher
> ranking on the priority list (pool shrinking/disk removal and user/group
> quotas would be my favourites), so probably the deduplicat
I've just gotten a pool back online after the server booted with it
unavailable, but found that NFS shares were not automatically restarted when
the pool came online.
Although the pool was online and sharenfs was set, sharemgr was showing no
pools shared, and the NFS server service was disabled
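A workaround reported elsewhere in this thread is to export/import the pool, or
to force the shares back by hand once the pool is up:
# pfexec zfs share -a                                    # re-share all sharenfs datasets
# pfexec svcadm enable svc:/network/nfs/server:default   # if the NFS service was disabled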
glitch:
> have you tried mounting and re-mounting all filesystems which are not
                 ^^^^^^^^
                 unmounting
Hi David,
have you tried mounting and re-mounting all filesystems which are not
being mounted automatically? See other posts to zfs-discuss.
Nils
Can anybody help me get this pool online? During my testing I've been removing
and re-attaching disks regularly, and it appears that I've attached a disk that
used to be part of the pool, but that doesn't contain up to date data.
Since I've used the same pool name a number of times, it's possib
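If the pool name is ambiguous, importing by the numeric pool ID sidesteps the
stale disk (the ID below is illustrative):
# zpool import                       # lists importable pools with their IDs
# zpool import 6789012345678 mypool  # import that specific pool, optionally renamed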
Hi,
After upgrading to b95 of OSOL/Indiana, and doing a ZFS upgrade to the newer
revision, all arrays I have using ZFS mirroring are displaying errors. This
started happening immediately after ZFS upgrades. Here is an example:
[EMAIL PROTECTED]:~$ zpool status
pool: rpool
state: DEGRADED
statu
Hi all,
I can confirm that this is fixed too. I ran into the exact same issue yesterday
after destroying a clone:
http://www.opensolaris.org/jive/thread.jspa?threadID=70459&tstart=0
I used the b95-based 2008.11 development livecd this morning and the pool is
now back up and running again after a
Ok, used the development 2008.11 (b95) livecd earlier this morning to import
the pool, and it worked fine. I then rebooted back into Nexenta and all is
well. Many thanks for the help guys!
Chris
Hi all,
I've just pushed some of the changes coming up in 0.11
hg clone ssh://[EMAIL PROTECTED]/hg/jds/zfs-snapshot
I've got some commentary on the Early Access nature of this release at:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
Comments (and bug reports) welcome!