I know, bad form replying to myself, but I am wondering if it might be
related to
6438702 error handling in zfs_getpage() can trigger "page not locked"
which is marked "fix in progress" with a target of the current build.
alan.
Alan Hargreaves wrote:
Folks, before I start delving to
Folks, before I start delving too deeply into this crashdump, has anyone
seen anything like it?
The background is that I'm running a non-debug open build of b49 and was
in the process of running the "zoneadm -z redlx install ...".
After a bit, the machine panics, initially looking at the cras
Bady, Brant RBCM:EX wrote:
Actually to clarify - what I want to do is to be able to read the
associated checksums ZFS creates for a file and then store them in an
external system, e.g. an Oracle database most likely.
Rather than storing the checksum externally, you could simply let ZFS
verify th
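A minimal sketch of letting ZFS do the verification itself, assuming a
hypothetical pool named "tank":

    # walk every allocated block and verify its checksum
    zpool scrub tank
    # report any checksum errors the scrub found
    zpool status -v tank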
On Thu, Sep 14, 2006 at 06:26:46PM -0500, Mike Gerdts wrote:
> On 9/14/06, Chad Lewis <[EMAIL PROTECTED]> wrote:
> >Better still would be the forthcoming cryptographic extensions in some
> >kind of digital-signature mode.
>
> When I first saw extended attributes I thought that would be a great
> p
On 9/14/06, Chad Lewis <[EMAIL PROTECTED]> wrote:
Better still would be the forthcoming cryptographic extensions in some
kind of digital-signature mode.
When I first saw extended attributes I thought that would be a great
place to store a digital signature of the file. I'm not saying that
it i
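A rough sketch of that idea, with hypothetical file names, using
Solaris runat(1) to work inside a file's extended attribute directory:

    # store a detached signature as an extended attribute of the data file
    runat /archive/object.tif cp /tmp/object.tif.sig signature.asc
    # list the extended attributes attached to the file
    runat /archive/object.tif ls -l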
On Thu, Sep 14, 2006 at 10:32:59PM +0200, Henk Langeveld wrote:
> Bady, Brant RBCM:EX wrote:
> >Part of the archiving process is to generate checksums (I happen to use
> >MD5), and store them with other metadata about the digital object in
> >order to verify data integrity and demonstrate the authe
On Thu, 2006-09-14 at 13:55 -0700, Bill Moore wrote:
> On Thu, Sep 14, 2006 at 08:09:07AM -0700, David Smith wrote:
> > I have run zpool scrub again, and I now see checksum errors again.
> > Wouldn't the checksum errors have gotten fixed with the first zpool scrub?
> >
> > Can anyone recommend action
Actually to clarify - what I want to do is to be able to read the
associated checksums ZFS creates for a file and then store them in an
external system, e.g. an Oracle database most likely.
It's just a way of avoiding having to do MD5s on everything when ZFS is
doing checksums as well.
If ZFS does
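For comparison, computing the checksum externally today looks something
like this (Solaris digest(1), with a hypothetical path):

    # compute an MD5 checksum for one archived object
    digest -v -a md5 /archive/object.tif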
On Thu, Sep 14, 2006 at 08:09:07AM -0700, David Smith wrote:
> I have run zpool scrub again, and I now see checksum errors again.
> Wouldn't the checksum errors have gotten fixed with the first zpool scrub?
>
> Can anyone recommend actions I should do at this point?
After running the first scrub, d
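A rough sequence for re-checking, assuming a hypothetical pool named
"tank" (the scrub repairs what it can on redundant vdevs; the counters
only reset when cleared):

    zpool status -v tank   # see which devices/files show checksum errors
    zpool clear tank       # reset the error counters
    zpool scrub tank       # re-verify every block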
Bady, Brant RBCM:EX wrote:
I am working in the area of archiving (in the true sense of the word -
e.g. using the OAIS reference model) electronic data for long term
preservation and access. ZFS now makes magnetic disk arrays a bit more
suitable for that.
Part of the archiving process is to ge
On Sep 14, 2006, at 1:32 PM, Henk Langeveld wrote:
Bady, Brant RBCM:EX wrote:
Part of the archiving process is to generate checksums (I happen to use
MD5), and store them with other metadata about the digital object in
order to verify data integrity and demonstrate the authenticity of the
Bady, Brant RBCM:EX wrote:
Part of the archiving process is to generate checksums (I happen to use
MD5), and store them with other metadata about the digital object in
order to verify data integrity and demonstrate the authenticity of the
digital object over time.
Wouldn't it be helpful if the
Title: Access to ZFS checksums would be a nice and very useful feature
I am working in the area of archiving (in the true sense of the word - e.g. using the OAIS reference model) electronic data for long term preservation and access. ZFS now makes magnetic disk arrays a bit more suitable for th
On Thu, Sep 14, 2006 at 07:46:33AM -0700, Anton B. Rang wrote:
>
> It looks like 'zpool create -R' should solve the problem for anyone
> who is trying to build their own clustering facility, since it
> prevents the automatic import. Maybe we just need to document that
> more clearly? Calling it "a
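As a sketch of that (the device name is hypothetical), creating the pool
under an alternate root keeps it out of the automatic import path, per
the post above:

    # create the pool with an alternate root; it is not recorded for
    # automatic import at the next boot
    zpool create -R /failover tank c1t2d0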
Hello David,
Tuesday, September 12, 2006, 11:41:27 PM, you wrote:
DS> I currently have a system which has two ZFS storage pools. One
DS> of the pools is coming from a faulty piece of hardware. I would
DS> like to bring up our server mounting the storage pool which is
DS> okay and NOT mounting t
I figured this out. Was way too simple.
zpool import fitz
thanks,
On Thursday, September 14, 2006, at 12:16PM, Aric Gregson <[EMAIL PROTECTED]>
wrote:
>I was running Solaris 10 6/06 with the latest kernel patch on an Ultra 20 (x86) with
>two internal disks, the root with the OS (c1d0) as UFS and the
James Dickens wrote:
Eric was already talking about printing the last time a disk was
accessed when a disk was about to be imported; my idea would be to run
that check twice, once initially and, if it looks like it could still
be in use, like the pool wasn't exported and the last write occurred in
the
I was running Solaris 10 6/06 with the latest kernel patch on an Ultra 20 (x86) with
two internal disks, the root with the OS (c1d0) as UFS and the userland data on
c2d0s7 formatted as ZFS. An update made the system unusable and required
reinstallation of the OS on c1d0 (Solaris 6/06).
I cannot figur
> As others have pointed out you could use the fully supported alternate
> root support for this.
>
> The "zpool create -R" and "zpool import -R" commands allow
Yes. I tried that. It should work well.
In addition, I'm happy to note that '-R /' appears to be valid, allowing
all the fil
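A minimal example of that, with a hypothetical pool name:

    # import with an alternate root of /, so mountpoints resolve to their
    # normal paths while the import stays temporary rather than persistent
    zpool import -R / tank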
Darren Dunham wrote:
Exactly. What method could such a framework use to ask ZFS to import a
pool *now*, but not also automatically at next boot? (How does the
upcoming SC do it?)
I don't know how Sun Cluster does it and I don't know where the source is.
As others have pointed out you could u
> > If you *never* want to import a pool automatically on reboot you just have
> > to delete the
> > /etc/zfs/zpool.cache file before the zfs module is being loaded.
> > This could be integrated into SMF.
>
> Or you could always use import -R / create -R for your pool management. Of
> course, t
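A sketch of the cache-file approach quoted above (pool discovery at boot
is driven by this file, so removing it before the zfs module loads means
nothing is imported automatically):

    # run early in boot, before the zfs module is loaded
    rm -f /etc/zfs/zpool.cache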
> > Again, the difference is that with UFS your filesystems won't auto
> > mount at boot. If you repeated with UFS, you wouldn't try to mount
> > until you decided you should own the disk.
>
> Normally on Solaris, UFS filesystems are mounted via /etc/vfstab, so yes
> they will probably automaticall
Neil A. Wilson wrote:
> This is unfortunate. As a laptop user with only a single drive, I was
> looking forward to it since I've been bitten in the past by data loss
> caused by a bad area on the disk. I don't care about the space
> consumption becau
I have run zpool scrub again, and I now see checksum errors again. Wouldn't
the checksum errors have gotten fixed with the first zpool scrub?
Can anyone recommend actions I should do at this point?
Thanks,
David
Matthew Ahrens wrote:
> Out of curiosity, what would you guys think about addressing this same
> problem by having the option to store some filesystems unreplicated on
> a mirrored (or raid-z) pool? This would have the same issues of
> unexpected spa
> If you *never* want to import a pool automatically on reboot you just have to
> delete the
> /etc/zfs/zpool.cache file before the zfs module is being loaded.
> This could be integrated into SMF.
Or you could always use import -R / create -R for your pool management. Of
course, there's no way
Anton B. Rang wrote:
You need to rewrite/extend that to deal with the fact that ZFS doesn't use vfstab
and instead express it in terms of ZFS import/export.
The problem (as I see it) is that ZFS import is (by default) implicit at
startup, while UFS mount is (by default) only performed when exp
> You need to rewrite/extend that to deal with the fact that ZFS doesn't use
> vfstab
> and instead express it in terms of ZFS import/export.
The problem (as I see it) is that ZFS import is (by default) implicit at
startup, while UFS mount is (by default) only performed when explicitly
request
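One way to get UFS-like explicit mounting (a sketch with a hypothetical
dataset name; note the pool itself is still imported from the cache at
boot) is a legacy mountpoint plus a vfstab entry:

    zfs set mountpoint=legacy tank/data
    # /etc/vfstab entry; set "mount at boot" to no for manual control:
    # tank/data  -  /data  zfs  -  no  -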
> The OP was just showing a test case. On a real system your HA software
> would exchange a heartbeat and not do a double import. The problem with
> zfs is that after the original system fails and the second system imports
> the pool, the original system also tries to import on [re]boot, and the
Luke Scharf wrote:
The problem is that when I mount from a client, I can only mount if I
specify the IP address of the 1st network interface. If I use the 2nd or 3rd
interface (both also on the internal network), then I get the
following error:
kernel: nfs server 10.1.5.10:/xr7/group/ntnt: not re
On 9/14/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
James Dickens wrote:
> On 9/13/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
>> On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote:
>> >
>> > this would not be the first time that Solaris overrode an administrative
>> > command, becaus
"With ZFS however the in-between cache is obsolete, as individual disk
caches can be used directly."
The statement needs to be qualified.
Storage cache, if protected, works great to reduce critical
op latency. ZFS, when it writes to the disk cache, will flush
data out before return to
Frank Cusack wrote:
On September 13, 2006 7:07:40 PM -0700 Richard Elling
<[EMAIL PROTECTED]> wrote:
Dale Ghent wrote:
James C. McPherson wrote:
As I understand things, SunCluster 3.2 is expected to have support
for HA-ZFS
and until that version is released you will not be running in a
suppor
James Dickens wrote:
On 9/13/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote:
>
> this would not be the first time that Solaris overrode an administrative
> command, because it's just not safe or sane to do so. For example.
>
> rm -rf /
As