Peter L. Thomas wrote:
> That said, is there a "HOWTO" anywhere on installing QFS on Solaris 9
> (Sparc64) machines? Is that even possible?
We've been selling SAMFS (which QFS is a part of) for ages, long before S10 ever
saw the light, so I'd be *very* surprised if it wasn't documented with t
> No. You can neither access ZFS nor UFS in that way.
> Only one host can mount the file system at the same time
> (read/write or read-only doesn't matter here).
[...]
> If you don't want to use NFS, you can use QFS in such a configuration.
> The shared writer approach of QFS allows mounting the sa
[EMAIL PROTECTED] wrote:
>
> >AFAIK, a read-only UFS mount will unroll the log and thus write to the medium.
>
>
> It does not (that's what code inspection suggests).
>
> It will update the in-memory image with the log entries but the
> log will not be rolled.
Why then does fsck mount the fs
>AFAIK, a read-only UFS mount will unroll the log and thus write to the medium.
It does not (that's what code inspection suggests).
It will update the in-memory image with the log entries but the
log will not be rolled.
Casper
[EMAIL PROTECTED] wrote:
>
> >> It's worse than this. Consider the read-only clients. When you
> >> access a filesystem object (file, directory, etc.), UFS will write
> >> metadata to update atime. I believe that there is a noatime option to
> >> mount, but I am unsure as to whether this is suf
If you have disks to experiment on & corrupt (and you will!) try this (a rough
command transcription follows the list):
System A mounts the SAN disk and formats it w/ UFS
System A umounts the disk
System B mounts the disk
B runs 'touch x' on the disk.
System A mounts the disk
System A and B umount the disk
System B fsck
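A rough transcription of that experiment as Solaris commands, assuming the
shared LUN shows up as /dev/dsk/c1t0d0s0 on both hosts (device name
hypothetical):

  # System A: build the file system and mount it
  A# newfs /dev/rdsk/c1t0d0s0
  A# mount -F ufs /dev/dsk/c1t0d0s0 /mnt
  A# umount /mnt

  # System B: mount the same LUN and create a file
  B# mount -F ufs /dev/dsk/c1t0d0s0 /mnt
  B# touch /mnt/x

  # System A mounts again while B still has it mounted
  A# mount -F ufs /dev/dsk/c1t0d0s0 /mnt

  # Both hosts umount; fsck from B shows the resulting damage
  A# umount /mnt
  B# umount /mnt
  B# fsck /dev/rdsk/c1t0d0s0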
The following seems much more complicated, much less
supported, and much more prone to failure than just setting up Sun
Cluster on the nodes and using it just for HA storage and the Global
File System. You do not have to put the Oracle RAC instances under Sun
Cluster control.
On 8/25/07, M
On Tue, 28 Aug 2007, David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
[ ... ]
>>> I don't see why multiple UFS mounts wouldn't work, if only one
>>> of them has write access. Can you elaborate?
>>
>> Even with a single writer you would need to be
>> concerned with re
On Tue, 28 Aug 2007, Charles DeBardeleben wrote:
> Are you sure that UFS writes a-time on read-only filesystems? I do not think
> that it is supposed to. If it does, I think that this is a bug. I have
> mounted read-only media before, and not gotten any write errors.
>
> -Charles
I think what m
>> It's worse than this. Consider the read-only clients. When you
>> access a filesystem object (file, directory, etc.), UFS will write
>> metadata to update atime. I believe that there is a noatime option to
>> mount, but I am unsure as to whether this is sufficient.
>
>Is this some particular
Are you sure that UFS writes a-time on read-only filesystems? I do not think
that it is supposed to. If it does, I think that this is a bug. I have
mounted read-only media before, and not gotten any write errors.
-Charles
David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote
> It's worse than this. Consider the read-only clients. When you
> access a filesystem object (file, directory, etc.), UFS will write
> metadata to update atime. I believe that there is a noatime option to
> mount, but I am unsure as to whether this is sufficient.
Is this some particular build
It sounds like you are looking for a shared file system like Sun's QFS?
Take a look here
http://opensolaris.org/os/project/samqfs/What_are_QFS_and_SAM/
Writes from multiple hosts basically go through the metadata server, which
handles locking and update problems. I believe there are other
op
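For flavor, a minimal sketch of what a shared QFS configuration can look like;
the family-set name, mount point, and device paths are hypothetical, and the
full procedure (including the hosts.<fs> file) is in the SAM-QFS documentation:

  # /etc/opt/SUNWsamfs/mcf on the metadata server and each client
  # Equipment          Eq  Eq   Family    Dev   Additional
  # Identifier         Ord Type Set       State Parameters
  sharefs1             10  ma   sharefs1  on    shared
  /dev/dsk/c2t0d0s0    11  mm   sharefs1  on
  /dev/dsk/c2t1d0s0    12  mr   sharefs1  on

  # metadata server: create the shared file system and mount it
  mds# sammkfs -S sharefs1
  mds# mount -F samfs sharefs1 /qfs

  # each client: same mcf, then mount with the shared option
  client# mount -F samfs -o shared sharefs1 /qfs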
> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
> > Sorry, this is a bit off-topic, but anyway:
> >
> > Ronald Kuehn writes:
> >> No. You can neither access ZFS nor UFS in that way. Only one
> >> host can mount the file system at the same time (read/write or
> >> read-only doesn't matte
> Host w continuously has a UFS mounted with read/write
> access.
> Host w writes to the file f/ff/fff.
> Host w ceases to touch anything under f.
> Three hours later, host r mounts the file system read-only,
> reads f/ff/fff, and unmounts the file system.
This would probably work for a non-journa
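As a minimal sketch of that scenario, assuming host w explicitly flushes its
in-memory state with lockfs -f before host r mounts (device path hypothetical,
and noatime added so the read-only mount doesn't try to write access times):

  # host w: write, then force dirty data and the log to disk
  w# echo data > /f/ff/fff
  w# lockfs -f /f

  # three hours later, host r: mount read-only, read, unmount
  r# mount -F ufs -o ro,noatime /dev/dsk/c1t0d0s0 /f
  r# cat /f/ff/fff
  r# umount /f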
David Hopwood writes:
> Note also that mounting a filesystem read-only does not guarantee that
> the disk will not be written, because of atime updates (this is arguably
> a Unix design flaw, but still has to be taken into account). So r may
I can mount with the -noatime option.
> I don't understa
On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
> Sorry, this is a bit off-topic, but anyway:
>
> Ronald Kuehn writes:
>> No. You can neither access ZFS nor UFS in that way. Only one
>> host can mount the file system at the same time (read/write or
>> read-only doesn't matter here).
>
> I can
Rainer J.H. Brandt wrote:
> Ronald,
>
> thanks for your comments.
>
> I was thinking about this scenario:
>
> Host w continuously has a UFS mounted with read/write access.
> Host w writes to the file f/ff/fff.
> Host w ceases to touch anything under f.
> Three hours later, host r mounts the file
Rainer,
If you are looking for a means to safely "READ" any filesystem,
please take a look at Availability Suite.
One can safely take Point-in-Time copies of any Solaris supported
filesystem, including ZFS, at any snapshot interval of one's
choosing, and then access the shadow volume on any
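A hedged illustration of the Availability Suite (Instant Image) approach; the
master, shadow, and bitmap volume names are hypothetical, and the exact
procedure is in the AVS documentation:

  # enable an independent point-in-time copy of the master volume
  # iiadm -e ind <master> <shadow> <bitmap>
  a# iiadm -e ind /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t2d0s0

  # the shadow volume can then be checked and mounted read-only elsewhere
  b# fsck /dev/rdsk/c1t1d0s0
  b# mount -F ufs -o ro /dev/dsk/c1t1d0s0 /mnt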
Ronald,
thanks for your comments.
I was thinking about this scenario:
Host w continuously has a UFS mounted with read/write access.
Host w writes to the file f/ff/fff.
Host w ceases to touch anything under f.
Three hours later, host r mounts the file system read-only,
reads f/ff/fff, and unmount
>Yes, thank you for confirming what I said.
>
>So it is possible, but not recommended, because I must take care
>not to read from files for which buffers haven't been flushed yet.
No, it's much worse than that: UFS will not re-read cached data for
the read-only mount so the read-only mount wil
On Sunday, August 26, 2007 at 17:47:32 CEST, Rainer J.H. Brandt wrote:
> Ronald Kuehn writes:
> > On Sunday, August 26, 2007 at 16:36:26 CEST, Rainer J.H. Brandt wrote:
> >
> > > Ronald Kuehn writes:
> > > > No. You can neither access ZFS nor UFS in that way. Only one
> > > > host can mount the fi
Tim,
thanks for answering...
...but please don't send HTML, if possible.
>
> Try this explanation..
>
> Host A mounts UFS file system rw
> Hosts B-C mount the same UFS file system read only
>
> In the natural scheme of things hosts B-C read files and cache
> metadata about the
Ronald Kuehn writes:
> On Sunday, August 26, 2007 at 16:36:26 CEST, Rainer J.H. Brandt wrote:
>
> > Ronald Kuehn writes:
> > > No. You can neither access ZFS nor UFS in that way. Only one
> > > host can mount the file system at the same time (read/write or
> > > read-only doesn't matter here).
> >
On Sunday, August 26, 2007 at 16:36:26 CEST, Rainer J.H. Brandt wrote:
> Ronald Kuehn writes:
> > No. You can neither access ZFS nor UFS in that way. Only one
> > host can mount the file system at the same time (read/write or
> > read-only doesn't matter here).
>
> I can see why you wouldn't reco
Rainer
Try this explanation..
Host A mounts UFS file system rw
Hosts B-C mount the same UFS file system read only
In the natural scheme of things hosts B-C read files and cache
metadata about the files and file system.
Host A changes the file system. The metadata that hosts B-C have cached
is now in
Sorry, this is a bit off-topic, but anyway:
Ronald Kuehn writes:
> No. You can neither access ZFS nor UFS in that way. Only one
> host can mount the file system at the same time (read/write or
> read-only doesn't matter here).
I can see why you wouldn't recommend trying this with UFS
(only one ho
I have tried TCP/IP over FC in the lab; the performance was no different
compared to gigabit Ethernet.
-Original Message-
From: "Al Hopper" <[EMAIL PROTECTED]>
To: "Matt B" <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Sent: 8/26/2007 9:29 AM
Subject:
On Sat, 25 Aug 2007, Matt B wrote:
snip
> I still wonder if NFS could be used over the FC network in some way similar
> to how NFS works over ethernet/tcp network
If you're running Qlogic FC HBAs, you can run a TCP/IP stack over the
FC links. That would allow NFS traffic over the FC
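On Solaris, IP over FC is provided by the fcip(7D) driver; a hedged sketch of
bringing up such an interface (instance number and addresses hypothetical):

  # plumb and address the IP-over-FC interface on each host
  # (requires an HBA whose driver supports fcip encapsulation)
  host# ifconfig fcip0 plumb 192.168.10.1 netmask 255.255.255.0 up
  # NFS traffic can then be routed over this interface like any other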
Here is what seems to be the best course of action, assuming IP over FC is
supported by the HBAs (which I am pretty sure it is, since this is all
brand-new equipment):
Mount the shared disk backup LUN on Node 1 via the FC link to the SAN as a
non-redundant ZFS volume.
On node 1 RMAN (Oracle bac
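A sketch of the non-redundant pool step, assuming the backup LUN appears on
node 1 as c3t0d0 (device and pool names hypothetical):

  # node 1: build a single-disk (non-redundant) pool on the shared LUN
  node1# zpool create backuppool c3t0d0
  node1# zfs create backuppool/rman
  # ... then point the RMAN disk channel at /backuppool/rman ...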
On Sat, Aug 25, 2007 at 12:36:34 -0700, Matt B wrote:
: I'm not sure what you mean
I think what he's trying to tell you is that you need to consult a storage
expert.
--
Dickon Hood
Due to digital rights management, my .sig is temporarily unavailable.
Normal service will be resumed as soon as pos
I'm not sure what you mean
> Originally, we tried using our tape backup software
> to read the Oracle flash recovery area (an Oracle raw
> device on a separate set of SAN disks); however, our
> backup software has a known issue with the
> particular version of Oracle we are using.
So one option is to get the backup vendor
On 8/25/07, Matt B <[EMAIL PROTECTED]> wrote:
> the 4 database servers are part of an Oracle RAC configuration. 3 databases
> are hosted on these servers, BIGDB1 on all 4, littledb1 on the first 2, and
> littledb2 on the last two. The oracle backup system spawns db backup jobs
> that could occur
the 4 database servers are part of an Oracle RAC configuration. 3 databases are
hosted on these servers, BIGDB1 on all 4, littledb1 on the first 2, and
littledb2 on the last two. The oracle backup system spawns db backup jobs that
could occur on any node based on traffic and load. All nodes are
Ronald Kuehn wrote:
> On Friday, August 24, 2007 at 21:06:28 CEST, Matt B wrote:
>> Can't use the network because these 4 hosts are database servers
>> that will be dumping close to a Terabyte every night. If we put
>> that over the network all the other servers would be starved
>
> I'm afraid ther
> That is what I was afraid of.
>
> In regards to QFS and NFS, isn't QFS something that must be purchased?
> I looked on the Sun website and it appears to be a little pricey.
That's correct. Earlier this year Sun declared an intent to open-source
QFS/SAMFS, but that doesn't help you install it tod
On Friday, August 24, 2007 at 21:06:28 CEST, Matt B wrote:
> Can't use the network because these 4 hosts are database servers that will be
> dumping close to a Terabyte every night. If we put that over the network all
> the other servers would be starved
I'm afraid there aren't many other options
Can't use the network because these 4 hosts are database servers that will be
dumping close to a Terabyte every night. If we put that over the network all
the other servers would be starved
On Friday, August 24, 2007 at 20:41:04 CEST, Matt B wrote:
> That is what I was afraid of.
>
> In regards to QFS and NFS, isn't QFS something that must be purchased? I
> looked on the Sun website and it appears to be a little pricey.
>
> NFS is free, but is there a way to use NFS without traversi
That is what I was afraid of.
In regards to QFS and NFS, isn't QFS something that must be purchased? I looked
on the Sun website and it appears to be a little pricey.
NFS is free, but is there a way to use NFS without traversing the network? We
already have our SAN presenting this disk to each o
On Friday, August 24, 2007 at 20:14:05 CEST, Matt B wrote:
Hi,
> Is it a supported configuration to have a single LUN presented to 4 different
> Sun servers over a fiber channel network and then mounting that LUN on each
> host as the same ZFS filesystem?
No. You can neither access ZFS nor UFS
> Is it a supported configuration to have a single LUN presented to 4
> different Sun servers over a fiber channel network and then mounting
> that LUN on each host as the same ZFS filesystem?
ZFS today does not support multi-host simultaneous mounts. There's no
arbitration for the pool metadata,
Is it a supported configuration to have a single LUN presented to 4 different
Sun servers over a fiber channel network and then mounting that LUN on each
host as the same ZFS filesystem?
We need any of the 4 servers to be able to write data to this shared FC disk.
We are not using NFS as we do