Ben Rockwood wrote:
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt it'd work. And if it does, it
Darren J Moffat wrote:
Ben Rockwood wrote:
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt it'd wo
Hello Ben,
Tuesday, February 6, 2007, 10:19:54 AM, you wrote:
BR> Darren J Moffat wrote:
>> Ben Rockwood wrote:
>>> Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01
Hello zfs-discuss,
It looks like when zfs issues write cache flush commands se3510
actually honors it. I do not have right now spare se3510 to be 100%
sure but comparing nfs/zfs server with se3510 to another nfs/ufs
server with se3510 with "Periodic Cache Flush Time" set to disable
or so
Hello Robert,
Tuesday, February 6, 2007, 12:55:19 PM, you wrote:
RM> Hello zfs-discuss,
RM> It looks like when zfs issues write cache flush commands se3510
RM> actually honors it. I do not have right now spare se3510 to be 100%
RM> sure but comparing nfs/zfs server with se3510 to another n
On Feb 6, 2007, at 06:55, Robert Milkowski wrote:
Hello zfs-discuss,
It looks like when zfs issues write cache flush commands se3510
actually honors it. I do not have right now spare se3510 to be 100%
sure but comparing nfs/zfs server with se3510 to another nfs/ufs
server with se3510 w
Hello Jonathan,
Tuesday, February 6, 2007, 5:00:07 PM, you wrote:
JE> On Feb 6, 2007, at 06:55, Robert Milkowski wrote:
>> Hello zfs-discuss,
>>
>> It looks like when zfs issues write cache flush commands se3510
>> actually honors it. I do not have right now spare se3510 to be 100%
>> sure
IIRC Bill posted here some time ago saying the problem with write cache
on the arrays is being worked on.
Yep, the bug is:
6462690 sd driver should set SYNC_NV bit when issuing SYNCHRONIZE CACHE
to SBC-2 devices
We have a case going through PSARC that will make things work correctly with
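A side note on what the SYNC_NV bit in that bug actually is: per my reading of
SBC-2 it is bit 2 of byte 1 in the SYNCHRONIZE CACHE(10) CDB. The sketch below
is mine, not the sd fix the bug asks for; the device path is a placeholder and
it needs root against a real SBC-2 target. It just pushes the command through
the uscsi(7I) pass-through with the bit set, to make the bug report concrete.

/*
 * Hand-rolled SYNCHRONIZE CACHE(10) with SYNC_NV set, via uscsi(7I).
 * Illustration only -- the bug is about sd doing this itself.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/scsi/impl/uscsi.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        const char *dev = (argc > 1) ? argv[1] : "/dev/rdsk/c1t0d0s2";
        /* Opcode 0x35 = SYNCHRONIZE CACHE(10); 0x04 in byte 1 = SYNC_NV. */
        unsigned char cdb[10] = { 0x35, 0x04, 0, 0, 0, 0, 0, 0, 0, 0 };
        struct uscsi_cmd ucmd;
        int fd = open(dev, O_RDWR | O_NDELAY);

        if (fd == -1) {
                perror("open");
                return (1);
        }
        (void) memset(&ucmd, 0, sizeof (ucmd));
        ucmd.uscsi_cdb = (caddr_t)cdb;
        ucmd.uscsi_cdblen = sizeof (cdb);
        ucmd.uscsi_flags = USCSI_SILENT;
        ucmd.uscsi_timeout = 60;

        if (ioctl(fd, USCSICMD, &ucmd) == -1)
                perror("USCSICMD");
        else
                printf("SYNCHRONIZE CACHE (SYNC_NV) sent, status 0x%x\n",
                    ucmd.uscsi_status);

        (void) close(fd);
        return (0);
}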
On Feb 6, 2007, at 11:46, Robert Milkowski wrote:
Does anybody know how to tell se3510 not to honor write cache
flush
commands?
JE> I don't think you can .. DKIOCFLUSHWRITECACHE *should* tell the array
JE> to flush the cache. Gauging from the amount of calls that zfs makes to
JE>
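For completeness, the ioctl Jonathan names can also be driven by hand from
userland. A minimal sketch, with the device path only as an example and root
privileges assumed:

/*
 * Issue the same cache-flush ioctl that ZFS sends to its vdevs.
 * Passing NULL (no completion callback) makes the flush synchronous.
 */
#include <sys/types.h>
#include <sys/dkio.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
        const char *dev = (argc > 1) ? argv[1] : "/dev/rdsk/c1t0d0s0";
        int fd = open(dev, O_RDWR);

        if (fd == -1) {
                perror("open");
                return (1);
        }
        if (ioctl(fd, DKIOCFLUSHWRITECACHE, NULL) == -1)
                perror("DKIOCFLUSHWRITECACHE");
        else
                printf("%s: write cache flush completed\n", dev);

        (void) close(fd);
        return (0);
}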
ozan s. yigit wrote:
not strictly a zfs question but related: after giving a lot of thought
to the appropriate zpool layout in an x4500, we decided to use a raidz2
5x(7+2)+1 layout, paying attention to having no more than two disks per
set in each controller. in the process, i want to move my root
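(Doing the arithmetic on that layout: 5 raidz2 sets of 9 drives each, 7 data
plus 2 parity, plus 1 hot spare accounts for 46 of the x4500's 48 drives,
which presumably leaves just the two factory root disks, c5t0 and c5t4,
discussed below.)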
ah, good stuff.
thanks.
oz
Richard Elling [in response to my question] wrote:
ozan s. yigit wrote:
... is there any reason why factory install
comes with C5T0 and C5T4? a limitation of the bios or some other reason
i am missing? (i may need to RTFM harder... :)
BIOS limitation.
Hello eric,
Tuesday, February 6, 2007, 5:55:23 PM, you wrote:
>>
>> IIRC Bill posted here some time ago saying the problem with write cache
>> on the arrays is being worked on.
ek> Yep, the bug is:
ek> 6462690 sd driver should set SYNC_NV bit when issuing SYNCHRONIZE CACHE
ek> to SBC-2 devices
Hello Casper,
Monday, January 22, 2007, 2:56:16 PM, you wrote:
>>Is there an BIOS uptade for Ultra20 to make it understand EFI?
CDSC> Understanding EFI is perhaps asking too much; but I believe the
CDSC> latest BIOS no longer hangs/crashes when it encounters EFI labels
CDSC> on disks it examin
Hi All,
No one has any idea on this?
-Masthan
dudekula mastan <[EMAIL PROTECTED]> wrote:
Hi All,
In my test setup, I have one zpool of size 1000 MB.
On this zpool, my application writes 100 files each of size 10 MB.
First 96 files were written successfully
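A rough reconstruction of the kind of test being described, with the pool
mount point and file names invented for illustration:

/*
 * Write NFILES files of FILESIZE bytes until one fails.  In a pool whose
 * nominal size exactly equals NFILES * FILESIZE, the last few writes are
 * expected to fail once metadata overhead is accounted for.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define FILESIZE        (10 * 1024 * 1024)      /* 10 MB per file */
#define NFILES          100

int
main(void)
{
        char *buf = calloc(1, FILESIZE);
        char path[64];
        int i;

        if (buf == NULL)
                return (1);

        for (i = 0; i < NFILES; i++) {
                (void) snprintf(path, sizeof (path), "/testpool/file.%03d", i);
                int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd == -1) {
                        perror(path);
                        break;
                }
                ssize_t n = write(fd, buf, FILESIZE);
                (void) close(fd);
                if (n != FILESIZE) {
                        printf("file %d failed: %s\n", i,
                            n == -1 ? strerror(errno) : "short write");
                        break;
                }
        }
        printf("%d of %d files written in full\n", i, NFILES);
        free(buf);
        return (0);
}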
Robert Milkowski wrote on 02/06/07 11:43:
Hello eric,
Tuesday, February 6, 2007, 5:55:23 PM, you wrote:
IIRC Bill posted here some time ago saying the problem with write cache
on the arrays is being worked on.
ek> Yep, the bug is:
ek> 6462690 sd driver should set SYNC_NV bit when issuing
ZFS documentation lists snapshot limits on any single file system in a pool at
2**48 snaps, and that seems to logically imply that a snap on a file system
does not require an update to the pool’s currently active uberblock. That is
to say, if we take a snapshot of a file system in a pool,
On Feb 6, 2007, at 10:43 AM, Robert Milkowski wrote:
Hello eric,
Tuesday, February 6, 2007, 5:55:23 PM, you wrote:
IIRC Bill posted here some time ago saying the problem with write cache
on the arrays is being worked on.
ek> Yep, the bug is:
ek> 6462690 sd driver should set SYNC_NV bit
>We've considered looking at porting the AOE _server_ module to Solaris,
>especially since the Solaris loopback driver (/dev/lofi) is _much_ more
>stable than the loopback module in Linux (the Linux loopback module is a
>stellar piece of crap).
ok, it's quite old and probably not the most elegant
Kevin Abbey wrote:
Does this seem like a good idea? I am not a storage expert and am
attempting to create a scalable distributed storage cluster for an HPC
cluster.
An AOE/ZFS/NFS setup doesn't sound scalable or distributed; your ZFS/NFS
server may turn out to be a bottleneck.
Wes Felter
If I understand correctly, at least some systems claim not to guarantee
consistency between changes to a file via write(2) and changes via mmap(2).
But historically, at least in the case of regular files on local UFS, since
Solaris
used the page cache for both cases, the results should have been c
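The question is easy to poke at with a small program. This sketch (mine, not
from the post) updates a file through write(2) while the same bytes are mapped
with mmap(2), then checks whether the mapping observes the change, which is
what a shared page cache would give you:

/*
 * Sketch (not from the post): update a file via write(2) while the same
 * bytes are mapped with mmap(2), then check whether the mapping sees the
 * change.  With a shared page cache, as on Solaris UFS, it should.
 */
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        const char *path = "/tmp/wr-mmap-test";  /* scratch file, arbitrary */
        char initial[16] = "old old old old";
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);

        if (fd == -1 || write(fd, initial, sizeof (initial)) != sizeof (initial)) {
                perror("setup");
                return (1);
        }

        char *map = mmap(NULL, sizeof (initial), PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                return (1);
        }

        /* Overwrite the start of the file through the write(2) path. */
        (void) lseek(fd, 0, SEEK_SET);
        (void) write(fd, "new", 3);

        /* With coherent caches this prints "new old old old". */
        printf("mapping now reads: %s\n", map);

        (void) munmap(map, sizeof (initial));
        (void) close(fd);
        (void) unlink(path);
        return (0);
}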
On 1/18/07, Tan Shao Yi <[EMAIL PROTECTED]> wrote:
Hi,
Was wondering if anyone had experience working with VxVM volumes in a
zpool. We are using VxVM 5.0 on a Solaris 10 11/06 box. The volume is on a
SAN, with two FC HBAs connected to a fabric.
The setup works, but we observe a very strange mes
Richard,
Richard L. Hamilton wrote:
If I understand correctly, at least some systems claim not to guarantee
consistency between changes to a file via write(2) and changes via mmap(2).
But historically, at least in the case of regular files on local UFS, since
Solaris
used the page cache for bo
Masthan,
dudekula mastan <[EMAIL PROTECTED]> wrote:
Hi All,
In my test setup, I have one zpool of size 1000 MB.
Is this the size given by zfs list? Or is it the amount of disk space that
you had?
The reason I ask this is because ZFS/Zpool takes up some amount of
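(One arithmetic point worth making explicit: 100 files of 10 MB is nominally
the entire 1000 MB pool with zero headroom, so even a few percent of pool and
filesystem metadata overhead is enough to make the last handful of files fail.)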