Guys,
What is the best way to ask for a feature enhancement to ZFS?
To allow ZFS to be useful for DR disk replication, we need to be able to
set an option against the pool or file system or both, called close
sync, i.e. when a program closes a file, any outstanding writes are flushed
to disk, bef
Hi,
We're implementing ZFS on a Sun X4500. Does anyone know if Sun or another vendor
provides a template Excel-like sheet that will help us prepare the design of ZFS
on a server system? Of course we can create one that includes the Pools File
System properties (such as compression, quota, reservatio
On Thu, 26 Jul 2007, Damon Atkins wrote:
> Guys,
> What is the best way to ask for a feature enhancement to ZFS?
>
> To allow ZFS to be useful for DR disk replication, we need to be able to
> set an option against the pool or file system or both, called close
> sync, i.e. when a programme closes a f
Robert Milkowski wrote:
> Hello Matthew,
>
> Monday, June 18, 2007, 7:28:35 PM, you wrote:
>
> MA> FYI, we're already working with engineers on some other ports to ensure
> MA> on-disk compatibility. Those changes are going smoothly. So please,
> MA> contact us if you want to make (or want us
Zeke wrote:
> Hello all,
>
> I've been thinking about using an OpenSolaris fileserver for my home network.
>
> There are several things which are important to me in this situation and I'd
> like to know how ZFS handles them. I've been reading the ZFS Administration
> Guide from Sun and I've
Does the OpenSolaris iSCSI target support SCSI-3 PGR reservations?
My goal is to use an iSCSI LUN created by [1] or [2] as a quorum device for a
three-node Sun Cluster.
[1] zfs set shareiscsi=on
[2] iscsitadm create target .
Thanks,
-- leon
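For what it's worth, the two approaches above can be sketched end to end as follows. The pool and volume names (tank/quorum) and the 1 GB size are examples only, and this merely creates the target; whether PGR works depends on the target daemon's SPC-3 support, per the replies below.

```shell
# Back the LUN with a zvol first -- shareiscsi exports zvols, not filesystems.
zfs create -V 1g tank/quorum
zfs set shareiscsi=on tank/quorum        # [1] ZFS-managed iSCSI target
# or, with the standalone target daemon, point it at the zvol device node:
iscsitadm create target -b /dev/zvol/rdsk/tank/quorum quorum   # [2]
```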
This message posted from opensolaris.org
Realistically speaking, you don't want to use SATA for general-purpose
random I/O-heavy storage, which is most likely what your access pattern is
going to be with multiple Windows clients and hosted VMs.
Frankly, if you can afford it, you really want to find someone who will
sell you a combined SA
Leon Koll wrote:
> Does the OpenSolaris iSCSI target support SCSI-3 PGR reservations?
As far as I can tell it does, and it is part of the iSCSI RFC. Source code is at
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/iscsi/iscsitgtd/t10_sbc.c
more info at http://www.opensolaris.org/os/proje
On Jul 25, 2007, at 11:46 PM, asa wrote:
> Hello all,
> I am interested in getting a list of the changed files between two
> snapshots in a fast and zfs-y way. I know that zfs knows all about
> what blocks have been changed, but can one map that to a file list? I
> know this could be solved
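There is no native changed-file list, but one userland approximation is to compare the two snapshots' hidden `.zfs/snapshot` directories. A minimal sketch, with two temporary directories standing in for real snapshot paths (e.g. /tank/fs/.zfs/snapshot/snap1 and snap2) so it is self-contained:

```shell
# Create stand-in "snapshot" directories with one unchanged, one edited,
# and one newly added file.
old=$(mktemp -d); new=$(mktemp -d)
echo same  > "$old/unchanged"; cp "$old/unchanged" "$new/unchanged"
echo v1    > "$old/edited";    echo v2 > "$new/edited"
echo fresh > "$new/added"
# -r recurse, -q print names only; diff exits 1 when differences exist.
diff -rq "$old" "$new" || true
```

This walks every file, so it is far slower than anything block-based ZFS could do internally; `rsync -rn` between the two snapshot directories is a similar option.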
On Jul 26, 2007, at 10:00 AM, gerald anderson wrote:
> Customer question:
>
> Oracle 10
>
> Customer has a 6540 with 4 trays of 300G 10k drives. The raid sets
> are 3 + 1, vertically striped on the 4 trays. Two 400G volumes are
> created on each raid set. Would it be best to put all o
Itay Menahem wrote:
> Hi,
> We're implementing ZFS on a Sun X4500. Does anyone know if Sun or another
> vendor provides a template Excel-like sheet that will help us prepare the
> design of ZFS on a server system? Of course we can create one that includes
> the Pools File System properties (such a
If you look at callers of remove_mountpoint() in libzfs, you'll see that
it does remove the mountpoint, but only for inherited or default
directories. We have no way to know for sure whether the mountpoint was
originally created by ZFS or not, so we can only guess based on the
current mountpoint.
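As a hypothetical illustration of that guess (not the real libzfs code), the decision keys off where the mountpoint property came from, which `zfs get -H -o source mountpoint <fs>` reports as `default`, `local`, or `inherited from ...`:

```shell
# Illustrative only: mimic the guess described above. The directory is
# assumed ZFS-created (and safe to remove) only when the mountpoint
# property was never set locally by the administrator.
should_remove() {
  case "$1" in
    default|inherited*) echo yes ;;   # ZFS likely created the directory
    *)                  echo no  ;;   # admin may have; leave it alone
  esac
}
should_remove default                 # -> yes
should_remove "inherited from tank"   # -> yes
should_remove local                   # -> no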
Hello Matthew,
Thursday, July 26, 2007, 2:56:32 PM, you wrote:
MA> Robert Milkowski wrote:
>> Hello Matthew,
>>
>> Monday, June 18, 2007, 7:28:35 PM, you wrote:
>>
>> MA> FYI, we're already working with engineers on some other ports to ensure
>> MA> on-disk compatibility. Those changes are goi
Sean,
This scenario is covered in the ZFS Admin Guide, found here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6fu?a=view#gcfhe
I provided an example below.
Cindy
# zpool create tank02 c0t0d0
# zpool status tank02
  pool: tank02
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        tank02    ONLINE       0     0     0
          c0t0d0  ONLINE       0     0     0

errors: No known data errors
Hello Matthew,
Monday, June 18, 2007, 7:28:35 PM, you wrote:
MA> FYI, we're already working with engineers on some other ports to ensure
MA> on-disk compatibility. Those changes are going smoothly. So please,
MA> contact us if you want to make (or want us to make) on-disk changes to ZFS
MA> fo
hi there,
On Thu, 2007-07-26 at 21:51 +0800, Andre Wenas wrote:
> You need to specify your boot zfs pool in grub menu.lst:
Hang on, I'm not sure that was the point of the question.
> Robert Prus - Solution Architect, Systems Practice - Sun Poland wrote:
> > How does Solaris/ZFS know which sto
You need to specify your boot zfs pool in grub menu.lst:
# ZFS boot
title Solaris ZFS
root (hd0,3,d)
bootfs rootpool/rootfs
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
In this example, the bootfs is rootpool/rootfs. Grub will load the
Hello Victor,
Wednesday, June 27, 2007, 1:19:44 PM, you wrote:
VL> Gino wrote:
>> Same problem here (snv_60).
>> Robert, did you find any solutions?
VL> A couple of weeks ago I put together an implementation of space maps which
VL> completely eliminates loops and recursion from space map alloc
Hi Wee Yeh,
Thanks for the earlier tips on July 17th. I have a couple of questions...
I followed your suggestion and first created two raid0 volumes.
volume   capacity     raid   data     standby
v0       134.890 GB   0      u1d1-4   none
v1       168.613 GB   0      u1d5
I'm looking to use ZFS to store about 6-10 live virtual machine images
(served via VMWare Server on Linux) and network file storage for ~50 Windows
clients. I'll probably start at about 1TB of storage and want to be able to
scale to at least 4TB. Cost and reliability are my two greatest concerns.
> First, does RaidZ support disks of multiple sizes, or must each RaidZ set
> consist of equal sized disks?
Each RAID-Z set must be constructed from equal-sized storage. While it's
possible to mix disks of different sizes, either you lose the capacity of the
larger disks, or you have to partitio
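To make the cost concrete: a raidz1 vdev yields roughly (n - 1) x smallest-member of usable space, so the smallest disk caps every member. A back-of-envelope sketch (plain arithmetic, not a zpool command):

```shell
# Rough usable capacity of a raidz1 vdev: parity consumes one member's
# worth of space, and every member is clamped to the smallest disk.
raidz1_usable() {   # args: member sizes in GB
  n=$#; min=$1
  for s in "$@"; do
    if [ "$s" -lt "$min" ]; then min=$s; fi
  done
  echo $(( (n - 1) * min ))
}
raidz1_usable 500 500 500    # three equal disks -> 1000
raidz1_usable 500 500 250    # one smaller disk  -> 500
```

So mixing a 250 GB disk into a set of 500 GB disks costs 500 GB of raw capacity versus an all-equal set.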
Customer question:
Oracle 10
Customer has a 6540 with 4 trays of 300G 10k drives. The raid sets are 3 + 1
vertically striped on the 4 trays. Two 400G volumes are created on each
raid set. Would it be best to put all of the volumes in one Zpool or should
we create multiple Zpools to better manage
> I'd implement this via LD_PRELOAD library [ ... ]
>
> There's a problem with sync-on-close anyway - mmap for file I/O. Who
> guarantees you no file contents are being modified after the close() ?
The latter is actually a good argument for doing this (if it is necessary) in
the file system, rat
A quick look through the source would seem to indicate that the PERSISTENT
RESERVE commands are not supported by the Solaris iSCSI target at all.
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/iscsi/iscsitgtd/t10_spc.c
On 26-Jul-07, at 1:24 PM, Robert Milkowski wrote:
> Hello Matthew,
>
> Thursday, July 26, 2007, 2:56:32 PM, you wrote:
>
> MA> Robert Milkowski wrote:
>>> Hello Matthew,
>>>
>>> Monday, June 18, 2007, 7:28:35 PM, you wrote:
>>>
>>> MA> FYI, we're already working with engineers on some other ports
Hello all,
I've been thinking about using an OpenSolaris fileserver for my home network.
There are several things which are important to me in this situation and I'd
like to know how ZFS handles them. I've been reading the ZFS Administration
Guide from Sun and I've looked over a few of the wi