Darren J Moffat wrote:
..
I think we need 5 distinct places to set the policy:
1) On file delete
This would be a per dataset policy.
The bleaching would happen in a new transaction group
created by the one that did the normal deletion, and would
run only if the original one c
> BTW, Jeff's posts to zfs-discuss are being rejected with this message [ ... ]
... while the spam is coming through loud & clear. ;-)
> I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
> 2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these
> slices to a
> Solaris 10 U2 machine and added each of them to a concat (non-raid) zpool as
> listed below:
This is certainly a supportable
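For reference, the concat layout described above is just a pool created without
a redundancy keyword. A minimal sketch, using purely hypothetical slice names:

  # non-redundant (concat/stripe) pool built from four slices of the array LUN;
  # the device names are placeholders
  zpool create tank c4t0d0s0 c4t0d0s1 c4t0d0s3 c4t0d0s4
  # with no ZFS-level redundancy, losing any one slice makes the whole pool
  # unavailable; check with:
  zpool status tank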
Mike Seda wrote:
Basically, is this a supported zfs configuration?
Can't see why not, but whether it is supported is something only Sun support
can speak to, not this mailing list.
You say you lost access to the array though-- a full disk failure
shouldn't cause this if you were using RAID-5 on the
Hi All,
Umm... I recently posted that I have successfully deployed ZFS in a SAN.
Well, I just had a disk fail on the second day of production, and am
currently in downtime waiting for a disk from Sun. I have a Sun SE 3511
array with 5 x 500 GB SATA-I disks in a RAID 5. This 2 TB logical drive
Jürgen Keil wrote:
Shouldn't zfs/fstyp skip probing for zfs/zpools on small-capacity
devices, like floppy media, that are smaller than this 64 MB minimum?
To add to this: I've also seen that uninitialised disk slices are listed
as a corrupted pool when I try to do a zfs import. In my case the sli
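If it helps, one way to check whether a given slice actually carries ZFS labels
(the device path below is just a placeholder) is:

  # print any ZFS vdev labels on the slice; an uninitialised slice should
  # fail to unpack all four label copies
  zdb -l /dev/rdsk/c1t0d0s0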
It seems to me that the optimal scenario would be network filesystems
on top of ZFS, so you can get the data portability of a SAN, but let
ZFS make all of the decisions. Short of that, ZFS on SAN-attached
JBODs would give a similar benefit. Having benefited tremendously from
being able to easily d
On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:
>
>
> On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams
> <[EMAIL PROTECTED]> wrote:
>
> >Or an iovec-style specification. But really, how often will one prefer
> >this to truncate-and-bleach? Also, the to-be-ble
On Mon, Dec 18, 2006 at 05:16:28PM -0600, Nicolas Williams wrote:
> On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
BTW, Jeff's posts to zfs-discuss are being rejected with this message:
You are not allowed to post to this mailing list, and your message
has been automatic
On Mon, 18 Dec 2006, Torrey McMahon wrote:
> Al Hopper wrote:
> > On Sun, 17 Dec 2006, Ricardo Correia wrote:
> >
> >
> >> On Friday 15 December 2006 20:02, Dave Burleson wrote:
> >>
> >>> Does anyone have a document that describes ZFS in a pure
> >>> SAN environment? What will and will not work?
On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
> On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams
> <[EMAIL PROTECTED]> wrote:
> > I'd say go for both, (a) and (b). Of course, (b) may not be easy to
> > implement.
>
> Another option would be to warn the use
Hello Torrey,
Monday, December 18, 2006, 8:38:42 PM, you wrote:
TM> Christine Tran wrote:
>>
>>
>> And the PowerPath question is important, customer is using PP right now.
>>
TM> I haven't heard any powerpath issues. Can you track down what it was
TM> GeorgeW mentioned?
H...
http://www.we
comment far below...
Jonathan Edwards wrote:
On Dec 18, 2006, at 16:13, Torrey McMahon wrote:
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? Wh
On Dec 18, 2006, at 16:13, Torrey McMahon wrote:
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the i
Al Hopper wrote:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the information I have been gathering
it doesn't ap
Mike Seda wrote:
The following is output from getfacl on a ufs filesystem:
[EMAIL PROTECTED] maseda]$ getfacl /home/users/ahege/incoming
# file: /home/users/ahege/incoming
# owner: ahege
# group: uncmd
user::rwx
user:nobody:rwx #effective:rwx
group::r-x #effective:r-x
mask:rwx
James W. Abendschan wrote:
Once the mirror was synced, I disconnected one of the iSCSI boxes
(pulled the ethernet plug from one of the VTraks), did some I/O
on the volume, and Solaris panicked. After it rebooted, I did a
'zpool scrub' and the T1000 again went into la-la land while the
scrubbing
Torrey McMahon wrote:
I haven't heard any powerpath issues. Can you track down what it was
GeorgeW mentioned?
Well, the problem is I can't remember. It was during a ZFS TOI class,
and perhaps it was that PP tries to be clever by grouping tsx together
... If there's been no PP issue reported
The following is output from getfacl on a ufs filesystem:
[EMAIL PROTECTED] maseda]$ getfacl /home/users/ahege/incoming
# file: /home/users/ahege/incoming
# owner: ahege
# group: uncmd
user::rwx
user:nobody:rwx #effective:rwx
group::r-x #effective:r-x
mask:rwx
other:r-x
I want
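In case the question is heading towards ZFS: ZFS uses NFSv4-style ACLs rather
than the POSIX-draft ACLs that getfacl reports, so a roughly equivalent entry
would be viewed and set like this (a sketch; it assumes the directory lives on
a ZFS filesystem):

  # show the ACL in ZFS/NFSv4 terms
  ls -vd /home/users/ahege/incoming
  # grant user "nobody" read/write/execute, roughly matching the
  # user:nobody:rwx entry from getfacl above
  chmod A+user:nobody:read_data/write_data/execute:allow /home/users/ahege/incoming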
On 12/18/06, Robin Harris <[EMAIL PROTECTED]> wrote:
There's been another sighting of ZFS on Mac. The latest developer
release of Leopard (Mac OS 10.5) has a dialogue box calling out the
"Zettabyte File System (ZFS)" as an option. The first publication I
saw this is a French website called Mac4Ev
Christine Tran wrote:
And the PowerPath question is important, customer is using PP right now.
I haven't heard any powerpath issues. Can you track down what it was
GeorgeW mentioned?
There's been another sighting of ZFS on Mac. The latest developer
release of Leopard (Mac OS 10.5) has a dialogue box calling out the
"Zettabyte File System (ZFS)" as an option. The first publication I
saw this is a French website called Mac4Ever - http://mac4ever.com/
news/27485/zettabyte_s
On Mon, Dec 18, 2006 at 11:32:37AM -0600, Nicolas Williams wrote:
>The new system call should either appear to truncate the file or to
>overwrite it with zeros. The latter would allow for bleaching some
>byte ranges, rather than the whole file (ZFS complexity: first COW
>the non-bl
IMO:
- The hardest problem in the case of bleaching individual files or
datasets is dealing with snapshots/clones:
- blocks not shared with parent/child snapshots can be bleached with
little trouble, of course.
- But what about shared blocks?
IMO we have two options:
James W. Abendschan wrote:
> It took about 3 days to finish
> during which the T1000 was basically unusable. (during that time,
> sendmail managed to syslog a few messages about how it
> was skipping the queue run because the load was at 200 :-)
Glup
Kory Wheatley wrote:
Basically then, with data being stored on the ZFS disks (no applications), and
web server logs, would it benefit us more to have the 3 LUNs set up in one ZFS
storage pool?
In general, yes.
-- richard
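A sketch of what "the 3 LUNs in one pool" could look like (the LUN device names
are placeholders; whether to stripe across them or mirror depends on how much
you trust the array's own redundancy):

  # one pool striped across the three LUNs, relying on the array for redundancy
  zpool create webpool c4t0d0 c4t1d0 c4t2d0
  # then carve out per-purpose filesystems, e.g. for the web server logs
  zfs create webpool/logs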
Additional comments below...
Christine Tran wrote:
Hi,
I guess we are acquainted with the ZFS Wikipedia?
http://en.wikipedia.org/wiki/ZFS
Customers refer to it, I wonder where the Wiki gets its numbers. For
example there's a Sun marketing slide that says "unlimited snapshots"
contradicted
[EMAIL PROTECTED] wrote:
Rather than "bleaching", which doesn't always remove all stains, why can't
we use a word like "erasing" (which is hitherto unused for filesystem use
in Solaris, AFAIK)?
And this method doesn't remove all stains from the disk anyway; it just
reduces them so they can't be eas
>Darren J Moffat wrote:
>> I think we need 5 distinct places to set the policy:
>>
>> 1) On file delete
>> This would be a per dataset policy.
>> The bleaching would happen in a new transaction group
>> created by the one that did the normal deletion, and would
>> run only if the
On 12/18/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
[ This is for discussion, it doesn't mean I'm actively working on this
functionality at this time or that I might do so in the future. ]
When we get crypto support one way to do "secure delete" is to destroy
the key. This is usually a
Darren J Moffat wrote:
I think we need 5 distinct places to set the policy:
1) On file delete
This would be a per dataset policy.
The bleaching would happen in a new transaction group
created by the one that did the normal deletion, and would
run only if the original one compl
[ This is for discussion, it doesn't mean I'm actively working on this
functionality at this time or that I might do so in the future. ]
When we get crypto support one way to do "secure delete" is to destroy
the key. This is usually a much simpler and faster task than erasing
and overwriting
Thank you to everyone that has replied. It sounds like I have a few options
with regards to upgrading or just waiting and patching the current environment.
David
Neil Perrin wrote:
Having said that I don't think we recommend messing with the transaction
group commit timing.
Yeah, I don't think the customer means to tune it this way either; they
were thinking of something like tune_t_fsflushr (is this still in use?).
They want to know when the txg timin
Hello Nathalie,
Monday, December 18, 2006, 2:14:29 PM, you wrote:
NPI> I have a machine with ZFS connected to a SAN. The storage space was
NPI> increased on the SAN. The "format" command shows the increase in volume
NPI> correctly, but the size of the ZFS pool did not increase. What do I need to do so
NPI> th
Hello zfs-discuss,
S10U3, one pool with several RAID-Z2 groups, one file system in a
pool with one large file (about 16.7 TB). Pool was just imported and
I issued zfs destroy pool/test.
According to zpool iostat there's about 1-3 MB/s of reads with 1-2K
IOPS. Using iostat I can see about
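For anyone wanting to reproduce the observation, the monitoring described above
is roughly (the pool name is a placeholder):

  # per-vdev pool I/O, refreshed every 5 seconds
  zpool iostat -v pool 5
  # OS-level view: extended stats, device names, non-zero devices only
  iostat -xnz 5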
I've noticed that fstyp on floppy media formatted with "pcfs" now needs
somewhere between 30 and 100 seconds to find out that the floppy media is
formatted with "pcfs".
E.g. on sparc snv_48, I currently observe this:
% time fstyp /vol/dev/rdiskette0/nomedia
pcfs
0.01u 0.10s 1:38.84 0.1%
zfs's /
I have a machine with ZFS connected to a SAN. The storage space was
increased on the SAN. The "format" command shows the increase in volume
correctly, but the size of the ZFS pool did not increase. What do I need
to do so that zfs takes this increase in volume into account?
Thanks,
Nathalie.
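In case it helps: the pool only re-checks a device's size when the vdev is
reopened, so one commonly suggested (if disruptive) approach, assuming the pool
is called tank and the device label already reflects the new size, is:

  # reopen the vdevs so ZFS sees the larger LUN
  zpool export tank
  zpool import tank
  # verify the new capacity
  zpool list tank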
Roch - PAE wrote:
Was it over NFS ?
No, local.
Was zil_disable set on the server ?
Not unless it is set by default. I haven't changed any ZFS params.
If it's yes/yes, I still don't know for sure if that would
be grounds for a causal relationship, but I would certainly
be looking into it.
Hello Neil,
Monday, December 18, 2006, 5:48:40 AM, you wrote:
>> CT> Will I be able to tune the DMU "flush" rate, now set at 5 seconds?
>>
>> echo 'txg_time/D 0t1' | mdb -kw
NP> Er, that 'D' should be a 'W'.
NP> Having said that I don't think we recommend messing with the transaction
NP> grou
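For the record, the difference matters: /D only displays the value, while /W
writes it. A small sketch of checking before (not that it's recommended)
changing it:

  # display the current txg sync interval, in decimal
  echo 'txg_time/D' | mdb -k
  # write a new value of 1 second (0t1 is decimal 1); use with care
  echo 'txg_time/W 0t1' | mdb -kw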
Was it over NFS ?
Was zil_disable set on the server ?
If it's yes/yes, I still don't know for sure if that would
be grounds for a causal relationship, but I would certainly
be looking into it.
-r
Trevor Watson writes:
> Anton B. Rang wrote:
> > Were there any errors reported in /var/adm/messa