Darren Reed wrote:
Darren,
A point I don't believe has yet been addressed in this
discussion is: what is the threat model?
Are we targeting NIST requirements for some customers,
or just general use by everyday folks?
Even higher level: What problem are you/we trying to solve?
Darren
Hi,
This may not be the right place to post, but I'm hoping someone here who is
running a reliably working 12-drive system with ZFS can tell me what hardware
they are using.
I have on order with my server vendor a pair of 12-drive servers that I want to
use with ZFS for our company file st
On Dec 20, 2006, at 5:46 AM, Darren J Moffat wrote:
james hughes wrote:
Not to add a cold blanket to this...
This would be mostly a "vanity erase" not really a serious
"security erase" since it will not overwrite the remnants of
remapped sectors.
Indeed and as you said there is other so
On Dec 20, 2006, at 1:37 PM, Bill Sommerfeld wrote:
On Wed, 2006-12-20 at 03:21 -0800, james hughes wrote:
This would be mostly a "vanity erase" not really a serious "security
erase" since it will not overwrite the remnants of remapped sectors.
Yup. As usual, your mileage will vary dependin
Thanks Ben, and thanks Jason for clearing everything up for me via e-mail!
Hope you two, and everyone here, have a great Christmas and a happy holiday!
Andrew Summers wrote:
> So, I've read the Wikipedia article, and have done a lot of research on Google
> about it, but it just doesn't make sense to me. Correct me if I'm wrong, but you
> can take a simple 5/10/20 GB drive or whatever size, and turn it into
> exabytes of storage space?
>
> If that is n
On 20 December, 2006 - storage-disk sent me these 0,4K bytes:
> Hi Eric,
>
> How do you decode the files /var/fm/fmd/errlog and /var/fm/fmd/fltlog?
fmdump -e, fmdump
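For example, typical invocations look something like this (exact options may
vary by release; by default fmdump reads the fault log, -e the error log):

  fmdump                          # summarize /var/fm/fmd/fltlog
  fmdump -e                       # summarize /var/fm/fmd/errlog
  fmdump -eV /var/fm/fmd/errlog   # full detail from an explicit log file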
/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadm
So, I've read the Wikipedia article, and have done a lot of research on Google
about it, but it just doesn't make sense to me. Correct me if I'm wrong, but you
can take a simple 5/10/20 GB drive or whatever size, and turn it into exabytes of
storage space?
If that is not true, please explain the impor
I've heard from old-old-oldtimers, back in the epoxy-disk days, that
even after this type of erase the old epoxy disks could sometimes be
read via etching combined with electron microscopes -- the (relatively) new
sputtered aluminum finishes probably changed that. So back in the epoxy days,
disks
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
> "zpool import" should give you a list of all the pools ZFS sees as being
> mountable. "zpool import [poolname]" is also, conveniently, the command used
> to mount the pool afterward. :)
>
> If it doesn't show up there, I'll be surprised.
I
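For reference, the sequence described above looks something like this (pool
name hypothetical):

  zpool import         # list pools ZFS can see but that aren't imported
  zpool import tank    # import (and mount) the pool named "tank"
  zpool import -f tank # force the import if the pool wasn't cleanly exported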
Hi Eric,
I'm experiencing the same problem. However, I don't know how to dd the first and
the last sectors of the disk. May I have the command?
How do you decode the files /var/fm/fmd/errlog and /var/fm/fmd/fltlog?
Thanks
Giang
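For reference, reading the first and last sectors on Solaris usually looks
something like this (device name and sector count are placeholders; get the
real values from format(1M) before running anything):

  # copy the first 512-byte sector to a file
  dd if=/dev/rdsk/c0t0d0s2 of=/tmp/first.sec bs=512 count=1
  # copy the last sector; NSECT stands for the disk's total sector count
  dd if=/dev/rdsk/c0t0d0s2 of=/tmp/last.sec bs=512 iseek=$((NSECT - 1)) count=1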
> On Wed, 2006-12-20 at 03:21 -0800, james hughes wrote:
> > This would be mostly a "vanity erase" not really a serious "security
> > erase" since it will not overwrite the remnants of remapped sectors.
>
> Yup. As usual, your mileage will vary depending on your threat model.
>
> My gut fe
On 20-Dec-06, at 3:05 PM, Jason J. W. Williams wrote:
Hi Toby,
My understanding on the subject of SATA firmware reliability vs.
FC/SCSI is that it's mostly related to SATA firmware being a lot
younger. ... It's probably unfair to expect defect rates out of SATA
firmware equivalent to firmware t
Dennis wrote:
I just wanted to know if there is any news regarding Project
Honeycomb? Wasn't it announced for the end of 2006? Is there still
development?
We're still going ;-)
There have been some limited releases so far. Stanford bought a Honeycomb
system for a Digital Library project.
If you
Jason J. W. Williams wrote:
Not sure. I don't see an advantage to moving off UFS for boot pools. :-)
-J
Except of course that snapshots & clones will surely be a nicer
way of recovering from "adverse administrative events"...
-= Bart
--
Bart Smaalders Solaris Kernel Performa
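For illustration, once boot pools are on ZFS, that kind of recovery might look
like this (pool and dataset names hypothetical):

  # snapshot the root filesystem before a risky change
  zfs snapshot rpool/ROOT/sol@pre-change
  # roll back if the change goes badly
  zfs rollback rpool/ROOT/sol@pre-change
  # or keep a writable clone around as an alternate boot environment
  zfs clone rpool/ROOT/sol@pre-change rpool/ROOT/sol-recovery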
On Wed, 2006-12-20 at 03:21 -0800, james hughes wrote:
> This would be mostly a "vanity erase" not really a serious "security
> erase" since it will not overwrite the remnants of remapped sectors.
Yup. As usual, your mileage will vary depending on your threat model.
My gut feel is that the
Hello,
Short version: Pool A is fast, pool B is slow. Writing to pool A is
fast. Writing to pool B is slow. Writing to pool B WHILE writing to
pool A is fast on both pools. Explanation?
Long version:
I have an existing two-disk pool consisting of two SATA drives. Call
this pool "pool1". This has
Not sure. I don't see an advantage to moving off UFS for boot pools. :-)
-J
On 12/20/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> I agree with others here that the kernel panic is undesired behavior.
> If ZFS would simply offline the zpool and not kernel panic
James C. McPherson wrote:
Jason J. W. Williams wrote:
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.
Wha
Jason J. W. Williams wrote:
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.
What about the root/boot pool?
Some further joy:
http://bugs.opensolaris.org/view_bug.do?bug_id=6504404
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote:
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote:
> We just put together a new system for ZFS use at a company, and twice
> in one week we've had the system wedge. You ca
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote:
We just put together a new system for ZFS use at a company, and twice
in one week we've had the system wedge. You can log on, but the zpools
are hosed, and a reboot never occurs if requested since it can't
unmount the zfs volumes. So, only a powe
Hi Robert,
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going on.
Best Regards,
Jason
On 12/20/06, Robert Milkows
Hi Toby,
My understanding on the subject of SATA firmware reliability vs.
FC/SCSI is that it's mostly related to SATA firmware being a lot
younger. The FC/SCSI firmware that's out there has been debugged for
10 years or so, so it has a lot fewer hiccoughs. Pillar Data Systems
told us once that the
We just put together a new system for ZFS use at a company, and twice
in one week we've had the system wedge. You can log on, but the zpools
are hosed, and a reboot never occurs if requested since it can't
unmount the zfs volumes. So, only a power cycle works.
In both cases, we get this:
Dec 20
On 19-Dec-06, at 2:42 PM, Jason J. W. Williams wrote:
I do see this note in the 3511 documentation: "Note - Do not use a
Sun StorEdge 3511 SATA array to store single instances of data. It
is more suitable for use in configurations where the array has a
backup or archival role."
My unders
On 18-Dec-06, at 11:18 PM, Matt Ingenthron wrote:
Mike Seda wrote:
Basically, is this a supported zfs configuration?
Can't see why not, but support or not is something only Sun support
can speak for, not this mailing list.
You say you lost access to the array though-- a full disk failure
On 19-Dec-06, at 11:51 AM, Jonathan Edwards wrote:
On Dec 19, 2006, at 10:15, Torrey McMahon wrote:
Darren J Moffat wrote:
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that
Jonathan Edwards wrote:
On Dec 20, 2006, at 04:41, Darren J Moffat wrote:
Bill Sommerfeld wrote:
There also may be a reason to do this when confidentiality isn't
required: as a sparse provisioning hack..
If you were to build a zfs pool out of compressed zvols backed by
another pool, then it w
>>
>> no no .. it's a "feature". :-P
>>
>> If it walks like a duck and quacks like a duck then it's a duck.
>>
>> a kernel panic that brings down a system is a bug. Plain and simple.
>
> I disagree (nit). A hardware fault can also cause a panic. Faults != bugs.
ha ha .. yeah. If the sysa
Jason J. W. Williams wrote:
"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect
your data."
This is a bug, not a feature. We are currently working on fixing it.
--matt
Dennis Clarke wrote:
Anton B. Rang wrote:
"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your
data."
Is this the official, long-term stance? I don't think it is. I think this
is an interpretation of
Hi..
After searching high & low, I cannot find the answer for what I want to do (or at
least understand how to do it). I am hopeful somebody can point me in the right
direction.
I have (2) non-global zones (samba & www). I want to be able to have all user
home dirs served from zone samba AND be v
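One common approach is to keep the home directories where the global zone can
see them and loopback-mount them into each zone that needs them -- a sketch
only, with hypothetical paths:

  # in the global zone, for each zone that needs the homes:
  zonecfg -z www
  zonecfg:www> add fs
  zonecfg:www:fs> set dir=/export/home
  zonecfg:www:fs> set special=/export/home
  zonecfg:www:fs> set type=lofs
  zonecfg:www:fs> end
  zonecfg:www> commit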
On Dec 20, 2006, at 04:41, Darren J Moffat wrote:
Bill Sommerfeld wrote:
There also may be a reason to do this when confidentiality isn't
required: as a sparse provisioning hack..
If you were to build a zfs pool out of compressed zvols backed by
another pool, then it would be very convenient i
On Wed, 20 Dec 2006, Pawel Jakub Dawidek wrote:
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
In case it wasn't clear I am NOT proposing a UI like this:
$ zfs bleach ~/Documents/company-finance.odp
Instead ~/Documents or ~ would be a ZFS file system with a policy set someth
Pawel Jakub Dawidek writes:
> > The goal is the same as the goal for things like compression in ZFS: no
> > application change, it is "free" for the applications.
>
> I like the idea, I really do, but it will be so expensive because of
> ZFS' COW model. Not only file removal or truncation will
On Dec 20, 2006, at 00:37, Anton B. Rang wrote:
"INFORMATION: If a member of this striped zpool becomes
unavailable or develops corruption, Solaris will kernel panic and reboot to
protect your data."
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic,
inste
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
> In case it wasn't clear I am NOT proposing a UI like this:
>
> $ zfs bleach ~/Documents/company-finance.odp
>
> Instead ~/Documents or ~ would be a ZFS file system with a policy set
> something like this:
>
> # zfs set erase=fil
On Wed, Dennis wrote:
> Hello,
>
> I just wanted to know if there is any news regarding Project Honeycomb?
> Wasn't it announced for the end of 2006? Is there still development?
http://www.sun.com/storagetek/honeycomb/
On Tue, Dec 19, 2006 at 10:29:24PM -0500, Rince wrote:
>
> What exactly did it say? Did it say there are some pools that couldn't be
> imported, use zpool import -f to see them, or just "no pools available"?
no pools available
> If not, then I suspect that Solaris install didn't see the relevant
james hughes wrote:
Not to add a cold blanket to this...
This would be mostly a "vanity erase" not really a serious "security
erase" since it will not overwrite the remnants of remapped sectors.
Indeed and as you said there is other software to deal with this for
those types of customers t
Hello,
I just wanted to know if there is any news regarding Project Honeycomb? Wasn't
it announced for the end of 2006? Is there still development?
I hope this is the right place to post this question.
happy holidays
Hello Jason,
Wednesday, December 20, 2006, 1:02:36 AM, you wrote:
JJWW> Hi Robert
JJWW> I didn't take any offense. :-) I completely agree with you that zpool
JJWW> striping leverages standard RAID-0 knowledge in that if a device
JJWW> disappears your RAID group goes poof. That doesn't really req
Not to add a cold blanket to this...
This would be mostly a "vanity erase" not really a serious "security
erase" since it will not overwrite the remnants of remapped sectors.
Serious security erase software will unmap sectors and erase both
locations using special microcode features. While
On Tue, 19 Dec 2006, Anton B. Rang wrote:
"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your data."
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic, instead of
EIO,
Matthew Ahrens wrote:
Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.
I see two use cases here:
1. This filesystem contains sensitive information. When it is freed,
make sure it's really
Bill Sommerfeld wrote:
There also may be a reason to do this when confidentiality isn't
required: as a sparse provisioning hack..
If you were to build a zfs pool out of compressed zvols backed by
another pool, then it would be very convenient if you could run in a
mode where freed blocks were ov
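A sketch of the construction Bill describes (names and sizes hypothetical;
building a pool on a local zvol is not necessarily safe in practice):

  # create a zvol in the backing pool and compress it
  zfs create -V 10g tank/backing
  zfs set compression=on tank/backing
  # build the inner pool on top of the zvol
  zpool create inner /dev/zvol/dsk/tank/backing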