> Anton B. Rang wrote:
>>> "INFORMATION: If a member of this striped zpool becomes unavailable or
>>> develops corruption, Solaris will kernel panic and reboot to protect your
>>> data."
>>>
>>
>> OK, I'm puzzled.
>>
>> Am I the only one on this list who believes that a kernel panic, instead
>> of
Nathalie Poulet (IPSL) wrote:
Hello,
After an export and an import, the size of the pool remained
unchanged. As there was no data on this partition, I destroyed and
recreated the pool. The size was then taken into account.
The correct size is indicated by the command "zpool list". The or
Bill Sommerfeld wrote:
On Tue, 2006-12-19 at 16:19 -0800, Matthew Ahrens wrote:
Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.
I see two use cases here:
I agree with your two, but I thin
Anton B. Rang wrote:
"INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your data."
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic, instead of
EIO, represents a bug?
> "INFORMATION: If a member of this striped zpool becomes unavailable or
> develops corruption, Solaris will kernel panic and reboot to protect your
> data."
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic, instead of
EIO, represents a bug?
This message post
On 12/19/06, Brian Hechinger <[EMAIL PROTECTED]> wrote:
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
>
> If it doesn't show up there, I'll be surprised.
I take that back, I just managed to restore my ability to boot the old
instance.
I will be making backups and starting clean, this
Thanks a lot Eric.
But weren't you supposed to be on vacation!?
Regards,
Al Hopper Logical Approach Inc, Plano, TX. [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Gov
On Tue, 2006-12-19 at 16:19 -0800, Matthew Ahrens wrote:
> Darren J Moffat wrote:
> > I believe that ZFS should provide a method of bleaching a disk or part
> > of it that works without crypto having ever been involved.
>
> I see two use cases here:
I agree with your two, but I think I see a thi
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 12/01 - 12/15
=
Size of all threads during per
Darren J Moffat wrote:
I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.
I see two use cases here:
1. This filesystem contains sensitive information. When it is freed,
make sure it's really gone.
I believe that
Hi Robert
I didn't take any offense. :-) I completely agree with you that zpool
striping leverages standard RAID-0 knowledge in that if a device
disappears your RAID group goes poof. That doesn't really require a
notice...was just trying to be complete. :-)
The surprise to me was that detecting
Hello Jason,
Tuesday, December 19, 2006, 11:23:56 PM, you wrote:
JJWW> Hi Robert,
JJWW> I don't think its about assuming the admin is an idiot. It happened to
JJWW> me in development and I didn't expect it...I hope I'm not an idiot.
JJWW> :-)
JJWW> Just observing the list, a fair amount of peop
On Tue, Dec 19, 2006 at 03:09:03PM -0500, Jeffrey Hutzelman wrote:
>
>
> On Tuesday, December 19, 2006 01:54:56 PM + Darren J Moffat
> <[EMAIL PROTECTED]> wrote:
>
> >While I think having this in the VOP/FOP layer is interesting it isn't
> >the problem I was trying to solve and to be comple
Hi Robert,
I don't think its about assuming the admin is an idiot. It happened to
me in development and I didn't expect it...I hope I'm not an idiot.
:-)
Just observing the list, a fair amount of people don't expect it. The
likelihood you'll miss this one little bit of very important
information
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
>
> If it doesn't show up there, I'll be surprised.
I take that back, I just managed to restore my ability to boot the old
instance.
I will be making backups and starting clean, this old partitioning has
screwed me up for the last time.
Tha
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
>
> "zpool import" should give you a list of all the pools ZFS sees as being
> mountable. "zpool import [poolname]" is also, conveniently, the command used
> to mount the pool afterward. :)
Which is what I expected to happen, however.
>
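For reference, a minimal sketch of that workflow (the pool name "tank" is only an example):
$ zpool import          # with no arguments, lists pools ZFS can see but that are not yet imported
$ zpool import tank     # imports the named pool and mounts its filesystems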
Hello Jason,
Tuesday, December 19, 2006, 8:54:09 PM, you wrote:
>> > Shouldn't there be a big warning when configuring a pool
>> > with no redundancy and/or should that not require a -f flag ?
>>
>> why? what if the redundancy is below the pool .. should we
>> warn that ZFS isn't directly involv
On Mon, 2006-12-18 at 16:05 +, Darren J Moffat wrote:
> 6) When modifying any file you want to bleach the old blocks in a way
> very similar to case 1 above.
I think this is the crux of the problem. If you fail to solve it, you
can't meaningfully say that all blocks which once contained parts
Hi Roch,
That sounds like a most excellent resolution to me. :-) I believe
Engenio devices support SBC-2. It seems to me making intelligent
decisions for end-users is generally a good policy.
Best Regards,
Jason
On 12/19/06, Roch - PAE <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams writes:
On 12/19/06, Brian Hechinger <[EMAIL PROTECTED]> wrote:
I'm trying to upgrade my desktop at work. It used to have a 10G
partition with Windows on it and the rest of the disk was for
Solaris. Windows pissed me off one too many times and got turned
into a 10G swap partition.
Because of the way
> Shouldn't there be a big warning when configuring a pool
> with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?
Because if the host controller port goes flaky an
I do see this note in the 3511 documentation: "Note - Do not use a Sun StorEdge 3511
SATA array to store single instances of data. It is more suitable for use in
configurations where the array has a backup or archival role."
My understanding of this particular scare-tactic wording (it's also in
Torrey McMahon wrote:
The first bug we'll get when adding a "ZFS is not going to be able to
fix data inconsistency problems" error message to every pool creation or
similar operation is going to be "Need a flag to turn off the warning
message..."
Richard pines for ditto blocks for data...
--
sidetracking below...
Matt Ingenthron wrote:
Mike Seda wrote:
Basically, is this a supported zfs configuration?
Can't see why not, but support or not is something only Sun support can
speak for, not this mailing list.
You say you lost access to the array though-- a full disk failure
shoul
> I thought this is what the T10 OSD spec was set up to address. We've already
> got device manufacturers beginning to design and code to the spec.
Precisely. The interface to block-based devices forces much of the knowledge
that the file system and application have about access patterns to be t
I'm trying to upgrade my desktop at work. It used to have a 10G
partition with Windows on it and the rest of the disk was for
Solaris. Windows pissed me off one too many times and got turned
into a 10G swap partition.
Because of the way this was all set up in the first place (poorly)
Solaris won'
Frank Hofmann wrote:
would not be a call to posix_fallocate() or ftruncate(), instead an
unlink(2) or a zfs destroy or zpool destroy. Also on hot-sparing in a
disk - if the old disk can still be written to in some way we should
do our best to bleach it.
Since VOP_*() requires a filesystem (wi
On Tue, 19 Dec 2006, Darren J Moffat wrote:
Frank Hofmann wrote:
On the technical side, I don't think a new VOP will be needed. This could
easily be done in VOP_SPACE together with a new per-fs property - bleach
new block when it's allocated (aka VOP_SPACE directly, or in a backend also
calle
Frank Hofmann wrote:
On the technical side, I don't think a new VOP will be needed. This
could easily be done in VOP_SPACE together with a new per-fs property -
bleach new block when it's allocated (aka VOP_SPACE directly, or in a
backend also called e.g. on allocating writes / filling holes),
On Dec 19, 2006, at 7:14 AM, Mike Seda wrote:
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID
5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave
4 of these slices to a Solaris 10 U2 machine and added each of
them to a concat (non
On Tue, Dec 19, 2006 at 04:37:36PM +, Darren J Moffat wrote:
> I think you are saying it should have INHERIT set to YES and EDIT set
> to NO. We don't currently have any properties like that but crypto will
> need this as well - for a very similar reason with clones.
What I mean is that if
On Dec 19, 2006, at 10:15, Torrey McMahon wrote:
Darren J Moffat wrote:
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy
On Tue, 19 Dec 2006, Jonathan Edwards wrote:
On Dec 18, 2006, at 11:54, Darren J Moffat wrote:
[EMAIL PROTECTED] wrote:
Rather than bleaching which doesn't always remove all stains, why can't
we use a word like "erasing" (which is hitherto unused for filesystem use
in Solaris, AFAIK)
and t
Nicolas Williams wrote:
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
In case it wasn't clear I am NOT proposing a UI like this:
$ zfs bleach ~/Documents/company-finance.odp
Instead ~/Documents or ~ would be a ZFS file system with a policy set
something like this:
# zfs s
On Tue, Dec 19, 2006 at 02:04:37PM +, Darren J Moffat wrote:
> In case it wasn't clear I am NOT proposing a UI like this:
>
> $ zfs bleach ~/Documents/company-finance.odp
>
> Instead ~/Documents or ~ would be a ZFS file system with a policy set
> something like this:
>
> # zfs set erase=fil
Torrey McMahon wrote:
Darren J Moffat wrote:
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should
On Dec 18, 2006, at 11:54, Darren J Moffat wrote:
[EMAIL PROTECTED] wrote:
Rather than bleaching which doesn't always remove all stains, why
can't
we use a word like "erasing" (which is hitherto unused for
filesystem use
in Solaris, AFAIK)
and this method doesn't remove all stains from t
Darren J Moffat wrote:
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should we
warn that ZFS isn't
On Dec 19, 2006, at 08:59, Darren J Moffat wrote:
Darren Reed wrote:
If/when ZFS supports this then it would be nice to also be able
to have Solaris bleach swap on ZFS when it shuts down or reboots.
Although it may be that this option needs to be put into how we
manage swap space and not speci
On 19 December, 2006 - Nathalie Poulet (IPSL) sent me these 1,4K bytes:
> Hello,
> After an export and an import, the size of the pool remained
> unchanged. As there was no data on this partition, I destroyed and
> recreated the pool. The size was then taken into account.
>
> The correct
Hello,
After an export and an import, the size of the pool remained
unchanged. As there was no data on this partition, I destroyed and
recreated the pool. The size was then taken into account.
The correct size is indicated by the command "zpool list". The command "df
-k" shows a size high
Jonathan Edwards wrote:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in red
In case it wasn't clear I am NOT proposing a UI like this:
$ zfs bleach ~/Documents/company-finance.odp
Instead ~/Documents or ~ would be a ZFS file system with a policy set
something like this:
# zfs set erase=file:zero
Or maybe more like this:
# zfs create -o erase=file -o erasemethod=zer
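To illustrate the proposed (not yet implemented) interface only: the property names come from the message above, while "zero", "tank/home" and "tank/home/docs" are hypothetical examples.
# zfs create -o erase=file -o erasemethod=zero tank/home
# zfs create tank/home/docs
Here tank/home/docs would inherit the erase policy from its parent, which is why the thread also touches on properties that are inheritable but not editable.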
Darren Reed wrote:
If/when ZFS supports this then it would be nice to also be able
to have Solaris bleach swap on ZFS when it shuts down or reboots.
Although it may be that this option needs to be put into how we
manage swap space and not specifically something for ZFS.
Doing this to swap space
Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 05:51:14 PM -0600 Nicolas Williams
<[EMAIL PROTECTED]> wrote:
On Mon, Dec 18, 2006 at 06:46:09PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 05:16:28 PM -0600 Nicolas Williams
<[EMAIL PROTECTED]> wrote:
> Or an iovec
Nicolas Williams wrote:
On Mon, Dec 18, 2006 at 05:44:08PM -0500, Jeffrey Hutzelman wrote:
On Monday, December 18, 2006 11:32:37 AM -0600 Nicolas Williams
<[EMAIL PROTECTED]> wrote:
I'd say go for both, (a) and (b). Of course, (b) may not be easy to
implement.
Another option would be to wa
Jonathan Edwards writes:
> On Dec 19, 2006, at 07:17, Roch - PAE wrote:
>
> >
> > Shouldn't there be a big warning when configuring a pool
> > with no redundancy and/or should that not require a -f flag ?
>
> why? what if the redundancy is below the pool .. should we
> warn that ZFS isn
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
why? what if the redundancy is below the pool .. should we
warn that ZFS isn't directly involved in redundancy decisions?
---
.
On Dec 18, 2006, at 17:52, Richard Elling wrote:
In general, the closer to the user you can make policy decisions,
the better
decisions you can make. The fact that we've had 10 years of RAID
arrays
acting like dumb block devices doesn't mean that will continue for
the next
10 years :-) I
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these slices to a
Solaris 10 U2 machine and added each of them to a concat (non-raid) zpool as listed below:
This is certain
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
-r
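For context, a plain striped pool can currently be created with no warning and no -f flag at all; a minimal sketch with hypothetical device names:
$ zpool create tank c1t0d0 c1t1d0    # two-device dynamic stripe, no redundancy, accepted silently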
Al Hopper writes:
> On Sun, 17 Dec 2006, Ricardo Correia wrote:
>
> > On Friday 15 December 2006 20:02, Dave Burleson wrote:
> > > Does anyone have a document that descr
Jason J. W. Williams writes:
> Hi Jeremy,
>
> It would be nice if you could tell ZFS to turn off fsync() for ZIL
> writes on a per-zpool basis. That being said, I'm not sure there's a
> consensus on that...and I'm sure not smart enough to be a ZFS
> contributor. :-)
>
> The behavior is