Just a reminder...
Hope to see you there.
-Jennifer
To: Developers and Students
You are invited to participate in the first OpenSolaris Security Summit
OpenSolaris Security Summit
Tuesday, November 3rd, 2009
Baltimore Marriott Waterfront
700 Aliceanna Street, Baltimore, MD
On Mon, Oct 05, 2009 at 02:14:24PM -0700, Mark Horstman wrote:
> I have a snapshot that I'd like to destroy:
If you have a filesystem and a clone of that filesystem, a snapshot
always connects them. You can destroy the snapshot only if there are no
clones.
--
Darren
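If clones are what block the destroy, a sketch of the usual way out (the boot-environment name newBE below is hypothetical) is to promote the clone, which migrates the snapshot over to it:

  # zfs promote rpool/ROOT/newBE                 # the snapshot now belongs to the clone
  # zfs destroy rpool/ROOT/newBE@200909160720    # it no longer pins the old BE

Alternatively, zfs destroy -R takes the snapshot and every dependent clone down together, which is much more destructive.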
Hi Osvald,
Can you comment on how the disks shrank or how the labeling on these
disks changed?
We would like to track the issues that cause the hardware underneath
a live pool to change, so that we can figure out how to prevent pool
failures in the future.
Thanks,
Cindy
On 10/03/09 09:46, Osvald wrote:
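(A sketch of commands that would capture the state Cindy asks about; device and pool names here are hypothetical:

  # prtvtoc /dev/rdsk/c1t0d0s0      # the label and partition table as the disk reports it now
  # zdb -l /dev/rdsk/c1t0d0s0       # the ZFS labels written on the device
  # zpool status -v tank            # the devices the pool believes it has

Comparing the prtvtoc geometry against what it was at pool creation would show whether the disk "shrank" or was relabeled.)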
On 5-Oct-09, at 3:32 PM, Miles Nordin wrote:
"bm" == Brandon Mercer writes:
I'm now starting to feel that I understand this issue,
and I didn't for quite a while. And that I understand the
risks better, and have a clearer idea of what the possible
fixes are. And I didn't before.
haha, yes, I think I can
Replying to a few folks in a digest format, because I'm lazy and don't
have that much to say.
On Wed, Sep 30, 2009 at 5:53 PM, Tim Cook wrote:
> What are you hoping to accomplish? You're still going to need a drive's
> worth of free space, and if you're so performance strapped that one drive
> ma
Sorry. My environment:
# uname -a
SunOS xx 5.10 Generic_141414-10 sun4v sparc SUNW,SPARC-Enterprise-T5220
I have a snapshot that I'd like to destroy:
# zfs list rpool/ROOT/be200909160...@200909160720
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/be200909160...@200909160720  1.88G      -  4.18G  -
But when I try it warns me of dependent clones:
# zfs destroy rpool/ROOT/be200909160...@200909160720
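(A sketch of how to find the clones that warning refers to: every clone records its source snapshot in its origin property, so listing that and matching on the snapshot name works. Layout beyond what is shown above is hypothetical:

  # zfs list -H -o name,origin -r rpool | grep @200909160720

Whatever datasets that prints must be destroyed or promoted before the snapshot can go.)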
Richard,
The sub-threads being woven with a couple of other people are very important,
though they are not my immediate issue. I really don't think you need us
debating with *you* about this - I think you could argue our point also. What
we need to get across is a perspective.
I am pretty su
> "vl" == Victor Latushkin writes:
vl> It changes the setting of checksum=on to mean "fletcher4"
oh, good. so it is only the ZIL that's unfixed? At least that fix
could come from a simple upgrade, if it ever gets fixed.
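A sketch of checking and pinning the checksum on a build where "on" still means fletcher2 (dataset name is hypothetical):

  # zfs get checksum rpool/export              # "on" resolves to the build's default
  # zfs set checksum=fletcher4 rpool/export    # pin fletcher4 explicitly

The property only governs blocks written after the change; existing blocks keep the checksum they were written with.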
On Sun, Oct 4, 2009 at 3:23 PM, Trevor Pretty wrote:
> I think you've taken volume snapshots. I believe you need to take file
> system snapshots and make each users/username its own ZFS file system.
> Lets play..
Automatic .snapshot directories are a feature of NetApp filers and are
pretty nice at times
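The rough ZFS equivalent (pool and user names hypothetical): one filesystem per user, recursive snapshots, and the built-in .zfs/snapshot directory made visible:

  # zfs create tank/home/alice             # one filesystem per user
  # zfs snapshot -r tank/home@monday       # snapshot every home at once
  # zfs set snapdir=visible tank/home      # users can browse ~/.zfs/snapshot/monday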
On 05.10.09 23:07, Miles Nordin wrote:
"re" == Richard Elling writes:
re> As I said before, if the checksum matches, then the data is
re> checked for sequence number = previous + 1, the blk_birth ==
re> 0, and the size is correct. Since this data lives inside the
re> block, it is unlikely that a collision would a
Victor Latushkin wrote:
Liam Slusser wrote:
Long story short, my cat jumped on my server at my house crashing two
drives at the same time. It was a 7 drive raidz (next time ill do
raidz2).
Long story short - we've been able to get access to data in the pool.
This involved finding better old
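(For the curious, a hedged sketch of the sort of inspection such a recovery starts from; device and pool names are hypothetical:

  # zdb -l /dev/dsk/c2t0d0s0      # dump the ZFS labels on one device
  # zdb -u tank                   # print the pool's currently active uberblock

Getting at older, still-consistent state from there is hands-on zdb work rather than anything the stock commands automate.)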
Richard,
it is the same controller used inside Sun's thumpers; it could be a problem in
my unit (which is a couple of years old now), though.
Is there something I can do to find out if I owe you that steak? :)
Thanks.
Maurilio.
On Oct 4, 2009, at 11:52 PM, Maurilio Longo wrote:
Richard,
thanks for the explanation.
So can we say that the problem is in the disks losing a command now
and then under stress?
It may be the disks or the HBA. I'll bet a steak dinner it is the HBA.
-- richard
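A cheap way to gather evidence for that bet (a sketch, not a diagnosis):

  # iostat -En | grep -i error     # soft/hard/transport error counts per device
  # fmdump -eV | head -40          # recent FMA error telemetry, if any

Errors piling up on a single disk point at that disk; errors spread across everything behind one controller point at the HBA.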
I am one of the much-blessed university users who wishes to provide home
directories and web space to thousands of users and is being bitten by the
abysmal scaling behaviour of ZFS: the overhead of creating thousands of ZFS
file systems in a pool can take days to complete. Sharing or unsharing
them c
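For scale, the pattern in question is roughly this (pool name and user list are hypothetical):

  # for u in $(cat users.txt); do
  >   zfs create -o sharenfs=on tank/home/$u
  > done

Every create is its own administrative operation and every sharenfs=on is a separate share call, so tens of thousands of them add up fast.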
Question (for Richard E): Is there a write-up on the ZFS broken fletcher fix?
Is the default checksum for new pool creation changed in U8?
Is the default checksum for new pool creation changed in OpenSolaris or
SXCE (which versions)?
Is there a case open to allow the user to select the checksum to
On Sat, October 3, 2009 20:50, Jeff Haferman wrote:
> And why does an rsync take so much
> longer on these directories when directories that contain hundreds of
> gigabytes transfer much faster?
The rsync protocol has to exchange information about each file between the
client and server, as part of the p
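One way to watch that overhead directly (paths hypothetical):

  # rsync -a --stats /tank/manydirs/ backuphost:/backup/

The --stats summary breaks out the number of files and the file-list generation and transfer times separately from the data bytes; on trees of many small files, those dominate.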
On Sat, October 3, 2009 17:18, Ray Clark wrote:
> Thank you all for your help, not to snub anyone, but Darren, Richard, and
> Cindy especially come to mind. Thanks for sparring with me until we
> understood each other.
I'd like to echo this (and extend the thanks to include Ray). I'm now
starting to feel that I understand this issue, and I didn't for quite a
while. And that I understand the risks better, and have a clearer idea of
what the possible fixes are. And I didn't before.
On Mon, Oct 5, 2009 at 10:27 AM, David Dyer-Bennet wrote:
>
> On Sat, October 3, 2009 17:18, Ray Clark wrote:
>
>> Thank you all for your help, not to snub anyone, but Darren, Richard, and
>> Cindy especially come to mind. Thanks for sparring with me until we
>> understood each other.
>
> I'd like to echo this (and extend the thanks to include Ray).