fyleow wrote:
I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris
2009.06 system. I did a zpool export tank and the process has been running for
3 hours now, taking up 100% CPU usage.
When I do a zfs list tank it's still shown as mounted. What's going on here?
Should it really be taking this long?
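A quick sanity check from a second terminal (an exported pool should disappear
from both of these listings once the export completes; this is a diagnostic,
not a fix):
# zpool list tank
# zfs list tank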
On Mon, 27 Jul 2009 15:17:52 -0700 (PDT)
Tim Cook wrote:
> Bump? I watched the stream for several hours and never heard a word
> about dedupe. The blogs also all seem to be completely bare of mention.
> What's the deal?
ZFS deduplication was most definitely talked about in both
Bill's and Jeff's talks.
On 28/07/2009, at 9:22 AM, Robert Thurlow wrote:
I can't help with your ZFS issue, but to get a reasonable crash
dump in circumstances like these, you should be able to do
"savecore -L" on OpenSolaris.
That would be well and good if I could get a login - due to the rpool
being unresponsive, I can't get a shell at all.
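For reference, the usual sequence when you do have a shell (assuming the
standard dump zvol; the device path may differ on your system):
# dumpadm -d /dev/zvol/dsk/rpool/dump   (direct crash dumps at the dump zvol)
# savecore -L                           (capture a live dump of the running system)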
Brian,
This is a chunk of a script I wrote; to make it go to another machine,
change the send/receive to something like the other example below.
It creates a copy of a zfs filesystem and mounts it on the local machine
(the "do_command" just made my demo self-running).
Scrubbing is easy: just a cron entry, as sketched below.
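Since the script itself got cut from the archive, here is a minimal sketch of
the same idea (pool, filesystem, and snapshot names are placeholders, not the
original script):
#!/bin/sh
# take a timestamped snapshot and replicate it to a local backup filesystem
SNAP=tank/home@backup-`date +%Y%m%d`
zfs snapshot $SNAP
zfs send $SNAP | zfs receive -F backup/home
# to send to another machine instead, pipe through ssh:
# zfs send $SNAP | ssh otherhost zfs receive -F backup/home
And the cron entry for a weekly scrub:
0 3 * * 0 /usr/sbin/zpool scrub tank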
James Lever wrote:
I had help trying to create a crash dump, but nothing we tried would
cause the system to panic. 0>eip;:c;:c and other weird magic I don't
fully grok.
I can't help with your ZFS issue, but to get a reasonable crash
dump in circumstances like these, you should be able to do
"savecore -L" on OpenSolaris.
David Magda wrote:
> This is also (theoretically) why a drive purchased from Sun is more
> expensive than a drive purchased from your neighbourhood computer
> shop: Sun (and presumably other manufacturers) takes the time and
> effort to test things to make sure that when a drive says "I've
> written the data", it actually has.
Hi Laurent,
I was able to reproduce it on a Solaris 10 5/09 system.
The problem is fixed in the current Nevada bits and also in
the upcoming Solaris 10 release.
The bug fix that integrated this change might be this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6328632
zpool o
On 28/07/2009, at 6:44 AM, dick hoogendijk wrote:
Are there any known issues with zfs in OpenSolaris B118?
I run my pools formatted like the original release 2009.06 (I want to
be able to go back to it ;-). I'm a bit scared after reading about
serious issues in B119 (will be skipped, I heard).
Tim,
If you could send me your email address privately, the
OpenSolaris list folks have a better chance of resolving
this problem.
I promise I won't sell it to anyone. :-)
Cindy
On 07/27/09 16:25, cindy.swearin...@sun.com wrote:
Tim,
I sent your subscription problem to the OpenSolaris help list.
Tim,
I sent your subscription problem to the OpenSolaris help list.
We should hear back soon.
Cindy
On 07/27/09 16:15, Tim Cook wrote:
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can actually do something?
Bump? I watched the stream for several hours and never heard a word about
dedupe. The blogs also all seem to be completely bare of mention. What's the
deal?
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can actually do something?
On 27-Jul-09, at 15:14 , David Magda wrote:
Also, I think it may have already been posted, but I haven't found the
option to disable VirtualBox' disk cache. Anyone have the incantation
handy?
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661&start=0
It tells VB not to ignore the sync/flush commands.
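For the archives, the setting that thread describes looks like this (the VM
name and LUN number depend on your setup; use "ahci" instead of "piix3ide"
for a SATA controller):
VBoxManage setextradata "MyVM" \
  "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0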
On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote:
On Sun, Jul 26 at 1:47, David Magda wrote:
On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
Frank Middleton wrote:
Doesn't this mean /any/ hardware might have this problem, albeit
with much lower probability?
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk state is always consistent.
Thanks - that answers my question! :))
Hi Dean,
could you provide more info about that?
Are you able to send me a bug description for a better understanding?
Is there a patch available, or do I have to use a previous patch of sam-qfs?
Thanks in advance...
Tobias
Dean Roehrich wrote:
On Mon, Jul 27, 2009 at 02:14:24PM +0200, Tobias Exner wrote:
On 07/27/09 01:27 PM, Eric D. Mudama wrote:
Everyone on this list seems to blame lying hardware for ignoring
commands, but disks are relatively mature and I can't believe that
major OEMs would qualify disks or other hardware that willingly ignore
commands.
You are absolutely correct, but if th
On Mon, July 27, 2009 13:59, Adam Sherman wrote:
> Also, I think it may have already been posted, but I haven't found the
> option to disable VirtualBox' disk cache. Anyone have the incantation
> handy?
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661&start=0
It tells VB not to ignore the sync/flush commands.
dick hoogendijk <...@nagual.nl> writes:
>
> Then why is it that most AMD MoBo's in the shops clearly state that ECC
> RAM is not supported on the MoBo?
To restate what Erik explained: *all* AMD CPUs support ECC RAM; however, poorly
written motherboard specs often make the mistake of confusing "non-EC
On Mon, Jul 27, 2009 at 12:54 PM, Chris Ridd wrote:
>
> On 27 Jul 2009, at 18:49, Thomas Burgess wrote:
>
>>
>> i was under the impression it was virtualbox and its default setting that
>> ignored the command, not the hard drive
>
> Do other virtualization products (eg VMware, Parallels, Virtual PC)
On 27-Jul-09, at 13:54 , Chris Ridd wrote:
i was under the impression it was virtualbox and its default
setting that ignored the command, not the hard drive
Do other virtualization products (eg VMware, Parallels, Virtual PC)
have the same default behaviour as VirtualBox?
I've a suspicion
On 27 Jul 2009, at 18:49, Thomas Burgess wrote:
i was under the impression it was virtualbox and its default
setting that ignored the command, not the hard drive
Do other virtualization products (eg VMware, Parallels, Virtual PC)
have the same default behaviour as VirtualBox?
I've a suspicion
i was under the impression it was virtualbox and its default setting that
ignored the command, not the hard drive
On Mon, Jul 27, 2009 at 1:27 PM, Eric D. Mudama
wrote:
> On Sun, Jul 26 at 1:47, David Magda wrote:
>
>>
>> On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
>>
>> Frank Middleton wrote:
On Sun, Jul 26 at 1:47, David Magda wrote:
On Jul 25, 2009, at 16:30, Carson Gaspar wrote:
Frank Middleton wrote:
Doesn't this mean /any/ hardware might have this problem, albeit
with much lower probability?
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk state is always consistent.
On 27-Jul-09, at 5:46 AM, erik.ableson wrote:
The zfs send command generates a differential file between the two
selected snapshots so you can send that to anything you'd like.
The catch of course is that then you have a collection of files on
your Linux box that are pretty much useless since you can't mount
them or read the contents.
Heh, I'd kill for failures to be handled in 2 or 3 seconds. I saw the failure
of a mirrored iSCSI disk lock the entire pool for 3 minutes. That has been
addressed now, but device hangs have the potential to be *very* disruptive.
On Mon, 27 Jul 2009, Marcelo Leal wrote:
Well, I'm trying to understand this workload, but what I'm seeing to
reproduce this is just flooding the SSD with writes, and the disks show
no activity. I'm testing with aggr (two links), and for one or two
seconds there is no read activity (output from server).
> That's only one element of it Bob. ZFS also needs
> devices to fail quickly and in a predictable manner.
>
> A consumer grade hard disk could lock up your entire
> pool as it fails. The kit Sun supply is more likely
> to fail in a manner ZFS can cope with.
I agree 100%.
Hardware, firmware,
On Mon, 27 Jul 2009 08:26:06 -0600
Mark Shellenbaum wrote:
> I would suggest you open a bug on this.
> http://defect.opensolaris.org/bz/
Done. Bugzilla – Bug 10294 Submitted
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 05/09 | OpenSolaris 2010.02 B118
On Mon, Jul 27, 2009 at 02:14:24PM +0200, Tobias Exner wrote:
> Hi list,
>
> I've done some tests and run into a very strange situation..
>
>
> I created a zvol using "zfs create -V" and initialized a sam-filesystem
> on this zvol.
> After that I restored some testdata using a dump from another system.
dick hoogendijk wrote:
# zfs create store/snaps
# zfs set sharenfs='rw=arwen,root=arwen' store/snaps
# share
-...@store/snaps /store/snaps sec=sys,rw=arwen,root=arwen ""
arwen# zfs send -Rv rp...@0906 > /net/westmark/store/snaps/rpool.0906
zsh: permission denied: /net/westmark/store/sna
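A quick way to see whether root squashing is the culprit (same hostnames as
above; this is a diagnostic sketch, not a fix):
arwen# touch /net/westmark/store/snaps/testfile
If that also fails, the root=arwen entry isn't matching the client name the
server sees; check the share output on westmark and try the fully qualified
hostname in the root= list.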
Hello,
Well, I'm trying to understand this workload, but what I'm seeing to reproduce
this is just flooding the SSD with writes, and the disks show no activity. I'm
testing with aggr (two links), and for one or two seconds there is no read
activity (output from server).
Right now I'm suspecting
Hi list,
I've done some tests and run into a very strange situation..
I created a zvol using "zfs create -V" and initialized a sam-filesystem
on this zvol.
After that I restored some testdata using a dump from another system.
So far so good.
After some big troubles I found out that releasing
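For context, the setup was presumably along these lines (size and names are
placeholders; the SAM-QFS family set is assumed to be defined in
/etc/opt/SUNWsamfs/mcf and pointed at the zvol):
# zfs create -V 50g tank/samvol
# sammkfs samfs1
# mount -F samfs samfs1 /sam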
Oh well, the whole system seems to be deadlocked.
Nice. A little too keen on keeping data safe :-P
Yours
Markus Kovero
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Markus Kovero
Sent: 27 July 2009 13:39
To: zfs-discuss@opensolaris.org
Subject:
Hi, how come zfs destroy is so slow? E.g., destroying a 6TB dataset renders
zfs admin commands useless for the time being, in this case for hours.
(Running osol 111b with latest patches.)
Yours
Markus Kovero
The zfs send command generates a differential file between the two
selected snapshots so you can send that to anything you'd like. The
catch of course is that then you have a collection of files on your
Linux box that are pretty much useless since you can't mount them or
read the contents.
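For example (snapshot names are made up; the -i form is what produces the
differential stream described above):
# zfs send -i tank/data@monday tank/data@tuesday > /mnt/linuxbox/data.incr
and later, back on a host with ZFS:
# zfs receive tank/data < /mnt/linuxbox/data.incr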
Thank you, I'll definitely implement a script to scrub the system, and have
the system email me if there is a problem.
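Something like this from cron would do it (pool name and address are
placeholders; zpool status -x prints "all pools are healthy" when nothing is
wrong):
#!/bin/sh
# run after the weekly scrub has had time to finish;
# a separate cron entry starts the scrub itself:
#   0 3 * * 0 /usr/sbin/zpool scrub tank
STATUS=`/usr/sbin/zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
  /usr/sbin/zpool status | mailx -s "zpool problem on `hostname`" admin@example.com
fi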
Yes, I've thought about some off-site strategy. My parents are used to loading
their data onto an external hard drive, however this always struck me as a bad
strategy. A tape backup system is unlikely due to the cost, however I could
get them to continue also loading the data onto an external hard drive.
On Mon, Jul 27, 2009 at 2:51 PM, Axelle Apvrille wrote:
> Hi,
> I've already sent a few posts around this issue, but haven't quite got the
> answer - so I'll try to clarify my question :)
>
> Since I have upgraded from 2008.11 to 2009.06 a new BE has been created. On
> ZFS, that corresponds to two file systems, both (strangely) mounted on /.
Hi,
I've already sent a few posts around this issue, but haven't quite got the
answer - so I'll try to clarify my question :)
Since I have upgraded from 2008.11 to 2009.06 a new BE has been created. On
ZFS, that corresponds to two file systems, both (strangely) mounted on /. The
old BE corresponds to the other one.
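For what it's worth, you can see how the BEs map to datasets with (output
varies by build):
$ beadm list
$ zfs list -r rpool/ROOT
Both BE root datasets carry mountpoint=/, but they are canmount=noauto, so
only the active BE is actually mounted at /; the other gets mounted only when
you "beadm mount" it. That's why both can show / without conflicting.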