Ian Collins wrote:
Richard Elling wrote:
Buried in the announcements last week from Sun is the Sun Flash Module.
http://www.sun.com/storage/flash/module.jsp
I wanted to bring this up on this forum because it represents an interesting
way to add SSD technology to a system des
Bob Friesenhahn wrote:
On Fri, 17 Apr 2009, Richard Elling wrote:
Brief specifications:
SATA interface
Thoughts?
SATA is so "yesterday". It represents "in the box" thinking. Sun
engineering should still be capable of thinking "outside the box".
Considerable optimizations/improvements are possible by eradicating
On Sat, 18 Apr 2009, Ian Collins wrote:
It does represent the next big thing in storage, but it risks languishing in a
corner unless actively promoted in an easy-to-use form. Or until a company
with more aggressive marketing picks up the idea and grabs the market.
Violin (http://violin-memor
On 04/17/09 21:19, Kyle McDonald wrote:
> One reason is that you're not timing how long it takes for the destroys
> to complete. You're only timing how long it takes to start all the jobs
> in the background.
Right, I'm sorry, my example was an oversimplification of a script I made.
That script
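To make Kyle's point concrete, here is a minimal sketch (not the original script) that times the destroys to completion: dropping the inner subshell keeps the jobs in the shell's job table, and a trailing wait makes 'time' cover the work itself rather than just job startup:
# time ( for i in `zfs list | awk '/blub2\// {print $1}'` ;\
  do zfs destroy $i & done ; wait )
Without the wait, only the launching of the background jobs is measured, which is Kyle's point about the 8-second figure elsewhere in this thread.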
Richard Elling wrote:
Buried in the announcements last week from Sun is the Sun Flash Module.
http://www.sun.com/storage/flash/module.jsp
I wanted to bring this up on this forum because it represents an interesting
way to add SSD technology to a system design. The new Sun Blade X6275
has slots for these SSDs and I
Hello Will,
Monday, April 13, 2009, 6:44:47 PM, you wrote:
WM> On Mon, Apr 13, 2009 at 07:03, Robert Milkowski wrote:
>> Hello Daniel,
>>
>> Thursday, April 9, 2009, 3:35:07 PM, you wrote:
>>
>> DR> Jonathan wrote:
OpenSolaris Forums wrote:
> if you have a snapshot of your files and r
Joep Vesseur wrote:
All,
I was wondering why "zfs destroy -r" is so excruciatingly slow compared to
parallel destroys.
<SNIP>
while a little handiwork with
# time for i in `zfs list | awk '/blub2\// {print $1}'` ;\
do ( zfs destroy $i & ) ; done
yields
real    0m8.191s
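As an aside, the awk pass (and its regex escaping) can be avoided with zfs list's scripting options; a sketch of the same loop, assuming the parent dataset is pool/blub2 (tail +2 drops the parent from the recursive listing; GNU tail spells this tail -n +2):
# time for i in `zfs list -H -o name -r pool/blub2 | tail +2` ;\
  do ( zfs destroy $i & ) ; done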
On Fri, Apr 17, 2009 at 1:17 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Fri, 17 Apr 2009, Dave wrote:
>
>>
>> Not to nitpick, but I think most people would prefer the singular 'data'
>> when referring to the storage of data. The plural 'data' in this case is
>> very awkward.
>
On Fri, 17 Apr 2009, Dave wrote:
Not to nitpick, but I think most people would prefer the singular 'data' when
referring to the storage of data. The plural 'data' in this case is very
awkward.
Assuming that what is stored can be classified as data!
http://en.wikipedia.org/wiki/Data
Why d
Carson Gaspar wrote:
Tim wrote (although it wasn't his error originally):
Unless you want to have a different response for each of the repair
methods, I'd just drop that part:
status: One or more devices has experienced an error. The error has been
automatically corrected by zfs.
Data on the pool is
All,
I was wondering why "zfs destroy -r" is so excruciatingly slow compared to
parallel destroys.
On my x4500, for example, after having created 1000 filesystems named
pool/blub2/
[...]
pool/blub2/0999
and keeping them empty, a subsequent destroy with
# time zfs destroy -r poo
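For anyone who wants to reproduce the serial-versus-parallel comparison, a sketch of the setup step, assuming a GNU-style seq is available for the zero-padded names:
# zfs create pool/blub2
# for i in `seq -f %04g 0 999` ; do zfs create pool/blub2/$i ; done
# time zfs destroy -r pool/blub2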
Are you assuming that bad disk blocks are returned to the free pool?
Hrm. I was assuming that zfs was unaware of the source of the error, and
therefore unable to avoid running into it again. If it was a bad sector, and the
disk knows about it, then you probably wouldn't see it again. But if th
On Fri, Apr 17, 2009 at 12:25 PM, Richard Elling
wrote:
> Drew Balfour wrote:
>
>> Now I wonder where that error came from. It was just a single checksum
>> error. It couldn't go away with an earlier scrub, and seemingly left no
>> traces of badness on the drive. Something serious? At leas
On 17-Apr-09, at 11:49 AM, Frank Middleton wrote:
... One might argue that a machine this flaky should
be retired, but it is actually working quite well,
If it has bad memory, you won't get much useful work done on it until
the memory is replaced - unless you want to risk your data with
r
Drew Balfour wrote:
Now I wonder where that error came from. It was just a single
checksum error. It couldn't go away with an earlier scrub, and
seemingly left no traces of badness on the drive. Something
serious? At least it looks a tad contradictory: "Applications are
unaffected.", it is unr
On Fri, Apr 17, 2009 at 6:15 AM, erik.ableson wrote:
> Hi there,
>
> I'm working on a new OS 2008.11 setup here and running into a few issues
> with the nfs integration. Notably, it appears that subnet values attributed
> to sharenfs are ignored and give back a permission denied for all
> conne
>I'd like to submit an RFE suggesting that data + checksum be copied for
>mirrored writes, but I won't waste anyone's time doing so unless you
>think there is a point. One might argue that a machine this flaky should
>be retired, but it is actually working quite well, and perhaps represents
>not e
On Fri, 17 Apr 2009, Mark J Musante wrote:
The dependency is based on the names.
I should clarify what I mean by that. There are actually two dependencies
here: one is based on dataset names, and one is based on snapshots and
clones.
If there are two datasets, pool/foo and pool/foo/bar, t
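The name-based dependency is easy to demonstrate on its own; a minimal sketch with illustrative dataset names (error text approximate):
# zfs create pool/foo
# zfs create pool/foo/bar
# zfs destroy pool/foo
cannot destroy 'pool/foo': filesystem has children
# zfs destroy -r pool/foo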
On 04/16/09 04:39, casper@sun.com wrote:
You really believe that the copy was copied and checksummed twice before
writing to the disk? Of course not. Copying the data doesn't help;
both pieces of memory need to be good. It's checksummed once.
If OpenSolaris succeeds in being significant
The dependency is based on the names. Try renaming
testpool/testfs2/clone1 out of the hierarchy:
zfs rename testpool/testfs2/clone1 testpool/foo
Then it should be possible to destroy testpool/testfs2.
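Put together, the suggested sequence would look like this (a sketch; whether the destroy then succeeds also depends on where clone1's origin snapshot lives):
# zfs rename testpool/testfs2/clone1 testpool/foo
# zfs destroy -r testpool/testfs2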
On Fri, 17 Apr 2009, Grant Lowe wrote:
I was wondering if there is a solution for this
I was wondering if there is a solution for this. I've been able to replicate a
similar problem on a different server. Basically I'm still unable to use zfs
destroy on a filesystem, that was a parent filesystem and is now a child
filesystem after a promotion.
bash-3.00# zpool history
History f
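Grant's situation can be reproduced in a few commands; a hypothetical sketch (names invented). After the promote, the origin snapshot migrates to the child, so the old parent is both a clone of its own child and still its name-parent, and a plain destroy fails:
# zfs create pool/parent
# zfs snapshot pool/parent@s1
# zfs clone pool/parent@s1 pool/parent/child
# zfs promote pool/parent/child
# zfs destroy pool/parent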
On Fri, Apr 17, 2009 at 12:29:23PM +0100, Andrew Robert Nicols wrote:
> I'm still seeing this problem frequently and the suggestions Viktor made
> below haven't helped (exclude: drv/ohci in /etc/system).
>
> I've got a selection of core dumps for analysis if anyone can suggest what
> analysis to d
I'm still seeing this problem frequently and the suggestions Viktor made
below haven't helped (exclude: drv/ohci in /etc/system).
I've got a selection of core dumps for analysis if anyone can suggest what
analysis to do with them.
I've also replicated this on a second identical X4500 which was ru
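On the question of what analysis to do: if these are kernel crash dumps saved by savecore, a common first pass is to load them into mdb and pull the panic stack; a sketch assuming the default dump file names:
# mdb -k unix.0 vmcore.0
> ::status
> ::stack
> ::msgbuf
::status prints the panic string and dump summary, ::stack the panicking thread's stack trace, and ::msgbuf the last console messages before the panic.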
Hi there,
I'm working on a new OS 2008.11 setup here and running into a few
issues with the nfs integration. Notably, it appears that subnet
values attributed to sharenfs are ignored and give back a permission
denied for all connection attempts. I have another environment where
permissi
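For comparison, the network form of an NFS access list uses an @ prefix in share_nfs option syntax; a sketch with a placeholder dataset and subnet (prefix-length handling may vary by release, which could be the root of the behaviour described above):
# zfs set sharenfs='rw=@192.168.1.0/24' tank/export
# zfs get sharenfs tank/export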