Relling sez:
> In general, I agree. However, the data does not
> necessarily support
> this as a solution and there is a point of
> diminishing return.
I sent a reply via e-mail to Richard as well; I basically said something along
these lines...
You missed my point though. It's nothing at all t
Except the article was redacted. The reason the battery life
decreased was because the throughput increased so much that it drove
up the CPU usage, thus bringing down battery life. It just goes to
show how SEVERELY I/O bound we currently are. The flash itself was
using LESS power.
--tim
On Fri, 18 Jul 2008, Al Hopper wrote:
> If you look at the overall I/O throughput in Mb/Sec over the years and
> compare it with the advances in server memory size or SPEC-int rates
> over the years, the I/O throughput curve looks *almost* flat - as the
> delta between the other two curves continu
On Fri, Jul 18, 2008 at 3:11 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Here is an interesting view on flash drive developments...
> http://www.eetimes.com/showArticle.jhtml?articleID=209100803&cid=NL_eet
> -- richard
Thanks Richard. I saw that announcement on a couple of other
tech sites du
John Kotches wrote:
> Oh, they should also fix Thumper and Thumper2 to have 2 slots for mirrored OS
> away from the big honking storage.
>
In general, I agree. However, the data does not necessarily support
this as a solution and there is a point of diminishing return.
Years ago, disk MTBFs
Here is an interesting view on flash drive developments...
http://www.eetimes.com/showArticle.jhtml?articleID=209100803&cid=NL_eet
-- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I've been living with this error for almost 4 months and probably have a
record number of checksum errors:
core# zpool status -xv
  pool: box5
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in
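For what it's worth, the usual sequence for this situation (sketched here with the pool name `box5` from the output above) is to restore the affected files from backup, clear the error counters, and re-scrub to confirm; this resets the counters but does not address whatever hardware is corrupting the data:

```shell
# After restoring the corrupted files listed by `zpool status -xv`
# from backup, reset the pool's error counters:
zpool clear box5

# Re-read and verify every block to confirm the errors are gone:
zpool scrub box5

# Check the result once the scrub completes:
zpool status -v box5
```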
Why can't I do this in a ZFS directory?
(I was able to set the group with no problems.)
# chown auser *
chown: DIR1: cannot change owner [Invalid argument]
chown: DIR2: cannot change owner [Invalid argument]
Debugging info:
# id -a
uid=0(root) gid=0(root)
groups=0(root),1(other),2(bin),3(sys)
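A common cause of EINVAL from chown on Solaris is that the target user name cannot be resolved to a uid by the name service, so one quick check (assuming the user name `auser` from the example above) is:

```shell
# Verify the account actually resolves through nsswitch
# (files, NIS, LDAP, ...):
getent passwd auser || echo "auser is not known to the name service"

# If name lookup is the problem, chown by numeric uid still works;
# 1001 below is a placeholder uid, not one from the thread:
chown 1001 DIR1 DIR2
```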
Hi Tim,
No, I'm still chasing the supplier who lost their stock of cards. So far
they're not getting back to me, which isn't good news. There were a couple of
other people on here who had these cards and needed the driver; I haven't heard
whether anybody has tested it yet, however.
If anybody
Andrew sez:
...
> RAIDZ arrays are not supported as root pools (at the
> moment).
>
> Cheers
>
> Andrew.
I appreciate that this is a substantial piece of work. Just off the top of my
head, each member of the RAIDZ has to carry the same boot block information,
and then as you bring the RAIDZ up you h
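On the boot-block point: for mirrored root pools today this step is already manual — ZFS does not copy boot blocks when you attach a disk. A sketch of the per-disk step, with placeholder device names:

```shell
# x86: write GRUB stage1/stage2 to the newly attached root-pool disk
# (c0t1d0s0 is a placeholder device name):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

# SPARC equivalent, using the ZFS bootblk:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0
```

A RAIDZ root would presumably need the same treatment on every member, which is part of what makes it nontrivial.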
On Thu, Jul 10, 2008 at 5:51 AM, Ross <[EMAIL PROTECTED]> wrote:
> Hey everybody,
>
> Well, my pestering paid off. I have a Solaris driver which you're welcome
> to download, but please be aware that it comes with NO SUPPORT WHATSOEVER.
>
> I'm very grateful to the chap who provided this driver, p
> > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > errors:
>
> Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> on a system that is running post snv_94 bits: It also found checksum errors
...
> OTOH, trying to verify checksums with zdb -c did
> I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits: It also found checksum errors
# zpool status files
  pool: files
 state: DEGRADED
status: One
Evert Meulie wrote:
> Hi all!
>
> I'm planning a system which will be hosting various VMs. The host system
> will be kept small.
>
> I was thinking of using ZFS on all partitions.
> Does anyone see any reason I should NOT do the following?
>
IMHO, 15 GBytes seems a little bit small for ke
Hi all!
I'm planning a system which will be hosting various VMs. The host system will
be kept small.
I was thinking of using ZFS on all partitions.
Does anyone see any reason I should NOT do the following?
DISK1
partition 1: 15GB - host OS (ZFS RAID1)
partition 2: remainder - ZFS R
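If it helps, a sketch of how a layout along those lines might be created, assuming two identically sliced disks (s0 = 15 GB, s1 = remainder; device names are placeholders):

```shell
# Mirrored root pool on the small slices for the host OS:
zpool create rpool mirror c0t0d0s0 c0t1d0s0

# Separate mirrored pool on the large slices for the VM images:
zpool create vmpool mirror c0t0d0s1 c0t1d0s1
```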