Were those tests you mentioned on RAID-5/6/RAID-Z/Z2 or on mirrored
volumes of some kind?
We've found here that VM loads on RAID-10 SATA volumes, with relatively
high numbers of disks, actually work pretty well - and depending on the
size of the drives, you quite often get more usable space too. ;-)
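For anyone wanting to reproduce that kind of layout, a minimal sketch of a
striped-mirror ("RAID-10 style") pool; the pool name "tank" and the cXtYdZ
device names below are hypothetical:

# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

ZFS stripes across the mirror vdevs, so random-read IOPS scale with the
number of pairs, and the stripe can be widened later without rebuilding:

# zpool add tank mirror c1t6d0 c1t7d0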
On Mon, Aug 3, 2009 at 10:18 PM, Tim Cook wrote:
>
>
> On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik
> wrote:
>
>> I am looking at NAS software from Nexenta, and after some initial
>> testing I like what I see. So I think we will find funding in the budget for
>> a dual setup.
>>
>> We are looking at a dual cpu Supermicro server with about 32gb ram and 2
>> x250gb OS disks, 21 x 1TB SATA disks, and 1 x 64gb SSD disk.
On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik wrote:
> I am looking at NAS software from Nexenta, and after some initial testing
> I like what I see. So I think we will find funding in the budget for a dual
> setup.
>
> We are looking at a dual cpu Supermicro server with about 32gb ram and 2 x250gb
> OS disks, 21 x 1TB SATA disks, and 1 x 64gb SSD disk.
James Lever wrote:
Nathan Hudson-Crim,
On 04/08/2009, at 8:02 AM, Nathan Hudson-Crim wrote:
Andre, I've seen this before. What you have to do is ask James each
question 3 times and on the third time he will tell the truth. ;)
FWIW, I totally could see this in a joking context... I won't tell
Nathan Hudson-Crim,
On 04/08/2009, at 8:02 AM, Nathan Hudson-Crim wrote:
Andre, I've seen this before. What you have to do is ask James each
question 3 times and on the third time he will tell the truth. ;)
I know this is probably meant to be seen as a joke, but it's clearly
in very poor taste.
On Tue, 4 Aug 2009, James C. McPherson wrote:
If so, did anyone see the presentation?
Yes. Everybody who attended.
You know, I think we might even have some evidence of their attendance!
http://mexico.purplecow.org/static/kca_spk/tn/IMG_2177.jpg.html
http://mexico.purplecow.org/static/kca_
On Mon, 03 Aug 2009 18:26:44 -0500
Wes Felter wrote:
> Dave McDorman wrote:
> > I don't think is at liberty to discuss ZFS Deduplication at this point in
> > time:
>
> Did Jeff Bonwick and Bill Moore give a presentation at kernel.conf.au or
> not?
Yes they did - a keynote, and they participa
Dave McDorman wrote:
I don't think is at liberty to discuss ZFS Deduplication at this point in time:
Did Jeff Bonwick and Bill Moore give a presentation at kernel.conf.au or
not? If so, did anyone see the presentation? Did the conference
attendees all sign NDAs or something?
Wes Felter
I was absolutely not impugning you in any way but rather trying to lighten the
mood with a little Austin Powers humor. I'll refrain from this in the future.
And quite to the contrary of any negative feelings, I am pleased and grateful
that you are participating in this conversation.
Regards,
Nathan
On Mon, 03 Aug 2009 15:02:43 -0700 (PDT)
Nathan Hudson-Crim wrote:
> > On Sun, 02 Aug 2009 15:26:12 -0700 (PDT)
> > Andre Lue wrote:
> >
> > Was de-duplication slated for snv_119?
> >
> > No.
> >
> > > If not can anyone say which snv_xxx and in which form will we
> > > see it (synchronous, asynchronous both)?
On Mon, 3 Aug 2009, Joachim Sandvik wrote:
Will the IOPS in the mirrored setup be so good that an SSD cache
disk might not be needed? And I then might go for 10 x mirror with 2
x 1tb instead of 9? I really don't think that space will be an issue
This really depends on how many synchronous writes
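(To make that concrete: a separate log device only absorbs synchronous
writes - NFS, databases and the like - so whether the SSD pays off depends
on that part of the workload. A hedged sketch, device names hypothetical:

# zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 log c3t0d0

and "zpool iostat -v tank 5" will show how much traffic the log device
actually sees.)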
> On Sun, 02 Aug 2009 15:26:12 -0700 (PDT)
> Andre Lue wrote:
>
> Was de-duplication slated for snv_119?
>
> No.
>
> > If not can anyone say which snv_xxx and in which form will we
> > see it (synchronous, asynchronous both)?
>
> No, and no.
>
> Sorry,
> James
Andre, I've seen this before. What you have to do is ask James each
question 3 times and on the third time he will tell the truth. ;)
Will the IOPS in the mirrored setup be so good that an SSD cache disk might not
be needed? And I then might go for 10 x mirror with 2 x 1tb instead of 9? I
really don't think that space will be an issue on this system as we for now are
using about 3tb, and I have been testing compression with gre
On Mon, Aug 3, 2009 at 12:43 PM, Kyle McDonald wrote:
> I think I've read that the AMD 790FX/750SB chipset's SATA controller is
> supported, but may have recently had bugs?
I think the SB700 / SB750 problem was related to doing DMA transfers
with more than 4GB of memory. I think the chipset lies a
On Mon, 3 Aug 2009, Joachim Sandvik wrote:
We are looking at a dual cpu Supermicro server with about 32gb ram
and 2 x250gb OS disks, 21 x 1TB SATA disks, and 1 x 64gb SSD disk.
The system will use Nexenta's auto-cdp, which I think is based on
AVS, to remote mirror to a system a few miles away.
On Mon, Aug 03, 2009 at 01:15:49PM -0700, Jan wrote:
> Yes, I have an EFI label on that device.
> This is my procedure to try growing the capacity of the device:
> -> export the zpool
> -> overwrite the existing EFI label with format tool
> -> auto-configure it
> -> import the zpool
>
> What do y
I am looking at NAS software from Nexenta, and after some initial testing I
like what I see. So I think we will find funding in the budget for a dual
setup.
We are looking at a dual cpu Supermicro server with about 32gb ram and 2 x250gb
OS disks, 21 x 1TB SATA disks, and 1 x 64gb SSD disk.
Hi Darren,
thanks for your reply.
> What did you try?
> Since you're larger than 1T, you certainly have an EFI label. What you
> have to do is destroy the existing EFI label, then have format create a
> new one for the larger LUN. Finally, create slice 0 as the size of the
> entire (now larger) disk.
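(Spelled out as commands, a rough sketch of that procedure for a pool
"tank" on a single LUN c4t0d0 - both names hypothetical - after the LUN
has been grown on the array:

# zpool export tank
# format -e c4t0d0
    (write a fresh EFI label and size slice 0 to the full, larger LUN)
# zpool import tank

The export/import is what makes ZFS re-read the now-larger slice 0.)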
Hi all,
I think I've read that the AMD 790FX/750SB chipset's SATA controller is
supported, but may have recently had bugs?
I'm currently trying to decide between a MB with that chipset and
another that uses the nVidia 780a and nf200 south bridge.
Is the nVidia SATA controller well supported?
Pilot Error fixed... thanks!
-- Forwarded message --
From: Blake
Date: Mon, Aug 3, 2009 at 10:53 AM
Subject: Re: [zfs-discuss] missing disk space
To: "David E. Anderson"
Cc: zfs-discuss@opensolaris.org
On Mon, Aug 3, 2009 at 1:41 PM, David E. Anderson
wrote:
> $ zfs get all storage
On 03.08.09 03:44, Stephen Pflaum wrote:
George,
I have a pool with family photos on it which needs recovery. Is there a live CD with a tool to invalidate the uberblock which will boot on a MacBook Pro?
This has been recovered by rolling two txgs back. The pool is being scrubbed now.
More details
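(For anyone finding this thread later: the txg rollback here was a manual
operation at the time. Later builds grew a supported recovery mode;
availability is build-dependent, so treat this as a sketch:

# zpool import -F tank

which rewinds the pool a few transaction groups when the most recent ones
are damaged.)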
On Mon, Aug 3, 2009 at 1:41 PM, David E. Anderson wrote:
> $ zfs get all storage
> NAME PROPERTY VALUE SOURCE
> storage type filesystem -
> storage creation Fri Jul 10 21:19 2009 -
> storage used 89.2K
On 03.08.09 21:34, Blake wrote:
On Mon, Aug 3, 2009 at 12:35 PM, David E. Anderson wrote:
I am new to ZFS, so please bear with me...
I created a raidz1 pool from three 1.5TB disks on OpenSolaris 2009.06. I see
less than 1TB usable space. What did I do wrong?
$ zpool list
NAME SIZE USE
Andrew,
Take a look at your zpool list output, which identifies the size of your
iscsi-pool pool.
Regardless of how the volume size was determined, your remaining
pool size is still 33GB and yes, some of it is used for metadata.
cs
On 08/03/09 11:26, andrew.r...@sun.com wrote:
hi cindy,
tnx
$ zfs get all storage
NAME PROPERTY VALUE SOURCE
storage type filesystem -
storage creation Fri Jul 10 21:19 2009 -
storage used 89.2K -
storage available 913G
On Mon, Aug 3, 2009 at 12:35 PM, David E. Anderson wrote:
> I am new to ZFS, so please bear with me...
>
> I created a raidz1 pool from three 1.5TB disks on OpenSolaris 2009.06. I see
> less than 1TB usable space. What did I do wrong?
>
> $ zpool list
> NAME SIZE USED AVAIL CAP HEALTH
I have not carried out any research into this area, but when I was
building my home server I wanted to use a Promise SATA-PCI card, but
alas (Open)Solaris has no support at all for the Promise chipsets.
Instead I used a rather old card based on the sil3124 chipset.
On Mon, Aug 3, 2009 at 9:35
hi cindy,
tnx for the response.
here's every attr that has "size" in it: :-)
# zlvsz iscsi-pool/log_1_1
NAME                AVAIL  REFER   USED  QUOTA  RECSIZE  REFQUOTA  REFRESERV  RESERV  VOLBLOCK  VOLSIZE
iscsi-pool/log_1_1  33.7G    54K  24.4G      -        -         -      24.4G    none
Hi Andrew,
The AVAIL column indicates the pool size, not the volsize
in this example.
In your case, the iscsi-pool/log_1_1 volume is 24 GB in size
and the remaining pool space is 33.7G. The 33.7G reflects
your pool space, not your volume size.
The sizing is easier to see if you include the zpool list output.
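(A quick way to see both numbers side by side - dataset name as in this
thread, both commands standard:

$ zfs get volsize,used,refreservation iscsi-pool/log_1_1
$ zpool list iscsi-pool

The first reports on the 24G volume itself; the second reports whole-pool
size and free space, which is where a figure like the 33.7G AVAIL comes
from.)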
On 07/31/09 06:12 PM, Jorgen Lundman wrote:
Finding a SATA card that would work with Solaris, be hot-swappable, and
have more than 4 ports sure took a while. Oh, and be reasonably priced ;)
Let's take this first point: "card that works with Solaris".
I might try to find some engineers to write
hi,
I'm using a zvol someone else created (and then used as
an iSCSI target, via: "iscsitadm ... -b /dev/zvol ...").
I see that AVAIL has a size of 33GB, yet the VOLSIZE is 24GB:
# zfs list -t volume -o name,avail,used,volsize iscsi-pool/log_1_1
NAME                AVAIL   USED  VOLSIZE
iscsi-
I am new to ZFS, so please bear with me...
I created a raidz1 pool from three 1.5TB disks on OpenSolaris 2009.06. I see
less than 1TB usable space. What did I do wrong?
$ zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool    464G  42.2G   422G   9%  ONLINE  -
storage  1.36T
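(For what it's worth, the expected arithmetic for a healthy raidz1 of
three equal disks is usable ~= (3 - 1) x disk size, so three 1.5 TB
(~1.36 TiB) disks should show roughly 2.7 TiB usable. A pool SIZE of
1.36T with 913G available is almost exactly the 2/3 ratio for three
~465G devices. One plausible reading, consistent with the later "Pilot
Error fixed" note, is that the pool was built on small slices rather
than whole disks.)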
Kurt Schreiner wrote:
Hi Tobias,
On Mon, Aug 03, 2009 at 01:24:35PM +0200, Tobias Exner wrote:
there's no such property available.
There's no entry in the manpages, either.
bash-3.00# zpool set autoexpand=on testpool
cannot set property for 'testpool': invalid property 'autoexpand'
Ahh, I understand.
The autoexpand property is a feature which allows the pool to grow on the fly.
That's great, but export/import is good enough for the moment.
thanks,
Tobias
Kurt Schreiner wrote:
Hi Tobias,
On Mon, Aug 03, 2009 at 01:24:35PM +0200, Tobias Exner wrote:
there's no such pr
Hi Tobias,
On Mon, Aug 03, 2009 at 01:24:35PM +0200, Tobias Exner wrote:
>
> there's no such property available.
> There's no entry in the manpages, either.
>
>
> bash-3.00# zpool set autoexpand=on testpool
> cannot set property for 'testpool': invalid property 'autoexpand'
>
> Maybe a problem of the zfs version?
Hello,
Recently, one of the disks in a raidz1 on my OpenSolaris (snv_118)
file server failed.
It continued operating in the DEGRADED state for a day or so until I
noticed, at which point I removed the faulted disk and turned it back
on (to confirm I had removed the correct disk).
When I replaced the disk, I
This may have been mentioned elsewhere and, if so, I apologize for
repeating.
Is it possible your difficulty here is with the Marvell driver and not,
strictly speaking, ZFS? The Solaris Marvell driver has had many, MANY
bug fixes and continues to this day to be supported by IDR patches and
o
Hi Francois,
I just tried it and it's done.
Thank you very much!
*my "zpool history"*
Verlauf für 'testpool':
2009-08-03.11:14:27 zpool create testpool raidz1 c1t1d0 c1t2d0 c1t3d0
2009-08-03.11:20:48 zpool offline testpool c1t1d0
2009-08-03.11:24:19 zpool replace testpool c1t1d0 c1t4d0
2009-
Hi Kurt,
Hi Scott,
there's no such property available.
There's no entry in the manpages, either.
bash-3.00# zpool set autoexpand=on testpool
cannot set property for 'testpool': invalid property 'autoexpand'
Maybe a problem of the zfs version?
Here's my "zpool history":
History for 'testpool':
On Mon, Aug 03, 2009 at 11:51:59AM +0200, Tobias Exner wrote:
> Hi list,
>
> some months ago I spoke with a ZFS expert at a Sun Storage event.
>
> He told me it's possible to grow a zpool by replacing every single disk
> with a larger one.
> After replacing and resilvering all disks of this pool, ZFS will
> provide the new size automatically.
Tobias Exner wrote:
Hi list,
some months ago I spoke with a ZFS expert at a Sun Storage event.
He told me it's possible to grow a zpool by replacing every single disk
with a larger one.
After replacing and resilvering all disks of this pool, ZFS will
provide the new size automatically.
Now
Hi list,
some months ago I spoke with a ZFS expert at a Sun Storage event.
He told me it's possible to grow a zpool by replacing every single disk
with a larger one.
After replacing and resilvering all disks of this pool, ZFS will provide
the new size automatically.
Now I found time to check t
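(As commands, the procedure under test looks roughly like this, reusing
the pool and device names from the "zpool history" earlier in the thread;
on builds without the autoexpand property, the final export/import is
what picks up the new size:

# zpool replace testpool c1t1d0 c1t4d0    (repeat for each disk)
# zpool status testpool                   (wait for each resilver to finish)
# zpool export testpool
# zpool import testpool
# zpool list testpool                     (SIZE should now reflect the larger disks)

On builds that do have the property, "zpool set autoexpand=on testpool"
makes the last step automatic.)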