On Thu, Jul 23, 2009 at 12:29 PM, thomas wrote:
> Hmm.. I guess that's what I've heard as well.
>
> I do run compression and believe a lot of others would as well. So then, it
> seems to me that if I have guests that run a filesystem formatted with 4k
> blocks, for example, I'm inevitably going to have this overlap when using
> ZFS network storage?
> Where is the best place to read about the latest state of ZFS SSD support
> and its roadmap, given that the latest ZFS release adds SSD management
> to ZFS?
I recommend these blog posts, if you have not read them yet.
ZFS L2ARC
http://blogs.sun.com/brendan/entry/test
ZIL
http://blogs.sun.com/perrin/entry/slog
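Not from the thread, but for anyone who just wants the mechanics: on bits
that support them, L2ARC and slog devices are attached with ordinary zpool
commands. A minimal sketch, device names hypothetical:

  zpool add tank cache c4t0d0               (SSD as L2ARC read cache)
  zpool add tank log mirror c4t1d0 c4t2d0   (mirrored SSD slog for the ZIL)
  zpool status tank                         (both appear under the pool)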
Found this:
ECC Mode [Disabled]
Disables or sets the DRAM ECC mode that allows the hardware to report and
correct memory errors. Set this item to [Basic], [Good], or [Max] to allow
ECC mode auto-adjustment. Set this item to [Super] to adjust the DRAM BG
Scrub sub-item manually. You may also adjust
Good news: the manual for the M4N78-VM mentions ECC and gives the following
BIOS options: disabled/basic/good/super/max/user.
Unsure what these mean, but that's a start.
On Fri, 17 Jul 2009 14:16:32 -0400
Miles Nordin wrote:
> > "rl" == Rob Logan writes:
>
> rl> Is there some magic that load balances the 4 SAS ports as this
> rl> shows up as one "scsi-bus"?
>
> The LSI card is not using the SATA framework. I've the impression drive
> enumeration and topolog
On Thu, 23 Jul 2009, Scott Lawson wrote:
The plan is to have a couple of x4240's with dual quad-core
processors, 16 GB RAM and 6 x 146 GB 10K SAS drives, plus 1 x 32 GB
SSD as L2ARC. I can add this later if support for this is not available
at build time, but is it roadmapped for S8?
I suggest ma
On Wed, 22 Jul 2009, Roch wrote:
Hi Bob, did you consider running the 2 runs with
echo zfs_prefetch_disable/W0t1 | mdb -kw
and seeing if performance is constant between the 2 runs (and low)?
That would help narrow down the cause a bit. Sorry, I'd do it for
you, but you have the setup etc...
Revert
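To spell out the full round trip Roch suggests (the truncated "Revert" above
presumably refers to undoing the tuning): run as root, and note this changes
the live kernel only, not boot defaults.

  echo zfs_prefetch_disable/W0t1 | mdb -kw   (disable prefetch)
  echo zfs_prefetch_disable/W0t0 | mdb -kw   (revert to the default)
  echo zfs_prefetch_disable/D | mdb -k       (print the current value)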
Hi All,
Can anyone shed some light on whether L2ARC support will be included in the
next Solaris 10 update? Or if it is included in a kernel patch over and above
the standard kernel patch rev that ships in 05/09 (AKA U7)?
The reason I ask is that I have standardised on S10 here and am not keen
I have (2) of the following boxes, exactly matching.
(2) Super Micro X7DBN Motherboard Dual
(16) GB of RAM (8GB in each box)
(4) 1.6GHz Intel XEON Quad-Core LGA771
(2) Super Micro 2U RM (12 Bay Chassis)
(2) Super Micro AOC 8-port SATA Controller
I'd like ZFS to replicate this box to the other; is this
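The usual tool for box-to-box replication is zfs send/receive over ssh. A
minimal sketch, assuming a pool named "tank" on both boxes and root ssh
between them (all names hypothetical):

  zfs snapshot -r tank@rep1
  zfs send -R tank@rep1 | ssh otherbox zfs receive -Fd tank

The first run sends a full stream; later runs send only the delta since the
previous snapshot:

  zfs snapshot -r tank@rep2
  zfs send -R -i tank@rep1 tank@rep2 | ssh otherbox zfs receive -Fd tank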
Have you considered running your script with ZFS pre-fetching disabled
altogether to see if
the results are consistent between runs?
Brad
Brad Diggs
Senior Directory Architect
Virtualization Architect
xVM Technology Lead
Sun Microsystems, Inc.
Phone x52957/+1 972-992-0002
Mail bradley
> "aym" == Anon Y Mous writes:
> "mg" == Mario Goebbels writes:
aym> I don't mean to be offensive Russel, but if you do ever return
aym> to ZFS, please promise me that you will never, ever, EVER run
aym> it virtualized on top of NTFS
He said he was using raw disk devices, IIRC.
On Wed, 22 Jul 2009, t. johnson wrote:
Lets say I have a simple-ish setup that uses vmware files for
virtual disks on an NFS share from zfs. I'm wondering how zfs'
variable block size comes into play? Does it make the alignment
problem go away? Does it make it worse? Or should we perhaps be
The i7 doesn't support ECC even if the motherboard supports it; you need a
Xeon W3500, which costs the same as an i7, to get ECC support.
One of the things that commonly comes up in the server virtualization world is
making sure that all of the storage elements are "aligned". This is because
there are often so many levels of abstraction, each using their own "block
size", that without any tuning they'll usually overlap and can cau
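On the ZFS side, the relevant knob is the dataset recordsize, which caps the
largest block ZFS writes for a file. A hedged sketch for guests that use 4k
blocks (dataset name hypothetical; it only affects files written after the
change, and only helps if the layers below are aligned too):

  zfs set recordsize=4K tank/vmware
  zfs get recordsize tank/vmware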
I can't speak to whether it's a good idea or not, but I also wanted to do this
and it was rather difficult. The problem is the OpenSolaris installer doesn't
let you set up slices on a device to install to.
The two ways I came up with were:
1) using the automated installer to do everything becau
I've started reading up on this, and I know I have a lot more reading to
do, but I've already got some questions... :)
I'm not sure yet that it will help for my purposes, but I was
considering buying 2 SSD's for mirrored boot devices anyway.
My main question is: Can a pair of say 60GB SSD's
Thanks! Rats, we're running GA u7 and not OpenSolaris for now:
# zpool set autoexpand=on pool (my pool is, in fact, named "pool")
cannot set property for 'pool': invalid property 'autoexpand'
We're not in production yet, but I eventually have to install Veritas Netbackup
on this thing (please
4. Yes :-D
While you can't shrink, you can already replace drives with bigger ones, and
ZFS does increase the size at the end (although I think it needs an
unmount/mount right now).
However, even though you can simply pull one drive and replace it with a bigger
one, that does degrade your arr
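Spelled out, the grow-by-replacement workflow on releases without the
autoexpand property looks roughly like this (device names hypothetical;
replace one disk at a time and let each resilver finish):

  zpool replace pool c1t0d0 c2t0d0   (swap in the bigger disk)
  zpool status pool                  (wait for the resilver to complete)
  ... repeat for each disk in the vdev, then:
  zpool export pool
  zpool import pool                  (re-import picks up the new size)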
Hi--
With 40+ drives, you might consider two pools anyway. If you want to
use a ZFS root pool, something like this:
- Mirrored ZFS root pool (2 x 500 GB drives)
- Mirrored ZFS non-root pool for everything else
Mirrored pools are flexible and provide good performance. See this site
for more tips:
h
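As commands, that layout is just two pools; a sketch with hypothetical device
names (in practice the installer creates the root pool, and root pools must
live on slices with an SMI label):

  zpool create rpool mirror c0t0d0s0 c0t1d0s0
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 ...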
Daniel S wrote:
I am running basic mirroring in a server setup. When I pull out a hard drive
and put it back in, it won't detect it and resilver it until I reboot the
system. Is there a way to force it to detect it and resilver it in real time?
More info on your hardware is required. In part
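In the generic case (hardware unknown, so the attachment-point and device
names below are hypothetical), the Solaris sequence is to reconfigure the
controller and then online the device:

  cfgadm -al                    (find the attachment point for the bay)
  cfgadm -c configure sata1/3   (reconfigure the re-inserted disk)
  zpool online pool c2t3d0      (tell ZFS the device is back)
  zpool status pool             (the resilver should start on its own)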
We have Thumper that we got at a good price from the Sun Educational Grant
program (thank you Sun!) but it came populated with 500GB drives. The box will
be used as a virtual tape library and general purpose NFS/iSCSI/Samba file
server for users' stuff. Probably, in about two years, we will want
To All: The ECC discussion was very interesting as I had never
considered it that way! I will be buying ECC memory for my home
machine!!
You have to make sure your mainboard, chipset and/or CPU support it,
otherwise any ECC modules will just work like regular modules.
The mainboard needs to
Once these bits are available in OpenSolaris, users will be able to
upgrade rather easily. This would allow you to take a liveCD running
these bits and recover older pools.
Do you currently have a pool which needs recovery?
Thanks,
George
Alexander Skwar wrote:
Hi.
Good to Know!
But ho
Maybe I should have posted the zdb -l output. Having seen another thread which
suggests that I might be looking at the most recent txg being damaged, I went
to get my pool's txg counter:
hydra# zdb -l /dev/dsk/c3t10d0s0 | grep txg
txg=10168474
txg=10168474
txg=6324561
txg=632456
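For context: every vdev keeps four copies of its label, two at the front of
the device and two at the end, so a healthy disk reports the same txg four
times. Widening the grep to show the label boundaries makes output like the
above easier to read:

  zdb -l /dev/dsk/c3t10d0s0 | egrep 'LABEL|txg'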
I don't mean to be offensive Russel, but if you do ever return to ZFS, please
promise me that you will never, ever, EVER run it virtualized on top of NTFS
(a.k.a. worst file system ever) in a production environment. Microsoft Windows
is a horribly unreliable operating system in situations where
Hi.
Good to know!
But how do we deal with that on older systems, which don't have the
patch applied, once it is out?
Thanks, Alexander
On Tuesday, July 21, 2009, George Wilson wrote:
> Russel wrote:
>
> OK.
>
> So do we have a zpool import --xtg 56574 mypoolname
> or help to do it (script?)
>
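No such option existed in the bits being discussed; for the record, the
recovery support that integrated later looks roughly like this (a sketch
based on later builds, not something Russel could have run at the time):

  zpool import -Fn mypoolname   (dry run: report whether discarding the
                                 last few txgs would make it importable)
  zpool import -F mypoolname    (actually rewind and import)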
Thanks for the feedback, George.
I hope we get the tools soon.
At home I have now blown the ZFS away and am creating
a HW RAID-5 set :-( Hopefully in the future, when the tools
are there, I will return to ZFS.
To All: The ECC discussion was very interesting as I had never
considered it that way
zio_assess went away with SPA 3.0 :
6754011 SPA 3.0: lock breakup, i/o pipeline refactoring, device failure
handling
You now have :
zio_vdev_io_assess(zio_t *zio)
Yes, it's one of the last stages of the I/O pipeline (see zio_impl.h).
-r
tester writes:
> Hi,
>
> What does zio
Stuart Anderson writes:
> On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
>
> > On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson wrote:
> >
> > > However, it is a bit disconcerting to have to run with reduced data
> > > protection for an entire week. While I am certai
tester writes:
> Hello,
>
> Trying to understand the ZFS IO scheduler; because of its async nature
> it is not very apparent. Can someone give a short explanation for each
> of these stack traces and for their frequency?
>
> This is the command:
>
> dd if=/dev/zero of=/test/test1/tras
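Not an answer from the thread, but the traces themselves are easy to
reproduce: aggregate kernel stacks on a pipeline entry point with DTrace
while the dd runs (function name as in the ONNV source referenced above):

  dtrace -n 'fbt::zio_vdev_io_assess:entry { @[stack()] = count(); }'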
Don't hear about triple-parity RAID that often:
I agree completely. In fact, I have wondered (probably in these
forums) why we don't bite the bullet and make a generic raidzN,
where N is any number >= 0.
I agree, but raidzN isn't simple to implement and it's potentially
difficult to get
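A sketch of where the difficulty lives, using the triple-parity construction
as the reference point (all arithmetic in GF(2^8); d_i are the data blocks of
one stripe):

  P = d_0 + d_1 + d_2 + ...         (plain XOR)
  Q = 1*d_0 + 2*d_1 + 4*d_2 + ...   (coefficients 2^i)
  R = 1*d_0 + 4*d_1 + 16*d_2 + ...  (coefficients 4^i)

Recovering from k lost disks means inverting the k x k matrix of coefficients
for the lost columns. With the 1/2/4 generators those matrices are invertible
for every failure pattern up to three parities, but the pattern does not
extend to arbitrary N, so a generic raidzN needs a different coefficient
construction (e.g. Cauchy-style matrices) plus reconstruction code for every
failure combination.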