Adam Sherman wrote:
On 6-Aug-09, at 15:16 , Ian Collins wrote:
This ended up being a costly mistake, the environment I ended up with
didn't play well with Live Upgrade. So I suggest, whatever you do,
make sure you can create a new BE and boot into it before committing.
I assume this was old-style LU and the
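The precaution above (prove you can create and boot a new BE before committing) can be sketched with classic Live Upgrade commands; the BE name `testBE` is illustrative:

```shell
# Create a new boot environment as a sanity check
# (on a ZFS root this is a cheap clone of the current BE)
lucreate -n testBE

# Activate it and reboot into it; Live Upgrade requires init 6 --
# a plain "reboot" will not complete the activation
luactivate testBE
init 6

# Once you have confirmed the new BE boots, clean up
ludelete testBE
```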
On Aug 7, 2009, at 2:29 PM, Ed Spencer wrote:
Let me give a real life example of what I believe is a fragmented
zfs pool.
Currently the pool is 2 terabytes in size (55% used) and is made of
4 san luns (512gb each).
The pool has never gotten close to being full. We increase the size
of the pool by adding 2 512gb luns about once a year or so
Stephen Green wrote:
Also, I got my wife to agree to a new SSD, so I presume that I can
simply do the re-silver with the new drive when it arrives.
And the last thing for today, I ended up getting:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609330
which is 16GB and should be suf
ZFS filesystem version 6
ZFS storage pool version 6
Platform - 0.7RC1 (revision 4735) // FreeBSD 7.2-RELEASE-p1 (revision 199506)
// i386-embedded on AMD Athlon(tm) XP 1500+
Drives - 250GB & 150GB IDE // ZFS Raid Stripe
I'm aware this is not OpenSolaris, but I'm looking for the ZFS expertise. Thanks.
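Pool and filesystem versions like the ones quoted above can be read straight from the command line; a sketch, with the pool name `tank` as an illustrative stand-in:

```shell
# Show the on-disk version of a pool and of its top-level dataset
zpool get version tank
zfs get version tank

# List every version the installed ZFS code understands
zpool upgrade -v
zfs upgrade -v
```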
EON 64-bit x86 CIFS ISO image version 0.59.2 based on snv_119
* eon-0.592-119-64-cifs.iso
* MD5: a8560cf9b407c9da846dfa773aeaf676
* Size: ~87 MB
* Released: Friday 07-August-2009
EON 64-bit x86 Samba ISO image version 0.59.2 based on snv_119
* eon-0.592-119-64-smb.iso
* MD5:
Hey Richard,
I believe 6844090 would be a candidate for an s10 backport.
The behavior of 6844090 worked nicely when I replaced a disk of the same
physical size even though the disks were not identical.
Another flexible storage feature is George's autoexpand property (Nevada
build 117), where yo
Hello Kyle!
Sorry for the late answer.
> > Be careful with nVidia if you want to use Samsung SATA disks.
> > There is a problem with the disk freezing up. This bit me with
> > our X2100M2 and X2200M2 systems.
>
> I don't know if it's related to your issue, but I have also seen
> comments aroun
> I first create a lun by "stmfadm create-lu ", and
> add-view , so the initiator can see the created
> lun.
>
> Now I use "zfs snapshot" to create snapshot for the
> created lun.
>
> What can I do to make the snapshot accessible to the
> initiator? Thanks.
Hi,
This is a good question and some
Stephen Green wrote:
Oh, and for those following along at home, the re-silvering of the slog
to a file is proceeding well. 72% done in 25 minutes.
And, for the purposes of the archives, the re-silver finished in 34
minutes and I successfully removed the RAM disk. Thanks, Erik for the
eminen
I first create a lun by "stmfadm create-lu ", and add-view , so the initiator
can see the created lun.
Now I use "zfs snapshot" to create snapshot for the created lun.
What can I do to make the snapshot accessible to the initiator? Thanks.
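One common answer (a sketch of a standard COMSTAR approach, not necessarily what the eventual reply suggested): a snapshot is read-only, so clone it and register the clone as its own logical unit. The names `tank/lun0`, `@snap1`, and the clone are illustrative:

```shell
# Snapshot the zvol backing the existing logical unit
zfs snapshot tank/lun0@snap1

# Snapshots are read-only; create a writable clone of one
zfs clone tank/lun0@snap1 tank/lun0-clone

# Register the clone as a new logical unit and export it;
# create-lu prints the GUID that add-view needs
stmfadm create-lu /dev/zvol/rdsk/tank/lun0-clone
stmfadm add-view <GUID-printed-by-create-lu>
```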
--
This message posted from opensolaris.org
On Fri, 7 Aug 2009, Scott Meilicke wrote:
Bob, since the ZIL is used always, whether a separate device or not,
won't writes to a system without a separate ZIL also be written as
intelligently as with a separate ZIL?
I don't know the answer to that. Perhaps there is no current
advantage. T
On 08/07/09 10:54, Scott Meilicke wrote:
ZFS absolutely observes synchronous write requests (e.g. by NFS or a
database). The synchronous write requests do not benefit from the
long write aggregation delay so the result may not be written as
ideally as ordinary write requests. Recently zfs has
Scott Meilicke wrote:
Note - this has a mini PCIe interface, not PCIe.
Well, that's an *excellent* point. I guess that lets that one out.
It turns out I do have an open SATA port, so I might just go for a disk
that has a SATA interface, since that should just work.
I had the 64GB version
Hi,
Is the ability to add a log device to a root pool on the roadmap for ZFS?
Thanks,
Gregg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> ZFS absolutely observes synchronous write requests (e.g. by NFS or a
> database). The synchronous write requests do not benefit from the
> long write aggregation delay so the result may not be written as
> ideally as ordinary write requests. Recently zfs has added support
> for using a SSD as
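Completing that thought from elsewhere in the thread: the SSD serves as a separate intent log (slog) device. A sketch of attaching one, with pool and device names as illustrative assumptions:

```shell
# Add an SSD as a separate ZIL (slog) device
zpool add tank log c2t0d0

# Or a mirrored slog for safety (log-device removal was not
# supported in all 2009-era releases, so choose carefully)
zpool add tank log mirror c2t0d0 c2t1d0
```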
Note - this has a mini PCIe interface, not PCIe.
I had the 64GB version in a Dell Mini 9. While it was great for its small
size, low power and low heat characteristics (no fan on the Mini 9!), it was
only faster than the striped sata drives in my mac pro when it came to random
reads. Everythin
Sweet! Thanks! You rock!
On 6-Aug-09, at 11:32 , Thomas Burgess wrote:
I've seen some people use usb sticks, and in practice it works on
SOME machines. The biggest difference is that the bios has to allow
for usb booting. Most of today's computers DO. Personally I like
compact flash because it is fairly easy to us
On Thu, 6 Aug 2009, Hua wrote:
1. Due to the COW nature of zfs, files on zfs are more prone to
fragmentation compared to traditional file systems. Is this
statement correct?
Yes and no. Fragmentation is a complex issue.
ZFS uses 128K data blocks by default whereas other filesystems
typica
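The 128K default mentioned above is the dataset's `recordsize` property, and it is tunable per dataset; a sketch, with dataset names and the 8K value as illustrative assumptions:

```shell
# Inspect the current block size of a dataset
zfs get recordsize tank/data

# Match the record size to a database's page size to reduce
# read-modify-write cycles (check your database's actual page size)
zfs set recordsize=8K tank/db
```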
Hi Michael,
I will get this fixed.
Thanks for letting us know.
Cindy
On 08/07/09 09:24, Michael Marburger wrote:
Who do we contact to fix misinformation in the evil tuning guide?
at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#How_to_Tune_Cache_Sync_Handling_Per_St
On Fri, Aug 7, 2009 at 8:49 AM, Dick Hoogendijk wrote:
> I've a new MB (the same as before but this one works..) and I want to
> change the way my SATA drives were connected. I had a ZFS boot mirror
> connected to SATA3 and 4 and I want those drives to be on SATA1 and 2 now.
>
> Question: will ZFS
Stephen Green wrote:
Thanks for the advice, I think it might be time to convince the wife
that I need to buy an SSD. Anyone have recommendations for a reasonably
priced SSD for a home box?
For example, does anyone know if something like:
http://www.newegg.com/Product/Product.aspx?Item=N82E16
Who do we contact to fix misinformation in the evil tuning guide?
at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#How_to_Tune_Cache_Sync_Handling_Per_Storage_Device
Item 2 indicates SPARC uses file name ssd.conf and X64 uses sd.conf to insert a
line "sd-config-list ...
erik.ableson wrote:
On 7 août 09, at 02:03, Stephen Green wrote:
Man, that looks so nice I think I'll change my mail client to do dates
in French :-)
Now my only question is: what do I do when it's done? If I reboot
and the ram disk disappears, will my tank be dead? Or will it just
conti
I've a new MB (the same as before but this one works..) and I want to
change the way my SATA drives were connected. I had a ZFS boot mirror
connected to SATA3 and 4 and I want those drives to be on SATA1 and 2 now.
Question: will ZFS see this and boot the system OK or will I have to
take some pr
>Yes, but to see if a separate ZIL will make a difference the OP should
>try his iSCSI workload first with ZIL then temporarily disable ZIL and
>re-try his workload.
or you may use the zilstat utility
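zilstat is a DTrace-based script for watching ZIL traffic; a usage sketch, where the interval/count arguments and the 2009-era `zil_disable` tunable are assumptions about that setup:

```shell
# Watch ZIL activity: one-second samples, ten of them;
# if it shows little traffic, a separate slog will not help
./zilstat 1 10

# (2009-era) temporarily disable the ZIL for an A/B comparison --
# unsafe for data integrity, test systems only
echo "set zfs:zil_disable = 1" >> /etc/system
# ...reboot, re-run the workload, then remove the line and reboot again
```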
> Besides the /etc/system, you could also export all
> the pools, use mdb to
> set the same variable that /etc/system sets, and then
> import the pools
> again. Don't know of any other mechanism to limit
> ZFS's memory foot print.
>
> If you don't do ZFS boot, manually import the pools
> after t
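The procedure quoted above (export the pools, poke the kernel variable with mdb, re-import) can be sketched as follows; the 512 MB cap and the pool name are illustrative:

```shell
# Export pools so ZFS releases its cache
zpool export tank

# Cap the ARC at 512 MB by writing the kernel variable directly --
# the same variable that "set zfs:zfs_arc_max" in /etc/system sets,
# but without a reboot
echo "zfs_arc_max/Z 0x20000000" | mdb -kw

# Bring the pools back
zpool import tank
```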
On 7 août 09, at 02:03, Stephen Green wrote:
I used a 2GB ram disk (the machine has 12GB of RAM) and this jumped
the backup up to somewhere between 18-40MB/s, which means that I'm
only a couple of hours away from finishing my backup. This is, as
far as I can tell, magic (since I started th