Hi,
Does anyone have experience with Texas Memory Systems' RamSan in ZFS?
Thanks.
Fred
On Sun, Nov 28, 2010 at 5:18 PM, Krunal Desai wrote:
> > There are problems with Sandforce controllers, according to forum posts.
> Buggy firmware. And in practice, Sandforce is far below its theoretical
> values. I expect Intel to have fewer problems.
>
> I believe it's more the firmware (and p
Hi Karel,
Try /usr/bin/find instead of /usr/gnu/bin/find:
# which find
/usr/gnu/bin/find
# zfs snapshot rpool/cindys@snap1
# cd /rpool/cindys/.zfs
# /usr/bin/find . -type f
./snapshot/snap1/file.1
./snapshot/snap1/file.2
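If you want to see which snapshots exist before searching, zfs list can show
them; for example (dataset name as above):
# zfs list -t snapshot -r rpool/cindys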
Thanks,
Cindy
On 11/25/10 15:22, Karel Gardas wrote:
Hello,
after upgra
On 30 November 2010 03:09, Krunal Desai wrote:
> I assume it either:
>
> 1. does a really good job of 512-byte emulation that results in little
> to no performance degradation
> (
> http://consumer.media.seagate.com/2010/06/the-digital-den/advanced-format-drives-with-smartalign/
> references "te
On Sat, Nov 27, 2010 at 03:04:27AM -0800, Erik Trimble wrote:
Hi,
>
> I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd
> be interested if anyone else has. The EX is SLC-based, and the PRO is
> MLC-based, but the claimed performance numbers are similar. If the PRO
> work
Karel,
You can't create snapshots in a read-only pool.
You will have to use something else besides zfs snapshots, such as
tar or cpio.
You could have used zfs send if a snapshot already existed, but you
can't write anything to the pool when it is in read-only mode.
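For example, to copy a directory tree off the read-only pool with cpio (the
source and destination paths here are only placeholders):
# cd /pool/data
# /usr/bin/find . -depth -print | cpio -pdm /backup/data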
Thanks,
Cindy
On 11/25/10 0
On Mon, Nov 29, 2010 at 10:59 AM, Krunal Desai wrote:
> The Seagate datasheet for those parts reports 512-byte sectors. What is
> the deal with the ST32000542AS: native 512-byte sectors, native
> 4k-byte sector with selectable emulation, or native 4k-byte sectors
> with 512-byte sector emulation al
> I'm using these drives for one of the vdevs in my pool. The pool was created
> with ashift=12 (zpool binary
> from http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/),
> which limits the minimum block size to 4KB, the same as the physical block
> size on these drive
On 29-11-2010 14:35, rwali...@washdcmail.com wrote:
I haven't done this on Solaris 11 Express, but this worked on
OpenSolaris 2009.06:
prtvtoc /dev/rdsk/c5t0d0s0 | fmthard -s - /dev/rdsk/c5t1d0s0
Where the first disk is the current root and the second one is the new mirror.
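Once the label is copied over, the attach itself would be something like
(same device names as above; on x86 you would also run installgrub on the
new disk so it is bootable):
# zpool attach rpool c5t0d0s0 c5t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0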
It works on solari
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Piscuc
>
> looks promising. One element that we cannot determine is the optimum
> number of disks in a raid-z pool. In the ZFS best practice guide, 7, 9 and 11
There are several important
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> (1) Unless you are using Zvols for "raw" disk partitions (for use with
> something like a database), the recordsize value is a MAXIMUM value, NOT
> an absolute value. Thus, if
On Mon, November 29, 2010 04:50, taemun wrote:
> I would urge you to consider a 2^n + p number of disks. For raidz, p = 1,
> so an acceptable number of total drives is 3, 5 or 9. raidz2 has two
> parity drives, hence 4, 6 or 10. These vdev widths ensure that the data
> blocks are divided into nic
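As a worked example of why those widths help (assuming the default 128 KiB
recordsize and 4 KiB sectors): a 5-disk raidz1 has 4 data disks, so each block
splits into 128/4 = 32 KiB per disk, exactly eight 4 KiB sectors. A 6-disk
raidz1 would have 5 data disks and 128/5 = 25.6 KiB per disk, which cannot be
written as a whole number of 4 KiB sectors without padding.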
On Nov 29, 2010, at 8:05 AM, Dick Hoogendijk wrote:
> OK, I've got a problem I can't solve by myself. I've installed Solaris 11
> using just one drive.
> Now I want to create a mirror by attaching a second one to the rpool.
> However, the first one has NO partition 9 but the second one does. This
OK, I've got a problem I can't solve by myself. I've installed Solaris 11
using just one drive.
Now I want to create a mirror by attaching a second one to the rpool.
However, the first one has NO partition 9 but the second one does. This
way the sizes differ if I create a partition 0 (needed bec
On 29 November 2010 15:03, Erik Trimble wrote:
> I'd have to re-look at the ZFS Best Practices Guide, but I'm pretty sure
> the recommendation of 7, 9, or 11 disks was for a raidz1, NOT a raidz2. Due
> to #5 above, best performance comes with an EVEN number of data disks in any
> raidZ, so a wri
Hi,
Thanks for the quick reply. Now that you have mentioned it, we have a
different issue. What is the advantage of using spare disks instead of
including them in the raid-z array? If the system pool is on mirrored disks,
I think that this would be enough (hopefully). When one disk fails, isn't
it
Thanks, I need to try the modified zpool then.
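Once the pool exists, you can confirm which ashift it was created with via
zdb; <poolname> below is just a placeholder:
# zdb <poolname> | grep ashift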
On Nov 29, 2010, at 10:50 AM, taemun wrote:
> On 29 November 2010 20:39, GMAIL wrote:
> Does anyone use Seagate ST32000542AS disks with ZFS?
>
> I wonder whether the performance is as ugly as it is with WD Green WD20EARS disks.
>
> I'm using these drives f
On 29 November 2010 20:39, GMAIL wrote:
> Does anyone use Seagate ST32000542AS disks with ZFS?
>
> I wonder whether the performance is as ugly as it is with WD Green WD20EARS
> disks.
>
I'm using these drives for one of the vdevs in my pool. The pool was created
with ashift=12 (zpool binary from
http
Hi,
Does anyone use Seagate ST32000542AS disks with ZFS?
I wonder whether the performance is as ugly as it is with WD Green WD20EARS disks.
Thanks,
--
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com