Now the test for the Vertex 2 Pro. This was fun.
For more explanation please see the thread "Crucial RealSSD C300 and cache
flush?"
This time I made sure the device is attached via 3GBit SATA. This is also
only a short test. I'll retest after some weeks of usage.
cache enabled, 32 buffers, 64k blo
I've been testing the ZFS root recovery using 10u6 and have come across a very
odd problem.
When following this procedure, the disk I am setting up my rpool on keeps
reverting to an EFI label.
http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view
Here are the exact steps I am doing:
I've discovered the source of the problem.
zpool create -f -o failmode=continue -R /a -m legacy -o
cachefile=/etc/zfs/zpool.cache rpool c1t0d0
It seems a root pool must only be created on a slice. Therefore
zpool create -f -o failmode=continue -R /a -m legacy -o
cachefile=/etc/zfs/zpool.cache
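The command above is cut off in the archive; a minimal sketch of the
slice-based version, assuming slice 0 of the same disk (the s0 suffix is an
assumption, not taken from the original mail):

zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0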
It seems we are hitting a boundary with zfs send/receive over a network
link (10Gb/s). We can see peak values of up to 150MB/s, but on average
about 40-50MB/s are replicated. This is far below the bandwidth that
a 10Gb link can offer.
Is it possible that ZFS is giving replication a too
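Not an answer from the thread, but one common way to rule out the ssh/pipe
overhead is to put a buffer such as mbuffer between send and receive; host,
port, dataset names and buffer sizes below are made up for illustration:

# receiver: start first, drain the buffer into zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F backup/data

# sender: buffer the stream and push it over plain TCP
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O repl-host:9090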
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mika Borner
>
> It seems we are hitting a boundary with zfs send/receive over a network
> link (10Gb/s). We can see peak values of up to 150MB/s, but on average
> about 40-50MB/s are replicated
On 25.06.2010 14:32, Mika Borner wrote:
>
> It seems we are hitting a boundary with zfs send/receive over a network
> link (10Gb/s). We can see peak values of up to 150MB/s, but on average
> about 40-50MB/s are replicated. This is far away from the bandwidth that
> a 10Gb link can offer.
>
> Is i
>
>
> Conclusion: This device will make an excellent slog device. I'll order
> them today ;)
>
>
I have one and I love it... I sliced it though: 9 GB for the ZIL and the
rest for L2ARC (my server is on a smallish network with about 10 clients).
It made a huge difference in NFS performance and other
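For reference, a rough sketch of how the two slices would be attached,
assuming the SSD shows up as c2t0d0 and was partitioned with format
beforehand (device and pool names are placeholders):

# assumed layout: s0 = ~9 GB for the ZIL, s1 = the rest for L2ARC
zpool add tank log c2t0d0s0
zpool add tank cache c2t0d0s1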
On 25 Jun 2010, at 15:23, Thomas Burgess <wonsl...@gmail.com> wrote:
Conclusion: This device will make an excellent slog device. I'll order
them today ;)
I have one and I love it... I sliced it though: 9 GB for the ZIL and the rest
for L2ARC (my server is on a smallish network with abo
On Jun 25, 2010, at 4:44 AM, Sean . wrote:
> I've discovered the source of the problem.
>
> zpool create -f -o failmode=continue -R /a -m legacy -o
> cachefile=/etc/zfs/zpool.cache rpool c1t0d0
>
> It seems a root pool must only be created on a slice. Therefore
>
> zpool create -f -o failmode
Sean,
If you review the doc section you included previously, you will see
that all the root pool examples include slice 0.
The slice is a long-standing boot requirement and is described in
the boot chapter, in this section:
http://docs.sun.com/app/docs/doc/819-5461/ggrko?l=en&a=view
ZFS Storag
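A rough outline of the workaround, assuming an x86 system and that the disk
can be relabeled (the device name is the one used earlier in this thread):

# relabel the disk with an SMI (VTOC) label and put the space in slice 0
format -e c1t0d0      # label -> SMI, then adjust the partition table

# create the root pool on the slice, not on the whole disk
zpool create rpool c1t0d0s0

# install the boot blocks on the slice (x86/GRUB)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0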
>From: Arne Jansen
>Sent: Friday, June 25, 2010 3:21 AM
>
>Now the test for the Vertex 2 Pro. This was fun.
>For more explanation please see the thread "Crucial RealSSD C300 and cache
>flush?"
>This time I made sure the device is attached via 3GBit SATA. This is also
>only a short test. I'll rete
Good morning all.
This question has probably popped up before, but maybe not in this exact way…
I am planning on building a SAN for my home media centre, and have some of
the RAID cards I need for the build. I will be ordering the case soon, and
then the drives. The cards I have are two 8-port PX
Tiernan,
Hardware redundancy is important, but I would be thinking about how you
are going to back up data in the 6-24 TB range, if you actually need
that much space.
Balance your space requirements with good redundancy and how much data
you can safely back up because stuff happens: hardware fai
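If a second pool (or machine) of sufficient size is available, ZFS itself can
do the backup; pool and snapshot names below are placeholders:

# take a recursive snapshot and replicate everything under tank
zfs snapshot -r tank@backup-2010-06-25
zfs send -R tank@backup-2010-06-25 | zfs receive -Fd backup

# later runs only need to send the changes since the last snapshot
zfs send -R -i tank@backup-2010-06-25 tank@backup-2010-07-25 | zfs receive -Fd backup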
I've noticed (at least on Solaris 10) that the resilver rate appears to
slow down considerably as it nears completion.
On an eight-disk 500G raidz2 vdev, after 28 hours zpool status reported:
  spare      DEGRADED     0     0    63
    c1t6d0   DEGRADED     0     0    11  too many errors
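A crude way to see whether the rate really drops off near the end, assuming
the pool is called tank: sample the status line once a minute and compare:

while true; do
    date
    zpool status tank | egrep -i 'resilver|scrub'
    sleep 60
done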
How much of a difference is there in application support between Ubuntu
and OpenSolaris?
I was not considering Ubuntu until OpenSolaris would not load onto my machine...
Any info would be great. I have not been able to find any sort of comparison of
ZFS on Ubuntu and OS.
Thanks.
(My cur
On Fri, Jun 25, 2010 at 6:31 PM, Ben Miles wrote:
> How much of a difference is there in application support between
> Ubuntu and OpenSolaris?
> I was not considering Ubuntu until OpenSolaris would not load onto my
> machine...
>
> Any info would be great. I have not been able to find any
On 6/25/2010 6:49 PM, Freddie Cash wrote:
On Fri, Jun 25, 2010 at 6:31 PM, Ben Miles wrote:
How much of a difference is there in application support between Ubuntu
and OpenSolaris?
I was not considering Ubuntu until OpenSolaris would not load onto my machine...
Any info would be gr
On Fri, Jun 25, 2010 at 9:08 PM, Erik Trimble wrote:
> (2) Ubuntu is a desktop distribution. Don't be fooled by their "server"
> version. It's not - it has too many idiosyncrasies and bad design choices to
> be a stable server OS. Use something like Debian, SLES, or RHEL/CentOS.
Why would you sa
I recently installed a Seagate LP drive in an Atom ICH7 based system. The
drive is showing up in dmesg but is not available in format. Is this a known
problem? Is there a workaround for it?
--
Brandon High : bh...@freaks.com
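Not specific to this drive, but the generic things to try when a disk shows
up in dmesg and not in format (standard Solaris commands; the SATA port name
is only an example):

# rebuild /dev links and remove stale ones
devfsadm -Cv

# list attachment points; an unconfigured SATA port shows up here
cfgadm -al

# configure the port if it is listed as unconfigured
cfgadm -c configure sata1/2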
Hello,
Is it possible to detach a clone from its snapshot (and copy all its data
physically)? I ran into an obscure situation where 'zfs promote' does not help.
Snapshot S has clones C1 and C2, both of which are boot environments. S has a
data error that cannot be corrected. The error affects
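One way to get a physically independent copy, at the cost of the extra space:
send a snapshot of the clone as a full (non-incremental) stream and receive it
as a new dataset. Dataset names below are placeholders:

# the received copy has no dependency on the damaged origin snapshot S
zfs snapshot rpool/C1@standalone
zfs send rpool/C1@standalone | zfs receive rpool/C1_copy

# once the copy is verified (and made bootable, for a boot environment),
# the clone and the bad snapshot can be destroyed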
On Fri, Jun 25, 2010 at 9:20 PM, Brandon High wrote:
> I recently installed a Seagate LP drive in an Atom ICH7 based system. The
> drive is showing up in dmesg but is not available in format. Is this a known
> problem? Is there a workaround for it?
I just found an older thread where this was discussed
Geoff Nordli wrote:
Is this the one
(http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html)
with the built-in supercap?
Yes.
Geoff