On Mon, Feb 28, 2011 at 10:38 PM, Moazam Raja wrote:
> We've noticed that on systems with just a handful of filesystems, ZFS
> send (recursive) is quite quick, but on our 1800+ fs box, it's
> horribly slow.
When doing an incremental send, the system has to identify what blocks
have changed, which
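For reference, the kind of operation being discussed looks roughly like this (pool, snapshot and host names below are made up):

  zfs snapshot -r tank@today
  zfs send -R -i tank@yesterday tank@today | ssh backuphost zfs recv -Fd backup

A recursive send has to visit every descendant dataset, so with 1800+ filesystems any per-dataset overhead adds up quickly.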
On Mon, Feb 28, 2011 at 9:39 PM, Dave Pooser wrote:
> Is the same true of controllers? That is, will c12 remain c12 or
> /pci@0,0/pci8086,340c@5 remain /pci@0,0/pci8086,340c@5 even if other
> controllers are active?
You can rebuild the device tree if it bothers you. There are some
(outdated) inst
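If you do want to redo the mapping, a minimal sketch on Solaris-derived systems (check your release's docs before relying on it) is:

  ls -l /dev/dsk/c12t0d0s0     # shows which /devices physical path c12 currently points at
  devfsadm -Cv                 # prune stale /dev links after hardware changes
  touch /reconfigure; init 6   # force a reconfiguration boot

Either way, the cXtYdZ names are just symlinks into /devices, so ls -l shows the mapping whenever you need it.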
Hi
I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting
installed as we speak. What would you suggest for a good SLOG device? It seems
some new PCI-E-based ones are hitting the market, but will those require
special drivers? Cost is obviously also an issue here.
Kind regards,
On Tue, Mar 01, 2011 at 08:03:42AM -0800, Roy Sigurd Karlsbakk wrote:
> Hi
>
> I'm running OpenSolaris 148 on a few boxes, and newer boxes are
> getting installed as we speak. What would you suggest for a good SLOG
> device? It seems some new PCI-E-based ones are hitting the market,
> but will those require special drivers? Cost is obviously also an issue
> here.
(Dave P...I sent this yesterday, but it bounced on your email address)
A small comment from me would be to create some test pools and replace
devices in the pools to see if device names remain the same or change
during these operations.
If the device names change and the pools are unhappy, retest.
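A rough sketch of that kind of test, using hypothetical spare disks, might be:

  zpool create testpool mirror c5t0d0 c5t1d0
  zpool replace testpool c5t0d0 c5t2d0
  zpool export testpool
  zpool import testpool
  zpool status testpool   # shows whatever names the devices carry after the replace/import
  zpool destroy testpool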
I'd back that. X25-Es are great, but also look at the STEC ZeusIOPS as well
as the new Intels.
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Windows - Linux - Solaris - ZFS - XenServer - FreeBSD - C/C++ - PHP/Perl -
LAMP - Nexenta - Development - Consulting & Contracting
http://www
> a) do you need an SLOG at all? Some workloads (asynchronous ones) will
> never benefit from an SLOG.
We're planning to use this box for CIFS/NFS, so we'll need an SLOG to speed
things up.
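For reference, once a device is picked, attaching it as a dedicated log vdev and checking it is a one-liner (pool and device names below are made up):

  zpool add tank log c4t2d0    # or: zpool add tank log mirror c4t2d0 c4t3d0
  zpool status tank            # the device should appear under a separate "logs" section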
> b) form factor. at least one manufacturer uses a PCIe card which is
> not compliant with the PCIe form-
On Tue, Mar 01, 2011 at 09:56:35AM -0800, Roy Sigurd Karlsbakk wrote:
> > a) do you need an SLOG at all? Some workloads (asynchronous ones) will
> > never benefit from an SLOG.
>
> We're planning to use this box for CIFS/NFS, so we'll need an SLOG to
> speed things up.
>
> > b) form factor. at l
Personally I am trying out the OCZ RevoDrives; they seem like a decent price
for performance for a SLOG.
From: "Roy Sigurd Karlsbakk"
Sent: Tuesday, March 01, 2011 9:56 AM
To: "Garrett D'Amore"
Subject: Re: [OpenIndiana-discuss] [zfs-discuss] Good SLOG devices
On Tue, March 1, 2011 11:11, Khushil Dep wrote:
> I'd back that. X25-Es are great, but also look at the STEC ZeusIOPS as
> well as the new Intels.
STEC's products are not available to retail customers, only OEMs. (Unless
something has changed recently, in which case a link would be useful.)
Next gen spec sheets suggest the X25-E will get a "Power Safe Write
Cache," something it does not have today.
See:
http://www.anandtech.com/Show/Index/3965?cPage=5&all=False&sort=0&page=1&slug=intels-3rd-generation-x25m-ssd-specs-revealed
(Article is about X25-M, scroll down for X25-E info.)
On
> Next gen spec sheets suggest the X25-E will get a "Power Safe Write
> Cache," something it does not have today.
>
> See:
> http://www.anandtech.com/Show/Index/3965?cPage=5&all=False&sort=0&page=1&slug=intels-3rd-generation-x25m-ssd-specs-revealed
>
> (Article is about X25-M, scroll down for X25-E info.)
David,
The STEC/DataON ZeusRAM (Z4RZF3D-8UC-DNS) SSD is now available to users in the channel.
It is an 8GB DDR3 RAM-based SAS SSD protected by a supercapacitor and 16GB of NVRAM.
It is designed for ZFS ZIL use, with low latency:
http://dataonstorage.com/zeusram
Rocky
The PCIe based ones are good (typically they are quite fast), but check
the following first:
a) do you need an SLOG at all? Some workloads (asynchronous ones) will
never benefit from an SLOG.
b) form factor. at least one manufacturer uses a PCIe card which is
not compliant with
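One rough way to settle point a) empirically, on test data only since it weakens synchronous write guarantees, is to compare the workload with the ZIL honored and bypassed (dataset name below is hypothetical; this assumes a build with the per-dataset sync property, which b148 should have):

  zfs get sync tank/share            # "standard" is the default
  zfs set sync=disabled tank/share   # UNSAFE outside testing: sync writes are acked before reaching stable storage
  # ... rerun the workload here ...
  zfs set sync=standard tank/share

If disabling sync does not speed the workload up, a dedicated SLOG will not either, since the SLOG only accelerates synchronous writes.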
Surprised that one of the most approachable outputs for any customer to use,
one which would enable simple identification/resolution of many
of these discussions, didn't come up, namely:
cfgadm -al
for a reasonable physical mapping in which SAS/SATA drives are
relatively easy to map out by ID a
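For anyone who has not run it, the output looks roughly like this (purely illustrative; Ap_Ids, types and device names will differ per system):

  Ap_Id                     Type         Receptacle   Occupant     Condition
  sata0/0::dsk/c12t0d0      disk         connected    configured   ok
  sata0/1                   sata-port    empty        unconfigured ok

The Ap_Id column ties the cXtYdZ name back to a specific controller port, which is what makes drives easier to locate physically.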