I find it baffling that RaidZ(2,3) was designed to split a record-sized block
into N pieces (N = number of member devices) and send the resulting uselessly
tiny requests to spinning rust, when we know the massive delays entailed in
head seeks and rotational latency. The ZFS-mirror and load-balanced
configurations do
On Sun, Jan 03, 2010 at 08:26:47PM -0800, Richard Elling wrote:
> On Jan 3, 2010, at 4:05 PM, Jack Kielsmeier wrote:
> >
> >With L2arc, no such redundancy is needed. So, with a $100 SSD, if
> >you can get 8x the performance out of your dedup'd dataset, and you
> >don't have to worry about "what
On Thu, Dec 31, 2009 at 9:37 PM, Michael Herf wrote:
> I've written about my slow-to-dedupe RAIDZ.
>
> After a week of waiting, I finally bought a little $100 30 GB OCZ
> Vertex and plugged it in as a cache.
>
> After <2 hours of warmup, my zfs send/receive rate on the pool is
> >16MB/sec (re
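For anyone wanting to try the same thing, attaching an SSD as L2ARC is a
one-liner (pool and device names below are placeholders, not Michael's
actual ones):

    # zpool add tank cache c7t0d0
    # zpool iostat -v tank    # the cache device shows up with its own stats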
On Jan 3, 2010, at 4:05 PM, Jack Kielsmeier wrote:
> With L2arc, no such redundancy is needed. So, with a $100 SSD, if
> you can get 8x the performance out of your dedup'd dataset, and you
> don't have to worry about "what if the device fails", I'd call that
> an awesome investment.

AFAIK, the L
> On Sun, 3 Jan 2010, Jack Kielsmeier wrote:
> > help. It is suggested not to put zil on a device external to the
> > disks in the pool unless you mirror the zil device. This is
> > suggested to prevent data loss if the zil device dies.
>
> The reason why it is suggested that the inten
On Sun, Jan 3, 2010 at 6:58 PM, Jerome Warnier wrote:
> Hi,
>
> I'm "smbsharing" ZFS filesystems.
> I know how to restrict access to it to some hosts (and users), but did
> not find any way to forbid the smb protocol being advertised on a
> specific interface (or the other way around, specify the
On Mon, Jan 4, 2010 at 5:52 AM, Mark Bennett wrote:
> Hi,
>
> Is it possible to import a zpool and stop it mounting the zfs file systems,
> or override the mount paths?
Try "zpool import -R ..."
--
Fajar
On Sun, 3 Jan 2010, Jack Kielsmeier wrote:
> help. It is suggested not to put zil on a device external to the
> disks in the pool unless you mirror the zil device. This is
> suggested to prevent data loss if the zil device dies.

The reason why it is suggested that the intent log reside in the same
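For anyone who does want a separate log device despite that advice, the
mirroring it refers to is a single command (pool and device names are
placeholders):

    # zpool add tank log mirror c2t0d0 c2t1d0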
Eric D. Midama did a very good job answering this, and I don't have
much to add. Thanks Eric!
On 3 jan 2010, at 07.24, Erik Trimble wrote:
> I think you're confusing erasing with writing.
I am now quite certain that it actually was you who were
confusing those. I hope this discussion has cleare
> Just l2arc. Guess I can always repartition later.
>
> mike
>
>
> On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier wrote:
> > Are you using the SSD for l2arc or zil or both?
Hi,
I'm "smbsharing" ZFS filesystems.
I know how to restrict access to it to some hosts (and users), but did
not find any way to forbid the smb protocol being advertised on a
specific interface (or the other way around, specify the ones I agree with).
Is there any other way than setting up a firew
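In case it helps, a minimal ipfilter sketch of the firewall approach
(interface name is assumed; CIFS listens on TCP 445 plus the legacy NetBIOS
ports):

    # in /etc/ipf/ipf.conf
    block in quick on e1000g1 proto tcp from any to any port = 445
    block in quick on e1000g1 proto tcp from any to any port = 139
    block in quick on e1000g1 proto udp from any to any port 136 >< 140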
I have used these cards in several UIO-capable Supermicro systems running
OpenSolaris, with the Supermicro storage chassis and up to 30 SATA 1 TB
disks. With IT-mode (non-RAID) firmware they are excellent. They usually
ship with the "hardware assisted" RAID firmware by default.
The card is designed for the
Hi,
Is it possible to import a zpool and stop it mounting the zfs file systems, or
override the mount paths?
Mark.
I had to use the labelfix hack (and I had to recompile it at that) on half of
an old zpool. I made this change:

    /* zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size); */
    zio_checksum_table[ZIO_CHECKSUM_LABEL].ci_func[0](buf, size, &zc);

and I'm assuming [0] is the correct endianness
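(For what it's worth, my reading of the zio_checksum_table entries is that
ci_func[] is indexed by byte order, [0] being the native-endian variant and
[1] the byte-swapped one, so [0] should be right for a label written on the
same platform.)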
Thanks for the response, Marion. I'm glad that I'm not the only one. :)
Just l2arc. Guess I can always repartition later.
mike
On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier wrote:
> Are you using the SSD for l2arc or zil or both?
Are you using the SSD for l2arc or zil or both?
Since there's nothing I love better on a Sunday than a religious OT
discussion:
On January 2, 2010 8:51:25 PM -0500 Tim Cook wrote:
> On Saturday, January 2, 2010, Bob Friesenhahn wrote:
> > Hardly any Apple users are complaining about the advanced filesystem
> > they have already.
>
> That's a joke right
Well, it appears that the PCI-X version of the card might or might not work
with drives bigger than 1 TB.
A WD15EADS attached to the ICH9R on the motherboard works fine.
Jeb
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) writes:
> The netapps patents contain claims on ideas that I invented for my Diploma
> thesis work between 1989 and 1991, so the netapps patents only describe prior
> art. The new ideas introduced with "wofs" include the ideas on how to use CO
Tim Cook wrote:
> On Saturday, January 2, 2010, Bob Friesenhahn wrote:
> > On Sat, 2 Jan 2010, David Magda wrote:
> > > Apple is (sadly?) probably developing their own new file system as well.
> >
> > I assume that you are talking about developing a filesystem design more
> > suitab
Last night I was trying to set up NFS to share a pool. It was working fine
until I started to have trouble writing. I did a zpool status to see if
everything was OK, and I got this:

      pool: spool
     state: UNAVAIL
    status: One or more devices are faulted in response to IO failures.
    action: Make sure
Hello list,
someone (actually Neil Perrin, CC'd) mentioned in this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html
that it should be possible to import a pool with failed log devices
(with or without data loss?).

> Has the following error no conseque
David Magda wrote:
> Apple is (sadly?) probably developing their own new file system as well.
Well, I still don't understand Apple. Apple likes to get a grant of
indemnification for something that cannot happen in a country with a proper
legal system.
The netapps patents contain claims on
On Sat, Jan 2 at 22:24, Erik Trimble wrote:
> In MLC-style SSDs, you typically have a block size of 2k or 4k.
> However, you have a Page size of several multiples of that, 128k
> being common, but by no means ubiquitous.

I believe your terminology is crossed a bit. What you call a block is
usually
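To make the distinction concrete (illustrative numbers, not from any
particular datasheet): if a device programs in 4 KB pages and erases in
512 KB blocks of 128 pages, rewriting a single 4 KB page in place would mean
reading the other 127 pages, erasing the whole block, and reprogramming all
128 pages, a worst-case write amplification of 128x. That is why controllers
remap pages rather than rewrite them.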