he next) dataset
copies.
HTH.
On Tue, Aug 3, 2010 at 22:48, Eduardo Bragatto wrote:
> On Aug 3, 2010, at 10:08 PM, Khyron wrote:
>
>> Long answer: Not without rewriting the previously written data. Data
>> is being striped over all of the top level VDEVs, or at least it should
>
Short answer: No.
Long answer: Not without rewriting the previously written data. Data
is being striped over all of the top level VDEVs, or at least it should
be. But there is no way, at least not built into ZFS, to re-allocate the
storage to perform I/O balancing. You would basically have to d
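A minimal sketch of that rewrite approach, assuming a pool named "tank" and a
dataset "tank/data" (both names are examples only). Copying the data into a new
dataset makes the writes stripe across all current top-level VDEVs; it needs
enough free space to hold a second copy while both exist:

   zfs snapshot tank/data@rebalance
   zfs send tank/data@rebalance | zfs receive tank/data.new
   # verify the copy, then swap names:
   zfs destroy -r tank/data
   zfs rename tank/data.new tank/data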
My inclination, based on what I've read and heard from others, is to say
"no".
But again, the best way to find out is to write the code. :\
On Wed, Jun 9, 2010 at 11:45, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Be
It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but minimally, describing your setup, use case,
the hardware and other such facts would provide people a place to start.
There are much brighter stars on this list than myself, but if you are
sha
To answer the question you asked here...the answer is "no". There have been
MANY discussions of this in the past. Here's the long thread I started
back
in May about backup strategies for ZFS pools and file systems:
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038678.html
But
Ian: Of course they expected answers to those questions here. It seems many
people do not read the forums or mailing list archives to see their
questions
previously asked (and answered) many many times over, or the flames that
erupt from them. It's scary how much people don't check historical re
A few things come to mind...
1. A lot better than...what? Setting the recordsize to 4K got you some
deduplication but maybe the pertinent question is what were you
expecting?
2. Dedup is fairly new. I haven't seen any reports of experiments like
yours so...CONGRATULATIONS!! You're probably the
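For anyone repeating the experiment, a rough sketch, assuming a pool "tank" and
a dataset "tank/backups" (names are examples only). Note the smaller recordsize
only applies to blocks written after the change:

   zfs set recordsize=4k tank/backups
   zfs set dedup=on tank/backups
   # copy the data in, then see what it bought you:
   zpool get dedupratio tank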
I have no idea who you're talking to, but presumably you mean this link:
http://lists.freebsd.org/pipermail/freebsd-questions/2010-April/215269.html
Worked fine for me. I didn't post it. I'm not the OP on this thread or on
the FreeBSD thread. So what "broken link" are you talking about and to
This is how rumors get started.
From reading that thread, the OP didn't seem to know much of anything
about...
anything. Even less so about Solaris and OpenSolaris. I'd advise not to
get your
news from mailing lists, especially not mailing lists for people who don't
use the
product you're inter
I would advise getting familiar with the basic terminology and vocabulary of
ZFS
first. Start with the Solaris 10 ZFS Administration Guide. It's a bit more
complete
for a newbie.
http://docs.sun.com/app/docs/doc/819-5461?l=en
You can then move on to the Best Practices Guide, Configuration Guide
Now is probably a good time to mention that dedupe likes LOTS of RAM, based
on
experiences described here. 8 GiB minimum is a good start. And to avoid
those
obscenely long removal times due to updating the DDT, an SSD based L2ARC
device
seems to be highly recommended as well.
That is, of course,
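A minimal sketch of adding an SSD as an L2ARC cache device, assuming a pool
named "tank" and a device c4t1d0 (both are examples only):

   zpool add tank cache c4t1d0
   zpool status tank      # the SSD now appears under a "cache" heading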
Response below...
2010/4/5 Andreas Höschler
> Hi Edward,
>
> thanks a lot for your detailed response!
>
>
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Andreas Höschler
>>>
>>> • I would like to remove the two SSDs as log devices from
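For reference, log device removal did eventually land (I believe around zpool
version 19), so on a recent enough build something like the following should
work; "tank" and the device names are examples only:

   zpool upgrade -v              # check whether your version lists log removal
   zpool remove tank c3t0d0 c3t1d0
   zpool status tank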
Yes, I think Eric is correct.
Funny, this is an adjunct to the thread I started entitled "Thoughts on ZFS
Pool
Backup Strategies". I was going to include this point in that thread but
thought
better of it.
It would be nice if there were an easy way to extract a pool configuration,
with
all of th
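In the meantime, a rough approximation can be scraped together by hand,
assuming a pool named "tank" (example only):

   zpool history -l tank      # every zpool/zfs command that shaped the pool
   zpool get all tank         # pool-level properties
   zfs get -rHp all tank      # dataset properties in parseable form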
Heh.
The original definition of "I" was inexpensive. It was never meant to be
"independent".
I guess vendors changed that. The idea all along was to take inexpensive
hardware
and use software to turn it into a reliable system.
http://portal.acm.org/citation.cfm?id=50214
http://www.cs.cmu.edu/~ga
Responses inline below...
On Sat, Mar 20, 2010 at 00:57, Edward Ned Harvey wrote:
> > 1. NDMP for putting "zfs send" streams on tape over the network. So
>
> Tell me if I missed something here. I don't think I did. I think this
> sounds like crazy talk.
>
> I used NDMP up till November, when w
Erik,
I don't think there was any confusion about the block nature of "zfs send"
vs. the file nature of star. I think what this discussion is coming down to
is
the best ways to utilize "zfs send" as a backup, since (as Darren Moffat has
noted) it supports all the ZFS objects and metadata.
I see
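A rough sketch of using "zfs send" that way, assuming a pool "tank", a snapshot
name of my own choosing, and a receiving host "backuphost" with a pool
"backuppool" (all examples only). Receiving into a live pool is generally
preferable to parking the raw stream in a file, since the stream format has no
redundancy of its own:

   zfs snapshot -r tank@backup-20100320
   # -R builds a replication stream: descendant datasets, snapshots, properties
   zfs send -R tank@backup-20100320 | ssh backuphost zfs receive -d backuppool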
Responses inline...
On Tue, Mar 16, 2010 at 07:35, Robin Axelsson
wrote:
> I've been informed that newer versions of ZFS supports the usage of hot
> spares which is denoted for drives that are not in use but available for
> resynchronization/resilvering should one of the original drives fail in t
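A minimal sketch of the hot spare mechanics, assuming a pool "tank" and a spare
device c5t0d0 (examples only):

   zpool add tank spare c5t0d0     # add the drive to the pool's spare list
   zpool status tank               # it is listed under "spares"
   zpool remove tank c5t0d0        # take an unused spare back out of the pool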
y could be useful for learning
or other purposes, may not be directly usable for the systems people are
running with OpenSolaris.
At least, that's what I think Bob meant.
On Fri, Mar 19, 2010 at 17:08, Alex Blewitt wrote:
> On 19 Mar 2010, at 15:30, Bob Friesenhahn wrote:
>
> >
I'm also a Mac user. I use Mozy instead of DropBox, but it sounds like
DropBox should get a place at the table. I'm about to download it in a few
minutes.
I'm right now re-cloning my internal HD due to some HFS+ weirdness. I
have to completely agree that ZFS would be a great addition to MacOS X
Ahhh, this has been...interesting...some real "personalities" involved in
this
discussion. :p The following is long-ish but I thought a re-cap was in
order.
I'm sure we'll never finish this discussion, but I want to at least have a
new
plateau or base from which to consider these questions.
I've
Ian,
When you say you spool to tape for off-site archival, what software do you
use?
On Wed, Mar 17, 2010 at 18:53, Ian Collins wrote:
>
> I have been using a two stage backup process with my main client,
> send/receive to a backup pool and spool to tape for off site archival.
>
> I use a pa
For those following along, this is the e-mail I meant to send to the list
but
instead sent directly to Tonmaus. My mistake, and I apologize for having to
re-send.
=== Start ===
My understanding, limited though it may be, is that a scrub touches ALL data
that
has been written, including the pari
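For reference, assuming a pool named "tank" (an example only):

   zpool scrub tank
   zpool status -v tank       # scrub progress and any errors found so far
   zpool scrub -s tank        # stop the scrub if it hurts production I/O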
Ugh! I meant that to go to the list, so I'll probably re-send it for the
benefit
of everyone involved in the discussion. There were parts of that that I
wanted
others to read.
From a re-read of Richard's e-mail, maybe he meant that the number of I/Os
queued to a device can be tuned lower and no
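If memory serves, the tunable in question is zfs_vdev_max_pending; a hedged
sketch for OpenSolaris / Solaris 10 era kernels (the value 10 is only an
example):

   echo zfs_vdev_max_pending/D | mdb -k        # current value
   echo zfs_vdev_max_pending/W0t10 | mdb -kw   # lower it on the fly
   # or persistently, in /etc/system:
   set zfs:zfs_vdev_max_pending = 10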
To be sure, Ed, I'm not asking:
Why bother trying to backup with "zfs send" when there are fully supportable
and
working options available right NOW?
Rather, I am asking:
Why do we want to adapt "zfs send" to do something it was never intended
to do, and probably won't be adapted to do (well, if
Exactly!
This is what I meant, at least when it comes to backing up ZFS datasets.
There
are tools available NOW, such as Star, which will backup ZFS datasets due to
the
POSIX nature of those datasets. As well, Amanda, Bacula, NetBackup,
Networker
and probably some others I missed. Re-inventing t
Note to readers: There are multiple topics discussed herein. Please
identify which
idea(s) you are responding to, should you respond. Also make sure to take
in all of
this before responding. Something you want to discuss may already be
covered at
a later point in this e-mail, including NDMP and
The issue as presented by Tonmaus was that a scrub was negatively impacting
his RAIDZ2 CIFS performance, but he didn't see the same impact with RAIDZ.
I'm not going to say whether that is a "problem" one way or the other; it
may
be expected behavior under the circumstances. That's for ZFS develope
In following this discussion, I get the feeling that you and Richard are
somewhat
talking past each other. He asked you about the hardware you are currently
running
on, whereas you seem to be interested in a model for the impact of scrubbing
on
I/O throughput that you can apply to some not-yet-acq
Yeah, this threw me. A 3-disk RAID-Z2 doesn't make sense, because in terms of
redundancy, RAID-Z2 looks like RAID 6. That is, there are 2 levels of
parity for the data. Out of 3 disks, the equivalent of 2 disks will be used
to
store redundancy (parity) data and only 1 disk equivalent will store
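The rough arithmetic, ignoring metadata and allocation overhead:

   usable ~= (N - 2) x disk_size    # RAID-Z2 keeps two disks' worth of parity
   N = 3, 1 TB disks:  (3 - 2) x 1 TB = 1 TB usable
   N = 6, 1 TB disks:  (6 - 2) x 1 TB = 4 TB usable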
I thought pointing out some of this information might come in handy for some
of the
folks who are new to the (Open)Solaris world.
The following section discusses differences between SMI labels (aka VTOC)
and EFI
GPT labels. It may not be everything one needs to know in order to
successfully
manag
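A few commands worth knowing while reading that section; the device names below
are examples only:

   prtvtoc /dev/rdsk/c0t0d0s2   # prints the partition table (VTOC on an SMI disk)
   format -e                    # expert mode; its "label" option offers SMI or EFI
   # handing zpool a whole disk (no slice suffix) writes an EFI label:
   zpool create tank c0t1d0
   # handing it a slice (e.g. c0t1d0s0) leaves the existing label in place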
I'm imagining that OpenSolaris isn't *too* different from Solaris 10 in this
regard.
I believe Richard Elling recommended "cfgadm -v". I'd also suggest
"iostat -E", with and without "-n" for good measure.
So that's "iostat -E" and "iostat -En". As long as you know the physical
drive
specificat
Ugh! If you received a direct response from me instead of via the list,
apologies for
that.
Rob:
I'm just reporting the news. The RFE is out there. Just like SLOGs, I
happen to
think it a good idea, personally, but that's my personal opinion. If it
makes dedup
more usable, I don't see the harm
The DDT is stored within the pool, IIRC, but there is an RFE open to allow
you to
store it on a separate top level VDEV, like a SLOG.
The other thing I've noticed with all of the "destroyed a large dataset with
dedup
enabled and it's taking forever to import/destroy" ... wrote:
> Just thought I'd chi
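For the curious, the in-pool DDT can be inspected with zdb, assuming a pool
named "tank" (example only); the output gives a breakdown of table entries and
an estimate of their in-core footprint:

   zdb -D tank        # summary of the dedup table
   zdb -DD tank       # adds a histogram of entries by reference count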
Well, it's an attack, right? Neither Skein nor Threefish has been
compromised.
In fact, this is what you want to see: researchers attacking an algorithm, which
goes a long way toward furthering or proving the security of said
algorithm. I
think I agree with Darren overall, but this still looks pr
I think the point, Chester, which everyone seems to be dancing around
or missing, is that your planning may need to go back to the drawing board
on this one. Absorb the resources out there for how to best configure
your pools and vdevs, *then* implement. That's the most efficient way to go
about