Hey guys, I have a zpool built from iSCSI targets from several machines at
present. I'm considering buying a 16-port SATA controller and putting all
the drives into one machine. If I remove all the drives from the machines
offering the iSCSI targets and place them into the one machine, connected via
Hi guys,
I'm currently running 2 zpools, each in a raidz1 configuration, totalling
around 16TB of usable data. I'm running it all on an OpenSolaris-based box with
2GB of memory and an old Athlon 64 3700 CPU. I understand this is very poor and
underpowered for deduplication, so I'm looking at building a new
Is there perhaps a workaround for this? A way to condense the free blocks
information?
If not, any idea when an improvement might be implemented?
We are currently suffering from incremental snapshots that refer to zero new
blocks, but whose incremental streams still required over a gigabyte even
Do either of you know the current story about this card? I can't get it to work
at all in Solaris 10, but I'm very new to the OS.
Thanks!
To clarify: I mean the Promise SATA300 TX4.
Excellent.
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL
PROTECTED],0 (sd13):
Oct 9 13:36:01 zeta1 Error for Command: read    Error Level: Retryable
Scrubbing now.
Big thanks gg
I am using an x4500 with a single "4*(raidz2 9+2) + 2 spare" pool. I have some bad
blocks on one of the disks
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL
PROTECTED],0 (sd13):
Oct 9 13:36:01 zeta1 Error f
Thanks. Looks like I have this bug. Is it a hardware problem combined with a
software problem?
Oct 9 09:35:43 zeta1 sata: [ID 801593 kern.notice] NOTICE: /[EMAIL
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]:
Oct 9 09:35:43 zeta1 port 3: device reset
Oct 9 09:35:43 zeta1 s
I've got the box on eval, and am just putting it through its paces. Ideally I would
be replicating to another x4500, but I don't have another one and didn't want
to use 22 disks for another pool.
I got it too. It's a brand new x4500 (my 2nd eval box, after the other one used to
freeze up). I got this while running a Java program that tries to read a 128G
file while writing a 100G file, in 2 threads with 128K blocks.
Oct 29 00:56:28 zeta1 marvell88sx: [ID 670675 kern.info] NOTICE: marvell88
On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wrote:
> I got a norco 4020 (the 4220 is good too)
>
> Both of those cost around 300-350 dollars. That is a 4U case with 20 hot
> swap bays.
Typically rackmounts are not designed for quiet. He said quietness is
#2 in his priorities...
Or does the N
> It's very nice.
>
>
> On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle wrote:
>>
>> On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess wrote:
>>
>> > I got a norco 4020 (the 4220 is good too)
>> >
>> > Both of those cost around 300-350 dol
On Sun, Mar 7, 2010 at 6:09 PM, Slack-Moehrle
wrote:
> OpenSolaris or FreeBSD with ZFS?
ZFS for sure. It's nice having something bitrot-resistant;
it was designed with data integrity in mind.
Sorry if this is too basic -
So I have a single zpool in addition to the rpool, called xpool.
NAME    SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
rpool   136G   109G   27.5G   79%   ONLINE   -
xpool   408G   171G   237G    42%   ONLINE   -
I have 408 GB in the pool, am using 171 GB, leaving me 237 GB.
The
That solved it.
Thank you Cindy.
Zpool list NOT reporting raidz overhead is what threw me...
Thanks again.
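For anyone else tripped up by the same thing: "zpool list" reports raw pool capacity (for raidz, parity blocks are included), while "zfs list" reports the space actually usable by datasets. A made-up raidz1 of four 1TB disks shows the gap (pool name and numbers are illustrative only, not from my pool):
  zpool list tank    # SIZE ~3.6T - raw capacity of all four disks
  zfs list tank      # AVAIL ~2.7T - roughly one disk's worth goes to parity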
By the way,
I would like to chip in about how informative this thread has been, at least
for me, despite (and actually because of) the strong opinions on some of the
posts about the issues involved.
From what I gather, there is still an interesting failure possibility with
ZFS, although prob
Also, pardon my typos, and my lack of re-titling my subject to note that it is
a fork from the original topic. Corrections in text that I noticed after
finally sorting out getting on the mailing list are below...
On Apr 19, 2010, at 3:26 AM, Michael DeMan wrote:
> By the way,
>
>
In all honesty, I haven't done much at sysadmin level with Solaris since it was
SunOS 5.2. I found ZFS after becoming concerned with reliability of
traditional RAID5 and RAID6 systems once drives exceeded 500GB.
I have had a few months of running ZFS on FreeBSD lately, on a test/augmentation basis,
wit
whereas
answers regarding questions like the one you're floating come from the
marketing/management side of the house.
The best chance for you to find out about this is to talk to your Oracle
sales rep.
Michael
--
michael.schus...@oracle.com
Recursion, n.: see 'Recursion'
independent clones and sharing / moving between
filesystems?
Michael Bosch
,
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
On 23 Apr 2010, at 10:22 , BM wrote:
> On Tue, Apr 20, 2010 at 2:18 PM, Ken Gunderson wrote:
>> Greetings All:
>>
>> G
- consider hard links. (and sorry for not answering sooner, this obvious
one didn't occur to me earlier).
Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
Quick sanity check here. I created a zvol and exported it via iSCSI to
a Windows machine so Windows could use it as a block device. Windows
formats it as NTFS, thinks it's a local disk, yadda yadda.
Is ZFS doing its magic checksumming and whatnot on this share, even
though it is seeing junk data
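For context, the setup is roughly this (volume name and size are examples, and this uses the legacy shareiscsi property from that era; a COMSTAR setup would use sbdadm/stmfadm instead):
  zfs create -V 100G tank/winvol       # a 100GB zvol backed by the pool
  zfs set shareiscsi=on tank/winvol    # export it as an iSCSI target via the old iscsitgt
The Windows initiator then logs in and formats the LUN as NTFS; ZFS itself only ever sees opaque blocks.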
might be of use.
I suspect it has something to do with the DDT table.
Best Regards
Michael
zdb output:
rpool:
version: 22
name: 'rpool'
state: 0
txg: 10643295
pool_guid: 16751367988873007995
hostid: 13336047
hostname: ''
vdev_children: 1
vdev_
m to find an
answer.
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
Ok, thanks.
So, if I understand correctly, it will just remove the device from the VDEV and
continue to use the good ones in the stripe.
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
On 5
90 reads and not a single comment? Not the slightest hint of what's going on?
This is what my zpool import command shows:
Attached you'll find the output of zdb -l of each device.
pool: tank
id: 10904371515657913150
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
tank ONLINE
raidz1-0 ONLIN
Thanks for your reply! I ran memtest86 and it did not report any errors. The
disk controller I've not replaced yet. The server is up in multi-user mode
with the broken pool in an un-imported state. Format now works and properly
lists all my devices without panicking. zpool import panics the b
I got a suggestion to check what fmdump -eV shows, to look for PCI errors in case the
controller is broken.
Attached you'll find the last panic's fmdump -eV. It indicates that ZFS can't
open the drives. That might suggest a broken controller, but my slog is on the
motherboard's internal controll
Hi Ed,
Thanks for your answers. They seem to make sense, sort of…
On 6 May 2010, at 12:21 , Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Michael Sullivan
>>
>> I have a question I canno
On 6 May 2010, at 13:18 , Edward Ned Harvey wrote:
>> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
>>
>> While it explains how to implement these, there is no information
>> regarding failure of a device in a striped L2ARC set of SSD's. I have
>
anything to come close in its approach to disk data
management. Let's just hope it keeps moving forward, it is truly a unique way
to view disk storage.
Anyway, sorry for the ramble, but to everyone, thanks again for the answers.
Mike
---
Michael Sullivan
michael.p.sulli...@m
read a block from more devices simultaneously, it
will cut the latency of the overall read.
On 7 May 2010, at 02:57 , Marc Nicholas wrote:
> Hi Michael,
>
> What makes you think striping the SSDs would be faster than round-robin?
>
> -marc
>
> On Thu, May 6, 2010 at 1:09 PM,
rks really well.
>
> --
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Mi
standard monitoring tools? If not, what other tools exist that can do the
same?
"zpool iostat" for one.
Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
I agree on the motherboard and peripheral chipset issue.
This, and the last generation AMD quad/six core motherboards all seem to use
the AMD SP56x0/SP5100 chipset, which I can't find much information about
support on for either OpenSolaris or FreeBSD.
Another issue is the LSI SAS2008 chipset f
er a proper replace of the failed
partitions?
Many thanks,
Michael
On 19.05.10 17:53, John Andrunas wrote:
Not to my knowledge, how would I go about getting one? (CC'ing discuss)
man savecore and dumpadm.
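For example (the savecore directory shown is the usual default; see dumpadm(1M) and savecore(1M)):
  dumpadm                                 # show the dump device and whether savecore runs on reboot
  dumpadm -y -s /var/crash/`hostname`     # enable savecore and set the crash dump directory
After the next panic the dump is written out as unix.N / vmcore.N in that directory.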
Michael
On Wed, May 19, 2010 at 8:46 AM, Mark J Musante wrote:
Do you have a coredump? Or a stack trace of the panic?
On Wed, 19 May 2010,
For the record, in case anyone else experiences this behaviour: I tried
various things which failed, and finally as a last ditch effort, upgraded my
freebsd, giving me zpool v14 rather than v13 - and now it's resilvering as it
should.
Michael
On Monday 17 May 2010 09:26:23 Michael Do
On Fri, Jun 11, 2010 at 2:50 AM, Alex Blewitt wrote:
> You are sadly mistaken.
>
> From GNU.org on license compatibilities:
>
> http://www.gnu.org/licenses/license-list.html
>
> Common Development and Distribution License (CDDL), version 1.0
> This is a free software license. It has
Just in case any stray searches finds it way here, this is what happened to my
pool: http://phrenetic.to/zfs
detected in the middle of resilvering.)
I will of course have a backup of the pool, but I may opt for additional backup
if the entire pool could be lost due to data corruption (as opposed to just a
few files potentially being lost).
Thanks,
Michael
[1] http://dlc.sun.com/osol/docs/co
storing backups of my personal files on this), so if there's a chance
that ZFS wouldn't handle errors well when on top of encryption, I'll just go
without it.
Thanks,
Michael
on 11/07/2010 15:54 Andriy Gapon said the following:
>on 11/07/2010 14:21 Roy Sigurd Karlsbakk said the following:
>>
>> I'm planning on running FreeBSD in VirtualBox (with a Linux host)
>> and giving it raw disk access to four drives, which I plan to
>> configure as a raidz2 volume.
Nikola M wrote:
>Freddie Cash wrote:
>> You definitely want to do the ZFS bits from within FreeBSD.
>Why not using ZFS in OpenSolaris? At least it has most stable/tested
>implementation and also the newest one if needed?
I'd love to use OpenSolaris for exactly those reasons, but I'm wary of using
saying that you employ enough kernel hackers to keep up even without Oracle?
(I
am admittedly ignorant about the OpenSolaris developer community; this is all
based on others' statements and opinions that I've read.)
Michael
I just don't need more than 1 TB of available
storage right now, or for the next several years.) This is on an AMD64 system,
and the OS in question will be running inside of VirtualBox, with raw access to
the drives.
Thanks,
Michael
Garrett D'Amore wrote:
>On Fri, 2010-07-16 at 10:24 -0700, Michael Johnson wrote:
>> I'm currently planning on running FreeBSD with ZFS, but I wanted to
>>double-check
>> how much memory I'd need for it to be stable. The ZFS wiki currently says
>you
>
On Mon, Jul 19, 2010 at 3:11 PM, Haudy Kazemi wrote:
> ' iostat -Eni ' indeed outputs a Device ID on some of the drives, but I still
> can't understand how it helps me to identify the model of a specific drive.
Curious:
[r...@nas01 ~]# zpool status -x
pool: tank
state: DEGRADED
status: One or more de
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote:
> Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core"
> and watch the drive activity lights. The drive in the pool which isn't
> blinking like crazy is a faulted/offlined drive.
>
> Ugly and oh-so-hackerish, but it
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes wrote:
> Start a scrub or do an obscure find, e.g. "find /tank_mountpoint -name core"
> and watch the drive activity lights. The drive in the pool which isn't
> blinking like crazy is a faulted/offlined drive.
Actually I guess my real question is
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling wrote:
> Aren't you assuming the I/O error comes from the drive?
> fmdump -eV
Okay - I guess I am. Is this just telling me "hey stupid, a checksum
failed"? In which case, why did this never resolve itself and the
specific device get marked as degra
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling wrote:
> It depends on whether the problem was fixed or not. What says
> zpool status -xv
>
> -- richard
[r...@nas01 ~]# zpool status -xv
pool: tank
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
them is to destroy snapshots.
Or have I still misunderstood the question?
yes, I think so.
Here's how I read it: the snapshots contain lots more than the core files,
and OP wants to remove only the core files (I'm assuming they weren't
discovered before the snapshot
- provide measurements (lockstat, iostat, maybe some DTrace) before and
during test, add some timestamps so people can correlate data to events.
- anything else you can think of that might be relevant.
HTH
Michael
Hello,
I've been getting warnings that my zfs pool is degraded. At first it was
complaining about a few corrupt files, which were listed as hex numbers instead
of filenames, i.e.
VOL1:<0x0>
After a scrub, a couple of the filenames appeared - turns out they were in
snapshots I don't really nee
This seems like you're doing an awful lot of planning for only 8 SATA
+ 4 SAS bays?
I agree - SOHO usage of ZFS is still a scary "will this work?" deal. I
found a working setup and I cloned it. It gives me 16x SATA + 2x SATA
for mirrored boot, 4GB ECC RAM and a quad core processor - total cost
wit
Yeah - give me a bit to rope together the parts list and double check
it, and I will post it on my blog.
On Mon, Sep 28, 2009 at 2:34 PM, Ware Adams wrote:
> On Sep 28, 2009, at 4:20 PM, Michael Shadle wrote:
>
>> I agree - SOHO usage of ZFS is still a scary "will this work?"
Rackmount chassis aren't usually designed with acoustics in mind :)
However, I might be getting my closet fitted so I can put half a rack
in. Might switch up my configuration to rack stuff soon.
On Mon, Sep 28, 2009 at 3:04 PM, Thomas Burgess wrote:
> personally i like this case:
>
>
> http://www
> It's got 4 fans but they are
> really big and don't make nearly as much noise as you'd think. honestly,
> it's not bad at all. I know someone who sits it vertically as well,
> honestly, it's a good case for the money
>
>
> On Mon, Sep 28, 2009 at 6:06 PM, Micha
I looked at possibly doing one of those too - but only 5 disks was too
small for me, and I was too nervous about compatibility with mini-ITX
stuff.
On Wed, Sep 30, 2009 at 6:22 PM, Jorgen Lundman wrote:
>
> I too went with a 5in3 case for HDDs, in a nice portable Mini-ITX case, with
> Intel Atom.
uld be the first step for you.
HTH
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
On 01.10.09 08:25, camps support wrote:
I did zpool import -R /tmp/z rootpool
It only mounted /export and /rootpool only had /boot and /platform.
I need to be able to get at /etc and /var - how?
zfs set mountpoint ...
zfs mount
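A sketch of what that looks like, assuming the usual root-BE layout (the BE name "opensolaris" is a guess - check "zfs list -r rootpool" for the real one):
  zfs mount rootpool/ROOT/opensolaris    # root BEs usually have mountpoint=/ and canmount=noauto,
                                         # so this lands under the /tmp/z altroot you imported with
  ls /tmp/z/etc /tmp/z/var
If the mountpoint property is something else, adjust it first with "zfs set mountpoint=...", as above.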
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n
, but check the man-page).
"echo * | wc" is also a way to find out what's in a directory, but you'll
miss "."files, and the shell you're using may have an influence ..
HTH
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see
). How's that possible?
just a few thoughts:
- how do you measure how much space your data consumes?
- how do you copy?
- is the other FS also ZFS?
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
Stathis Kamperis wrote:
2009/10/23 michael schuster :
Stathis Kamperis wrote:
Salute.
I have a filesystem where I store various source repositories (cvs +
git). I have compression enabled on it, and zfs get compressratio reports
1.46x. When I copy all the stuff to another filesystem without
OpenSolaris 2009.06
I have an ST2540 Fiber Array directly attached to an X4150. There is a
zpool on the fiber device. The zpool went into a faulted state, but I
can't seem to get it back via scrub or even delete it. Do I have to
re-install the entire OS if I want to use that device again?
T
Hi guys, after reading the mailings yesterday I noticed someone was after
upgrading to ZFS v21 (deduplication). I'm after the same. I installed
osol-dev-127 earlier, which comes with v19, and then followed the instructions on
http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to da
is this about a different FS
*on top of* zpools/zvols? If so, I'll have to defer to Team ZFS.
HTH
Michael
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
} && zfs get used rpool/export/home && cp /testfile
/export/home/d${i}; done
as far as I understood it, the dedup works during writing, and won't
deduplicate already written data (this is planned for a later release).
isn't he doing just that (writing, that
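For reference, a complete loop along those lines might look like this (a sketch only - the iteration count and paths are assumptions, not the original poster's exact command):
  for i in 1 2 3 4 5; do
    cp /testfile /export/home/d${i} && zfs get used rpool/export/home
  done
Each pass writes another copy of the same file and prints the dataset's "used" property; the pool-wide savings, if any, show up in the DEDUP ratio of "zpool list".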
have been compressed using compress. It
is the equivalent of uncompress -c. Input files are not
affected.
:-)
cheers
Michael
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
call it 'ln' ;-) and that even works on UFS.
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
semantics of ZFS.
I actually was thinking of creating a hard link (without the -s option),
but your point is valid for hard and soft links.
cheers
Michael
--
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
files to end up in one file; the
traditional concat operation will cause all the data to be read and written
back, at which point dedup will kick in, and so most of the processing has
already been spent. (Per, please correct/comment)
Michael
--
Michael Schuster    http://blogs.sun.com/recur
Hi, I'm using ZFS version 6 on Mac OS X 10.5 using the old MacOSForge
pkg. When I'm writing files to the fs they appear as 1kb files,
and if I do zpool status or scrub or anything the command just
hangs. However, I can still read the zpool OK; just writing is having
problems, and any
, there is no "original" and "copy"; rather, every directory
entry points to "the data" (the inode, in ufs-speak), and if one directory
entry of several is deleted, only the reference count changes.
It's probably a little more complicated with dedup, but I think
Am in the same boat, exactly. Destroyed a large set and rebooted, with a scrub
running on the same pool.
My reboot got stuck on "Reading ZFS Config: *" for several hours (disks were
active). I cleared the zpool.cache from single-user mode and am doing an import (can
boot again). I wasn't able to boot m
zpool import done! Back online.
Total downtime for 4TB pool was about 8 hours, don't know how much of this was
completing the destroy transaction.
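For anyone hitting the same hang, the sequence was roughly this (the cache file path is the standard one; pool name is an example):
  # from single-user mode, move the cache aside so boot doesn't block on the pool
  mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
  reboot
  # once the box is up again, re-import and wait for the pending destroy to finish
  zpool import tank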
Most manufacturers have a utility available that sets this behavior.
For WD drives, it's called WDTLER.EXE. You have to make a bootable USB stick to
run the app, but it is simple to change the setting to the enterprise behavior.
Note you don't get the better vibration control and other improvements the
enterprise drives have. So it's not exactly that easy. :)
I have also had slow scrubbing on filesystems with lots of files, and I
agree that it does seem to degrade badly. For me, it seemed to go from 24
hours to 72 hours in a matter of a few weeks.
I did these things on a pool in-place, which helped a lot (no rebuilding):
1. reduced number of snapshots
Mine is similar (4-disk RAIDZ1)
- send/recv with dedup on: <4MB/sec
- send/recv with dedup off: ~80MB/sec
- send > /dev/null: ~200MB/sec.
I know dedup can save some disk bandwidth on write, but it shouldn't save
much read bandwidth (so I think these numbers are right).
There's a warning in a Je
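For anyone who wants to run the same comparison, something along these lines works (dataset and snapshot names are examples), watching the sending pool with zpool iostat in a second terminal:
  zfs send tank/fs@snap > /dev/null              # raw send path
  zfs send tank/fs@snap | zfs recv -d backup     # full send/recv path ("zfs set dedup=on backup" toggles the dedup case)
  zpool iostat tank 5                            # read bandwidth during each run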
I have observed the opposite, and I believe that all writes are slow to my
dedup'd pool.
I used local rsync (no ssh) for one of my migrations (so it was restartable,
as it took *4 days*), and the writes were slow just like zfs recv.
I have not seen fast writes of real data to the deduped volume,
My ARC is ~3GB.
I'm doing a test that copies 10GB of data to a volume where the blocks
should dedupe 100% with existing data.
The first time, the test runs at <5MB/sec and seems to average a 10-30% ARC *miss*
rate, with <400 ARC reads/sec.
When things are working at disk bandwidth, I'm getting 3-5% ARC misse
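(Those ARC numbers can be read straight from the kernel statistics, e.g.:
  kstat -p zfs:0:arcstats:size
  kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
though arcstat.pl wraps the same counters in a more readable form.)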
Anyone who's lost data this way: were you doing weekly scrubs, or did you
find out about the simultaneous failures after not touching the bits for
months?
mike
For me, arcstat.pl is a slam-dunk predictor of dedup throughput. If my
"miss%" is in the single digits, dedup write speeds are reasonable. When the
arc misses go way up, dedup writes get very slow. So my guess is that this
issue depends entirely on whether or not the DDT is in RAM. I don't
h
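(To watch this live during a send/recv, running arcstat.pl with an interval is enough, e.g.
  ./arcstat.pl 5
and keeping an eye on the miss% column.)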
FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
running visibly faster (somewhere around 3-5x faster).
echo zfs_prefetch_disable/W0t1 | mdb -kw
Anyone else see a result like this?
I'm using the "read" bandwidth from the sending pool from "zpool
iostat -x 5" to estimate transf
I have a 4-disk RAIDZ, and I reduced the time to scrub it from 80
hours to about 14 by reducing the number of snapshots, adding RAM,
turning off atime, compression, and some other tweaks. This week
(after replaying a large volume with dedup=on) it's back up, way up.
I replayed a 700G filesystem to
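(The tweaks I mean are mostly property changes plus pruning snapshots - for example, with pool and snapshot names as placeholders:
  zfs set atime=off tank
  zfs destroy tank/data@old-snap    # repeat for each snapshot you no longer need
)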
I've written about my slow-to-dedupe RAIDZ.
After a week of waiting, I finally bought a little $100 30G OCZ
Vertex and plugged it in as a cache.
After <2 hours of warmup, my zfs send/receive rate on the pool is
>16MB/sec (reading and writing each at 16MB as measured by zpool
iostat).
That's
Make that 25MB/sec, and rising...
So it's 8x faster now.
mike
Just l2arc. Guess I can always repartition later.
mike
On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier wrote:
> Are you using the SSD for l2arc or zil or both?
I replayed a bunch of filesystems in order to get dedupe benefits.
The only thing is, a couple of them are rolled back to November or so (and
I didn't notice before destroying the old copy).
I used something like:
zfs snapshot pool/f...@dd
zfs send -Rp pool/f...@dd | zfs recv -d pool/fs2
(after done.
left it on. Anything possible there?
The only other thing is that I did "zfs rollback" for a totally
unrelated filesystem in the pool, but I have no idea if this could
have affected it.
(I've verified that I got the right one with "zpool history".)
mike
On Tue, Jan 5, 2
exceptionally slow, partially
because it will sort the output.
That's what '-f' was supposed to avoid, I'd guess.
Michael
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
e is the directory pointer is shuffled around. This is not the
case with ZFS data sets, even though they're on the same pool?
No - mv doesn't know about zpools, only about POSIX filesystems.
--
Michael Schusterhttp://blogs.sun.com/recursion
Recursion, n.: see 'Recursion&
to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.
I've been following this thread. Would it be faster to do the reverse:
copy the 20% of the disk off, reformat, then move the 20% back?
I'm not sure the OS installation would survive that
Many large-scale photo hosts start with NetApp as the default "good
enough" way to handle multiple-TB storage. With a 1-5% cache on top,
the workload is truly random-read over many TBs. But these workloads
almost assume a frontend cache to take care of hot traffic, so L2ARC
is just a nice implement
> The best Mail Box to use under Dovecot for ZFS is
> MailDir, where each email is stored as an individual file.
Cannot agree with that. dbox is about 10x faster - at least if you have > 1
messages in one mailbox folder. That's not because of ZFS, but because Dovecot just
handles dbox files (one for each messa