Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-03-02 Thread valrh...@gmail.com
Freddie: I think you understand my intent correctly. 

This is not about a perfect backup system. The point is that I have hundreds of 
DVDs that I don't particularly want to sort out, but they are pretty useless 
from a management standpoint in their current form. ZFS + dedup would be a way to get them all in one place, where at least I can search, etc.---which is pretty much impossible with a stack of disks.

I also don't want file-level dedup, as a lot of these disks are of the "oh, it's the end of the day; I'm going to burn what I worked on today, so if my computer dies I won't be completely stuck on this project..." variety. File-level dedup would be a nightmare to sort out, because of all the incremental changes---which is exactly the point of block-level dedup.

This is not an organized archive at all; I just want to consolidate a bunch of old disks, on the off chance they could be useful, and do it without investing much time.

So does anyone know of an autoloader solution that would do this?


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread valrh...@gmail.com
Does this work with dedup? If you have a deduped pool and send it to a file, 
will it reflect the smaller size, or will this "rehydrate" things first?


Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread valrh...@gmail.com
How does this work with an incremental backup?

Right now, I do my incremental backup with:

zfs send -R -i p...@snapshot1 p...@snapshot2 | ssh r...@192.168.1.200 zfs 
receive -dF destination_pool

Does it make sense to put a -D in there, and if so, where? Thanks!
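My guess, for what it's worth, is that -D belongs with the send options (the receive side presumably just takes whatever stream it's given), so something like the following; but I'd appreciate confirmation that this is right:

zfs send -R -D -i p...@snapshot1 p...@snapshot2 | ssh r...@192.168.1.200 zfs receive -dF destination_pool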


Re: [zfs-discuss] dedupratio riddle

2010-03-16 Thread valrh...@gmail.com
Someone correct me if I'm wrong, but it could just be a coincidence. That is, perhaps the data you copied happens to produce roughly the same dedup ratio as the data that's already on there. You could test this by copying a few gigabytes of data you know is unique (maybe a DVD video file or something), and that should change the dedup ratio.
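A quick way to watch this (the pool name below is just a placeholder) would be something like:

zpool get dedupratio tank
# copy a few gigabytes of known-unique data, then check again:
zpool get dedupratio tank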


[zfs-discuss] ZFS effective short-stroking and connection to thin provisioning?

2010-04-10 Thread valrh...@gmail.com
A theoretical question on how ZFS works, for the experts on this board.
I am wondering about how and where ZFS puts the physical data on a mechanical 
hard drive. In the past, I have spent lots of money on 15K rpm SCSI and then 
SAS drives, which of course have great performance. However, given the increase in areal density in modern consumer SATA drives, similar performance can be reached by short-stroking them; that is, the outermost tracks of a large SATA drive perform comparably to the average of a 15K drive, and sometimes exceed its peak.

My question is how ZFS lays the data out on the disk, and whether there's a way to capture some of this benefit. It seems inefficient to physically short-stroke any of the drives; it would be more sensible to have ZFS handle this (if in fact it has the capability). If I am using mirrored pairs of 2 TB drives but only have a few hundred GB of data, and in effect only the outer tracks are used, then in practice the performance should be similar to nearly-full 15K drives. Given that ZFS can also thin provision, thereby decoupling virtual space from physical space on the drives, how does the data layout maximize performance?
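(As an aside, the short-stroking premise itself is easy to sanity-check with dd against the raw device; the device name and seek values below are only examples:)

# outer tracks (start of the disk):
dd if=/dev/rdsk/c1d0p0 of=/dev/null bs=1024k count=2048
# inner tracks (seek most of the way toward the end of a 2 TB disk):
dd if=/dev/rdsk/c1d0p0 of=/dev/null bs=1024k iseek=1800000 count=2048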

The practical question: I have something like 600 GB of data on a mirrored pair 
of 2 TB Hitachi SATA drives, with compression and deduplication. Before, I had 
a RAID5 of four 147 GB 10K rpm Seagate Savvio 10K.2 2.5" SAS drives on a Dell 
PERC5/i caching RAID controller. The old RAID was nearly full (20-30 GB free) and performed substantially slower in daily use than the current setup (noticeably slower disk access and transfer rates), presumably because the drives were nearly full. I'm curious whether, if I switched from these two disks to the new Western Digital VelociRaptors (10K RPM SATA), I could even tell the difference. Or, because those drives would be nearly full, would the whole setup be slower?


[zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-10 Thread valrh...@gmail.com
I've just today set up a new fileserver using EON 0.600 (based on SNV130). I'm now copying files between mirrors, and the performance is slower than I had hoped. I am trying to figure out what to do to make things a bit faster. Thanks in advance for reading, and for sharing any thoughts you might have.

System (brand new today): Dell PowerEdge T410, with an Intel Xeon E5504 2.0 GHz (Nehalem, Core i7-based) and 4 GB of RAM. I have one zpool of four 2 TB Hitachi Deskstar SATA
drives. I used the SATA mode on the motherboard (not the RAID mode, because I 
don't want the motherboard's RAID controller to do something funny to the 
drives). Everything gets recognized, and the EON storage "install" was just 
fine. 

I then configured the drives into an array of two mirrors, made with zpool 
create mirror (drives 1 and 2), then zpool add mirror (drives 3 and 4). 
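(Specifically, with the device names as they appear in the zpool status output below, that was roughly:)

zpool create hextb_data mirror c1d0 c1d1
zpool add hextb_data mirror c2d0 c2d1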
The output from zpool status is:
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
hextb_data  ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c1d0ONLINE   0 0 0
c1d1ONLINE   0 0 0
  mirror-1  ONLINE   0 0 0
c2d0ONLINE   0 0 0
c2d1ONLINE   0 0 0

This is a 4TB array, initially empty, that I want to copy data TO.

I then added two more 2 TB drives that were an existing pool on an older 
machine. I want to move about 625 GB of deduped data from the old pool (the 
simple mirror of two 2 TB drives that I physically moved over) to the new pool. 
The case can accommodate all six drives. 

I snapshotted the old data on the 2 TB array, and made a new filesystem on the 
4 TB array. I then moved the data over with:

zfs send -RD data_on_old_p...@snapshot | zfs recv -dF data_on_new_pool

Here's the problem. When I run "iostat -xn", I get:

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   70.0    0.0 6859.4    0.3  0.2  0.2    2.1    2.4   5  10 c3d0
   69.8    0.0 6867.0    0.3  0.2  0.2    2.2    2.4   5  10 c4d0
   20.0   68.0  675.1 6490.6  0.9  0.6   10.0    6.6  22  32 c1d0
   19.5   68.0  675.4 6490.6  0.9  0.6   10.1    6.7  22  33 c1d1
   19.0   67.2  669.2 6492.5  1.2  0.7   13.8    7.8  28  36 c2d0
   20.2   67.1  676.8 6492.5  1.2  0.7   13.9    7.8  28  37 c2d1

The OLD pool is the mirror of c3d0 and c4d0. The NEW pool is the striped set of 
mirrors involving c1d0, c1d1, c2d0 and c2d1.

The transfer started out a few hours ago at about 3 MB/sec. Now it's nearly 7 
MB/sec. But why is this so low? Everything is deduped and compressed. And it's 
an internal transfer, within the same machine, from one set of hard drives to 
another, via the SATA controller. Yet the net effect is very slow. I'm trying 
to figure out what this is, since it's much slower than I would have hoped.
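One test I'm considering, to at least see whether the bottleneck is on the read side or the write side, is to send the stream to /dev/null (same dataset name as above) and watch iostat while it runs:

zfs send -RD data_on_old_p...@snapshot > /dev/null &
iostat -xn 5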

Any and all advice on what to do to troubleshoot and fix the problem would be 
quite welcome. Thanks!


Re: [zfs-discuss] Migrating to ZFS

2010-06-10 Thread valrh...@gmail.com
Are you going to use this machine as a fileserver, at least the OpenSolaris 
part? You might consider trying EON storage (http://eonstorage.blogspot.com/), 
which just runs on a CD. If that's all you need, then you don't have to worry 
about partitioning around Windows, since Windows won't be able to read your ZFS 
array anyway.


Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread valrh...@gmail.com
So I think you're right. With the "ATA" option, I can see the pci-ide driver.

However, there is no AHCI option; the only other two are "off" (obviously 
useless) and "RAID". The "RAID" option gives control over to the RAID 
controller on the motherboard. However, nothing I can do there (formatting the disks, initializing them in various ways) works at all. That is,
when I boot back into EON, I can run "format" and don't see anything. It just 
says 

Searching for disks...done
No disks found!

Any ideas? Maybe I should just buy a SATA controller which is known to work 
with OpenSolaris?

The good part is that I can go back to ATA mode and my data is still there, so 
at least nothing has been lost yet. I don't trust these motherboard RAID controllers, because if something goes wrong, you have to have the same model controller. It also means you can't easily move drives. So I want to avoid RAID mode if it ties the drives to that particular motherboard/controller.

Or is there another way?


Re: [zfs-discuss] Help with slow zfs send | receive performance within the same box.

2010-06-11 Thread valrh...@gmail.com
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of valrh...@gmail.com
> > 
> > So I think you're right. With the "ATA" option, I can see the pci-ide driver.
> 
> Um, if you'd like to carry on a conversation, you'll have to be better at quoting. This response you posted is totally out of context, and a lot of people (like me) won't know what you're talking about anymore, because your previous thread of discussion isn't the only thing we're thinking about.
> 
> Suggestions are:
> 
> When replying, keep the original From line. (As above.)
> 
> Use in-line quoting, as above.
Thanks. I just saw a rather heated exchange on ZFS discuss on how quoting is 
getting out of hand, so I tried to keep my message short. I'll do a better job 
in the future; thanks for the heads-up.


[zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-28 Thread valrh...@gmail.com
I'm putting together a new server, based on a Dell PowerEdge T410. 

I have a simple SAS controller, with six 2 TB Hitachi Deskstar 7200 RPM SATA drives. The processor is a quad-core 2 GHz Core i7-based Xeon.

I will run the drives as one set of three mirror pairs striped together, for 6 
TB of homogeneous storage.

I'd like to run Dedup, but right now the server has only 4 GB of RAM. It has 
been pointed out to me several times that this is far too little. So how much 
should I buy? A few considerations:

1. I would like to run dedup on old copies of backups (the dedup ratio for these filesystems is 3+). Basically I have a few years of backups on tape, and
will consolidate these. I need to have the data there on disk, but I rarely 
need to access it (maybe once a month). So those filesystems can be exported, 
and effectively shut off. Am I correct in guessing that, if a filesystem has 
been exported, its dedup table is not in RAM, and therefore is not relevant to 
RAM requirements? I don't mind if it's really slow to do the first and only 
copy to the file system, as I can let it run for a week without a problem.

2. Are the RAM requirements for ZFS with dedup based on the total available zpool size (I'm not using thin provisioning), or just on how much data is in the filesystem being deduped? That is, if I have 500 GB of deduped data but 6 TB of possible storage, which number is relevant for calculating RAM requirements? (A rough way I might estimate this myself is sketched right after this list.)

3. What are the RAM requirements for ZFS in the absence of dedup? That is, if I 
only have deduped filesystems in an exported state, and all that is active is 
non-deduped, is 4 GB enough?

4. How does the L2ARC come into play? I can afford to buy a fast Intel X25M G2, 
for instance, or any of the newer SandForce-based MLC SSDs to cache the dedup 
table. But does it work that way? It's not really affordable for me to get more 
than 16 GB of RAM on this system, because there are only four slots available, 
and the 8 GB DIMMs are a bit pricey.

5. Could I use one of the PCIe-based SSD cards for this purpose, such as the 
brand-new OCZ Revo? That should be somewhere between a SATA-based SSD and RAM.
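
Regarding question 2 above: I gather zdb can simulate dedup on an existing pool and print a DDT histogram, which might give a ballpark for the table size. The pool name below is a placeholder, and the bytes-per-entry figure is only what I've seen quoted on this list, so please correct me if this is wrong:

# simulate dedup on a pool that isn't deduped yet, and show the would-be DDT:
zdb -S tank
# for a pool that already has dedup enabled, show the actual DDT statistics:
zdb -DD tank
# rough RAM estimate: (number of DDT entries) x (a few hundred bytes per entry)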

Thanks in advance for all of your advice and help.


Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread valrh...@gmail.com
Thanks to everyone for such helpful and detailed answers. Contrary to some of 
the trolls in other threads, I've had a fantastic experience here, and am 
grateful to the community.

Based on the feedback, I'll upgrade my machine to 8 GB of RAM. I only have two free slots on the motherboard, so I can either add two 2 GB DIMMs to the two I already have there, or throw those away and start over with 4 GB DIMMs, which is not something I'm quite ready to do yet (before this is all working, for instance).

Now, for the SSD, Crucial appears to have their (recommended above) C300 64 GB 
drive for $150, which seems like a good deal. Intel's X25M G2 is $200 for 80 
GB. Does anyone have a strong opinion as to which would work better for the 
L2ARC? I am having a hard time understanding, from the performance numbers 
given, which would be a better choice.

Finally, for my purposes, it doesn't seem like a dedicated ZIL device is necessary? I'm the only user of the fileserver, so there probably won't be more than two or three computers, maximum, accessing stuff (and writing stuff) remotely.

But, from what I can gather, by spending a little under $400, I should 
substantially increase the performance of my system with dedup? Many thanks, 
again, in advance.


Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-06-30 Thread valrh...@gmail.com
Another question on SSDs in terms of performance vs. capacity.

Between $150 and $200, there are at least four SSDs that would fit the rough specifications for the L2ARC on my system:

1. Crucial C300, 64 GB: $150: medium performance, medium capacity.
2. OCZ Vertex 2, 50 GB: $180: higher performance, lower capacity. (The Agility 
2 is similar, but $15 cheaper)
3. Corsair Force 60 GB, $195: similar performance, slightly higher capacity 
(more over-provisioning with the same SandForce controller).
4. Intel X25M G2, 80 GB: $200: largest capacity, probably lowest(?) performance.

So which would be the best choice for the L2ARC? Is it size, or is it throughput, that really matters here?

Within this range, price doesn't make much difference. Thanks, as always, for 
the guidance.


[zfs-discuss] SATA 6G controller for OSOL

2010-07-07 Thread valrh...@gmail.com
I want to fire up a new SSD as an L2ARC on a ZFS box I've put together, and have been looking at some of the new drives. Many of the faster ones, with great read speeds, are SATA 6G capable, so I'm wondering if any of you have gotten the new SATA 6G controller cards to work.

In particular, the Asus U3S6:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813995004

And the SIIG SC-SA0E12-S1:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816150028&cm_re=sata_6g-_-16-150-028-_-Product

Does anyone have an opinion, or some experience? Thanks in advance!


Re: [zfs-discuss] SATA 6G controller for OSOL

2010-07-08 Thread valrh...@gmail.com
Thanks! I just need the SATA part for the SSD serving as my L2ARC. I couldn't care less about PATA, and have no USB3 peripherals anyway. I'll let everyone know how it works!


Re: [zfs-discuss] SATA 6G controller for OSOL

2010-07-14 Thread valrh...@gmail.com
Thanks for all of the help.

I now have the Asus U3S6 installed. There are two SATA ports on the board; I've plugged the CD-ROM drive into one, and the SSD I'm using as an L2ARC into the other. Upon booting the machine, I get the message:

Marvell 88SE91xx Adapter, BIOS version 0.0.1012

It then successfully lists both the SSD and the CD-ROM drive.

I then boot EON 0.600 from the CD, which does just fine. After logging in as 
root, I run "format" and now it doesn't see the SSD. Yet it booted the 
operating system from the CD, plugged into the same SATA card!?!?

This makes no sense to me. Does anyone have a suggestion as to what I can do to 
possibly get this working? Thanks!


[zfs-discuss] HELP!! SATA 6G controller for OSOL

2010-07-20 Thread valrh...@gmail.com
So I've tried both the ASUS U3S6, and the Koutech IO-PESA-A230R, recommended by 
the helpful blog:

http://blog.zorinaq.com/?e=10

In BOTH cases, the SSD appears in the card's BIOS screen at bootup, so that the 
card sees it and recognizes it properly.

I'm running EON 0.600 (SNV130), and once I log in as root and run "format", the SSD is not there at all. I just wanted a cheap card to add to my server so I can run my SSD as an L2ARC; nothing needs to be fancy.

Is there anything I can do? I'm really stuck now... Thanks!


[zfs-discuss] When is the L2ARC refreshed if on a separate drive?

2010-08-03 Thread valrh...@gmail.com
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my 
home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a 
SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as an L2ARC on the single mirrored pair. I'm running b134, with ZFS pool version 22 and dedup enabled. If I understand correctly, the dedup table should end up in the L2ARC on the SSD, and I should have enough RAM to keep the references to that table in memory, so this should be a well-performing setup.

My question is what happens at power-off. Does the cache device essentially get cleared, so the machine has to rebuild it when it boots? Or is it persistent? That is, should performance improve for a while following a reboot, or is it constant once the L2ARC has been built?
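(One thing I plan to try, to see this directly, is watching how much of the cache device is filled after a reboot; the pool name here is just a placeholder.)

zpool iostat -v tank 5
# the SSD shows up under "cache"; if its allocated space starts near zero
# after every boot, the L2ARC is presumably being rebuilt from scratch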

Rather informally, it sometimes seems that the hard drives are a bit slower the 
first time they load a program now, vs. when I didn't have the SSD installed as 
a cache device on the pool. But this is mainly an impression. Thanks for your 
help!


[zfs-discuss] LTFS and LTO-5 Tape Drives

2010-08-03 Thread valrh...@gmail.com
Has anyone looked into the new LTFS on LTO-5 for tape backups? Any idea how 
this would work with ZFS? I'm presuming ZFS send / receive are not going to 
work. But it seems rather appealing to have the metadata stored properly with the data, and to be able to browse files directly instead of having to rely on backup software, however nice tar may be. Has anyone used this with
OpenSolaris, or have an opinion on how this would work in practice? Thanks!


Re: [zfs-discuss] When is the L2ARC refreshed if on a separate drive?

2010-08-03 Thread valrh...@gmail.com
Thanks for the info!


[zfs-discuss] Corrupt file without filename

2010-08-04 Thread valrh...@gmail.com
I have one corrupt file in my rpool, but when I run "zpool status -v", I don't 
get a filename, just an address. Any idea how to fix this? Here's the output:

p...@dellt7500:~# zpool status -v rpool 
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c4t0d0s0  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

rpool/export/home/plu:<0x12491>
p...@dellt7500:~#


Re: [zfs-discuss] LTFS and LTO-5 Tape Drives

2010-08-04 Thread valrh...@gmail.com
Actually, no. I couldn't care less about incrementals and multivolume handling. My purpose is occasional, long-term archival backup of big experimental data sets. The challenge is keeping everything organized and readable several years later, when I only need to recall a small subset of what's on the tape. The idea that the tape has a browsable filesystem is therefore extremely useful in principle.

Has anyone actually tried this with OpenSolaris? The LTFS websites I've seen 
only talk about Mac and Linux support, but if it's supported on Linux, in 
principle the (open-source?) drivers should be portable, no?


Re: [zfs-discuss] Corrupt file without filename

2010-08-04 Thread valrh...@gmail.com
Oooh... Good call!

I scrubbed the pool twice, and then it showed a real filename from an old snapshot that I had attempted to delete before (about a month ago); that attempt had given an error, which I subsequently forgot about. I deleted the snapshot, cleaned up a few other snapshots, cleared the error, and rescrubbed. And now, no more corrupt file. Nice!
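
For the record, the sequence was roughly this (the snapshot name is only a placeholder):

zpool scrub rpool            # run to completion, twice in my case
zpool status -v rpool        # now showed a real snapshot path instead of the hex ID
zfs destroy rpool/export/home/plu@old-snapshot
zpool clear rpool
zpool scrub rpool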

Love this forum... thanks so much!


Re: [zfs-discuss] Corrupt file without filename

2010-08-05 Thread valrh...@gmail.com
I ran fmdump -eV > dump.txt, and opened the 64 MB text file. What should I be 
looking for?
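
My guess is that the relevant entries are the ZFS error reports, whose class names start with ereport.fs.zfs, so maybe something like this narrows it down (treat this as a guess on my part):

grep -c 'ereport.fs.zfs' dump.txt          # how many ZFS error reports are in the dump
grep -n 'ereport.fs.zfs' dump.txt | head -20   # which classes, and where to look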


Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread valrh...@gmail.com
I've been running OpenSolaris on my Dell Precision Workstation T7500 for 9 
months, and it works great. It's my default desktop operating system, and I've 
had zero problems with hardware compatibility.

I also have installed EON 0.600 on my Dell PowerEdge T410 (not so different 
from your R510). A few words of caution:

1. Beware of the onboard controllers. The "RAID" controller on that motherboard 
only works in Windows; neither Linux nor OpenSolaris can recognize drives 
attached to it at all. So I was stuck running in "ATA" mode at the beginning, 
which is awful in terms of performance.

2. I'd also recommend avoiding the PERC cards, in particular since they make drives attached to them impossible to transport to another system. Instead, I use
the SAS 6i/R controller. That's built into the motherboard on the PW T7500, and 
I got one separately for the PE T410. That works well, and is completely fine 
with OpenSolaris. I'd recommend those, because then you can be sure to get the 
cabling from Dell (which in the case of the PowerEdge, was completely 
nonstandard). And if the card fails, they'll replace it ASAP, which isn't 
necessarily the case with other vendors' cards.

So aside from the RAID controller and cabling issues on the PE T410, I've had 
nothing but good experiences in terms of Dell Precision workstations and 
PowerEdge servers, running OpenSolaris.


[zfs-discuss] Optimizing performance on a ZFS-based NAS

2010-08-12 Thread valrh...@gmail.com
Thanks to the help from many people on this board, I finally got my 
OpenSolaris-based NAS box up and running.

I have a Dell T410 with a Xeon E5504 2.0 GHz (Nehalem) quad-core processor, 8 
GB of RAM. I have six 2 TB Hitachi Deskstar (HD32000IDK/7K) SATA drives, set up as a stripe across three mirrored pairs. I have an OCZ Vertex 2 (NOT Pro) 60 GB
SSD (Sandforce-based) for the L2ARC. All seven drives are attached to a Dell 
SAS 6i/R controller, which is an 8-channel SAS controller based on an LSI 
chipset. I've enabled dedup and compression on all filesystems of the single 
zpool.

Everything is working pretty well, and over NFS, I can get a solid 80 MB/sec if 
I'm copying big files. This is adequate, but I am wondering if I can do any 
better. I'm only using this box to share between two or three other machines, 
in a private (home or lab) network. I think I've followed all of the 
suggestions I've been given; in particular, running 8 GB of RAM with the 60 GB 
SSD for the L2ARC should allow full caching of the dedup table. I ran 
zilstat.ksh, but it always came up with zeros, which suggests there's no point 
in a ZIL log SSD. 

Is there anything left to tune? If so, how do I go about figuring out how to 
increase performance? Right now, I'm just copying large files and looking at 
the transfer rate as calculated by nautilus, or with iostat -x. What's the next 
thing to do, as far as diagnostics? I'd like to learn a bit more about the 
process of optimizing, since I have other such boxes I want to set up and tune, 
but with different hardware.
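
For what it's worth, the next diagnostic I'm planning is a local dd test on the pool itself, to separate raw pool throughput from the NFS path. The file path is just an example, and since compression is on, /dev/zero will compress away to almost nothing, so the write number is only meaningful with real, incompressible data:

# local sequential write (with zeros this mostly tests compression/CPU):
dd if=/dev/zero of=/tank/testfile bs=1024k count=8192
# local sequential read of the same file:
dd if=/tank/testfile of=/dev/null bs=1024k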


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-08-12 Thread valrh...@gmail.com
Has anyone bought one of these cards recently? It seems to list for around $170 
at various places, which seems like quite a decent deal. But no well-known 
reputable vendor I know seems to sell these, and I want someone backing the sale if something isn't perfect. Where do you all recommend buying this card from?


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-08-13 Thread valrh...@gmail.com
Thanks for the link; as a result, I learned how to use dd to get some better 
data on transfer rates, which was extremely helpful. I guess you can fit the 
card in a standard PCIe slot with some spacers, but does anyone have any specific
info on this?


[zfs-discuss] ZFS compression and deduplication on root pool on SSD

2010-02-28 Thread valrh...@gmail.com
I am running my root pool on a 60 GB SLC SSD (OCZ Agility EX). At present, my 
rpool/ROOT has no compression and no deduplication. I was wondering whether it would be a good idea, from a performance and data integrity
standpoint, to use one, the other, or both, on the root pool. My current 
problem is that I'm starting to run out of space on the SSD, and based on a 
send|receive I did to a backup server, I should be able to compress by about a 
factor of 1.5x. If I enable both on the rpool filesystem, then clone the boot 
environment, that should enable it on the new BE (which would be a child of 
rpool/ROOT), right?
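
Concretely, I'm thinking of something like the following, with the caveat (as I understand it) that changing these properties only affects newly written blocks, so the existing data won't shrink until it's rewritten:

zfs set compression=on rpool/ROOT
zfs set dedup=on rpool/ROOT
zfs get -r compression,dedup rpool/ROOT   # confirm the child BEs inherit both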

Also, I don't have the numbers to prove this, but it seems to me that the 
actual size of rpool/ROOT has grown substantially since I did a clean install 
of build 129a (I'm now at build 133). Without compression, that was around 24 GB, but an extra 11 GB or so seems to have accumulated since. Or
am I imagining things? Is there a way to get rid of all of the legacy stuff 
that's in there? I already deleted the old snapshots and boot environments that 
were taking up much space.

Thanks!


Re: [zfs-discuss] ZFS compression and deduplication on root pool on SSD

2010-03-01 Thread valrh...@gmail.com
One of the great privileges of using OpenSolaris is the helpfulness and deep 
knowledge of the community. Thanks for the suggestions.

I cleared out /var/pkg/downloads, and got back a couple of gigabytes. I've 
enabled compression and dedup.

Is there a simple set of commands with which I could send that filesystem somewhere, bring it back onto the now dedup/compress-enabled rpool, and then update the BE to get it to work? I'm assuming I could do this with send/receive, but are there some options I need to specify? Or is there a way to force beadm to copy the files over to a new filesystem, so that it ends up being deduped?

This SSD is only for my rpool. I've got a mirror of SATA drives to handle the data separately.

Also, assuming I recover most of the space, is there anything I can do to clean up the SLC SSD, something like TRIM, that is compatible with ZFS?


[zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-03-01 Thread valrh...@gmail.com
One of the most useful things I've found with ZFS dedup (way to go Jeff Bonwick 
and Co.!) is the ability to consolidate backups. I had six different complete 
backups of all of my files spread out over various hard drives, and dedup 
allowed me to consolidate them into something that took less than twice the space of the original. I was thrilled when I saw this the first time.

This led me to another idea: I have been using DVDs for small backups here and 
there for a decade now, and have a huge pile of several hundred. They have a 
lot of overlapping content, so I was thinking of feeding the entire stack into 
some sort of DVD autoloader, which would just read each disk, and write its 
contents to a ZFS filesystem with dedup enabled. Even if the autoloader had to 
run on Windows or Linux, I could just use a mounted drive to achieve the same 
ends. That would allow me to consolidate a few hundred CDs and DVDs onto 
probably a terabyte or so, which could then be kept conveniently on a hard 
drive and archived to tape. Does anyone know of a DVD autoloader that would 
allow me to do this easily, and if someone might be willing to rent one to me 
(I'm in the Boston area)? I only need to do this once.
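
The per-disc step I have in mind is nothing fancier than this (dataset name and paths are just examples; the disc would typically auto-mount under /media):

zfs create -o dedup=on tank/dvd_archive
# for each disc in the stack:
cp -rp /media/DVD_LABEL /tank/dvd_archive/disc_001
eject cdrom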


[zfs-discuss] Disk on Module (DOM) for NAS boot drive?

2010-09-01 Thread valrh...@gmail.com
I have a file server that I've basically maxed out the drive bays for. At the 
moment, I'm running Nexenta on an SSD that is sort of resting on something else 
in the case. I was wondering if, instead, I could install Nexenta on a SATA 
Disk on Module (DOM), say something like 4 GB, dual channel, SLC:

http://www.kingspec.com/solid-state-disk-products/series-domsata.htm

I did try with a USB memory stick, but it was slow. And my previous 
installation of EON on a memory stick got corrupted and I lost everything (not 
the data, but the configuration). Has anyone gotten this to work before (for 
Nexenta, EON, etc.)? Any suggestions or advice? And how much space does a 
plain-vanilla installation of Nexenta actually require?

Thanks!