Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-16 Thread thomas
Even if it might not be the best technical solution, I think what a lot of 
people are looking for when this comes up is a knob they can use to say "I only 
want X IOPS per vdev while scrubbing" (in addition to low prioritization). 
Capping it that way probably helps them feel more at ease that they have some 
excess CPU and vdev capacity left over if production traffic should come along.

That's probably a false sense of moderating resource usage, since the current 
"full speed, but lowest prioritization" behavior is just as good and finishes 
quicker.. but it gives them peace of mind?
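
(For what it's worth, builds with the new scan code do expose a crude knob 
along these lines: as I understand it, the zfs_scrub_delay tunable adds a small 
delay between scrub I/Os whenever other I/O has been seen recently. A hedged 
sketch, assuming a kernel where the tunable exists; the value is just an 
illustration:

# echo zfs_scrub_delay/W0t8 | mdb -kw

Higher values throttle the scrub harder when the pool is busy; 0 gives the old 
full-speed-at-low-priority behavior.)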


Re: [zfs-discuss] SSD best practices

2010-04-22 Thread thomas
Someone on this list threw out the idea a year or so ago to just setup 2 
ramdisk servers, export a ramdisk from each and create a mirror slog from them.

Assuming newer zpool versions, this sounds like it could be even safer, since 
there is (supposedly) less chance of catastrophic pool loss if your ramdisk 
setup fails. Use just one remote ramdisk, or two with battery backup.. whatever 
meets your paranoia level.

It's not SSD-cheap, but I'm sure you could dream up several options that cost 
less than STEC prices. You could also probably use these machines for multiple 
pools if you've got them. I know, it still probably sounds a bit too cowboy for 
most on this list though.
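
A rough sketch of the moving parts, assuming COMSTAR on the ramdisk boxes and 
made-up names; the iSCSI target/initiator plumbing is left out:

(on each ramdisk server)
# ramdiskadm -a slog0 4g
# sbdadm create-lu /dev/ramdisk/slog0

(on the pool server, once the two remote LUs show up as local disks)
# zpool add tank log mirror c5t600144F0AAAAd0 c6t600144F0BBBBd0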


Re: [zfs-discuss] New SSD options

2010-05-18 Thread thomas
40k IOPS sounds like "best case, you'll never see it in the real world" 
marketing to me. There are a few benchmarks if you google, and they all seem to 
indicate the performance is within +/- 10% of an Intel X25-E. I would 
personally trust Intel over one of these drives.

Is it even possible to buy a Zeus IOPS anywhere? I haven't been able to find 
one. I get the impression they mostly sell to other vendors like Sun? I'd be 
curious what the price on a 9GB Zeus IOPS is these days.


Re: [zfs-discuss] New SSD options

2010-05-21 Thread thomas
On the PCIe side, I noticed there's a new card coming from LSI that claims 
150,000 4k random writes. Unfortunately this might end up being an OEM-only 
card.

I also notice on the ddrdrive site that they now have an opensolaris driver and 
are offering it in a beta program.


Re: [zfs-discuss] can you recover a pool if you lose the zil (b134+)

2010-05-25 Thread thomas
Is there a best practice for keeping a backup of the zpool.cache file? Is it 
even possible? Does it change when the vdev configuration changes?
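
For what it's worth, the file lives at /etc/zfs/zpool.cache and is rewritten 
whenever the pool configuration changes (zpool add/attach/replace, 
import/export), so a backup has to be refreshed after any such operation. A 
minimal sketch, with a made-up destination:

# cp /etc/zfs/zpool.cache /backup/zpool.cache.$(date +%Y%m%d)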


Re: [zfs-discuss] Snapshots, txgs and performance

2010-06-06 Thread thomas
Very interesting. This could be useful for a number of us. Would you be willing 
to share your work?


Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-22 Thread Thomas
Hi,

I have a raidz1 consisting of 6 5400rpm drives in this zpool. I have stored 
some media in one FS, and about 200k files in another. Neither FS is written to 
much. The pool is 85% full.

Could this issue also be the reason that playback lags when I am playing 
(reading) some media?

OSOL ips_111b
E5200, 8gb RAM

Thank you


Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-23 Thread Thomas
No snapshots running. I have only 21 filesystems mounted. Blocksize is the 
default. I don't think the disks are slow, because I get read and write rates 
of about 350MB/s. The BIOS is the latest, and I also tried splitting the pool 
across two controllers; none of this helped.


Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Thomas
which gap?


Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Thomas
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html

Second bug; it's the same link as in the first post.


Re: [zfs-discuss] virtualization, alignment and zfs variation stripes

2009-07-22 Thread thomas
Hmm.. I guess that's what I've heard as well.

I do run compression, and believe a lot of others would as well. So then, it 
seems to me that if I have guests that run a filesystem formatted with 4k 
blocks, for example, I'm inevitably going to have this overlap when using ZFS 
network storage?

So if "A" were ZFS blocks and "B" were virtualized guest blocks, I think it 
might look like this with compression on:

|   B1   |   B2   |   B3   |   B4   |
|   A1   | A2 | A3  |  A4  |

So if the guest OS wants block B2 or B4, it actually has to read 2 blocks from 
the underlying ZFS storage?


Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread thomas
> I think it is a great idea, assuming the SSD has good write performance.
> This one claims up to 230MB/s read and 180MB/s write and it's only $196.
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
> 
> Compared to this one (250MB/s read and 170MB/s write) which is $699.
> 
> Are those claims really trustworthy? They sound too good to be true!


MB/s numbers are not a good indication of performance. What you should pay 
attention to are usually random read and write IOPS. The two tend to correlate 
a bit, but those numbers on newegg are probably just best-case figures from the 
manufacturer.

In the world of consumer-grade SSDs, Intel has crushed everyone on IOPS 
performance.. but the other manufacturers are starting to catch up a bit.
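
A quick back-of-the-envelope illustration of why the two metrics diverge 
(numbers invented): a drive sustaining 5,000 random 4 KiB writes per second is 
only moving

    5,000 IOPS x 4 KiB = ~20 MB/s

so a 180 MB/s sequential figure tells you almost nothing about how the drive 
behaves under small random I/O.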


Re: [zfs-discuss] Opensolaris attached to 70 disk HP array

2009-07-23 Thread thomas
That is an interesting bit of kit. I wish a "white box" manufacturer would 
create something like this (hint hint supermicro)


Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-25 Thread thomas
Are there *any* consumer drives that don't respond for a long time trying to 
recover from an error? In my experience they all behave this way which has been 
a nightmare on hardware raid controllers.


Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-25 Thread thomas
> I'll admit, I was cheap at first and my
> fileserver right now is consumer drives.  You
> can bet all my future purchases will be of the enterprise grade.  And
> guess what... none of the drives in my array are less than 5 years old, so 
> even
> if they did die, and I had bought the enterprise versions, they'd be
> covered.

Anything particular happen that made you change your mind? I started with
"enterprise grade" because of similar information discussed in this thread.. 
but I've
also been wondering how zfs holds up with consumer level drives and if I could 
save
money by using them in the future. I guess I'm looking for horror stories that 
can be
attributed to them? ;)


Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-26 Thread thomas
Hi Richard,


> So you have to wait for the sd (or other) driver to
> timeout the request. By
> default, this is on the order of minutes. Meanwhile,
> ZFS is patiently awaiting a status on the request. For
> enterprise class drives, there is a limited number
> of retries on the disk before it reports an error.
> You can expect responses of success in the order of
> 10 seconds or less. After the error is detected, ZFS
> can do something about it.
> 
> All of this can be tuned, of course.  Sometimes that
> tuning is ok by default, sometimes not. Until recently, the
> biggest gripes were against the iscsi client which had a
> hardwired 3 minute error detection. For current  
> builds you can tune these things without recompiling.
>   -- richard


So are you suggesting that tuning the sd driver to time out sooner when using 
consumer-class drives might be unwise for other reasons?
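
(If anyone wants to experiment: one of the knobs in question is reachable from 
/etc/system; a hedged sketch only, since shortening the timeout affects every 
sd device and the default is 60 seconds:

* drop the sd per-command timeout from 60s to 10s
set sd:sd_io_time = 10

Test carefully before trusting it in production.)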


Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread thomas
For whatever it's worth to have someone post on a list.. I would *really*
like to see this improved as well. The time it takes to iterate over
both thousands of filesystems and thousands of snapshots makes me very
cautious about taking advantage of some of the built-in zfs features in
an HA environment.


[zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Thomas W
Hi!

I'm new to ZFS so this may be (or certainly is) a kind of newbie question.

I started with a small server I built from parts I had left over.
I only had 2 500GB drives and wanted to go for space. So i just created a zpool 
without any option. That now looks like this.

        NAME      STATE     READ WRITE CKSUM
        swamp     ONLINE       0     0     0
          c1d0    ONLINE       0     0     0
          c2d0    ONLINE       0     0     0

So far so good. But as always, the provisional solution became the permanent 
solution. Now I have an extra 1TB disk that I can add to the system, and I want 
to go for file security.

How can I get the best out of this setup? Is there a way of mirroring the data 
automatically between those three drives?

Any help is appreciated, but please don't tell me I have to delete anything ;)

Thanks a lot,
  Thomas


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Thomas W
Thanks... works perfect!

Currently it's resilvering. That is all too easy ;)

Thanks again,
  Thomas
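
P.S. For the archive, the attach-to-slices approach presumably looked something 
like this (device names hypothetical; the 1TB disk split into two 500GB slices, 
one attached to each existing disk so the pool becomes two mirrors):

# zpool attach swamp c1d0 c3d0s0
# zpool attach swamp c2d0 c3d0s1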


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Thomas Wuerdemann
Hi Cindy,

thanks for your advice. I guess this would be the better way to mirror one 
drive on a physically separate drive, but Richard's suggestion fit my current 
conditions better, because I didn't want to buy an extra disk or copy all the 
data back and forth. I just happened to have an extra 1TB drive around and 
wondered how I could use it to create some sort of protection for my small but 
trusted file server.

I believe this is a good solution for now.

Of course I will fix this as soon as I have an extra 500GB or bigger drive 
available. Until then my current setup has to work ;)

Thanks a lot,
  Thomas


2010/3/2 Cindy Swearingen 

> Hi Thomas,
>
> I see that Richard has suggested mirroring your existing pool by
> attaching slices from your 1 TB disk if the sizing is right.
>
> You mentioned file security and I think you mean protecting your data
> from hardware failures. Another option is to get one more disk to
> convert this non-redundant pool to a mirrored pool by attaching the 1 TB
> disk and another similarly sized disk. See the example below.
>
> Another idea would be to create a new pool with the 1 TB disk and then
> use zfs send/receive to send over the data from swamp. But you couldn't
> then reuse swamp's disks by attaching them to the new pool, because the
> 500GB disks are smaller than the 1 TB disk.
>
> Keep in mind that if you do recreate this pool as a mirrored
> configuration:
>
> mirror pool = 1 500GB + 1 500GB disk, total capacity is 500GB
> mirror pool = 1 500GB + 1 1TB disk, total capacity is 500GB
>
> Because of the unequal disk sizing, the mirrored pool capacity would
> be equal to the smallest disk.
>
> Thanks,
>
> Cindy
>
> # zpool status tank
>  pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>
>        NAME      STATE     READ WRITE CKSUM
>        tank      ONLINE       0     0     0
>          c2t7d0  ONLINE       0     0     0
>          c2t8d0  ONLINE       0     0     0
>
> errors: No known data errors
> # zpool attach tank c2t7d0 c2t9d0
> # zpool attach tank c2t8d0 c2t10d0
> # zpool status tank
>  pool: tank
>  state: ONLINE
>  scrub: resilver completed after 0h0m with 0 errors on Tue Mar  2 14:32:21 2010
> config:
>
>
>        NAME         STATE     READ WRITE CKSUM
>        tank         ONLINE       0     0     0
>          mirror-0   ONLINE       0     0     0
>            c2t7d0   ONLINE       0     0     0
>            c2t9d0   ONLINE       0     0     0
>          mirror-1   ONLINE       0     0     0
>            c2t8d0   ONLINE       0     0     0
>            c2t10d0  ONLINE       0     0     0  56.5K resilvered
>
> errors: No known data errors
>
>
> On 03/02/10 12:58, Thomas W wrote:
>
>> Hi!
>>
>> I'm new to ZFS so this may be (or certainly is) a kind of newbie question.
>>
>> I started with a small server I built from parts I had left over.
>> I only had 2 500GB drives and wanted to go for space. So i just created a
>> zpool without any option. That now looks like this.
>>
>>        NAME      STATE     READ WRITE CKSUM
>>        swamp     ONLINE       0     0     0
>>          c1d0    ONLINE       0     0     0
>>          c2d0    ONLINE       0     0     0
>>
>> So far so good. But like always the provisional solution became a
>> permanent solution. Now I have an extra 1TB disk that I can add to the
>> system. And I want to go for file security.
>>
>> How can I get the best out of this setup. Is there a way of mirroring the
>> data automatically between those three drives?
>>
>> Any help is appreciated but please don't tell me I have to delete anything
>> ;)
>>
>> Thanks a lot,
>>  Thomas
>>
>


Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Thomas Burgess
On Thu, Mar 4, 2010 at 4:46 AM, Dan Dascalescu <
bigbang7+opensola...@gmail.com > wrote:

> Please recommend your up-to-date high-end hardware components for building
> a highly fault-tolerant ZFS NAS file server.
>
> I've seen various hardware lists online (and I've summarized them at
> http://wiki.dandascalescu.com/reviews/storage.edit#Solutions), but they're
> on the cheapo side. I want to build a media server and be done with with for
> a few years, (until the next generation storage media (holograms?
> nanowires?) becomes commercially available. So please, knock yourselves out.
> The bills for ZFS NAS boxes that I've seen run around $1k, and I'm willing
> to invest up to $3k.
>
> Requirements, in decreasing order of importance:
> 1. Extremely fault-tolerant.  I'd like to be able to lose two disks and
> still be OK. I also want any silent hard disk read errors that are detected
> by ZFS, to be reported somehow.
> 2. As quiet as it gets.
> 3. Able to easily extend storage
> 4. (low) If feasible, I'd like to be able to use a Blu-Ray drive with the
> system.
>
> I also have a few software requirements, which I think are pretty
> independent of the hardware:
> a. Secure – I want to be able to tweak and control access at every level
> b. Very fast network performance. The server should be able to stream 1080p
> while doing a number of other tasks without issues.
> c. Ability to serve all different types of hosts: NFS, SMB, SCP/SFTP
> d. Flexible. I do a number of other things/experiments, and I’d like to be
> able to use it for more than just serving files.
>
> Really only #1 (reliable) and #2 (quiet) matter most. I've been mulling
> over this server for too long and want to get it over with.
>
> Looking forward to your recommendations,
> Dan
>
>
What i did was this:

I got a norco 4020 (the 4220 is good too)

Both of those cost around 300-350 dollars.  That is a 4u case with 20 hot
swap bays.

Then i got a decent server board.  I used supermicro mbd-x7se because it has
4 pci-x slots.

I got 3 supermicro AOC-SAT2-mv8 cards  for the sata ports (each has 8)

20 1tb seagate drives, but you could use any size which fits your budget.

8 gb ddr2 800 ecc memory

3 64 gb ssd's (2 for rpool mirror and 1 for l2arc)

Intel q9550 cpu.

This gives you a pretty beastly machine with around 18-36 TB raw.

I went with 3 raidz2 groups.

I plan to expand it with a sas expander and another norco case.  I hope
this gives you some ideas.
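
To put rough numbers on the usable space (assuming, say, a 7/7/6 split of the 
20 drives): each raidz2 group gives up two drives to parity, so with 1TB drives

    (7-2) + (7-2) + (6-2) = 14 TB usable out of 20 TB raw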


Re: [zfs-discuss] Non-redundant zpool behavior?

2010-03-04 Thread Thomas Burgess
no, if you don't use redundancy, each disk you add makes the pool that much
more likely to fail.  This is the entire point of raidz.

ZFS stripes data across all vdevs.
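
A quick worked example of why (failure rate invented for illustration): if each
disk independently has a 3% chance of dying in a given year, a 5-disk
non-redundant pool survives the year with probability 0.97^5 = ~86%, i.e.
roughly a 14% chance of losing the whole pool, versus 3% for a single disk.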

On Thu, Mar 4, 2010 at 12:32 PM, Travis Tabbal  wrote:

> I have a small stack of disks that I was considering putting in a box to
> build a backup server. It would only store data that is duplicated
> elsewhere, so I wouldn't really need redundancy at the disk layer. The
> biggest issue is that the disks are not all the same size. So I can't really
> do a raidz or mirror with them anyway. So I was considering just putting
> them all in one pool. My question is how does zpool behave if I lose one
> disk in this pool? Can I still access the data on the other disks? Or is it
> like a traditional raid0 and I lose the whole pool? Is there a better way to
> deal with this, using my old mismatched hardware?
>
> Yes, I could probably build a raidz by partitioning and such, but I'd like
> to avoid the complexity. I'd probably just use zfs send/recv to send
> snapshots over or perhaps crashplan.


Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Thomas Burgess
it's not quiet by default but it can be made somewhat quieter by swapping
out the fans or going to larger fans.  It's still totally worth it.

I use smaller, silent htpc's for the actual media and connect to the norco
over gigabit.

My norco box is connected to the network with 2 link aggregated gigabit
ethernet cables.

It's very nice.


On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle  wrote:

> On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess  wrote:
>
> > I got a norco 4020 (the 4220 is good too)
> >
> > Both of those cost around 300-350 dolars.  That is a 4u case with 20 hot
> > swap bays.
>
> Typically rackmounts are not designed for quiet. He said quietness is
> #2 in his priorities...
>
> Or does the Norco unit perform quietly or have the ability to be quieter?
>


Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Thomas Burgess
yah, i can dig it.  I'd be really upset if i couldn't use my rackmount
stuff.  I love my norco box.  I'm about to build a second one using a sas
expander...but i can totally understand how noise would be a concern

at the same time, it's not NEARLY as loud as something like an ac window
unit.



On Thu, Mar 4, 2010 at 3:27 PM, Michael Shadle  wrote:

> If I had a decently ventilated closet or space to do it in I wouldn't
> mind noise, but I don't, that's why I had to build my storage machines
> the way I did.
>
> On Thu, Mar 4, 2010 at 12:23 PM, Thomas Burgess 
> wrote:
> > its not quiet by default but it can be made somewhat more quiet by
> swapping
> > out the fans or going to larger fans.  Its still totally worth it.
> >
> > I use smaller, silent htpc's for the actual media and connect to the
> norco
> > over gigabit.
> >
> > My norco box is connected to the network with 2 link aggregated gigabit
> > ethernet cables.
> >
> > It's very nice.
> >
> >
> > On Thu, Mar 4, 2010 at 3:03 PM, Michael Shadle 
> wrote:
> >>
> >> On Thu, Mar 4, 2010 at 4:12 AM, Thomas Burgess 
> wrote:
> >>
> >> > I got a norco 4020 (the 4220 is good too)
> >> >
> >> > Both of those cost around 300-350 dolars.  That is a 4u case with 20
> hot
> >> > swap bays.
> >>
> >> Typically rackmounts are not designed for quiet. He said quietness is
> >> #2 in his priorities...
> >>
> >> Or does the Norco unit perform quietly or have the ability to be
> quieter?
> >
> >
>


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-08 Thread Thomas W
Hi, it's me again.

First of all, technically slicing the drive worked like it should.

I started to experiment and found some issues I don't really understand.

My base playground setup:
- Intel D945GCLF2, 2GB ram, Opensolaris from EON
- 2 Sata Seagates 500GB

A normal zpool of the two drives to get a TB of space.
Now I added a 1 TB USB drive (I sliced it to have 500GB partitions). I attached 
them to the Sata drives to mirror them.
Worked great...
But, suddenly the throughput dropped from around 15MB/s to 300KB/s. After 
detaching the USB drives it went back to 15MB/s.

My question:
Is it possible that mixing USB 2.0 external drives and Sata drives isn't a good 
idea, or is the problem that I sliced the external drive?

After removing the USB drive I did a little benchmarking, as I was curious how 
well the Intel system performs at all.
I wonder if this 'zpool iostat' output is okay (to me it doesn't look right):
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    178      0  22.2M      0
sumpf        804G   124G     78      0  9.85M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0
sumpf        804G   124G    257      0  32.0M      0
sumpf        804G   124G      0      0      0      0

Why are there so many zeros in this chart? No wonder I only get 15MB/s max...

Thanks for helping a Solaris beginner. Your help is very appreciated.
Thomas


Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-09 Thread Thomas W
Okay... I found the solution to my problem.

And it has nothing to do with my hard drives... it was the Realtek NIC driver. 
I read about problems and installed a new driver (I got that from the forum 
thread). And now I have about 30MB/s read and 25MB/s write performance. That's 
enough (for the beginning).

Thanks for all your input and support. 

Thomas


Re: [zfs-discuss] When to Scrub..... ZFS That Is

2010-03-13 Thread Thomas Burgess
I scrub once a week.

I think the general rule is:

once a week for consumer grade drives
once a month for enterprise grade drives.
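
If you want to automate it, a crontab entry along these lines does the job 
(pool name assumed; zpool scrub returns immediately and the scrub runs in the 
background):

0 2 * * 0 /usr/sbin/zpool scrub tank

i.e. kick off a scrub every Sunday at 02:00.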


On Sat, Mar 13, 2010 at 3:29 PM, Tony MacDoodle  wrote:

> When would it be necessary to scrub a ZFS filesystem?
> We have many "rpool", "datapool", and a NAS 7130, would you consider to
> schedule monthly scrubs at off-peak hours or is it really necessary?
>
> Thanks
>


Re: [zfs-discuss] Benchmarking Methodologies

2010-04-21 Thread Thomas Uebermeier

Ben,

never trust a benchmark you haven't faked yourself!

There are many benchmarks out there, but the question is how relevant they are
for your usage pattern. How important are single-stream benchmarks when you are
opening and closing 1000s of files per second, or when you run a DB on top of
it?
In the end there is only one benchmark that matters: the one you wrote yourself,
which simulates your application.

There are some generic ones, like dd or iozone, which give you a throughput
number, and others (bonnie, etc.) which test other functions.
In the end you need to know what is important for your usage, and whether you
care about numbers like how many snapshots you can do per second.

Writing your own benchmark in perl or a similar scripting language is quickly
done and gives you the numbers you need. In the end, storage is a complex
system and there are many variables between the write() request and the bit
being written on a piece of hardware. I wouldn't trust numbers from
syscall/sec benchmarks to be relevant in my environment.
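
For example, a crude create/write/close loop like this (a sketch; the path and
file count are placeholders) already says more about a metadata-heavy workload
than any dd run:

#!/bin/sh
# crude benchmark: how fast can we create, write and close 10000 small files?
DIR=/tank/benchtest        # placeholder: point at the pool under test
mkdir -p $DIR
time sh -c '
  i=0
  while [ $i -lt 10000 ]; do
    printf x > '$DIR'/f$i
    i=$((i+1))
  done
'
rm -rf $DIR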

Thomas


[zfs-discuss] asus pike slot

2010-05-08 Thread Thomas Burgess
I was wondering if anyone had any first hand knowledge of compatibility with
any asus pike slot expansion cards and OpenSolaris.


I would guess this should work:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042

because it's based on the lsi 1068e, but i'm curious if anyone knows for sure.
Thanks


Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-10 Thread Thomas Tornblom

2010-05-10 05:58, Bob Friesenhahn wrote:

On Sun, 9 May 2010, Edward Ned Harvey wrote:


So, Bob, rub it in if you wish. ;-) I was wrong. I knew the behavior in
Linux, which Roy seconded as "most OSes," and apparently we both
assumed the
same here, but that was wrong. I don't know if solaris and opensolaris
both
have the same swap behavior. I don't know if there's *ever* a situation
where solaris/opensolaris would swap idle processes. But there's at least
evidence that my two servers have not, or do not.


Solaris and Linux are different in many ways since they are completely
different operating systems. Solaris 2.X has never swapped processes. It
only sends dirty pages to the paging device if there is a shortage of
pages when more are requested, or if there are not enough free, but
first it will purge seldom accessed read-only pages which can easily be
restored. Zfs has changed things up again by not caching file data via
the "unified page cache" and using a specialized ARC instead. It seems
that simple paging and MMU control was found not to be smart enough.

Bob


Sorry, but this is incorrect.

Solaris (2 if you will) does indeed swap processes in case normal paging 
is deemed insufficient.


See the chapters on Soft and Hard swapping in:

http://books.google.com/books?id=r_cecYD4AKkC&pg=PA189&lpg=PA189&dq=solaris+internals+swapping&source=bl&ots=oBvgg3yAFZ&sig=lmXYtTLFWJr2JjueQVxsEylnls0&hl=sv&ei=JbXnS7nKF5L60wTtq9nTBg&sa=X&oi=book_result&ct=result&resnum=4&ved=0CCoQ6AEwAw#v=onepage&q&f=false


[zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Thomas Burgess
I was looking at building a new ZFS based server for my media files and i
was wondering if this cpu was supported...i googled and i couldn't find much
info about it.

I'm specificially looking at this motherboard:

http://www.newegg.com/Product/Product.aspx?Item=N82E16813182230



I'd hate to buy it and find out it doesn't work.


[zfs-discuss] Storage 7410 Flush ARC for filebench

2010-05-11 Thread Johnson Thomas

Hi Experts,

Need assistance on this

Customer has this query
"If there is a way to flush ARC for filebench runs without rebooting the 
system"


He is running firmware 2010.02.09.0.2,1-1.13 on the NAS 7410


Please cc me also in the reply

regards

--
====
Johnson Thomas

Technical Support Engineer
Sun Solution Centre- APAC
Global Customer Services, Sun Microsystems, Inc.
Email- johnson.tho...@sun.com
Toll Free /Hotline:
Australia:1800 555 786  New Zealand:0800 275 786 
Singapore:1800 339 2786 India:1600 425 4786






Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Thomas Burgess
the onboard sata is a secondary issue.  If i need to, i'll boot from the
onboard usb slots.  I have 2 LSI 1068e based sas controllers which i will be
using.


On Tue, May 11, 2010 at 8:40 PM, James C. McPherson wrote:

> On 12/05/10 10:32 AM, Michael DeMan wrote:
>
>> I agree on the motherboard and peripheral chipset issue.
>>
>> This, and the last generation AMD quad/six core motherboards
>>
> > all seem to use the AMD SP56x0/SP5100 chipset, which I can't
> > find much information about support on for either OpenSolaris or FreeBSD.
>
> If you can get the device driver detection utility to run
> on it, that will give you a reasonable idea.
>
>
>  Another issue is the LSI SAS2008 chipset for SAS controller
>>
> > which is frequently offered as an onboard option for many motherboards
> > as well and still seems to be somewhat of a work in progress in
> > regards to being 'production ready'.
>
> What metric are you using for "production ready" ?
> Are there features missing which you expect to see
> in the driver, or is it just "oh noes, I haven't
> seen enough big customers with it" ?
>
>
> James C. McPherson
> --
> Senior Software Engineer, Solaris
> Oracle
> http://www.jmcp.homeunix.com/blog
>


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Thomas Burgess
Well i went ahead and ordered the board.  I will report back soon with the
results... i'm pretty excited.  These CPUs seem great on paper.


On Tue, May 11, 2010 at 9:02 PM, Thomas Burgess  wrote:

> the onboard sata is a secondary issue.  If i need to, i'll boot from the
> oboard usb slots.  I have 2 LSI 1068e based sas controllers which i will be
> using.
>
>
> On Tue, May 11, 2010 at 8:40 PM, James C. McPherson 
> wrote:
>
>> On 12/05/10 10:32 AM, Michael DeMan wrote:
>>
>>> I agree on the motherboard and peripheral chipset issue.
>>>
>>> This, and the last generation AMD quad/six core motherboards
>>>
>> > all seem to use the AMD SP56x0/SP5100 chipset, which I can't
>> > find much information about support on for either OpenSolaris or
>> FreeBSD.
>>
>> If you can get the device driver detection utility to run
>> on it, that will give you a reasonable idea.
>>
>>
>>  Another issue is the LSI SAS2008 chipset for SAS controller
>>>
>> > which is frequently offered as an onboard option for many motherboards
>> > as well and still seems to be somewhat of a work in progress in
>> > regards to being 'production ready'.
>>
>> What metric are you using for "production ready" ?
>> Are there features missing which you expect to see
>> in the driver, or is it just "oh noes, I haven't
>> seen enough big customers with it" ?
>>
>>
>> James C. McPherson
>> --
>> Senior Software Engineer, Solaris
>> Oracle
>> http://www.jmcp.homeunix.com/blog
>>


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-12 Thread Thomas Burgess
This is how i understand it.
I know the network cards are well supported and i know my storage cards are
supported... the onboard sata may work and it may not.  If it does, great,
i'll use it for booting; if not, this board has 2 onboard bootable USB
sticks... luckily usb seems to work regardless



On Wed, May 12, 2010 at 1:18 AM, Geoff Nordli  wrote:

>
>
> On Behalf Of James C. McPherson
> >Sent: Tuesday, May 11, 2010 5:41 PM
> >
> >On 12/05/10 10:32 AM, Michael DeMan wrote:
> >> I agree on the motherboard and peripheral chipset issue.
> >>
> >> This, and the last generation AMD quad/six core motherboards
> > > all seem to use the AMD SP56x0/SP5100 chipset, which I can't  > find
> much
> >information about support on for either OpenSolaris or FreeBSD.
> >
> >If you can get the device driver detection utility to run on it, that will
> give you a
> >reasonable idea.
> >
> >> Another issue is the LSI SAS2008 chipset for SAS controller
> > > which is frequently offered as an onboard option for many motherboards
> > as
> >well and still seems to be somewhat of a work in progress in  > regards to
> being
> >'production ready'.
> >
> >What metric are you using for "production ready" ?
> >Are there features missing which you expect to see in the driver, or is it
> just "oh
> >noes, I haven't seen enough big customers with it" ?
> >
> >
>
> I have been wondering what the compatibility is like on OpenSolaris.  My
> perception is basic network driver support is decent, but storage
> controllers are more difficult for driver support.
>
> My perception is if you are using external cards which you know work for
> networking and storage, then you should be alright.
>
> Am I out in left-field on this?
>
> Thanks,
>
> Geoff
>
>


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-12 Thread Thomas Burgess
>
>
>>
> Now wait just a minute. You're casting aspersions on
> stuff here without saying what you're talking about,
> still less where you're getting your info from.
>
> Be specific - put up, or shut up.
>
>
I think he was just trying to tell me that my cpu should be fine, that the
only thing which i might have to worry about is network and disk drivers.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-13 Thread Thomas Burgess
I ordered it.  It should be here monday or tuesday.  When i get everything
built and installed, i'll report back.  I'm very excited.  I am not
expecting problems now that i've talked to supermicro about it.  Solaris 10
runs for them so i would imagine opensolaris should be fine too.

On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> Great! Please report here so we can read about your impressions.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-15 Thread Thomas Burgess
Well i just wanted to let everyone know that preliminary results are good.
 The livecd booted, and all the important things seem to be recognized. It sees
all 16 gb of ram i installed and all 8 cores of my opteron 6128.

The only real shocker is how loud the norco RPC-4220 fans are (i have
another machine with a norco 4020 case so i assumed the fans would be
similar... this was a BAD assumption).  This thing sounds like a hair dryer.

Anyways, I'm running the install now so we'll see how that goes. It did take
about 10 minutes to "find a disk" during the installer, but if i remember
right, this happened on other machines as well.


On Thu, May 13, 2010 at 9:56 AM, Thomas Burgess  wrote:

> I ordered it.  It should be here monday or tuesday.  When i get everything
> built and installed, i'll report back.  I'm very excited.  I am not
> expecting problems now that i've talked to supermicro about it.  Solaris 10
> runs for them so i would imagine opensolaris should be fine too.
>
> On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar <
> knatte_fnatte_tja...@yahoo.com> wrote:
>
>> Great! Please report here so we can read about your impressions.


Re: [zfs-discuss] ZFS home server (Brandon High)

2010-05-15 Thread Thomas Burgess
The Intel SASUC8I is a pretty good deal: around 150 dollars for 8 sas/sata
channels.  This card is identical to the LSI SAS3081E-R for a lot less
money.  It doesn't come with cables, but this leaves you free to buy the
type you need (in my case, i needed SFF-8087 to SFF-8087 cables; some people
will need SFF-8087 to 4x sata breakout cables... either way, cables run 12-20
dollars each, and each card needs 2, so you can tack that on to the
price).  These cards also work well with expanders.

They are based on the LSI 1068e chip.


On Sat, May 15, 2010 at 1:41 PM, Roy Sigurd Karlsbakk wrote:

> - "Annika"   skrev:
>
> > I'm also about to set up a small home server. This little box
> > http://www.fractal-design.com/?view=product&prod=39
> > is able housing six 3,5" hdd's and also has one 2,5" bay, eg for an
> > ssd.
> > Fine.
> >
> > I need to know which SATA controller cards (both PCI and PCI-E) are
> > supported in OS, also I'd be grateful for tips on which ones to use in
> > a
> > non-pro environment.
>
> See http://www.sun.com/bigadmin/hcl/data/os/ for supported hardware. There
> was also a post in her yesterday or perhaps earlier today about the choice
> of SAS/SATA controllers. Most will do in a home server environment, though.
> AOC-SAT2-MV8 are great controllers, but run on PCI-X, which isn't very
> compatible with PCI Express
>
> Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> r...@karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy it is essential that the curriculum be presented
> intelligibly. It is an elementary imperative for all pedagogues to avoid
> excessive use of idioms of foreign origin. In most cases adequate and
> relevant synonyms exist in Norwegian.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-16 Thread Thomas Burgess
well, i haven't had a lot of time to work with this... but i'm having trouble
getting the onboard sata to work in anything but NATIVE IDE mode.


I'm not sure exactly what the problem is... i'm wondering if i bought the
wrong cable (i have a norco 4220 case so the drives connect via a sas
sff-8087 on the backplane)

I thought this required a "reverse breakout cable" but maybe i was
wrong... this is the first time i've worked with sas.

on the other hand, I was able to flash my Intel SASUC8I cards with the
LSI SAS3081E IT firmware from the LSI site.  These seem to work fine.  I
think i'm just going to order a 3rd card and put it in the pci-e x4 slot.  I
don't want 16 drives running as sata and 4 running in IDE mode.  Is there
any way i can tell if the drive i installed opensolaris to is in IDE or SATA
mode?



On Thu, May 13, 2010 at 4:43 AM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> Great! Please report here so we can read about your impressions.


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
hey, when i do this single user boot, is there any way to capture what pops
up on the screen?  It's a LOT of stuff.


anyways, it seems to work fine when i do singleuser -srv

cpustat -h lists exactly what you said it should plus a lot more (though the
"more" is above, so like you said, it shows what it should show at the
bottom)


I'll capture all that later and post it.


On Sat, May 15, 2010 at 8:35 PM, Dennis Clarke wrote:

> - Original Message -
> From: Thomas Burgess 
> Date: Saturday, May 15, 2010 8:09 pm
> Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
> To: Orvar Korvar 
> Cc: zfs-discuss@opensolaris.org
>
>
> > Well i just wanted to let everyone know that preliminary results are
> good.
> >  The livecd booted, all important things seem to be recognized. It
> > sees all
> > 16 gb of ram i installed and all 8 cores of my opteron 6128
> >
> > The only real shocker is how loud the norco RPC-4220 fans are (i have
> > another machine with a norco 4020 case so i assumed the fans would be
> > similar... this was a BAD assumption)  This thing sounds like a hair
> > dryer
> >
> > Anyways, I'm running the install now so we'll see how that goes. It
> > did take
> > about 10 minutes to "find a disk" during the installer, but if i
> remember
> > right, this happened on other machines as well.
> >
>
> Once you have the install done could you post ( somewhere ) what you see
> during a single user mode boot with options -srv ?
>
> I would like to see all the gory details.
>
> Also, could you run "cpustat -h" ?
>
> At the bottom, according to usr/src/uts/intel/pcbe/opteron_pcbe.c you should
> see :
>
> See "BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h
> Processors" (AMD publication 31116)
>
> The following registers should be listed :
>
>  #define AMD_FAMILY_10h_generic_events                               \
>     { "PAPI_tlb_dm",  "DC_dtlb_L1_miss_L2_miss",  0x7 },  \
>     { "PAPI_tlb_im",  "IC_itlb_L1_miss_L2_miss",  0x3 },  \
>     { "PAPI_l3_dcr",  "L3_read_req",              0xf1 }, \
>     { "PAPI_l3_icr",  "L3_read_req",              0xf2 }, \
>     { "PAPI_l3_tcr",  "L3_read_req",              0xf7 }, \
>     { "PAPI_l3_stm",  "L3_miss",                  0xf4 }, \
>     { "PAPI_l3_ldm",  "L3_miss",                  0xf3 }, \
>     { "PAPI_l3_tcm",  "L3_miss",                  0xf7 }
>
>
> You should NOT see anything like this :
>
> r...@aequitas:/root# uname -a
> SunOS aequitas 5.11 snv_139 i86pc i386 i86pc Solaris
> r...@aequitas:/root# cpustat -h
> cpustat: cannot access performance counters - Operation not applicable
>
>
> ... as well as psrinfo -pv please ?
>
>
> When I get my HP Proliant with the 6174 procs I'll be sure to post whatever
> I see.
>
> Dennis
>


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
psrinfo -pv shows:


The physical processor has 8 virtual processors (0-7)
x86  (AuthenticAMD 100F91 family 16 model 9 step 1 clock 2000 MHz)
   AMD Opteron(tm) Processor 6128   [  Socket: G34 ]




On Sat, May 15, 2010 at 8:35 PM, Dennis Clarke wrote:

> - Original Message -
> From: Thomas Burgess 
> Date: Saturday, May 15, 2010 8:09 pm
> Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
> To: Orvar Korvar 
> Cc: zfs-discuss@opensolaris.org
>
>
> > Well i just wanted to let everyone know that preliminary results are
> good.
> >  The livecd booted, all important things seem to be recognized. It
> > sees all
> > 16 gb of ram i installed and all 8 cores of my opteron 6128
> >
> > The only real shocker is how loud the norco RPC-4220 fans are (i have
> > another machine with a norco 4020 case so i assumed the fans would be
> > similar... this was a BAD assumption)  This thing sounds like a hair
> > dryer
> >
> > Anyways, I'm running the install now so we'll see how that goes. It
> > did take
> > about 10 minutes to "find a disk" during the installer, but if i
> remember
> > right, this happened on other machines as well.
> >
>
> Once you have the install done could you post ( somewhere ) what you see
> during a single user mode boot with options -srv ?
>
> I would like to see all the gory details.
>
> Also, could you run "cpustat -h" ?
>
> At the bottom, according to usr/src/uts/intel/pcbe/opteron_pcbe.c you should
> see :
>
> See "BIOS and Kernel Developer's Guide (BKDG) For AMD Family 10h
> Processors" (AMD publication 31116)
>
> The following registers should be listed :
>
>  #define AMD_FAMILY_10h_generic_events                               \
>     { "PAPI_tlb_dm",  "DC_dtlb_L1_miss_L2_miss",  0x7 },  \
>     { "PAPI_tlb_im",  "IC_itlb_L1_miss_L2_miss",  0x3 },  \
>     { "PAPI_l3_dcr",  "L3_read_req",              0xf1 }, \
>     { "PAPI_l3_icr",  "L3_read_req",              0xf2 }, \
>     { "PAPI_l3_tcr",  "L3_read_req",              0xf7 }, \
>     { "PAPI_l3_stm",  "L3_miss",                  0xf4 }, \
>     { "PAPI_l3_ldm",  "L3_miss",                  0xf3 }, \
>     { "PAPI_l3_tcm",  "L3_miss",                  0xf7 }
>
>
> You should NOT see anything like this :
>
> r...@aequitas:/root# uname -a
> SunOS aequitas 5.11 snv_139 i86pc i386 i86pc Solaris
> r...@aequitas:/root# cpustat -h
> cpustat: cannot access performance counters - Operation not applicable
>
>
> ... as well as psrinfo -pv please ?
>
>
> When I get my HP Proliant with the 6174 procs I'll be sure to post whatever
> I see.
>
> Dennis
>


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
no... it doesn't.  The only sata ports that show up are the ones connected
to the backplane via the reverse breakout sas cable... and they show as
empty... so i'm thinking that opensolaris isn't working with the chipset
sata.

In the bios i can select from:

Native IDE
AMD_AHCI
RAID
Legacy IDE


I have it set to AMD_AHCIbut my board also has an IDE slot which i was
using for the CDROM drive (this is what i used to load opensolaris in the
first place)

I also have an option called "SATA IDE combined mode".

I think this may be my problem... i had this enabled because i thought i
needed it in order to use both sata and ide... i think now it's something
else.


I'm going to try to boot without it on; if that doesn't work, i'll try to
reinstall with it disabled.



On Sun, May 16, 2010 at 8:18 PM, Ian Collins  wrote:

> On 05/17/10 12:08 PM, Thomas Burgess wrote:
>
>> well, i haven't had a lot of time to work with this...but i'm having
>> trouble getting the onboard sata to work in anything but NATIVE IDE mode.
>>
>>
>> I'm not sure exactly what the problem is... i'm wondering if i bought the
>> wrong cable (i have a norco 4220 case so the drives connect via a sas
>> sff-8087 on the backplane)
>>
>> I thought this required a "reverse breakout cable" but maybe i was
>> wrong... this is the first time i've worked with sas
>>
>> on the other hand, I was able to flash my Intel SASUC8I cards with
>> the LSI SAS3081E IT firmware from the LSI site.  These seem to work fine.  I
>> think i'm just going to order a 3rd card and put it in the pci-e x4 slot.  I
>> don't want 16 drives running as sata and 4 running in IDE mode.  Is there
>> any way i can tell if the drive i installed opensolaris to is in IDE or SATA
>> mode?
>
> Does it show up in cfgadm?
>
> --
> Ian.
>
>


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-17 Thread Thomas Burgess
ok, well this was part of the problem.

I disabled the Sata IDE combined mode and reinstalled opensolaris (i tried
to just disable it but osol wouldn't boot)


now the SSD DOES show up in cfgadm, so it seems to be
in sata mode... but the drives connected to the reverse breakout cable still
don't show up.

On the bright side, the drives connected to my SAS cards (through the same
backplane, with a standard sff-8087 to sff-8087 cable) DO show up.


So, now i just need to figure out why these 4 drives aren't showing up.

(my case is the norco RPC-4220; i thought i'd be ok with 2 SAS cards (8 sata
ports each) and then 4 of the onboard ports using the reverse breakout
cable... something must be wrong with the cable... i'll test the 2 drives
connected directly in a bit... i have to take everything apart to do that)

On Mon, May 17, 2010 at 4:04 PM, Brandon High  wrote:

> On Mon, May 17, 2010 at 12:51 PM, Thomas Burgess 
> wrote:
> > In the bios i can select from:
> > Native IDE
> > AMD_AHCI
>
> This is probably what you want. AHCI is supposed to be chipset agnostic.
>
> > I also have an option called  "SATA IDE combined mode"
>
> See if there's anything in the docs about what this actually does. You
> might need it to use the PATA port, but it could be what's messing
> things up. If you can't use the cdrom, maybe install from a thumb
> drive or usb crdrom. (My ASUS M2N-LR board refuses to boot from a
> thumb drive. Likewise with a friend's Supermicro Intel board. Both
> work fine from a usb cdrom.)
>
> > I think this may be my problem...i had this enabled, because i thought i
> > needed it in order to use both sata and idei think now it's something
> > else.
>
> I think so. It makes the first 4 ports look like IDE drives (two
> channels, two drives per channel) and the remaining BIOS RAID or AHCI.
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>


Re: [zfs-discuss] Strategies for expanding storage area of home storage-server

2010-05-17 Thread Thomas Burgess
I'd have to agree.  Option 2 is probably the best.

I recently found myself in need of more space... i had to build an entirely
new server... my first one was close to full (it has 20 1TB drives in 3
raidz2 groups of 7/7/6 and i was down to 3 TB free).  I ended up going with a
whole new server... with 2TB drives this time... I considered replacing the
drives in my current server with new 2TB drives, but for the money it made
more sense to keep that server online and build a second.

That's where i am now... If i could have done what you are looking to do, it
would have been a lot easier.
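
For reference, a minimal sketch of what option 2 looks like on the command line
(pool and device names are placeholders):

# zpool create newtank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# zfs snapshot -r oldtank@migrate
# zfs send -R oldtank@migrate | zfs recv -d newtank
# zpool destroy oldtank
# zpool add newtank raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0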

On Mon, May 17, 2010 at 11:29 AM, Freddie Cash  wrote:

> On Mon, May 17, 2010 at 6:25 AM, Andreas Gunnarsson wrote:
>
>> I've got a home-storage-server setup with Opensolaris (currently dev build
>> 134) that is quickly running out of storage space, and I'm looking through
>> what kind of options I have for expanding it.
>>
>> I currently have my "storage-pool" in a 4x 1TB drive setup in RAIDZ1, and
>> have room for 8-9 more drives in the case/controllers.
>> Preferably I'd like to change it all to a RAIDZ2 with 12 drives, and 1
>> hotspare, but that would require me to transfer out all the data to an
>> external storage, and then recreating a new pool, which would require me
>> buying some additional external storage that will not be used after I'm done
>> with the transfer.
>>
>> I could also add 2 more 4 drive vdevs to the current pool, but then I
>> would have 3 RAIDZ1 vdevs striped, and I'm not entirely sure that I'm
>> comfortable with that level of protection on the data.
>>
>> Another version would be creating a 6 drive RAIDZ2 pool, moving the data
>> to that one and the destroying the old pool and adding another 6 drive vdev
>> to the new pool (striped).
>>
>> So the question is what would you recommend for growing my storage space:
>> 1. Buying extra hardware to copy the data to, and rebuild the pool as a 12
>> drive RAIDZ2.
>> 2. Move data to a 6 drive RAIDZ2 and then destroy the old pool and stripe
>> an additional RAIDZ2 vdevs.
>> 3. Stripe 2 additional RAIDZ1 4 drive vdevs.
>> 4. Something else.
>
>
> I'd go with option 2.
>
> Create a 6-drive raidz2 vdev in a separate pool.  Migrate the data from the
> old pool to the new pool.  Destroy the old pool.  Create a second 6-drive
> raidz2 vdev in the new pool.  Voila!  You'll have a lot of extra space, be
> able to withstand up to 4 drive failures (2 per vdev), and it should be
> faster as well (even with the added overhead of raidz2).
>
> Option 3 would give the best performance, but you don't have much leeway in
> terms of resilver time if using 1 TB+ drives, and if a second drive fails
> while the first is resilvering ...
>
> Option 1 would be horrible in terms of performance.  Especially resilver
> times, as you'll be thrashing 12 drives.
>
> --
> Freddie Cash
> fjwc...@gmail.com
>


Re: [zfs-discuss] Pool revcovery from replaced disks.

2010-05-18 Thread Thomas Burgess
wow, that's a truly excellent question.

If you COULD do it, it might work with a simple import...

but i have no idea... i'd love to know myself.


On Tue, May 18, 2010 at 7:06 AM, Demian Phillips
wrote:

> Is it possible to recover a pool (as it was) from a set of disks that
> were replaced during a capacity upgrade?


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-18 Thread Thomas Burgess
A really great alternative to the UIO cards, for those who don't want the
headache of modifying the brackets or cases, is the Intel SASUC8I.

This is a rebranded LSI SAS3081E-R.

It can be flashed with the LSI IT firmware from the LSI website and is
physically identical to the LSI card.  It is really the exact same card, and
typically around 140-160 dollars.

These are what i went with.
On Tue, May 18, 2010 at 12:28 PM, Marc Bevand  wrote:

> Marc Nicholas  gmail.com> writes:
> >
> > Nice write-up, Marc.Aren't the SuperMicro cards their funny "UIO" form
> > factor? Wouldn't want someone buying a card that won't work in a standard
> > chassis.
>
> Yes, 4 or the 6 Supermicro cards are UIO cards. I added a warning about it.
> Thanks.
>
> -mrb
>


[zfs-discuss] send/recv over ssh

2010-05-20 Thread Thomas Burgess
I know i'm probably doing something REALLY stupid... but for some reason i
can't get send/recv to work over ssh.  I just built a new media server and
i'd like to move a few filesystems from my old server to my new server, but
for some reason i keep getting strange errors...

At first i'd see something like this:


pfexec: can't get real path of ``/usr/bin/zfs''


or something like this:


zfs: Command not found


from googling, there's some mention of nfs but i've disabled autofs..

anyways, thanks for any help...i know it is just something stupid but my
brain just isn't working...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] send/recv over ssh

2010-05-20 Thread Thomas Burgess
also, i forgot to say:


one server is b133, the new one is b134



On Thu, May 20, 2010 at 4:23 PM, Thomas Burgess  wrote:

> I know i'm probably doing something REALLY stupid...but for some reason i
> can't get send/recv to work over ssh.  I just built a new media server and
> i'd like to move a few filesystems from my old server to my new server but
> for some reason i keep getting strange errors...
>
> At first i'd see something like this:
>
>
> pfexec: can't get real path of ``/usr/bin/zfs''
>
>
> or something like this:
>
>
> zfs: Command not found
>
>
> from googling, there's some mention of nfs but i've disabled autofs..
>
> anyways, thanks for any help...i know it is just something stupid but my
> brain just isn't working...
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] send/recv over ssh

2010-05-21 Thread Thomas Burgess
I seem to be getting decent speed with arcfour (this was what i was using to
begin with)
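
For anyone searching later: picking the cipher is just a flag on the ssh side
of the pipe, roughly like this (dataset/user/host names are placeholders):

zfs send tank/media@snap | ssh -c arcfour user@host "pfexec /usr/sbin/zfs recv -d tank"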

Thanks for all the help...this honestly was just me being stupid...looking
back on yesterday, i can't even remember what i was doing wrong now...i was
REALLY tired when i asked this question.


On Fri, May 21, 2010 at 2:43 PM, Brandon High  wrote:

> On Fri, May 21, 2010 at 11:28 AM, David Dyer-Bennet  wrote:
> > I thought I remembered a "none" cipher, but couldn't find it the other
> > year and decided I must have been wrong.  I did use ssh-1, so maybe I
> > really WAS remembering after all.
>
> It may have been in ssh2 as well, or at least the commercial version
> .. I thought it used to be a compile time option for openssh too.
>
> > Seems a high price to pay to try to protect idiots from being idiots.
> > Anybody who doesn't understand that "encryption = none" means it's not
> > encrypted and hence not safe isn't safe as an admin anyway.
>
> Well, it won't expose your passwords since the key exchange it still
> encrypted ... That's good, right?
>
> Circling back to the original topic, you can use ssh to start up
> mbuffer on the remote side, then start the send. Something like:
>
> #!/bin/bash
>
> ssh -f r...@${recv_host} "mbuffer -q -I ${SEND_HOST}:1234 | zfs recv
> puddle/tank"
> sleep 1
> zfs send -R tank/foo/bar | mbuffer -O ${RECV_HOST}:1234
>
>
> When I was moving datasets between servers, I was on the console of
> both, so manually starting the send/recv was not a problem.
>
> I've tried doing it with netcat rather than mbuffer but it was
> painfully slow, probably due to network buffers. ncat (from the nmap
> devs) may be a suitable alternative, and can support ssl and
> certificate based auth.
>
> -B
>
> --
> Brandon High : bh...@freaks.com
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
 8:10:12:15:20
supported_max_cstates   0
vendor_id   AuthenticAMD

module: cpu_info                        instance: 7
name:   cpu_info7                       class:    misc
brand   AMD Opteron(tm) Processor 6128
cache_id7
chip_id 0
clock_MHz   2000
clog_id 7
core_id 7
cpu_typei386
crtime  9171.560266487
current_clock_Hz20
current_cstate  0
family  16
fpu_typei387 compatible
implementation  x86 (chipid 0x0 AuthenticAMD 100F91
family 16 model 9 step 1 clock 2000 MHz)
model   9
ncore_per_chip  8
ncpu_per_chip   8
pg_id   11
pkg_core_id 7
snaptime        113230.737322698
socket_type G34
state   on-line
state_begin 1274377645
stepping1
supported_frequencies_Hz
 8:10:12:15:20
supported_max_cstates   0
vendor_id   AuthenticAMD


On Mon, May 17, 2010 at 5:55 PM, Dennis Clarke wrote:

>
> >On 05-17-10, Thomas Burgess  wrote:
> >psrinfo -pv shows:
> >
> >The physical processor has 8 virtual processors (0-7)
> >x86  (AuthenticAMD 100F91 family 16 model 9 step 1 clock 2000 MHz)
> >   AMD Opteron(tm) Processor 6128   [  Socket: G34 ]
> >
>
> That's odd.
>
> Please try this :
>
> # kstat -m cpu_info -c misc
> module: cpu_info                        instance: 0
> name:   cpu_info0                       class:    misc
>brand   VIA Esther processor 1200MHz
>cache_id0
>chip_id 0
>clock_MHz   1200
>clog_id 0
>core_id 0
>cpu_typei386
>crtime  3288.24125364
>current_clock_Hz        1199974847
>current_cstate  0
>family  6
>fpu_typei387 compatible
>implementation  x86 (CentaurHauls 6A9 family 6 model
> 10 step 9 clock 1200 MHz)
>model   10
>ncore_per_chip  1
>ncpu_per_chip   1
>pg_id   -1
>pkg_core_id 0
>snaptime        1526742.97169617
>socket_type Unknown
>state   on-line
>state_begin 1272610247
>stepping9
>supported_frequencies_Hz        1199974847
>supported_max_cstates   0
>vendor_id   CentaurHauls
>
> You should get a LOT more data.
>
> Dennis
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
Something i've been meaning to ask

I'm transferring some data from my older server to my newer one.  the older
server has a socket 775 intel Q9550, 8 gb ddr2 800, and 20 1TB drives in
raidz2 (3 vdevs, 2 with 7 drives, one with 6) connected to 3 AOC-SAT2-MV8
cards, spread as evenly across them as i could

The new server is socket g34 based with the opteron 6128 8 core cpu with 16
gb ddr3 1333 ECC ram with 10 2TB drives (so far) in a single raidz2 vdev
connected to 3 LSI SAS3081E-R cards (flashed with IT firmware)

I'm sure this is due to something i don't understand, but during zfs
send/recv from the old server to the new server (3 send/recv streams) I'm
noticing the loadavg on the old server is much less than the new one

this is form top on the old server:

load averages:  1.58,  1.57,  1.37;   up 5+05:13:17
 04:52:42


and this is the newer server

load averages:  6.20,  5.98,  5.30;   up 1+05:03:02
 18:49:57




shouldn't the newer server have LESS load?

Please forgive my ubernoobness.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
is 3 zfs recv's random?



On Fri, May 21, 2010 at 10:03 PM, Brandon High  wrote:

> On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess 
> wrote:
> > shouldn't the newer server have LESS load?
> > Please forgive my ubernoobness.
>
> Depends on what it's doing!
>
> Load average is really how many process are waiting to run, so it's
> not always a useful metric. If there are processes waiting on disk,
> you can have high load with almost no cpu use. Check the iowait with
> iostat or top.
>
> You've got a pretty wide stripe, which isn't going to give the best
> performance, especially for random write workloads. Your old 3 vdev
> config will have better random write performance.
>
> Check to see what's using the CPU with top or prstat. prstat gives
> better info for threads, imo.
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
yeah, i'm aware of the performance aspects.  I use these servers as mostly
hd video servers for my house...they don't need to perform amazingly.  I
originally went with the setup on the old server because of everything i had
read about performance with wide stripes...in all honesty it performed
amazingly well, much more than i truly need...i plan to have 2 raidz2
stripes of 10 drives in this server (new one).

At most it will be serving 4-5 HD streams (mostly 720p mkv files, with some
1080p as well)

The older server can EASILY max out 2 Gb/s links...i imagine the new server
will be able to do this as well...i think a scrub of the old server takes
4-5 hours...i'm not sure what this equates to in MB/s but it's WAY more
than i ever really need.

This is what led me to use wider stripes in the new server, and i'm honestly
considering redoing the old server as well; if i switched to 2 wider
stripes instead of 3 i'd gain another TB or two...for my use i don't think
that would be a horrible thing.


On Fri, May 21, 2010 at 10:03 PM, Brandon High  wrote:

> On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess 
> wrote:
> > shouldn't the newer server have LESS load?
> > Please forgive my ubernoobness.
>
> Depends on what it's doing!
>
> Load average is really how many process are waiting to run, so it's
> not always a useful metric. If there are processes waiting on disk,
> you can have high load with almost no cpu use. Check the iowait with
> iostat or top.
>
> You've got a pretty wide stripe, which isn't going to give the best
> performance, especially for random write workloads. Your old 3 vdev
> config will have better random write performance.
>
> Check to see what's using the CPU with top or prstat. prstat gives
> better info for threads, imo.
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
I can't tell you for sure

For some reason the server lost power and it's taking forever to come back
up.

(i'm really not sure what happened)

anyways, this leads me to my next couple questions:


Is there any way to "resume" a zfs send/recv?

Why is it taking so long for the server to come up?
it's stuck on "Reading ZFS config"

and there is a FLURRY of hard drive lights blinking (all 10 in sync )



On Sat, May 22, 2010 at 12:26 AM, Brandon High  wrote:

> On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess 
> wrote:
> > is 3 zfs recv's random?
>
> It might be. What do a few reports of 'iostat -xcn 30' look like?
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
yah, it seems that rsync is faster for what i need anyways...at least right
now...


On Sat, May 22, 2010 at 1:07 AM, Ian Collins  wrote:

> On 05/22/10 04:44 PM, Thomas Burgess wrote:
>
>> I can't tell you for sure
>>
>> For some reason the server lost power and it's taking forever to come back
>> up.
>>
>> (i'm really not sure what happened)
>>
>> anyways, this leads me to my next couple questions:
>>
>>
>> Is there any way to "resume" a zfs send/recv
>>
>>  Nope.
>
>
>  Why is it taking so long for the server to come up?
>> it's stuck on "Reading ZFS config"
>>
>> and there is a FLURRY of hard drive lights blinking (all 10 in sync )
>>
>>  It's cleaning up the mess.  If you had a lot of data copied over, it'll
> take a while deleting it!
>
> --
> Ian.
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
    0.2    4.2    1.5  13  17 c6t4d0
   55.7    2.0  3821.3    91.1  0.3  0.2    4.7    3.0   6  10 c6t5d0
   81.2    2.0  5866.7    91.2  0.2  0.4    1.9    5.2   5  14 c6t6d0
    0.9  227.2    23.4 28545.1  4.7  0.6   20.4    2.8  63  64 c8t5d0
    0.0    0.0     0.0     0.0  0.0  0.0    0.0    0.0   0   0 c4t7d0
     cpu
 us sy wt id
 39 32  0 29
                    extended device statistics
    r/s    w/s    kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0     0.0     0.0  0.0  0.0    0.0    0.0   0   0 fd0
    1.5    2.4    35.4    33.6  0.0  0.0    3.6    1.0   0   0 c8t1d0
  105.8    1.9  5560.1    95.5  0.3  0.3    2.7    2.9   8  16 c5t0d0
  109.6    2.5  5546.4    95.6  0.0  0.5    0.0    4.3   0  13 c4t0d0
  110.8    2.6  5504.7    95.4  0.3  0.3    2.2    2.6   7  15 c4t1d0
  104.6    2.4  5596.9    95.5  0.0  0.6    0.0    5.4   0  15 c5t1d0
  109.9    2.2  5522.1    86.1  0.2  0.3    2.0    2.5   7  14 c4t2d0
  104.6    1.9  5533.6    86.2  0.3  0.3    2.5    3.1   7  16 c5t2d0
  109.2    2.7  5498.4    86.1  0.2  0.3    2.1    2.4   7  14 c4t3d0
  105.3    2.9  5593.8    95.5  0.0  0.6    0.0    5.1   0  15 c5t3d0
   57.8    1.9  3938.4    90.7  0.2  0.1    3.5    1.5   6   9 c4t5d0
   50.8    2.3  3298.6    90.8  0.0  0.3    0.0    5.2   0   8 c5t4d0
  105.0    2.6  5541.2    86.1  0.4  0.2    3.7    1.4  11  15 c5t5d0
   90.8    2.3  6376.7    90.7  0.2  0.3    2.4    3.1   6  13 c5t6d0
   87.4    1.8  6085.2    90.6  0.0  0.5    0.0    5.4   0  13 c5t7d0
  104.2    2.4  5550.8    86.1  0.0  0.5    0.0    5.0   0  14 c6t0d0
  106.8    2.4  5543.6    95.5  0.0  0.6    0.0    5.5   0  16 c6t1d0
  105.4    2.5  5517.5    86.1  0.4  0.2    3.8    1.4  12  16 c6t2d0
  106.6    2.4  5569.1    95.6  0.0  0.5    0.0    5.0   0  15 c6t3d0
  107.2    2.2  5536.4    86.1  0.2  0.3    2.1    2.8   7  15 c6t4d0
   61.2    2.4  4085.2    90.7  0.0  0.3    0.0    5.4   0  10 c6t5d0
   70.3    1.8  5018.2    90.7  0.3  0.1    4.7    1.7   9  12 c6t6d0
    0.8  203.3    12.3 25514.5  3.9  0.6   19.2    2.7  54  55 c8t5d0
    0.0    0.0     0.0     0.0  0.0  0.0    0.0    0.0   0   0 c4t7d0
     cpu
 us sy wt id
 38 30  0 32
                    extended device statistics
    r/s    w/s    kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0     0.0     0.0  0.0  0.0    0.0    0.0   0   0 fd0
    2.2    2.5    64.2    35.2  0.0  0.0    3.3    0.9   0   0 c8t1d0
   98.6    3.1  5441.3   110.3  0.0  0.6    0.0    5.9   0  16 c5t0d0
  102.1    3.7  5392.7   110.2  0.0  0.5    0.0    4.3   0  13 c4t0d0
  104.1    3.3  5390.7   110.4  0.0  0.5    0.0    5.0   0  15 c4t1d0
   98.2    3.0  5437.3   110.2  0.0  0.5    0.0    5.1   0  14 c5t1d0
  104.7    3.8  5437.3   104.5  0.0  0.5    0.0    4.8   0  15 c4t2d0
   97.7    3.4  5481.1   104.6  0.0  0.6    0.0    6.0   0  16 c5t2d0
  103.1    3.4  5468.4   104.6  0.0  0.6    0.0    5.2   0  15 c4t3d0
   98.7    3.0  5415.2   110.3  0.0  0.5    0.0    5.1   0  14 c5t3d0
   55.7    3.1  3883.4    93.7  0.1  0.1    2.0    2.5   4   8 c4t5d0
   44.5    2.9  3141.2    93.6  0.0  0.3    0.0    5.5   0   7 c5t4d0
   99.2    3.3  5464.0   104.5  0.4  0.2    4.2    1.5  12  15 c5t5d0
   82.3    2.8  6119.3    93.4  0.0  0.5    0.0    6.4   0  14 c5t6d0
   75.2    2.7  5601.1    93.4  0.1  0.4    1.7    4.8   3  13 c5t7d0
   97.8    3.1  5458.8   104.5  0.0  0.5    0.0    5.2   0  14 c6t0d0
   99.2    3.2  5441.5   110.2  0.0  0.6    0.0    5.8   0  16 c6t1d0
   98.4    3.0  5475.7   104.6  0.3  0.4    3.0    3.5   8  17 c6t2d0
   99.8    3.0  5434.4   110.1  0.0  0.5    0.0    5.1   0  14 c6t3d0
  100.6    3.2  5453.9   104.6  0.0  0.6    0.0    5.5   0  15 c6t4d0
   54.9    3.0  3878.1    93.5  0.1  0.2    1.5    4.2   3   9 c6t5d0
   68.4    2.9  5128.3    93.5  0.2  0.3    3.1    4.2   6  13 c6t6d0
    0.9  201.9    34.2 25338.0  3.8  0.5   18.9    2.6  51  52 c8t5d0
    0.0    0.0     0.0     0.0  0.0  0.0    0.0    0.0   0   0 c4t7d0


On Sat, May 22, 2010 at 12:26 AM, Brandon High  wrote:

> On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess 
> wrote:
> > is 3 zfs recv's random?
>
> It might be. What do a few reports of 'iostat -xcn 30' look like?
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-21 Thread Thomas Burgess
well it wasn't.

it was running pretty slow.

i had one "really big" filesystem...with rsync i'm able to do multiple
streams and it's moving much faster


On Sat, May 22, 2010 at 1:45 AM, Ian Collins  wrote:

> On 05/22/10 05:22 PM, Thomas Burgess wrote:
>
>> yah, it seems that rsync is faster for what i need anyways...at least
>> right now...
>>
>>  ZFS send/receive should run at wire speed for a Gig-E link.
>
> Ian.
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Thomas Burgess
yah, unfortunately this is the first send.  i'm trying to send 9 TB of data.
 It really sucks because i was at 6 TB when it lost power
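
For next time: my understanding is that once one full snapshot does make it
across, a later interruption can pick up from there with an incremental send
instead of starting over, something like (names are placeholders):

zfs snapshot tank/media@later
zfs send -i tank/media@done tank/media@later | ssh user@host "pfexec /usr/sbin/zfs recv tank/media"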

On Sat, May 22, 2010 at 2:34 AM, Brandon High  wrote:

> You can "resume" a send if the destination has a snapshot in common with
> the source. If you don't, there's nothing you can do.
>
> It probably taking a while to restart because the sends that were
> interrupted need to be rolled back.
>
> Sent from my Nexus One.
>
> On May 21, 2010 9:44 PM, "Thomas Burgess"  wrote:
>
> I can't tell you for sure
>
> For some reason the server lost power and it's taking forever to come back
> up.
>
> (i'm really not sure what happened)
>
> anyways, this leads me to my next couple questions:
>
>
> Is there any way to "resume" a zfs send/recv
>
> Why is it taking so long for the server to come up?
> it's stuck on "Reading ZFS config"
>
> and there is a FLURRY of hard drive lights blinking (all 10 in sync )
>
>
>
>
>
> On Sat, May 22, 2010 at 12:26 AM, Brandon High  wrote:
> >
> > On Fri, May 21, 201...
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Thomas Burgess
install smartmontools


There is no package for it but it's EASY to install

once you do, you can get output like this:


pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.12 family
Device Model:     ST31000528AS
Serial Number:    6VP06FF5
Firmware Version: CC34
User Capacity:    1,000,204,886,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Sat May 22 11:15:50 2010 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (   0) The previous self-test routine
completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection:  ( 609) seconds.
Offline data collection
capabilities:  (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time:  (   1) minutes.
Extended self-test routine
recommended polling time:  ( 192) minutes.
Conveyance self-test routine
recommended polling time:  (   2) minutes.
SCT capabilities:(0x103f) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   113   099   006    Pre-fail  Always       -       55212722
  3 Spin_Up_Time            0x0003   095   095   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       132
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       1
  7 Seek_Error_Rate         0x000f   081   060   030    Pre-fail  Always       -       136183285
  9 Power_On_Hours          0x0032   091   091   000    Old_age   Always       -       7886
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       132
183 Runtime_Bad_Block       0x0000   100   100   000    Old_age   Offline      -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   085   085   000    Old_age   Always       -       15
190 Airflow_Temperature_Cel 0x0022   063   054   045    Old_age   Always       -       37 (Lifetime Min/Max 32/40)
194 Temperature_Celsius     0x0022   037   046   000    Old_age   Always       -       37 (0 16 0 0)
195 Hardware_ECC_Recovered  0x001a   048   025   000    Old_age   Always       -       55212722
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       23691039612915
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       263672243
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       960644151

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


On Sat, May 22, 2010 at 3:09 AM, Andreas Iannou <
andreas_wants_the_w...@hotmail.com> wrote:


Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Thomas Burgess
i don't think there is but it's dirt simple to install.

I followed the instructions here:


http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
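
From memory it's the usual source build, more or less (version matching the
smartctl output above):

gunzip -c smartmontools-5.39.1.tar.gz | tar xf -
cd smartmontools-5.39.1
./configure
make
pfexec make install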



On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou <
andreas_wants_the_w...@hotmail.com> wrote:

>  Thanks Thomas, I thought there'd already be a package in the repo for it.
>
> Cheers,
> Andre
>
> --
> Date: Sat, 22 May 2010 03:17:38 -0400
> Subject: Re: [zfs-discuss] HDD Serial numbers for ZFS
> From: wonsl...@gmail.com
> To: andreas_wants_the_w...@hotmail.com
> CC: zfs-discuss@opensolaris.org
>
> install smartmontools
>
> There is no package for it but it's EASY to install
>
> once you do, you can get output like this:
>
> [snip]

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Thomas Burgess
i only care about the most recent snapshot, as this is a growing video
collection.

i do have snapshots, but i only keep them for when/if i accidentally delete
something, or rename something wrong.
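
when i do need something back, i just copy it out of the hidden snapshot dir
rather than rolling back, something like (path and file name made up):

cp /tank/nas/dump/.zfs/snapshot/first/some_video.mkv /tank/nas/dump/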


On Sat, May 22, 2010 at 3:43 AM, Brandon High  wrote:

> On Fri, May 21, 2010 at 10:22 PM, Thomas Burgess 
> wrote:
> > yah, it seems that rsync is faster for what i need anyways...at least
> right
> > now...
>
> If you don't have snapshots you want to keep in the new copy, then
> probably...
>
> -B
>
> --
> Brandon High : bh...@freaks.com
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
If you install Opensolaris with the AHCI settings off, then switch them on,
it will fail to boot


I had to reinstall with the settings correct.

the best way to tell if ahci is working is to use cfgadm
if you see your drives there, ahci is on

if not, then you may need to reinstall with it on (for the rpool at least)
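
for what it's worth, when ahci is on the output looks something like this
(controller/target numbers will differ on your box):

# cfgadm
Ap_Id                          Type         Receptacle   Occupant     Condition
sata0/0::dsk/c6t0d0            disk         connected    configured   ok
sata0/1::dsk/c6t1d0            disk         connected    configured   ok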


On Sat, May 22, 2010 at 4:43 PM, Brian  wrote:

> Is there a way within opensolaris to detect if AHCI is being used by
> various controllers?
>
> I suspect you may be accurate an AHCI is not turned on.  The bios for this
> particular motherboard is fairly confusing on the AHCI settings.  The only
> setting I have is actually in the raid section, and it seems to let select
> between "IDE/AHCI/RAID" as an option.  However, I can't tell if it applies
> only if one is using software RAID.
>
> If I set it to AHCI, another screen appears prior to boot that is titled
> AMD AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
> Is there a way from the grub menu to request opensolaris boot without the
> splashscreen, but instead boot with debug information printed to the
> console?
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
just to make sure i understand what is going on here,

you have a rpool which is having performance issues, and you discovered ahci
was disabled?


you enabled it, and now it won't boot.  correct?

This happened to me and the solution was to export my storage pool and
reinstall my rpool with the ahci settings on.

Then i imported my storage pool and all was golden
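
in other words, roughly (pool name being whatever yours is):

zpool export tank
# reinstall the OS with AHCI enabled in the BIOS, then:
zpool import tank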


On Sat, May 22, 2010 at 5:25 PM, Brian  wrote:

> Thanks -
>   I can give reinstalling a shot.  Is there anything else I should do
> first?  Should I export my tank pool?
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
This didn't work for me.  I had the exact same issue a few days ago.

My motherboard had the following:

Native IDE
AHCI
RAID
Legacy IDE

so naturally i chose AHCI, but it ALSO had a mode called "IDE/SATA combined
mode"

I thought i needed this to use both the ide and any sata ports; turns out it
was basically an ide emulation mode for sata. long story short, i ended up
with opensolaris installed in IDE mode.

I had to reinstall.  I tried the livecd/import method and it still failed to
boot.


On Sat, May 22, 2010 at 5:30 PM, Ian Collins  wrote:

> On 05/23/10 08:52 AM, Thomas Burgess wrote:
>
>> If you install Opensolaris with the AHCI settings off, then switch them
>> on, it will fail to boot
>>
>>
>> I had to reinstall with the settings correct.
>>
>>  Well you probably didn't have to.  Booting form the live CD and importing
> the pool would have put things right.
>
> --
> Ian.
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
this old thread has info on how to switch from ide->sata mode


http://opensolaris.org/jive/thread.jspa?messageID=448758




On Sat, May 22, 2010 at 5:32 PM, Ian Collins  wrote:

> On 05/23/10 08:43 AM, Brian wrote:
>
>> Is there a way within opensolaris to detect if AHCI is being used by
>> various controllers?
>>
>> I suspect you may be accurate an AHCI is not turned on.  The bios for this
>> particular motherboard is fairly confusing on the AHCI settings.  The only
>> setting I have is actually in the raid section, and it seems to let select
>> between "IDE/AHCI/RAID" as an option.  However, I can't tell if it applies
>> only if one is using software RAID.
>>
>>
>>
> [answered in other post]
>
>
>  If I set it to AHCI, another screen appears prior to boot that is titled
>> AMD AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
>> Is there a way from the grub menu to request opensolaris boot without the
>> splashscreen, but instead boot with debug information printed to the
>> console?
>>
>>
>
> Just hit a key once the bar is moving.
>
> --
> Ian.
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
GREAT, glad it worked for you!



On Sat, May 22, 2010 at 7:39 PM, Brian  wrote:

> Ok.  What worked for me was booting with the live CD and doing:
>
> pfexec zpool import -f rpool
> reboot
>
> After that I was able to boot with AHCI enabled.  The performance issues I
> was seeing are now also gone.  I am getting around 100 to 110 MB/s during a
> scrub.  Scrubs are completing in 20 minutes for 1TB of data rather than 1.2
> hours.
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
I'm confused...I have a filesystem on server 1 called tank/nas/dump

I made a snapshot called first

zfs snapshot tank/nas/dump@first

then i did a zfs send/recv like:

zfs send tank/nas/dump@first | ssh wonsl...@192.168.1.xx "/bin/pfexec
/usr/sbin/zfs recv tank/nas/dump"


this worked fine. then today, i wanted to send what had changed

i did


zfs snapshot tank/nas/dump@second


now, here's where i'm confused...from reading the man page i thought this
command would work:


pfexec zfs send -i tank/nas/dump@first tank/nas/dump@second | ssh
wonsl...@192.168.1.15 "/bin/pfexec /usr/sbin/zfs recv -vd tank/nas/dump"



but i get an error:

cannot receive incremental stream: destination tank/nas/dump has been
modified
since most recent snapshot


why is this?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
On Sat, May 22, 2010 at 9:26 PM, Ian Collins  wrote:

> On 05/23/10 01:18 PM, Thomas Burgess wrote:
>
>>
>> this worked fine, next today, i wanted to send what has changed
>>
>> i did
>> zfs snapshot tank/nas/dump@second
>>
>> now, heres where i'm confusedfrom reading the man page i thought this
>> command would work:
>>
>> pfexec zfs send -i tank/nas/dump@first tank/nas/dump@second | ssh
>> wonsl...@192.168.1.15 "/bin/pfexec
>> /usr/sbin/zfs recv -vd tank/nas/dump"
>>
>>  It should (you can shorten the first snap to "first").
>
>
>> but i get an error:
>>
>> cannot receive incremental stream: destination tank/nas/dump has been
>> modified
>> since most recent snapshot
>>
>>  Well has it?  Even wandering around the filesystem with atime enabled
> will cause this error.
>
> Add -F to the receive to force a roll-back to the state after the original
> snap.
>
Ahh, this i didn't know. Yes, i DID cd to the dir and check some stuff and
atime IS enabled...this is NOT very intuitive.

adding -F worked...thanks




> --
>
> Ian.
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
oh, this makes sense

let me ask a question though.

Lets say i have a filesystem

tank/something

i make the snapshot

tank/something@one

i send/recv it


then i do something (add a file...remove something, whatever) on the send
side, then i do a send/recv and force it on the next filesystem

will the new recv'd filesystem be identical to the original forced snapshot
or will it be a combination of the 2?


On Sat, May 22, 2010 at 11:50 PM, Edward Ned Harvey
wrote:

> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Thomas Burgess
> >
> > but i get an error:
> >
> > cannot receive incremental stream: destination tank/nas/dump has been
> > modified
> > since most recent snapshot
>
> Whenever you send a snap, and you intend to later receive an incremental,
> just make the filesystem read-only, to ensure you'll be able to receive the
> incremental later.
>
> zfs set readonly=on somefilesystem
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
ok, so forcing just basically makes it drop whatever "changes" were made

That's what i was wondering...this is what i expected
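
so for the archives, the incremental with the forced rollback ends up looking
something like this (names are placeholders):

zfs send -i tank/something@one tank/something@two | ssh user@host "pfexec /usr/sbin/zfs recv -F tank/something"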


On Sun, May 23, 2010 at 12:05 AM, Ian Collins  wrote:

> On 05/23/10 03:56 PM, Thomas Burgess wrote:
>
>> let me ask a question though.
>>
>> Lets say i have a filesystem
>>
>> tank/something
>>
>> i make the snapshot
>>
>> tank/something@one
>>
>> i send/recv it
>>
>> then i do something (add a file...remove something, whatever) on the send
>> side, then i do a send/recv and force it of the next filesystem
>>
>>  What do you mean "force it on the next filesystem"?
>
>
>  will the new recv'd filesystem be identical to the original forced
>> snapshot or will it be a combination of the 2?
>>
>
> The received filesystem will be identical to the sending one.
>
> --
> Ian.
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] confused

2010-05-23 Thread Thomas Burgess
did this come out?

http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/

i was googling trying to find info about the next release and ran across
this


Does this mean it's actually about to come out before the end of the month
or is this something else?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] confused

2010-05-23 Thread Thomas Burgess
never mind...just found more info on this...should have held back from
asking


On Mon, May 24, 2010 at 1:26 AM, Thomas Burgess  wrote:

> did this come out?
>
> http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
>
> i was googling trying to find info about the next release and ran across
> this
>
>
> Does this mean it's actually about to come out before the end of the month
> or is this something else?
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
I recently got a new SSD (ocz vertex LE 50gb)

It seems to work really well as a ZIL performance-wise.  My question is, how
safe is it?  I know it doesn't have a supercap, so let's say data loss
occurs...is it just data loss or is it pool loss?


also, does the fact that i have a UPS matter?


the numbers i'm seeing are really nice...these are some nfs tar times
before zil:


real 2m21.498s

user 0m5.756s

sys 0m8.690s


real 2m23.870s

user 0m5.756s

sys 0m8.739s



and these are the same ones after.




real 0m32.739s

user 0m5.708s

sys 0m8.515s



real 0m35.580s

user 0m5.707s

sys 0m8.526s




I also sliced it...i have 16 gb ram so i used a 9 gb slice for zil and the
rest for L2ARC
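
for reference, wiring the two slices in was just (slice names are from my
box, obviously):

zpool add tank log c6t5d0s0
zpool add tank cache c6t5d0s1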



this is for a single 10 drive raidz2 vdev so far...i'm really impressed
with the performance gains
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
>
>
>  ZFS is always consistent on-disk, by design. Loss of the ZIL will result
> in loss of the data in the ZIL which hasn't been flushed out to the hard
> drives, but otherwise, the data on the hard drives is consistent and
> uncorrupted.
>
>
>
This is what i thought.  I have read this list on and off for awhile now
but i'm not a guru...I see a lot of stuff about the intel ssd and disabling
the write cache...so i just wasn't sure...This is good news.





>
>  It avoids the scenario of losing data in your ZIL due to power loss (and,
> of course, the rest of your system).  So, yes, if you actually care about
> your system, I'd recommend at least a minimal UPS to allow for quick
> shutdown after a power loss.
>
>
> yes, i have a nice little UPS.  I've tested it a few times and it seems to
work well.  It gives me about 20 minutes of power and can even send commands
via a script to shut down the system before the battery goes dry.




> That's going to pretty much be the best-case use for the ZIL - NFS writes
> being synchronous.  Of course, using the rest of the SSD for L2ARC is likely
> to be almost (if not more) helpful for performance for a wider variety of
> actions.
>
>
> yes, i have another machine without a zil (i bought a kingston 64 gb ssd on
sale and intended to try it as a zil but ultimately decided to just use it
as l2arc because of the performance numbers...)  but the l2arc helps a ton
for my uses.  I did slice this ssd...i used 9 gb for zil and the rest for
l2arc (about 36 gb)   I'm really impressed with this ssd...for only 160
dollars (180 - 20 mail in rebate) it's a killer deal.

it can do 235 MB/s sustained writes and has something like 15,000 iops





> --
> Erik Trimble
> Java System Support
> Mailstop:  usca22-123
> Phone:  x17195
> Santa Clara, CA
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-24 Thread Thomas Burgess
>
>
> Not familiar with that model
>
>
It's a sandforce sf-1500 model but without a supercap...here's some info on
it:



Maximum Performance

   - Max Read: up to 270MB/s
   - Max Write: up to 250MB/s
   - Sustained Write: up to 235MB/s
   - Random Write 4k: 15,000 IOPS
   - Max 4k IOPS: 50,000



per
http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/performance-enterprise-solid-state-drives/ocz-vertex-limited-edition-sata-ii-2-5--ssd.html


>
>
> Wow.  That's a pretty huge improvement. :-)
>
> - Garrett (newly of Nexenta)
>
>
>
yes, i love it.  I'm really impressed with this ssd for the money...160 usd
(180 - 20 rebate)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New SSD options

2010-05-24 Thread Thomas Burgess
>
>
>
> From earlier in the thread, it sounds like none of the SF-1500 based
> drives even have a supercap, so it doesn't seem that they'd necessarily
> be a better choice than the SLC-based X-25E at this point unless you
> need more write IOPS...
>
> Ray
>

I think the upcoming OCZ Vertex 2 Pro will have a supercap.

I just bought a ocz vertex le, it doesn't have a supercap but it DOES have
some awesome specs otherwise..
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] question about zpool iostat output

2010-05-25 Thread Thomas Burgess
I was just wondering:

I added a SLOG/ZIL to my new system today...i noticed that the L2ARC shows
up under its own heading...but the SLOG/ZIL doesn't...is this correct?
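
(for reference, this is from watching something like "zpool iostat -v 30")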


see:



           capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       15.3G  44.2G      0      0      0      0
  c6t4d0s0  15.3G  44.2G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
tank        10.9T  7.22T      0  2.43K      0   300M
  raidz2    10.9T  7.22T      0  2.43K      0   300M
    c4t6d0      -      -      0    349      0  37.6M
    c4t5d0      -      -      0    350      0  37.6M
    c5t7d0      -      -      0    350      0  37.6M
    c5t3d0      -      -      0    350      0  37.6M
    c8t0d0      -      -      0    354      0  37.6M
    c4t7d0      -      -      0    351      0  37.6M
    c4t3d0      -      -      0    350      0  37.6M
    c5t8d0      -      -      0    349      0  37.6M
    c5t0d0      -      -      0    348      0  37.6M
    c8t1d0      -      -      0    353      0  37.6M
  c6t5d0s0       0  8.94G      0      0      0      0
cache           -      -      -      -      -      -
  c6t5d0s1  37.5G      0      0    158      0  19.6M



It seems sort of strange to me that it doesn't look like this instead:






           capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       15.3G  44.2G      0      0      0      0
  c6t4d0s0  15.3G  44.2G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
tank        10.9T  7.22T      0  2.43K      0   300M
  raidz2    10.9T  7.22T      0  2.43K      0   300M
    c4t6d0      -      -      0    349      0  37.6M
    c4t5d0      -      -      0    350      0  37.6M
    c5t7d0      -      -      0    350      0  37.6M
    c5t3d0      -      -      0    350      0  37.6M
    c8t0d0      -      -      0    354      0  37.6M
    c4t7d0      -      -      0    351      0  37.6M
    c4t3d0      -      -      0    350      0  37.6M
    c5t8d0      -      -      0    349      0  37.6M
    c5t0d0      -      -      0    348      0  37.6M
    c8t1d0      -      -      0    353      0  37.6M
log             -      -      -      -      -      -
  c6t5d0s0       0  8.94G      0      0      0      0
cache           -      -      -      -      -      -
  c6t5d0s1  37.5G      0      0    158      0  19.6M
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB Flashdrive as SLOG?

2010-05-25 Thread Thomas Burgess
The last couple times i've read this questions, people normally responded
with:

It depends

you might not even NEED a slog; there is a script floating around (richard
elling's zilstat, i believe) which can help determine that...

If you could benefit from one, it's going to be IOPS which help you...so if
the usb drive has more iops than your pool configuration does, then it might
give some benefit...but then again, usb might not be as safe either, and if
you're on an older zpool version you may want to mirror it.


On Tue, May 25, 2010 at 8:11 AM, Kyle McDonald wrote:

> Hi,
>
> I know the general discussion is about flash SSD's connected through
> SATA/SAS or possibly PCI-E these days. So excuse me if I'm askign
> something that makes no sense...
>
> I have a server that can hold 6 U320 SCSI disks. Right now I put in 5
> 300GB for a data pool, and 1 18GB for the root pool.
>
> I've been thinking lately that I'm not sure I like the root pool being
> unprotected, but I can't afford to give up another drive bay. So
> recently the idea occurred to me to go the other way. If I were to get 2
> USB Flash Thunb drives say 16 or 32 GB each, not only would i be able to
> mirror the root pool, but I'd also be able to put a 6th 300GB drive into
> the data pool.
>
> That led me to wonder whether partitioning out 8 or 12 GB on a 32GB
> thumb drive would be beneficial as a slog?  I bet the USB bus won't be
> as good as SATA or SAS, but will it be better than the internal ZIL on
> the U320 drives?
>
> This seems like at least a "win-win", and possibly a "win-win-win".
> Is there some other reason I'm insane to consider this?
>
>  -Kyle
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about zpool iostat output

2010-05-25 Thread Thomas Burgess
i am running the last release from the genunix page

uname -a output:

SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris


On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen <
cindy.swearin...@oracle.com> wrote:

> Hi Thomas,
>
> This looks like a display bug. I'm seeing it too.
>
> Let me know which Solaris release you are running and
> I will file a bug.
>
> Thanks,
>
> Cindy
>
>
> On 05/25/10 01:42, Thomas Burgess wrote:
>
>> I was just wondering:
>>
>> I added a SLOG/ZIL to my new system today...i noticed that the L2ARC shows
>> up under its own heading...but the SLOG/ZIL doesn't...is this correct?
>>
>>
>> see:
>>
>>
>>
>>            capacity     operations    bandwidth
>> pool        alloc   free   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> rpool       15.3G  44.2G      0      0      0      0
>>   c6t4d0s0  15.3G  44.2G      0      0      0      0
>> ----------  -----  -----  -----  -----  -----  -----
>> tank        10.9T  7.22T      0  2.43K      0   300M
>>   raidz2    10.9T  7.22T      0  2.43K      0   300M
>>     c4t6d0      -      -      0    349      0  37.6M
>>     c4t5d0      -      -      0    350      0  37.6M
>>     c5t7d0      -      -      0    350      0  37.6M
>>     c5t3d0      -      -      0    350      0  37.6M
>>     c8t0d0      -      -      0    354      0  37.6M
>>     c4t7d0      -      -      0    351      0  37.6M
>>     c4t3d0      -      -      0    350      0  37.6M
>>     c5t8d0      -      -      0    349      0  37.6M
>>     c5t0d0      -      -      0    348      0  37.6M
>>     c8t1d0      -      -      0    353      0  37.6M
>>   c6t5d0s0       0  8.94G      0      0      0      0
>> cache           -      -      -      -      -      -
>>   c6t5d0s1  37.5G      0      0    158      0  19.6M
>>
>>
>>
>> It seems sort of strange to me that it doesn't look like this instead:
>>
>>
>>
>>
>>
>>
>>            capacity     operations    bandwidth
>> pool        alloc   free   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> rpool       15.3G  44.2G      0      0      0      0
>>   c6t4d0s0  15.3G  44.2G      0      0      0      0
>> ----------  -----  -----  -----  -----  -----  -----
>> tank        10.9T  7.22T      0  2.43K      0   300M
>>   raidz2    10.9T  7.22T      0  2.43K      0   300M
>>     c4t6d0      -      -      0    349      0  37.6M
>>     c4t5d0      -      -      0    350      0  37.6M
>>     c5t7d0      -      -      0    350      0  37.6M
>>     c5t3d0      -      -      0    350      0  37.6M
>>     c8t0d0      -      -      0    354      0  37.6M
>>     c4t7d0      -      -      0    351      0  37.6M
>>     c4t3d0      -      -      0    350      0  37.6M
>>     c5t8d0      -      -      0    349      0  37.6M
>>     c5t0d0      -      -      0    348      0  37.6M
>>     c8t1d0      -      -      0    353      0  37.6M
>> log             -      -      -      -      -      -
>>   c6t5d0s0       0  8.94G      0      0      0      0
>> cache           -      -      -      -      -      -
>>   c6t5d0s1  37.5G      0      0    158      0  19.6M
>>
>>
>>
>>
>>
>>
>> 
>>
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
wrote:

> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Nicolas Williams
> >
> > > I recently got a new SSD (ocz vertex LE 50gb)
> > >
> > > It seems to work really well as a ZIL performance wise.
> > > I know it doesn't have a supercap so lets' say dataloss
> > > occursis it just dataloss or is it pool loss?
> >
> > Just dataloss.
>
> WRONG!
>
> The correct answer depends on your version of solaris/opensolaris.  More
> specifically, it depends on the zpool version.  The latest fully updated
> sol10 and the latest opensolaris release (2009.06) only go up to zpool 14
> or
> 15.  But in zpool 19 is when a ZIL loss doesn't permanently offline the
> whole pool.  I know this is available in the developer builds.
>
> The best answer to this, I think, is in the ZFS Best Practices Guide:
> (uggh, it's down right now, so I can't paste the link)
>
> If you have zpool <19, and you lose an unmirrored ZIL, then you lose your
> pool.  Also, as a configurable option apparently, I know on my systems, it
> also meant I needed to power cycle.
>
> If you have zpool >=19, and you lose an unmirrored ZIL, then performance
> will be degraded, but everything continues to work as normal.
>
> Apparently the most common mode of failure for SSD's is also failure to
> read.  To make it worse, a ZIL is only read after system crash, which means
> the possibility of having a failed SSD undetected must be taken into
> consideration.  If you do discover a failed ZIL after crash, with zpool <19
> your pool is lost.  But with zpool >=19 only the unplayed writes are lost.
> With zpool >=19, your pool will be intact, but you would lose up to 30sec
> of
> writes that occurred just before the crash.
>
>
I didn't ask about losing my zil.

I asked about power loss taking out my pool.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
>
>
> At least to me, this was not clearly "not asking about losing zil" and was
> not clearly "asking about power loss."  Sorry for answering the question
> you
> thought you didn't ask.
>

I was only responding to your response of WRONG!!!   The guy wasn't wrong in
regards to my questions.  I'm sorry for not making THAT more clear in my
post.


>
> I would suggest clarifying your question, by saying instead:  "so lets' say
> *power*loss occurs"  Then it would have been clear what you were asking.
>
>
I'm pretty sure i did ask about power lossor at least it was implied by
my point about the UPS.  You're right, i probably should have been a little
more clear.


> Since this is a SSD you're talking about, unless you have enabled
> volatile write cache on that disk (which you should never do), and the
> disk incorrectly handles cache flush commands (which it should never do),
> then the supercap is irrelevant.  All ZIL writes are to be done
> synchronously.
>
This SSD doesn't use a volatile write cache (at least i don't think it
does, it's a SF-1500 based ssd)
I might be wrong about this, but i thought one of the biggest things about
the sandforce was that it doesn't use DRAM


> If you have a power loss, you don't lose your pool, and you also don't lose
> any writes in the ZIL.  You do, however, lose any async writes that were
> not
> yet flushed to disk.  There is no way to prevent that, regardless of ZIL
> configuration.
>
Yes, I know that i lose async writes...i just wasn't sure if that resulted
in an issue...I might be somewhat confused as to how the ZIL works but i
thought the point of the ZIL was to "pretend" a write actually happened when
it may not have actually been flushed to disk yet...in this case, a write to
the zil might not make it to disk...i just didn't know if this could result
in a loss of a pool due to some sort of corruption of the uberblock or
something...I'm not entirely up to speed on the voodoo that is ZFS.



I wasn't trying to be rude, sorry if it came off like that.

I am aware of the issue regarding removing the ZIL on non-dev versions of
opensolaris...i am on b134 so that doesn't apply to me.  Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Mon, 24 May 2010, Thomas Burgess wrote:
>
>>
>> It's a sandforce sf-1500 model but without a supercap...here's some info
>> on it:
>>
>> Maximum Performance
>>
>>  *  Max Read: up to 270MB/s
>>  *  Max Write: up to 250MB/s
>>  *  Sustained Write: up to 235MB/s
>>  *  Random Write 4k: 15,000 IOPS
>>  *  Max 4k IOPS: 50,000
>>
>
> Isn't there a serious problem with these specifications?  It seems that the
> minimum assured performance values (and the median) are much more
> interesting than some "maximum" performance value which might only be
> reached during a brief instant of the device lifetime under extremely ideal
> circumstances.  It seems that toilet paper may of much more practical use
> than these specifications.  In fact, I reject them as being specifications
> at all.
>
> The Apollo reentry vehicle was able to reach amazing speeds, but only for a
> single use.
>
> Bob
>
What exactly do you mean?
Every review i've read about this device has been great.  Every review i've
read about the sandforce controllers has been good too...are you saying they
have shorter lifetimes?  Everything i've read has made them sound like they
should last longer than typical ssds because they write less actual data




> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions about zil

2010-05-25 Thread Thomas Burgess
Also, let me note, it came with a 3 year warranty so I expect it to last at
least 3 years...but if it doesn't, i'll just return it under the warranty.


On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess  wrote:

>
>
> On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
>> On Mon, 24 May 2010, Thomas Burgess wrote:
>>
>>>
>>> It's a sandforce sf-1500 model but without a supercap...here's some info
>>> on it:
>>>
>>> Maximum Performance
>>>
>>>  *  Max Read: up to 270MB/s
>>>  *  Max Write: up to 250MB/s
>>>  *  Sustained Write: up to 235MB/s
>>>  *  Random Write 4k: 15,000 IOPS
>>>  *  Max 4k IOPS: 50,000
>>>
>>
>> Isn't there a serious problem with these specifications?  It seems that
>> the minimum assured performance values (and the median) are much more
>> interesting than some "maximum" performance value which might only be
>> reached during a brief instant of the device lifetime under extremely ideal
>> circumstances.  It seems that toilet paper may of much more practical use
>> than these specifications.  In fact, I reject them as being specifications
>> at all.
>>
>> The Apollo reentry vehicle was able to reach amazing speeds, but only for
>> a single use.
>>
>> Bob
>>
> What exactly do you mean?
> Every review i've read about this device has been great.  Every review i've
> read about the sandforce controllers has been good toare you saying they
> have shorter lifetimes?  Everything i've read has made them sound like they
> should last longer than typical ssds because they write less actual data
>
>
>
>
>> --
>> Bob Friesenhahn
>> bfrie...@simple.dallas.tx.us,
>> http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>>
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-26 Thread Thomas Burgess
On Wed, May 26, 2010 at 5:47 PM, Brandon High  wrote:

> On Sat, May 15, 2010 at 4:01 AM, Marc Bevand  wrote:
> > I have done quite some research over the past few years on the best (ie.
> > simple, robust, inexpensive, and performant) SATA/SAS controllers for
> ZFS.
>
> I've spent some time looking at the capabilities of a few controllers
> based on the questions about the SiI3124 and PMP support.
>
> According to the docs, the Marvell 88SX6081 driver doesn't support NCQ
> or PMP, though the card does. While I'm not really performance bound
> on my system, I imagine NCQ would help performance a bit, at least for
> scrubs or resilvers. Even more so because I'm using the slow WD10EADS
> drives.
>
> This raises the question of whether a SAS controller supports NCQ for
> sata drives. Would an LSI 1068e based controller? What about a LSI
> 2008 based card?
>
>

If that is the chip on the AOC-SAT2-MV8 then i'm pretty sure it does support
NCQ.

I'm also pretty sure the LSI supports NCQ

I'm not 100% sure though
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Ideal SATA/SAS Controllers for ZFS

2010-05-26 Thread Thomas Burgess
I thought it did... I couldn't imagine sun using that chip in the original
thumper if it didn't support NCQ. Also, i've read where people have had to
DISABLE ncq on this driver to fix one bug or another (as a workaround).


On Wed, May 26, 2010 at 8:40 PM, Marty Faltesek
wrote:

> On Wed, 2010-05-26 at 17:18 -0700, Brandon High wrote:
> > > If that is the chip on the AOC-SAT2-MV8 then i'm pretty sure it does
> > support
> > > NCQ
> >
> > Not according to the driver documentation:
> > http://docs.sun.com/app/docs/doc/819-2254/marvell88sx-7d
> > "In addition, the 88SX6081 device supports the SATA II Phase 1.0
> > specification features, including SATA II 3.0 Gbps speed, SATA II Port
> > Multiplier functionality and SATA II Port Selector. Currently the
> > driver does not support native command queuing, port multiplier or
> > port selector functionality."
> >
> > The driver source isn't available (or I couldn't find it) so it's not
> > easy to confirm.
>
> marvell88sx does support NCQ.  This man page error was corrected in
> nevada build 138.
>
> Marty
>
>
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Reconfiguring a RAID-Z dataset

2010-06-12 Thread Thomas Burgess
>
>
>  Yeah, this is what I was thinking too...
>
> Is there any way to retain snapshot data this way? I've read about the ZFS
> replay/mirror features, but my impression was that this was more so for a
> development mirror for testing rather than a reliable backup? This is the
> only way I know of that one could do something like this. Is there some
> other way to create a solid clone, particularly with a machine that won't
> have the same drive configuration?
>
>
>
>
I recently used zfs send/recv to copy a bunch of datasets from a raidz2 box
to a box made on mirrors.  It works fine.
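
Roughly what i did (pool, snapshot and host names made up here, adjust to
taste):

   zfs snapshot -r tank@migrate
   zfs send -R tank@migrate | ssh newbox zfs recv -d newpool

The -R on the send side picks up all descendant datasets along with their
snapshots and properties, so the snapshot history comes over for free, and
recv -d recreates the dataset layout under the new pool.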
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Reconfiguring a RAID-Z dataset

2010-06-12 Thread Thomas Burgess
On Sun, Jun 13, 2010 at 12:18 AM, Joe Auty  wrote:

>  Thomas Burgess wrote:
>
>
>>   Yeah, this is what I was thinking too...
>>
>> Is there any way to retain snapshot data this way? I've read about the ZFS
>> replay/mirror features, but my impression was that this was more so for a
>> development mirror for testing rather than a reliable backup? This is the
>> only way I know of that one could do something like this. Is there some
>> other way to create a solid clone, particularly with a machine that won't
>> have the same drive configuration?
>>
>>
>>
>>
>  I recently used zfs send/recv to copy a bunch of datasets from a raidz2
> box to a box made on mirrors.  It works fine.
>
>
>  ZFS send/recv looks very cool and very convenient. I wonder what it was
> that I read that suggested not relying on it for backups? Maybe this was
> alluding to the notion that like relying on RAID for a backup, if there is
> corruption your mirror (i.e. machine you are using with zfs recv) will be
> corrupted too?
>
> At any rate, thanks for answering this question! At some point if I go this
> route I'll test send and recv functionality to give all of this a dry run.
>
>
>



well, it's not considered to be an "enterprise ready backup solution".  I
think this is due to the fact that you can't recover a single file from a
zfs send stream, but despite this limitation it's still VERY handy.

Another reason, from what i understand by reading this list, is that the
"zfs send" streams aren't resilient.  If you do not pipe it directly into a
zfs receive, it might get corrupted and be worthless (basically don't
save the output of zfs send and expect to receive it later)

again, this is not relevant if you are doing a zfs send into a zfs receive
at the other end

I think the 2 reasons i just gave are the reasons people have warned against
it...but still, it's damn amazing.
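
To make that concrete (dataset and host names made up):

   zfs send tank/data@snap | ssh backuphost zfs recv -d backup   # fine
   zfs send tank/data@snap > /backup/data.stream                 # risky

One flipped bit in that saved file and a later zfs receive will reject the
entire stream, which is why people warn against the second form.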





> --
> Joe Auty, NetMusician
> NetMusician helps musicians, bands and artists create beautiful,
> professional, custom designed, career-essential websites that are easy to
> maintain and to integrate with popular social networks.
> www.netmusician.org
> j...@netmusician.org
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] panic after zfs mount

2010-06-13 Thread Thomas Nau
Dear all

We ran into a nasty problem the other day. One of our mirrored zpools
hosts several ZFS filesystems. After a reboot (all FS mounted at that
time and in use) the machine panicked (console output further down). After
detaching one of the mirrors the pool fortunately imported automatically
in a faulted state without mounting the filesystems. Offlining the
unplugged device and clearing the fault allowed us to disable
auto-mounting the filesystems. Going through them one by one, all but one
mounted OK. That one again triggered a panic. We left mounting on that
one disabled for now to be back in production after pulling data from
the backup tapes. Scrubbing didn't show any errors, so any idea what's
behind the problem? Any chance to fix the FS?

Thomas


---

panic[cpu3]/thread=ff0503498400: BAD TRAP: type=e (#pf Page fault)
rp=ff001e937320 addr=20 occurred in module "zfs" due to a NULL
pointer dereference

zfs: #pf Page fault
Bad kernel fault at addr=0x20
pid=27708, pc=0xf806b348, sp=0xff001e937418, eflags=0x10287
cr0: 8005003b cr4: 6f8
cr2: 20  cr3: 4194a7000  cr8: c

rdi: ff0503aaf9f0 rsi:0 rdx:0
rcx: 155cda0b  r8: eaa325f0  r9: ff001e937480
rax:  7ff rbx:0 rbp: ff001e937460
r10:  7ff r11:0 r12: ff0503aaf9f0
r13: ff0503aaf9f0 r14: ff001e9375d0 r15: ff001e937610
fsb:0 gsb: ff04e7e5c040  ds:   4b
 es:   4b  fs:0  gs:  1c3
trp:e err:0 rip: f806b348
 cs:   30 rfl:10287 rsp: ff001e937418
 ss:   38

ff001e937200 unix:die+dd ()
ff001e937310 unix:trap+177e ()
ff001e937320 unix:cmntrap+e6 ()
ff001e937460 zfs:zap_leaf_lookup_closest+40 ()
ff001e9374f0 zfs:fzap_cursor_retrieve+c9 ()
ff001e9375b0 zfs:zap_cursor_retrieve+19a ()
ff001e937780 zfs:zfs_purgedir+4c ()
ff001e9377d0 zfs:zfs_rmnode+52 ()
ff001e937810 zfs:zfs_zinactive+b5 ()
ff001e937860 zfs:zfs_inactive+ee ()
ff001e9378b0 genunix:fop_inactive+af ()
ff001e9378d0 genunix:vn_rele+5f ()
ff001e937ac0 zfs:zfs_unlinked_drain+af ()
ff001e937af0 zfs:zfsvfs_setup+fb ()
ff001e937b50 zfs:zfs_domount+16a ()
ff001e937c70 zfs:zfs_mount+1e4 ()
ff001e937ca0 genunix:fsop_mount+21 ()
ff001e937e00 genunix:domount+ae3 ()
ff001e937e80 genunix:mount+121 ()
ff001e937ec0 genunix:syscall_ap+8c ()
ff001e937f10 unix:brand_sys_sysenter+1eb ()


-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
cifs-discuss mailing list
cifs-disc...@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/cifs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic after zfs mount

2010-06-13 Thread Thomas Nau
Thanks for the link Arne.


On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpools
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time and in use) the machine panicked (console output further down). After
>> detaching one of the mirrors the pool fortunately imported automatically
>> in a faulted state without mounting the filesystems. Offlining the
>> unplugged device and clearing the fault allowed us to disable
>> auto-mounting the filesystems. Going through them one by one, all but one
>> mounted OK. That one again triggered a panic. We left mounting on that
>> one disabled for now to be back in production after pulling data from
>> the backup tapes. Scrubbing didn't show any errors, so any idea what's
>> behind the problem? Any chance to fix the FS?
> 
> We had the same problem. Victor pointed me to
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6742788
> 
> with a workaround to mount the filesystem read-only to save the data.
> I still hope to figure out the chain of events that causes this. Did you
> use any extended attributes on this filesystem?
> 
> -- 
> Arne


To my knowledge we haven't used any extended attributes but I'll double
check after mounting the filesystem read-only. As it's one that's
"exported" using Samba it might be indeed the case. For sure a lot of
ACLs are used

Thomas

-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic after zfs mount

2010-06-13 Thread Thomas Nau
Arne,

On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpools
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time and in use) the machine panicked (console output further down). After
>> detaching one of the mirrors the pool fortunately imported automatically
>> in a faulted state without mounting the filesystems. Offlining the
>> unplugged device and clearing the fault allowed us to disable
>> auto-mounting the filesystems. Going through them one by one, all but one
>> mounted OK. That one again triggered a panic. We left mounting on that
>> one disabled for now to be back in production after pulling data from
>> the backup tapes. Scrubbing didn't show any errors, so any idea what's
>> behind the problem? Any chance to fix the FS?
> 
> We had the same problem. Victor pointed me to
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6742788
> 
> with a workaround to mount the filesystem read-only to save the data.
> I still hope to figure out the chain of events that causes this. Did you
> use any extended attributes on this filesystem?
> 
> -- 
> Arne

Mounting the FS read-only worked, thanks again. I checked the attributes
and the set for all files is:

{archive,nohidden,noreadonly,nosystem,noappendonly,nonodump,noimmutable,av_modified,noav_quarantined,nonounlink}

so just the default ones
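
For the archives, the read-only mount was essentially this (dataset name
made up):

   zfs set readonly=on tank/broken
   zfs mount tank/broken

i.e. the read-only mount skips the delete-queue cleanup that was panicking
(zfs_unlinked_drain in the stack above), so the data could be copied off.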

Thomas

-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] size of slog device

2010-06-14 Thread Thomas Burgess
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen  wrote:

> Hi,
>
> I known it's been discussed here more than once, and I read the
> Evil tuning guide, but I didn't find a definitive statement:
>
> There is absolutely no sense in having slog devices larger than
> than main memory, because it will never be used, right?
> ZFS would rather flush the txg to disk than read back from the
> zil?
> So there is a guideline to have enough slog to hold about 10
> seconds of zil, but the absolute maximum value is the size of
> main memory. Is this correct?
>
>


I thought it was half the size of memory.
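
Either way the 10 second guideline keeps it tiny in practice.
Back-of-envelope (numbers made up): one gigabit NFS client can push at
most ~120 MB/s of synchronous writes, so 10 seconds of zil is ~1.2 GB.
Even several such clients fit comfortably in a few GB of slog, well under
the size of main memory.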
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool is wrong size in b134

2010-06-17 Thread Thomas Burgess
>
>
>
> Also, the disks were replaced one at a time last year from 73GB to 300GB to
> increase the size of the pool.  Any idea why the pool is showing up as the
> wrong size in b134 and have anything else to try?  I don't want to upgrade
> the pool version yet and then not be able to revert back...
>
> thanks,
> Ben
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


sometimes when you upgrade a pool by replacing drives with bigger ones, you
have to export the pool, then import it.

Or at least that's what i've always done
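
Something like this, assuming a pool named tank:

   zpool export tank
   zpool import tank
   zpool list tank    # should now report the larger size

On builds that have the autoexpand pool property you can supposedly set
autoexpand=on instead of the export/import dance, but i haven't tried
that myself.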
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:

> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't seem to find a reason for
> this.
> >
> > I'm getting bad to medium performance with my new test storage device.
> I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using
> the Areca raid controller, the driver being arcmsr. Quad core AMD with 16
> gig of RAM OpenSolaris upgraded to snv_134.
> >
> > The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec
> to 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if
> I watch while I'm actively doing some r/w. I know that I should be getting
> better performance.
> >
>
> How are you measuring the performance?
> Do you understand raidz2 with that big amount of disks in it will give you
> really poor random write performance?
>
> -- Pasi
>
>
i have a media server with 2 raidz2 vdevs 10 drives wide myself without a
dedicated ZIL device (but with a 64 gb l2arc)

I can write to it about 400 MB/s over the network, and scrubs show 600 MB/s
but it really depends on the type of i/o you have... random i/o across 2
vdevs will be REALLY slow (as slow as the slowest 2 drives in your pool
basically)

40 MB/s might be right if it's random... though i'd still expect to see
more.
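
Rough numbers (assumptions made up): a raidz2 vdev does small random i/o
at about the speed of a single member disk, call it 100-150 iops for
7200 rpm drives, so two vdevs is maybe 200-300 iops total.  At 4k per i/o
that's ~1 MB/s, and at 128k records it's ~25-40 MB/s... pretty much the
1-40 MB/s range you're reporting.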
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. wrote:

> Oh! Yes. dedup. not compression, but dedup, yes.





dedup may be your problem...it requires some heavy ram and/or decent L2ARC
from what i've been reading.
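
The rule of thumb i've seen (numbers approximate): the dedup table costs
on the order of 320 bytes of ram/l2arc per unique block.  With 128k
records that's ~8 million blocks per TB, so roughly 2.5 GB of DDT per TB
of unique data... on a pool this size that easily outgrows 16 GB of ram,
and once the DDT spills out of the arc every write turns into random
reads.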
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Thomas Burgess
>
>
> Conclusion: This device will make an excellent slog device. I'll order
> them today ;)
>
>
I have one and i love it...I sliced it though, used 9 gb for ZIL and the
rest for L2ARC (my server is on a smallish network with about 10 clients)

It made a huge difference in NFS performance and other stuff as well (for
instance, doing something like du will run a TON faster than before)

For the money, it's a GREAT deal.  I am very impressed
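
The setup was nothing fancy (controller/slice names made up here): carve
the drive into two slices with format, then:

   zpool add tank log c4t1d0s0      # the small ~9 gb slice as slog
   zpool add tank cache c4t1d0s1    # the rest as l2arc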



> --Arne
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Confused about consumer drives and zfs can someone help?

2010-07-23 Thread Thomas Burgess
I've found the Seagate 7200.12 1tb drives and Hitachi 7k2000 2TB drives to
be by far the best.

I've read lots of horror stories about any WD drive with 4k
sectors... it's best to stay away from them.

I've also read plenty of people say that the green drives are terrible.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-23 Thread Thomas Burgess
On Wed, Jul 21, 2010 at 12:42 PM, Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:

> Are there any drawbacks to partition a SSD in two parts and use L2ARC on
> one partition, and ZIL on the other? Any thoughts?
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


It's not going to be as good as having separate devices but i can tell you that i
did this on my home system and it was WELL worth it.

I used one of the sandforce 1500 based SSD's 50 gb

i used 9 gb for ZIL, and the rest for L2ARC.   adding the zil gave me about
a 400-500% nfs write performance increase.   Seeing as you can't ever use more than
half your ram for ZIL anyways, the only real downside to doing this is that
i/o becomes split between zil and L2arc, but realistically it depends on your
workload... for mine, i noticed a HUGE benefit from doing this.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Thomas Burgess
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie  wrote:

> Hi,
>
> I've been searching around on the Internet to find some help with this, but
> have been
> unsuccessful so far.
>
> I have some performance issues with my file server. I have an OpenSolaris
> server with a Pentium D
> 3GHz CPU, 4GB of memory, and a RAIDZ1 over 4 x Seagate (ST31500341AS) 1,5TB
> SATA drives.
>
> If I compile or even just unpack a tar.gz archive with source code (or any
> archive with lots of
> small files), on my Linux client onto a NFS mounted disk to the OpenSolaris
> server, it's extremely
> slow compared to unpacking this archive locally on the server. A
> 22MB .tar.gz file
> containing 7360 files takes 9 minutes and 12 seconds to unpack over NFS.
>
> Unpacking the same file locally on the server is just under 2 seconds.
> Between the server and
> client I have a gigabit network, which at the time of testing had no other
> significant load. My
> NFS mount options are: "rw,hard,intr,nfsvers=3,tcp,sec=sys".
>
> Any suggestions as to why this is?
>
>
> Regards,
> Sigbjorn
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



as someone else said, adding an ssd log device can help hugely.  I saw about
a 500% nfs write increase by doing this.
I've heard of people getting even more.
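
Unpacking a tarball over NFS is nearly all synchronous operations (the
client waits for each file to be committed to stable storage), which is
exactly the traffic a slog absorbs.  If you want to try it, it's a
one-liner (pool and device names made up):

   zpool add tank log c3t0d0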
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS performance?

2010-07-23 Thread Thomas Burgess
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie  wrote:

> I see I have already received several replies, thanks to all!
>
> I would not like to risk losing any data, so I believe a ZIL device would
> be the way for me. I see
> these exist at different prices. Any reason why I would not buy a cheap
> one? Like the Intel X25-V
> SSD 40GB 2,5"?
>
> What size of ZIL device would be recommended for my pool consisting of 4 x
> 1,5TB drives? Any
> brands I should stay away from?
>
>
>
> Regards,
> Sigbjorn
>

Like i said, i bought a 50 gb OCZ Vertex Limited Edition...it's like 200
dollars, up to 15,000 random iops (iops is what you want for fast zil)


I've gotten excellent performance out of it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  1   2   3   4   5   >