Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Koopmann, Jan-Peter


> 
> On the Dell website I've the choice between : 
> 
> 
>SAS 6Gbps External Controller
>PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe 
>PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe 
>PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
>PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
>PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
>LSI2032 SCSI Internal PCIe Controller Card
> 

The first one is probably an LSI card. Check with Dell, though (and if it is
LSI, check which card exactly). Also check whether that controller supports
exposing all individual drives in the chassis as JBOD.

Otherwise, consider buying the chassis without the controller and getting just
the LSI card from someone else.

Regards,
  JP


Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-06 Thread Koopmann, Jan-Peter
Hi Brandon,


On Mon, Mar 5, 2012 at 9:52 AM, Luis Johnstone <l...@luisjohnstone.com> wrote:
As far as I can tell, the Hitachi Deskstar 7K3000 (HDS723030ALA640) uses
512B sectors and so I presume does not suffer from such issues (because it
doesn't lie about the physical layout of sectors on-platter)

Both the 7K3000 and 5K3000 drives have 512B physical sectors.

Do you or anyone else have experience with the 3TB 5K3000 drives (namely the
HDS5C3030ALA630)? I am thinking of replacing my current 4*1TB drives with 4*3TB
drives (home server). Any issues with TLER or the like?


Kind regards,
   JP


Re: [zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Koopmann, Jan-Peter
Hi,

are those Dell-branded WD disks? Dell tends to modify the drive firmware in a
way that breaks power handling with Solaris. If that is the case here:

The easiest way to make it work is to modify /kernel/drv/sd.conf and add an
entry for your specific drive, similar to this:

sd-config-list= "WD  WD2000FYYG","power-condition:false",
"SEAGATE ST2000NM0001","power-condition:false",
"SEAGATE ST32000644NS","power-condition:false",
"SEAGATE ST91000640SS","power-condition:false";

Naturally you would have to find out the correct drive names. My latest
version, for an R710 with an MD1200 attached, is:

sd-config-list="SEAGATE ST2000NM0001","power-condition:false",
"SEAGATE ST1000NM0001","power-condition:false",
"SEAGATE ST91000640SS","power-condition:false";


Are you using the H200 with the stock Dell firmware, or did you flash it to
LSI IT mode? I am not sure Solaris handles the H200 natively at all, and even
if it does, you will not get direct drive access, since the H200 only presents
virtual drives to Solaris/OI, will it not?

Kind regards,
   JP

PS: These are not my findings. Kudos to Sergei (tehc...@gmail.com) and
Niklas Tungström.

From: Sašo Kiselkov
To: zfs-discuss
Subject: [zfs-discuss] Dell PERC H200: drive failed to power up

> Hi,
> 
> I'm getting weird errors while trying to install openindiana 151a on a
> Dell R715 with a PERC H200 (based on an LSI SAS 2008). Any time the OS
> tries to access the drives (for whatever reason), I get this dumped into
> syslog:
> 
> genunix: WARNING: Device
> /pci@0,0/pci1002,5a18@4/pci10b58424@0/pci10b5,8624@0/pci1028,1f1e@0/iport@40/disk@w5c0f01004ebe,0
> failed to power up
> genunix: WARNING: Device
> /pci@0,0/pci1002,5a18@4/pci10b58424@0/pci10b5,8624@0/pci1028,1f1e@0/iport@80/disk@w5c0f01064e9e,0
> failed to power up
> 
> (these are two WD 300GB 10k SAS drives)
> 
> When this log message shows up, I can see each drive light up the drive
> LED briefly and then it turns off, so apparently the OS tried to
> initialize the drives, but somehow failed and gave up.
> 
> Consequently, when I try and access them in format(1), they show up as
> an unknown type and installing openindiana on them fails while the
> installer is trying to do fdisk.
> 
> Has anybody got any idea what I can do to the controller/drives/whatever
> to fix the "failed to power up" problem? One would think that a LSI SAS
> 2008 chip would be problem free under Solaris (the server even lists
> Oracle Solaris as an officially supported OS), but alas, I have yet to
> succeed.
> 
> Cheers,
> --
> Saso


[zfs-discuss] Recommendation for home NAS external JBOD

2012-06-17 Thread Koopmann, Jan-Peter
Hi,

my oi151-based home NAS is running frighteningly low on free space. Right now
the data volume is a 4*1TB RAID-Z1 of 3.5" local disks, individually connected
to an 8-port LSI 6Gbit controller.

So I can either exchange the disks one by one with autoexpand, use 2-4 TB
disks, and be happy. This was my original approach. However, I am totally
unclear about the 512-byte vs. 4 KB sector issue. Which SATA disks are big
enough and still use 512-byte sectors? I know about the discussion on upgrading
from a 512-byte-based pool to a 4 KB pool, but I fail to see a conclusion. Will
the autoexpand mechanism upgrade ashift? Which disks do not lie about their
sector size? And is the performance impact significant?
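
(For reference, and assuming I am reading the zdb output correctly: the ashift
of the existing pool can be checked with something like

zdb -C <poolname> | grep ashift

where ashift=9 means 512-byte and ashift=12 means 4 KB alignment.)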

So I started to think about option 2: an external JBOD chassis (4-8 disks) and
eSATA. But I would either need a JBOD with 4-8 eSATA connectors (which I have
yet to find) or use a JBOD with a "good" expander. I see several cheap
SATA-to-eSATA JBOD chassis making use of a "port multiplier". Does this refer
to an expander backplane, and will it work with oi, LSI and mpt or mpt_sas? I
am aware that this is not the most performant solution, but this is a home NAS
storing tons of pictures and videos only. And I could use the internal disks
for backup purposes.

Any suggestions for components are greatly appreciated.

And before you ask: currently I have 3TB net. 6TB net would be the minimum
target; 9TB sounds nicer. So if you have recommendations for 512-byte drives
with 2 or 3TB each, or a good JBOD suggestion, please let me know!


Kind regards,
   JP



Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-17 Thread Koopmann, Jan-Peter
Hi Tim,

thanks to you and the others for answering.

> worst case).  The worst case for 512 emulated sectors on zfs is
> probably small (4KB or so) synchronous writes (which if they mattered
> to you, you would probably have a separate log device, in which case
> the data disk write penalty may not matter).

Good to know. This really opens up the possibility of buying 3 or 4TB
Hitachi drives. At least the 4TB Hitachi drives are 4k (512b emulated)
drives according to the latest news.
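
(Side note, mostly for my own records: if small synchronous writes ever become
an issue, a separate log device could be added to the existing pool later with
something like

zpool add tank log c2t5d0

with the pool and device names obviously being placeholders.)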


> I'm wondering, based on the comment about routing 4 eSATA cables, what
> kind of options your NAS case has, if your LSI controller has SFF-8087
> connectors (or possibly even if it doesn't),

It has, actually.



> you might be able to use
> an adapter to the SFF-8088 external 4 lane SAS connector, which may
> increase your options.

So what you are saying is that something like this will do the trick?

http://www.pc-pitstop.com/sata_enclosures/scsat44xb.asp

If I interpret this correctly: I get an SFF-8087 to SFF-8088 bracket, connect
the 4-port LSI SFF-8087 to that bracket, then get a cable for this JBOD and
throw in 4 drives? This would leave me with four additional HDDs without any
SAS expander hassle. I had not come across these JBODs. Thanks a million for
the hint.

Do we agree that for a home NAS box a Hitachi Deskstar (i.e. not explicitly a
server SATA drive) will suffice despite potential TLER problems? I was thinking
about the Hitachi Deskstar 5K3000 drives. The 4TB models seem to be out now,
but they are rather expensive in comparison…


Kind regards,
   JP






Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Koopmann, Jan-Peter
Hi Carson,

> 
> I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
> They also make a 4-bay TR4X.
> 
> http://www.sansdigital.com/towerraid/tr4xb.html
> http://www.sansdigital.com/towerraid/tr8xb.html

Looks nice! The only thing that comes to mind is that, according to the
specifications, the enclosure is 3Gbit "only". If I choose to put in a 6Gbit
SSD, that would not be optimal. I looked at their site but failed to find
6Gbit enclosures. But I will keep looking, since sooner or later they will
offer them.

I think I will go with replacing the four drives with the Hitachi 3TB drives
for now. This will give me 9TB net at RAID-Z1 level. I will also work out how
expensive an 8-bay enclosure with an 8-port external LSI controller would be,
just in case the 9TB are not sufficient, I need a backup target, or I decide
to go for RAID-Z2. :-)


> 
> They cost a bit more than the one you linked to, but the drives are hot
> swap. They also make similar cases with port multipliers, RAID, etc.,
> but I've only used the JBOD.
> 

I will bookmark them. The enclosures do look nice.


Kind regards,
   JP






Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Koopmann, Jan-Peter
Hi Bob,


> On Mon, 18 Jun 2012, Koopmann, Jan-Peter wrote:
>>  
>>  looks nice! The only thing coming to mind is that according to the
>> specifications the enclosure is 3Gbits "only". If I choose
>>  to put in a SSD with 6Gbits this would be not optimal. I looked at their
>> site but failed to find 6GBit enclosures. But I will
>>  keep looking since sooner or later they will provide it.
> 
> I browsed the site and saw many 6GBit enclosures.  I also saw one with
> Nexenta (Solaris/zfs appliance) inside.

I found several high-end enclosures, or ones with bundled RAID cards, but I
was not able to find the equivalent of the one originally suggested. Then
again, after looking at tons of sites for hours I might simply have missed it.
If you found one, could you please forward a link?


Kind regards,
   JP







Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Koopmann, Jan-Peter
Thanks. I just noticed that the Hitachi 3TB drives are not available. The 4TB
ones are, but as 512-byte-emulated only. However, I can get Barracuda 7200.14
drives with supposedly real 4K sectors quite cheaply. Does anyone have
experience with those? I might get one or two more and go for RAID-Z2 instead
of RAID-Z1.

I even found passive enclosures available in Germany for very little money...
The overall plan would then be to switch to external JBODs and use the
existing drives for backup only.


Kind regards,
  JP



On 19.06.2012 at 01:02, "Carson Gaspar" wrote:

> On 6/18/12 12:19 AM, Koopmann, Jan-Peter wrote:
>> Hi Carson,
>> 
>> 
>>I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
>>They also make a 4-bay TR4X.
>> 
>>http://www.sansdigital.com/towerraid/tr4xb.html
>>http://www.sansdigital.com/towerraid/tr8xb.html
>> 
>> 
>> looks nice! The only thing coming to mind is that according to the
>> specifications the enclosure is 3Gbits "only". If I choose to put in a
>> SSD with 6Gbits this would be not optimal. I looked at their site but
>> failed to find 6GBit enclosures. But I will keep looking since sooner or
>> later they will provide it.
> 
> The JBOD enclosures are completely passive. I can't imagine any reason 
> they wouldn't support 6Gbit SATA/SAS - there are no electronics in them, 
> just wire routing.
> 
> -- 
> Carson
> 


Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-18 Thread Koopmann, Jan-Peter

> 
> What makes you think the Barracuda 7200.14 drives report 4k sectors?

http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg48912.html

Nigel stated this here a few days ago. I did not check for myself. Maybe Nigel 
can comment on this?


As for the question "why do you want 4K drives", my thinking is:

- I will buy 4-6 disks now.
- I must assume that during the next 3-4 years one of them might fail.
- Will I be able to buy a replacement in 3-4 years that reports its sectors in
such a way that resilvering will work? According to the "Advanced Format"
thread this seems to be a problem. I was hoping to get around this with these
disks and have a more future-proof solution.

Moreover:
- If I buy new disks, a new JBOD etc., I might as well get a performant
solution. In other threads ashift 9 vs. 12 is presented as a problem.
- Disk alignment: I am currently using whole disks AFAIK, but I do not
remember. Did I use slices etc.? Is my alignment correct (by the way, how do I
check?)? So I thought: if I start over with a new pool I might get it right,
and this seemed easier with those disks...

I might be totally wrong with my assumptions, and if so: hey, that is the
reason for asking you, knowing I am not the expert myself. :-)

Kind regards,
  JP



Re: [zfs-discuss] Recommendation for home NAS external JBOD

2012-06-19 Thread Koopmann, Jan-Peter
Hi Timothy,

> 
> I think that if you are running an illumos kernel, you can use
> /kernel/drv/sd.conf and tell it that the physical sectors for a disk
> model are 4k, despite what the disk says (and whether they really
> are).  So, if you want an ashift=12 pool on disks that report 512
> sectors, you should be able to do it now without a patched version of
> zpool.

That refers to creating a new pool and is good to know. However, I was more
worried about the comments in the "Advanced Format" thread stating that if you
have an ashift=9, 512-byte-based pool and need to replace a drive, the resilver
might fail if you put in a 4K disk. Assuming that in 2-3 years you might no
longer be able to get 512-byte disks in the sizes you need, this could be a
serious problem.
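
(For the archives: if I understood the sd.conf trick correctly, the override
would look roughly like this, with the drive model being a made-up example:

sd-config-list = "ATA     ST3000DM001-9YN1", "physical-block-size:4096";

i.e. the vendor field padded to eight characters; a pool newly created on that
disk should then come up with ashift=12.)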

> I think prtvtoc is all you need to determine
> if it is aligned, if the bytes/sector value under Dimensions is the
> true (physical) sector size, it should be aligned (if it reports 512
> when it has 4k sectors, then in theory if "First sector" is a multiple
> of 8, it is aligned, but it will probably issue writes of size 512
> which will degrade performance anyway).


Thanks. I will note that down and check once the drives arrive.
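
(I assume that means running something like

prtvtoc /dev/rdsk/c0t0d0s2

with the device name being a placeholder, and then checking the "bytes/sector"
value under Dimensions and that the "First Sector" of the data partition is a
multiple of 8.)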


Kind regards,
   JP






Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Koopmann, Jan-Peter
Hi Karl,


Recently, however, it has started taking over 20 hours to complete. Not much has 
happened to it in that time: A few extra files added, maybe a couple of 
deletions, but not a huge amount. I am finding it difficult to understand why 
performance would have dropped so dramatically.

FYI the server is my dev box running Solaris 11 express, 2 mirrored pairs of 
1.5TB SATA disks for data (at v28), a separate root pool, and a 64GB SSD for 
L2ARC. The data pool has 1.2TB allocated.

Can anyone shed some light on this?

all I can tell you is that I had terrible scrub rates when I used dedup. The
DDT was a bit too big to fit in my memory (I assume, based on some very basic
debugging). Only two of my datasets were deduped. On scrubs and resilvers I
noticed that I sometimes had terrible rates below 10MB/sec; later they rose but
stayed below 70MB/sec. After upgrading some discs (same speeds observed) I got
rid of the deduped datasets (zfs send/receive them) and guess what: all of a
sudden scrubs run at a steady 350MB/sec and only take a fraction of the time.
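
(If you want to check the DDT on your box, something like

zdb -DD <poolname>

prints dedup-table statistics. The rule of thumb I have seen quoted is roughly
320 bytes of RAM per DDT entry, so a table larger than memory means a lot of
extra random reads during scrubs and resilvers. Treat those numbers as an
estimate, not gospel.)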

While I certainly cannot deliver all the necessary explanations, I can tell
you that, from my personal observation, simply getting rid of dedup sped up my
scrubs by a factor of 7 or so (same server, same discs, same data).


Kind regards,
   JP



Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Koopmann, Jan-Peter
Hi Edward,

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Koopmann, Jan-Peter
all I can tell you is that I've had terrible scrub rates when I used dedup.

I can tell you I've had terrible everything rates when I used dedup.

I am not alone then. Thanks! :-)

Only two of my datasets were deduped. On scrubs and
resilvers I noticed that sometimes I had terrible rates with < 10MB/sec. Then
later it rose up to < 70MB/sec. After upgrading some discs (same speeds
observed) I got rid of the deduped datasets (zfs send/receive them) and
guess what: All of the sudden scrub goes to 350MB/sec steady and only take
a fraction of the time.

Are you talking about scrub rates for the complete scrub?  Because if you sit 
there and watch it, from minute to minute, it's normal for it to bounce really 
low for a long time, and then really high for a long time, etc.  The only 
measurement that has any real meaning is time to completion.



Well, both actually. The lowest rate I observed increased significantly
without dedup, and the time to completion decreased a lot as well. I remember
a scrub after a resilver taking approx. 20-24 hours. The last scrub without
dedup took 3:26. :-) A LOT faster…


Kind regards,
   JP


Re: [zfs-discuss] Scrub performance

2013-02-04 Thread Koopmann, Jan-Peter
Hi,


OK then, I guess my next question would be what's the best way to "undedupe" 
the data I have?

Would it work for me to zfs send/receive on the same pool (with dedup off), 
deleting the old datasets once they have been 'copied'?

Yes, that worked for me.
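
Roughly what I did here, with dataset names being placeholders and assuming
there is enough free space for a second copy:

zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank/data-new
zfs set dedup=off tank/data-new
(verify the copy, then)
zfs destroy -r tank/data
zfs rename tank/data-new tank/data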


I think I remember reading somewhere that the DDT never shrinks, so this would 
not work, but it would be the simplest way.

Once you delete all datasets and snapshots that were written with dedup on,
the DDT will be empty. This is what I did here, and it worked like a charm.
But again: I only had dedup actively enabled on two datasets, so my situation
might be different from yours.

Kind regards,
   JP


Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-10 Thread Koopmann, Jan-Peter
Why should it?

Unless you shrink the VMDK and use a ZFS variant with SCSI UNMAP support (I
believe currently only Nexenta, but correct me if I am wrong), the blocks will
not be freed, will they?

Kind regards
  JP


Sent from a mobile device.

On 10.02.2013 at 11:01, "Datnus" wrote:

> I run dd if=/dev/zero of=testfile bs=1024k count=5 inside the iscsi vmfs 
> from ESXi and rm testfile.
> 
> However, the zpool list doesn't decrease at all. In fact, the used storage 
> increases when I do dd.
> 
> FreeNas 8.0.4 and ESXi 5.0
> Help.
> Thanks.
> 


Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-10 Thread Koopmann, Jan-Peter
I forgot about compression. Makes sense. As long as the zeroes find their way 
to the backend storage this should work. Thanks!
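
(To spell out what I now understand, with names being placeholders: enable
compression on the backing zvol, e.g.

zfs set compression=on tank/esx-vol

and then write a large file of zeroes from the initiator side and delete it,
as Datnus did. With compression enabled, all-zero blocks are not stored on
disk, so the space is returned even without SCSI UNMAP. Blocks written before
compression was switched on are not affected, of course.)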



Kind regards
JP



[zfs-discuss] Gen-ATA read sector errors

2011-07-28 Thread Koopmann, Jan-Peter
Hi,

my system is running oi148 on a Super Micro X8SIL-F board. I have two pools (a
2-disc mirror and a 4-disc RAIDZ) with RAID-class SATA drives (Hitachi
HUA72205 and Samsung HE103UJ). The system runs as expected, but every few days
(sometimes weeks) it comes to a halt due to these errors:

Dec  3 13:51:20 nasjpk gda: [ID 107833 kern.warning] WARNING:
/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0 (Disk1):
Dec  3 13:51:20 nasjpk  Error for command 'read sector'  Error Level: Fatal
Dec  3 13:51:20 nasjpk gda: [ID 107833 kern.notice] Requested Block: 5503936, Error Block: 5503936
Dec  3 13:51:20 nasjpk gda: [ID 107833 kern.notice] Sense Key: uncorrectable data error
Dec  3 13:51:20 nasjpk gda: [ID 107833 kern.notice] Vendor 'Gen-ATA ' error code: 0x7

It is not related to this one disk; it happens on all disks. Sometimes several
are listed before the system "crashes", sometimes just one. I cannot pinpoint
it to a single defective disk, though (and I have already replaced the disks).
I suspect this is a problem with the SATA controller or the driver. Can
someone give me a hint on whether that assumption sounds plausible? I am
planning to get a new "cheap" 6-8 port SATA2 or SATA3 controller and switch
the drives over to it. If the problem is driver/controller related, it should
then disappear. Is it possible to simply reconnect the drives and have
everything work, or will I have to reinstall because of different SATA
"layouts" on the disks or the like?
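
(To partly answer my own last question: as far as I understand, ZFS identifies
pool members by the labels on the disks rather than by controller paths, so
for the data pools something like

zpool export tank
(recable the drives to the new controller)
zpool import tank

with "tank" being a placeholder for the pool name should be enough, without a
reinstall. Please correct me if that is wrong.)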

Kind regards,
   JP