Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Sašo Kiselkov
On 11/30/2011 02:40 PM, Edmund White wrote: > Absolutely. > > I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server > running NexentaStor. > > On the HBA side, I used the LSI 9211-8i 6G controllers for the server's > internal disks (boot, a handful of large disks, Pliant SSDs for

Re: [zfs-discuss] Fixing txg commit frequency

2012-01-06 Thread Sašo Kiselkov
On 07/01/2011 12:01 AM, Sašo Kiselkov wrote: > On 06/30/2011 11:56 PM, Sašo Kiselkov wrote: >> Hm, it appears I'll have to do some reboots and more extensive testing. >> I tried tuning various settings and then returned everything back to the >> defaults. Yet, no

Re: [zfs-discuss] Windows 8 ReFS (OT)

2012-01-17 Thread Sašo Kiselkov
On 01/17/2012 01:06 AM, David Magda wrote: > Kind of off topic, but I figured of some interest to the list. There will be > a new file system in Windows 8 with some features that we all know and love > in ZFS: > >> As mentioned previously, one of our design goals was to detect and correct >> co

[zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
Hi, I'm getting weird errors while trying to install openindiana 151a on a Dell R715 with a PERC H200 (based on an LSI SAS 2008). Any time the OS tries to access the drives (for whatever reason), I get this dumped into syslog: genunix: WARNING: Device /pci@0,0/pci1002,5a18@4/pci10b5,8424@0/pci10b5

Re: [zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote: > Hi, > > are those DELL branded WD disks? DELL tends to manipulate the > firmware of the drives so that power handling with Solaris fails. > If this is the case here: > > Easiest way to make it work is to modify /kernel/drv/sd.conf and > add an

Re: [zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
On 05/16/2012 09:45 AM, Koopmann, Jan-Peter wrote: > Hi, > > are those DELL branded WD disks? DELL tends to manipulate the firmware of > the drives so that power handling with Solaris fails. If this is the case > here: > > Easiest way to make it work is to modify /kernel/drv/sd.conf and add an >
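For reference, a minimal sketch of the sd.conf workaround being discussed. The vendor/product string below is hypothetical; it must match the drive's SCSI inquiry data exactly, including space padding (8-character vendor field, 16-character product field):

    # /kernel/drv/sd.conf -- disable power-condition handling for the drive
    sd-config-list = "WD      WD2002FYPS", "power-condition:false";

After editing, the configuration can usually be reloaded without a reboot via update_drv -f sd.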

Re: [zfs-discuss] Dell PERC H200: drive failed to power up

2012-05-16 Thread Sašo Kiselkov
On 05/16/2012 10:17 AM, Koopmann, Jan-Peter wrote: >> >> >> One thing came up while trying this - I'm on a text install >> image system, so my / is a ramdisk. Any ideas how I can change >> the sd.conf on the USB disk or reload the driver configuration on >> the fly? I tried looking for the file o

[zfs-discuss] MPxIO n00b question

2012-05-25 Thread Sašo Kiselkov
I'm currently trying to get a SuperMicro JBOD with dual SAS expander chips running in MPxIO, but I'm a total amateur to this and would like to ask about how to detect whether MPxIO is working (or not). My SAS topology is: *) One LSI SAS2008-equipped HBA (running the latest IT firmware from L
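A quick way to check, assuming the stock Solaris/illumos multipathing tools (the device name is hypothetical):

    # each LU should report two operational paths when MPxIO is active
    mpathadm list lu
    mpathadm show lu /dev/rdsk/c0t5000C50012345678d0s2

If the same disks instead appear twice under separate controller names, MPxIO is not engaged; stmsboot -e enables it globally.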

Re: [zfs-discuss] MPxIO n00b question

2012-05-25 Thread Sašo Kiselkov
On 05/25/2012 07:35 PM, Jim Klimov wrote: > Sorry I can't comment on MPxIO, except that I thought zfs could by > itself discern two paths to the same drive, if only to protect > against double-importing the disk into pool. Unfortunately, it isn't the same thing. MPxIO provides redundant signaling

Re: [zfs-discuss] MPxIO n00b question

2012-05-25 Thread Sašo Kiselkov
On 05/25/2012 08:40 PM, Richard Elling wrote: > See the solution at https://www.illumos.org/issues/644 -- richard Good Lord, that was it! It never occurred to me that the drives had a say in this. Thanks a billion! Cheers, -- Saso

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-27 Thread Sašo Kiselkov
On 05/07/2012 05:42 AM, Greg Mason wrote: > I am currently trying to get two of these things running Illumian. I don't > have any particular performance requirements, so I'm thinking of using some > sort of supported hypervisor, (either RHEL and KVM or VMware ESXi) to get > around the driver sup

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Sašo Kiselkov
On 05/28/2012 10:48 AM, Ian Collins wrote: > To follow up, the H310 appears to be useless in non-raid mode. > > The drives do show up in Solaris 11 format, but they show up as > unknown, unformatted drives. One oddity is the box has two SATA > SSDs which also show up in the card's BIOS, but present

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Sašo Kiselkov
On 05/28/2012 11:48 AM, Ian Collins wrote: > On 05/28/12 08:55 PM, Sašo Kiselkov wrote: >> On 05/28/2012 10:48 AM, Ian Collins wrote: >>> To follow up, the H310 appears to be useless in non-raid mode. >>> >>> The drives do show up in Solaris 11 format, but they

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Sašo Kiselkov
On 05/28/2012 12:59 PM, Ian Collins wrote: > On 05/28/12 10:53 PM, Sašo Kiselkov wrote: >> On 05/28/2012 11:48 AM, Ian Collins wrote: >>> On 05/28/12 08:55 PM, Sašo Kiselkov wrote: >>>> On 05/28/2012 10:48 AM, Ian Collins wrote: >>>>> To follow up, th

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Sašo Kiselkov
On 05/28/2012 01:12 PM, Ian Collins wrote: > On 05/28/12 11:01 PM, Sašo Kiselkov wrote: >> On 05/28/2012 12:59 PM, Ian Collins wrote: >>> On 05/28/12 10:53 PM, Sašo Kiselkov wrote: >>>> On 05/28/2012 11:48 AM, Ian Collins wrote: >>>>> On 05/28/12 08:55

Re: [zfs-discuss] MPxIO n00b question

2012-05-30 Thread Sašo Kiselkov
On 05/25/2012 08:40 PM, Richard Elling wrote: > See the solution at https://www.illumos.org/issues/644 > -- richard And predictably, I'm back with another n00b question regarding this array. I've put a pair of LSI-9200-8e controllers in the server and attached the cables to the enclosure to each o

Re: [zfs-discuss] MPxIO n00b question

2012-05-30 Thread Sašo Kiselkov
On 05/30/2012 10:53 PM, Richard Elling wrote: > On May 30, 2012, at 1:07 PM, Sašo Kiselkov wrote: > >> On 05/25/2012 08:40 PM, Richard Elling wrote: >>> See the soluion at https://www.illumos.org/issues/644 >>> -- richard >> >> And predictably, I'm

Re: [zfs-discuss] MPxIO n00b question

2012-05-30 Thread Sašo Kiselkov
On 05/30/2012 10:53 PM, Richard Elling wrote: > Those ereports are consistent with faulty cabling. You can trace all of the > cables and errors using tools like lsiutil, sg_logs, kstats, etc. > Unfortunately, > it is not really possible to get into this level of detail over email, and it > can >

[zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-06 Thread Sašo Kiselkov
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm occasionally seeing a storm of xcalls on one of the 32 VCPUs (>10 xcalls a second). Th
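A one-liner for watching per-CPU cross-call rates while such a storm is in progress, using the standard DTrace sysinfo provider:

    dtrace -n 'sysinfo:::xcalls { @[cpu] = count(); } tick-1s { printa(@); trunc(@); }'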

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-06 Thread Sašo Kiselkov
On 06/06/2012 04:55 PM, Richard Elling wrote: > On Jun 6, 2012, at 12:48 AM, Sašo Kiselkov wrote: > >> So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached >> to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two >> LSI 9200 contro

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-06 Thread Sašo Kiselkov
On 06/06/2012 05:01 PM, Sašo Kiselkov wrote: > I'll try and load the machine with dd(1) to the max to see if access > patterns of my software have something to do with it. Tried and tested, any and all write I/O to the pool causes this xcall storm issue, writing more data to it only

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-06 Thread Sašo Kiselkov
On 06/06/2012 09:43 PM, Jim Mauro wrote: > > I can't help but be curious about something, which perhaps you verified but > did not post. > > What the data here shows is; > - CPU 31 is buried in the kernel (100% sys). > - CPU 31 is handling a moderate-to-high rate of xcalls. > > What the data doe

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
Seems the problem is somewhat more egregious than I thought. The xcall storm causes my network drivers to stop receiving IP multicast packets and subsequently my recording applications record bad data, so ultimately, this kind of isn't workable... I need to somehow resolve this... I'm running four

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
On 06/12/2012 03:57 PM, Sašo Kiselkov wrote: > Seems the problem is somewhat more egregious than I thought. The xcall > storm causes my network drivers to stop receiving IP multicast packets > and subsequently my recording applications record bad data, so > ultimately, this kind of is

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
On 06/12/2012 05:21 PM, Matt Breitbach wrote: > I saw this _exact_ problem after I bumped ram from 48GB to 192GB. Low > memory pressure seemed to be the culprit. Happened usually during storage > vmotions or something like that which effectively nullified the data in the > ARC (sometimes 50GB of

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote: > > So the xcalls are a necessary part of memory reclaiming, when one needs to tear > down the TLB entry mapping the physical memory (which can from here on be > repurposed). > So the xcalls are just part of this. They should not cause trouble, but they do.

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
On 06/12/2012 06:06 PM, Jim Mauro wrote: > >> >>> So try unbinding the mac threads; it may help you here. >> >> How do I do that? All I can find on interrupt fencing and the like is to >> simply set certain processors to no-intr, which moves all of the >> interrupts and it doesn't prevent the xcal

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
On 06/12/2012 05:58 PM, Andy Bowers - Performance Engineering wrote: > find where your nics are bound to > > mdb -k > ::interrupts > > create a processor set including those cpus [ so just the nic code will > run there ] > > andy Tried and didn't help, unfortunately. I'm still seeing drops. Wh
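Spelled out, the suggested sequence looks roughly like this; the driver name and CPU numbers are hypothetical and would come from the ::interrupts output:

    echo '::interrupts' | mdb -k | grep igb   # find which CPUs the NIC interrupts are bound to
    psrset -c 28 29                           # fence those CPUs into a dedicated processor set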

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Sašo Kiselkov
On 06/12/2012 07:19 PM, Roch Bourbonnais wrote: > > Try with this /etc/system tunings : > > set mac:mac_soft_ring_thread_bind=0 set mac:mac_srs_thread_bind=0 > set zfs:zio_taskq_batch_pct=50 > Thanks for the recommendations, I'll try and see whether it helps, but this is going to take me a whi
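Written out as they would appear in /etc/system, the tunables quoted above are:

    set mac:mac_soft_ring_thread_bind=0
    set mac:mac_srs_thread_bind=0
    set zfs:zio_taskq_batch_pct=50

A reboot is required for /etc/system changes to take effect.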

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Sašo Kiselkov
On 06/15/2012 02:14 PM, Hans J Albertsson wrote: > I've got my root pool on a mirror on 2 512 byte blocksize disks. > I want to move the root pool to two 2 TB disks with 4k blocks. > The server only has room for two disks. I do have an esata connector, though, > and a suitable external cabinet for

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Sašo Kiselkov
On 06/15/2012 03:35 PM, Johannes Totz wrote: > On 15/06/2012 13:22, Sašo Kiselkov wrote: >> On 06/15/2012 02:14 PM, Hans J Albertsson wrote: >>> I've got my root pool on a mirror on 2 512 byte blocksize disks. I >>> want to move the root pool to two 2 TB disks wit

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-17 Thread Sašo Kiselkov
On 06/13/2012 03:43 PM, Roch wrote: > > Sašo Kiselkov writes: > > On 06/12/2012 05:37 PM, Roch Bourbonnais wrote: > > > > > > So the xcall are necessary part of memory reclaiming, when one needs to > tear down the TLB entry mapping the physical memory (which

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-18 Thread Sašo Kiselkov
On 06/18/2012 12:05 AM, Richard Elling wrote: > You might try some of the troubleshooting techniques described in Chapter 5 > of the DTrace book by Brendan Gregg and Jim Mauro. It is not clear from your > description that you are seeing the same symptoms, but the technique should > apply. > -- r

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-19 Thread Sašo Kiselkov
On 06/19/2012 11:05 AM, Sašo Kiselkov wrote: > On 06/18/2012 07:50 PM, Roch wrote: >> >> Are we hitting : >> 7167903 Configuring VLANs results in single threaded soft ring fanout > > Confirmed, it is definitely this. Hold the phone, I just tried unconfiguring all

[zfs-discuss] New fast hash algorithm - is it needed?

2012-07-10 Thread Sašo Kiselkov
Hi guys, I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS implementation to supplant the currently utilized sha256. On modern 64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much slower than many of the SHA-3 candidates, so I went out and did some testin
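For context, the checksum algorithm is already selectable per dataset; a new hash would presumably slot in as another value of the same property (pool and dataset names hypothetical):

    zfs set checksum=sha256 tank/data
    zfs get checksum tank/data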

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-10 Thread Sašo Kiselkov
On 07/11/2012 02:18 AM, John Martin wrote: > On 07/10/12 19:56, Sašo Kiselkov wrote: >> Hi guys, >> >> I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS >> implementation to supplant the currently utilized sha256. On modern >> 64-bi

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 05:20 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Sašo Kiselkov >> >> I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS >>

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
> On Wed, Jul 11, 2012 at 9:19 AM, Sašo Kiselkov wrote: Fletcher is a checksum, not a hash. It can and often will produce collisions, so you need to set your dedup to verify (do a bit-by-bit comparison prior to deduplication) which can result in significant write amplification (every w

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 10:41 AM, Ferenc-Levente Juhos wrote: > I was under the impression that the hash (or checksum) used for data > integrity is the same as the one used for deduplication, > but now I see that they are different. They are the same "in use", i.e. once you switch dedup on, that implies che

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 10:47 AM, Joerg Schilling wrote: > Sašo Kiselkov wrote: > >> write in case verify finds the blocks are different). With hashes, you >> can leave verify off, since hashes are extremely unlikely (~10^-77) to >> produce collisions. > > This is how a lottery works. The chance is low b

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 11:02 AM, Darren J Moffat wrote: > On 07/11/12 00:56, Sašo Kiselkov wrote: >> * SHA-512: simplest to implement (since the code is already in the >> kernel) and provides a modest performance boost of around 60%. > > FIPS 180-4 introduces SHA-512/t support a

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 10:50 AM, Ferenc-Levente Juhos wrote: > Actually although as you pointed out that the chances to have an sha256 > collision is minimal, but still it can happen, that would mean > that the dedup algorithm discards a block that he thinks is a duplicate. > Probably it's anyway better to

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 11:53 AM, Tomas Forsman wrote: > On 11 July, 2012 - Sašo Kiselkov sent me these 1,4K bytes: >> Oh jeez, I can't remember how many times this flame war has been going >> on on this list. Here's the gist: SHA-256 (or any good hash) produces a >> near uniform random distribution of outp

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 12:00 PM, casper@oracle.com wrote: > > >> You do realize that the age of the universe is only on the order of >> around 10^18 seconds, don't you? Even if you had a trillion CPUs each >> chugging along at 3.0 GHz for all this time, the number of processor >> cycles you will have exe

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 12:24 PM, Justin Stringfellow wrote: >> Suppose you find a weakness in a specific hash algorithm; you use this >> to create hash collisions and now imagined you store the hash collisions >> in a zfs dataset with dedup enabled using the same hash algorithm. > > Sorry, but isn't t

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 12:32 PM, Ferenc-Levente Juhos wrote: > Saso, I'm not flaming at all, I happen to disagree, but still I understand > that > chances are very very very slim, but as one poster already said, this is > how > the lottery works. I'm not saying one should make an exhaustive search with > tr

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 12:37 PM, Ferenc-Levente Juhos wrote: > Precisely, I said the same thing a few posts before: > dedup=verify solves that. And as I said, one could use dedup=<hash algorithm>,verify with > an inferior hash algorithm (that is much faster) with the purpose of > reducing the number of dedup can
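The property syntax under discussion, for a hypothetical pool named tank; note that stock ZFS only exposes sha256 as a dedup hash, so the "weaker but faster hash plus verify" combination is a proposal rather than an existing option:

    zfs set dedup=verify tank           # shorthand: sha256 plus bit-for-bit verification
    zfs set dedup=sha256,verify tank    # explicit equivalent of the above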

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 01:09 PM, Justin Stringfellow wrote: >> The point is that hash functions are many to one and I think the point >> was about that verify wasn't really needed if the hash function is good >> enough. > > This is a circular argument really, isn't it? Hash algorithms are never > perfect,

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 01:36 PM, casper@oracle.com wrote: > > >> This assumes you have low volumes of deduplicated data. As your dedup >> ratio grows, so does the performance hit from dedup=verify. At, say, >> dedupratio=10.0x, on average, every write results in 10 reads. > > I don't follow. > > If

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 01:42 PM, Justin Stringfellow wrote: >> This assumes you have low volumes of deduplicated data. As your dedup >> ratio grows, so does the performance hit from dedup=verify. At, say, >> dedupratio=10.0x, on average, every write results in 10 reads. > > Well you can't make an omelette

Re: [zfs-discuss] Solaris derivate with the best long-term future

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 01:51 PM, Eugen Leitl wrote: > > As a napp-it user who recently needs to upgrade from NexentaCore I recently > saw > "preferred for OpenIndiana live but running under Illumian, NexentaCore and > Solaris 11 (Express)" > as a system recommendation for napp-it. > > I wonder about th

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 03:39 PM, David Magda wrote: > On Tue, July 10, 2012 19:56, Sašo Kiselkov wrote: >> However, before I start out on a pointless endeavor, I wanted to probe >> the field of ZFS users, especially those using dedup, on whether their >> workloads would benefit from a faster hash algorithm

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 03:57 PM, Gregg Wonderly wrote: > Since there is a finite number of bit patterns per block, have you tried to > just calculate the SHA-256 or SHA-512 for every possible bit pattern to see > if there is ever a collision? If you found an algorithm that produced no > collisions for a

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 03:58 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Sašo Kiselkov >> >> I really mean no disrespect, but this comment is so dumb I could swear >> my IQ dropped by a

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:19 PM, Gregg Wonderly wrote: > But this is precisely the kind of "observation" that some people seem to miss > out on the importance of. As Tomas suggested in his post, if this was true, > then we could have a huge compression ratio as well. And even if there was > 10% of the

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:22 PM, Bob Friesenhahn wrote: > On Wed, 11 Jul 2012, Sašo Kiselkov wrote: >> the hash isn't used for security purposes. We only need something that's >> fast and has a good pseudo-random output distribution. That's why I >> looked toward Edon-R

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:23 PM, casper@oracle.com wrote: > >> On Tue, 10 Jul 2012, Edward Ned Harvey wrote: >>> >>> CPU's are not getting much faster. But IO is definitely getting faster. >>> It's best to keep ahead of that curve. >> >> It seems that per-socket CPU performance is doubling every

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:27 PM, Gregg Wonderly wrote: > Unfortunately, the government imagines that people are using their home > computers to compute hashes and try and decrypt stuff. Look at what is > happening with GPUs these days. People are hooking up 4 GPUs in their > computers and getting huge

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:30 PM, Gregg Wonderly wrote: > This is exactly the issue for me. It's vital to always have verify on. If > you don't have the data to prove that every possible block combination > possible, hashes uniquely for the "small" bit space we are talking about, > then how in the world

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:36 PM, Justin Stringfellow wrote: > > >> Since there is a finite number of bit patterns per block, have you tried to >> just calculate the SHA-256 or SHA-512 for every possible bit pattern to see >> if there is ever a collision? If you found an algorithm that produced no >> c

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:39 PM, Ferenc-Levente Juhos wrote: > As I said several times before, to produce hash collisions. Or to calculate > rainbow tables (as a previous user theorized it) you only need the > following. > > You don't need to reproduce all possible blocks. > 1. SHA256 produces a 256 bit ha

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:54 PM, Ferenc-Levente Juhos wrote: > You don't have to store all hash values: > a. Just memorize the first one SHA256(0) > b. start counting > c. bang: by the time you get to 2^256 you get at least a collision. Just one question: how long do you expect this is going to take on averag
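A back-of-the-envelope answer, assuming a (generous) 10^9 hash evaluations per second and the usual birthday bound of roughly 2^128 evaluations before a 256-bit hash is expected to produce any collision at all:

    echo '2^128 / (10^9 * 3600 * 24 * 365)' | bc   # roughly 10^22 years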

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 04:56 PM, Gregg Wonderly wrote: > So, if I had a block collision on my ZFS pool that used dedup, and it had my > bank balance of $3,212.20 on it, and you tried to write your bank balance of > $3,292,218.84 and got the same hash, no verify, and thus you got my > block/balance and no

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 05:10 PM, David Magda wrote: > On Wed, July 11, 2012 09:45, Sašo Kiselkov wrote: > >> I'm not convinced waiting makes much sense. The SHA-3 standardization >> process' goals are different from "ours". SHA-3 can choose to go with >> someth

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 05:33 PM, Bob Friesenhahn wrote: > On Wed, 11 Jul 2012, Sašo Kiselkov wrote: >> >> The reason why I don't think this can be used to implement a practical >> attack is that in order to generate a collision, you first have to know >> the disk block tha

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 05:58 PM, Gregg Wonderly wrote: > You're entirely sure that there could never be two different blocks that can > hash to the same value and have different content? > > Wow, can you just send me the cash now and we'll call it even? You're the one making the positive claim and I'm ca

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 06:23 PM, Gregg Wonderly wrote: > What I'm saying is that I am getting conflicting information from your > rebuttals here. Well, let's address that then: > I (and others) say there will be collisions that will cause data loss if > verify is off. Saying that "there will be" withou

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Sašo Kiselkov
On 07/11/2012 10:06 PM, Bill Sommerfeld wrote: > On 07/11/12 02:10, Sašo Kiselkov wrote: >> Oh jeez, I can't remember how many times this flame war has been going >> on on this list. Here's the gist: SHA-256 (or any good hash) produces a >> near uniform random di

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-12 Thread Sašo Kiselkov
On 07/12/2012 07:16 PM, Tim Cook wrote: > Saso: yes, it's absolutely worth implementing a higher performing hashing > algorithm. I'd suggest simply ignoring the people that aren't willing to > acknowledge basic mathematics rather than lashing out. No point in feeding > the trolls. The PETABYTES

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-12 Thread Sašo Kiselkov
On 07/12/2012 09:52 PM, Sašo Kiselkov wrote: > I have far too much time to explain P.S. that should have read "I have taken far too much time explaining". Men are crap at multitasking... Cheers, -- Saso

Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-07-23 Thread Sašo Kiselkov
Hi, Have you had a look at iostat -E (error counters) to make sure you don't have faulty cabling? I've had bad cables trip me up once in a manner similar to your situation here. Cheers, -- Saso On 07/23/2012 07:18 AM, Yuri Vorobyev wrote: > Hello. > > I faced a strange performance problem with ne
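The check in question, filtered down to the error counters (device names vary per system):

    iostat -En | grep Errors

Nonzero Transport Errors are the usual signature of bad cabling.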

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Sašo Kiselkov
On 07/25/2012 05:49 PM, Habony, Zsolt wrote: > Hello, > There is a feature of zfs (autoexpand, or zpool online -e ) that it can > consume the increased LUN immediately and increase the zpool size. > That would be a very useful ( vital ) feature in enterprise environment. > > Though when I t
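The commands being discussed, for a hypothetical pool and device:

    zpool set autoexpand=on tank
    zpool online -e tank c0t0d0   # expand into the grown LUN without taking it offline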

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-29 Thread Sašo Kiselkov
On 07/29/2012 04:07 PM, Jim Klimov wrote: > Hello, list Hi Jim, > For several times now I've seen statements on this list implying > that a dedicated ZIL/SLOG device catching sync writes for the log, > also allows for more streamlined writes to the pool during normal > healthy TXG syncs, than i

Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-29 Thread Sašo Kiselkov
On 07/29/2012 06:01 PM, Jim Klimov wrote: > 2012-07-29 19:50, Sašo Kiselkov wrote: >> On 07/29/2012 04:07 PM, Jim Klimov wrote: >>>For several times now I've seen statements on this list implying >>> that a dedicated ZIL/SLOG device catching sync writes for t

Re: [zfs-discuss] Can the ZFS "copies" attribute substitute HW disk redundancy?

2012-08-01 Thread Sašo Kiselkov
On 08/01/2012 12:04 PM, Jim Klimov wrote: > Probably DDT is also stored with 2 or 3 copies of each block, > since it is metadata. It was not in the last ZFS on-disk spec > from 2006 that I found, for some apparent reason ;) That's probably because it's extremely big (dozens, hundreds or even thous
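The DDT's size can be inspected directly; zdb reports per-entry on-disk and in-core sizes along with a histogram (pool name hypothetical):

    zdb -DD tank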

Re: [zfs-discuss] Can the ZFS "copies" attribute substitute HW disk redundancy?

2012-08-01 Thread Sašo Kiselkov
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Jim Klimov >> >> Availability of the DDT is IMHO crucial to a deduped pool, so >> I won't be surprised to see it forced to

Re: [zfs-discuss] Can the ZFS "copies" attribute substitute HW disk redundancy?

2012-08-01 Thread Sašo Kiselkov
On 08/01/2012 04:14 PM, Jim Klimov wrote: > 2012-08-01 17:55, Sašo Kiselkov wrote: >> On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote: >>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >>>> boun...@opensolaris.org] On Behalf Of

Re: [zfs-discuss] number of blocks changes

2012-08-03 Thread Sašo Kiselkov
On 08/03/2012 03:18 PM, Justin Stringfellow wrote: > While this isn't causing me any problems, I'm curious as to why this is > happening...: > > > > $ dd if=/dev/random of=ob bs=128k count=1 && while true Can you check whether this happens from /dev/urandom as well? -- Saso
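A minimal variant of the quoted test, assuming the (truncated) loop was watching the file's allocated block count:

    dd if=/dev/urandom of=ob bs=128k count=1
    while true; do ls -s ob; sleep 1; done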

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Sašo Kiselkov
On 08/07/2012 12:12 AM, Christopher George wrote: >> Is your DDRdrive product still supported and moving? > > Yes, we now exclusively target ZIL acceleration. > > We will be at the upcoming OpenStorage Summit 2012, > and encourage those attending to stop by our booth and > say hello :-) > > http

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-07 Thread Sašo Kiselkov
On 08/07/2012 02:18 AM, Christopher George wrote: >> I mean this as constructive criticism, not as angry bickering. I totally >> respect you guys doing your own thing. > > Thanks, I'll try my best to address your comments... Thanks for your kind reply, though there are some points I'd like to add

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-07 Thread Sašo Kiselkov
On 08/07/2012 04:08 PM, Bob Friesenhahn wrote: > On Tue, 7 Aug 2012, Sašo Kiselkov wrote: >> >> MLC is so much cheaper that you can simply slap on twice as much and use >> the rest for ECC, mirroring or simply overprovisioning sectors. The >> common practice of extending

Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Sašo Kiselkov
On 08/09/2012 12:52 PM, Joerg Schilling wrote: > Jim Klimov wrote: > >> In the end, the open-sourced ZFS community got no public replies >> from Oracle regarding collaboration or lack thereof, and decided >> to part ways and implement things independently from Oracle. >> AFAIK main ZFS developmen

Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Sašo Kiselkov
On 08/09/2012 01:05 PM, Joerg Schilling wrote: > Sašo Kiselkov wrote: > >>> To me it seems that the "open-sourced ZFS community" is not open, or could >>> you >>> point me to their mailing list archives? >>> >>> Jörg >>> >> >> z...@lists.illumos.org > > Well, why then has there been a discussi

Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Sašo Kiselkov
On 08/09/2012 01:11 PM, Joerg Schilling wrote: > Sašo Kiselkov wrote: > >> On 08/09/2012 01:05 PM, Joerg Schilling wrote: >>> Sašo Kiselkov wrote: >>> > To me it seems that the "open-sourced ZFS community" is not open, or > could you > point me to their mailing list archives? >

Re: [zfs-discuss] Recovering lost labels on raidz member

2012-08-13 Thread Sašo Kiselkov
On 08/13/2012 03:02 AM, Scott wrote: > Hi all, > > I have a 5 disk raidz array in a state of disrepair. Suffice to say three > disks are ok, while two are missing all their labels. (Both ends of the > disks were overwritten). The data is still intact. There are 4 labels on a zfs-labeled disk,
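The four labels (two at each end of the device) can be dumped with zdb to see which, if any, survived (device name hypothetical):

    zdb -l /dev/rdsk/c0t2d0s0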

Re: [zfs-discuss] Recovering lost labels on raidz member

2012-08-13 Thread Sašo Kiselkov
On 08/13/2012 10:00 AM, Sašo Kiselkov wrote: > On 08/13/2012 03:02 AM, Scott wrote: >> Hi all, >> >> I have a 5 disk raidz array in a state of disrepair. Suffice to say three >> disks are ok, while two are missing all their labels. (Both ends of the >> disks were

Re: [zfs-discuss] Recovering lost labels on raidz member

2012-08-13 Thread Sašo Kiselkov
On 08/13/2012 10:45 AM, Scott wrote: > Hi Saso, > > thanks for your reply. > > If all disks are the same, is the root pointer the same? No. > Also, is there a "signature" or something unique to the root block that I can > search for on the disk? I'm going through the On-disk specification at t

Re: [zfs-discuss] How do I import a zpool with a file as a member device?

2012-08-13 Thread Sašo Kiselkov
On 08/13/2012 12:48 PM, Ray Arachelian wrote: > While attempting to fix the last of my damaged zpools, there's one that > consists of 4 drives + one 60G file. The file happened by accident - I > attempted to add a partition off an SSD drive but missed the cache > keyword. Of course, once this is

Re: [zfs-discuss] How do I import a zpool with a file as a member device?

2012-08-13 Thread Sašo Kiselkov
On 08/13/2012 02:01 PM, Ray Arachelian wrote: > On 08/13/2012 06:50 AM, Sašo Kiselkov wrote: >> See the -d option to zpool import. -- Saso > > Many thanks for this, it worked very nicely, though the first time > I ran it, it failed. So what -d does is to substitute /dev. In
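A sketch of the workaround, assuming the file-backed vdev lives outside /dev (all paths hypothetical):

    mkdir /tmp/vdevs
    ln -s /storage/member.img /tmp/vdevs/member.img
    zpool import -d /tmp/vdevs -d /dev/dsk tank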

Re: [zfs-discuss] Recover data after zpool create -f

2012-08-20 Thread Sašo Kiselkov
On 08/20/2012 08:55 PM, Ernest Dipko wrote: > Is there any way to recover the data within a zpool after a zpool create -f > was issued on the disks? > > We had a pool that contained two internal disks (mirrored) and we added a > zvol to it out of an existing pool for some temporary space. After

Re: [zfs-discuss] Recover data after zpool create -f

2012-08-20 Thread Sašo Kiselkov
On 08/20/2012 10:15 PM, Jim Klimov wrote: > 2012-08-20 23:39, Sašo Kiselkov wrote: >>> We then tried to recreate the pool, which was successful - but >>> without data… >> >> A zpool create overwrites all labels on a device (that's why you had to >> ad

Re: [zfs-discuss] Dedicated metadata devices

2012-08-24 Thread Sašo Kiselkov
This is something I've been looking into in the code and my take on your proposed points is this: 1) This requires many and deep changes across much of ZFS's architecture (especially the ability to sustain tlvdev failures). 2) Most of this can be achieved (except for cache persistency) by implementi

Re: [zfs-discuss] Backing up ZFS metadata

2012-08-24 Thread Sašo Kiselkov
On 08/24/2012 05:13 PM, Scott Aitken wrote: > Hi all, > > I know the easiest answer to this question is "don't do it in the first > place, and if you do, you should have a backup", however I'll ask it > regardless. > > Is there a way to backup the ZFS metadata on each member device of a pool > to

Re: [zfs-discuss] Dedicated metadata devices

2012-08-24 Thread Sašo Kiselkov
Oh man, that's a million-billion points you made. I'll try to run through each quickly. On 08/24/2012 05:43 PM, Jim Klimov wrote: > First of all, thanks for reading and discussing! :) No problem at all ;) > 2012-08-24 17:50, Sašo Kiselkov wrote: >> This is something I

Re: [zfs-discuss] Dedicated metadata devices

2012-08-24 Thread Sašo Kiselkov
On 08/25/2012 12:22 AM, Jim Klimov wrote: > 2012-08-25 0:42, Sašo Kiselkov wrote: >> Oh man, that's a million-billion points you made. I'll try to run >> through each quickly. > > Thanks... > I still do not have the feeling that you've fully got my

Re: [zfs-discuss] Dedicated metadata devices

2012-08-25 Thread Sašo Kiselkov
On 08/25/2012 11:53 AM, Jim Klimov wrote: >> No they're not, here's l2arc_buf_hdr_t a per-buffer structure >> held for >> buffers which were moved to l2arc: >> >> typedef struct l2arc_buf_hdr { >> l2arc_dev_t *b_dev; >> uint64_t b_daddr; >> } l2arc_buf_hdr_t; >> >> That's about 16-bytes overhead p

Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Sašo Kiselkov
On 08/26/2012 07:40 AM, Yuri Vorobyev wrote: > Can someone with Supermicro JBOD equipped with SAS drives and LSI > HBA do this sequential read test? Did that on a SC847 with 45 drives, read speeds around 2GB/s aren't a problem. > Don't forget to set primarycache=none on testing dataset. There's
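The test being referred to, roughly (dataset and file names hypothetical; primarycache=none keeps ARC hits from inflating the numbers):

    zfs set primarycache=none tank/bench
    dd if=/tank/bench/bigfile of=/dev/null bs=1M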

Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Sašo Kiselkov
On 08/27/2012 10:37 AM, Yuri Vorobyev wrote: > Is there any way to disable ARC for testing and leave prefetch enabled? No. The reason is quite simple: prefetch is a mechanism separate from your application's direct read requests. Prefetch runs ahead of your anticipated read requests and

Re: [zfs-discuss] slow speed problem with a new SAS shelf

2012-08-27 Thread Sašo Kiselkov
On 08/27/2012 12:58 PM, Yuri Vorobyev wrote: > 27.08.2012 14:43, Sašo Kiselkov wrote: > >>> Is there any way to disable ARC for testing and leave prefetch enabled? >> >> No. The reason is quite simple: prefetch is a mechanism separate >> from your d

Re: [zfs-discuss] Zpool recovery after too many failed disks

2012-08-27 Thread Sašo Kiselkov
On 08/27/2012 09:02 PM, Mark Wolek wrote: > RAIDz set, lost a disk, replaced it... lost another disk during resilver. > Replaced it, ran another resilver, and now it shows all disks with too many > errors. > > Safe to say this is getting rebuilt and restored, or is there hope to recover > some
