On 06/13/13 22:55, Shlomi Fish wrote:
That's nice and dandy, but please don't publish the products of your
"one-liners turned into scripts" here without cleaning them up first, because
there are beginners on this mailing list who need to learn good practices from
the code and posts here, and code
Shlomi Fish writes:
>> > But if the size hasn't changed, then you still need to check something
>> > else. You can do another light check, or decide to do the heavy one.
>> >
>> > This is also important because a hash-value is only a fingerprint, so
>> > different files have (a small chance of) having the same hash
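The light-check/heavy-check idea above can be sketched as follows. This is only an illustration, not the poster's actual code; `file_changed` and the `$old` record are made-up names, and `Digest::SHA` (a core Perl module) stands in for whatever hash is used:

```perl
use strict;
use warnings;
use Digest::SHA;

# Light check: compare size and mtime from stat().
# Heavy check: compute a full SHA-256 digest, but only when the
# cheap metadata looks unchanged (hypothetical sketch).
sub file_changed {
    my ( $path, $old ) = @_;    # $old = { size, mtime, digest }
    my ( $size, $mtime ) = ( stat $path )[ 7, 9 ];
    return 1 if $size != $old->{size} or $mtime != $old->{mtime};

    # Same size and mtime: do the expensive check to be sure.
    my $digest = Digest::SHA->new(256)->addfile($path)->hexdigest;
    return $digest ne $old->{digest};
}
```

Note that a digest match still only says "almost certainly unchanged", for the fingerprint reason discussed above.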
Jim Gibson writes:
> Some comments on your code:
>
> 1. Rather than doing this:
>
> while ( <$curridx>) { chomp $_; my $current_file = $_;
>
> you should do this:
>
> while ( my $current_file = <$curridx> ) { chomp($current_file);
Ah cool :) I really didn't like the way I did that, this is
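A self-contained version of the idiom Jim suggests might look like this (the `read_index` sub and the index-file layout of one file name per line are assumptions for illustration):

```perl
use strict;
use warnings;

# Read one file name per line from an index file, declaring the
# variable, reading the line, and testing it all in the while
# condition -- this avoids clobbering the global $_.
sub read_index {
    my ($path) = @_;
    open my $curridx, '<', $path or die "Cannot open $path: $!";
    my @files;
    while ( my $current_file = <$curridx> ) {
        chomp $current_file;
        push @files, $current_file;
    }
    close $curridx;
    return @files;
}
```

Perl special-cases `while ( my $x = <$fh> )` to test `defined $x`, so a line consisting of just `0` does not end the loop early.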
Jim Gibson writes:
> On Jun 13, 2013, at 1:30 PM, lee wrote:
>
>> In my application, my estimate is that there will be a set of around
>> 100--150 files. Once a file is closed and reported one last time, it
>> doesn't need to be considered anymore, so the number of relevant files
>> is limited.
On Jun 13, 2013, at 10:57 PM, lee wrote:
> Hi,
>
> so I've done this script now --- my fourth Perl script ever --- and
> since I'm getting so much help here, I thought I'd post it here. I'm
> sure it could be done much better; it's just plain and simple.
>
> It has one problem: The list of clo
On 14/06/2013 08:02, Shlomi Fish wrote:
On Thu, 13 Jun 2013 22:51:24 +0200
lee wrote:
How likely is it that the hash stays the same even though the file did change?
Well, if you take SHA-256 for example, then its hash has 256 bits, so you have a
chance of 1 / (2**256) that two non-identical byte vectors will produce the same hash.
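To see this in practice, here is a quick sketch using `Digest::SHA` (a core Perl module): two inputs differing by a single byte yield completely unrelated digests, and a collision would require two distinct inputs mapping to the same 256-bit value, at odds of roughly 1 in 2**256 for random data. The sample strings are made up:

```perl
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

# A one-byte difference in input gives an entirely different digest.
my $a = sha256_hex('file contents');
my $b = sha256_hex('file contents!');
print "digest A: $a\n";
print "digest B: $b\n";
print "equal? ", ( $a eq $b ? 'yes' : 'no' ), "\n";    # prints "equal? no"
```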