[EMAIL PROTECTED] wrote:
> SELECT EMPNO FROM EMP WHERE EMPNO NOT IN ( SELECT EMPNO
> FROM [EMAIL PROTECTED] )
Or maybe use something like
select E1.EMPNO from EMP as E1 left join [EMAIL PROTECTED] as E2
on E1.EMPNO = E2.EMPNO where E2.EMPNO IS NULL
(untested)
> So search FILE1 for all line entri
Just an idea, don't know whether it's useful...
If you can get both files sorted (either by adding an ORDER BY to the SQL
query that generates the file, or with the command-line 'sort') the problem
becomes much easier.
You'd just have to traverse each file, something like this:
read word_1 from file_1
read word_2 from file_2
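In Perl, that sorted-merge walk could look something like this (a rough,
untested sketch; the file names and the numeric comparison are assumptions,
not anything from the original post):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Walk two sorted files in lockstep and print the IDs that are in
    # file1.txt but missing from file2.txt.
    open my $f1, '<', 'file1.txt' or die "file1.txt: $!";
    open my $f2, '<', 'file2.txt' or die "file2.txt: $!";

    my $line1 = <$f1>;
    my $line2 = <$f2>;
    while (defined $line1 && defined $line2) {
        chomp(my $x = $line1);
        chomp(my $y = $line2);
        if ($x < $y) {              # numeric IDs, numerically sorted
            print "$x\n";           # in file 1 only
            $line1 = <$f1>;
        }
        elsif ($x > $y) {
            $line2 = <$f2>;         # in file 2 only; skip it here
        }
        else {
            $line1 = <$f1>;         # in both files; advance both
            $line2 = <$f2>;
        }
    }
    # Whatever is left in file 1 has no match in file 2.
    while (defined $line1) {
        chomp(my $x = $line1);
        print "$x\n";
        $line1 = <$f1>;
    }

Each file is read exactly once, so memory use stays flat no matter how many
rows are involved.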
[EMAIL PROTECTED] wrote:
>
> I have two database tables, one is local and one is on a WAN. They are
> supposed to be in sync but, at the moment, they are not. There are
> 8 million+ rows in this table.
>
> I tried to do SELECT EMPNO FROM EMP WHERE EMPNO NOT IN ( SELECT EMPNO FROM
> [EMAIL PROTECTED] )
On 08/16/2006 04:35 PM, [EMAIL PROTECTED] wrote:
Hi all,
I have two database tables, one is local and one is on a WAN. They are supposed
to be in sync but, at the moment, they are not. There are 8 million+ rows in
this table.
I tried to do SELECT EMPNO FROM EMP WHERE EMPNO NOT IN ( SELECT EMPNO FROM
[EMAIL PROTECTED] )
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi all,
Hello,
>
> I have two database tables, one is local and one is on a WAN. They are
> supposed to be in sync but, at the moment, they are not. There are
> 8 million+ rows in this table.
>
> I tried to do SELECT EMPNO FROM EMP WHERE EMPNO NOT IN ( SELECT EMPNO FROM
> [EMAIL PROTECTED] )
First, I would ask: how many lines are in each file? Under 100? Above 10,000?
That affects which tactic to choose for getting things done.
Well, I will assume it is reasonable to hold 10,000 names in memory, each name
no longer than 20 characters (about 200KB consumed, still acceptable),
and I wi
>To: "Steve Whittle" <[EMAIL PROTECTED]>,
><[EMAIL PROTECTED]>
>Subject: Re: Comparing two files
>Date: Sat, 9 Jun 2001 07:35:05 -0700
If your system's memory is large enough to hold the smaller dataset,
then as others have said, working with hashes is the way to go:
    read all of small dataset into hash
    while another record in large dataset
        if key for record exists in hash
            delete hash{key}
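In Perl that could look roughly like this (an untested sketch; the file names
are placeholders and records are assumed to be one ID per line):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Load the smaller file into a hash, then stream the larger file
    # and delete every key that matches.
    my %small;
    open my $sm, '<', 'small.txt' or die "small.txt: $!";
    while (my $line = <$sm>) {
        chomp $line;
        $small{$line} = 1;
    }
    close $sm;

    open my $lg, '<', 'large.txt' or die "large.txt: $!";
    while (my $line = <$lg>) {
        chomp $line;
        if (exists $small{$line}) {
            delete $small{$line};          # present in both files
        }
        else {
            print "only in large file: $line\n";
        }
    }
    close $lg;

    # Keys still in %small never appeared in the large file.
    print "only in small file: $_\n" for sort keys %small;

Only the smaller dataset ever sits in memory, which is what makes this
workable with 8 million+ rows.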
Unless I am missing the point of the question, this seems to me like an
Intersection of Arrays problem, which every Perl book I have seen covers
under hashes.
Basically:
    my %seen = ();
    foreach (@array1) {
        $seen{$_} = 1;
    }
    my @intersection = grep { $seen{$_} } @array2;
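For the original question, though, it is the difference rather than the
intersection that matters; with the same %seen hash (and the same hypothetical
arrays) that is just the negated test:

    # IDs in @array2 that have no counterpart in @array1
    my @missing = grep { !$seen{$_} } @array2;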
Reading the files into hashes is definitely the answer. If you need an example
look at section 5.11 of the Perl Cookbook.
-Bob
--- Eduard Grinvald <[EMAIL PROTECTED]> wrote:
> As suggested earlier for a similar problem, dumping everything into a
> single hash as keys and then printing that out
Hi,
one approach is to sort the files first and work with the sorted files - you
then need to read them only once.
A second approach is to load the smaller file into memory - creating a hash
with something like
    while (<FILE1>) { chomp; $found1{$_}++; }
and then read the second file and compare it:
    while (<FILE2>) {
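The message is cut off there, but the second loop would presumably finish
along these lines (my guess at the rest, not the original author's code):

    # Assuming FILE1 and FILE2 were opened on the two exports, e.g.
    # open FILE2, '<', 'file2.txt' or die $!;
    while (<FILE2>) {
        chomp;
        print "$_ is missing from the first file\n" unless $found1{$_};
    }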
As suggested earlier for a similar problem, dumping everything into a
single hash as keys and then printing that out seems simple and
efficient enough.
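If everything does go into one hash, counting occurrences makes the leftovers
easy to find; a minimal sketch, assuming the IDs are unique within each file
and the file names are placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Count how often each ID appears across both files. With unique
    # IDs per file, a count of 1 means the ID is in only one file.
    my %count;
    for my $file ('file1.txt', 'file2.txt') {
        open my $fh, '<', $file or die "$file: $!";
        while (my $line = <$fh>) {
            chomp $line;
            $count{$line}++;
        }
        close $fh;
    }
    print "$_\n" for sort grep { $count{$_} == 1 } keys %count;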
__END__
=sincerely, eduard grinvald
=email: [EMAIL PROTECTED]
=dev-email: [EMAIL PROTECTED]
=dev-site: r-x-linux.sourceforge.net
=icq: 11409