Hi,
That data eventually has to be loaded into a database, and before loading it
we need to do some validation, like removing duplicates. That is why I am
doing this.
I have another idea: what I am planning is to first sort the file. Once the
file is sorted, duplicates end up on adjacent lines, so they are easy to find
(see the sketch below).
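
A minimal sketch of that sort-then-scan approach, assuming one record per
line and an external sort(1) on the system; the filenames are made up for
illustration:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical filenames, for illustration only.
my $infile = 'records.txt';
my $sorted = 'records.sorted';

# Let the external sort do the heavy lifting: it spills to temporary
# files on disk, so it never holds all 13 million rows in memory.
system('sort', '-o', $sorted, $infile) == 0
    or die "sort failed: $?";

# Once the file is sorted, duplicate rows sit next to each other,
# so one pass with a single line of lookback is enough to count them.
open my $fh, '<', $sorted or die "Cannot open $sorted: $!";
my ($prev, $dup) = (undef, 0);
while (my $line = <$fh>) {
    $dup++ if defined $prev && $line eq $prev;
    $prev = $line;
}
close $fh;
print "Found $dup duplicate rows\n";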
Hi -
> -----Original Message-----
> From: Madhu Reddy [mailto:[EMAIL PROTECTED]
> Sent: Saturday, February 22, 2003 11:12 AM
> To: [EMAIL PROTECTED]
> Subject: Out of memory while finding duplicate rows
>
>
> Hi,
> I have a script that finds duplicate rows
> in a file. In the file I hav
Hi,
I have a script that finds duplicate rows
in a file. The file has 13 million records,
and no more than 5% of them are duplicates.
To find the duplicates I am using the following function:
while (<>) {                  # read the file a line at a time
    if (find_duplicates()) {  # returns 1 if the current row is a duplicate
        $dup++;
    }
}
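
The find_duplicates() code isn't shown, but if it remembers every row it has
seen in a hash, 13 million full-length keys can easily exhaust memory. One
way to shrink the working set, sketched here under that assumption, is to key
the hash on a fixed-size digest of each row instead of the row text itself;
Digest::MD5 ships with Perl.

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Key the %seen hash on the 16-byte binary MD5 of each row rather
# than the row itself, so memory per entry stays constant no matter
# how long the records are. (An MD5 collision would miscount a row
# as a duplicate; it is astronomically unlikely, and the sort-based
# approach avoids the issue entirely.)
my %seen;
my $dup = 0;
while (my $line = <>) {
    $dup++ if $seen{ md5($line) }++;
}
print "Found $dup duplicate rows\n";

Even then, 13 million hash entries carry a lot of Perl overhead on top of the
16-byte keys, so on a memory-tight machine the external-sort approach above
is still the safer bet.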
Desmond Coughlan wrote:
> On Wed, Feb 12, 2003 at 02:14:53AM +, Desmond Coughlan wrote ...
>
> > I post statistics to a newsgroup called news:alt.activism.death-penalty
> > once per week, using the 'ngstatistics' script by H. Alex LaHurreau and
> > Davide G. M. Salvetti. You can see the scr
On Fri, Feb 21, 2003 at 12:38:29AM +, Desmond Coughlan wrote:
> OK ... no one wants to answer me.
>
> *shrug*
>
> D.
It's often not a matter of people not wanting to answer you so much as
not being able to. I personally have no experience in the area of your
question. Answers on mailing lists