Thanks Peter - good point
> -Original Message-
> From: Peter Scott [mailto:[EMAIL PROTECTED]
> Sent: Sunday, February 23, 2003 5:17 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Out of memory while finding duplicate rows
>
>
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Beau E. Cox) writes:
>Hi -
>
>Wait! If you are going to load the data into a database anyway,
>why not use the existing database (or the one being created) to
>remove the duplicates? You don't even have to have an index on the
>column you are making unique.
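
For instance, a rough, untested sketch of that idea with DBI (the
DSN, table, and column names here are all made up; it assumes
something like MySQL):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Load every row into a staging table as-is, then let the database
# drop the duplicates on the way into the real table.
my $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'password',   # DSN made up
                        { RaiseError => 1 } );

open my $in, '<', 'rows.txt' or die "Cannot open rows.txt: $!";
my $ins = $dbh->prepare('INSERT INTO staging (row_text) VALUES (?)');
while ( my $line = <$in> ) {
    chomp $line;
    $ins->execute($line);
}
close $in;

# SELECT DISTINCT does the duplicate removal; no index is required.
$dbh->do('INSERT INTO clean_rows (row_text)
          SELECT DISTINCT row_text FROM staging');
$dbh->disconnect;

The database then does all of the duplicate removal, so the Perl side
never has to hold more than one row in memory.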
Hi,
That data does finally have to be loaded into a database,
but before loading it we need to do some validations,
like removing duplicates. That is why I am doing it this way.

I have another idea: first sort the file.
Once the file is sorted, the duplicates end up on
adjacent lines, so they are easy to find.
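
Something like this, say, as a rough, untested sketch (the file
names are made up, and it assumes the duplicates are exact
whole-line repeats in an already-sorted file):

#!/usr/bin/perl
use strict;
use warnings;

# Assumes the file was already sorted, e.g. with the system sort:
#   sort rows.txt > rows.sorted
# Duplicates then sit on adjacent lines, so only the previous line
# ever has to be held in memory.
my $prev;

open my $in, '<', 'rows.sorted' or die "Cannot open rows.sorted: $!";
while ( my $line = <$in> ) {
    # Keep a row only when it differs from the row just before it.
    print $line unless defined $prev && $line eq $prev;
    $prev = $line;
}
close $in;

Memory use stays constant however big the file is, because the
external sort does the heavy lifting.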
Hi -
> -Original Message-
> From: Madhu Reddy [mailto:[EMAIL PROTECTED]
> Sent: Saturday, February 22, 2003 11:12 AM
> To: [EMAIL PROTECTED]
> Subject: Out of memory while finding duplicate rows
>
>
> Hi,
> I have a script that will find the duplicate rows
> in a file... In the file I hav
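
The usual one-pass version of such a script keeps every row it has
seen in a hash, and on a big file that hash is exactly what exhausts
memory. A sketch of the pattern (not the original script; the file
name is made up):

#!/usr/bin/perl
use strict;
use warnings;

# Classic in-memory duplicate finder: remember every distinct row.
# %seen ends up holding one key per distinct row, so on a file with
# millions of rows it can exhaust memory.
my %seen;

open my $in, '<', 'rows.txt' or die "Cannot open rows.txt: $!";
while ( my $line = <$in> ) {
    print $line unless $seen{$line}++;   # print first occurrences only
}
close $in;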