On Fri, 25 Nov 2011, Scott Mead wrote:
Why don't you first load the data into a table (no primary key), then use
SQL to find your dups?
Once loaded:

SELECT <key column>, count(1) FROM <staging table> GROUP BY 1 HAVING
count(1) > 1;
At least then, you'll really know what you're in for. You can either
script a DELETE of the extra copies, or insert only the distinct rows
into the final table.
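A minimal sketch of that approach, using hypothetical names not taken
from the thread (a staging table import_raw, with site_id and
sample_date as the intended key):

-- Keyless staging table: accepts everything, duplicates included.
CREATE TABLE import_raw (
    site_id     integer,
    sample_date date,
    value       numeric
);

-- Load the reformatted spreadsheet export; \copy is psql's
-- client-side variant of COPY.
\copy import_raw FROM 'samples.csv' WITH (FORMAT csv)

-- Any key appearing more than once is a duplicate to resolve.
SELECT site_id, sample_date, count(1)
FROM import_raw
GROUP BY site_id, sample_date
HAVING count(1) > 1;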
On Fri, Nov 25, 2011 at 11:05 AM, Rich Shepard wrote:
> The data originated in a spreadsheet and, based on my experience, contains
> duplicate records. After reformatting there are 143,260 rows to insert in
> the table. The approach I tried seems to have problems (explained below)
> and I would like to learn the proper way to insert rows in either an empty
> table or one that already holds data.
On Fri, 25 Nov 2011, David Johnston wrote:
Simplistically you load all the data into a staging table that has no
natural primary key and then write a query that will result in only a
single record for whatever you define as a primary key. Insert the
results of that query into the final table.
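A sketch of that step with the same hypothetical names (samples being
the final table that carries the real primary key). DISTINCT ON is
PostgreSQL-specific; the ORDER BY decides which of the duplicate rows
survives:

-- Keep exactly one row per (site_id, sample_date) and insert the
-- survivors into the final table.
INSERT INTO samples (site_id, sample_date, value)
SELECT DISTINCT ON (site_id, sample_date)
       site_id, sample_date, value
FROM import_raw
ORDER BY site_id, sample_date, value;

A plain GROUP BY with aggregates works just as well when the duplicates
differ only in columns you can aggregate away.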
On Nov 25, 2011, at 11:05, Rich Shepard wrote:
The data originated in a spreadsheet and, based on my experience, contains
duplicate records. After reformatting there are 143,260 rows to insert in
the table. The approach I tried seems to have problems (explained below)
and I would like to learn the proper way to insert rows in either an empty
table or one that already holds data.
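For the second case, a target table that already holds rows, one common
pattern (again sketched with the hypothetical names from above) is an
anti-join, so staged rows whose key is already present are simply
skipped:

-- Insert only staged rows whose key is absent from the target;
-- DISTINCT ON also guards against duplicates within the staged
-- data itself, which would otherwise collide on the primary key.
INSERT INTO samples (site_id, sample_date, value)
SELECT DISTINCT ON (s.site_id, s.sample_date)
       s.site_id, s.sample_date, s.value
FROM import_raw s
WHERE NOT EXISTS (
    SELECT 1
    FROM samples t
    WHERE t.site_id = s.site_id
      AND t.sample_date = s.sample_date
)
ORDER BY s.site_id, s.sample_date;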