John
If what you are really after is to display any records where a given field is
duplicated, then I would say that a self join is the best bet.
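A self join along these lines would surface the duplicated rows. The table and column names here (entries, id, dup_field) are placeholders for your schema, and the old-style comma join is used since ANSI joins aren't available in older Oracle versions:

```sql
-- Hypothetical names: adjust entries/id/dup_field to your schema.
SELECT a.id, a.dup_field
FROM entries a, entries b
WHERE a.dup_field = b.dup_field
  AND a.id <> b.id;
```

Each row that shares its dup_field value with at least one other row comes back once per matching partner, so you may want a DISTINCT depending on how you display it.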
If you just pull 500 consecutive records out into an array and perform the
comparison in PHP, then what happens if record 500 has a duplicate in record 501?
Yes, try this statement:
select [field_with_duplicate_entries], count(*) as num_entries
from [your_table]
group by [field_with_duplicate_entries]
having count(*) > 1;
So you get all duplicates. Store them in an array.
Then make your 500-record select and check each row's [field_with_duplicate_entries] against that array.
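A rough PHP sketch of that check, assuming a PHP 4 oci8 connection in $conn; the table name, column name, and $batch variable are all placeholders:

```php
<?php
// Sketch only: $conn is an open OCI connection; "your_table" and
// "dup_field" are hypothetical names for your schema.

// 1. Collect every value that occurs more than once.
$stmt = OCIParse($conn,
    "SELECT dup_field FROM your_table GROUP BY dup_field HAVING COUNT(*) > 1");
OCIExecute($stmt);
$dups = array();
while (OCIFetchInto($stmt, $row)) {
    $dups[] = $row[0];
}

// 2. Walk the 500-record batch ($batch: array of associative arrays)
//    and flag any row whose value appears in the duplicates list.
foreach ($batch as $record) {
    if (in_array($record['DUP_FIELD'], $dups)) {
        echo "Possible duplicate: " . $record['DUP_FIELD'] . "\n";
    }
}
?>
```

in_array() does a linear scan, which is fine at these sizes; for a very long duplicates list you could flip $dups into keys and use isset() instead.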
Dump the records into a temp table.
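In Oracle that temp table could be a global temporary table, for example (all names and the bind variables here are placeholders):

```sql
-- Hypothetical names; rows are private to the session and
-- survive commits until the session ends.
CREATE GLOBAL TEMPORARY TABLE entry_check (
    id         NUMBER,
    dup_field  VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

INSERT INTO entry_check (id, dup_field)
SELECT id, dup_field
FROM your_table
WHERE id BETWEEN :first_id AND :last_id;
```

That keeps the 500-record window on the server side, so a dropped PHP connection doesn't take the work with it.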
Make sure you don't design a system that ends up losing data entries somehow.
You could store the info in an array, but what happens when a connection is
broken, etc.?
""Asendorf, John"" <[EMAIL PROTECTED]> wrote in message
>I'm about to create a piece of a project which will be used to check
>for errors in data entry. Every 500 records entered will prompt the user to
>check their work. I have all of the triggers set up already. The database
>is all Oracle, PHP 4.0.4 (1381 build). What I would like to do is
I guess it'd be fastest to collect those 500 rows into an
array, and then do something like this:
* create an array of all the values of the critical column,
* run array_count_values() on it,
* take the keys which have value >1,
* iterate the rows again, and if the critical column value
in any iteration matches one of those keys, report that row as a possible duplicate.
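The steps above can be sketched in PHP like this; $rows and the 'dup_field' column name are made-up placeholders:

```php
<?php
// $rows: the 500 fetched records, each an associative array.
// 'dup_field' stands in for the critical column name.
$values = array();
foreach ($rows as $row) {
    $values[] = $row['dup_field'];
}

// Count how often each value occurs in the batch.
$counts = array_count_values($values);

// Second pass: flag rows whose value occurred more than once.
foreach ($rows as $i => $row) {
    if ($counts[$row['dup_field']] > 1) {
        echo "Row $i may be a duplicate: " . $row['dup_field'] . "\n";
    }
}
?>
```

Note this only catches duplicates within the 500-row batch; duplicates against earlier batches still need one of the SQL approaches above.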