I made a HUGE mistake, and used 'UK' as the abbreviation for the
United Kingdom (the ISO 3166 code is 'GB').
I've got a database where 8 tables have an FKEY on a table
'location_country', using the text 'uk' as the value -- so I've got
9 tables that I need to swap data out on.
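A minimal sketch of the swap, assuming location_country keys on a text column named "code" and that each child table's FK column is "country_code" -- the actual table and column names here are made up:

```sql
BEGIN;

-- add the correct ISO code first so the child rows have something to point at
INSERT INTO location_country (code)
    SELECT 'gb'
    WHERE NOT EXISTS (SELECT 1 FROM location_country WHERE code = 'gb');

-- repoint each of the 8 referencing tables
UPDATE child_table_1 SET country_code = 'gb' WHERE country_code = 'uk';
-- ... repeat for the other seven referencing tables ...

-- now the bad row is unreferenced and can go
DELETE FROM location_country WHERE code = 'uk';

COMMIT;
```

Had the FKs been declared ON UPDATE CASCADE, a single `UPDATE location_country SET code = 'gb' WHERE code = 'uk';` would have propagated to all nine tables at once.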
Can anyone see anything in the changelogs since 8.1.0 saying that it
was addressed? I could have missed that -- but it also might not have
been.
So... where can I suggest that the error message should include the
name of the field I tried to put the REFERENCES constraint on?
// Jonathan Vanasco
--
On Oct 12, 2006, at 3:44 PM, A. Kretschmer wrote:
Can you show us your SQL? The message is clear: you create a new table
with a foreign key to another table that doesn't exist. An example:
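The quoted example was truncated in the archive; a minimal reconstruction of the kind of statement meant here, with made-up table and column names:

```sql
-- "no_such_table" does not exist, so the constraint cannot be created:
CREATE TABLE demo (
    id      integer PRIMARY KEY,
    ref_id  integer REFERENCES no_such_table (id)
);
-- ERROR:  relation "no_such_table" does not exist
```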
Yes, I know that part. The error message is bad though, because it
doesn't tell me exactly where the problem is.
On Oct 12, 2006, at 7:39 PM, Tom Lane wrote:
That's not necessarily all that much help, if you've got so many FK
constraints in your command that you don't know exactly where to look.
I think what you're really wishing for is an error cursor position.
8.2 has the infrastructure for this, eg
Indexes:
    "table_a__pkey" PRIMARY KEY, btree (id)
    "table_a__idx__field_1" btree (field_1)
    "table_a__idx__field_2" btree (field_2)
Can anyone offer advice to help me use the indexes on this?
// Jonathan Vanasco
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
On Mar 16, 2007, at 3:48 PM, Jeff Davis wrote:
To clarify a little:
No clarification needed. Joshua Drake's suggestion made perfect sense
and I was able to implement it in 2 seconds.
Works like a charm!
ETA 2 weeks -> 30mins
Thanks to all.
// Jonathan
I've got a DELETE query that takes ~2 seconds to process.
The reasoning is simple -- it's doing referential checks, to make sure
that I'm not deleting a valid fkey, as several tables fkey on that
one.
The issue I'm running into is obvious: this should not take 2
seconds.
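The usual cause of a slow DELETE on a referenced table is that the referencing columns in the child tables have no index, so each FK check is a sequential scan. A sketch of the fix, with hypothetical table and column names:

```sql
-- PostgreSQL automatically indexes the referenced (primary key) side,
-- but NOT the referencing side; those indexes you create yourself:
CREATE INDEX child_one__idx__parent_id ON child_one (parent_id);
CREATE INDEX child_two__idx__parent_id ON child_two (parent_id);

-- after which the referential checks fired by
--   DELETE FROM parent WHERE id = 42;
-- become cheap index lookups instead of sequential scans.
```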
I'm going crazy trying to optimize this select.
The table has ~25 columns; the select is based on 10. There are
approx 5 million records in the table and growing.
No matter how I index + analyze this table, including making an index
on every column involved in the search, pg keeps doing a sequential
scan and never uses an index.
On Mar 20, 2007, at 2:54 PM, Jonathan Vanasco wrote:
No matter how I index + analyze this table, including making an
index of every related column on the search, pg keeps doing a
sequential scan and never includes an index -- which takes
~2 minutes to do. I really need to cut this down
On Mar 20, 2007, at 3:24 PM, Ron Johnson wrote:
How many *distinct* values are there in int_c? What percentage of
them match "int_c <= 10"?
Right now there are 14, and 80% match -- but next week that number
will be 20, 60% ... and so on.
Same questions, but for int_h.
same answer
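At 80% selectivity a sequential scan is genuinely cheaper than an index scan, so the planner is behaving correctly here; an index only starts to win once the predicate selects a small minority of rows. A sketch, using the column names from the thread but a hypothetical table and index name:

```sql
-- shows the actual plan and why the seq scan was chosen:
EXPLAIN ANALYZE SELECT * FROM big_table WHERE int_c <= 10;

-- if the matching rows ever become a small fraction of the table,
-- a partial index can serve just that slice cheaply:
CREATE INDEX big_table__idx__int_c_low
    ON big_table (int_c)
    WHERE int_c <= 10;
```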
I've tried turning fsync off, but no luck there.
If anyone could suggest some improvement tips, I'd be grateful. The
system isn't huge (only ~5M records), but it's taking a long
enough time that any extra speed I can come up with would be a huge
improvement.
... the # of segments didn't affect anything. Maybe
segment size?
// Jonathan Vanasco
| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
I ran ANALYZE, no change.
I ran VACUUM ANALYZE, no change.
I dropped the index and recreated it -- it's now used again.
This has happened 3x already this week.
Can anyone offer a suggestion?
// Jonathan Vanasco
fine.
Heavy use on the index just seems to convince the planner to not use
it -- and no amount of ANALYZE or VACUUM ANALYZE will change that.
I'll post some verbose debugging on the queries and analyze the next
time the index blows.
// Jonathan Vanasco
I'm redoing a SQL schema, and looking for some input.
First I had 2 tables:
Table_A
id
name
a
b
c
Table_B
id
name
x
y
z
id
id_ref_a references tableA
id_comment references Comments
That's perfect, and simple.
The unified table is too dirty :) I've done stuff like that in the
past, and was always upset with it.
// Jonathan Vanasco
On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:
I can't help but think that the way this application writes data is
optimized for MySQL's transactionless table type, where lots of
simultaneous input streams writing at the same time to the same table
would be death.
Can you step back and work
$prepared_statement = $db->prepare("""INSERT INTO x (a,b) VALUES ( :id , :name );""")
for row in update_loop:
    $prepared_statement->execute( row['id'] , row['name'] )
$db->commit()
// Jonathan Vanasco
The only thing worth mentioning is that this sort of archive is a
PITA to handle unless you enter a record in both tables as record 0.
I.e., every new insert puts the full data in both tables. It's possible
to reconstruct the information otherwise, but it's a headache.
// Jonathan Vanasco
... which means I'd have to
redo all the constraints.
// Jonathan Vanasco
I didn't see anything in the docs.
I saw something in the FAQ suggesting this was in a TODO.
Any idea when?
I believe I found a bug:
psql> create table test_failure ( id integer primary key );
psql> alter table test_failure rename id to id_new ; # not
relevant, just illustrating how I got to step 2
psql> alter table test_failure_pkey rename to test_failure_id_new_pkey ;
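For what it's worth, since 8.0 there is also a dedicated ALTER INDEX form for renaming an index, which sidesteps the ALTER TABLE path entirely:

```sql
-- rename the pkey index to match the renamed column:
ALTER INDEX test_failure_pkey RENAME TO test_failure_id_new_pkey;
```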
On Apr 17, 2007, at 2:51 PM, Tom Lane wrote:
Jonathan Vanasco <[EMAIL PROTECTED]> writes:
i believe i found a bug
psql> create table test_failure ( id integer primary key );
psql> alter table test_failure rename id to id_new ; # not
relevant, just illustrating how I got to step 2
... I personally lean towards option A or B. Anyone have suggestions?
Thanks.
// Jonathan Vanasco
definitely go this route. It was my
first thought, but there is almost no documentation out there for
this type of storage.
// Jonathan Vanasco
I need a certain unique constraint in pg that i can't figure out.
Given:
create table test_a (
id serial ,
name_1 varchar(32) ,
name_2 varchar(32)
);
I need name_1 and name_2 to both be unique, so that a value in
name_1 never appears in name_2, and vice versa.
I'd need to
rewrite everything that queries this table -- which is both the core
table for my application and supposed to be free of any multi-table
queries for simple reads.
So I'm going to try the trigger route.
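A minimal sketch of the trigger route, assuming the goal is that no name may appear in either column of any other row; the function and trigger names here are made up:

```sql
CREATE OR REPLACE FUNCTION test_a_names_unique() RETURNS trigger AS $$
BEGIN
    -- reject the row if either of its names already appears in
    -- name_1 or name_2 of some other row
    IF EXISTS (
        SELECT 1 FROM test_a
        WHERE id <> NEW.id
          AND (   name_1 IN (NEW.name_1, NEW.name_2)
               OR name_2 IN (NEW.name_1, NEW.name_2))
    ) THEN
        RAISE EXCEPTION 'name already in use: % / %', NEW.name_1, NEW.name_2;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_a_names_unique_trg
    BEFORE INSERT OR UPDATE ON test_a
    FOR EACH ROW EXECUTE PROCEDURE test_a_names_unique();
```

Note that unlike a real unique index, a check like this can race under concurrent inserts unless you also lock the table or serialize the transactions.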
// Jonathan
On Apr 20, 2007, at 6:13 PM, Jeff Davis wrote:
This is more correct structure, and yes, it would involve a join.
I know that's the 'more correct' way -- but I can't do the join,
which is why I posted about a 2-column unique index.
I tested with a join before posting - I have an already large
... into a
view before, so am at a loss on this being a possibility.
Seeing little documentation on this, I'm thinking it's not possible
and I'll have to use a function.
On Apr 25, 2007, at 2:05 PM, Richard Huxton wrote:
Owen Hartnett wrote:
I want to "freeze" a snapshot of the database every year (think of
end of year tax records). However, I want this frozen version
(and all the previous frozen versions) available to the database
user as read-only. My
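One common approach to this kind of yearly snapshot is a per-year schema, copying each table and granting read-only access. A sketch, with a hypothetical table name ("invoices"), to be repeated per table:

```sql
CREATE SCHEMA archive_2006;

-- freeze a copy of the table as of now
CREATE TABLE archive_2006.invoices AS
    SELECT * FROM public.invoices;

-- make the frozen copy read-only for everyone
REVOKE ALL    ON archive_2006.invoices FROM PUBLIC;
GRANT  SELECT ON archive_2006.invoices TO PUBLIC;
```

Users can then query `archive_2006.invoices` directly, or set `search_path` to browse a given year's snapshot.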