Hi, everybody!
Here is a weird problem I ran into with 7.3.4.
This is the complete test case:
rapidb=# select version ();
                          version
--------------------------------------------------------
 PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GC
Hi, everybody!
I am getting a weird failure, trying to vacuum a table in 7.3 - it says
"ERROR: Index pg_toast_89407_index is not a btree".
Does that ring a bell for anyone? Any ideas what's wrong? Is my database
screwed up? I just created it today...
I tried dropping and recreating it... and it
P.S. I also tried to look at the stats of that other database I
mentioned... The stats for b look similar:
stavalues1 |
{1028104,25100079,50685614,78032989,105221902,135832793,199827486,611968165,807597786,884897604,969971779}
But the stats for a are just *not there at all* (is it even possibl
Tom Lane wrote:
Dmitry Tkach <[EMAIL PROTECTED]> writes:
Also, I have another copy (not an exact copy, but identical schema and
similar content... just about half the size) of the original database...
I tried my query on it, and it works right too.
So, there must be something wrong with that particular database, I suppose...
Any ideas what I should look at?
Thanks a lot!
Dima
Tom Lane wrote:
Dmitry Tkach <[EMAIL PROTECTED]> writes:
The query plan looks identical in both cases:
Limit (cost=0.00..12.51 rows=1 width=8)
-> Ne
Hi, everybody!
Here is a weird problem I ran into...
I have two huge (80 million rows each) tables (a and b), with id as a PK
on both of them and also an FK from b referencing a.
When I try to run a query like:
select * from a, b where a.id >= 7901288 and a.id=b.id limit 1;
The query takes *f
Jonathan Bartlett wrote:
NOTE - after writing all this, I did think of a possible solution, but I'm
not sure if PG can handle it. What if I made a table called "object" with one
column, the object_id, and then had EVERY table inherit from this table?
Then, I could have my constraints set up against th
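For what it's worth, the idea would look roughly like this (an untested
sketch; every table name except "object" is made up) - but note the catch in
the comments at the end:

CREATE TABLE object (
    object_id integer PRIMARY KEY
);

-- every "real" table inherits the object_id column
CREATE TABLE invoice (
    amount numeric
) INHERITS (object);

CREATE TABLE customer (
    name text
) INHERITS (object);

-- the hope: one FK pointing at the parent covers everything
CREATE TABLE audit_log (
    object_id integer REFERENCES object (object_id),
    note      text
);

-- Catch: in postgres, unique constraints and foreign keys do not follow
-- inheritance.  A row inserted into invoice or customer is not seen by the
-- PRIMARY KEY on object, and the REFERENCES above only checks rows stored
-- in object itself, so the FK fails for rows that live in the children.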
Jonathan Bartlett wrote:
In the few instances where I go the other way, it's limited to 2
or 3 tables, and I do separate joins combined with a UNION.
If you can combine your queries with a union, your table layouts must be
very similar if not identical.
Why not put everything into the same tab
Deepa K wrote:
Hi,
Thanks for your prompt reply.
I think I didn't explain the problem
clearly.
Actually, when a client (from an application like Java)
tries to access the server database which is on the network
How could I solve the problem? Does 'RAISE EXCEPTION' solve
the above
Jonathan Bartlett wrote:
Why not just drop the "references" clause? I mean, the point of having
transactions is to guarantee integrity within a transaction; if you're not
going to have that, why even bother with the clause?
Quite the opposite - the point is to guarantee the integrity *outside*
th
kay-uwe.genz wrote:
Hi @ all,
I have a little problem with two tables and FOREIGN KEYs. I read about
this a long time ago, but can't remember where. Well, I hope you can
help me.
I've created two TABLEs, "countries" and "cities". "countries" has a column
"capital" that REFERENCEs "cities". "citie
The first query is able to use the index on nr_proponente, because the
condition involves that column directly; the second query is not,
because the index only contains the values of nr_proponente, not the results
of trunc(...).
Try replacing that condition with something like
pa.nr_proponente B
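For example, if the filter were something like trunc(pa.nr_proponente/1000) = 42
(the numbers are made up), an equivalent range condition that the planner can
match to the plain index on nr_proponente would be:

-- "pa" stands for whatever the real table/alias is
SELECT count(*)
FROM   pa
WHERE  pa.nr_proponente >= 42000
AND    pa.nr_proponente <  43000;  -- same rows, assuming positive values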
Greg Stark wrote:
So I have to adjust a primary key by adding one to every existing record.
Obviously this isn't a routine operation, my data model isn't that messed up.
It's a one-time manual operation.
However when I tried to do the equivalent of:
update tab set pk = pk + 1
I got
ERROR: C
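Assuming that's the usual unique-index complaint (the constraint gets checked
row by row as the UPDATE runs), one common trick is to move the keys out of
the way in two steps inside one transaction, so no intermediate value
collides. A sketch, assuming pk is an integer and 1000000000 is safely above
the current maximum:

BEGIN;
-- first push every key far above the existing range
UPDATE tab SET pk = pk + 1000000000;
-- then bring them back down to the final values (net effect: pk = pk + 1)
UPDATE tab SET pk = pk - 999999999;
COMMIT;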
Curtis Hawthorne wrote:
Hi,
I'm setting up a table for a new project and have a question about choosing a
data type for one of the columns. It will be for a username that is retrieved
from an LDAP server. I know that I'll want to use either varchar or text.
The problem with using varchar is I
After looking at the docs on the
character datatypes I noticed that if you don't specify a limit on the varchar
type it will accept strings of any length. If that's the case, what's the
difference between it and text?
Actually, I'd like to know this too :-)
I think that there is no difference
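As far as I know, varchar without a length modifier behaves exactly like
text. A quick way to convince yourself (the table name is made up):

CREATE TABLE t_demo (a varchar, b text);

INSERT INTO t_demo VALUES (repeat('x', 100000), repeat('x', 100000));

SELECT length(a), length(b) FROM t_demo;   -- both come back as 100000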
Tom Lane wrote:
Dmitry Tkach <[EMAIL PROTECTED]> writes:
Well ... *today* there seem to be files between and 00EC
Is that range supposed to stay the same or does it vary?
It will vary, but not quickly --- each file represents 1 million
transactions.
If the problem is errati
Tom Lane wrote:
Proves nothing, since ANALYZE only touches a random sample of the rows.
Ok, I understand... Thanks.
If you get that behavior with VACUUM, or a full-table SELECT (say,
"SELECT count(*) FROM foo"), then it'd be interesting.
I never got it with select - only with vacuum and/or
The short answer is - there is no way you can do it.
Different connections in postgres (and in every other DB engine I've heard
of) can never share the same transaction.
As far as I can see, the only way to do what you want is to rethink your
architecture so that the clients never talk directly to
Sean Chittenden wrote:
store 10mil+ syslog messages this might not be the right tool. I'm
just mentioning it because perhaps the way rrd keeps track
of wrap-around might be a good way to implement this in postgres.
Hmm. Using the cycling feature of a sequence, couldn't you create a
Ouch, this means that for every insert we would have to trigger a
procedure which will:
COUNT
IF > Limit
DELETE OLDEST
This would be pretty damn resource intensive on a table with millions
of records, wouldn't it?
You can keep the count in a table on the side, and have it updated by
th
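Something along these lines (an untested sketch; it assumes the log table is
called "messages" and that plpgsql has been installed with createlang):

CREATE TABLE messages_count (n bigint NOT NULL);
INSERT INTO messages_count VALUES (0);

CREATE FUNCTION messages_count_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE messages_count SET n = n + 1;
    ELSE
        UPDATE messages_count SET n = n - 1;
    END IF;
    RETURN NULL;   -- AFTER trigger, return value is ignored
END;
' LANGUAGE plpgsql;

CREATE TRIGGER messages_count_trig
    AFTER INSERT OR DELETE ON messages
    FOR EACH ROW EXECUTE PROCEDURE messages_count_trig();

-- then the "count" is just:
SELECT n FROM messages_count;

-- note: concurrent inserts will serialize on that single counter row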
If you make an opclass that orders in the reverse order you can use that
opclass in creating the index (which effectively can give you an index
like x, y desc by using the new opclass on y). There was some talk
recently about whether we should provide such opclasses as builtins or
contrib items.
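Something like this might do it for int4 (an untested sketch loosely
following the operator-class examples in the docs; the function and opclass
names are made up):

-- comparison function with the sense reversed
CREATE FUNCTION int4_desc_cmp(int4, int4) RETURNS int4 AS
    'SELECT btint4cmp($2, $1)' LANGUAGE sql IMMUTABLE STRICT;

-- a btree opclass whose "less than" is really ">"
CREATE OPERATOR CLASS int4_desc_ops
    FOR TYPE int4 USING btree AS
        OPERATOR 1 > ,
        OPERATOR 2 >= ,
        OPERATOR 3 = ,
        OPERATOR 4 <= ,
        OPERATOR 5 < ,
        FUNCTION 1 int4_desc_cmp(int4, int4);

-- effectively an index on (x, y DESC)
CREATE INDEX huge_table_x_ydesc ON huge_table (x, y int4_desc_ops);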
I am not sure if this is really a bug, but it certainly looks like one
to me...
I have a table that looks something like this:
create table huge_table
(
   x int,
   y int
);
create index huge_table_idx on huge_table (x,y);
It contains about 80 million rows...
I am trying to get those rows that
You've got your URL wrong - it should be "://" after postgresql instead
of "@".
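i.e. something of this general form (host, port and database name here are
placeholders):

jdbc:postgresql://localhost:5432/mydb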
I hope it helps...
Dima
Kallol Nandi wrote:
Hi,
This is the code that I am using for the native JDBC driver to connect to
PostgreSQL on Linux.
BTW the version of Postgres is 7.2.2 and the jar file is jdbc7.1-1.2.jar.