create table entries (id bigint primary key, somedata varchar(500));
/* insert ~12mil rows of data, somedata mostly empty, e.g.: */
insert into entries (id, somedata) select g, null from generate_series(1, 12000000) g;
create table stuff (id bigint, bah int, primary key (id, bah));
insert into stuff (id, bah) select id, 1 from entries;
create index blah on stuff(id);
vacuum full analyze;
This is just an example isolating the problem.
Actual queries contain more tables and more joins, and return a
reasonable amount of data.
Performance on the big indices, however, is appalling, with the planner
always reverting to a seqscan under default settings.
I tried to pre-filter the data as much as possible.
The version is 8.1.
The query I originally ran returned ~4-5 rows and had a lot of other
joins and filtering conditions prior to the join with the big table.
Is there any way to instruct postgres to do the joins in a specific
order or something?
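(For later readers: there is a knob for this. With explicit JOIN syntax,
setting join_collapse_limit to 1 stops the planner from reordering joins,
so they run in the order written. A minimal sketch against the example
tables above; the somedata filter is just a made-up stand-in for the real
pre-filtering conditions:)

set join_collapse_limit = 1;  -- planner now follows the textual join order

select s.id, s.bah
from entries e
join stuff s on s.id = e.id
where e.somedata is not null;  -- hypothetical pre-filter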
This is my first (and, by the love of God, last) project w/pgsql
and everything but the simplest selects is so slow I want to cry.
This is especially bad with vacuum analyze - it takes several hours
for a database of a mere 15 GB on a fast dual-core server w/2GB of RAM
and virtually no workload.
Err... work_mem is 256MB of course, and 5M for the explain analyze costs.
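(One likely culprit for the slow vacuum: VACUUM is governed by
maintenance_work_mem, not work_mem, so a large work_mem alone doesn't
help it. A hedged sketch - the value is only an example:)

set maintenance_work_mem = 262144;  -- in KB here; 8.2+ also accepts '256MB'
-- more memory for the dead-tuple list means fewer index-cleaning passes
vacuum analyze;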
Hi. Sorry for being a bit emotional, I was pretty constructive in my
earlier posts (the earlier, the more constructive, if you care to
search) but I am progressively getting pissed off :(
Thanks for the initial tip, running ANALYZE w/o VACUUM is faster. Are
frequent vacuums even necessary if there is virtually no write workload?
The server is dual Xeon with 4GB RAM and 10k RPM RAID 1.
There is no workload, we are running a test conversion, hence autovacuum
is off. I tried with it on too, to no avail. Pg version is now 8.2.
Here's my postgresql.conf file, unabridged.
hba_file = '/etc/postgresql/8.2/main/pg_hba.conf' # host-based authentication file
Hi.
Is there any way to truncate the WAL log in postgres?
We want to use a full-backup strategy where we stop the server and copy
the data directory, however the WAL log is taking dozens of gigabytes of
space.
Is there any way to remove it while keeping the database operational/
restorable by copying it back?
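(For reference: the files in pg_xlog should never be deleted by hand, but
their steady-state count is bounded by checkpoint_segments, so the
supported way to shrink the directory is to lower that setting and let
checkpoints recycle the segments. A sketch, assuming the standard 16MB
segments of 8.x:)

-- in postgresql.conf (reload required): 8.x keeps roughly
--   (2 * checkpoint_segments + 1) segments of 16MB each,
--   so checkpoint_segments = 3 caps pg_xlog at ~112MB
-- then force recycling to kick in sooner:
checkpoint;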
Hi. We have archived and removed the majority of data from a database;
the main impact was on 4 tables, which lost several million rows (3
tables) and several dozen million rows (one table).
Naturally we decided to execute VACUUM FULL on the database to reclaim
all the space; it has kept running for 22 hours now.
Ok, here's the update: after ~30 hours we killed the VACUUM FULL and
did a plain VACUUM on the tables we freed.
However, VACUUM hasn't freed any space at all 0_o
We want to launch VACUUM FULL on a per-table basis, but we can't have any
more downtime right now, so we will launch it tonight.
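(Side note: that behaviour is expected - plain VACUUM only marks dead
space as reusable within the files, and returns disk space to the OS only
for completely empty pages at the very end of a table. Actually compacting
takes VACUUM FULL or, often faster on badly bloated tables, a CLUSTER
rewrite. A per-table sketch; table and index names are hypothetical:)

-- compacts in place; takes an exclusive lock on the table
vacuum full verbose big_table;

-- alternative: rewrite the table in index order (also exclusive lock)
cluster big_table_pkey on big_table;  -- 8.2-era syntax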
The original
On Aug 7, 9:57 pm, [EMAIL PROTECTED] ("Simon Riggs") wrote:
> On Sun, 2007-08-05 at 03:45 -0700, Sergei Shelukhin wrote:
> > Is there any way to truncate WAL log in postgres?
> > We want to use full-backup strategy where we stop the server and copy
> > the data directory
> If not, dump and restore the table.
Unfortunately we do not have adequate disk space; we wanted to reduce
the database size in order to back it up, because there is no more space
for backups either 0_o
Is there any way to prevent
Dump & restore - you mean pg_dump?
Hi. I have a query that does some batch deletes and inserts into a
table. It runs for 2-3 minutes when the application that uses the
database is in production with users doing stuff.
Recently I have noticed that this query is a frequent victim of
deadlocks.
Is there any way to set a deadlock priority?
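(There is no deadlock-priority knob in postgres - the victim is simply
whichever session's deadlock check fires first - but a batch job and OLTP
traffic usually stop deadlocking if the batch takes its row locks in one
deterministic pass up front. A sketch; table and column names are made up:)

begin;
-- lock every row the batch will touch, in a fixed order; two sessions
-- doing this can never end up holding locks in opposite orders
select id from batch_target
where needs_work          -- hypothetical filter
order by id
for update;
-- ... then do the batch deletes and inserts against the locked rows ...
commit;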
Alvaro Herrera wrote:
Sergei Shelukhin wrote:
Hi. I have a query that does some batch deletes and inserts into a
table. It runs for 2-3 minutes when the application that uses the
database is in production with users doing stuff.
Recently I have noticed that this query is a frequent victim of deadlocks.
Hi.
I was wondering if I could do something similar to this in Postgres,
and if so, how?
UPDATE table1 SET blah = 1 FROM table1
INNER JOIN table2 ON table1.id = table2.t1id
If not, is there any way to make UPDATE ... WHERE id IN () use indexes?
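(For the record: yes, via postgres's own UPDATE ... FROM form - the joined
table goes in FROM, the join condition in WHERE, and the target table is
not repeated. And UPDATE ... WHERE id IN (...) will happily use an index
on id when statistics are current:)

update table1
set blah = 1
from table2
where table1.id = table2.t1id;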
Rodrigo E. De León Plicet wrote:
On Jan 12, 2008 5:22 PM, Sergei Shelukhin <[EMAIL PROTECTED]> wrote:
Hi.
I was wondering if I could do something similar to this in Postgres and
if yes how?
UPDATE table1 SET blah = 1 FROM table1
INNER JOIN table2 ON table1.id = table2.t1id
1) Do I understand it correctly that cross-database queries are not
possible? Two dbs on the same server are used in this case.
2) What's the difference between unlimited-length varchar and text
columns? I am mostly interested in performance, namely = 'blah' and
like '%blah%' queries over these columns.
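(Two data points: a connection is bound to a single database, so plain
cross-database queries indeed aren't possible, though the contrib dblink
module can bridge two dbs on the same server; and varchar without a
length modifier is the same type as text internally, so = and like
perform identically on both. A dblink sketch, assuming the contrib module
is installed - connection string and query are made up:)

select *
from dblink('dbname=otherdb', 'select id, name from users')
     as remote(id int, name text);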
Thanks :)
As for pt3, I don't have the database I will use yet so it's hard to
test :(
One more newbie question.
Due to limitations of both pg and php, I gave up on the attempts to
make bigint keys work; I have a choice between 16-byte character
strings or "8-byte" bytea (decode(string_from_above, 'hex')) for an
index column for a table.
Which one will be faster for exact matches and
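(Both work with a plain btree for exact matches, but the bytea key is
half the size, so the index is smaller and fewer pages get touched per
lookup. An untested sketch of the bytea variant; names are hypothetical:)

create table things (
    key bytea primary key,   -- btree handles bytea equality fine
    payload text
);

insert into things (key, payload)
values (decode('00112233445566ff', 'hex'), 'example row');

-- exact match goes through the primary key index
select payload from things where key = decode('00112233445566ff', 'hex');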
Suppose there's a table with two text fields and a lot of int fields.
The table will be huge, updated constantly, and will be searched by
all of the int fields, whereas the text fields will be for display only.
Does it make sense to split it vertically into two tables, one with
the ints and one with the texts?
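(A sketch of the split, with the two tables sharing the primary key. The
reasoning: every postgres update writes a whole new row version, so
keeping the constantly-updated ints in a narrow table means smaller rows,
more of them per page, and less write amplification; the texts get joined
in only for display. Names are hypothetical:)

-- hot, narrow table: updated constantly, searched by the int columns
create table item_stats (
    id    bigint primary key,
    score int,
    views int
    -- ... more int fields ...
);

-- cold, wide table: fetched only for display, joined by id
create table item_texts (
    id    bigint primary key references item_stats(id),
    title text,
    body  text
);

(Caveat: big text values are TOASTed out of line anyway, so the gain
depends on how large the texts actually are.)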
On Apr 15, 12:13 am, "Sergei Shelukhin" <[EMAIL PROTECTED]> wrote:
> Suppose there's a table with two text fields and a lot of int fields.
> The table will be huge, updated constantly, and will be searched by
> all of the int fields, whereas the text fields will be for display only.