contrib/intarray has an index type which could be what you need.
I was wondering about index types. Oracle has an index type called a
'bitmap' index. They describe this as an index for low cardinality
fields, where only the cardinal values are indexed in a b-tree, and
then it uses a bitmap b
There's a great deal about this in the list archives (probably more in
pgsql-hackers than in -performance). Most of the current interest has
to do with building in-memory bitmaps on the fly, as a way of decoupling
index and heap scan processing. Which is not quite what you're talking
about but sh
select s.*, ss.*
from shipment s, shipment_status ss, release_code r
where s.current_status_id = ss.id
and ss.release_code_id = r.id
and r.filtered_column = '5'
order by ss.date desc
limit 100;
Release code is just a very small table of 8 rows by looking at the
production data, hence the
So are you suggesting as a general rule then that sub-queries are the
way to force a specific join order in postgres? If that is the case, I
will do this from now on.
I'll try to explain a bit better...
Here's your original query :
select s.*, ss.*
from shipment s, shipment_status
SELECT * FROM shipment_status WHERE release_code_id = constant ORDER BY
release_code_id DESC, date DESC LIMIT 100;
I have done this in other queries where sorting by both release code and
date were important. You are right, it is very fast and I do have this
index in play. However, most of t
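For reference, a minimal sketch of that index-driven rewrite (the table and column names come from the query above; the index name and the literal release code are placeholders):

CREATE INDEX shipment_status_release_date_idx
    ON shipment_status (release_code_id, date);

-- A backward scan of this index satisfies the ORDER BY ... DESC LIMIT directly:
SELECT * FROM shipment_status
WHERE release_code_id = 5
ORDER BY release_code_id DESC, date DESC
LIMIT 100;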
Do I understand correctly, however, that when you
create a unique SERIAL column an index is
automatically created on that column? If so, does
that sound like a possible culprit? We are not doing
any reindexing on that index at all. Could it be
suffering from index bloat? Do we need to
periodic
As I read the docs, a temp table doesn't solve our problem, as it does
not persist between sessions. With a web page there is no guarantee
that you will receive the same connection between requests, so a temp
table doesn't solve the problem. It looks like you either have to
create a real table (
clause will be a cheap query - and use it to test if
a table is empty, for instance. (because for
Oracle/Sybase/SQL Server, count(*) is cheap).
To test if a table is empty, use a SELECT EXISTS or whatever SELECT with
a LIMIT 1...
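A minimal sketch of both forms, assuming a hypothetical table name:

SELECT EXISTS (SELECT 1 FROM some_table);   -- returns true/false, stops at the first row found
SELECT 1 FROM some_table LIMIT 1;           -- returns 0 or 1 rows, also stops immediately

Either avoids the full scan that count(*) performs in PostgreSQL.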
Do you have anything performing any updates or inserts to this table,
even if it does not update the gist column, even if it does not update
anything ?
Does the planner also take into account that the Hash Join will need a
huge temporary space which will exist for the whole lifetime of the cursor
(which may be quite long if the client intends to fetch everything),
whereas the Merge Join should need very little space as it is sending the
r
As a side note, I learned something very interesting for our developers
here.
We had been doing a drop database and then a reload off a db dump from
our
live server for test data. This takes 8-15 minutes depending on the
server
(the one above takes about 8 minutes). I learned through testi
2. Moving to more materialized views and prepared statements where we
can.
Definitely worth investigating. I wish I could, but I can't get my
customers to even consider slightly out of date stats :(
Put a button 'Stats updated every hour', which gives the results in 0.1
seconds, and a but
For example, I am a developer of Mambo, a PHP-based CMS application,
and am porting the mysql functions to ADOdb so I can use grown-up
databases ;-)
Just yesterday I "optimized" a query for a website running MySQL. It's
the 'new products' type query :
SELECT product_id, pd.product_name, p.pric
In terms of performance, Oracle is to Postgres as Postgres is to Mysql:
More
complexity, more overhead, more layers of abstraction, but in the long
run it
pays off when you need it. (Only without the user-friendliness of either
open-source option.)
I don't find postgres complex... I find
normally you shouldn't have to do anything, it should just work :
select field from table where field like 'abc%'
CREATE INDEX ... ON table( field );
that's all
If it does not use the index, I saw on the mailing list that the locale
could be an issue.
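If the locale turns out to be the problem (anything other than the C locale), the usual workaround is an index declared with the pattern operator class; a sketch with placeholder names:

CREATE INDEX my_table_field_pattern_idx ON my_table (field text_pattern_ops);

With that index, WHERE field LIKE 'abc%' can use an index scan even under a non-C locale.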
I don't know if this would work, but if you just want to restructure your
rows, your could do this:
UPDATE table SET id = id WHERE id BETWEEN 0 AND 20000;
VACUUM table;
UPDATE table SET id = id WHERE id BETWEEN 20001 AND 40000;
VACUUM table;
wash, rinse, repeat...
In the 'wishful hand waving' department :
read index -> determine (tuple id,page) to hit in table -> for each of
these, tell the OS 'I'm gonna need these' via a NON BLOCKING call. Non
blocking because you feed the information to the OS as you read the index,
streaming it.
Meanwhile, th
(For those not knowing - it's ReadFile/WriteFile where you pass an array
of "this many bytes to this address" as parameters)
Isn't that like the BSD writev()/readv() that Linux supports also? Is
that something we should be using on Unix if it is supported by the OS?
Nope, readv()/writev() read/w
From the Linux Kernel (make menuconfig) there seem to be two new reliable
sources for timing information. Note the remark about "Time Stamp Counter"
below. Question is, which one of these (or others) are your API functions
using ? I have absolutely no idea !
CONFIG_HPET_TIMER:
My web app does lots of inserts that aren't read until a session is
complete. The plan is to put the heavy insert session onto a ramdisk
based pg-db and transfer the relevant data to the master pg-db upon
session completion. Currently running 7.4.6.
From what you say I'd think you want to a
Look for the possibility that a foreign key check might not be using an
index. This would yield a seq scan for each insertion, which might be your
problem.
On Fri, 11 Mar 2005 19:22:56 -0500, Arshavir Grigorian <[EMAIL PROTECTED]>
wrote:
Many thanks for all the response.
I guess there are
Instead of a varchar(1) containing 'y' or 'n' you could use a BOOL or an
integer.
Your query seems of the form :
SELECT FROM main_table LEFT JOIN a lot of tables ORDER BY sort_key LIMIT
N OFFSET M;
I would suggest rewriting it in a simpler way: instead of generating
the whole result s
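A sketch of that kind of rewrite, using placeholder names matching the shape above: paginate the main table first, then join only those rows.

SELECT m.*, o.*
FROM (
    SELECT * FROM main_table
    ORDER BY sort_key
    LIMIT 50 OFFSET 100        -- placeholder values for N and M
) AS m
LEFT JOIN other_table o ON o.main_id = m.id
ORDER BY m.sort_key;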
I have asked him for the data and played with his queries, and obtained
massive speedups with the following queries :
http://boutiquenumerique.com/pf/miroslav/query.sql
http://boutiquenumerique.com/pf/miroslav/query2.sql
http://boutiquenumerique.com/pf/miroslav/materialize.sql
Note that my opt
On my machine (Laptop with Pentium-M 1.6 GHz and 512MB DDR333) I get the
following timings :
Big Joins Query with all the fields and no order by (I just put a SELECT
* in the first table) yielding about 6k rows :
=> 12136.338 ms
Replacing the SELECT * from the table with many fields by ju
Try :
SELECT owner from pictures group by owner;
Any ideas, apart from more or less manually maintaining a list of
distinct owners in another table ?
That would be a good idea too for normalizing your database.
This is really great !
Think about altering the partitioning (this is quite complex) : imagine a
table split in several partitions "archive" and "current" where a row is
moved from current to archive when it will not be updated anymore.
Sometimes you can partition on a simple numeric val
tableoid would accomplish that already, assuming that the "partitioned
table" is effectively a view on separate physical tables.
regards, tom lane
Very good.
Also note the possibility to mark a partition READ ONLY. Or even a table.
It does not seem very useful but just think that for inst
It would also be reasonable to allow clustering individual partitions;
creating table or column constraints on some partitions and not others;
I have a session management system which works like that, using views now.
sessions.online is a table of the online sessions. It has a UNIQUE on
user_id.
Can I use an index on a varchar column to optimize the SELECT queries
that
use " column LIKE 'header%' "?
Yes
If yes, what is the best tree algorithm to use?
Btree
Note that if you want case insensitive matching you need to make an index
on lower(column) and SELECT WHERE lower(column
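A sketch of the case-insensitive variant with placeholder names (text_pattern_ops is only needed under a non-C locale):

CREATE INDEX my_table_col_lower_idx ON my_table (lower(col) text_pattern_ops);

SELECT * FROM my_table WHERE lower(col) LIKE 'header%';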
I am using postgresql 8.1.8, with the following configuration:
shared_buffers = 5000
work_mem = 65536
maintenance_work_mem = 65536
effective_cache_size = 16000
random_page_cost = 0.1
The CPU wait percentage goes up to 50%, and the query result
Hello,
I am using Postgres with PHP and persistent connections.
For simple queries, parsing & preparing time is often longer than actual
query execution time...
I would like to execute a bunch of PREPARE statements to prepare my most
often used small queries on connection
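A minimal sketch of that idea with a made-up query; prepared statements live per connection, which is exactly why they pair well with persistent connections:

PREPARE get_user (integer) AS
    SELECT * FROM users WHERE user_id = $1;

EXECUTE get_user(42);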
create index foo1 on bla (a);
create index foo2 on bla (b);
create index foo3 on bla (a,b);
You say you need faster INSERT performance. Getting rid of some indexes
is a way, but can you tell a bit more about your hardware setup ?
For instance, if you only have one HDD, put an extra HDD in
is this 'normal'? (loaded question I know)
Depends. If you are on the edge, disk-wise, yes a big fat query can push
it over and make it fall.
Should I be looking to offload expensive reporting queries to read-only
replicants of my database?
You could do this, especially if the heavy q
Hm, so this table has 10 million entries and it does not fit in 32GB of
RAM ?
Could you investigate :
- average size of rows in both tables
- a quick description of your table columns especially the average size of
your TEXT fields, especially the large one(s) like comments etc (don't
bother
Also, sometimes offline TOASTing is evil:
Say you have a forum, you want the posts table to be CLUSTER'ed on
(topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek,
not 30 seeks. But CLUSTER doesn't touch the data that has been pushed
offline in the toast table. So, in tha
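For reference, a sketch of the clustering step being described (the index name is invented; posts, topic_id and post_id come from the example):

CREATE INDEX posts_topic_post_idx ON posts (topic_id, post_id);
CLUSTER posts USING posts_topic_post_idx;   -- 8.3 syntax; older releases: CLUSTER posts_topic_post_idx ON posts;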
Why do you claim that 'More platters also means slower seeks
and generally slower performance.'?
More platters -> more heads -> heavier head assembly -> slower seek time
But..
More platters -> higher density -> less seek distance (in mm of head
movement) -> faster seek time
On Fri, 4 Apr 2008, Ow Mun Heng wrote:
select * from table
where A=X
and B = Y
and C = Z
and D = AA
and E = BB
With that kind of WHERE condition, Postgres will use a Bitmap Index Scan
to combine your indices. If, however, postgres notices while looking at
the statistics gathered during A
When traffic to our PostgreSQL-backed website spikes, the first resource
we see being exhausted is the DB slots on the master server (currently
set to about 400).
I expect that as new Apache/mod_perl children are being put to use, they
are creating new database connections.
I'm interested in re
I have a performance problem with a script that does massive bulk
inserts into 6 tables. When the script starts, the performance is really
good, but it degrades minute after minute and takes almost a day to
finish!
Looks like foreign key checks slow you down.
- Batch INSERTS in trans
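A minimal sketch of the batching suggestion (table and column names are placeholders); committing once per batch instead of once per row removes most of the per-insert overhead:

BEGIN;
INSERT INTO detail (order_id, product_id, qty) VALUES (1, 10, 2);
INSERT INTO detail (order_id, product_id, qty) VALUES (1, 11, 1);
-- ... a few hundred more INSERTs per transaction ...
COMMIT;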
Well, this table has a primary key index on the first column, called acctid,
which is an integer; the calldate column, on the other hand, is a TIMESTAMPTZ,
and in fact I'm using (calldate)::date in the ON clause because the
time part of that column is always different, and in the nesting I have
t
Hi, I've started my first project with Postgres (after several years of
using Mysql), and I'm having an odd performance problem that I was
hoping someone might be able to explain the cause of.
My query
- select count(*) from gene_prediction_view where gene_ref = 523
- takes
Perfect - thanks Arjen. Using your value of 200 decreased the time to
15 seconds, and using a value of 800 makes it almost instantaneous. I'm
really not concerned about space usage; if having more statistics
increases performance this much, maybe I'll just default it to 1000?
Strangely, th
Thanks a lot, all of you - this is excellent advice. With the data
clustered and statistics at a more reasonable value of 100, it now
reproducibly takes even less time - 20-57 ms per query.
1000x speedup with proper tuning - always impressive, lol.
IO seeks are always your w
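For reference, sketches of the two knobs discussed above, assuming gene_prediction_view is a plain table as the query suggests (the statistics target and the index name are placeholders):

ALTER TABLE gene_prediction_view ALTER COLUMN gene_ref SET STATISTICS 100;
ANALYZE gene_prediction_view;

CREATE INDEX gene_prediction_view_gene_ref_idx ON gene_prediction_view (gene_ref);
CLUSTER gene_prediction_view USING gene_prediction_view_gene_ref_idx;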
I tried removing the index from the part_master table and got the same
result
Since all the data is in the partitions, the part_master table is empty,
so the index is not useful for your query.
myDB=# explain SELECT min(logdate) FROM part_master;
Proposals :
1- Use plpgsql t
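Since the planner of that era will not use the partitions' indexes for a bare min() over the whole inheritance tree, one workaround (partition names invented here) is to take the min per partition and then the min of those:

SELECT min(logdate) FROM (
    SELECT min(logdate) AS logdate FROM part_2008_01
    UNION ALL
    SELECT min(logdate) FROM part_2008_02
    -- ... one branch per partition ...
) AS per_partition;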
Well if you're caching per-connection then it doesn't really matter
whether
you do it on the client side or the server side, it's pretty much
exactly the
same problem.
Actually I thought about doing it on the server since it would then also
work with connection pooling.
Doing it on the c
If you suspect your tables or indexes are bloated, restore your
dump to a test box.
Use fsync=off during restore, you don't care about integrity on the
test box.
This will avoid slowing down your production database.
Then look at the size of the restored database.
If it i
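To compare sizes after such a test restore, something like this (the database name is a placeholder):

SELECT pg_size_pretty(pg_database_size('testdb'));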
Hi
We currently have a 16CPU 32GB box running postgres 8.2.
When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
UTF8 -F c -b" I get a file of 14GB in size.
But the database is 110GB in size on the disk. Why the big difference
in size? Does this have anything to do with
Will this help with performance ?
Depends if the bloat is in part of your working set. If debloating can
make the working set fit in RAM, or lower your IOs, you'll get a boost.
Now, is the bloat in the tables (which tables ?) or in the indexes
(which indexes ?), or in the toast tables?
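One way to see where the bloat lives is the usual largest-relations query against the catalog; a sketch:

SELECT relname, relkind, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relkind IN ('r', 'i', 't')    -- tables, indexes, toast tables
ORDER BY pg_relation_size(oid) DESC
LIMIT 20;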
The queries in question (select's) occasionally take up to 5 mins even
if they take ~2-3 sec under "normal" conditions, there are no sequential
scans done in those queries. There are not many users connected (around
3, maybe) to this database usually since it's still in a testing phase.
I
On Wed, 16 Apr 2008 06:07:04 +0200, Thomas Spreng <[EMAIL PROTECTED]> wrote:
On 16.04.2008, at 01:24, PFC wrote:
The queries in question (select's) occasionally take up to 5 mins even
if they take ~2-3 sec under "normal" conditions, there are no
sequential scans
On Wed, 16 Apr 2008 23:14:11 +0200, samantha mahindrakar
<[EMAIL PROTECTED]> wrote:
Hi
I am finding that the following query is working a bit slow:
EXECUTE '(SELECT ARRAY(SELECT DISTINCT date_part(''day'',
measurement_start)
FROM ' || gettablestring(dates)|| '
WHERE lane_id IN (' || lanesidar
- why am I still getting a seq scan ?
You'll seq scan tmp1 obviously, and also the other table since you fetch a
very large part of it in the process.
It's the only way to do this query since there is no WHERE to restrict the
number of rows and the DISTINCT applies on columns from both tab
On Fri, 18 Apr 2008 11:36:02 +0200, Gregory Stark <[EMAIL PROTECTED]>
wrote:
"Francisco Reyes" <[EMAIL PROTECTED]> writes:
Is there any disadvantage to using "group by" to obtain a unique list?
On a small dataset the difference was about 20 percent.
Group by
HashAggregate (cost=369.61..
On Sun, 20 Apr 2008 17:15:36 +0200, Francisco Reyes
<[EMAIL PROTECTED]> wrote:
PFC writes:
- If you process up to some percentage of your RAM worth of data,
hashing is going to be a lot faster
Thanks for the excellent breakdown and explanation. I will try and get
sizes of the
that's correct, there are nightly (at least at the moment) processes that
insert around 2-3 million rows and delete about the same amount. I can see
that
those 'checkpoints are occurring too frequently' messages are only logged
during that timeframe.
Perhaps you should increase the quantity of
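Presumably that refers to WAL/checkpoint segments; an illustrative postgresql.conf sketch (values are only examples):

checkpoint_segments = 32    # default is 3; more segments means fewer, larger checkpoints
checkpoint_warning = 300    # keep warning if checkpoints still arrive closer than 5 minutes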
about 2300 connections in idle
(ps auxwww | grep postgres | grep idle)
[...]
The server that connects to the db is an apache server using persistent
connections. MaxClients is 2048 thus the high number of connections
needed. Application was written in PHP using the Pear DB class.
I should say that this is on postgresql 7.4.16 (debian stable).
Whoa.
I cannot understand why the following two queries differ so much in
execution time (almost ten times)
Post EXPLAIN ANALYZE for both, and also post table definitions (with
indexes), use \d table. This will allow
On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov <[EMAIL PROTECTED]>
wrote:
I found a strange issue in a very simple query. Statistics for all columns
are at level 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
create table a (
gr
Our ~600,000,000
row table is changed very infrequently and is on a 12 disk software
raid-6
for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X
Fusion-MPT SAS. Our ~50,000,000 row staging table is on a 12 disk
hardware
raid-10 using a Dell PowerEdge Expandable RAID con
Do you mean that the overhead is an artefact of timing the query? In
that case, the query should run faster than its evaluation with EXPLAIN
ANALYZE, correct?
Is there a way to test this assumption regarding the speed of
gettimeofday? I'm on a Macbook and have no idea about the performan
An index scan looks through the index and pulls in each page as it sees
it.
A bitmap index scan looks through the index and makes a sorted list of
all
the pages it needs and then the bitmap heap scan reads all the pages.
If your data is scattered then you may as well do the index scan, but
You got the order slightly wrong I guess.
1) hardware
Would only come first if your RAM is really too small, or you use RAID5
on write-heavy tables, or what limits you is transaction fsync (hint :
8.3).
Adding RAM is cheap.
2) rewriting my queries and table structures
Hello,
Executive summary :
Last year I wrote a database benchmark which simulates a forum.
It works on Postgres and MySQL.
It could be useful.
I have no time to touch this, so it is rotting on my harddrive.
Who wants to adopt it ? I will put it on pgfoundry.
I can spend a few hours documenting
I created several indices for the primary table, yes. Sure I can do a
table for a volatile column, but then I'll have to create a new such
table for each derived column -- that's why I tried to add a column to
the existing table. Yet seeing this is really slow, and I need to do
many der
I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with their performance. The table has
76K rows, growing by about 1000 rows per working day, so the performance is
not that great: it takes about 20 to 30 seconds to get all the records
for the t
On Tue, 06 May 2008 10:21:43 +0200, Frank van Vugt <[EMAIL PROTECTED]>
wrote:
L.S.
I'm noticing a difference in planning between a join and an in() clause,
before trying to create an independent test-case, I'd like to know if
there's
an obvious reason why this would be happening:
Is the
What is a "period" ? Is it a month, or something more "custom" ?
Can periods overlap ?
No, periods can never overlap. If the periods did overlap you would be in
violation of many tax laws around the world. Plus you would not know
how much money you are making or losing.
I was wondering
When you create a foreign key to a table is there an index that is
created on the foreign key automatically?
No, Postgres doesn't do it for you, because if you create (ref_id)
references table.id, you will perhaps create an index on (ref_id, date)
which would then fill the purpose (and ot
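A sketch of that pattern with placeholder names: putting the referencing column first lets one index serve both the foreign key checks and the date-ordered lookups:

CREATE INDEX child_ref_id_date_idx ON child_table (ref_id, date);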
Hi, all. I want to ask what type of index is better to create for
bigint types. I have table with bigint (bigserial) primary key. What
type is better to use for it? I tried btree and hash, but didn't
notice any differences in execution time. For GiST and GIN there is the
problem that I must create o
Will it pay to go to a controller with higher memory for existing
machines? The one machine I am about to redo has PCI which seems to
somewhat limit our options.
Urgh.
You say that like you don't mind having PCI in a server whose job is to
perform massive queries over large data sets.
You say that like you don't mind having PCI in a server whose job is
to perform massive queries over large data sets.
I am in my 4th week at a new job. Trying to figure what I am working
with.
LOOL, ok, hehe, not exactly the time to have a "let's change everything"
fit ;)
From what I
On Wed, 14 May 2008 06:40:40 +0200, fernando castano
<[EMAIL PROTECTED]> wrote:
Hi all,
This sql is taking too long for the size of my tiny db. Any tips from
this alias? I tried moving the sort to the first left outer join
(between projects and features tables) using a nested subquery, b
The problem seem to arise when a SELECT that returns a lot of rows is
Does the SELECT return a lot of rows, or does it scan a lot of rows ?
(for instance, if you use aggregates, it might scan lots of data but only
return few rows).
The problem is that when the SELECTs are run the main a
The real question in my mind is why this turns into a bottleneck before
the similar task of cleaning the 16MB XLOG segment does. I expected
that one would need to be cracked before the CLOG switch time could
possibly be an issue, but reports from the field seem to suggest
otherwise.
H
On Tue, 20 May 2008 22:03:30 +0200, kevin kempter
<[EMAIL PROTECTED]> wrote:
Version 8.3.1
On May 20, 2008, at 1:51 PM, kevin kempter wrote:
Hi all;
I have 2 tables where I basically want to delete from the first table
(seg_id_tmp7) any rows where the entire row already exists in the
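One way to express that delete, sketched with an invented name for the second table and placeholder columns (seg_id_tmp7 comes from the message):

DELETE FROM seg_id_tmp7 t
USING seg_id_final f
WHERE t.col1 = f.col1
  AND t.col2 = f.col2
  AND t.col3 = f.col3;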
On Wed, 21 May 2008 15:09:49 +0200, Richard Huxton <[EMAIL PROTECTED]>
wrote:
Luke Lonergan wrote:
The problem is that the implied join predicate is not being
propagated. This is definitely a planner deficiency.
IIRC only equality conditions are propagated and gt, lt, between aren't.
I
On Wed, 21 May 2008 16:10:53 +0200, H. Hall <[EMAIL PROTECTED]>
wrote:
Does anyone know if there is a source that provides "Big O" notation for
postgres's aggregate functions and operations? For example is count(*)
= O(1) or O(n)?
Do the developers for postgres use Big O when selecting a
Thank you for your reply. I don't really need to count rows in
transactions table, I just thought this was a good example to show how
slow the query was.
Usually you're more interested in the performance of the queries you need
to make rather than the ones you don't need to make ;)
But b
On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[EMAIL PROTECTED]>
wrote:
Running postgres 8.2.5
I have a table that has 5 indices, no foreign keys or any
dependency on any other table. If I delete the database and
start entering entries, everything works very well until I get
to some poin
I am using the c-library interface and for these particular transactions
I preload PREPARE statements. Then as I get requests, I issue a BEGIN,
followed by at most 300 EXECUTES and then a COMMIT. That is the
general scenario. What value beyond 300 should I try?
Thanks.
Do you have PREPARE sta
SELECT relpages, reltuples FROM pg_class WHERE relname = 'users';
 relpages | reltuples
----------+-----------
    54063 |      2307
(1 row)
This is a horribly bloated table.
The output of the same query on the old server, which is fast:
 relpages | reltuples
----------+-----------
       42 |
I want to know if PostgreSQL has limitations on concurrent access,
because a lot of people will access this database at the same time.
PostgreSQL has excellent concurrency provided you use it correctly.
But what do you mean by concurrent access ?
* Number o
by the way, there is a foreign key on another table that references the
primary key col0 on table test.
Is there an index on the referencing field in the other table ? Postgres
must find the rows referencing the deleted rows, so if you forget to index
the referencing column, this can take a very long time.
*) is the flash random write problem going to be solved in hardware or
specialized solid state write caching techniques. At least
currently, it seems like software is filling the role.
Those flash chips are page-based, not unlike a harddisk, ie. you cannot
erase and write a single byte; you must write whole pages and erase whole blocks.
PFC, I have to say these kind of posts make me a fan of yours. I've
read many of your storage-related replies and have found them all very
educational. I just want to let you know I found your assessment of the
impact of Flash storage perfectly worded and unbelievably insightful.
Dear List,
I would like to improve seq scan performance. :-)
I have many cols in a table. I use only 1 col to search on. It is
indexed with btree with text_pattern_ops. The search method is: r like
'%aaa%'
When I make another table with only this column's values, the search time is
better.
update songs set views = 0;
UPDATE 54909
Time: 101907.837 ms
time is actually less than 10 minutes, but it is still very long :(
Wow.
test=> CREATE TABLE test (id SERIAL PRIMARY KEY, value INTEGER);
test=> INSERT INTO test (value) SELECT n FROM generate_series( 1,10 )
AS n;
Tem
I've been searching for performance metrics and tweaks for a few weeks
now. I'm trying to determine if the length of time to process my queries
is accurate or not and I'm having a difficult time determining that. I
know postgres performance is very dependent on hardware and settings and
I
OK, I see your problem. Try this :
read this : http://www.postgresql.org/docs/current/static/pgtrgm.html
locate and \i the pg_trgm.sql file
CREATE TABLE dict( s TEXT );
I loaded the english - german dictionary in a test table. I didn't parse
it, so it's just a bunch of 418552 strings, englis
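A sketch of the pg_trgm approach on that test table (dict and s come from the message, the index name is invented, and this is a similarity search rather than a plain LIKE):

CREATE INDEX dict_s_trgm_idx ON dict USING gist (s gist_trgm_ops);

SELECT s, similarity(s, 'aaa') AS sml
FROM dict
WHERE s % 'aaa'
ORDER BY sml DESC
LIMIT 20;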
Query below seems to use indexes everywhere in most optimal way.
dokumnr column is of type int
The speed of this query varies widely:
In live db fastest response I have got is 8 seconds.
Re-running same query after 10 seconds may take 60 seconds.
Re-running it again after 10 seconds may take 114 seconds.
OK so vmstat says you are IO-bound, this seems logical if the same plan
has widely varying timings...
Let's look at the usual suspects :
- how many dead rows in your tables ? are your tables data, or bloat ?
(check vacuum verbose, etc)
- what's the size of the dataset re
How can I vacuum full pg_shdepend automatically so that other users can
work at the same time?
Your table is horribly bloated.
You must use VACUUM FULL + REINDEX (as superuser) on it, however
unfortunately, it is blocking.
Therefore, you should wait for Sunday night to do this, when nobody is using the database.
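For reference, the two commands being described; both require superuser and both take exclusive locks, which is why the quiet window matters:

VACUUM FULL pg_shdepend;
REINDEX TABLE pg_shdepend;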
Server has 2 GB RAM.
It has SATA RAID 0,1 integrated controller (1.5Gbps) and SAMSUNG HD160JJ
mirrored disks.
You could perhaps run a little check on the performance of the RAID, is
it better than linux software RAID ?
Does it leverage NCQ appropriately when running queries in parallel?
The log file shows that mostly only those queries are slow:
SELECT ...
FROM dok JOIN rid USING (dokumnr)
JOIN ProductId USING (ProductId)
WHERE rid.ProductId LIKE :p1 || '%' AND dok.SaleDate>=:p2
:p1 and :p2 are parameters different for different queries.
dok contains several years of data.
On Fri, 21 Nov 2008 21:07:02 +0100, Tom Lane <[EMAIL PROTECTED]> wrote:
PFC <[EMAIL PROTECTED]> writes:
Index on orders_products( product_id ) and orders_products( order_id ):
=> Same plan
Note that in this case, a smarter planner would use the new index.
Thank you very much for great sample.
I tried to create a test case from this to match the production db:
1.2 million orders
3.5 million order details
13400 products with char(20) as primary keys containing ean-13 codes
mostly
the last 3 years of data
every order has usually 1..3 detail lines
same product
Not yet no, but that's a good suggestion and I do intend to give it a
whirl. I get about 27MB/s from raid 1 (10 is about the same) so
hopefully I can up the throughput to the speed of about one disk with
sw raid.
FYI I get more than 200 MB/s out of a Linux Software RAID5 of 3 SATA
drives (
SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100
I think pagination is overrated.
If the query produces, for instance, something like 100 rows or less,
more often than not, getting all the rows will take the exact same time as
getting a portion of the rows... in
First off, any thoughts on tuning inserts into large tables? I have a
large
table with an insert like this:
insert into public.bigtab1 (text_col1, text_col2, id) values ...
                QUERY PLAN
------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0)
(1 row)
I am struggling with the following query, which fetches a random subset
of 200 questions that matches certain tags within certain languages.
However, the query takes forever to evaluate, even though I have a
"limit 200" appended. Any ideas on how to optimize it?
QUERY: ===