AM, Subramaniam C > wrote:
>
>> The first output shows the result when the query is executed from the SQL
>> command line. The second output shows when it is executed from the
>> application. As per the output it is clear that when the query is executed
>> through JDBC it's not using
upgrade.
>
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
> On 28 September 2017 at 12:32, Subramaniam C
> wrote:
>
>> The JDBC version is 9.4-1201-jdbc41.
>>
>> Query :-
>>
>> select count(*) OVER() AS count
not automatically use a cursor, but it does use prepared
> statements which can be slower.
>
>
> Can you provide the query and the jdbc query ?
>
>
>
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
On 28 September 2017 at 05:59, Subramaniam C
Filter: (("timestamp" >=
'150598950'::bigint) AND ("timestamp" <= '150599040'::bigint))
On Thu, Sep 28, 2017 at 2:56 PM, Pavy Philippe
wrote:
> https://www.postgresql.org/docs/current/static/auto-explain.html
>
>
> -Message d
I configured cursor_tuple_fraction to 1 but I am still facing the same
issue.
Please help.
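Following the auto_explain link above, a sketch of how to see the plan the slow
JDBC session actually runs (these are standard auto_explain GUCs; LOAD needs
superuser, otherwise preload the module via session_preload_libraries, and run
the SETs on the same JDBC connection):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;   -- log the plan of every statement
SET auto_explain.log_analyze = on;       -- include actual row counts and timings

The plans then show up in the server log, where the JDBC plan can be compared
with the one psql produces.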
On Thu, Sep 28, 2017 at 2:18 PM, Julien Rouhaud wrote:
> On Thu, Sep 28, 2017 at 10:19 AM, Subramaniam C
> wrote:
> > Hi
> >
> > When I try to execute the query from sql command
Hi
When I execute the query from the SQL command line it takes only around
1 second. But when I execute the same query through JDBC (Java) using a
PreparedStatement it takes around 10 seconds.
Can you please let us know the reason and how to fix this issue?
Thanks and Regards
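The "prepared statements which can be slower" point above can usually be
reproduced from psql with a sketch like this (my_table stands in for the real
table; on 9.2+ servers the planner may switch to a generic plan after the fifth
execution, which is where JDBC timings often diverge from an ad-hoc query):

PREPARE q (bigint, bigint) AS
    SELECT count(*) OVER () AS count
    FROM my_table                                  -- placeholder table name
    WHERE "timestamp" >= $1 AND "timestamp" <= $2;

EXPLAIN ANALYZE EXECUTE q (150598950, 150599040);  -- repeat six or more times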
Hi
I wanted to query the top 20 rows by joining two tables, one having around
100,000 (1 lakh) rows and the other around 500,000 rows. Since I am using
ORDER BY in the query, I created a compound index on the columns used in the
ORDER BY. Initially the index size was 939 MB.
Then I ran EXPLAIN (ANALYZE, BU
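For a top-N ORDER BY to avoid a full sort, the compound index has to match the
ORDER BY columns in order and direction (or be its exact reverse). A sketch
with hypothetical names, not the poster's actual schema:

CREATE INDEX t1_sortcols_idx ON t1 (col_a, col_b DESC);

SELECT t1.*, t2.*
FROM t1
JOIN t2 ON t2.t1_id = t1.id
ORDER BY t1.col_a, t1.col_b DESC
LIMIT 20;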
With this query I am trying to get the latest hour for a given timestamp so
that I can get the whole health array of all objects for a given hour. So I am
doing DISTINCT ON mobid and ordering by hour and mobid DESC.
On Thu, Sep 14, 2017 at 6:03 PM, Subramaniam C
wrote:
> I created an index on mobid
> On 2017-09-14 13:51, Subramaniam C wrote:
>
>> Hi
>>
>> QUERY :-
>>
>> select distinct on (health_timeseries.mobid) mobid,
>> health_timeseries.health, health_timeseries.hour from
>> health_timeseries where hour >=(1505211054000/(3600*1000
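Note that DISTINCT ON (mobid) requires mobid to be the leading ORDER BY column,
and a matching index lets it run without a full sort. A sketch based on the
query and schema in this thread (the hour bound is taken from the snippet
above):

CREATE INDEX health_timeseries_mobid_hour_idx
    ON health_timeseries (mobid, hour DESC);

SELECT DISTINCT ON (mobid) mobid, health, hour
FROM health_timeseries
WHERE hour >= 1505211054000/(3600*1000)
ORDER BY mobid, hour DESC;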
Hi
Requirement :-
We need to retrieve the latest health of around 1.5 million objects for a
given time.
Implementation :-
We are storing hourly data for each object in a single row. Given below is the
schema :-
CREATE TABLE health_timeseries (
  mobid text NOT NULL,
  hour bigint NOT
That's almost identical to my tables.
You explained your problem very well ;)
I certainly will. Many thanks for those great lines of SQL!
You're welcome !
Strangely I didn't receive the mail I posted to the list (received yours
though).
It's a fairly tricky problem. I have a number of sensors producing
energy data about every 5 minutes, but at random times between 1 and
15 minutes. I can't change that as that's the way the hardware of the
sensors works. These feed into another unit, which accumulates them
and forwards them in b
My table is a statistics counters table, so I can live with a partial data
loss, but not with a full data loss, because many counters are weekly and
monthly.
An unlogged table can increase speed; this table has about 1.6 million
updates per hour, but unlogged comes with the chance of losing all the
information
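One possible compromise (a sketch, not from the thread; assumes PostgreSQL 9.5+
for ON CONFLICT and uses made-up column names) is to take the updates on an
unlogged table and periodically snapshot it into a regular, WAL-logged table,
so a crash only loses the last interval:

CREATE UNLOGGED TABLE counters_hot (
    counter_id bigint PRIMARY KEY,
    value      bigint NOT NULL
);
CREATE TABLE counters_safe (LIKE counters_hot INCLUDING ALL);

-- run every few minutes, e.g. from cron:
INSERT INTO counters_safe
SELECT counter_id, value FROM counters_hot
ON CONFLICT (counter_id) DO UPDATE SET value = EXCLUDED.value;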
I'm running a labour-intensive series of queries on a medium-sized dataset
(~100,000 rows) with geometry objects and both gist and btree indices.
The queries are embedded in plpgsql, and have multiple updates, inserts and
deletes to the tables as well as multiple selects which require the indice
can be purged entirely each day) rather than "delete froms", then
there truly would not be any reason to use "vacuum full". Does that sound
plausible?
Thanks again,
Daniel
On Thu, Jul 7, 2011 at 5:30 PM, Greg Smith wrote:
> On 07/07/2011 04:30 PM, D C wrote:
fairly high value (rather than it not
being set at all, as it is right now, and thus inheriting the "200" default
value from vacuum_cost_limit). Does that sound right? (If so, what might be a
good value to set?) Or perhaps there is a more foolproof way of doing this
that does not rely upon guesswork?
Any suggestions at all would be most welcome!
Daniel C.
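For reference, the knobs being discussed can be set like this on 9.4+ (on older
versions edit postgresql.conf and reload); the numbers are purely illustrative
and should be tuned to the machine's I/O capacity:

ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;  -- default -1 inherits vacuum_cost_limit (200)
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 10;    -- milliseconds
SELECT pg_reload_conf();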
Load testing of PostgreSQL 8.4 for OLTP application
suitability showed that the throughput of the
database degraded significantly over time, from thousands of write
transactions per second to almost zero.
A typical postgres benchmarking gotcha is :
- you start with empty tables
- t
When 1 client is connected postgres does 180 executions per second
This is suspiciously close to 10,000 executions per minute.
You got 10k RPM disks ?
How's your IO system setup ?
Try setting synchronous_commit to OFF in postgresql.conf and see if that
changes the results. That'll give useful i
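To test whether the per-commit fsync is the bottleneck, the setting can be
flipped per session, which avoids touching postgresql.conf at all; losing the
last few transactions on a crash is the trade-off, but there is no corruption
risk:

SET synchronous_commit = off;
-- rerun the benchmark, then compare against:
SET synchronous_commit = on;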
If I run 30,000 prepared "DELETE FROM xxx WHERE "ID" = ?" commands it
takes close to 10 minutes.
Do you run those in a single transaction or do you use one transaction per
DELETE ?
In the latter case, postgres will ensure each transaction is committed to
disk, at each commit. Since this in
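A sketch of the two usual fixes, using the table and column from the example
above (integer IDs assumed):

-- one transaction around all the deletes (one fsync at COMMIT):
BEGIN;
DELETE FROM xxx WHERE "ID" = 1;
DELETE FROM xxx WHERE "ID" = 2;
-- ...
COMMIT;

-- or a single set-based statement:
DELETE FROM xxx WHERE "ID" = ANY (ARRAY[1, 2, 3]);  -- extend the array as needed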
My problem is that, in fact, I don't know which tag to index, since I'm
running a web admin application where users can enter arbitrary
queries.
For a tag cloud, try this :
- table tags ( tag_id, tag_name )
- table articles ( article_id )
- table articles_to_tags( article_id, tag_id )
now this
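In DDL form, the schema outlined above might look like this (a sketch; names
are illustrative):

CREATE TABLE tags (
    tag_id   serial PRIMARY KEY,
    tag_name text NOT NULL UNIQUE
);
CREATE TABLE articles (
    article_id serial PRIMARY KEY
);
CREATE TABLE articles_to_tags (
    article_id int NOT NULL REFERENCES articles,
    tag_id     int NOT NULL REFERENCES tags,
    PRIMARY KEY (article_id, tag_id)
);
-- to go from a tag to its articles quickly:
CREATE INDEX articles_to_tags_tag_idx ON articles_to_tags (tag_id);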
You wrote
Try to create a btree index on "(bench_hstore->bench_id) WHERE
(bench_hstore->bench_id) IS NOT NULL".
What do you mean exactly?
=> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE
??? IS NOT NULL;
My table's def is:
CREATE TABLE myhstore ( id bigint PRIMARY KEY,
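For what it's worth, the advice quoted above seems to describe a btree
expression index on one hstore key rather than a GIN index on the whole
column; a sketch using the kvps column from the table definition:

CREATE INDEX myhstore_bench_id_idx
    ON myhstore ((kvps -> 'bench_id'))
    WHERE (kvps -> 'bench_id') IS NOT NULL;

-- usable by queries such as:
-- SELECT * FROM myhstore WHERE kvps -> 'bench_id' = '20_20';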
Hi Merlin
The analyze command gave the following result:
On the KVP table:
Index Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180)
(actual time=0.037..0.038 rows=1 loops=1)
Index Cond: (bench_id = '20_20'::text)
Total runtime: 0.057 ms
And on the Hstore table:
Bitma
This is a problem I encounter constantly wherever I go. Programmer
selects millions of rows from giant table. Programmer loops through
results one by one doing some magic on them. Programmer submits queries
back to the database. Even in batches, that's going to take ages.
Reminds me of a
why even have multiple rows? just jam it all in there! :-D
LOL
But seriously, when using an ORM to stuff an object hierarchy into a
database, you usually get problems with class inheritance, and all
solutions suck more or less (ie, you get a zillion tables, with an assorted
pile of JOINs,
I suspect your app is doing lots of tiny single-row queries instead of
efficiently batching things. It'll be wasting huge amounts of time
waiting for results. Even if every query is individually incredibly
fast, with the number of them you seem to be doing you'll lose a LOT of
time if you loop o
While reading about NoSQL:
MongoDB lets you store and search JSON objects. In that case, you don't
need to have the same "columns" in each "row".
The following ensued. Isn't it cute ?
CREATE TABLE mongo ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );
INSERT INTO mongo (obj) SELECT ('a=>'||n|
If you want to search by geographical coordinates, you could use a gist
index which can optimize that sort of thing (like retrieving all rows
which fit in a box).
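A sketch of that, using the built-in point type and a gist index (table and
column names are made up):

CREATE TABLE places (
    id  serial PRIMARY KEY,
    pos point NOT NULL
);
CREATE INDEX places_pos_gist ON places USING gist (pos);

-- all rows inside a bounding box:
SELECT * FROM places
WHERE pos <@ box(point(2.20, 48.80), point(2.45, 48.95));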
What's more, this is already a new controller. It replaced the previous
one because of exactly the same persisting problem. I think tech support
people who don't know a solution just buy some time for themselves and say "flash
this beta firmware maybe it helps" or "replace your hardware".
We had a
Any time the volume tables are queried it is to calculate the deltas
between each in_octets and out_octets from the previous row (ordered
by timestamp). The delta is used because the external system, where
the data is retrieved from, will roll over the value sometimes. I
have a function to do t
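That delta can also be computed in SQL with a window function; a sketch with
the column names from the description (table name and rollover handling are
assumed, since the poster's function isn't shown):

SELECT "timestamp",
       in_octets  - lag(in_octets)  OVER w AS delta_in,
       out_octets - lag(out_octets) OVER w AS delta_out
FROM volume
WINDOW w AS (ORDER BY "timestamp")
ORDER BY "timestamp";
-- a negative delta indicates the external counter rolled over
-- and needs the correction the poster's function applies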
I have created two tables. In the first table I am using many fields to
store 3 addresses.
In the second table I am using the array data type to store the 3 addresses.
Is there any performance issue I would face with either approach, and which
one will cause a performance problem?
The array is interest
When executing huge (10 kB), Hibernate-generated queries I noticed that,
when executed remotely over a high-latency network (ping to server
200-400 ms), the query takes a lot longer to complete.
When the query is executed remotely (psql or JDBC) it takes 1800 ms to
execute; when I issue the query in an
Thomas Pöhler wrote:
I remember you said you were using nginx and php-fastcgi, how many web
server boxes do you have, and what are the specs ?
select * from account_transaction where trans_type_id in ( ...
long, hard-coded list ...) and account_id=? order by created desc
limit 25;
You could use an index on (account_id, created, trans_type), replacing
your index on (account_id, created). This will not prevent
the "Index
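The suggested replacement index in DDL form (a sketch; a btree can be scanned
backwards, so it also serves ORDER BY created DESC for a single account_id):

CREATE INDEX account_transaction_acct_created_type_idx
    ON account_transaction (account_id, created, trans_type_id);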
On Sun, 06 Feb 2011 19:16:23 +0100, Linos wrote:
I am searching for the best hardware combination for a new server I have to
install. The new server will run PostgreSQL 9.0 with a database of about
10 GB; the database part is not the problem for me at this size, the p
I have clustered that table, it's still unbelievably slow.
Did you actually delete the old entries before clustering it? if it's
still got 4G of old sessions or whatever in it, clustering ain't gonna
help.
Also, IMHO it is a lot better to store sessions in something like
memcached, rather
On Fri, 04 Feb 2011 21:37:56 +0100, Shaun Thomas wrote:
On 02/04/2011 02:14 PM, felix wrote:
oh and there in the footnotes to django they say "don't forget to run
the delete expired sessions management every once in a while".
thanks guys.
Oh Django... :)
it won't run now because it's too b
And the risks are rather asymmetric. I don't know of any problem from
too large a buffer until it starts crowding out shared_buffers, while
under-sizing leads to the rather drastic performance consequences of
AdvanceXLInsertBuffer having to wait on the WALWriteLock while holding
the WALInsertLo
I wonder how the OP configured effective_io_concurrency ; even on a single
drive with command queuing the fadvise() calls that result do make a
difference...
- do_my_queries() is a user function (in C) which uses postgres access
methods directly (like index access method on primary key), processes
queries, and sends results back as binary data
- repeat for next batch
Nested Loop Index Scan processes about 400,000 rows/s which is 2.5
us/query, mayb
Do you mean I should use PREPARE?
Currently I use PHP to access the DB, which uses libpq. Is that considered a
fast call API? If not, can you please refer me to the right info.
The PHP pg_pconnect command opens a persistent PostgreSQL connection. Is that
enough, or should I use PgPool2 or something sim
> The real performance problem with RAID 5 won't show up until a drive
> dies and it starts rebuilding
I don't agree with that. RAID5 is very slow for random writes, since
it needs to :
"The real problem" is when RAID5 loses a drive and goes from "acceptable"
kind of slow, to "someone'
fc=# explain analyse select collection, period, tariff, sum(bytesSent),
sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600
as startchunk from sample_20101001 where starttime between 1287493200
and 1290171599 and collection=128 and ip = '10.9.125.207' group by
st
Dear Friends,
I have a requirement to run more than 15000 queries per second.
Can you please tell me which postgres parameters need to be changed to
achieve this?
I already have 17 GB RAM and a dual-core processor, and this machine
is dedicated to database operati
If the data are stored as a byte array but retrieved into a ResultSet,
the unpacking time goes up by an order of magnitude and the
observed total throughput is 25 MB/s. If the data are stored in a
Postgres float array and unpacked into a byte stream, the
observed throughput is 20 MB/s.
fl
On Thu, 09 Dec 2010 06:51:26 +0100, Alex Goncharov
wrote:
,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) *
| So it means there will be visible impact if the nature of DB
interaction is DB
| insert/select. We do that mostly in my app.
You can't say a "visible impact" unless you
The hardware it
is running on is fairly good, dual Xeon CPUs, 4 GB of RAM, Raid 5.
For a database you'd want to consider replacing the RAID5 with a RAID1 (or
RAID10). RAID5 is slow for small random updates, which are common in
databases. Since you probably have enough harddisks anyway, this
On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner
wrote:
Mladen Gogala wrote:
There is an operating system which comes with a very decent extent
based file system and a defragmentation tool, included in the OS.
The file system is called "NTFS"
Been there, done that. Not only was performa
So, I did. I ran the whole script in psql, and here is the result for
the INSERT:
realm_51=# explain analyze INSERT INTO drones_history (2771, drone_id,
drone_log_notice, drone_temperature, drone_pressure) SELECT * FROM
tmp_drones_history;
Just once.
OK, another potential problem eliminated, it gets strange...
If I have 5000 lines in the CSV file (which I load into a 'temporary' table
using COPY) I can be sure that drone_id there is the PK. That is because the CSV
file contains measurements from all the drones, one measurement per
drone. I u
Is that true? I have no idea. I thought everything was done at the
512-byte block level.
Newer disks (2TB and up) can have 4k sectors, but this still means a page
spans several sectors.
Now I tried removing the constraints from the history table (including
the PK) and the inserts were fast. After a few 'rounds' of inserts I added
the constraints back, and several rounds after that were fast again. But then
it was all the same again: an insert of some 11k rows took 4 seconds (with all
constrain
Having that many instances is not practical at all, so I'll have as many
databases as I have 'realms'. I'll use pg_dump | nc and nc | psql to
move databases
Mario
Then you can use schemas, too, it'll be easier.
Yes, since (sample_id, drone_id) is the primary key, postgres created a
composite index on those columns. Are you suggesting I add two more
indexes, one for drone_id and one for sample_id?
(sample_id,drone_id) covers sample_id but if you make searches on drone_id
alone it is likely to be very s
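In other words (a sketch, assuming the history table from this thread): the
composite primary key index already serves lookups on its leading column, so
only the second column needs its own index:

-- (sample_id, drone_id) already handles:  WHERE sample_id = ...
-- add this for lookups by drone_id alone:
CREATE INDEX drones_history_drone_id_idx ON drones_history (drone_id);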
realm_51=# vacuum analyze verbose drones;
INFO: vacuuming "public.drones"
INFO: scanned index "drones_pk" to remove 242235 row versions
DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.
INFO: "drones": removed 242235 row versions in 1952 pages
DETAIL: CPU 0.01s/0.02u sec elapsed 0.03 sec.
INFO:
I pasted the DDL at the beginning of my post.
Ah, sorry, didn't see it ;)
The only indexes the tables have are the ones created because of the PK
constraints. Table drones has around 100k rows. Table drones_history has
around 30M rows. I'm not sure what additional info you'd want but I'll
be more th
When I remove foreign constraints (drones_history_fk__samples and
drones_history_fk__drones) (I leave the primary key on drones_history)
then that INSERT, even for 50k rows, takes no more than a second.
So, my question is - is there anything I can do to make INSERTS with PK
faster? Or, si
Note that your LEFT JOIN condition is probably quite slow...
Please post EXPLAIN ANALYZE for this simplified version :
SELECT
R."Osoba weryfikująca",
R."LP",
A."NKA",
A."NTA",
Sum("Ile")
FROM "NumeryA" A
LEFT JOIN "Rejestr stacji do napr
The table has 200 records now.
Select * from employee takes 15 seconds to fetch the data!!!
Which seems to be very slow.
But when I say select id,name from employee it executes in 30ms.
30 ms is also amazingly slow for so few records and so little data.
- please provide results of "EXPLAIN AN
2. Why do both HashAggregate and GroupAggregate say the cost estimate
is 4 rows?
I've reproduced this :
CREATE TABLE popo AS SELECT (x%1000) AS a,(x%1001) AS b FROM
generate_series( 1,100 ) AS x;
VACUUM ANALYZE popo;
EXPLAIN ANALYZE SELECT a,b,count(*) FROM (SELECT * FROM popo UNI
hat when a COMMIT or segment switch came, most
of the time, the WAL was already synced and there was no wait.
Just my 2 c ;)
performance is very good.
You can get performance in this ballpark by writing a custom aggregate in
C ; it isn't very difficult, the pg source code is clean and full of
insightful comments.
- take a look at how contrib/intagg works
- http://www.postgresql.org/files/documentation/books/aw_pg
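Once the C transition function is compiled into a shared library, the
SQL-level glue is roughly this (every name here is hypothetical):

CREATE FUNCTION int4_accum_c(int4[], int4) RETURNS int4[]
    AS 'my_agg_lib', 'int4_accum_c'
    LANGUAGE C IMMUTABLE STRICT;

CREATE AGGREGATE fast_accum (int4) (
    sfunc    = int4_accum_c,
    stype    = int4[],
    initcond = '{}'
);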
Is there any way using stored procedures (maybe C code that calls
SPI directly) or some other approach to get close to the expected 35
MB/s doing these bulk reads? Or is this the price we have to pay for
using SQL instead of some NoSQL solution. (We actually tried Tokyo
Cabinet and found it
My questions are: (1) Does the MVCC architecture introduce significant
delays between insert by a thread and visibility by other threads
As said by others, once committed it is immediately visible to all
(2) Are there any available benchmarks that can measure this delay?
Since you will not
"Pierre C" wrote:
in-page compression
How would that be different from the in-page compression done by
TOAST now? Or are you just talking about being able to make it
more aggressive?
-Kevin
Well, I suppose lzo-style compression would be better used on data that is
written a
Even if somebody had a
great idea that would make things smaller without any other penalty,
which I'm not sure I believe either.
I'd say that the only things likely to bring an improvement significant
enough to warrant the (quite large) hassle of implementation would be :
- read-only / archi
emailok | numeric(2,0)|
Note that NUMERIC is meant for
- really large numbers with lots of digits
- or controlled precision and rounding (ie, order total isn't
99. $)
Accordingly, NUMERIC is a lot slower in all operations, and uses a lot
more spac
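For a 0/1 flag like emailok, switching to a native type avoids the NUMERIC
overhead entirely; a sketch (the table name is made up):

ALTER TABLE users
    ALTER COLUMN emailok TYPE boolean
    USING emailok <> 0;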
I guess I have to comment here again and point out that while I am
having this
issue with text searches, I avoid using count(*) in such cases, I just
use
next and previous links.
Unfortunately sometimes you got to do an ORDER BY on search results, and
then all the rows got to be read...
suggest that 99% of instances of the "select count(*)" idiom are probably
bad use of the SQL language.
Well, suppose you paginate results. If the user sees that the search query
returns 500 pages, there are two options :
- you're google, and your sorting algorithms are so good that the answe
I ran into a fine example of this when I was searching this mailing list,
"Searching in 856,646 pages took 13.48202 seconds. Site search powered by
PostgreSQL 8.3." Obviously at some point count(*) came into play here
Well, tsearch full text search is excellent, but it has to work inside the
It sounds horrendously complicated to keep track of to me, and in the
end it won't make query execution any faster, it'll just potentially
help the planner pick a better plan. I wonder if that'd be worth the
extra CPU time spent managing the cache and cache content stats, and
using those ca
The bitmap heap scan is 3% faster,
3% isn't really significant. Especially if the new setting makes one query
100 times slower... Like a query which will, by bad luck, get turned into
a nested loop index scan for a lot of rows, on a huge table which isn't in
cache...
1) Should I switch to RAID 10 for performance? I see things like "RAID
5 is bad for a DB" and "RAID 5 is slow with <= 6 drives" but I see
little on RAID 6.
As others said, RAID6 is essentially RAID5 with a second parity block, i.e.
like RAID5 plus an already-rebuilt hot spare.
Basically when you UPDATE a row, at some point postgres will write the
page which con
Most (all?) hard drives have cache built into them. Many raid cards have
cache built into them. When the power dies, all the data in any cache is
lost, which is why it's dangerous to use it for write caching. For that
reason, you can attach a BBU to a raid card which keeps the cache alive
Essentially, we insert a set of columns into a table, and each row fires
a trigger function which calls a very large stored procedure
For inserting lots of rows, COPY is much faster than INSERT because it
parses data (a lot) faster and is more "data-stream-friendly". However the
actual inse
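For reference, the COPY form being recommended (a sketch with a hypothetical
table and file path; the row trigger still fires per row either way):

COPY my_table (col_a, col_b, col_c) FROM '/path/to/data.csv' CSV HEADER;
-- or, from the client side:
-- \copy my_table FROM 'data.csv' CSV HEADER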
Two problems to recognize. First is that building something in has the
potential to significantly limit use and therefore advancement of work
on external pools, because of the "let's use the built in one instead of
installing something extra" mentality. I'd rather have a great external
On the new system the bulk loads are dramatically slower than on the
previous
machine and so are the more complex queries. The smaller transactional
queries seem comparable but i had expected an improvement. Performing a
db
import via psql -d databas -f dbfile illustrates this problem.
If
I'd like to point out the costs involved in having a whole separate
"version"
It must be a setting, not a version.
For instance suppose you have a session table for your website and a users
table.
- Having ACID on the users table is of course a must ;
- for the sessions table you can dro
Well I guess I'd prefer a per-transaction setting, allowing one to bypass
WAL logging and checkpointing. I'm not sure that forcing the backend to take
care of writing the data itself is a good thing, but if you say so.
Well if the transaction touches a system catalog it better be WAL-logged...
A per-table
ed to:
a) Eliminate WAL logging entirely
b) Eliminate checkpointing
c) Turn off the background writer
d) Have PostgreSQL refuse to restart after a crash and instead call an
exteral script (for reprovisioning)
Of the above, (a) is the most difficult codewise.
Actually, it's prett
When you set up a server that has high throughput requirements, the last
thing you want to do is use it in a manner that cripples its throughput.
Don't try and have 1000 parallel Postgres backends - it will process
those queries slower than the optimal setup. You should aim to have
approx
I'm not surprised that Python add is so slow, but I am surprised that
I didn't remember it was... ;-)
it's not the add(), it's the time.time()...
FYI I've tweaked this program a bit :
import psycopg2
from time import time
conn = psycopg2.connect(database='peufeu')
cursor = conn.cursor()
cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
conn.commit()
cursor.execute("PREPARE ins AS INSERT INTO test VALUES ($1)")
cursor.execu
Have you tried connecting using a UNIX socket instead of a TCP socket on
localhost ? On such very short queries, the TCP overhead is significant.
Actually UNIX sockets are the default for psycopg2, had forgotten that.
I get 7400 using UNIX sockets and 3000 using TCP (host="localhost")
Can you give the config params for those :
fsync =
synchronous_commit =
wal_sync_method =
Also, some "vmstat 1" output during the runs would be interesting.
Within the data to import most rows have 20 to 50 duplicates.
Sometimes much more, sometimes less.
In that case (source data has lots of redundancy), after importing the
data chunks in parallel, you can run a first pass of de-duplication on
the chunks, also in parallel, something like :
C
Yes, the "other" reason is that I am not issuing a single SQL command,
but importing data from plain ASCII files through the Python-based
framework into the database.
The difference between your measurement and my measurement is the upper
potential of improvement for my system (which has, on
Since you have lots of data you can use parallel loading.
Split your data into several files and then do :
CREATE TEMPORARY TABLE loader1 ( ... )
COPY loader1 FROM ...
Use a TEMPORARY TABLE for this : you don't need crash-recovery since if
something blows up, you can COPY it again... and it wil
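A sketch of that parallel-load pattern plus the first-pass de-duplication
mentioned earlier in the thread (table, file and key names are made up):

CREATE TEMPORARY TABLE loader1 (LIKE target_table);
COPY loader1 FROM '/data/chunk1.csv' CSV;

-- keep one row per key from this chunk before merging:
INSERT INTO target_table
SELECT DISTINCT ON (key_col) *
FROM loader1
ORDER BY key_col;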
As promised, I did a tiny benchmark - basically, 8 empty tables are
filled with 100k rows each within 8 transactions (somewhat typical for
my application). The test machine has 4 cores, 64G RAM and RAID1 10k
drives for data.
# INSERTs into a TEMPORARY table:
[joac...@testsrv scaling]$ t
DELETE FROM table1 WHERE table2_id = ?
For bulk deletes, try :
DELETE FROM table1 WHERE table2_id IN (list of a few thousand ids)
- or use a JOIN delete with a virtual VALUES table
- or fill a temp table with ids and use a JOIN DELETE
This will save client/server roundtrips.
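The VALUES-join variant from the list above, sketched out:

DELETE FROM table1 t
USING (VALUES (1), (2), (3)) AS v(id)   -- extend to a few thousand ids
WHERE t.table2_id = v.id;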
Now, something t
FYI, I had a query like this :
(complex search query ORDER BY foo LIMIT X)
LEFT JOIN objects_categories oc
LEFT JOIN categories c
GROUP BY ...
(more joins)
ORDER BY foo LIMIT X
Here, we do a search on "objects" (i'm not gonna give all the details,
they're not interestin
How do you explain the cost is about ten times lower in the 2nd query
than the first ?
Function call cost ?
Can you EXPLAIN ANALYZE ?
explain analyze SELECT * FROM "telegrams" WHERE ((recipient_id=508933
AND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE))
ORDER BY id DESC LIMIT 10 OFFSET 0
If you need very fast performance on this query, you need to be able to
use the index for ordering.
Note that
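One common way to let each arm of the OR use an index for the ordering (a
sketch; this is not necessarily where the truncated reply was going) is a pair
of partial indexes plus a UNION:

CREATE INDEX telegrams_recipient_idx
    ON telegrams (recipient_id, id DESC) WHERE NOT recipient_deleted;
CREATE INDEX telegrams_user_idx
    ON telegrams (user_id, id DESC) WHERE NOT user_deleted;

SELECT * FROM (
    (SELECT * FROM telegrams
      WHERE recipient_id = 508933 AND NOT recipient_deleted
      ORDER BY id DESC LIMIT 10)
    UNION ALL
    (SELECT * FROM telegrams
      WHERE user_id = 508933 AND NOT user_deleted
      ORDER BY id DESC LIMIT 10)
) AS merged
ORDER BY id DESC LIMIT 10;
-- use UNION instead of UNION ALL if one row can satisfy both branches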
Does the psql executable have any ability to do a "fetch many", using a
server-side named cursor, when returning results? It seems like it tries
to retrieve the query entirely to local memory before printing to
standard out.
I think it grabs the whole result set to calculate the display col
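psql can in fact do this via its FETCH_COUNT variable, which makes it run the
query through a cursor and fetch in batches (column widths are then calculated
per batch rather than for the whole result):

\set FETCH_COUNT 1000
SELECT * FROM big_table;   -- now streamed 1000 rows at a time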
So am I to understand I don't need to do daily reindexing as a
maintenance measure with 8.3.7 on FreeBSD.
Sometimes it's better to have indexes with some space in them so every
insert doesn't hit a full index page and trigger a page split to make
some space.
Of course if the index is 90
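The free space kept in index pages is controlled by fillfactor; a sketch (90 is
the btree default, lower values leave more room for future inserts):

CREATE INDEX foo_idx ON foo (bar) WITH (fillfactor = 90);
-- or change an existing index; it takes full effect after a rebuild:
ALTER INDEX foo_idx SET (fillfactor = 70);
REINDEX INDEX foo_idx;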
1. VACUUM FULL ANALYZE once a week during low-usage time and
VACUUM FULL compacts tables, but tends to bloat indexes. Running it weekly
is NOT RECOMMENDED.
A correctly configured autovacuum (or manual vacuum in some circumstances)
should keep your DB healthy and you shouldn't ne