I have this query on a table with 150 thousand tuples and it takes too long:
SELECT t_documentcontent._id AS _id
FROM t_documentcontent LIMIT 50 OFFSET 8
Here is the EXPLAIN ANALYZE output:
"Limit  (cost=100058762.30..100058799.02 rows=50 width=58) (actual time=19433.474..19433.680 rows=50 loops=1)"
Sorry Alex, I forgot to mention that I had enable_seqscan off in my last test.
Now I have set enable_seqscan and enable_indexscan back on and added ORDER BY _id.
The table has an index on the _id field:
CREATE INDEX i_documentcontent_document
ON t_documentcontent
USING btree
(_document);
The database was recently vacuumed.
> > ...materialized view
> > of just the id field, the sequence scan will return much fewer pages than
> > when you do it on the main table. Then you join it to the indexed main
> > table, and page in just the rows you need. Voila - much faster result. Of
> > course we hav
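The narrow-scan-then-join trick quoted above can be sketched in a few lines. This uses Python's stdlib sqlite3 purely as a stand-in for PostgreSQL (the table and column names are taken from the thread; the sample data is made up): an inner query pages through just the id column, and an outer join fetches the wide rows only for that page.

```python
import sqlite3

# Stand-in table for t_documentcontent: a narrow key plus a wide payload.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_documentcontent (_id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO t_documentcontent VALUES (?, ?)",
    [(i, "document %d" % i) for i in range(1, 1001)],
)

# Page through the id column alone, then join back to pull full rows
# for just the 50 ids on the requested page.
page = conn.execute("""
    SELECT t._id, t.body
    FROM t_documentcontent AS t
    JOIN (SELECT _id FROM t_documentcontent
          ORDER BY _id LIMIT 50 OFFSET 8) AS ids
      ON t._id = ids._id
    ORDER BY t._id
""").fetchall()

print(len(page), page[0][0], page[-1][0])  # 50 9 58
```

In PostgreSQL the inner SELECT would be the suggested materialized view (or an index-only scan over _id), so the expensive wide-row fetches happen for 50 rows instead of offset-plus-50.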
Hi all
I am interested in protecting my applications that use PostgreSQL as their
database backend from SQL injection attacks. Can anyone recommend best
practices or references for protecting Postgres from this kind of malicious
user?
Thanks in advance,
José Manuel Gutíerrez de la Concha Martínez
Thanks to all of you; I will use prepared queries in all my functions from
now on. BTW, I am using the Qt 4 PostgreSQL driver from C++, not PHP. I
asked this question because I read that every day more and more applications
are compromised by this class of attack.
Thanks again.
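The parameterized-query approach recommended above can be sketched with Python's stdlib sqlite3 (the same idea applies to QSqlQuery::prepare()/bindValue() in Qt 4): user input is passed as a bound parameter, never spliced into the SQL text. The table, rows, and hostile input here are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Hostile input that breaks out of a string literal when concatenated.
user_input = "' OR '1'='1"

# Unsafe: string concatenation lets the attacker rewrite the WHERE clause,
# so the query matches (and leaks) every row.
unsafe_sql = "SELECT secret FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(unsafe_sql).fetchall()

# Safe: the ? placeholder binds the input as a plain value, so the
# attacker's quotes are just data and nothing matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

The concatenated version returns alice's secret; the bound version returns no rows, because no user is literally named `' OR '1'='1`.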
On Jan 23, 2008 9:45 PM, brian
On Sat, Apr 19, 2008 at 6:10 PM, Oleg Bartunov wrote:
> On Sat, 19 Apr 2008, Tom Lane wrote:
>
>> Craig Ringer writes:
>>>
>>> Tom Lane wrote:
I don't really see the problem. I assume from your reference to pg_trgm
that you're using trigram similarity as the prefilter for potentia