> Christian Fritze <[EMAIL PROTECTED]> writes:
> >
> > I'm working on a web based application (using gnuJSP / JDBC)
> > that needs to do queries like
> >
> > SELECT count(textattrib1), text_cat(textattrib1 || ' '), func1(textattrib2)
> > FROM table1
> > WHERE textattrib2 >= 'foo'
> > GROUP BY textattrib2
> > ORDER BY textattrib2 ASC
> > LIMIT somelimit;
> >
> > with table1 holding roughly 80000 rows (probably growing some day);
> > textattrib2 is indexed, of course.
> >
> > Unfortunately the query above is intolerably slow (up to 30 seconds or
> > so, 2 would be OK...).
* Tim Kientzle <[EMAIL PROTECTED]> [001109 17:21] wrote:
> A couple of ideas for you:
>
> Experiment with doing the GROUP BY within your code.
> Depending on a number of factors, it's sometimes faster.
>
> Experiment with doing the ORDER BY within your code.
> I've seen several cases where pulling the data into
> memory and sorting there was much, much faster than
> having the database do the sorting.
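For what it's worth, here is a rough (untested) sketch of doing the
grouping and sorting on the JDBC side, along the lines Tim suggests
above.  The table and column names are the ones from the original
query; the Connection handling and the output are just placeholders,
and the func1() column is left out for brevity:

    import java.sql.*;
    import java.util.*;

    // Pull the raw rows and do the GROUP BY / ORDER BY in memory
    // instead of in the backend.
    public class GroupInApp {
        public static void run(Connection conn, int limit) throws SQLException {
            // Let the backend handle only the WHERE clause (and use
            // the index on textattrib2 for it).
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery(
                "SELECT textattrib1, textattrib2 FROM table1 " +
                "WHERE textattrib2 >= 'foo'");

            // A TreeMap keeps its keys sorted, so grouping on
            // textattrib2 and the ORDER BY ... ASC happen in one pass.
            TreeMap groups = new TreeMap();
            while (rs.next()) {
                String t1 = rs.getString(1);
                String t2 = rs.getString(2);
                Object[] agg = (Object[]) groups.get(t2);
                if (agg == null) {
                    agg = new Object[] { new Integer(0), new StringBuffer() };
                    groups.put(t2, agg);
                }
                if (t1 != null) {   // count() and the concatenation skip NULLs
                    agg[0] = new Integer(((Integer) agg[0]).intValue() + 1); // count(textattrib1)
                    ((StringBuffer) agg[1]).append(t1).append(' ');          // text_cat(textattrib1 || ' ')
                }
            }
            rs.close();
            st.close();

            // Emit at most "limit" groups, mimicking LIMIT somelimit.
            int n = 0;
            for (Iterator it = groups.entrySet().iterator();
                 it.hasNext() && n < limit; n++) {
                Map.Entry e = (Map.Entry) it.next();
                Object[] agg = (Object[]) e.getValue();
                System.out.println(e.getKey() + ": count=" + agg[0]
                                   + " cat=" + agg[1]);
            }
        }
    }

Whether this beats the backend's sort depends mostly on how many rows
survive the WHERE clause; with 80000 rows it is at least worth timing
both approaches.
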
You can also increase PostgreSQL's performance by tuning the amount
of shared memory it allocates, as well as how much memory it will
use for sort buffers:
-B 32768 (32768 8KB buffers, i.e. a ~256MB shared segment)
-o "-S 65534" (sort buffer size, given in kilobytes, so roughly 64MB per sort)
--
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."