Benjamin Bostow wrote:
I haven't modified any of the settings. I did try changing shmmax from
32MB to 256MB but didn't see much change in the processor usage. The
init script that runs to start the server uses the following:
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -p
/usr/bin/post
Joel Jacobson <[EMAIL PROTECTED]> writes:
> I understand that it is not possible to occasionally re-plan the queries in a
> PL/pgSQL function without dropping and re-creating the function.
Huh? You only need to start a fresh connection.
regards, tom lane
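Plans in PL/pgSQL are cached for the life of the backend, which is why a fresh
connection picks up new ones. If re-planning on every call is really the goal,
the usual workaround is dynamic SQL via EXECUTE, which is planned each time it
runs. A minimal sketch, assuming a made-up table some_table, column some_col
and function name, written in the pre-8.0 quoting style of the era:

    CREATE OR REPLACE FUNCTION count_matching(integer) RETURNS bigint AS '
    DECLARE
        r RECORD;
    BEGIN
        -- The statement text is assembled and planned on every call,
        -- so it always sees current statistics and indexes.
        FOR r IN EXECUTE ''SELECT count(*) AS n FROM some_table
                           WHERE some_col = '' || quote_literal($1) LOOP
            RETURN r.n;
        END LOOP;
        RETURN 0;
    END;
    ' LANGUAGE plpgsql;

The price is a re-plan on every invocation, so this only pays off for
statements whose cached plans actually go stale.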
Ryszard Lach wrote:
> On Wed, Nov 19, 2003 at 01:58:27PM -0500, Bruce Momjian wrote:
> > Ryszard Lach wrote:
> >
> > > There is one other thing: logs from the same database running on 7.3 and
> > > the same application contained lines like 'select getdatabaseencoding()',
> > > 'select dat
On Wed, Nov 19, 2003 at 01:58:27PM -0500, Bruce Momjian wrote:
> Ryszard Lach wrote:
>
> > There is one other thing: logs from the same database running on 7.3 and the same
> > application contained lines like 'select getdatabaseencoding()', 'select
> > datestyle()' and similar (not used by appl
On Tue, Nov 18, 2003 at 10:07:48AM -0500, Bruce Momjian wrote:
>
> Wow, that is strange. If you don't use syslog, do you see the proper
> output?
I've just checked this. It behaves exactly the same way.
> If you turn on log_statement, do you see the statements?
If I turn on log_min_duration_s
On Wednesday 19 November 2003 17:26, you wrote:
> On Wed, 19 Nov 2003 10:18:18 +0100, Ian Barwick <[EMAIL PROTECTED]>
> wrote:
> >Indexes:
> >[...]
> >"opv_v_ix" btree (substr(value, 1, 128))
> >
> >SELECT obj_property_id
> > FROM object_property_value opv
> > WHERE opv.value = 'foo'
>
>
On Wednesday 19 November 2003 17:35, Stephan Szabo wrote:
> On Wed, 19 Nov 2003, Ian Barwick wrote:
> > I have this table:
(...)
>
> You probably need to be querying like:
> WHERE substr(value,1,128)='foo';
> in order to use that index.
>
> While substr(txtcol, 1,128) happens to have the property t
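For reference, the pattern being described, using the table and index names
from the thread (the index DDL and the example constant are reconstructed, not
copied from the original post):

    -- An index on an expression only helps queries that use that expression:
    CREATE INDEX opv_v_ix ON object_property_value (substr(value, 1, 128));

    -- Cannot use opv_v_ix; the predicate is on the raw column:
    SELECT obj_property_id FROM object_property_value opv
     WHERE opv.value = 'foo';

    -- Matches the indexed expression, so the planner can use opv_v_ix.
    -- For constants shorter than 128 characters the two predicates
    -- select the same rows:
    SELECT obj_property_id FROM object_property_value opv
     WHERE substr(opv.value, 1, 128) = 'foo';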
Ryszard Lach wrote:
> If I turn on log_min_duration_statement (i.e. set to 0), log_statement and
> log_duration, then I receive something like this:
>
> Nov 17 22:33:27 postgres[22945]: [29231-1] LOG: statement:
> Nov 17 22:33:27 postgres[22945]: [29232-1] LOG: duration: 0.198 ms
> Nov 17 22:33:2
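A sketch of the settings being toggled in this thread, in their 7.4-era
spellings (later releases renamed or re-typed several of them, so check the
documentation for your version):

    # postgresql.conf
    log_statement              = true   # log the text of every statement
    log_duration               = true   # log every statement's run time
    log_min_duration_statement = 0      # 0 = log duration for all statements,
                                        # -1 = off, N = only statements over N ms
    syslog                     = 2      # 0 = stderr, 1 = both, 2 = syslog only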
Shridhar,
> However, I do not agree with this logic entirely. It pegs the next vacuum
> w.r.t. the current table size, which is not always a good thing.
No, I think the logic's fine; it's the numbers that are wrong. We want to
vacuum when updates reach between 5% and 15% of total rows. NOT when
u
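A rough illustration of that rule of thumb, taking 10% as the midpoint and
using the planner's own row estimates (reltuples is only an estimate,
refreshed by VACUUM/ANALYZE):

    SELECT relname,
           reltuples        AS estimated_rows,
           reltuples / 10   AS updates_before_vacuum   -- ~10% rule
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY reltuples DESC;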
On Wed, 19 Nov 2003 10:18:18 +0100, Ian Barwick <[EMAIL PROTECTED]>
wrote:
>Indexes:
>[...]
>"opv_v_ix" btree (substr(value, 1, 128))
>SELECT obj_property_id
> FROM object_property_value opv
> WHERE opv.value = 'foo'
Try
... WHERE substr(opv.value, 1, 128) = 'foo'
HTH.
Servus
M
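A quick way to check whether the rewritten predicate actually reaches the
index (the plan line in the comment is only the shape to look for, not output
from this thread):

    EXPLAIN ANALYZE
    SELECT obj_property_id
      FROM object_property_value opv
     WHERE substr(opv.value, 1, 128) = 'foo';
    -- Expect something like "Index Scan using opv_v_ix on object_property_value"
    -- instead of "Seq Scan" once the expression matches the index.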
On Wed, 19 Nov 2003, Ian Barwick wrote:
>
> I have this table:
>
> db=> \d object_property_value
>Table "db.object_property_value"
> Column | Type | Modifiers
> ---++
Shridhar,
> Will look into it. Give me a day or so. I am planning couple of other
> patches as well. May be over week end.
Thanks, appreciated. As I said, I don't think the settings themselves are
wrong; I think the documentation is.
What are you patching?
--
Josh Berkus
Aglio Database Solu
On Thu, Nov 20, 2003 at 07:07:30 +0530,
Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:
>
> If I dump and reload, the performance improves and it takes < 1 sec. This
> is what I have been doing since the upgrade. But it's not a solution.
>
> The Vacuum full is at the end of a loading batch SQL fil
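The message is cut off, but since a VACUUM FULL is already being run, the
usual next thing to compare against a dump/reload is an index rebuild (the
table name below is a placeholder):

    -- VACUUM FULL in the 7.x line compacts the table but does not rebuild
    -- its indexes, and can even leave them more bloated than before:
    REINDEX TABLE some_table;
    -- then refresh the optimizer statistics:
    ANALYZE some_table;

If that restores the < 1 sec behaviour, the slowdown was index bloat rather
than a planner problem.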
Dear Tom,
Thanks for your quick response.
One addition: the difference got even nastier after adding an index (an index
essential for this query):
func with param improved from 2700ms to 2300ms
func with constant improved from 400ms to 31ms
inline query improved from 390ms to 2ms
So a
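Those three timings are the classic signature of a plan built without seeing
the parameter's value versus one built for a known constant. A way to look at
the two plans outside the function, assuming a release where PREPARE and
EXPLAIN EXECUTE are available and using placeholder names:

    -- Plan prepared without the value (on the 7.x releases discussed here
    -- this is always a generic plan; newer releases may still pick a
    -- value-specific plan for the first few executions):
    PREPARE q(integer) AS
        SELECT * FROM some_table WHERE some_col = $1;
    EXPLAIN EXECUTE q(42);

    -- Plan built for the literal, as in the inline query:
    EXPLAIN SELECT * FROM some_table WHERE some_col = 42;

    DEALLOCATE q;

If the two plans differ (say, sequential scan versus index scan), the
function-with-parameter slowdown is explained.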
Josh Berkus wrote:
Shridhar,
I was looking at the -V/-v and -A/-a settings in pgavd, and really don't
understand how the calculation works. According to the readme, if I set -v
to 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would
only vacuum after 21,000 rows had been
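The message is cut off, but the 21,000 figure is consistent with reading the
readme's threshold as a base plus a multiple of the table's row count:

    vacuum threshold = -v base + (-V scale * estimated rows)
                     = 1000    + (2 * 10,000)
                     = 21,000 updates/deletes before pgavd vacuums the table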
Laurent Martelli wrote:
"Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
[...]
Shridhar> 2. Try the following query: EXPLAIN ANALYZE SELECT * from lists
Shridhar> join classes on classes.id=lists.value where
Shridhar> lists.id='16'::integer;
Shridhar> classes.id=lists.value::int
> "Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
[...]
Shridhar> 2. Try the following query: EXPLAIN ANALYZE SELECT * from lists
Shridhar> join classes on classes.id=lists.value where
Shridhar> lists.id='16'::integer;
Shridhar> classes.id=lists.value::integer.
With classes
Laurent Martelli wrote:
"Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
Shridhar> I am stripping the analyze outputs and directly jumping to
Shridhar> the end.
Shridhar> Can you try following?
Shridhar> 1. Make all fields integer in all the tables.
I can't do this because
> "Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
Shridhar> Laurent Martelli wrote:
>>> "Shridhar" == Shridhar Daithankar
>>> <[EMAIL PROTECTED]> writes:
Shridhar> Laurent Martelli wrote:
>> [...]
>> Should I understand that a join on incompatible types
>> (s
I have this table:
db=> \d object_property_value
Table "db.object_property_value"
Column | Type | Modifiers
--------+------+-----------
obj_prop
Laurent Martelli wrote:
"Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
Shridhar> Laurent Martelli wrote:
[...]
>> Should I understand that a join on incompatible types (such as
>> integer and varchar) may lead to bad performance?
Shridhar> Conversely, you should enforc
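Pulling the recurring suggestion together: classes.id is an integer and
lists.value a varchar, so casting the varchar side lets the join compare like
types (a sketch; it assumes every lists.value actually parses as an integer,
otherwise the cast raises an error):

    EXPLAIN ANALYZE
    SELECT *
      FROM lists
      JOIN classes ON classes.id = lists.value::integer
     WHERE lists.id = '16'::integer;

Longer term, storing lists.value as an integer column avoids the cast (and the
risk of non-numeric values) entirely.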