Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread CSN
Doh! Yeah, now I remember ;) QUERY 1: => explain analyze select p.* from product_categories pc inner join products p on pc.product_id = p.id where pc.category_id = 1016 order by p.title limit 25 offset 0; QUERY PLAN

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread scott.marlowe
On Tue, 10 Feb 2004, CSN wrote: > > I think that probably improves things (lower cost? - > see my other post): > > explain select p.* from products p where p.id in ( > select product_id from product_categories pc where > pc.category_id = 1016) order by p.title limit 25 > offset 0; >
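The IN-subquery rewrite being discussed can be reassembled from the quoted fragments. A sketch, with table and column names exactly as posted in the thread and the category id from the earlier example:

```sql
-- Rewrite from the thread: filter products through an IN subquery
-- instead of joining, then sort and page the result.
EXPLAIN
SELECT p.*
FROM products p
WHERE p.id IN (
    SELECT product_id
    FROM product_categories pc
    WHERE pc.category_id = 1016
)
ORDER BY p.title
LIMIT 25 OFFSET 0;
```

Whether this actually beats the join depends on the planner; on 7.4, IN (subquery) is planned much better than on 7.3 and earlier.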

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread CSN
--- "scott.marlowe" <[EMAIL PROTECTED]> wrote: > On Tue, 10 Feb 2004, CSN wrote: > > > I have a pretty simple select query that joins a > table > > (p) with 125K rows with another table (pc) with > almost > > one million rows: > > > > select p.* > > from product_categories pc > > inner join prod

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread scott.marlowe
On Tue, 10 Feb 2004, CSN wrote: > > 2. Vacuum analyze the tables concerned and post the > > output of EXPLAIN ANALYSE > > with your query. > > => explain analyze; > > results in: > > ERROR: syntax error at or near ";" at character 16 No, silly. You do: explain analyze select ... (rest of th
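The correction being made here is that EXPLAIN ANALYZE is a prefix to a complete statement, not a statement on its own. Using the query from earlier in the thread:

```sql
-- Wrong: "explain analyze;" by itself is a syntax error.
-- Right: prepend it to the full query.
EXPLAIN ANALYZE
SELECT p.*
FROM product_categories pc
INNER JOIN products p ON pc.product_id = p.id
WHERE pc.category_id = 1016
ORDER BY p.title
LIMIT 25 OFFSET 0;
```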

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread scott.marlowe
On Tue, 10 Feb 2004, CSN wrote: > I have a pretty simple select query that joins a table > (p) with 125K rows with another table (pc) with almost > one million rows: > > select p.* > from product_categories pc > inner join products p > on pc.product_id = p.id > where pc.category_id = $category_id

Re: [GENERAL] Duplicate oid and primary key values

2004-02-10 Thread scott.marlowe
On Tue, 10 Feb 2004, Jeff Bohmer wrote: > > I have a table in a PG 7.4.1 database with 380 duplicate rows, > including duplicate oid and primary key values. Looking through our > backups, the duplicates did not exist before Friday, 02/06/2004. I'm > assuming neither pg_dumpall nor restoring

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread CSN
> 2. Vacuum analyze the tables concerned and post the > output of EXPLAIN ANALYSE > with your query. => explain analyze; results in: ERROR: syntax error at or near ";" at character 16

Re: [GENERAL] Upgrade function problem - c language

2004-02-10 Thread Oleg Lebedev
I am trying to upgrade from postgres 7.3.2 to 7.4.1. I use backups created by the 7.3.2 pg_dump. When I run pg_restore (v. 7.4.1) as a superuser, it throws "ERROR: permission denied for language c". Below is the installation trace.   Thanks.   Oleg   TRACE: [EMAIL PROTECTED] we

Re: [GENERAL] I want to use postresql for this app, but...

2004-02-10 Thread scott.marlowe
On Tue, 10 Feb 2004, Andrew Sullivan wrote: > On Tue, Feb 10, 2004 at 12:06:43PM -0700, scott.marlowe wrote: > > simple. They GPL'd their connection libs. So, if you write code that has > > their connection libs in it, it's gotta be GPL'd. > > Yes. But you could fork from their old libs (whic

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread Richard Huxton
On Tuesday 10 February 2004 19:51, CSN wrote: > I have a pretty simple select query that joins a table > (p) with 125K rows with another table (pc) with almost > one million rows: > > select p.* > from product_categories pc > inner join products p > on pc.product_id = p.id > where pc.category_id =

Re: [GENERAL] Join query on 1M row table slow

2004-02-10 Thread lists
On Tue, 10 Feb 2004, CSN wrote: > I have a pretty simple select query that joins a table > (p) with 125K rows with another table (pc) with almost > one million rows: > > select p.* > from product_categories pc > inner join products p > on pc.product_id = p.id > where pc.category_id = $category_id

Re: [GENERAL] Duplicate oid and primary key values

2004-02-10 Thread Richard Huxton
On Tuesday 10 February 2004 17:10, Jeff Bohmer wrote: > I have a table in a PG 7.4.1 database with 380 duplicate rows, > including duplicate oid and primary key values. Looking through our > backups, the duplicates did not exist before Friday, 02/06/2004. I'm > assuming neither pg_dumpall nor res

Re: [GENERAL] DB cache size strategies

2004-02-10 Thread Martijn van Oosterhout
On Tue, Feb 10, 2004 at 01:20:32PM -0700, Ed L. wrote: > On Friday January 30 2004 6:06, Martijn van Oosterhout wrote: > > On Fri, Jan 30, 2004 at 03:19:56PM -0700, Ed L. wrote: > > > > > > I'm also curious about the relationship of DB shared buffer cache to > > > the linux/hpux kernel caches. ...

Re: [GENERAL] Strange Slow query

2004-02-10 Thread Martijn van Oosterhout
Please run it with EXPLAIN ANALYZE and post the results. On Tue, Feb 10, 2004 at 02:30:57PM +0100, [EMAIL PROTECTED] wrote: > Hello all, > > I have the following query that has a problem i cannot resolve: > > SELECT puzzle_picking.*, tmagaztestate.mconto, tanagraficagen.araso, > tmagaztestate.m

Re: [GENERAL] DB cache size strategies

2004-02-10 Thread Ed L.
On Friday January 30 2004 6:06, Martijn van Oosterhout wrote: > On Fri, Jan 30, 2004 at 03:19:56PM -0700, Ed L. wrote: > > > > I'm also curious about the relationship of DB shared buffer cache to > > the linux/hpux kernel caches. ... > > Whenever the database needs a block not in memory it get loa
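The usual upshot of this shared-buffers vs. kernel-cache discussion, sketched as a postgresql.conf fragment. The numbers are illustrative assumptions, not values from the thread:

```
# 7.x-era rule of thumb: keep shared_buffers moderate and let the OS
# page cache do the bulk caching; tell the planner about that cache.
shared_buffers = 4096          # 8 kB pages, i.e. 32 MB
effective_cache_size = 65536   # ~512 MB of expected kernel/OS cache
```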

Re: [GENERAL] Transfer database tables to a schema

2004-02-10 Thread Peter Darley
Francisco, What I ended up doing was to dump the database, edit the 'SET search_path=public,pg_system' to be 'SET search_path=newschema,pg_system', CREATE SCHEMA newschema;, then use psql to read the dump back into the new database. It was very easy and worked flawlessly. Thanks, Peter Dar
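Peter's procedure, sketched as shell commands. Database and schema names are placeholders, and note that 7.x dumps actually write pg_catalog (not pg_system) in the search_path line. The server-side steps are shown commented since they need a running cluster; the edit itself is demonstrated on a fabricated stand-in dump line:

```shell
# Full sequence (needs a running server, shown commented):
#   pg_dump mydb > mydb.sql
#   psql -d mydb -c 'CREATE SCHEMA newschema;'
#   psql -d mydb -f mydb.sql      # after the edit below
# The edit, demonstrated on a one-line stand-in for the dump file:
printf 'SET search_path = public, pg_catalog;\n' > /tmp/mydb.sql
sed -i 's/search_path = public/search_path = newschema/' /tmp/mydb.sql
cat /tmp/mydb.sql
```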

[GENERAL] Join query on 1M row table slow

2004-02-10 Thread CSN
I have a pretty simple select query that joins a table (p) with 125K rows with another table (pc) with almost one million rows: select p.* from product_categories pc inner join products p on pc.product_id = p.id where pc.category_id = $category_id order by p.title limit 25 offset $offset The quer
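The opening query, reassembled, together with the two indexes that normally matter for this shape: one to find a category's product ids, one to support the ORDER BY. The index definitions are an editor's assumption; the truncated post does not say which indexes exist:

```sql
-- Hypothetical indexes; names and layout are placeholders.
CREATE INDEX pc_category_product_idx
    ON product_categories (category_id, product_id);
CREATE INDEX products_title_idx ON products (title);

SELECT p.*
FROM product_categories pc
INNER JOIN products p ON pc.product_id = p.id
WHERE pc.category_id = $category_id
ORDER BY p.title
LIMIT 25 OFFSET $offset;
```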

[GENERAL] PQunescapeBytea

2004-02-10 Thread Iker Arizmendi
The first part of the following piece of code should, and does, take an array of 5 characters and convert it to an escaped string of 10 characters (including the terminating byte). However, when going in the other direction PQunescapeBytea reports (in unescaped_len) that the resulting binary data h

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread John Siracusa
On 2/10/04 12:28 PM, Tom Lane wrote: > Joe Lester <[EMAIL PROTECTED]> writes: >> Would this be kern.maxfiles? > > Sounds like what you want. There's probably no need to reduce > maxfilesperproc (and thereby constrain every process not only PG > backends). You can set PG's max_files_per_process i

Re: [GENERAL] I want to use postresql for this app, but...

2004-02-10 Thread scott.marlowe
On Tue, 10 Feb 2004, Andrew Sullivan wrote: > On Tue, Feb 10, 2004 at 09:50:31AM -0700, scott.marlowe wrote: > > you would still be wholly bound by the GPL. I.e. all the code you write > > that connects to MySQL would therefore have to be GPL'd. I.e. it does > > nothing to fix the licensing pr

Re: [GENERAL] I want to use postresql for this app, but...

2004-02-10 Thread Andrew Sullivan
On Tue, Feb 10, 2004 at 12:06:43PM -0700, scott.marlowe wrote: > simple. They GPL'd their connection libs. So, if you write code that has > their connection libs in it, it's gotta be GPL'd. Yes. But you could fork from their old libs (which were, IIRC, LGPL) and work from there. Of course, yo

[GENERAL] Changing schema owner?

2004-02-10 Thread Francisco Reyes
Is there a way to change a schema owner other than dump/restore? How about changing the nspowner in pg_namespace? Will that do the trick without any negative consequences?
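On 7.4 there is no ALTER SCHEMA ... OWNER TO (that arrives in 8.0), so the direct catalog update being asked about would look roughly like this. Editing system catalogs by hand is unsupported; this is a sketch, schema and user names are placeholders, and a backup first is prudent:

```sql
UPDATE pg_namespace
SET nspowner = (SELECT usesysid FROM pg_user WHERE usename = 'newowner')
WHERE nspname = 'myschema';
```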

[GENERAL] Duplicate oid and primary key values

2004-02-10 Thread Jeff Bohmer
I have a table in a PG 7.4.1 database with 380 duplicate rows, including duplicate oid and primary key values. Looking through our backups, the duplicates did not exist before Friday, 02/06/2004. I'm assuming neither pg_dumpall nor restoring from a pg_dumpall file will eliminate such duplicat
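One way to inspect duplicates like these is to group on the primary key and list each physical row version; ctid distinguishes copies that agree on both oid and key, and xmin shows which transaction wrote each one. Table and column names below are placeholders, not from the report:

```sql
SELECT id, oid, ctid, xmin
FROM mytable
WHERE id IN (
    SELECT id FROM mytable GROUP BY id HAVING count(*) > 1
)
ORDER BY id, ctid;
```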

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Joe Lester
Would this be kern.maxfiles? There's also one called kern.maxfilesperproc. Is it OK to set these before starting the server? Or should I set them in /etc/rc? On Feb 10, 2004, at 10:04 AM, Tom Lane wrote: Also look at increasing the kernel's limit on number of open files (I remember seeing it i

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Tom Lane
Brian Hirt <[EMAIL PROTECTED]> writes: > ... after i did ulimit -n > unlimited the problem joe describes went away for me. Hmm. Postgres assumes it can use the smaller of max_files_per_process and sysconf(_SC_OPEN_MAX). From what you describe, I suspect that OSX's sysconf call ignores the "ulim

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Tom Lane
Joe Lester <[EMAIL PROTECTED]> writes: > Would this be kern.maxfiles? Sounds like what you want. There's probably no need to reduce maxfilesperproc (and thereby constrain every process not only PG backends). You can set PG's max_files_per_process instead. > Is it OK to set these before starting
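The advice in this subthread, collected into one sketch (macOS 10.3-era; the numbers are illustrative assumptions, not values from the thread):

```shell
# Raise the kernel-wide open-file cap (as root; persist it in /etc/rc):
sysctl -w kern.maxfiles=65536
# Per Tom's note, leave kern.maxfilesperproc alone so every process
# isn't constrained, and instead bound each backend in postgresql.conf:
#   max_files_per_process = 500
```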

Re: [GENERAL] I want to use postresql for this app, but...

2004-02-10 Thread Andrew Sullivan
On Tue, Feb 10, 2004 at 09:50:31AM -0700, scott.marlowe wrote: > you would still be wholly bound by the GPL. I.e. all the code you write > that connects to MySQL would therefore have to be GPL'd. I.e. it does > nothing to fix the licensing problems that have been brought up. I know that's what

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Brian Hirt
On Feb 10, 2004, at 10:57 AM, Tom Lane wrote: Hmm, I hadn't even thought about ulimit. I thought those settings were per-process, not per-user. If they are per-user they could be problematic. not sure if it's per user or per process. after i did ulimit -n unlimited the problem joe describes we

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Brian Hirt
Joe, I've run into this on my ibook too. The default number of files is set very low by default. On my system, 10.3.2, it's 256 for the postgres user. You can raise it to something higher like 2048 with the ulimit command. I have ulimit -n unlimited in my .bash_profile ibook:~ root# su -

Re: [GENERAL] Increasing Max Connections Mac OS 10.3

2004-02-10 Thread Tom Lane
Brian Hirt <[EMAIL PROTECTED]> writes: > I've run into this on my ibook too. The default number of files is > set very low by default. On my system 10.3.2, it's 256 for the > postgres user. You can raise it to something higher like 2048 with > the ulimit command. i have ulimit -n unlimite

Re: [GENERAL] Now that I have --enabled-nls...

2004-02-10 Thread Tom Lane
Francois Suter <[EMAIL PROTECTED]> writes: > If I read the manual correctly, all I have to do is set the lc_message > parameter to "de_DE" in postgresql.conf and SIGHUP the postmaster. Is > that correct? Are there other settings? I think that's the only setting, but I've never used the facility
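As a postgresql.conf fragment. Note the GUC is spelled lc_messages (plural), and the value must correspond to a locale actually installed on the server, which is an assumption here:

```
lc_messages = 'de_DE'
# then: pg_ctl reload    (sends SIGHUP to the postmaster)
```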

Re: [GENERAL] [BUGS] Probably a security bug in PostgreSQL rule system

2004-02-10 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes: > Would someone comment on this? This is fixed in CVS tip. 2004-01-13 22:39 tgl * src/backend/rewrite/: rewriteHandler.c (REL7_3_STABLE), rewriteHandler.c (REL7_4_STABLE), rewriteHandler.c: Revert ill-starred change of 13-Feb-02: