Doh! Yeah, now I remember ;)
QUERY 1:
=> explain analyze select p.* from product_categories pc
   inner join products p on pc.product_id = p.id
   where pc.category_id = 1016
   order by p.title limit 25 offset 0;
QUERY PLAN
On Tue, 10 Feb 2004, CSN wrote:
>
> I think that probably improves things (lower cost? -
> see my other post):
>
> explain select p.* from products p where p.id in (
> select product_id from product_categories pc where
> pc.category_id = 1016) order by p.title limit 25
> offset 0;
>
--- "scott.marlowe" <[EMAIL PROTECTED]> wrote:
> On Tue, 10 Feb 2004, CSN wrote:
>
> > I have a pretty simple select query that joins a
> table
> > (p) with 125K rows with another table (pc) with
> almost
> > one million rows:
> >
> > select p.*
> > from product_categories pc
> > inner join prod
On Tue, 10 Feb 2004, CSN wrote:
> > 2. Vacuum analyze the tables concerned and post the
> > output of EXPLAIN ANALYSE
> > with your query.
>
> => explain analyze;
>
> results in:
>
> ERROR: syntax error at or near ";" at character 16
No, silly. You do:
explain analyze select ... (rest of the query)
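To make the correction concrete, a minimal sketch — the table name and WHERE clause are placeholders, not from the thread:

```sql
-- EXPLAIN ANALYZE takes a complete statement, not a bare keyword;
-- "products" and "id = 42" here are illustrative only.
EXPLAIN ANALYZE
SELECT * FROM products WHERE id = 42;
```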
On Tue, 10 Feb 2004, CSN wrote:
> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id = $category_id
On Tue, 10 Feb 2004, Jeff Bohmer wrote:
>
> I have a table in a PG 7.4.1 database with 380 duplicate rows,
> including duplicate oid and primary key values. Looking through our
> backups, the duplicates did not exist before Friday, 02/06/2004. I'm
> assuming neither pg_dumpall nor restoring from a pg_dumpall file will
> eliminate such duplicates.
> 2. Vacuum analyze the tables concerned and post the
> output of EXPLAIN ANALYSE
> with your query.
=> explain analyze;
results in:
ERROR: syntax error at or near ";" at character 16
I am trying to upgrade from postgres 7.3.2 to 7.4.1. I use backups
created by 7.3.2 pg_dump. When I run pg_restore (v. 7.4.1) as a
superuser, it throws
"ERROR: permission denied for language c"
Below is the installation trace.
Thanks.
Oleg
TRACE:
[EMAIL PROTECTED] we
On Tue, 10 Feb 2004, Andrew Sullivan wrote:
> On Tue, Feb 10, 2004 at 12:06:43PM -0700, scott.marlowe wrote:
> > simple. They GPL'd their connection libs. So, if you write code that has
> > their connection libs in it, it's gotta be GPL'd.
>
> Yes. But you could fork from their old libs (which were, IIRC, LGPL)
On Tuesday 10 February 2004 19:51, CSN wrote:
> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id =
On Tue, 10 Feb 2004, CSN wrote:
> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id = $category_id
On Tuesday 10 February 2004 17:10, Jeff Bohmer wrote:
> I have a table in a PG 7.4.1 database with 380 duplicate rows,
> including duplicate oid and primary key values. Looking through our
> backups, the duplicates did not exist before Friday, 02/06/2004. I'm
> assuming neither pg_dumpall nor restoring from a pg_dumpall file will
> eliminate such duplicates.
On Tue, Feb 10, 2004 at 01:20:32PM -0700, Ed L. wrote:
> On Friday January 30 2004 6:06, Martijn van Oosterhout wrote:
> > On Fri, Jan 30, 2004 at 03:19:56PM -0700, Ed L. wrote:
> > >
> > > I'm also curious about the relationship of DB shared buffer cache to
> > > the linux/hpux kernel caches. ...
Please run it with EXPLAIN ANALYZE and post the results.
On Tue, Feb 10, 2004 at 02:30:57PM +0100, [EMAIL PROTECTED] wrote:
> Hello all,
>
> I have the following query that has a problem i cannot resolve:
>
> SELECT puzzle_picking.*, tmagaztestate.mconto, tanagraficagen.araso,
> tmagaztestate.m
On Friday January 30 2004 6:06, Martijn van Oosterhout wrote:
> On Fri, Jan 30, 2004 at 03:19:56PM -0700, Ed L. wrote:
> >
> > I'm also curious about the relationship of DB shared buffer cache to
> > the linux/hpux kernel caches. ...
>
> Whenever the database needs a block not in memory it gets loaded
Francisco,
What I ended up doing was to dump the database, edit the 'SET
search_path=public,pg_system' to be 'SET search_path=newschema,pg_system',
CREATE SCHEMA newschema;, then use psql to read the dump back into the new
database. It was very easy and worked flawlessly.
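For reference, Peter's dump-edit approach can be sketched as follows. `newschema` is his name from the message; note that pg_dump actually emits `pg_catalog` (the `pg_system` spelling above is likely a misremembering), so that is used here:

```sql
-- In the target database, create the destination schema first:
CREATE SCHEMA newschema;

-- Then, in the edited dump file, the line
--   SET search_path = public, pg_catalog;
-- becomes
--   SET search_path = newschema, pg_catalog;
-- before the dump is read back in with psql.
```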
Thanks,
Peter Dar
I have a pretty simple select query that joins a table
(p) with 125K rows with another table (pc) with almost
one million rows:
select p.*
from product_categories pc
inner join products p
on pc.product_id = p.id
where pc.category_id = $category_id
order by p.title
limit 25
offset $offset
The quer
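One common line of attack for this query shape — a hedged sketch, not something proposed in the thread — is a composite index so the category filter and the join column are satisfied by one index scan. The index name is made up; the table and column names are from the query above:

```sql
-- Hypothetical supporting index for the filter + join:
CREATE INDEX idx_pc_category_product
    ON product_categories (category_id, product_id);

-- Refresh planner statistics, then re-check the plan:
ANALYZE product_categories;
EXPLAIN ANALYZE
SELECT p.*
FROM product_categories pc
INNER JOIN products p ON pc.product_id = p.id
WHERE pc.category_id = 1016
ORDER BY p.title
LIMIT 25 OFFSET 0;
```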
The first part of the following piece of code should, and
does, take an array of 5 characters and convert it to an
escaped string of 10 characters (including the terminating
byte).
However, when going in the other direction PQunescapeBytea
reports (in unescaped_len) that the resulting binary data
h
On 2/10/04 12:28 PM, Tom Lane wrote:
> Joe Lester <[EMAIL PROTECTED]> writes:
>> Would this be kern.maxfiles?
>
> Sounds like what you want. There's probably no need to reduce
> maxfilesperproc (and thereby constrain every process not only PG
> backends). You can set PG's max_files_per_process instead.
On Tue, 10 Feb 2004, Andrew Sullivan wrote:
> On Tue, Feb 10, 2004 at 09:50:31AM -0700, scott.marlowe wrote:
> > you would still be wholly bound by the GPL. I.e. all the code you write
> > that connects to MySQL would therefore have to be GPL'd. I.e. it does
> > nothing to fix the licensing problems that have been brought up.
On Tue, Feb 10, 2004 at 12:06:43PM -0700, scott.marlowe wrote:
> simple. They GPL'd their connection libs. So, if you write code that has
> their connection libs in it, it's gotta be GPL'd.
Yes. But you could fork from their old libs (which were, IIRC, LGPL)
and work from there. Of course, yo
Is there a way to change a schema owner other than dump/restore?
How about changing the nspowner in pg_namespace? Will that do the trick
without any negative consecuences?
I have a table in a PG 7.4.1 database with 380 duplicate rows,
including duplicate oid and primary key values. Looking through our
backups, the duplicates did not exist before Friday, 02/06/2004. I'm
assuming neither pg_dumpall nor restoring from a pg_dumpall file will
eliminate such duplicates.
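A common PostgreSQL cleanup for rows duplicated right down to the primary key — a sketch, not from the thread; the table name and `id` column are hypothetical — uses the system column `ctid`, which is distinct for each physical row even when all user columns match:

```sql
-- Keep the physically-first copy of each duplicated id,
-- delete the rest ("some_table" is a placeholder name):
DELETE FROM some_table a
USING some_table b
WHERE a.id = b.id
  AND a.ctid > b.ctid;
```

A unique index added afterwards would prevent the duplicates from reappearing, though with oids and primary keys already duplicated the underlying cause (possible index or storage corruption) is worth investigating first.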
Would this be kern.maxfiles? There's also one called
kern.maxfilesperproc.
Is it OK to set these before starting the server? Or should I set them
in /etc/rc?
On Feb 10, 2004, at 10:04 AM, Tom Lane wrote:
Also look at increasing the kernel's limit on number of open files
(I remember seeing it i
Brian Hirt <[EMAIL PROTECTED]> writes:
> ... after i did ulimit -n
> unlimited the problem joe describes went away for me.
Hmm. Postgres assumes it can use the smaller of max_files_per_process
and sysconf(_SC_OPEN_MAX). From what you describe, I suspect that OSX's
sysconf call ignores the "ulimit" setting.
Joe Lester <[EMAIL PROTECTED]> writes:
> Would this be kern.maxfiles?
Sounds like what you want. There's probably no need to reduce
maxfilesperproc (and thereby constrain every process not only PG
backends). You can set PG's max_files_per_process instead.
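Tom's suggestion amounts to capping the per-backend limit in postgresql.conf rather than lowering the kernel-wide one; a config fragment, with an illustrative value:

```sql
-- In postgresql.conf (the value 500 is illustrative only):
--   max_files_per_process = 500

-- Confirm the active setting from a session:
SHOW max_files_per_process;
```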
> Is it OK to set these before starting
On Tue, Feb 10, 2004 at 09:50:31AM -0700, scott.marlowe wrote:
> you would still be wholly bound by the GPL. I.e. all the code you write
> that connects to MySQL would therefore have to be GPL'd. I.e. it does
> nothing to fix the licensing problems that have been brought up.
I know that's what
On Feb 10, 2004, at 10:57 AM, Tom Lane wrote:
Hmm, I hadn't even thought about ulimit. I thought those settings were
per-process, not per-user. If they are per-user they could be
problematic.
not sure if it's per user or per process. after i did ulimit -n
unlimited the problem joe describes went away for me.
Joe,
I've run into this on my ibook too. The default number of files is
set very low by default. On my system 10.3.2, it's 256 for the
postgres user. You can raise it to something higher like 2048 with
the ulimit command. i have ulimit -n unlimited in my .bash_profile
ibook:~ root# su -
Brian Hirt <[EMAIL PROTECTED]> writes:
> I've run into this on my ibook too. The default number of files is
> set very low by default. On my system 10.3.2, it's 256 for the
> postgres user. You can raise it to something higher like 2048 with
> the ulimit command. i have ulimit -n unlimited in my .bash_profile
Francois Suter <[EMAIL PROTECTED]> writes:
> If I read the manual correctly, all I have to do is set the lc_message
> parameter to "de_DE" in postgresql.conf and SIGHUP the postmaster. Is
> that correct? Are there other settings?
I think that's the only setting, but I've never used the facility
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Would someone comment on this?
This is fixed in CVS tip.
2004-01-13 22:39 tgl
* src/backend/rewrite/: rewriteHandler.c (REL7_3_STABLE),
rewriteHandler.c (REL7_4_STABLE), rewriteHandler.c: Revert
ill-starred change of 13-Feb-02: