I forgot to mention that I'm running 7.4.6. The README includes the
caveat that pgmemcache is designed for use with 8.0. My instinct is to
be hesitant to use something like that in a production environment
without some confidence that people have done so with good and
reliable success.
Thomas F. O'Connell wrote:
The problem comes in importing new data into the tables for which the
counts are maintained. The current import process does some
preprocessing and then does a COPY from the filesystem to one of the
tables on which counts are maintained. This means that for each row
imported, the count trigger fires, slowing the COPY down considerably.
Thomas,
> Would it be absurd to drop the triggers during import and recreate them
> afterward and update the counts in a summary update based on
> information from the import process?
That's what I'd do.
Also, might I suggest storing the counts in memcached (see the pgmemcache
project on pgFoundry)?
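A minimal sketch of that drop/COPY/recreate pattern, with hypothetical
table, trigger, and count-table names (the summary update assumes the
import process knows how many rows it loaded):

    BEGIN;
    DROP TRIGGER item_count_trig ON items;
    COPY items FROM '/tmp/import.dat';
    -- one summary update instead of one trigger firing per row
    UPDATE row_counts SET n = n + 50000 WHERE tablename = 'items';
    CREATE TRIGGER item_count_trig AFTER INSERT OR DELETE ON items
        FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();
    COMMIT;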
On Thu, Jan 27, 2005 at 21:17:56 -0800,
Zavier Sheran <[EMAIL PROTECTED]> wrote:
> quote from manual:
> --
> Unfortunately, there is no similarly trivial query
> that can be used to improve the performance of count()
> when applied to the entire table
> --
>
> does count(1) also cause a sequential scan of the
> entire table? It should be able to just use the
> primary keys.
I'm involved in an implementation of trigger-based counting as a
substitute for count(*) in real time in an application. My
trigger-based counts seem to be working fine and dramatically improve
the performance of the display of the counts in the application layer.
The problem comes in importing new data into the tables for which the
counts are maintained.
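A minimal sketch of such a count-maintenance trigger, with hypothetical
names (plain single-quoted function body, since dollar quoting only
arrived in 8.0):

    CREATE TABLE row_counts (tablename name PRIMARY KEY, n bigint NOT NULL);

    CREATE FUNCTION maintain_row_count() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE row_counts SET n = n + 1 WHERE tablename = TG_RELNAME;
        ELSE
            UPDATE row_counts SET n = n - 1 WHERE tablename = TG_RELNAME;
        END IF;
        RETURN NULL;  -- AFTER trigger: return value is ignored
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER item_count_trig AFTER INSERT OR DELETE ON items
        FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();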
quote from manual:
--
Unfortunately, there is no similarly trivial query
that can be used to improve the performance of count()
when applied to the entire table
--
does count(1) also cause a sequential scan of the
entire table? It should be able to just use the
primary keys.
-Zavier
PFC wrote:
>
> Supposing your searches display results which are rows coming from one
> specific table, you could create a cache table:
>
>     search_id    serial primary key
>     index_n      position of this result in the global result set
>     result_id    id of the resulting row
>
> Then, making a paginated display is just a matter of selecting a slice
> of index_n for a given search_id.
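A sketch of that cache table plus a paged fetch, with hypothetical
result-table and column names:

    CREATE TABLE search_cache (
        search_id  integer,   -- one value per executed search
        index_n    integer,   -- position of this result in the global result set
        result_id  integer    -- id of the resulting row
    );

    -- results 280..299 of search 42, joined back to the source table
    SELECT r.*
      FROM results r, search_cache c
     WHERE c.search_id = 42
       AND c.index_n BETWEEN 280 AND 299
       AND r.id = c.result_id
     ORDER BY c.index_n;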
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> what about
> CREATE AGGREGATE array_accum (
> sfunc = array_append,
> basetype = anyelement,
> stype = anyarray,
> initcond = '{}'
> );
huh, that is faster. It's only 14x slower than the C implementation.
For completeness, here are
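A usage sketch of that aggregate against the design.product_department_time
table discussed elsewhere in this thread:

    SELECT product_id, array_accum(department_id) AS departments
      FROM design.product_department_time
     GROUP BY product_id;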
On Thu, 27 Jan 2005 16:05:09 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Alexandre wrote:
> > like I'm doing right now, that "de-normalizing" in an array is the way
> > to go.
>
> Only sometimes. Looping application code is another tactic. There may
> be other things to do as well that do
On Wed, Jan 19, 2005 at 10:42:26AM -0500, Alan Stange wrote:
>
> I'm fairly sure that the pi and po numbers include file IO in Solaris,
> because of the unified VM and file systems.
That's correct.
--
Andrew Sullivan | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.
Alexandre wrote:
> > >
> > Right. Expanding departments into columns is basically a dead end.
> > First of all, SQL is not really designed to do this, and second of
> > all (comments continued below)
>
> Ok, I got it. The basic message is to avoid making columns out of rows.
Yes. This is wrong
On Thu, 27 Jan 2005 13:02:48 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Alexandre wrote:
> > On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure
> > <[EMAIL PROTECTED]> wrote:
> > > Alexandre wrote:
> > > Let's start with the normalized result set.
> > >
> > > product_id | department_id
What you want is called UNNEST. It didn't get done in time for 8.0. But if
what you have is an array of integers, the int_array_enum() function I quoted
in the other post is basically that.
Yes, I used it, thanks. That's what I wanted. The query plans are good.
You don't really need the int_
Greg Stark wrote:
> test=> create or replace function array_push (anyarray, anyelement)
> returns anyarray as 'select $1 || $2' language sql immutable strict;
> CREATE FUNCTION
> test=> create aggregate array_aggregate (basetype=anyelement,
> sfunc=array_push, stype=anyarray, initcond = '{}');
>
PFC <[EMAIL PROTECTED]> writes:
> intset(x) seems to be like array[x] ?
> Actually what I want is the opposite.
What you want is called UNNEST. It didn't get done in time for 8.0. But if
what you have is an array of integers the int_array_enum() function I quoted
in the other post is basically that.
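A usage sketch (table and column names hypothetical); int_array_enum()
ships in contrib/intagg and expands an int[] into one row per element,
which is essentially UNNEST for integer arrays:

    SELECT product_id, int_array_enum(department_ids) AS department_id
      FROM product_departments;  -- hypothetical table holding an int[] per product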
On Fri, Jan 21, 2005 at 03:23:30PM -0800, Kevin Brown wrote:
> beefier CPU setup would be in order. But in my (limited) experience,
> the disk subsystem is likely to be a bottleneck long before the CPU is
> in the general case, especially these days as disk subsystems haven't
> improved in performance nearly as fast as CPUs have.
On Fri, Jan 21, 2005 at 02:00:03AM -0500, Tom Lane wrote:
> got absolutely zero flak about their use of Postgres in connection
> with the .mobi bid, after having endured very substantial bombardment
Well, "absolutely zero" is probably overstating it, but Tom is right
that PostgreSQL is not the sor
Alexandre wrote:
> On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure
> <[EMAIL PROTECTED]> wrote:
> > Alexandre wrote:
> > ok, you have a couple of different options here. The first thing
> > that jumps out at me is to use arrays to cheat.
> > Let's start with the normalized result
On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure
<[EMAIL PROTECTED]> wrote:
> Alexandre wrote:
> > Here is a normal listing of design.product_department_time:
> >  product_id | department_id | req_time
> > ------------+---------------+----------
> >         906 | A             |     3000
> >
On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
> On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc
> <[EMAIL PROTECTED]> wrote:
> > Here is a normal listing of design.product_department_time:
> >  product_id | department_id | req_time
> > ------------+---------------+----------
Steve wrote:
> Okay. Darn. While I don't write the queries for the application, I do
> interact with the company frequently. They're considering moving the
> queries into the database with PL/pgSQL. Currently their queries are
> done through ProvIV development using ODBC. Will context switching be
>
On Thu, Jan 27, 2005 at 08:56:03AM -0800, Josh Berkus wrote:
> It's well understood. See the archives of this list. The problem is that
> implementing the solution is very, very hard -- 100+ hours from a top-notch
> programmer. I'm still hoping to find a corporate sponsor for the issue ...
Steve,
> You mentioned earlier that to get around the CS bug, avoid the query
> structures which trigger it. Dumb question: How do you isolate this?
In real terms, it's generally triggered by a query joining against a very
large table requiring a seq scan.
You can probably find the "bad queries" by looking for that pattern.
Unfortunately we will need a rowtype with all the departaments:
CREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);
I think you mean CREATE TYPE departments...
Chris
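That is, a composite of several columns needs CREATE TYPE; CREATE DOMAIN
only wraps a single base type with constraints. A sketch of the corrected
statement (column list shortened):

    CREATE TYPE departments AS (a int, b int, c int, d int);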
However, it seems that integer && integer[] does not exist:
Try intset(id) && int[]. intset is an undocumented function :)
I'm going to add intset() to README.
SELECT * FROM table WHERE id && int[]
Mm.
intset(x) seems to be like array[x]?
Actually what I want is the opposite. I have a btree
On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc
<[EMAIL PROTECTED]> wrote:
> Here is a normal listing of design.product_department_time:
>  product_id | department_id | req_time
> ------------+---------------+----------
>         906 | A             |     3000
>         906 | C             |
Alexandre wrote:
> Here is a normal listing of design.product_department_time:
>  product_id | department_id | req_time
> ------------+---------------+----------
>         906 | A             |     3000
>         906 | C             |     3000
>         906 | D             |     1935
>         907 | A
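A sketch of the array-based flattening being suggested here, using the
array_accum aggregate quoted elsewhere in this thread (output column
names are hypothetical):

    SELECT product_id,
           array_accum(department_id) AS departments,
           array_accum(req_time)      AS req_times
      FROM design.product_department_time
     GROUP BY product_id;
    -- e.g. 906 | {A,C,D} | {3000,3000,1935}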
Good morning,
I have a table that links two tables and I need to flatten one.
(Please, if I'm just not in the right forum for this, tell me. This is
a performance issue for me, but you might consider it an SQL
question. Feel free to direct me to the right mailing list.)
design.products ---> d
For example, IIRC when joining an integer column with a SERIAL column,
you must explicitly cast it as an integer or the planner will not use
the indexes, right? (This is a guess, as I remember reading something
like this and thinking, "How in the world is someone supposed to
figure that out, even with the documentation?")
On Thu, 27 Jan 2005 00:02:29 -0800, Dustin Sallings wrote:
>
> On Jan 26, 2005, at 10:27, Van Ingen, Lane wrote:
>
> > Clarification: I am talking about SQL coding practices in Postgres
> > (how to write queries for best
> > results), not tuning-related considerations (although that would be
> > welcomed too).
>
> Actually, you can if you assume you can "temporarily materialize" that
> view.
>
> Then, you use a join on my_query to pull the bits you want:
>
> select [big table.details] from [big table],
>     [select * from my_query order by [something] offset 280 limit 20]
> where [join criteria between the two]
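A concrete sketch of that shape, with hypothetical table and column names:

    SELECT b.details
      FROM big_table b,
           (SELECT id FROM my_query ORDER BY created_at
             OFFSET 280 LIMIT 20) AS page
     WHERE b.id = page.id;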
On Thu, 27 Jan 2005, PFC wrote:
for example,
http://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray
see OPERATIONS and EXAMPLE USAGE:
Thanks, I already know this documentation and have used intarray
before (I find it absolutely fabulous in the right application, it has
a great potential for getting out of t
for example,
http://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray
see OPERATIONS and EXAMPLE USAGE:
Thanks, I already know this documentation and have used intarray before
(I find it absolutely fabulous in the right application, it has a great
potential for getting out of t
Oleg Bartunov writes:
> On Thu, 27 Jan 2005, PFC wrote:
>
> >
> > > > beware that SELECT * FROM table WHERE id =ANY( array ) won't use an
> > > > index,
> > > contrib/intarray provides index access to such queries.
> >
> > Can you provide an example of such a query? I've looked at the operators
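A sketch of the kind of indexed query contrib/intarray enables (table,
column, and index names hypothetical):

    -- GiST index using the intarray operator class
    CREATE INDEX t_arr_idx ON t USING gist (arr gist__int_ops);
    SELECT * FROM t WHERE arr && '{3,7,11}'::int[];  -- overlap, can use the index
    SELECT * FROM t WHERE arr @ '{3,7}'::int[];      -- contains all of {3,7}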
On Thu, 27 Jan 2005, PFC wrote:
The best part is that you can skip the LIMIT/OFFSET entirely if you
put page numbers in your cache table while inserting into it, via a
temporary sequence or something. Retrieving the results will then be very
fast, but beware that SELECT * FROM table WHERE id =ANY( array )
won't use an index.
The best part is that you can skip the LIMIT/OFFSET entirely if you
put page numbers in your cache table while inserting into it, via a
temporary sequence or something. Retrieving the results will then be
very fast, but beware that SELECT * FROM table WHERE id =ANY( array )
won't use an index.
On Thu, 27 Jan 2005, PFC wrote:
That's a really good idea, just store a list of the sorted ids in the
temp table - small amount of data for insert... I like it!
Alex Turner
NetEconomist
The best part is that you can skip the LIMIT/OFFSET entirely if you
put page numbers in your cache table while inserting into it, via a
temporary sequence or something.
Sebastian Böck wrote:
Hello,
If I have the following (simple) table layout:
create table a (
    id serial primary key
);
create table b (
    id integer references a,
    test text
);
create view c as
    select a.id, b.test from a
    left join b on a.id = b.id;
test=# EXPLAIN SELECT * from g;
test=# EX
That's a really good idea, just store a list of the sorted ids in the
temp table - small amount of data for insert... I like it!
Alex Turner
NetEconomist
The best part is that you can skip the LIMIT/OFFSET entirely if you put
page numbers in your cache table while inserting into it, via a temporary
sequence or something.
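A sketch of that numbering trick, with hypothetical names; each row gets
its global position as it enters the cache, so a page becomes a simple
range scan:

    CREATE TEMPORARY SEQUENCE page_pos;
    INSERT INTO search_cache (search_id, index_n, result_id)
    SELECT 42, nextval('page_pos'), id
      FROM (SELECT id FROM results ORDER BY score DESC) AS sorted;

    -- page 15 at 20 rows per page, no LIMIT/OFFSET needed:
    SELECT result_id FROM search_cache
     WHERE search_id = 42 AND index_n BETWEEN 281 AND 300;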
Hello,
If I have the following (simple) table layout:
create table a (
    id serial primary key
);
create table b (
    id integer references a,
    test text
);
create view c as
    select a.id, b.test from a
    left join b on a.id = b.id;
So if I do a select * from c I get the following:
test=# EXPLAIN
On Jan 26, 2005, at 10:27, Van Ingen, Lane wrote:
Clarification: I am talking about SQL coding practices in Postgres
(how to write queries for best
results), not tuning-related considerations (although that would be
welcomed too).
Your question is a bit too vague. At this point in your development
Josh,
Thanks again for the feedback.
Well, the list of ones which are good is shorter: pretty much LSI and 3Ware
(for SATA). You can suffer with Adaptec if you have to.
Good. We don't plan on using IDE, but I've pondered FireWire.
If we went with a single CPU, like Athlon/Opteron 64, would C