a) if the images are in the filesystem (and also under the web root), no
problem. Just use
b) if the images are in the database...
You set up URL rewriting in your
webserver so that an HTTP request for "images/filename.jpg" becomes
"serve_image?fname=filename.jpg", with serve_image
The parsing has turned out to be pretty intense. It takes about 10-20
minutes for any project. When we are parsing data, it really slows
down the site's response. I tested serving static webpages from
Apache, endless loops in PHP, but the choke point seems to be doing
any other query on Postgres
Here is how I implemented something very similar (in PHP):
- "Node" class and several derived classes.
- "nodes" table which contains the fields for the base class, with
node_id as a PK and a field which indicates the class
- "nodes_***" tables which contain the extra fields for each derived class
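A minimal sketch of that layout (all names here are illustrative, not from the original post): the base table carries the common fields plus a class discriminator, and each derived class gets its own extension table sharing the primary key.

```sql
-- Base class: one row per node, whatever its concrete class.
CREATE TABLE nodes (
    node_id    SERIAL PRIMARY KEY,
    node_class TEXT NOT NULL          -- discriminator: name of the derived class
);

-- One extension table per derived class, keyed on the base PK.
CREATE TABLE nodes_article (
    node_id INTEGER PRIMARY KEY REFERENCES nodes(node_id) ON DELETE CASCADE,
    title   TEXT NOT NULL,
    body    TEXT
);
```

Fetching a full object is then a join of nodes with the extension table named by node_class.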
I am doing a project using Ruby On Rails with PostgreSQL as the
database. I have not seen the term polymorphic used with databases
except with Rails so I will quickly describe it.
Instead of holding just an id as a foreign key, the record holds a
"type" field which is a string and an id. The
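In table form that looks roughly like this (a hypothetical comments table; note that PostgreSQL cannot enforce such a two-column reference with a real FOREIGN KEY constraint):

```sql
-- Rails-style polymorphic reference: a (type, id) pair instead of a plain FK.
CREATE TABLE comments (
    comment_id       SERIAL PRIMARY KEY,
    commentable_type TEXT NOT NULL,    -- e.g. 'Post' or 'Photo'
    commentable_id   INTEGER NOT NULL, -- id in the table named by the type
    body             TEXT
);
```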
Why not have a table type that writes no WAL and is truncated whenever
postgres starts? Such a table could then be put in a ramdisk tablespace
and there would be no transaction atomicity repercussions. Is there
something I'm missing?
Is this not in the TODO (if not already scheduled)?
OK, check...
test=> CREATE TABLE test (id INTEGER PRIMARY KEY);
test=> INSERT INTO test SELECT random()*5 + n*10 FROM
generate_series( 1,10 ) AS n;
test=> SELECT * FROM test LIMIT 10;
id
----
11
23
31
41
52
63
70
85
94
103
test=> ANALYZE test;
ANALYZE
On Jun 26, 2007, at 14:41 , [EMAIL PROTECTED] wrote:
and what I need is the following ("old-fashioned" ordering, that is, the
SPACE is a character whose ASCII value sorts before any other Latin
letter's!!)
AB CD
AB EF
ABAB
ABD E
What you don't want :
peufeu=> SELECT column1 FROM (VALUES
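For the record, on PostgreSQL 9.1 and later (so well after this thread) the desired byte-order sort can be requested per expression with COLLATE; a sketch:

```sql
-- "C" collation compares raw bytes, so SPACE (0x20) sorts before any letter.
SELECT v
FROM (VALUES ('ABAB'), ('AB CD'), ('ABD E'), ('AB EF')) AS t(v)
ORDER BY v COLLATE "C";
-- yields: AB CD, AB EF, ABAB, ABD E
```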
Now, I'd like to make a JOIN-ed query of that table with itself, so that
I'd get rows pairwise: every row containing data from *two* rows of the
original TEST table, so that those data come from rows with consecutive
IDs - not necessarily (depending on the TEST table contents) continuously
consecutive
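One way to sketch the pairing, assuming a table test(id, data) like the one above. On 8.4+ a window function is simplest; on older versions a correlated subquery does the same job:

```sql
-- Each row joined with the row holding the next-largest id (8.4+).
SELECT id, data,
       lead(id)   OVER (ORDER BY id) AS next_id,
       lead(data) OVER (ORDER BY id) AS next_data
FROM test;

-- Pre-8.4 equivalent: pick the successor with a correlated subquery.
SELECT a.id, a.data, b.id AS next_id, b.data AS next_data
FROM test a
JOIN test b ON b.id = (SELECT min(id) FROM test WHERE id > a.id);
```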
Check out greatest() and least()... (I think ;)
On Sat, 23 Jun 2007 18:35:36 +0200, Raymond O'Donnell <[EMAIL PROTECTED]> wrote:
On 23/06/2007 17:17, Gunther Mayer wrote:
Any way I can achieve that on one line? I.e. I want it simpler than
IF arg1 < arg2 THEN
RETURN arg1;
ELSE
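The one-liner the poster is after (least() and greatest() have been built in since 8.1):

```sql
-- RETURN least(arg1, arg2); replaces the whole IF/ELSE block.
SELECT least(3, 7), greatest(3, 7);
-- least -> 3, greatest -> 7
```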
I think it would be greatly helpful if the insert...returning could be
seen as a "select" statement and thus be usable in the way I
have described. I suspect that the insert...returning is actually
implemented as an insert plus a select.
You can create a function which does the INSERT RETURNING
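A sketch of that wrapper (table and column names are made up): an SQL function whose last statement is the INSERT ... RETURNING can be called from a plain SELECT.

```sql
CREATE TABLE items (item_id SERIAL PRIMARY KEY, name TEXT NOT NULL);

-- The function's result is the RETURNING value of its final statement.
CREATE FUNCTION insert_item(p_name TEXT) RETURNS INTEGER AS $$
    INSERT INTO items(name) VALUES (p_name) RETURNING item_id;
$$ LANGUAGE sql;

SELECT insert_item('widget');   -- usable wherever a SELECT is
```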
Hi. I have a few databases created with UNICODE encoding, and I would
like to be able to search with accent insensitivity. There's something
in Oracle (NLS_COMP, NLS_SORT) and SQL Server (don't remember) to do
this, but I found nothing in PostgreSQL, just the 'to_ascii' function,
which AF
2. Oracle, Microsoft, and IBM have a "lot" to fear in the sense of a
database like PostgreSQL. We can compete in 90-95% of cases where people
would traditionally purchase a proprietary system for many, many
thousands (if not hundreds of thousands) of dollars.
Oracle also fears benchmarks
I wouldn't call Python *strongly* typed, but I do know what you mean. I
think.
It is strongly typed (string + int = error), just not statically typed
(but you saw what I mean ;)
"PHP: very loosely typed, does whatever it wants"
yeah php got a life of its own! sure be a lazy programmer
I've seen similar negative comments before on this list about PHP, and
I'm curious to know what informs them.
Maybe the fact that, when I coded a database object/form library, it took
me LONGER to ensure that empty strings / NULLs / zero valued floats and
integers / etc. were handled correctly
Seems CPU makers currently have more transistors than they know what to
do with, so they're adding cores and doing a lot of boring stuff like
SSE2, SSE3, SSE4, etc.
SSE(n) isn't useless since it speeds up stuff like video encoding by,
say, a few times.
For databases, I'd say scatter/gather
Holy Crud!
you mean to tell me I can replace:
insert into table(string) values ('one'),('two'),('three');
select idx from table where string in ('one','two','three');
Yes.
A smart ORM library should, when you create a new database object from
form values, use INSERT RETURNING to
The chunks to be allocated are not the same size, so setting the increment
value will not help.
I'm sometimes not that subtle, so I'd just use a BIGINT sequence. Think
about the largest chunk you'll ever get (probably less than 2^30 rows, yes
?), set this sequence increment to this very high value
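A sketch of the suggestion (sequence name and increment are illustrative): each nextval() then reserves a whole private range of ids for one chunk.

```sql
-- Sequences are 64-bit, so huge increments are cheap.
CREATE SEQUENCE chunk_seq INCREMENT BY 1073741824;  -- 2^30 ids per chunk

-- One call per chunk: the returned value is the start of your range,
-- and value .. value + 2^30 - 1 is yours alone.
SELECT nextval('chunk_seq');
```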
Hello all!
I'm working on a PHP site using Postgres as a back-end. I have an
include at the top of each page that runs the pg_connect function.
I'm implementing some temporary tables, which I understand are
destroyed automatically at the end of the session. It seems to me that
- Temp t
I promised that I will get back to the group with the reason. Well, of
course it was a query :). I use a file-system-based search engine
(Lucene) that will take any desired entity saved into the database,
find the primary keys, and then do a select * from entity where id
in (:ids). If I
The DELETE should block, no?
Why ?
Foreign keys put an ON DELETE trigger on the referenced table
Foreign keys that silently, automatically DELETE records?
Did I read that correctly?
Isn't that the point of ON DELETE CASCADE ?
besides checking the referencing column on insert/
This message appears:
ERROR: relation "t_arti" does not exist
SQL state: 42P01
Context: SQL function "test1"
Why does it not work?
thanks for your help
Because plpgsql functions are compiled on first execution and all their
queries are then prepared. All tables are referenced directly in the prepared
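The usual workaround when a referenced table can disappear and reappear between calls (a temp table, or a dropped-and-recreated t_arti) is EXECUTE, which plans the query at run time instead of caching it; a plpgsql sketch:

```sql
CREATE FUNCTION test1() RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- EXECUTE re-plans every call, so the table is looked up afresh.
    EXECUTE 'SELECT count(*) FROM t_arti' INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```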
On Thu, 14 Jun 2007 00:09:20 +0200, Ron Johnson <[EMAIL PROTECTED]>
wrote:
On 06/13/07 16:59, PFC wrote:
Isn't it *supposed* to miss UNcommitted changes from other transactions?
Well, if the "uncommitted change" is a DELETE of the row that
allowed the constraint check
Isn't it *supposed* to miss UNcommitted changes from other transactions?
Well, if the "uncommitted change" is a DELETE of the row that allowed the
constraint check to pass, then when this delete is committed, your data is
no longer consistent.
Consider this :
CREATE TABLE A( attribu
If you try it with max() you'd likely get less-surprising answers.
So it was in fact the type conversions that got timed.
Damn. I got outsmarted XDD
Rewind :
CREATE TEMPORARY TABLE test AS SELECT a::FLOAT AS f,
(a::NUMERIC)*100 AS n, a::INTEGER AS i, a::
Aaaargh! No, it doesn't. It means NULL. Nothing else.
Well, x = UNKNOWN doesn't make any sense... the answer is UNKNOWN.
x IS UNKNOWN does make sense, the answer is true or false. Replace
UNKNOWN with NULL...
Actually it means what the DBA wants it to mean (which opens the door
This is a 32 bit CPU by the way.
Consider this :
- There are 100K rows
- The CPU executes about 3 billion instructions per second if everything
is optimum
- "SELECT sum(n) FROM test", takes, say 60 ms
This gives about 1800 CPU ops per row.
A Float addition ve
It is. But why do you care? You either have the correctness that
NUMERIC gives, or you don't.
I suspect it's still useful to know what order of magnitude slower it
is. After all if it is 1000x slower (not saying it is), some people may
decide it's not worth it or roll their own.
Any hint
I believe you have made quite a good description of what happens.
Index-building isn't very CPU-intensive for integers (geometrics and
tsearch is another matter, of course), so building all indexes of a large
table in one pass is a possibility that works now, provided you issue all
Yeah, it is awful ;^) However the existing system is equally awful
because there is no way to enter NULL!
Consider this form :
First name :Edgar
Middle name : J.
Last name : Hoover
Now, if someone has no middle name, like "John Smith", should we use NULL
or "" for the middle name?
1-max amount of available storage data.
How many hard disks can you buy ?
2-Clustering support.
Please be more precise.
3-price.
Free.
4-speed.
Proportional to the expertise of the DBA.
Postgres can be extremely fast if used correctly, it can totall
On Sun, 03 Jun 2007 01:39:11 +0200, Tom Lane <[EMAIL PROTECTED]> wrote:
"Jaime Casanova" <[EMAIL PROTECTED]> writes:
Tom's example seems to show that mysql inserts a commit immediately
after a DDL, but this one example shows the thing is worse than that.
Actually, I think their behavior is ju
This is what happens in every RDBMS. What's so special about postgres
then?
mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE ble ( id INTEGER ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.09 sec)
mysql> INSERT INTO ble VALUES (1),(2),(3);
Query OK, 3 rows affected
MySQL supports procedures and functions.
Functions can return results but cannot update the database.
Procedures can update the database but cannot return results.
However :
- a function can call a procedure that updates the database !
- a proced
Have you looked at raidb? http://c-jdbc.objectweb.org. Narrow niche,
but if it happens to be the one you are in, then it's an option. I took
a quick look at the user's page, and both of them were using PostgreSQL.
I just love those Java guys.
The world starts and ends wit
On Sat, 02 Jun 2007 00:14:28 +0200, Ron Johnson <[EMAIL PROTECTED]>
wrote:
On 06/01/07 16:38, PFC wrote:
Will the synchronized seq scan patch be able to do this by
issuing all the CREATE INDEX commands at the same time from several
different database connections ?
No,
Will the synchronized seq scan patch be able to do this by issuing
all the CREATE INDEX commands at the same time from several different
database connections ?
No, but it could someday.
Actually I tested, it does it right now, albeit unconsciously (pg doesn't
do anything to synchron
In recent versions of postgres (8.2+), do:
INSERT INTO blah (...) VALUES (...) RETURNING blah_id;
No need to worry about sequences or anything. It inserts, then it returns
the inserted id, as the name says.
Very much unlike MySQL where insert_id() returns the id of the last
insert, even if it
On Thu, 31 May 2007 23:36:32 +0200, PFC <[EMAIL PROTECTED]> wrote:
On Thu, 31 May 2007 22:20:09 +0200, Vivek Khera <[EMAIL PROTECTED]> wrote:
On May 25, 2007, at 5:28 PM, Tom Lane wrote:
That's true at the level of DDL operations, but AFAIK we could
parallelize table-
On Thu, 31 May 2007 22:20:09 +0200, Vivek Khera <[EMAIL PROTECTED]> wrote:
On May 25, 2007, at 5:28 PM, Tom Lane wrote:
That's true at the level of DDL operations, but AFAIK we could
parallelize table-loading and index-creation steps pretty effectively
--- and that's where all the time goes.
In short, you have to update every instance of the key, not only in the
database, but in every application and even in every other representation
in the real world. That could include changing people's bookmarks, notes
in PDAs, even paper reports sitting on people's desks -- a tall order fo
On Wed, 30 May 2007 05:24:57 +0200, Erwin Brandstetter
<[EMAIL PROTECTED]> wrote:
On May 30, 2:11 am, Rodrigo De León <[EMAIL PROTECTED]> wrote:
(... useful code example snipped)
Now see:
http://www.postgresql.org/docs/8.2/static/plpgsql-control-structures
Thanks for your hints, Rodri
again thanks - bit of a noob question I know, but it's good to
learn :-)
Well not really since the answer is quite subtle...
You have two columns A and B.
Say you have index on A, and index on B.
These queries will make direct use of the index :
A=... o
SELECT 'PV.LINEITEM:' || partnum || ',' || round(onhand) || ',' ||
round(qm5) || ',' || round(lsm4) || ',' || ',' || ',' || round(onorder)
|| ',' || ',' || ',' || binone || ',' || ',' || round(backorderqty) ||
',' || ',' || round(onhold) || ',' || ',' || ',' || ',' || ',' || ',' ||
',' || r
2. Is there any performance or other advantage to using PL/pgSQL over
PL/Perl or PL/Python?
Yes, if you want to loop over large amounts of data (FOR row IN SELECT)
plpgsql will be faster since it does not have to convert the data from
postgres to python/perl format.
---
SELECT * from my_table where id_1 = x and id_2 = y;
Neither id_1 or id_2 or the combination of them is unique. I expect
this table to become large over time.
Create an index on (id_1, id_2), or (id_2,id_1).
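That is, a composite index whose leading column matches the equality tests:

```sql
CREATE INDEX my_table_id1_id2_idx ON my_table (id_1, id_2);

-- Served by the index:
SELECT * FROM my_table WHERE id_1 = 1 AND id_2 = 2;
-- Also served (leading column alone):
SELECT * FROM my_table WHERE id_1 = 1;
```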
Here is what I would do:
- Have a table "objects", which contains locations, shows, persons, well
anything you want to relate to the forums in question.
- Simply add an object_id to your forums table.
Now, how do you distinguish between different types of objects in your
ob
MySQL has a related problem, which is that they have embedded IPv4
addressing rather deeply into their client authentication logic (by
making userids be [EMAIL PROTECTED] not just a username). This is probably why
they still haven't got IPv6 support:
http://bugs.mysql.com/bug.php?id=8836
I won
Would it be possible to look at a much larger number of samples during
analyze,
then look at the variation in those to generate a reasonable number of
pg_statistic "samples" to represent our estimate of the actual
distribution?
More datapoints for tables where the planner might benefit from
Indeed. Wouldn't it be a cool feature to persist transaction states
across connections so that a new connection could get access to a sub-
transaction state? That way, you could make your schema changes and
test them with any number of test clients (which designate the state
to connect with) an
Flickr uses InnoDB, by the way.
On Thu, 24 May 2007 18:07:21 +0200, Joshua D. Drake <[EMAIL PROTECTED]>
wrote:
Dave Page wrote:
problem with your setup. Granted, MySQL is a pretty bad database, but
it's not *that* bad -- your example implies that heavily MyISAM-based
(you don't
What version of that pathetic RDBMS is this?
MySQL 5.0.40, on gentoo Linux, Core 2 Duo.
The table in question takes about 100 inserts/deletes and 600 selects per
second.
MyISAM isn't able to finish the benchmark. Actually, I have to run REPAIR
TABLE every 20 minutes, since it c
Some big sites do of course juggle performance vs in-database run-time
checks, but the statements as typically presented by MySQL partisans,
Live from the front :
This freshly created database has had to endure a multithreaded query
assault for about 2 hours.
It gave up.
"*Really* big sites don't ever have referential
integrity. Or if the few spots they do (like with
financial transactions) it's implemented on the
application level (via, say, optimistic locking), never the
database level."
Sure, but in the forum benchmark I just did, when using MyISAM, with
On Wed, 23 May 2007 14:41:00 +0200, Rick Schumeyer <[EMAIL PROTECTED]>
wrote:
Actually, the situation is slightly more complicated. It's more like I
have tables A1, A2, and A3 each of which must have a corresponding row
in B. So each of A1, A2 and A3 has a BEFORE INSERT trigger that creat
I'm using Ruby on Rails and have two tables, A and B. Every row in A
needs a corresponding row in B. A also contains a FK pointing to B.
I created a before insert trigger on A that inserts a new row in B, and
sets the FK in A. This seems to be running fine.
So, A has a b_id fie
I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
PERCENT' after SELECT in the query. The Enterprise Manager does not
indicate how many rows come back. I save it as a VIEW in MS SQL and do a
'select count(*)...' and, yes, it comes back 42164877 records.
No, it comes back 1
The other option is to have a column on the mailbox table to flag that
it is a default_mailbox -- but then I'd have to ensure there's only
one row for each "person" flagged that way.
- is_default BOOL column in mailbox table
- conditional index :
UNIQUE INDEX ON mailboxes( owner ) WHERE is_default
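Spelled out (names guessed from the thread): the partial unique index only constrains rows where the flag is set, so each owner gets at most one default mailbox but any number of non-defaults.

```sql
CREATE TABLE mailboxes (
    mailbox_id SERIAL PRIMARY KEY,
    owner      INTEGER NOT NULL,
    is_default BOOLEAN NOT NULL DEFAULT false
);

-- Uniqueness applies only to rows where is_default is true.
CREATE UNIQUE INDEX one_default_per_owner
    ON mailboxes (owner)
    WHERE is_default;
```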
Which list is the most appropriate for proposing features and ideas for
postgres ?
On Wed, 16 May 2007 06:09:15 +0200, camb <[EMAIL PROTECTED]> wrote:
Hey all,
I'm sorta new to PG and am just wondering, when I create a foreign key
is it automatically set as an indexed column or do I need to also set
that manually?
Primary key creates unique index.
Foreign keys do not automatically create an index on the referencing column.
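In other words (a sketch with made-up names): the unique index on the referenced side comes for free with the PK, but the referencing column needs an explicit CREATE INDEX if you want fast joins and fast cascading deletes.

```sql
CREATE TABLE customers (customer_id SERIAL PRIMARY KEY);  -- index: automatic

CREATE TABLE orders (
    order_id    SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id)
);

-- Not created automatically; add it yourself.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);
```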
I have a small database (PgSQL 8.0, database encoding UTF8) that folks
are inserting into via a web form. The form itself is declared
ISO-8859-1 and the prior to inserting any data, pg_client_encoding is
set to LATIN1.
Wouldn't it be simpler to have the browser submit the form in
Aren't there PCI heartbeat cards that are independent of the load on
the host machine?
But, if the machine is fork-bombed, or drowning in swap, or generally
slowly committing suicide, it's not shall we say "available" anymore, so
you might want to finish it off...
--
I can't wrap BEGIN/COMMIT around the INSERT in the trigger. Is there
another way of achieving this?
- Savepoints (won't work with your trigger approach)
- dblink would allow you to open another connection concurrently
Has anybody pondered this and come up with anything?
Well, I've done e-commerce stuff although it was with banks, not paypal.
There are a few levels of safety :
- Level 0 : Total stupidity (osCommerce)
Bank redirects to your site. Order is marked as paid with "paid=1" in the
URL
We use bitfields on our large user table. It is becoming unworkable to
scan for matches, since overall most people have very few selections
made.
We are moving it to a model like your favorite_colors table which just
links the option and the user.
We find that doing joins on large tables
Thus, if there are a whole bunch of columns on each table, the data in
those extra columns (e.g. - all columns aside from "id", the one that
was asked for in the result set) will indeed be drawn into memory.
Yeah, that's what I meant ;)
All the columns are loaded (except the TOASTed
SELECT o.id
FROM order o
JOIN customer c on o.customer = c.id
Does that bring into memory all columns from both order and customer?
Maybe that's not a good example due to indexes.
No, it just pulls the columns you ask from the table, nothing less,
nothing more.
Splitting tab
Say, if you first SELECT fname, lname FROM user_table;
and then you issue SELECT * FROM user_table; -- the
second select will be returned from buffer cache -- since
all rows are already in the cache.
...Unless your table contains some large TEXT columns that have been
stored out of line (TOASTed
Say I have a "user" table that has first, last, email, password, and
last_accessed columns. This user table will be accessed often. (It's
not really "user", but that's not important in this discussion)
Say that there's also about 10 columns of settings or preferences for
each user. Are there
On Fri, 11 May 2007 04:24:55 +0200, Tom Lane <[EMAIL PROTECTED]> wrote:
"Leif B. Kristensen" <[EMAIL PROTECTED]> writes:
Would it be reasonable to suggest that later versions of PostgreSQL
could examine if a function changes data, and quietly marks a function
as 'stable' if it doesn't?
My ins
This may be a question for -hackers, but I don't like disturbing them
unnecessarily.
I've been having a look at memcached. I would like to ask, is there any
reason that, theoretically, a similar caching system could be built
right into the db serving daemon?
I.e., the hash tables and libevent co
I have written a little PHP+postgres forum for benchmarking purposes, to
see how fast postgres could go.
It has basic forum features, like forums (duh), topics, posting,
pagination, watching topics, topic & post count, display newest topic and
post in topic & forum pages, templates, topic
Thanks for a good answer, I'll try to find a workaround. The number of
data_loggers will change, but not too frequently. I was actually hoping
to make a view showing the latest data for each logger, maybe I can
manage that with a stored procedure thingy...
- Create a table which contains your
2b- LARGE UPS because HDs are the components that have the highest power
consumption (a 700VA UPS gives me about 10-12 minutes on a machine
with an XP2200+, 1GB RAM and a 40GB HD; however this falls to
less than 25 seconds with seven HDs! all ATA),
I got my hands on a (free)
inside psql, type :
\i filename
On Sat, 11 Mar 2006 11:29:20 +0100, AKHILESH GUPTA
<[EMAIL PROTECTED]> wrote:
Hi All.!
I just want to know one thing that is it possible with PGSQL that,
if I want to insert and execute a query from an external text file
instead of
giving it at t
You need a newer pgadmin --- pg_database.datpath went away in 8.0.
I'm installing the new version. Thanks.
29 +0200, Tom Lane <[EMAIL PROTECTED]> wrote:
PFC <[EMAIL PROTECTED]> writes:
I have no idea what to type in gdb to get the trace, though
What I usually do is
- start a psql session
- in another window, find out the PID of the backend attached
to the psql session, and do
I'm going to look into it.
Thanks for your help and sorry about bothering you!
On Mon, 11 Apr 2005 18:42:29 +0200, Tom Lane <[EMAIL PROTECTED]> wrote:
PFC <[EMAIL PROTECTED]> writes:
I have no idea what to type in gdb to get the trace, though
What I usually do is
If you want, you can try building with --enable-debug and getting a gdb
traceback from the call to errfinish(). That would at least give us
some clue where in the rule text it's getting confused.
Is this :
./configure --enable-debug
?
I have no idea what to type in gdb to get
You might try comparing the results of
select ev_action from pg_rewrite where ev_class = 'pg_user'::regclass;
from the two builds.
regards, tom lane
Well...
I'll spare your eyes and tell you right away that the results are
identical... they're at the bottom of the
Hmm; the only occurrence of that string in the sources is in
nodes/read.c, which AFAIK would never be invoked for a query entered
straight from the client. What could trigger it would be trying to
read a rule that is misformatted for some reason. Maybe you have
a damaged pg_user view definition
Hello,
Just installed pg 8.0.1-r3 and now phppgadmin and pgadmin3 die with a
cryptic error message.
I've traced it to the following query to get information about the
database :
SELECT pdb.datname AS datname, pu.usename AS datowner,
pg_encoding_to_char(encoding) AS datencoding,
(SE
To use this to prevent simultaneous inserts of the same data (for
example if two employees try to insert the same contact into the DB), I
suppose you could use a constraint (before insert) that checks that
there is no data matching the md5 checksum, right?
CREATE TABLE blah
(
mymd5
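Rather than a trigger-style pre-insert check (which would race under concurrency), a UNIQUE constraint on the checksum column lets the database reject the duplicate atomically; a sketch along the lines of the quoted table (column names invented beyond "mymd5"):

```sql
CREATE TABLE blah (
    blah_id SERIAL PRIMARY KEY,
    data    TEXT NOT NULL,
    mymd5   CHAR(32) NOT NULL UNIQUE   -- md5(data); duplicates violate this
);

INSERT INTO blah (data, mymd5) VALUES ('some contact', md5('some contact'));
-- A second identical insert fails with a unique_violation, even if two
-- sessions race: no explicit check needed before the insert.
```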
This I don't get. Why is an index scan not used? Isn't an index supposed
to help when using > < >= <= too?
It should !
Explain Analyze Select count(smiles) from structure where _c >= 30
Aggregate (cost=196033.74..196033.74 rows=1 width=32) (actual
time=42133.432..42133.434 rows=1
loops=
-----Original Message-----
From: PFC [mailto:[EMAIL PROTECTED]
Sent: Monday, January 31, 2005 5:16 PM
To: Együd Csaba; 'Sven Willenberger'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Howto determin the number of elemnts of an array
icount( array )
Hi Sven,
it would be better for m
icount( array )
Hi Sven,
it would be better for me if it returned the number of elements as an
integer even if the array is empty (in this case it returns with NULL).
No matter, this can easily be worked around, but it would have been more
comfortable.
Thank you very much.
-- Csaba Együd
postgresql-8.0.0beta4 $ time make -j 5
... lots of output ...
real    0m41.274s
user    1m36.315s
sys     0m15.451s
Yikes.
I have never heard of Propolice SSP. What is it? Any relation to the
honey 'propolis'? Just kidding.
Max
The name says little although I like it.
http://www.gentoo.org/proj/en/hardened/
I was out of date -- Propolice has been renamed PaX.
The hardened project has many parts, you should r
Thanks for reminding me that 64-bit translates to: recompile everything
you need!
I think this is exactly the choice of configuration we are going to make.
Someone just reminded me that windows and linux come down to the same
performance, but that the real overhead is on maintenance. It's true th
First you should use an ltree type for the uri field:
- you write it foo.bar instead of /foo/bar
- there are operators on ltree types to express "is parent of", "is
child of"
- these operators are indexed
Check the readme :
http://www.sai.msu.su/~megera/postgres/gist/ltree/
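A small sketch of what that looks like once contrib/ltree is installed (table name invented):

```sql
CREATE TABLE pages (
    uri ltree PRIMARY KEY           -- 'foo.bar' instead of '/foo/bar'
);

-- GiST index backs the hierarchy operators.
CREATE INDEX pages_uri_gist ON pages USING gist (uri);

-- All descendants of foo (uri <@ 'foo' means "uri is under foo"):
SELECT uri FROM pages WHERE uri <@ 'foo';
```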
If you h
This question pops about once a day...
can't the list server grep for "auto increment" and send a link to the
docs itself ;)
hi
How to get the current value of an auto-incrementing number generated
using CREATE SEQUENCE
seqname START 1
Is there any way and if yes what is it. Can we run a
I'm sorry, I forgot to say I had the bug with 8rc3, then installed 8.0.0
and it is still crashing.
On Thu, 27 Jan 2005 10:52:37 -0700, Michael Fuhr <[EMAIL PROTECTED]> wrote:
[I've Cc'ed pgsql-bugs and set the Reply-To header to that list.]
On Thu, Jan 27, 2005 at 05:26:
I realize that using OR will not result in an index scan.
I will never be interested in a OR condition for the kinds
of searches I use. In my Select statements, I always name
every column of the multi-column index in same order that
they were named when creating the index. I always use
the >= co
It seems that contrib/intagg crashes my server :
-
select int_agg_final_array(1);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the
Thanks, I was hoping there was some way to avoid it.
You have to write the subselect twice if you want to match up different
rows.
In some cases it might be worth making a copy in a temp table. For simple
subselects where there is an index on id, leaving it as is should work
fine.
---
I wonder if it makes sense to implement the units as separate data types?
Because that's what they are, really.
So "ampere" would be a data type which aliases one of the numeric data
types (depending on what precision range you need), but does not allow
itself to be added to anything other than "ampere". Any
I think you missed an important "feature" of multicolumn indexes, that
you better not use 'OR' in your expressions. You seem to want only to use
'>=' so this should be OK.
Suppose you have 3 columns a,z,e containing values linearly distributed
between ...
select min(a),max(a),min(z),max(z
Check out ltree
http://www.sai.msu.su/~megera/postgres/gist/ltree/
On Tue, 25 Jan 2005 22:03:58 +0100, tmp <[EMAIL PROTECTED]> wrote:
I don't think anybody has written the syntactic sugar, but someone did
write a function that provides equivalent output.
I think it is important that the functionality
If you allow multiplication and division, you'd need to store not only
one type, but an expression like m.s^-2, etc. You'll end up with something
like Maple. Isn't there some free open source algebraic computation
toolkit with equations and units somewhere ?
It's a bit more complicated than that as there are also locking issues,
like what if other processes insert rows while some others are being
deleted, really the whole thing isn't trivial.
Since postgres already incorporates code to check foreign keys more
efficiently (when doing alter table .
Make a plpgsql function which will iterate over the rows on which the
moving average is to be done (FOR row IN SELECT), of course use the
correct order, then use an array as a FIFO: add a row to the moving
average and push it, pop the old one and subtract it.
Roundoff errors will bite your
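A sketch of that FIFO loop (table, column, and window size are illustrative; array_length needs 8.4+, older versions can use array_upper):

```sql
-- Moving average over column n of table test, ordered by id.
CREATE FUNCTION moving_avg(win INTEGER) RETURNS SETOF numeric AS $$
DECLARE
    fifo  numeric[] := '{}';
    total numeric   := 0;
    r     RECORD;
BEGIN
    FOR r IN SELECT n FROM test ORDER BY id LOOP
        fifo  := fifo || r.n;            -- push the new value
        total := total + r.n;
        IF array_length(fifo, 1) > win THEN
            total := total - fifo[1];    -- pop the oldest and subtract it
            fifo  := fifo[2:array_length(fifo, 1)];
        END IF;
        RETURN NEXT total / array_length(fifo, 1);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```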