Do you VACUUM and ANALYZE the database regularly?
Have you investigated whether you need to increase the statistics
on any columns? Have you tuned postgresql.conf? What version of
PostgreSQL are you using?
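If you do need more statistics on a column, the usual incantation is
something like this (table and column names are made up):
  VACUUM ANALYZE mytable;
  ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 100;  -- default target is 10
  ANALYZE mytable;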
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
documents:
http://www.powerpostgresql.com/Docs/
> BTW. If you are a SQL/python programmer in (or near) Lanarkshire,
> Scotland, we have a vacancy. ;-)
Allow telecommute from across the pond and I might be interested :-)
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
Could you post EXPLAIN ANALYZE output for both queries, one with enable_seqscan set to "on"
and one with it set to "off"? The planner might think that a
sequential scan would be faster than an index scan, and EXPLAIN
ANALYZE should tell us if that guess is correct.
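Something like this, with "..." standing in for your actual query:
  SET enable_seqscan TO on;
  EXPLAIN ANALYZE SELECT ...;
  SET enable_seqscan TO off;
  EXPLAIN ANALYZE SELECT ...;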
What version of PostgreSQL are you using?
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
                                QUERY PLAN
----------------------------------------------------------------------------
 Index Scan using foo_date_idx on foo  (cost=0.01..15.55 rows=97 width=12)
   Index Cond: ((first_date <= ('now'::text)::date) AND (last_date >= ('now'::text)::date))
(2 rows)
--
ts about the expected row count.
http://archives.postgresql.org/pgsql-hackers/2005-03/msg00146.php
http://archives.postgresql.org/pgsql-hackers/2005-03/msg00153.php
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
tatistics would be
helpful then you could set up a test server and load a copy of your
database into it. Just beware that because it's bleeding edge, it
might destroy your data and it might behave differently than released
versions.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
can are both on). But my test case
and postgresql.conf settings might be different enough from yours
to account for different behavior.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
will not return until the entire result set has been
generated.  Currently, the point at which data begins being
written to disk is controlled by the work_mem configuration
variable.
You might want to test both ways in typical and worst-case scenarios
and see how each performs.
it references --
is there a reason for making them different?
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
How many records are in the tables you're querying? Are you regularly
vacuuming and analyzing the database or the individual tables? Are
any of the tables clustered? If so, on what indexes and how often
are you re-clustering them? What version of PostgreSQL are you using?
--
Michael Fuhr
rg/pgsql-committers/2005-04/msg00163.php
http://archives.postgresql.org/pgsql-committers/2005-04/msg00168.php
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
the locks are acquired, no other transaction will be able
to access the table or the index until the transaction doing the
DROP INDEX commits or rolls back. Rolling back leaves the index
in place.
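That makes it safe to test a plan without the index, e.g. (index name made up):
  BEGIN;
  DROP INDEX foo_idx;           -- takes an exclusive lock until commit/rollback
  EXPLAIN ANALYZE SELECT ...;   -- see what plan you get without the index
  ROLLBACK;                     -- the index is still there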
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
See timeofday().
http://www.postgresql.org/docs/8.0/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT
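A quick way to see the difference (now() is frozen at transaction start,
timeofday() follows the wall clock):
  BEGIN;
  SELECT now() AS txn_start, timeofday() AS wall_clock;
  -- run some other statements, then:
  SELECT now() AS txn_start, timeofday() AS wall_clock;  -- now() unchanged, timeofday() advanced
  COMMIT;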
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
On Tue, Jun 28, 2005 at 01:54:08AM +, Karl O. Pinc wrote:
> On 06/27/2005 06:33:03 PM, Michael Fuhr wrote:
>
> >See timeofday().
>
> That only gives you the time at the start of the transaction,
> so you get no indication of how long anything in the
> transaction
might not be following this thread.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
Have you vacuumed and analyzed your tables? Could you post the
EXPLAIN ANALYZE output of a query, once with enable_seqscan turned
on and once with it turned off?
See also "Operator Classes" in the "Indexes" chapter of the
documentation:
http://www.postgresql.org/docs/8.0/static/indexes-o
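If the problem is LIKE/pattern matching in a non-C locale, the relevant
kind of index looks something like this (table and column names are made up):
  CREATE INDEX foo_name_pattern_idx ON foo (name text_pattern_ops);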
rom the table on each call, leaving a lot of dead tuples?
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
dead tuples, then how can we solve it?
If the function deletes all records from the temporary table then
you could use TRUNCATE instead of DELETE. Otherwise you could
VACUUM the table between calls to the function (you can't run VACUUM
inside a function).
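A rough sketch of the TRUNCATE variant (function and table names are made up):
  CREATE OR REPLACE FUNCTION process_batch() RETURNS void AS $$
  BEGIN
      -- ... populate and use work_table here ...
      TRUNCATE work_table;   -- instead of DELETE FROM work_table
      RETURN;
  END;
  $$ LANGUAGE plpgsql;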
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
WHERE attstattarget > 0;
See the "System Catalogs" chapter in the documentation for more
information.
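A fuller version of that query:
  SELECT attrelid::regclass AS table_name, attname, attstattarget
  FROM pg_attribute
  WHERE attstattarget > 0;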
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
See "Client Authentication" in the documentation:
http://www.postgresql.org/docs/8.0/static/client-authentication.html
If you're trying to do something else then please elaborate, as
it's not clear what you mean by "I want to ALTER that user to exclude
the password."
--
The queries you show are taking fractions of a
millisecond; the communications overhead of executing two queries
might make that technique significantly slower than just the server
execution time that EXPLAIN ANALYZE shows.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
ection_registry_id = 40105) AND (obj1 = 73582))
Total runtime: 0.495 ms
(7 rows)
--
Michael Fuhr
(this will be improved in 8.1).
--
Michael Fuhr
MIN and MAX
queries.
http://archives.postgresql.org/pgsql-committers/2005-04/msg00163.php
http://archives.postgresql.org/pgsql-committers/2005-04/msg00168.php
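In releases without that optimization the usual workaround is to rewrite
such queries by hand, e.g. (table and column names are made up; assumes
the column has no NULLs):
  SELECT col FROM tab ORDER BY col LIMIT 1;        -- instead of SELECT min(col) FROM tab
  SELECT col FROM tab ORDER BY col DESC LIMIT 1;   -- instead of SELECT max(col) FROM tab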
--
Michael Fuhr
.postgresql.org/docs/postgres/release.html#RELEASE-8-1
You could probably find more detailed discussion in the pgsql-hackers
archives.
--
Michael Fuhr
TO off;
EXPLAIN ANALYZE SELECT ...;
SET enable_seqscan TO off;
SET enable_indexscan TO on;
EXPLAIN ANALYZE SELECT ...;
You might also experiment with planner variables like effective_cache_size
and random_page_cost to see how changing them affects the query
plan. However, be careful of tuning
How much memory does the machine have? If you have enough
memory then raising those variables should result in better plans;
you might also want to experiment with random_page_cost. Be careful
not to set work_mem/sort_mem too high, though. See "Run-time
Configuration" in the "Server Run-time
?
Various tuning guides give advice on how to set the above and other
configuration variables. Here's one such guide:
http://www.powerpostgresql.com/PerfList/
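For illustration only (the right values depend entirely on your hardware
and workload), the settings in question live in postgresql.conf:
  shared_buffers = 20000           # 8kB pages here, roughly 160MB
  effective_cache_size = 100000    # rough size of the OS disk cache, in 8kB pages
  work_mem = 16384                 # kB per sort/hash operation
  random_page_cost = 3             # lower it if most data fits in cache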
--
Michael Fuhr
e_middle could use it, making the indexes
on only those columns superfluous. Or am I missing something?
--
Michael Fuhr
"EXPLAIN
ANALYZE DELETE ..."? Do you vacuum and analyze the tables regularly?
What version of PostgreSQL are you using?
--
Michael Fuhr
I inserted some random
records into a table and ran the above query.
--
Michael Fuhr
INSERT INTO test_check SELECT 1 FROM generate_series(1, 100000);
INSERT 0 100000
Time: 3492.344 ms
INSERT INTO test_fk SELECT 1 FROM generate_series(1, 100000);
INSERT 0 100000
Time: 23578.853 ms
--
Michael Fuhr
On Sat, Sep 10, 2005 at 01:03:03AM -0300, Marc G. Fournier wrote:
> On Fri, 9 Sep 2005, Michael Fuhr wrote:
> >INSERT INTO test_check SELECT 1 FROM generate_series(1, 100000);
> >INSERT 0 100000
> >Time: 3492.344 ms
> >
> >INSERT INTO test_fk SELECT 1 FROM genera
   ->  Seq Scan on content c  (cost=0.00..1.01 rows=1 width=8) (actual time=0.025..0.033 rows=1 loops=1)
   ->  Index Scan using supplier_pkey on supplier s  (cost=0.00..3.01 rows=1 width=4) (actual time=0.046..0.053 rows=1 loops=1)
         Index Cond: ("
em; I expect he'll be committing
a fix shortly.
--
Michael Fuhr
s (this should
be done when nobody else is querying the table so the statistics
represent only what you did).
You can avoid cached plans by using EXECUTE. You'll have to run
tests to see whether the potential gain is worth the overhead.
--
Michael Fuhr
ta in binary, not by automatically converting
to and from text.
--
Michael Fuhr
pecify whether the data is in
text format or binary. See the libpq documentation for details.
--
Michael Fuhr
projects/pgbuffercache/
Note that pg_buffercache shows only pages in PostgreSQL's buffer
cache; it doesn't show your operating system's cache.
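Assuming the view is installed in the current database, something like
this shows which relations occupy the most shared buffers:
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  WHERE b.reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database())
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;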
--
Michael Fuhr
Can you use
COPY, which is much more efficient for bulk loads?
http://www.postgresql.org/docs/8.0/interactive/populate.html
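For example (table, columns, and file path are made up):
  COPY mytable (col1, col2) FROM '/path/on/server/data.csv' WITH CSV;
or, from psql on the client side:
  \copy mytable (col1, col2) from 'data.csv' with csv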
--
Michael Fuhr
ight explain at least some of the performance improvement.
Maybe one of the developers will comment.
--
Michael Fuhr
On Sat, Oct 29, 2005 at 09:49:47AM -0500, Bruno Wolff III wrote:
> On Sat, Oct 29, 2005 at 08:24:32 -0600, Michael Fuhr <[EMAIL PROTECTED]>
> wrote:
> > My tests suggest that a lookup on the referring key is done only
> > if the referenced key is changed. Here's an
rsion 8.0.2.
I couldn't duplicate this in 8.0.4; I don't know if anything's
changed since 8.0.2 that would affect the query plan. Could you
post the EXPLAIN ANALYZE output? It might also be useful to see
the output with enable_seqscan disabled.
Have the tables been vacuumed and analyzed?
alled, presumably because it can't do so until
the low-level code returns control to Perl.
Is there a reason you're using alarm() in the client instead of
setting statement_timeout on the server?
--
Michael Fuhr
BI on a per-client basis,
> and that works.
You probably shouldn't set statement_timeout on a global basis
anyway, but did you reload the server after you made the change?
Setting statement_timeout in postgresql.conf and then reloading the
server works here in 8.0.4.
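You can also set it per session when testing, e.g.:
  SET statement_timeout = 5000;   -- milliseconds; 0 disables the timeout
  SHOW statement_timeout;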
--
Michael Fuhr
--
> no effect at all. I tried values clear down to 1
> millisecond, but the server never timed out for any query.
Did you use "SHOW statement_timeout" to see if the value was set
to what you wanted? Are you sure you edited the right file? As a
database superuser execute "SHOW conf
RETURN NEXT;
y := y + 1;
z := z + 1;
RETURN NEXT;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;
SELECT * FROM fooset(1, 2);
 y  | z
----+----
 20 | 10
 21 | 11
(2 rows)
--
Michael Fuhr
t works.
Did you try it? If you did and it didn't work then please post
exactly what you tried and explain what happened and how that
differed from what you'd like.
--
Michael Fuhr
If you're referring
to how I wrote two sets of assignments and RETURN NEXT statements,
you don't have to do it that way: you can use a loop, just as you
would with any other set-returning function.
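A generic sketch of the loop form (made-up function, not yours):
  CREATE OR REPLACE FUNCTION count_to(n integer) RETURNS SETOF integer AS $$
  DECLARE
      i integer;
  BEGIN
      FOR i IN 1 .. n LOOP
          RETURN NEXT i;
      END LOOP;
      RETURN;
  END;
  $$ LANGUAGE plpgsql;
  SELECT * FROM count_to(3);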
--
Michael Fuhr
PL/Tcl,
PL/Python, etc. There's even a PL/sh:
http://pgfoundry.org/projects/plsh/
--
Michael Fuhr
The cost depends on your usage patterns.  I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about 30% slower when block- and row-level statistics were enabled
versus when the statistics collector was disabled.
--
Michael Fuhr
t; the hardest if you are doing 1 row at a time operations over a
> persistent connection.
That's basically how the application I tested works: it receives
data from a stream and performs whatever insert/update/delete
statements are necessary to update the database for each chunk of
data.
On Mon, Dec 12, 2005 at 10:23:42AM -0300, Alvaro Herrera wrote:
> Michael Fuhr wrote:
> > The cost depends on your usage patterns. I did tests with one of
> > my applications and saw no significant performance difference for
> > simple selects, but a series of insert/update/
8
stats_command_string = on
stats_block_level = on
stats_row_level = on
time: 2:53.76
[Wanders off, swearing that he ran these tests before and saw higher
penalties for block- and row-level statistics.]
--
Michael Fuhr
amount of time had expired, and then continue processing the query?
That way admins could avoid the overhead of posting messages for
short-lived queries that nobody's likely to see in pg_stat_activity
anyway.
--
Michael Fuhr
Do you have indexes on the referring columns? Does this table or
any referring table have triggers? Also, are you regularly vacuuming
and analyzing your tables? Have you examined pg_locks to see if
an unacquired lock might be slowing things down?
--
Michael Fuhr
-- workaround 2: quite ugly but seems to work (at least for this
> -- one test case):
> # explain analyze
> select idopont from
> (select idopont from muvelet_vonalkod
>where muvelet=6859 order by idopont) foo
> order by idopont limit 1;
Another worka
Without knowing what problem is to be solved it's nearly
impossible to recommend an appropriate tool.
--
Michael Fuhr
On Wed, Dec 21, 2005 at 10:38:10PM +0100, Steinar H. Gunderson wrote:
> On Wed, Dec 21, 2005 at 02:24:42PM -0700, Michael Fuhr wrote:
> > The difference is clear only in specific cases; just because you
> > saw a 10x increase in some cases doesn't mean you can expect that
>
On Thu, Dec 22, 2005 at 02:08:23AM +0100, Steinar H. Gunderson wrote:
> On Wed, Dec 21, 2005 at 03:10:28PM -0700, Michael Fuhr wrote:
> >> That's funny, my biggest problems with PL/PgSQL have been (among others)
> >> exactly with large result sets...
> > Out of curi
Are your configuration settings the same in both versions? You
mentioned increasing work_mem, but what about others like
effective_cache_size, random_page_cost, and shared_buffers?
--
Michael Fuhr
ndex? Is there a specific
index you want to ignore or do you want the planner to ignore all
indexes? What problem are you trying to solve?
--
Michael Fuhr
Could you post the
query and the complete output of EXPLAIN ANALYZE (preferably without
wrapping) for both versions?
--
Michael Fuhr
On Thu, Mar 15, 2007 at 01:58:47PM +0600, Alexey Romanchuk wrote:
> is it possible to determine dead tuples size for table?
See contrib/pgstattuple.
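Once installed, something like this shows dead-tuple space (table name made up):
  SELECT * FROM pgstattuple('mytable');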
--
Michael Fuhr
, inserts, updates, and
deletes in SQL Server and about the implications on ACID of locking
hints such as NOLOCK. Then consider how MVCC handles concurrency
without blocking or the need for dirty reads.
--
Michael Fuhr
n checklist such
as this one:
http://www.powerpostgresql.com/PerfList
--
Michael Fuhr
Have you tried increasing the statistics target on l_pvcp.value?
I ran your queries against canned data in 8.2.3 and better statistics
resulted in more accurate row count estimates for this and other
parts of the plan. I don't recall if estimates for non-leading-character
matches in earlier versions can benefit from better statistics.
On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:
> Have you tried increasing the statistics target on l_pvcp.value?
> I ran your queries against canned data in 8.2.3 and better statistics
> resulted in more accurate row count estimates for this and other
> parts of the pl
STATISTICS 100;
ANALYZE event;
You can set the statistics target as high as 1000 to get more
accurate results at the cost of longer ANALYZE times.
--
Michael Fuhr
> I can set the compression settings using the "ALTER TABLE
> ALTER SET STORAGE" syntax, but is there a way I can see what this
> value is currently set to?
You could query pg_attribute.attstorage:
http://www.postgresql.org/docs/8.2/interactive/catalog-pg-attribute.html
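For example (replace mytable with your table's name):
  SELECT attname, attstorage
  FROM pg_attribute
  WHERE attrelid = 'mytable'::regclass
    AND attnum > 0
    AND NOT attisdropped;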
--
Michael Fuhr
Were these settings determined empirically? How much benefit are they
providing and how did you measure that?
enable_mergejoin = off
geqo = off
I've occasionally had to tweak planner settings but I prefer to do
so for specific queries instead of changing them server-wide.
--
Michael Fuhr
-
r this table? Is there any chance that
somebody set all of the columns' statistics targets to zero?
--
Michael Fuhr
-1. Negative attstattarget values mean to use the system
default, which you can see with:
SHOW default_statistics_target;
How exactly are you determining that no statistics are showing up
for this table? Are you running a query like the following?
SELECT *
FROM pg_stats
WHERE schemaname =
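That is, something along these lines (schema and table names are placeholders):
  SELECT *
  FROM pg_stats
  WHERE schemaname = 'public'
    AND tablename = 'mytable';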
)
order by n1.nspname,
c1.relname,
a1.attname;
--
Michael Fuhr
default_statistics_target.
http://www.postgresql.org/docs/8.2/interactive/sql-altertable.html
http://www.postgresql.org/docs/8.2/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER
--
Michael Fuhr
> When you select from the view, the join is performed even if your query doesn't return any
> columns from the outer relation. Also, if the calculation contains
> immutable functions, it's not skipped.
Don't you mean "if the calculation contains VOLATILE functions,
it's not skipped"?
--
I don't recommend disabling sequential scans permanently but doing
so can be useful when investigating why the planner thinks one plan
will be faster than another.
What are your settings for random_page_cost, effective_cache_size,
work_mem, and shared_buffers? If you're using the default
random_page_cost
qscan disabled? That'll show
whether an index or bitmap scan would be faster. And have you
verified that the join condition is correct? Should the query be
returning over a million rows?
--
Michael Fuhr
check for the year, or unless you want to match
more than one year.
--
Michael Fuhr
On Wed, Jan 11, 2006 at 12:56:55AM -0700, Michael Fuhr wrote:
> WHERE ...
> AND doy >= EXTRACT(doy FROM now() - '24 hour'::interval)
> AND doy <= EXTRACT(doy FROM now())
To work on 1 Jan this should be more like
WHERE ...
AND (doy = EXTRACT(doy FROM now()
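Presumably the condition continues along these lines, matching either of
the two day-of-year values so it still works across the year boundary
(a sketch, not the exact original):
  WHERE ...
    AND (doy = EXTRACT(doy FROM now())
         OR doy = EXTRACT(doy FROM now() - '24 hour'::interval))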
Correction.
http://archives.postgresql.org/pgsql-performance/2006-01/msg00104.php
--
Michael Fuhr
> back and see what the
> number is for that column? I want to be able to tell which columns I've
> changed the statistics on, and which ones I haven't.
pg_attribute.attstattarget
http://www.postgresql.org/docs/8.1/interactive/catalog-pg-attribute.html
--
Michael
Otherwise referential integrity checks would have to do sequential
scans on the referring table (districts). Indeed, performance
problems for exactly this reason occasionally come up in the mailing
lists.
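An index on the referring column avoids that, e.g. (column name is hypothetical):
  CREATE INDEX districts_parent_idx ON districts (parent_id);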
--
Michael Fuhr
> >Is there a way to suppress this notice when I create tables in a
> >script?
>
> Set[1] your log_min_messages to WARNING or higher[2].
Or client_min_messages, depending on where you don't want to see
the notice.
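For example:
  SET client_min_messages TO warning;
  CREATE TABLE t (id serial PRIMARY KEY);  -- the implicit sequence/index notices are now suppressed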
--
Michael Fuhr
ERT.
One complication is how to handle rules that run as part of the
insert.
http://www.postgresql.org/docs/faqs.TODO.html
--
Michael Fuhr
On Tue, Jan 17, 2006 at 09:04:53AM +, Marcos wrote:
> I already read the documentation for to use the SPI_PREPARE and
> SPI_EXEC... but sincerely I don't understand how I will use this
> resource in my statements.
What statements? What problem are you trying to solve?
--
Are you joining more
tables than that? In another recent thread varying plans were
attributed to exceeding geqo_threshold:
http://archives.postgresql.org/pgsql-performance/2006-01/msg00132.php
Does your situation look similar?
--
Michael Fuhr
EXPLAIN ANALYZE DELETE FROM foo WHERE id = 1;
                               QUERY PLAN
---------------------------------------------------------------------------
 Index Scan using foo_pkey on foo  (cost=0.00..3.92 rows=1 width=6) (actual time=0.124.
On Tue, Jan 31, 2006 at 07:29:51PM -0800, Joshua D. Drake wrote:
> > Any ideas?
>
> What does explain analyze say?
Also, have the tables been vacuumed and analyzed?
--
Michael Fuhr
Could you run VACUUM ANALYZE on all the tables, then run
the query again with EXPLAIN ANALYZE?
--
Michael Fuhr
, including the
numbered parameters ($1, $2, etc.). To execute the query do this:
EXPLAIN ANALYZE EXECUTE stmt (...);
Where "..." is the same parameter list you'd pass to the function
(the same values you used in the direct query).
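Putting it together (table, parameter types, and values are made up):
  PREPARE stmt (integer) AS
      SELECT * FROM foo WHERE id = $1;
  EXPLAIN ANALYZE EXECUTE stmt (42);
  DEALLOCATE stmt;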
If you need to re-prepare the query then run "DE
If you had prepared it the first way (parameterized)
then the EXPLAIN ANALYZE output would have shown the parameters as
$1, $2, $3, etc., which it didn't.
--
Michael Fuhr
For example:
CREATE FUNCTION fooquery(qval text) RETURNS SETOF foo AS $$
DECLARE
row foo%ROWTYPE;
query text;
BEGIN
query := 'SELECT * FROM foo WHERE val = ' || quote_literal(qval);
FOR row IN EXECUTE query LOOP
RETURN NEXT row;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql;
Is it
coincidence that the "long time" between fast results is about the
same? What's your setting? Are your test results more consistent
if you execute CHECKPOINT between them?
--
Michael Fuhr
On Mon, Mar 06, 2006 at 07:46:05PM +0100, Joost Kraaijeveld wrote:
> Michael Fuhr wrote:
> > What's your setting?
>
> Default.
Have you tweaked postgresql.conf at all? If so, what non-default
settings are you using?
> > Are your test results more consistent
>
[Please copy the mailing list on replies.]
On Mon, Mar 06, 2006 at 09:38:20PM +0100, Joost Kraaijeveld wrote:
> Michael Fuhr wrote:
> > Have you tweaked postgresql.conf at all? If so, what non-default
> > settings are you using?
>
> Yes, I have tweaked t
mprovements is making the difference.
--
Michael Fuhr
How many tables are you querying? Might you be hitting geqo_threshold
(default 12)? If so then the following thread might be helpful:
http://archives.postgresql.org/pgsql-performance/2006-01/msg00132.php
--
Michael Fuhr
Here are my test results:
http://archives.postgresql.org/pgsql-performance/2005-12/msg00307.php
Your results may vary. If you see substantially different results
then please post the particulars.
--
Michael Fuhr
but the rest of the query can be straight out of
the SQL and OGC standards.
[1] http://www.opengeospatial.org/docs/99-049.pdf
[2] http://www.postgis.org/
[3] http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html
[4] http://www.oracle.com/technology/products/spatial/index.htm