Hi,
I'm reading the date/time data types documentation, and I'm a little
surprised by this piece of it:
Note: When timestamp values are stored as double precision
floating-point numbers (currently the default), the effective limit of
precision may be less than 6. timestamp values are
Hi,
If I have a PostgreSQL table with records and logical indexes already
created, and I use COPY to load additional data, does COPY update
the indexes during the load, afterwards, or not at all?
Benjamin
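(For the archives: as I understand it, COPY maintains existing indexes row by row
as it loads, the same as INSERT does. A minimal sketch with made-up table and
file names:

  CREATE TABLE items (sku text, price numeric);
  CREATE INDEX items_sku_idx ON items (sku);
  -- the index is kept up to date while the rows are copied in
  COPY items (sku, price) FROM '/tmp/more_items.csv' WITH CSV;

For very large loads, dropping the index and recreating it afterwards is often faster.)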
We use an application that generates 4800 points for the graph of a waveform.
We capture this data and display it to the user. Now we want to save all this
information to a database. I have tried to create a record for each point,
but insertion/retrieval is slow. I thought that maybe I could s
On Wednesday 28 March 2007 16:43, Dann Corbit wrote:
> Link to documentation:
> http://www.postgresql.org/docs/8.2/interactive/datatype-geometric.html#AEN5367
Yep. The polygon type is what I was needing. I tested it out by saving data
from nine graphs all at once. That's about 24000 points. Po
Good Evening Tom-
Probably the one good reason to use Postgres is the ability to install PostGIS.
http://postgis.refractions.net/docs/ch04.html#id2673328 supports these simple
curve geometries:
CompoundCurve
CurvePolygon
MultiCurve
Does this help?
Martin
--
Link to documentation:
http://www.postgresql.org/docs/8.2/interactive/datatype-geometric.html#AEN5367
> -----Original Message-----
> From: Dann Corbit
> Sent: Wednesday, March 28, 2007 4:42 PM
> To: 'Tom Brown'; pgsql-general@postgresql.org
> Subject: RE: [GENERAL] inserting 4800 records at a tim
PostgreSQL has both array and point data types.
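(A minimal sketch, for the archives, of storing one waveform per row with the
built-in polygon type; the table and column names are made up and the points are
placeholders:

  CREATE TABLE waveform (
      id      serial PRIMARY KEY,
      samples polygon
  );
  INSERT INTO waveform (samples)
      VALUES ('((0,0),(1,2),(2,1),(3,3))'::polygon);

A float8[] array of y-values would also work if the x spacing is implicit.)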
Hi,
We use an application that generates 4800 points for the graph of a waveform.
We capture this data and display it to the user. Now we want to save all this
information to a database. I have tried to create a record for each point,
but insertion/retrieval is slow. I thought that maybe I coul
On Wed, 28 Mar 2007, Carlos H. Reimer wrote:
> Hi,
>
> We have a PostgreSQL 8.0.6 cluster configured with lc_collate=pt_BR.UTF-8
> and when we run the following SELECT:
> SELECT substr(nomerazao,1,4),
> ascii(substr(nomerazao,1,1)),
> ascii(substr(nomerazao,2,1))
> from spunico.unico order by nome
Carlos H. Reimer wrote:
> Are not the lines out of order
No.
> or is it a normal behaviour for a
> server with lc_collate=pt_BR.UTF-8?
Yes.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
On 28/03/2007 22:52, Carlos H. Reimer wrote:
SELECT substr(nomerazao,1,4),
ascii(substr(nomerazao,1,1)),
ascii(substr(nomerazao,2,1))
from spunico.unico order by nomerazao;
You need to add an alias to the returned column by which you'd like to
order the result: your query is ordering the rows by the full nomerazao
value, not by the expressions you are displaying.
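(A minimal sketch of that aliasing suggestion, for the archives; it assumes the
intent is to order by the displayed four-character substring:

  SELECT substr(nomerazao,1,4) AS nome4,
         ascii(substr(nomerazao,1,1)) AS a1,
         ascii(substr(nomerazao,2,1)) AS a2
  FROM spunico.unico
  ORDER BY nome4;
)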
Hi,
We have a PostgreSQL 8.0.6 cluster configured with lc_collate=pt_BR.UTF-8
and when we run the following SELECT:
SELECT substr(nomerazao,1,4),
ascii(substr(nomerazao,1,1)),
ascii(substr(nomerazao,2,1))
from spunico.unico order by nomerazao;
is returning:
 substr | ascii | ascii
--------+-------+-------
Dmitry Koterov wrote:
And the general question - why SELECT queries slowdown during the
CHECKPOINT? I thought that Postgres is a version-based database and read
queries are never blocked...
Because the disk is busy.
Martijn van Oosterhout wrote:
On Wed, Mar 28, 2007 at 02:16:55PM -0400, Joseph S wrote:
I thought the performance penalty came from making the executable
bigger, which means the executor has to skip over the debug symbols
somehow, which eats up cpu. If the symbols are in a different file this
On Wed, Mar 28, 2007 at 02:16:55PM -0400, Joseph S wrote:
> I thought the performance penalty came from making the executable
> bigger, which means the executor has to skip over the debug symbols
> somehow, which eats up cpu. If the symbols are in a different file this
> wouldn't apply.
When r
Tom Lane wrote:
Joseph Shraibman writes:
Well 1) I'd like to avoid the performance penalty for including debug
symbols
There is none. If there were, it would certainly apply to debuginfo
as well --- debuginfo is merely moving the symbols over to a different
file after the compiler finishes.
The audit finished up overnight, but I'll try running a test tonight
so I can get a better idea of what is going on.
On Mar 28, 2007, at 3:44 AM, Richard Huxton wrote:
What's the limiting factor? Disk? CPU?
I'm imagining disk I/O. It's a simple query
Any chance of seeing the queries th
On Wed, 28.03.2007, at 19:02:40 +0200, Alain Roger wrote:
> Hi,
>
> I would like to display the size of each table in my PHP application.
> So, how can I get the table size (for example, in MB)?
There are some functions, read:
http://www.postgresql.org/docs/current/interactive/functio
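(The functions in question, in a minimal sketch; pg_size_pretty and
pg_total_relation_size are available from 8.1 onwards, which matches the 8.1.4
install mentioned below:

  SELECT relname,
         pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY pg_total_relation_size(oid) DESC;
)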
Hi,
I would like to display the size of each table in my PHP application.
So, how can I get the table size (for example, in MB)?
thanks a lot,
--
Alain
Windows XP SP2
PostgreSQL 8.1.4
Apache 2.0.58
PHP 5
Mike Charnoky <[EMAIL PROTECTED]> writes:
> First, a question: For a PG8.1 database, is it preferable to use the new
> "alter table disable|enable trigger" command as opposed to the old
> method of setting pg_class.reltriggers = 0?
Very much so --- manual manipulation of reltriggers has never been
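(For reference, a minimal sketch of the 8.1-style command; the table name is made up:

  ALTER TABLE orders DISABLE TRIGGER ALL;   -- or name a single trigger instead of ALL
  -- ... bulk load here ...
  ALTER TABLE orders ENABLE TRIGGER ALL;
)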
"David Jaquay" <[EMAIL PROTECTED]> writes:
> How exactly can you control the join order?
http://www.postgresql.org/docs/8.2/static/explicit-joins.html
(adjust link for your PG version, as this material changes from release
to release)
> I have cases where the
> explain data indicates that the pl
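(A minimal sketch of what that page describes, with hypothetical tables a, b and c:

  SET join_collapse_limit = 1;   -- the planner then follows the JOIN order as written
  SELECT *
  FROM a
  JOIN b ON b.a_id = a.id
  JOIN c ON c.b_id = b.id;
)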
A really long time ago, on 2/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Usually what you want to think about for something
like this is plan optimization, ie, what order are the tables joined in
and with what join methods.
How exactly can you control the join order? I have cases where the
expl
Hi, I'm using PostgreSQL 8.1.4 and am having some problems with the new
disable/enable trigger command.
First, a question: For a PG8.1 database, is it preferable to use the new
"alter table disable|enable trigger" command as opposed to the old
method of setting pg_class.reltriggers = 0?
I'm assum
Joseph Shraibman writes:
> Well 1) I'd like to avoid the performance penalty for including debug
> symbols
There is none. If there were, it would certainly apply to debuginfo
as well --- debuginfo is merely moving the symbols over to a different
file after the compiler finishes.
> and 2) I al
On Wed, Mar 28, 2007 at 12:10:27PM -0400, Tom Lane wrote:
> Stephen Harris <[EMAIL PROTECTED]> writes:
> > It's vacuumed every night after the updates. There are minimal (zero,
> > most days) updates during the day. As I mentioned earlier, nightly we do:
>
> > for host in list_of_hosts
> >
Stephen Harris <[EMAIL PROTECTED]> writes:
> It's vacuumed every night after the updates. There are minimal (zero,
> most days) updates during the day. As I mentioned earlier, nightly we do:
> for host in list_of_hosts
> delete from sweep_users where hostid=host
> for user in users_for
On Wednesday, 28 March 2007 17:42, Joseph Shraibman wrote:
> Well 1) I'd like to avoid the performance penalty for including debug
> symbols
There is no performance penalty for that.
> and 2) I already built the binary and it is running on a live
> system, and I'd like to get debug symbols w/o r
Jaime Silvela <[EMAIL PROTECTED]> writes:
> Wow, thanks! I'll definitely get the Unix operations guys to investigate.
> Do you know if this is tracked as an issue for Linux? I'd like some
> reassurance that whatever release Unix Ops is able to upgrade to fixes
> the bug.
I did some idle searchin
On Wed, Mar 28, 2007 at 11:36:27AM -0400, Tom Lane wrote:
> Stephen Harris <[EMAIL PROTECTED]> writes:
> > INFO: "sweep_users": found 835831 removable, 972662 nonremovable row
> > versions in 2890304 pages
> > DETAIL: 0 dead row versions cannot be removed yet.
> > There were 112212932 unused ite
Wow, thanks! I'll definitely get the Unix operations guys to investigate.
Do you know if this is tracked as an issue for Linux? I'd like some
reassurance that whatever release Unix Ops is able to upgrade to fixes
the bug.
I'll try to get one of our machines upgraded and test there, and will
l
Well 1) I'd like to avoid the performance penalty for including debug
symbols and 2) I already built the binary and it is running on a live
system, and I'd like to get debug symbols w/o restarting.
Peter Eisentraut wrote:
Am Mittwoch, 28. März 2007 03:00 schrieb Joseph S:
I don't use rpms, I
Stephen Harris <[EMAIL PROTECTED]> writes:
> INFO: "sweep_users": found 835831 removable, 972662 nonremovable row
> versions in 2890304 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> There were 112212932 unused item pointers.
Oy, that's one bloated table ... only one live row in e
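(My arithmetic, assuming the default 8 kB page size: 2,890,304 pages is roughly
22 GB holding under a million live rows, i.e. about one live row per three pages,
so any sequential scan has to plough through the whole 22 GB.)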
Jaime Silvela <[EMAIL PROTECTED]> writes:
> Here's the uname -a:
> Linux wdlbc22r06 2.6.5-7.244-bigsmp #1 SMP Mon Dec 12 18:32:25 UTC 2005
> i686 i686 i386 GNU/Linux
The previous thread says
>> SLES support helped upgrade our SLES 9, sp3, linux kernel from 2.6.5-7.244
>> to 2.6.5-7.282. Since t
On 3/28/07, Mark <[EMAIL PROTECTED]> wrote:
Ideally, I'd like to have all PostgreSQL-related files on a flash
drive and the OS on another device (CD). Database data will be small and
I don't think I will run into a problem with space.
But how can I install postgresql (including all libraries) on
On Wed, Mar 28, 2007 at 11:07:54AM -0400, Tom Lane wrote:
> Stephen Harris <[EMAIL PROTECTED]> writes:
> > I have one specific table ("sweep_users") that has 900,000 rows in prod
> > and 630,000 on the standby. On the standby a "select count(*) from
> > sweep_users" takes a couple of seconds. On
Sorry, forgot about the hardware.
Here's the uname -a:
Linux wdlbc22r06 2.6.5-7.244-bigsmp #1 SMP Mon Dec 12 18:32:25 UTC 2005
i686 i686 i386 GNU/Linux
From hwinfo --cpu
01: None 00.0: 10103 CPU
[Created at cpu.290]
Unique ID: rdCR.j8NaKXDZtZ6
Hardware Class: cpu
Arch: Intel
Vendor: "Genui
Stephen Harris <[EMAIL PROTECTED]> writes:
> I have one specific table ("sweep_users") that has 900,000 rows in prod
> and 630,000 on the standby. On the standby a "select count(*) from
> sweep_users" takes a couple of seconds. On production it takes... 240
> seconds!
Lots of dead rows/free spac
Jaime Silvela <[EMAIL PROTECTED]> writes:
> The kernel is Linux 2.6.5
2.6.5.what (give us full uname -a output please)? On what hardware?
If memory serves, we had some reason to think that it only occurred on
specific 64-bit multi-CPU platforms, so I'm not just asking idly.
The error check was pu
The kernel is Linux 2.6.5
I'll drop and redo this evening after hours - the migration takes a
couple of hours. The source database is under medium load, with a couple
of heavy updates, but the target database is a clean 8.2.3 install, and
only being written to by pg_restore.
So I take it that
Ideally, I'd like to have all PostgreSQL-related files on a flash
drive and the OS on another device (CD). Database data will be small and
I don't think I will run into a problem with space.
But how can I install postgresql (including all libraries) on flash
drive? I remember that 7.4.X rpms always
Jaime Silvela <[EMAIL PROTECTED]> writes:
> I'm seeing these messages
> ERROR: unexpected data beyond EOF in block 4506 of relation
> "coverage_test_val"
> HINT: This has been seen to occur with buggy kernels; consider updating
> your system.
So what kernel are you running, on what hardware?
Richard Huxton writes:
> Jonathan Vanasco wrote:
>> can anyone offer some suggestions on tweaking a system for
>> write-intensive operations?
> What's the limiting factor? Disk? CPU?
If it's I/O bound, you probably need to increase checkpoint_segments.
regards, tom lane
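(A minimal postgresql.conf sketch of that suggestion; the values are only
illustrative, not a recommendation:

  checkpoint_segments = 32      # default is 3; more segments means less frequent checkpoints
  checkpoint_timeout = 300      # seconds
)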
This is 8.0.8 on Solaris 8. Yes, I know; old technology, but we can't
upgrade yet.
I have two identical servers. One is production and overnight we do
a complete dump and scp the results to the other. The other is standby.
It's currently running data that's about 7 months old because we haven't
n
Hello.
The essence of the problem, explained in previous messages: the CHECKPOINT
command runs quite slowly, disk write activity is high just after it is
executed, and during the CHECKPOINT, SELECT queries run many times slower
than usual.
I experimented with bgwriter, but it affects the CHECKPOIN
I'm doing dry runs to migrate my database from 8.1.3 to 8.2.3, with
pg_dumpall | pg_restore, using the executables from 8.2.3.
I'm seeing these messages
ERROR: unexpected data beyond EOF in block 4506 of relation
"coverage_test_val"
HINT: This has been seen to occur with buggy kernels; consi
Hi Alberto
You should upgrade to the most recent version of the JDBC driver:
see the change log details (http://jdbc.postgresql.org/changes.html#version_8.0-315)
When performing replace processing we must continue processing
until we hit the end of a user supplied query, not just once w
Hi all!
I use PostgreSQL 8.0 and its JDBC driver. A strange thing happens:
when my application creates a query, it is always created with one more
parenthesis than expected; that is, a query like this one:
SELECT * FROM table_x WHERE (table_x.afield='a' and
table_x.bfield='b')) order by table_x.a
The disk write activity timeline looks like the following:
CHECKPOINT ->
large disk write (30-40 MB/s) for 1 second ->
4-5 seconds: no read or write activity at all, but the server runs slowly ->
2-3 seconds: disk write activity of about 4-5 MB/s ->
normal speed restored
On 3/28/07, Dmitry Koterov <[EMAIL PROTEC
echoping is a program to test and monitor the response time of network
servers, for instance DBMS servers. It is typically used from the
command-line or called repetitively by a monitoring framework such as
SmokePing (http://www.smokeping.org/).
http://echoping.sourceforge.net/
The new version, v
On Tue, 2007-03-27 at 12:54 +0200, Denis Gasparin wrote:
> I have a query that performs a multiple join between four tables and
> that doesn't use the defined indexes.
> If I set enable_seqscan to off, the query obviously uses the indexes and
> it is considerably faster than normal planned executi
On Wednesday, 28 March 2007 03:00, Joseph S wrote:
> I don't use rpms, I build from the tarballs, hence my question.
Then use --enable-debug. You won't get the debug info in separate files, but
you don't need to.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
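(I.e., when building from the tarball, something like:

  ./configure --enable-debug
  make && make install

--enable-debug just passes -g to the compiler, so the symbols end up in the
binaries themselves.)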
Denis Gasparin wrote:
Hi!
I am testing the PHP PDO library versus the old style PHP postgres
functions.
I noticed that the PDO library declares and prepares every statement. I mean:
$s = $db->query("select * from test where field=1");
is equivalent to
$s = $db->prepare("select * from test where field
Hi,
is there an advantage to using only fixed-length columns in a table?
/sebastian
New information.
When I run CHECKPOINT manually in the psql console, the effect is fully reproduced.
So - it is a checkpoint slowdown.
The question is - how to make CHECKPOINT work faster?
On 3/27/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
How to tune them?
Now I have:
checkpoint_segments = 5
Hi!
I am testing the PHP PDO library versus the old style PHP postgres
functions.
I noticed that the PDO library declares and prepares every statement. I mean:
$s = $db->query("select * from test where field=1");
is equivalent to
$s = $db->prepare("select * from test where field=?");
$s->execute(array(