David Greco wrote:
> Need some help with UPDATE syntax. I am attempting to do something
like this:
>
> WITH default_facility AS (
> SELECT facility_id,
> inkjetorlabel
> FROM engagement_facility_defs
> WHERE engagement_facility_def_id
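For reference, a CTE can feed an UPDATE through the FROM clause. A minimal sketch reusing the names from the snippet above (the target table `engagement_facilities` and the literal id are placeholders, since the original query is cut off):

```sql
WITH default_facility AS (
    SELECT facility_id,
           inkjetorlabel
    FROM engagement_facility_defs
    WHERE engagement_facility_def_id = 1  -- placeholder value
)
UPDATE engagement_facilities ef
SET inkjetorlabel = df.inkjetorlabel
FROM default_facility df
WHERE ef.facility_id = df.facility_id;
```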
icholy wrote:
> I'm having a problem with libpq's *PQexec* function hanging on
intermittent
> connections.
> After searching around, the solution seems to be to use the
asynchronous
> functions *PQsendQuery*/*PQgetResult* and implement your own timeout.
>
> Now the issue I'm facing is that *PQgetResult*
Michael Giannakopoulos wrote:
> I would like to ask if there is any way to make an aggregate function
take a set of tuples as an
> input variable. I know that an actual aggregate function receives each
tuple one at a time and processes
> it on the fly. However I want to store tuples in an incremen
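Depending on the goal, the built-in array_agg may already do this: passing a table alias as the argument aggregates whole composite rows. A sketch with invented table and column names:

```sql
-- Collect the complete rows of each group into one array value
SELECT dept_id,
       array_agg(e) AS dept_rows   -- e is the whole-row value of employees
FROM employees e
GROUP BY dept_id;
```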
Hi,
We have a problem with Postgres' handling of certain timestamps: the
timestamps can go into Postgres through the JDBC interface but cannot be
retrieved back. It appears to be partly a postgres-jdbc and partly a
postgres server problem.
Here is an example:
db=# show timezone;
 TimeZone
----------
 Eire
(1 row)
Dear users,
we are facing a strange efficiency problem.
We have two tables:
1) the first, the larger one (3 rows), contains a timestamp field (named
"tempo_calcolo") and other fields.
2) the second contains only a few rows (150), a PostGIS geometric POLYGON
field (named "the_geom") and other fields.
On 11/30/12 3:28 AM, ivan marchesini wrote:
Without the geometric field it takes around 86 ms !!
With the geometric field it takes around 14000 ms !!
These are the EXPLAIN ANALYSE results obtained when we perform the query
without the geometric field:
...
"Total runtime:
ivan marchesini wrote:
> we are facing a strange efficiency problem.
>
> We have two tables:
> 1) the first, the larger one (3 rows), contains a timestamp field (named
> "tempo_calcolo") and other fields.
> 2) the second contains only few rows (150), a PostGIS geometric POLYGON
> field (named "the
maxxedev maxxedev wrote:
> db=# show timezone;
>  TimeZone
> ----------
>  Eire
> (1 row)
> db=# insert into test_table values('1912-03-14 00:00:00.00 -0025');
> INSERT 0 1
> db=# select * from test_table;
>          invoice_date
> ------------------------------
>  1912-03-13 23:59:39-00:25:21
> (1 row)
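The odd-looking value is not corruption: before 1916-10-01 the tz database's Eire / Europe/Dublin zone uses Dublin Mean Time, UTC-00:25:21, and PostgreSQL displays pre-1916 timestamps in that local mean time. A sketch of what happens (exact display depends on the installed tzdata):

```sql
SET timezone = 'Europe/Dublin';
-- The input offset -0025 is read as -00:25, so the stored instant is
-- 1912-03-14 00:25:00 UTC.  On output it is shifted to Dublin Mean
-- Time (UTC-00:25:21), giving 1912-03-13 23:59:39-00:25:21.
SELECT '1912-03-14 00:00:00.00 -0025'::timestamptz;
```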
Hi
Thanks for all your answers.
Concerning the problem of the same "running time", I'm really surprised,
but I can confirm that the EXPLAIN ANALYZE time is different from the
query execution time.
Probably something like this is happening:
http://bytes.com/topic/postgresql/answers/422841-explain-analy
On Fri, Nov 30, 2012 at 8:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did,
>
I realize this could be confusing - since I ruled out that difference, the
real question is - given this setup, why would the query time go from 4
Yeah, that's good, but there are plenty of columns; I was hoping to be able to use
the (table.*) syntax.
-----Original Message-----
From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
Sent: Friday, November 30, 2012 3:47 AM
To: David Greco; pgsql-general@postgresql.org
Subject: RE: [GENERAL] UPDATE sy
Albe Laurenz *EXTERN* wrote
> If you have a single SQL statement, you will get only one
> PQresult. You get more than one if you send a query string
> with more than one statement, e.g.
>
> PQsendQuery(conn, "SELECT 42; SELECT 'Hello'");
>
> would result in two PQresults.
>
> You can get multip
David Greco wrote:
[wants to use CTEs in an UPDATE]
> Yeah, that's good, but there are plenty of columns; I was hoping to be
able to use the (table.*) syntax
Is this a problem or do you just want to type
as little as possible?
You have to specify them in the SET clause anyway.
Yours,
Laurenz Albe
--
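A multiple-column form of SET also exists, which at least keeps the column list in one place even though every column still has to be named; the table and column names here are invented for illustration:

```sql
UPDATE facilities f
SET (inkjetorlabel, updated_at) = (d.inkjetorlabel, now())
FROM default_facility d
WHERE f.facility_id = d.facility_id;
```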
ivan marchesini wrote:
> Concerning the problem of the same "running time", I'm really surprised,
> but I can confirm that the EXPLAIN ANALYZE time is different from the
> query execution time.
> Probably something like this is happening:
> http://bytes.com/topic/postgresql/answers/422841-explain-analyz
>David Greco wrote:
>[wants to use CTEs in an UPDATE]
>> Yeah, that's good, but there are plenty of columns; I was hoping to be
able to use the (table.*) syntax
>Is this a problem or do you just want to type as little as possible?
>You have to specify them in the SET clause anyway.
No problem, just l
Hi, I have a question: is it possible to create a shared-storage PostgreSQL server
cluster? I found some mixed responses online.
Thank you very much,
Lexi
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
Justin Julicher wrote:
> If you use ALTER TABLE it will check every row in the table to make sure
> the column doesn't exceed the constraint (in 8.4 - I know this has been
> updated in 9.x)
Correct.
> As I am trying to update a table with hund
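For CHECK constraints specifically, 9.1 and later can skip the immediate full-table scan by adding the constraint as NOT VALID and validating it separately; the constraint and table names below are made up:

```sql
-- Add the constraint without scanning existing rows
ALTER TABLE big_table
    ADD CONSTRAINT big_table_len_chk CHECK (char_length(code) <= 50) NOT VALID;

-- Later: scan once to mark it validated (new rows are checked either way)
ALTER TABLE big_table VALIDATE CONSTRAINT big_table_len_chk;
```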
Hi,
On Fri, 2012-11-30 at 10:21 -0500, Bijelic, Aleksandra (Lexi) wrote:
> Is it possible to create a shared-storage PostgreSQL server cluster?
If you want both nodes as active, then you need to use Postgres-XC:
http://postgres-xc.sourceforge.net/
If you want active/passive solution, then you can
"Albe Laurenz" writes:
> As I said -- could the time be spent on the client?
It is probably some combination of
(1) time to fetch the wide geometry values from the table's TOAST table
(2) time to convert the geometry values to text form
(3) time to transmit the larger volume of data to the client
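Points (1) and (2) can be sized up server-side by measuring the text form of the geometry column; the table name here is a stand-in for the one in the thread:

```sql
-- Total size of the geometry values after conversion to text
SELECT pg_size_pretty(sum(octet_length(the_geom::text))) AS text_size
FROM polygon_table;
```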
On Fri, Nov 30, 2012 at 9:51 AM, Tom Lane wrote:
> "Albe Laurenz" writes:
>> As I said -- could the time be spent on the client?
>
> It is probably some combination of
>
> (1) time to fetch the wide geometry values from the table's TOAST table
> (2) time to convert the geometry values to text form
Hi Laurenz,
Thanks, but as I said, I have tried using psql directly on the server, and
still there is a strong difference between the query execution time with
and without the geometric field.
Maybe it is a problem of clients, but I obtain the same problem with
pgadmin, psql and other servers. So I
Henry Drexler wrote:
> why would the query time go from 4 minutes to over 50, for an
> increase in table rows from 30 million to 65 million?
Did the active (frequently referenced) portion of the database go
from something which fit in cache to something which didn't? Did
any hash table or sort no
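A quick way to test the cache-fit hypothesis is to compare the active tables' on-disk size with the memory settings; the table name is illustrative:

```sql
SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));  -- heap + indexes + TOAST
SHOW shared_buffers;  -- PostgreSQL's own cache
SHOW work_mem;        -- per-sort / per-hash memory before spilling to disk
```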
Hi,
PostgreSQL 9.1.6
Linux (SLES 11)
I have a pg_basebackup job that is executed within a script.
My command is:
/postgresql/v91/bin/pg_basebackup -h 127.0.0.1 -D
/postgresql/pg_backup/backupdir -Ft -Z 5
The result of this creates 2 files: base.tar and 16450.tar
I understand that 16450 is th
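With -Ft, pg_basebackup writes base.tar for the main data directory and one additional tar per user-created tablespace, named after the tablespace OID. The OID can be matched to a name with:

```sql
SELECT oid, spcname
FROM pg_tablespace;  -- 16450 should appear here next to its tablespace name
```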
On Fri, Nov 30, 2012 at 5:22 AM, Henry Drexler wrote:
> Hello, and thank you in advance.
>
>
> Beyond the date vs timestamp troubleshooting I did, I am not sure what else
> to look for. I know the increase in rows will have some effect, but I just
> don't think the query should go from 4 minutes to
> -----Original Message-----
> From: Greg Sabino Mullane [mailto:g...@turnstep.com]
> Sent: Thursday, November 29, 2012 11:34 PM
> To: pgsql-general@postgresql.org
> Subject: Re: pg_listening_channels()
>
>
> On the contrary, it was very well discussed and designed. Why do you
> even care if the
On Fri, Nov 30, 2012 at 1:42 PM, Jeff Janes wrote:
> Can you report the EXPLAIN (ANALYZE, BUFFERS) instead?
Thanks, here they are:
for the approx. 65-million-row, approx. 50-minute version:
EXPLAIN (ANALYZE, BUFFERS)
select
massive_expansion(ctn,the_range)
from
critical_visitors;
"Seq Scan on crit
On Fri, Nov 30, 2012 at 1:23 PM, Kevin Grittner wrote:
> Henry Drexler wrote:
>
> > why would the query time go from 4 minutes to over 50, for an
> > increase in table rows from 30 million to 65 million?
>
> Did the active (frequently referenced) portion of the database go
> from something which
On 11/30/12 8:30 AM, Ivan Marchesini wrote:
Thanks, but as I said, I have tried using psql directly on the server,
and still there is a strong difference between the query execution
time with and without the geometric field.
Maybe it is a problem of clients, but I obtain the same problem with
p