2016-04-27 2:27 GMT+03:00 Tim van der Linden :
> The plan:
>
> Sort  (cost=105773.63..105774.46 rows=333 width=76) (actual
> time=5143.162..5143.185 rows=448 loops=1)
>   Sort Key: r.created
>   Sort Method: quicksort  Memory: 60kB
>   ->  Nested Loop  (cost=1.31..105759.68 rows=333 width=76) (
On Apr 26, 2016 4:29 PM, "Peter Devoy" wrote:
>
> Hi all,
>
> I am trying to work out why a piece of software, Mapnik, is executing
> slowly. All it is doing is loading a config file which causes about
> 12 preparation queries (i.e. with LIMIT 0) to be executed. I can see
> from pg_stat_statemen
On 27 April 2016 at 11:27, Tim van der Linden wrote:
> Hi all
>
> I have asked this question in a somewhat different form on the DBA
> Stackexchange site, but without much luck
> (https://dba.stackexchange.com/questions/136423/postgresql-slow-join-on-three-large-tables).
> So I apologize for po
On Wed, 27 Apr 2016 07:28 Tim van der Linden, wrote:
> Hi all
>
> I have asked this question in a somewhat different form on the DBA
> Stackexchange site, but without much luck (
> https://dba.stackexchange.com/questions/136423/postgresql-slow-join-on-three-large-tables).
> So I apologize for pos
Hi all
I have asked this question in a somewhat different form on the DBA
Stackexchange site, but without much luck
(https://dba.stackexchange.com/questions/136423/postgresql-slow-join-on-three-large-tables).
So I apologize for possible double posting, but I hope this might get a better
respon
Hi all,
I am trying to work out why a piece of software, Mapnik, is executing
slowly. All it is doing is loading a config file which causes about
12 preparation queries (i.e. with LIMIT 0) to be executed. I can see
from pg_stat_statements these only take ~1ms in their totality.
So next I ran "p
Would it be reasonable to just take the simple approach with the same algorithm
I used in the shell script? Basically: if the psql client uses a local
UNIX-domain socket or a localhost TCP connection, use the string output by the
"hostname" system command. From the C perspective, this is just calling
the
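The shell-script algorithm described above could be sketched roughly like this (a minimal sketch; the `PGHOST` handling and the exact local-connection test are assumptions for illustration, not the actual psql patch):

```shell
#!/bin/sh
# Hedged sketch of the prompt-hostname logic described above.
# Assumption: a connection counts as "local" when PGHOST is unset, empty,
# a Unix-socket directory (leading /), or localhost.
case "${PGHOST:-}" in
  "" | /* | localhost | 127.0.0.1)
    display_host=$(hostname)   # local connection: show the real machine name
    ;;
  *)
    display_host=$PGHOST       # remote connection: show the server host
    ;;
esac
printf '%s\n' "$display_host"
```

With PGHOST unset this prints the local hostname, which is exactly the "no [local], always a real name" behavior the thread asks for.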
From: Jayadevan M
Sent: Tuesday, April 26, 2016 6:32 AM
Hello,
I have a python script. It opens a cursor…
Thanks,
Jayadevan
On Mon, 25 Apr 2016 21:48:44 -0400, Adam Brusselback
wrote:
>>It is not difficult to simulate column store in a row store system if
>>you're willing to decompose your tables into (what is essentially)
>>BCNF fragments. It simply is laborious for designers and programmers.
>
>I could see a true c
>
> 2) %M vs shell call
>
> %M, when connected to the local machine, displays the string "[local]",
> which I didn't like. I wanted a real hostname to show no matter which
> client/server pair I was using: zero chance of mistaken commands on the
> wrong host. Many times we ssh to a remote serv
Thanks for the input everyone. I'll try to comment on each discussion
point:
1) garbled output in large queries
I messed around with a few things, and have not been able to recreate any
issues. Can you provide a test case for this? Also, any other interesting
things about your terminal, like y
2016-04-26 15:48 GMT+02:00 :
> Solved. The sample can indeed be loaded at startup (although it emits some
> strange LOG messages).
>
>
>
> But to load it dynamically requires this SQL:
>
>
>
> CREATE OR REPLACE FUNCTION worker_spi_launch(i INT) RETURNS INT
>
> AS '' LANGUAGE C;
>
> SELECT * FR
On 04/26/2016 05:55 AM, Rakesh Kumar wrote:
Pardon me if this has been discussed before.
I believe that PG back-end does not version index rows the way it does
the data rows. Assume that the app updates a row frequently (several
times in a second). For each update, PG will create a new version.
On Tue, Apr 26, 2016 at 7:25 PM, Albe Laurenz
wrote:
>
>
> It is not the "SET search_path" statement that is blocking the truncate,
> but probably some earlier statement issued in the same transaction.
>
You are right. I had a select against that table.
Adding this line fixed it ...
conn.set_is
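For anyone hitting the same blocking pattern: it can be reproduced in two plain psql sessions (a sketch; `some_table` is a stand-in name, not from the original mail):

```sql
-- session 1: an earlier statement in an open transaction
BEGIN;
SELECT * FROM some_table LIMIT 1;   -- takes ACCESS SHARE, held until commit

-- session 2: blocks, because TRUNCATE needs ACCESS EXCLUSIVE
TRUNCATE some_table;

-- session 1: COMMIT (or ROLLBACK) releases the lock and unblocks session 2
COMMIT;
```

The point is that the lock belongs to the transaction, not to the individual SELECT, so anything that keeps the transaction open keeps the TRUNCATE waiting.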
Jayadevan M wrote:
> I have a python script. It opens a cursor, and sets the search_path (using
> psycopg2). In case
> something goes wrong in the script, a record is inserted into a table. In
> that script, I am not doing
> anything else other than reading a file and publishing the lines to a
Solved. The sample can indeed be loaded at startup (although it emits some
strange LOG messages).
But to load it dynamically requires this SQL:
CREATE OR REPLACE FUNCTION worker_spi_launch(i INT) RETURNS INT
AS '' LANGUAGE C;
SELECT * FROM worker_spi_launch(1);
It would be helpful
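For reference, a hedged sketch of what the dynamic-load SQL usually looks like for the worker_spi sample (the library name 'worker_spi' and the argument are assumptions based on the function signature above, not taken from the original mail):

```sql
-- Assumed library name: 'worker_spi', the shared object the sample builds.
CREATE OR REPLACE FUNCTION worker_spi_launch(i INT) RETURNS INT
    AS 'worker_spi' LANGUAGE C;

-- Launch worker number 1; the function returns the background worker's PID.
SELECT * FROM worker_spi_launch(1);
```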
On 26 April 2016 at 15:35, Charles Clavadetscher wrote:
> Hello Johann
>
> There are two to_tsvector functions:
>
> charles@charles.[local]=# \df to_tsvector
>                              List of functions
>    Schema   |    Name     | Result data type | Argument data types | Type
> ------------+-------------+------------------+---------------------+------
Hello Johann
There are two to_tsvector functions:
charles@charles.[local]=# \df to_tsvector
                             List of functions
   Schema   |    Name     | Result data type | Argument data types | Type
------------+-------------+------------------+---------------------+------
 pg_
Hello,
I have a python script. It opens a cursor, and sets the search_path (using
psycopg2). In case something goes wrong in the script, a record is
inserted into a table. In that script, I am not doing anything else other
than reading a file and publishing the lines to a queue (no database
oper
I have never seen this problem before. It also occurred while trying to import
a dump (made by a 9.5 client of a 9.4 database).
Table definition:
-
CREATE TABLE source.annual
(
    filename text,
    gzipfile text,
    id serial NOT NULL,
    tsv tsvector,
    ut character varying(19),
    xml xml,
    processe
Hello.
As far as I know, postgres will update the index with each DML statement, so
the scenario you present should not be a problem by itself. However, having a
table updated that frequently can be a problem if the table grows too much and
is not well indexed.
If the table is small, you can c
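One common mitigation for the frequent-update scenario (assuming the frequently updated columns are not themselves indexed) is to leave free space on each heap page so PostgreSQL can use HOT updates, which avoid creating new index entries; a sketch:

```sql
-- Leave ~30% of each heap page free so updated row versions can stay on
-- the same page, enabling HOT updates that skip index maintenance.
CREATE TABLE hot_counter (
    id integer PRIMARY KEY,
    n  integer NOT NULL
) WITH (fillfactor = 70);

-- Updates touching only the non-indexed column "n" can then be HOT updates.
UPDATE hot_counter SET n = n + 1 WHERE id = 1;
```

The table and column names here are illustrative; the key part is the `fillfactor` storage parameter.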
Pardon me if this has been discussed before.
I believe that PG back-end does not version index rows the way it does the data
rows. Assume that the app updates a row frequently (several times in a second).
For each update, PG will create a new version. However I believe the primary
key index p
Hi Tom,
many thanks for your answer. This is a good hint. Will check this.
Regards
Jürgen
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, 25 April 2016 17:09
To: Wetzel, Juergen (Juergen)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Int
2016-04-26 11:17 GMT+02:00 Jinhua Luo :
> Why not use libpq in worker? i.e. your worker works just like a pure PG
> client.
>
there must be some overhead from using the client API on the server side.
Regards
Pavel
>
> In my project, I uses worker in this way and it works well. I do not
> use any back
Why not use libpq in the worker? I.e., your worker works just like a pure PG client.
In my project, I use the worker in this way and it works well. I do not
use any backend API to access the database.
2016-04-21 15:51 GMT+08:00 Ihnat Peter | TSS Group a.s. :
> I am trying to create background worker whic
Hi Peter!
The solution to this problem would also be interesting for me. We have an
application which sends data to a background worker, and it looks like
asynchronous notification could be an ideal solution. But we have not found
a way to work with these notifications. The worker calculates data
Hi all,
Sorry for the late answer.
I faced the same problem installing PostgreSQL 9.5.2 server on my RHEL 7.2
server.
I solved it by doing the following.
1. vi /etc/fstab
.
.
.
UUID=19881aa7-699a-41ff-bd65-216e1d3de62c  /var/lib/pgsql/9.5  xfs  _netdev  0 0
2. vi /usr/lib/systemd/sys
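The systemd side of the fix typically means making the PostgreSQL unit wait for the network mount; a sketch of a drop-in override (the unit name and file path are assumptions for a 9.5 RHEL install, not taken from the truncated mail):

```
# /etc/systemd/system/postgresql-9.5.service.d/override.conf (assumed path)
[Unit]
# Do not start until remote filesystems (the _netdev mount above) are up
Requires=remote-fs.target
After=remote-fs.target
```

After adding a drop-in like this, `systemctl daemon-reload` is needed for systemd to pick it up.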
Adrian Klaver wrote on 04/21/2016 16:03:55:
> From: Adrian Klaver
> To: Martin Kamp Jensen/DK/Schneider@Europe, pgsql-general@postgresql.org
> Date: 04/21/2016 16:09
> Subject: Re: [GENERAL] Invalid data read from synchronously
> replicated hot standby
>
> On 04/21/2016 01:05 AM, martin.kamp.j
On Sat, Apr 23, 2016 at 3:17 PM, Melvin Davidson
wrote:
>
> On Sat, Apr 23, 2016 at 1:03 AM, Shulgin, Oleksandr <
> oleksandr.shul...@zalando.de> wrote:
> > I find your lack of proper email quoting skills disturbing...
>
> I am sorry you are disturbed, but thank you for pointing that out. I have
> r