The group ID is part of the primary key of the group_history table. My
understanding is that two INSERTs with different group IDs should not
collide in this case, or am I wrong in thinking this?
The table definition for group_history is the following:
CREATE TABLE public.group_history (
group int
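The definition is cut off above; assuming it looks something like the sketch below (every column except the group ID is a guess), two INSERTs with different group IDs indeed cannot conflict on the primary key itself:

CREATE TABLE public.group_history (
    group_id int  NOT NULL,
    day      date NOT NULL,
    value    int,
    PRIMARY KEY (group_id, day)
);
-- Different group IDs: no primary-key conflict.
INSERT INTO public.group_history VALUES (1, '2016-11-23', 10);
INSERT INTO public.group_history VALUES (2, '2016-11-23', 10);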
I'm sorry. I worded this quite poorly. I meant to say that there were
no log lines added to the postgres logfile at the time.
I hope these are the settings you were referring to (I did not change
anything in the config files vs. the default)
log_destination = 'stderr'
logging_collector = off
log_
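For what it's worth, with logging_collector = off everything the server prints goes to wherever stderr was pointed at startup, so messages can be lost entirely. A minimal postgresql.conf sketch to capture logs into files (the parameters are standard; the values are only examples), including the lock-wait logging relevant to this thread:

logging_collector = on
log_directory = 'pg_log'        # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_min_messages = warning
log_lock_waits = on             # log waits longer than deadlock_timeout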
On 11/23/2016 01:52 PM, azhwkd wrote:
Greetings!
The parallel calls should not be working on the same row. Each query
services a different group ID on its own and there is no overlap.
Except that the INSERT query in the trigger function works on dates, not
group IDs.
Kind regards,
Sebastian
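To make the point about dates concrete: the actual trigger isn't shown, so this is only a hypothetical shape, but if the trigger function maintains a per-date rollup row, sessions servicing different group IDs still collide on the same date row:

-- Hypothetical trigger-body fragment: every session inserting for a given
-- day contends on the same rollup row, whatever its group ID.
INSERT INTO group_history_by_day AS d (day, total)
VALUES (NEW.day, 1)
ON CONFLICT (day) DO UPDATE SET total = d.total + 1;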
On 11/23/2016 01:47 PM, azhwkd wrote:
Greetings!
Yes, I had a look at the logfiles, but there is not a single logfile
generated when I'm reproducing this issue.
We are talking about the Postgres logfile, correct?
I have to believe a log file exists, but maybe with no entries from that time.
What are
greigwise writes:
> So, I decided to try an experiment. I wrote 2 queries as follows:
> 1 ) select pg_sleep(100) ;
> 2 ) with q (s1, s2) as (select pg_sleep(100), 1)
> select * from q where s2 in (1, <comma-delimited numbers>)
>
> It looks to me like the connection running the big query is usi
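For anyone wanting to reproduce the second query without hand-writing thousands of literals, one way (assuming psql 9.6+ for \gexec; the list length is arbitrary) is to build the statement server-side and execute it:

select format(
  'with q (s1, s2) as (select pg_sleep(100), 1) select * from q where s2 in (%s)',
  string_agg(g::text, ','))
from generate_series(1, 100000) g
\gexec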
I had an issue today where the OOM killer terminated one of my postgres
processes.
On my server I have 8 GB of RAM, shared_buffers is 1 GB and work_mem is
24 MB.
I have connection pooling which limits us to 25 connections. Even if I'm
maxed out there, I'm still only using 1.6 GB of RAM of my 8
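The arithmetic behind that estimate, for reference (with the caveat that work_mem is a per-sort/per-hash limit, not a per-connection cap, so one complex query can consume several multiples of it):

shared_buffers:                    1 GB
25 connections x 24 MB work_mem:   600 MB
estimated ceiling:                ~1.6 GB

A single query with N sort or hash nodes can use up to N x work_mem, so the
true worst case is higher than connections x work_mem.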
Greetings!
The parallel calls should not be working on the same row. Each query
services a different group ID on its own and there is no overlap.
Kind regards,
Sebastian
Tom Lane wrote on Wed, 23 Nov 2016 at 17:47:
> azh...@gmail.com writes:
> > I have a query which if run alone usua
Greetings!
Yes, I had a look at the logfiles, but there is not a single logfile
generated when I'm reproducing this issue.
Concerning locks, I used the queries at
https://wiki.postgresql.org/wiki/Lock_Monitoring and they came up empty.
The group_history table and its sub-tables do not have any fore
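For reference, the heart of the wiki's blocking-locks query looks roughly like this (trimmed; the full version on that page also joins on the relation and tuple lock fields):

SELECT bl.pid   AS blocked_pid,
       a.query  AS blocked_query,
       kl.pid   AS blocking_pid,
       ka.query AS blocking_query
FROM pg_locks bl
JOIN pg_stat_activity a  ON a.pid = bl.pid
JOIN pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid <> bl.pid
JOIN pg_stat_activity ka ON ka.pid = kl.pid
WHERE NOT bl.granted;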
On 11/21/2016 10:23 PM, MEERA wrote:
If archive_mode is not configured, and I use a snapshot solution for
backup of the server, how can I ensure data consistency? Is there a
way to quiesce all the connections to the DB?
select pg_start_backup('some_label'); before taking the file system copy or
snapshot, and select pg_stop_backup() afterwards.
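i.e. a minimal sequence (the label is arbitrary; this is the simple exclusive form available through 9.5):

SELECT pg_start_backup('nightly_snapshot');
-- take the filesystem snapshot / copy here
SELECT pg_stop_backup();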
On Wed, Nov 23, 2016 at 11:53 AM, Scott Mead wrote:
> On Nov 22, 2016, at 01:23, MEERA wrote:
>> If archive_mode is not configured, and I use a snapshot solution for backup of
>> the server, how can I ensure data consistency? Is there a way to quiesce all
>> the connections to the DB?
>
> If your snap
My goal is to grab a user from the OS and substitute the grabbed user -
to be the owner of the database - in the SQL commands,
e.g. using shell-script substitution syntax:
create user ${user} with login;
create tablespace ${user} location '/path/${user}';
create database ${us
Andreas Terrius writes:
> Is there any way to check whether the row already exists before checking
> constraints? I still want it to fail if it turns out to be a new row
> (which would violate the not null constraint), but updates the row if it
> already exists.
I'm not really sure why you expec
On 23 November 2016 20:31:21 CET, John R Pierce wrote:
>On 11/23/2016 11:20 AM, Israel Brewster wrote:
>> How do I determine which those are? Just based on the timestamp of the
>> WAL file, such that I could do something like take the timestamp of
>> the last basebackup and delete all WAL
On 11/23/2016 02:50 PM, Israel Brewster wrote:
> On Nov 23, 2016, at 10:31 AM, John R Pierce wrote:
>> there's a tool, barman, I've never used it but it's supposed to greatly
>> simplify this whole process...
>>
>> http://www.pgbarman.org/
>
> Definitely looks like something to try. Thanks!
I'd recommend you
On Nov 23, 2016, at 10:31 AM, John R Pierce wrote:
>
> On 11/23/2016 11:20 AM, Israel Brewster wrote:
>> How do I determine which those are? Just based on the timestamp of the WAL
>> file, such that I could do something like take the timestamp of the last
>> basebackup and delete all WAL files
On 11/23/2016 11:20 AM, Israel Brewster wrote:
How do I determine which those are? Just based on the timestamp of the
WAL file, such that I could do something like take the timestamp of
the last basebackup and delete all WAL files older than that? Or is
there a better way?
there's a tool, bar
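Rather than comparing timestamps, the bundled pg_archivecleanup tool is the usual answer: each base backup leaves a .backup history file in the archive naming the oldest WAL segment still required, and everything older can go (the path and file name below are placeholders):

pg_archivecleanup /mnt/server/archivedir 000000010000000000000010.00000020.backup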
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---
On 11/23/2016 10:16 AM, Israel Brewster wrote:
To mitigate these issues, I am thinking of supplementing the
individual dumps with a full base backup and WAL archiving to a NFS
share. This should enable (relatively) quick/easy recovery from
backup, plus the ability to do PIT Recovery. I do hav
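The conf side of that plan is small; this sketch uses the cp-based example command from the docs, with the NFS mount point being an assumption:

archive_mode = on
archive_command = 'test ! -f /mnt/nfs/wal/%f && cp %p /mnt/nfs/wal/%f'
# wal_level must be at least 'archive' ('replica' from 9.6 on); an
# existing streaming-replication setup already satisfies this.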
I was wondering if someone could inform me about, or point me to an online article about, best practices for backing up a PostgreSQL database cluster? At the moment, I have two servers running with streaming replication for failover purposes, and then I do nightly database dumps for recovery/backup
> On Nov 22, 2016, at 01:23, MEERA wrote:
>
> Hi all,
>
> If archive_mode is not configured, and I use a snapshot solution for backup of
> the server, how can I ensure data consistency? Is there a way to quiesce all
> the connections to the DB?
If your snapshot solution is atomic, then you are *pr
On 11/21/2016 2:32 AM, Andreas Terrius wrote:
Is there any way to check whether the row already exists before
checking constraints? I still want it to fail if it turns out to be a
new row (which would violate the not null constraint), but updates the
row if it already exists.
just do an upda
Hi all,
If archive_mode is not configured, and I use a snapshot solution for backup
of the server, how can I ensure data consistency? Is there a way to quiesce
all the connections to the DB?
Thanks,
Meera
Greetings!
I'm using Postgres version 9.5.5 on an Ubuntu 16.04.1 server
installation, installed through apt-get.
I have a query which, if run alone, usually completes in about 300 ms.
When run in my application, this query constantly locks up and bogs
down all connections of the connection po
Is there any way to check whether the row already exists before checking
constraints? I still want it to fail if it turns out to be a new row
(which would violate the not null constraint), but updates the row if it
already exists.
Since if that is not possible, I would need to do a query to deter
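On 9.5 and later, INSERT ... ON CONFLICT does this in one statement; a hypothetical table below shows the shape. A genuinely new row still passes through every column constraint, so a NOT NULL violation fails exactly as wanted:

CREATE TABLE t (
    id  int PRIMARY KEY,
    val text NOT NULL
);
-- Inserts when id 1 is absent (constraints enforced), updates when present.
INSERT INTO t (id, val) VALUES (1, 'x')
ON CONFLICT (id) DO UPDATE SET val = excluded.val;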
*This is not a problem or a query.*
*Here in the U.S.A. it is the day before Thanksgiving. In keeping with
that, I'd like to thank all those that have helped me, be thankful for all
those I have helped, and forgive all those that have refused to see my
point of view.*
*Happy Thanksgiving to ever
azh...@gmail.com writes:
> I have a query which, if run alone, usually completes in about 300 ms.
> When run in my application, this query constantly locks up and bogs
> down all connections of the connection pool (In the application this
> query is run up to 10 times in parallel with different paramet
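When it wedges, a snapshot of what every backend is doing usually narrows things down; on 9.5 the relevant column is the boolean "waiting" (9.6 replaced it with wait_event):

SELECT pid, waiting, state,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;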
On Wed, Nov 23, 2016 at 4:43 AM, Charles Clavadetscher
wrote:
> From: Kevin Grittner [mailto:kgri...@gmail.com]
>> Is it possible to upgrade? You are missing over a year's worth
>> of fixes for serious bugs and security vulnerabilities.
>
> Yes. Actually it is foreseen to move to 9.6 (and RedHat
On 11/23/2016 05:24 AM, Moreno Andreo wrote:
On 15/11/2016 18:19, Adrian Klaver wrote:
On 11/15/2016 07:39 AM, Moreno Andreo wrote:
Sorry for the late reply... I'm having some quite rough days
On 08/11/2016 21:28, Adrian Klaver wrote:
On 11/08/2016 12:13 PM, Moreno Andreo wrote:
[...]
"David G. Johnston" writes:
> On Wed, Nov 23, 2016 at 3:46 AM, Poul Kristensen wrote:
>> In the postgresql.log I get "select datname from $1".
> That is expected when you parameterize a query - the query string includes
> parameters instead of values.
It looks like Poul is hoping to use a par
On 11/22/2016 11:41 PM, azh...@gmail.com wrote:
Greetings!
I'm using Postgres version 9.5.5 on an Ubuntu 16.04.1 server
installation, installed through apt-get.
I have a query which, if run alone, usually completes in about 300 ms.
When run in my application, this query constantly locks up
On Wed, Nov 23, 2016 at 3:46 AM, Poul Kristensen wrote:
> In the postgresql.log I get "select datname from $1".
>
That is expected when you parameterize a query - the query string includes
parameters instead of values. What you are seeing is the "Prepare"
component of a "prepare-execute". Wh
On 15/11/2016 18:19, Adrian Klaver wrote:
On 11/15/2016 07:39 AM, Moreno Andreo wrote:
Sorry for the late reply... I'm having some quite rough days
On 08/11/2016 21:28, Adrian Klaver wrote:
On 11/08/2016 12:13 PM, Moreno Andreo wrote:
[...]
In your experience, would this approach help
The pg_database mentioned below does not appear in place of $1.
In the postgresql.log I get "select datname from $1".
Do you have a suggestion for how to solve this?
I get duplicate declarations when using both
char *paramValues[1]
char *paramValues[1]
//char *value1 = "pg_database";
//char **paramValues = &v
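The underlying rule here: parameters in PQexecParams stand for values only, never for identifiers such as table names, so the second form below cannot work and the table name must be baked into the query string:

-- works: $1 is a value
SELECT datname FROM pg_database WHERE datname = $1;
-- cannot work: a parameter is not accepted where an identifier belongs
SELECT datname FROM $1;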
Hello Kevin
Thank you very much for your input. I appreciate it very much.
> -Original Message-
> From: Kevin Grittner [mailto:kgri...@gmail.com]
> Sent: Dienstag, 22. November 2016 22:37
> To: Charles Clavadetscher
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] max_connecti
Torsten Förtsch wrote:
> Now, I have a database with very low write activity. Archive_command is
> called about once per hour to
> archive one segment. When the database was moved to PG 9.5, it was initially
> configured with insanely
> high settings for max_wal_size, min_wal_size and wal_keep_se
On Tue, Nov 22, 2016 at 10:34 PM, Adrian Klaver
wrote:
> On 11/22/2016 12:51 PM, Torsten Förtsch wrote:
>
>>
>> Now, I have a database with very low write activity. Archive_command is
>> called about once per hour to archive one segment. When the database was
>> moved to PG 9.5, it was initially
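For comparison, the stock 9.5 values these settings would come down to (documentation defaults; the hourly archiving pattern suggests archive_timeout was set to 3600 rather than left at its default):

max_wal_size = 1GB
min_wal_size = 80MB
wal_keep_segments = 0
archive_timeout = 0     # a forced hourly segment switch implies 3600 here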