In fact it was a single delete statement.
From: Vladimir Nicolici
Sent: Tuesday, October 10, 2017 17:30
To: Achilleas Mantzios; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
No, it didn’t. The delete was done in a single transaction.
From: Achilleas Mantzios
Sent: Tuesday, October 10, 2017 17:18
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Hello Vladimir,
maybe your update triggered auto_vacuum on those tables? The default
autovacuum_freeze_max_age is exactly 200,000,000. Did you check your
vacuum stats afterwards (pg_stat_*_tables)?
Can you show the code which performed the deletes?
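Achilleas' suggestion to check the vacuum stats can be sketched with a query
like the following; the table name is a placeholder:

```sql
-- Did autovacuum fire on the table, and how many dead tuples remain?
SELECT relname, last_vacuum, last_autovacuum, n_dead_tup
  FROM pg_stat_user_tables
 WHERE relname = 'your_table';  -- placeholder name
```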
On 10/10/2017 16:56, Vladimir Nicolici
I experimented some more with the settings this weekend, while doing some large
write operations (deleting 200 million records from a table), and I realized
that the database is capable of generating much more WAL than I estimated.
And it seems that spikes in write activity, when longer than a f
Further updates:
Yesterday checkpoints were finishing more or less on time with the
configuration for 25 minutes out of 30 minutes, taking 26 minutes at most.
So for today I reduced the time reserved for checkpoint writes to 20 minutes
out of 30 minutes, by setting checkpoint_completion_target
Hi,
On 2017-10-06 05:53:39 +0300, Vladimir Nicolici wrote:
> Hello, it’s postgres 9.6.
Consider setting checkpoint_flush_after to 16MB or something large like that.
> I will probably try the compression on Monday or Tuesday, I can only
> experiment with a single set of changes in a day, and I p
Sent: Friday, October 6, 2017 04:51
To: Vladimir Nicolici
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Hi,
On 2017-10-05 22:58:31 +0300, Vladimir Nicolici wrote:
> I changed some configuration parameters during the night to the values I was
> considering yesterday:
>
> - shared_buffers = 144GB #previously 96 GB
> - bgwriter_lru_maxpages = 100 #previously 400
> - checkpoint_timeout = 30min #prev
mbination, I will probably set it to something like 0.90
target, so that it distributes the writes over 27 minutes.
Thanks,
Vlad
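The arithmetic in this message (spreading checkpoint writes over 27 of 30
minutes) maps onto a postgresql.conf fragment like the sketch below; the
values are the ones discussed in the thread, not a general recommendation:

```
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9   # 0.9 * 30min ≈ 27 minutes of spread-out writes
```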
From: Igor Polishchuk
Sent: Friday, October 6, 2017 02:56
To: Vladimir Nicolici
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - chec
Vladimir,
Just curious, if your goal is to reduce checkpoint overhead, shouldn’t you
decrease shared_buffers instead of increasing it?
With bigger shared_buffers, you can accumulate more dirty buffers for
checkpoint to take care.
I remember in early versions ( around 8.4), when checkpoint_compl
Some further updates about the issue.
I did a bit of benchmarking on the disk system with iozone, and during the
test the SSDs seemed to be able to easily sustain 200 MB/second of writes each;
they fluctuated between 200 MB/s and 400 MB/s when doing 96 GB of random writes
in a file. That wo
On Fri, Sep 01, 2017 at 11:08:32PM +0100, Paul Linehan wrote:
> which is fine (note that the field "mary" is sorted correctly) but
> I want "proper" JSON - i.e. with open and close square brackets
> i.e. ([ - ]) before and after the fields!
I don't know what that means. Do you mean that you want
Paul Linehan writes:
> I have a table (fred) that I want to transform into JSON and
> I use the following command (ignore the backslash stuff):
> ...
> which is fine (note that the field "mary" is sorted correctly) but
> I want "proper" JSON - i.e. with open and close square brackets
> i.e. ([ - ]
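One way to get a bracketed JSON array out of the table fred mentioned above is
json_agg, which wraps all rows in a single [ ... ] array; a sketch, assuming
mary is a sortable column of fred:

```sql
SELECT json_agg(row_to_json(f) ORDER BY f.mary)
  FROM fred f;
```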
On Tue, Jul 4, 2017 at 10:18 PM, Chris Travers wrote:
> First, I haven't seen major problems of database bloat in a long time which
> is why I find this case strange. I wanted to ask here what may be causing
> it.
>
> Problem:
> ==
> Database is in the 100GB to 200GB size range, running o
On Tue, Jul 4, 2017 at 10:18 PM, Chris Travers
wrote:
>
> Questions
> ===
> I assume that it is the fact that rows update frequently which is the
> problem here? But why doesn't Postgres re-use any of the empty disk pages?
>
Can you install the contrib extension pg_freespacemap and use "
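Installing the extension and inspecting the free space map can be sketched as
follows; the table name is a placeholder:

```sql
CREATE EXTENSION pg_freespacemap;
-- recorded free space per page of the table:
SELECT * FROM pg_freespace('your_table') LIMIT 10;
-- total reusable space in the table:
SELECT sum(avail) FROM pg_freespace('your_table');
```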
On Wed, Jul 5, 2017 at 3:51 PM, Bill Moran wrote:
> On Wed, 5 Jul 2017 13:28:29 +0200
> Chris Travers wrote:
>
> > On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
> >
> > > 2x the working size for a frequently updated table isn't terrible
> bloat.
> > > Or are
> > > you saying it grows 2x every 24 ho
On Wed, 5 Jul 2017 13:28:29 +0200
Chris Travers wrote:
> On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
>
> > 2x the working size for a frequently updated table isn't terrible bloat.
> > Or are
> > you saying it grows 2x every 24 hours and keeps growing? The real question
> > is
> > how often the ta
On Wed, Jul 5, 2017 at 7:28 AM, Chris Travers
wrote:
>
>
> On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
>
>>
>> 2x the working size for a frequently updated table isn't terrible bloat.
>> Or are
>> you saying it grows 2x every 24 hours and keeps growing? The real
>> question is
>> how often the tab
On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
>
> 2x the working size for a frequently updated table isn't terrible bloat.
> Or are
> you saying it grows 2x every 24 hours and keeps growing? The real question
> is
> how often the table is being vacuumed. How long have you let the
> experiment run
> f
On Wed, 5 Jul 2017 07:18:03 +0200
Chris Travers wrote:
> Hi;
>
> First, I haven't seen major problems of database bloat in a long time which
> is why I find this case strange. I wanted to ask here what may be causing
> it.
>
> Problem:
> ==
> Database is in the 100GB to 200GB size rang
On Wed, Jul 5, 2017 at 7:18 AM, Chris Travers
wrote:
> Hi;
>
> First, I haven't seen major problems of database bloat in a long time
> which is why I find this case strange. I wanted to ask here what may be
> causing it.
>
> Problem:
> ==
> Database is in the 100GB to 200GB size range, r
Got it
Thank you
> On Jun 12, 2017, at 4:16 PM, Adrian Klaver wrote:
>
> On 06/12/2017 02:07 PM, armand pirvu wrote:
>> Hi
>> I was doing a test upgrade from 9.5 to 9.6 and the following lines caught my
>> eye
>> postgres 10967 10911 0 15:59 pts/0 00:00:00
>> /usr/pgsql-9.6/bin/pg_upgra
On 06/12/2017 02:07 PM, armand pirvu wrote:
Hi
I was doing a test upgrade from 9.5 to 9.6 and the following lines caught my eye
postgres 10967 10911 0 15:59 pts/0 00:00:00 /usr/pgsql-9.6/bin/pg_upgrade
-d /var/lib/pgsql/9.5/data -D /var/lib/pgsql/9.6/data -b /usr/pgsql-9.5/bin -B
/usr/pgs
On 04/23/2017 05:48 PM, Periko Support wrote:
Yes, with our datamax no issue.
Once u setup the label in LabelView u chose which printer to use.
That indicates to me it is a printer driver/setup issue and not anything
the database is causing. You might want to verify the e.g. media
settings a
Yes, with our datamax no issue.
Once u setup the label in LabelView u chose which printer to use.
On Sun, Apr 23, 2017 at 1:27 PM, Adrian Klaver
wrote:
> On 04/23/2017 01:12 PM, Periko Support wrote:
>>
>> We use a label program called LabelView to create our labels, we read
>> the record using
On 04/23/2017 01:12 PM, Periko Support wrote:
We use a label program called LabelView to create our labels, we read
the record using ODBC.
The label has 8 fields which UPC is what it changes on each one.
You use the same program and setup to print to the DataMax printer and
you do not have the
We use a label program called LabelView to create our labels, we read
the record using ODBC.
The label has 8 fields which UPC is what it changes on each one.
Thanks.
On Sat, Apr 22, 2017 at 11:28 AM, Adrian Klaver
wrote:
> On 04/21/2017 05:14 PM, Periko Support wrote:
>>
>> Hi guys.
>>
>> I would
On 04/21/2017 05:14 PM, Periko Support wrote:
Hi guys.
I would like to explain my current strange issue with our label printer.
Windows 8.1 x64, psqlodbc 9.5x86/x64 drivers tested.
The issue is that every time we print labels that read data from the
DB, the printer starts printing
with pause on ever
On 02/22/2017 09:16 AM, Jerry LeVan wrote:
There is a setting in the Mac System Preferences that will allow you to turn
off smart quotes.
Aah, Apple trying to be 'helpful'. Assumes everyone wants to output
print copy everywhere.
I did this and the program is acting properly once more
> On Feb 22, 2017, at 10:08 AM, Jerry LeVan wrote:
>
>>
>> On Feb 22, 2017, at 9:49 AM, Adrian Klaver wrote:
>>
>> On 02/22/2017 07:32 AM, Jerry LeVan wrote:
>>>
On Feb 22, 2017, at 9:20 AM, Adrian Klaver
wrote:
On 02/22/2017 07:09 AM, Jerry LeVan wrote:
> Sorry I
> On Feb 22, 2017, at 9:49 AM, Adrian Klaver wrote:
>
> On 02/22/2017 07:32 AM, Jerry LeVan wrote:
>>
>>> On Feb 22, 2017, at 9:20 AM, Adrian Klaver
>>> wrote:
>>>
>>> On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally posted this to pgsql-general-owners earlier today…
On 02/22/2017 07:32 AM, Jerry LeVan wrote:
On Feb 22, 2017, at 9:20 AM, Adrian Klaver wrote:
On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally posted this to pgsql-general-owners earlier today…
How can this happen ( from the postgresql.log file)
LOG: statement: selec
On 02/22/2017 07:32 AM, Jerry LeVan wrote:
On Feb 22, 2017, at 9:20 AM, Adrian Klaver wrote:
On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally posted this to pgsql-general-owners earlier today…
How can this happen ( from the postgresql.log file)
LOG: statement: selec
> On Feb 22, 2017, at 9:20 AM, Adrian Klaver wrote:
>
> On 02/22/2017 07:09 AM, Jerry LeVan wrote:
>> Sorry I accidentally posted this to pgsql-general-owners earlier today…
>>
>> How can this happen ( from the postgresql.log file)
>>
>> LOG: statement: select * from knives where manu
On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally posted this to pgsql-general-owners earlier today…
How can this happen ( from the postgresql.log file)
LOG: statement: select * from knives where manufacturer=‘Boker’
ERROR: column "‘boker’" does not exist at character 41
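The error above is the curly quotes themselves: ‘ and ’ are not recognized as
string delimiters, so ‘Boker’ is looked up as if it were a column name. With
straight ASCII quotes the statement works as intended:

```sql
SELECT * FROM knives WHERE manufacturer = 'Boker';
```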
On Fri, 2016-12-02 at 07:04 -0700, pinker wrote:
>
> Hi!
> I have pgBadger report with strange data about prepared queries I
> cannot
> interpret by myself. If anybody could help me with interpretation,
> would be
> greatly appreciated.
> In first half of the day pgBadger shows there is no prepare
On Thu, Oct 20, 2016 at 6:27 AM, Bjørn T Johansen wrote:
> On Thu, 20 Oct 2016 14:04:51 +0200
> vinny wrote:
>
>> On 2016-10-20 13:51, Bjørn T Johansen wrote:
>> > I have the following SQL:
>> >
>> > SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
>> > 00:00:00','DD.MM. HH24:M
Bjørn:
On Thu, Oct 20, 2016 at 4:13 PM, Bjørn T Johansen wrote:
> Yes, the field name is actually dato but changed it to the English variant..
I routinely have databases with English column names for nearly
everything except 'fecha', Spanish for date, and 'tipo', Spanish for
type, which sometime
On Thu, 20 Oct 2016 15:16:20 +0200
Francisco Olarte wrote:
> On Thu, Oct 20, 2016 at 1:51 PM, Bjørn T Johansen wrote:
> > I have the following SQL:
> > SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> > 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> > 23:59:
On Thu, Oct 20, 2016 at 1:51 PM, Bjørn T Johansen wrote:
> I have the following SQL:
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> 23:59:59','DD.MM.
> HH24:MI:SS')
> date is of type timestamp.
> I was exp
Bottom quoting makes it difficult to reply properly, so I reformatted a bit:
On Thu, Oct 20, 2016 at 2:02 PM, William Ivanski
wrote:
> Em qui, 20 de out de 2016 às 09:52, Bjørn T Johansen
> escreveu:
>> date is of type timestamp.
> select * from table where date = '2016/10/20'::date
I think is the
On 2016-10-20 14:27, Bjørn T Johansen wrote:
On Thu, 20 Oct 2016 14:04:51 +0200
vinny wrote:
On 2016-10-20 13:51, Bjørn T Johansen wrote:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10
On Thu, 2016-10-20 at 13:51 +0200, Bjørn T Johansen wrote:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> 23:59:59','DD.MM.
> HH24:MI:SS')
>
> date is of type timestamp.
>
>
On Thu, 20 Oct 2016 14:04:51 +0200
vinny wrote:
> On 2016-10-20 13:51, Bjørn T Johansen wrote:
> > I have the following SQL:
> >
> > SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> > 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> > 23:59:59','DD.MM.
> > H
Yes, that's what I am doing now but I was just wondering why the other way did
not work...
BTJ
On Thu, 20 Oct 2016 12:02:14 +
William Ivanski wrote:
> You could try:
>
> select * from table where date = '2016/10/20'::date
>
> Em qui, 20 de out de 2016 às 09:52, Bjørn T Johansen
> escrev
On 2016-10-20 13:51, Bjørn T Johansen wrote:
I have the following SQL:
SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
23:59:59','DD.MM.
HH24:MI:SS')
date is of type timestamp.
I was expecting to get all the re
You could try:
select * from table where date = '2016/10/20'::date
Em qui, 20 de out de 2016 às 09:52, Bjørn T Johansen
escreveu:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> 2
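One plausible reading of the failure discussed in this thread: the quoted
pattern 'DD.MM. HH24:MI:SS' contains no year field, so to_timestamp falls back
to a default year and the BETWEEN range matches nothing. A sketch assuming the
intended pattern included the year, with placeholder table and column names
(both "table" and "date" are best avoided as identifiers):

```sql
SELECT *
  FROM my_table
 WHERE date_col BETWEEN to_timestamp('20.10.2016 00:00:00', 'DD.MM.YYYY HH24:MI:SS')
                    AND to_timestamp('20.10.2016 23:59:59', 'DD.MM.YYYY HH24:MI:SS');
```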
> Unnest is evaluated before LIMIT
> You need to use derived table
>
> postgres=# select unnest(a) from (select a from x limit 1) si
>
Thank you!
Regards
Johann
Hi
2016-07-14 14:55 GMT+02:00 Johann Spies :
> When I unnest (regexp_split_to_array) or regexep_split_to_table
>
> I get one value but the length of the array without the unnest is 193.
>
> Why would that be?
>
> wos=# select array_length(regexp_split_to_array(tsv::text, E'\\\s+'),1),
> unnest(re
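The derived-table fix quoted at the top of this thread applies directly here:
unnest() in the target list is expanded before LIMIT, so the LIMIT has to be
pushed into a subquery. A sketch with placeholder names:

```sql
SELECT unnest(regexp_split_to_array(tsv::text, E'\\s+')) AS token
  FROM (SELECT tsv FROM my_table LIMIT 1) s;  -- placeholder table name
```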
On 2016-02-01 12:35:35 -0600, Yu Nie wrote:
> Recently I am working with a large amount of taxis GIS data and had
> encountered
> some weird performance issues. I am hoping someone in this community can help
> me figure it out.
>
> The taxi data were loaded in 5 minute block into a table. I hav
please stop top posting, and quoting 100s and 100s of lines of old
dreck. This list uses inline posting, and it's preferred to edit out
any unimportant junk from the quoted postings.
--
john r pierce, recycling bits in santa cruz
data2013_01w has not been clustered?
>>>> If you cluster data2013_01w on the index, does the performance change?
>>>>
>>>> On Mon, Feb 1, 2016 at 4:03 PM, Yu Nie wrote:
>>>>
>>>>> Melvin,
>>>>>
>>>>> Please see a
>>>>> Melvin,
>>>>
>>>> Please see attached for the requests results. I ran two queries (each
>>>> with a different taxiid that is next to each other) for each table. Note
>>>> that for the large table one is much faster than the other because the
>
>>>>
>>>> Many thanks for your willingness to help!
>>>>
>>>> Best, Marco
>>>>
>>>> On Mon, Feb 1, 2016 at 2:25 PM, melvin6925
>>>> wrote:
>>>>
>>>>> Fine. Please rerun both explains a
k however for
>>>> the small table.
>>>>
>>>> Many thanks for your willingness to help!
>>>>
>>>> Best, Marco
>>>>
>>>> On Mon, Feb 1, 2016 at 2:25 PM, melvin6925
>>>> wrote:
>>>>
>>>>> Fine. Pl
o a
>> file (F8 in PGADMIN Sql). Then attach the file.
>>
>>
>>
>> Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
>> Original message ----
>> From: Yu Nie
>> Date: 2/1/2016 15:17 (GMT-05:00)
>> To: melvin6925
> Original message
> From: Yu Nie
> Date: 2/1/2016 15:17 (GMT-05:00)
> To: melvin6925
> Subject: Re: [GENERAL] strange sql behavior
>
> Yes, absolutely.
>
> On Mon, Feb 1, 2016 at 2:12 PM, melvin6925 wrote:
Hi Klaver,
Thanks.
1. I don't see order by time makes a difference - in fact, the "analyze"
seems to indicate the sorting is faster for the small table because it uses
less memory.
2. No, the large table has not been clustered. Both tables were created
exactly the same way, loading 5-minute b
On 02/01/2016 10:35 AM, Yu Nie wrote:
Hi there,
Recently I am working with a large amount of taxis GIS data and had
encountered some weird performance issues. I am hoping someone in this
community can help me figure it out.
The taxi data were loaded in 5 minute block into a table. I have two
Subject: Re: [GENERAL] strange sql behavior
Thanks, Bill and Melvin!
Just some quick note/answers before I absorb all the information provided by
Bill.
1. I don't expect many users running queries against the tables, especially for
the small table - since I just created it thi
45 PM, melvin6925 wrote:
> Thanks Bill.
> Also, it's very important to include the headers with the queries!
>
> Marco,
> There is no top secret information that is requested, so please do not
> edit the output.
>
>
>
Date: 2/1/2016
14:41 (GMT-05:00) To: Yu Nie Cc: Melvin Davidson
, pgsql-general@postgresql.org Subject: Re: [GENERAL]
strange sql behavior
Came a little late to the thread, see many comments inline below:
On Mon, 1 Feb 2016 13:16:13 -0600
Yu Nie wrote:
> Thanks a lot for your repl
Came a little late to the thread, see many comments inline below:
On Mon, 1 Feb 2016 13:16:13 -0600
Yu Nie wrote:
> Thanks a lot for your reply. I ran the query you suggested and here are
> the results
>
> Large table: "public";"data2011_01";784236885;0;0;"";"";"";"2016-01-19
> 17:31:08-06";1
Hi Melvin,
Thanks a lot for your reply. I ran the query you suggested and here are
the results
Large table: "public";"data2011_01";784236885;0;0;"";"";"";"2016-01-19
17:31:08-06";156847423
Small table: "public";"data2013_01w";300786444;0;0;"";"";"2016-02-01
08:57:24-06";"2016-02-01 04:01:04-06"
One thing to look at is the last time both tables were vacuumed/analyzed.
SELECT n.nspname,
s.relname,
c.reltuples::bigint,
n_tup_ins,
n_tup_upd,
n_tup_del,
date_trunc('second', last_vacuum) as last_vacuum,
date_trunc('second', last_autovacuum) as l
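Melvin's query is cut off above; a completed sketch that reproduces its
visible columns follows (the FROM clause and join conditions are assumptions,
not the original text):

```sql
SELECT n.nspname,
       s.relname,
       c.reltuples::bigint,
       n_tup_ins,
       n_tup_upd,
       n_tup_del,
       date_trunc('second', last_vacuum)     AS last_vacuum,
       date_trunc('second', last_autovacuum) AS last_autovacuum
  FROM pg_stat_user_tables s
  JOIN pg_class c     ON c.relname = s.relname
  JOIN pg_namespace n ON n.oid = c.relnamespace
                     AND n.nspname = s.schemaname;
```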
On 01/22/2016 01:41 AM, david.tu...@linuxbox.cz wrote:
Hi,
we have a question about the behavior of SELECT FOR UPDATE. We want to find
the record with an open-bounded tstzrange, close it, and insert a new open
one. We use SELECT FOR UPDATE in a function, but sometimes 2 rows are
inserted. I show this on a simple example with in
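A hedged sketch of the pattern described, with invented names: when no open
row exists yet, SELECT FOR UPDATE locks nothing, so two sessions can each
decide to insert and both succeed, giving the "2 rows inserted" symptom. A
constraint is what actually prevents the duplicate:

```sql
CREATE TABLE periods (id serial PRIMARY KEY, valid tstzrange NOT NULL);
-- at most one open-ended period, enforced by a partial unique index:
CREATE UNIQUE INDEX one_open_period ON periods ((true)) WHERE upper_inf(valid);

-- close the open period and start a new one:
BEGIN;
SELECT id FROM periods WHERE upper_inf(valid) FOR UPDATE;  -- locks the open row, if any
UPDATE periods SET valid = tstzrange(lower(valid), now()) WHERE upper_inf(valid);
INSERT INTO periods (valid) VALUES (tstzrange(now(), NULL));
COMMIT;
```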
Edson,
I've attached a script that shows how to use bash to assign a variable from
a SQL statement.
On Fri, Dec 25, 2015 at 12:48 PM, Adrian Klaver
wrote:
> On 12/25/2015 08:26 AM, Edson F. Lidorio wrote:
>
>>
>>
>>
>
>>>
>>> I'm using the version of PostgreSQL below.
>>>
>>> PostgreSQL 9.4.5 on x86_64-u
On 12/25/2015 08:26 AM, Edson F. Lidorio wrote:
I'm using the version of PostgreSQL below.
PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.9.2-10) 4.9.2, 64-bit
--
Edson
Via pgAdmin this syntax is not accepted.
Do you have another way to do it?
Should have added to previous post
On 12/25/2015 08:26 AM, Edson F. Lidorio wrote:
On 25-12-2015 13:09, Charles Clavadetscher wrote:
Hello
I am not in clear what your use case is, but you may have a look at that:
http://www.depesz.com/2013/02/25/variables-in-sql-what-how-when/
The bottom line is that in a psql interactive s
On 12/25/2015 08:37 AM, Melvin Davidson wrote:
FYI, it is always wise (and polite) to advise what version of PostgreSQL
you are using and what O/S you are using.
Actually the OP put that at the bottom of the first post:
"I'm using version below PostgreSQL.
PostgreSQL 9.4.5 on x86_64-unknown-l
FYI, it is always wise (and polite) to advise what version of PostgreSQL
you are using and what O/S you are using.
It would also be nice to know exactly what you are trying to do. IE: What
is your use case?
That being said, you can assign a result of an expression to a variable in
a PostgreSQL fun
On 25-12-2015 13:09, Charles Clavadetscher wrote:
Hello
I am not in clear what your use case is, but you may have a look at that:
http://www.depesz.com/2013/02/25/variables-in-sql-what-how-when/
The bottom line is that in a psql interactive session you cannot
really set a variable as the r
Hello
I am not in clear what your use case is, but you may have a look at that:
http://www.depesz.com/2013/02/25/variables-in-sql-what-how-when/
The bottom line is that in a psql interactive session you cannot really set a
variable as the result of a select statement (or at least I did
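In a psql session the closest thing to assigning a SELECT result to a
variable is \gset (available since 9.3); a minimal sketch:

```sql
-- runs the query and stores the output column n in the psql variable :n
SELECT count(*) AS n FROM pg_class \gset
\echo :n
```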
Edson F. Lidorio wrote:
> Hello,
> I excuse my ignorance with SQL and my English.
> I wonder if these procedures are correct or is it a bug?
It's not a bug, but storing the result in a new table is senseless. Why
are you doing that?
Andreas
--
Really, I'm not out to destroy Microsoft. That wi
Subject: Re: [GENERAL] strange connection problem.
"Day, David" writes:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
> However, all of the local apps ( 3 different ones )
On 10/23/2015 08:23 AM, Day, David wrote:
One app is connecting through libpqxx, the other 2, glassfish and
ejabber chat agent, I am not certain.
It is only on this one machine where I am having a connection issue
with any of these apps.
I can connect via the identical roles that these apps
"Day, David" writes:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
> However, all of the local apps ( 3 different ones ) that would connect to it
> are failing to connect.
Are those apps t
host    all    all    192.168.32.0/19    trust
I don’t believe this to be a postgres problem, but you don’t know what you
don’t know.
From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Friday, October 23, 2015 10:36 AM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERA
On 10/23/2015 07:29 AM, Day, David wrote:
I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
connect to the database via psql or pgadminIII both locally and remotely.
However, all of the local apps ( 3 different ones ) that would connect
to it are failing to connect.
I s
On Fri, Oct 23, 2015 at 10:29 AM, Day, David wrote:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
>
> However, all of the local apps ( 3 different ones ) that would connect to
> it are failin
On 2015-10-23 08:29, Day, David wrote:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
>
> However, all of the local apps ( 3 different ones ) that would connect to it
> are failing to connect.
On 08/27/2015 06:33 AM, Christopher BROWN wrote:
Hello Adrian,
Yep, Charles' explanation helped me understand what was going on.
Before that, I was as confused as you were (in your first reply) about
how access_mode could be NULL (with the same reasoning). In any case,
thanks for your links ; I
Hi
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Adrian Klaver
> Sent: Donnerstag, 27. August 2015 15:41
> To: Christopher BROWN
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GEN
Hello Adrian,
Yep, Charles' explanation helped me understand what was going on. Before
that, I was as confused as you were (in your first reply) about how
access_mode could be NULL (with the same reasoning). In any case, thanks
for your links ; I did try searching the web for the answer before p
D; $$
>
> LANGUAGE plpgsql VOLATILE;
>
>
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Charles Clavadetscher
> *Sent:* Donnerstag, 27. August 2015 14:57
> *To:* pgsql-general@postgresql.org
> *Subject:*
On 08/27/2015 04:49 AM, Christopher BROWN wrote:
Hello,
I'm new to this list but have been using PostgreSQL for a moment. I've
encountered an error using PostgreSQL 9.4.4 which can be reproduced
using the SQL below.
The trigger "init_store_ldap_profiles_trigger" fails if the function
"init_sto
On 08/27/2015 04:49 AM, Christopher BROWN wrote:
Hello,
I'm new to this list but have been using PostgreSQL for a moment. I've
encountered an error using PostgreSQL 9.4.4 which can be reproduced
using the SQL below.
The trigger "init_store_ldap_profiles_trigger" fails if the function
"init_sto
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Charles Clavadetscher
Sent: Donnerstag, 27. August 2015 14:57
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange TRIGGER failure with FOR ... IN ... LOOP ...
INSERT
Hello
You declare your
Hello
You declare your variable r as of type
application.store_ldap_profile_defaults%rowtype, but then select only 4 of the
5 fields of the table to put in there. The last one (which happens to be
access_mode) is then null.
The structures don't match. That may explain this behaviour.
This wor
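Charles' point can be reproduced in miniature; the table and column names
below are invented for illustration:

```sql
CREATE TABLE defaults_demo (a int, b int, c int, d int, access_mode text);
INSERT INTO defaults_demo VALUES (1, 2, 3, 4, 'rw');

DO $$
DECLARE
  r defaults_demo%ROWTYPE;
BEGIN
  -- only 4 of 5 columns selected: PL/pgSQL assigns positionally,
  -- so the unmatched trailing field r.access_mode stays NULL
  SELECT a, b, c, d INTO r FROM defaults_demo;
  RAISE NOTICE 'access_mode = %', coalesce(r.access_mode, '<null>');
  -- SELECT * INTO r ... would fill every field of the row type
END $$;
```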
On 08/06/2015 04:24 PM, Sophia Wright wrote:
On Fri, Aug 7, 2015 at 2:46 AM, Adrian Klaver wrote:
I would also take a look at Alvaro's explanation. My understanding
is that for locking purposes the UNIQUE index is considered sort of
like a FK, as it
On Fri, Aug 7, 2015 at 2:46 AM, Adrian Klaver
wrote:
> I would also take a look at Alvaro's explanation. My understanding is that
> for locking purposes the UNIQUE index is considered sort of like a FK, as
> it could be used as a FK. This then leads to the FOR UPDATE lock, which
> from Table 13.3
On 08/06/2015 09:29 AM, Sophia Wright wrote:
On Fri, Aug 7, 2015 at 1:11 AM, Alvaro Herrera wrote:
Sophia Wright wrote:
> I am seeing some odd locking behaviour when deleting a parent record
> (Postgres 9.4.4).
Somewhere in the triggers for FK c
On Fri, Aug 7, 2015 at 1:11 AM, Alvaro Herrera
wrote:
> Sophia Wright wrote:
> > I am seeing some odd locking behaviour when deleting a parent record
> > (Postgres 9.4.4).
>
> Somewhere in the triggers for FK checks we do "SELECT FOR KEY SHARE" of
> the PK tuples when the FK tuples are altered; a
On 08/06/2015 07:19 AM, Sophia Wright wrote:
I am seeing some odd locking behaviour when deleting a parent record
(Postgres 9.4.4).
Setup:
create table pk_rel (id int primary key);
create table fk_rel (pk_id int references pk_rel (id), x text unique);
insert into pk_rel values (1), (2);
insert i
Sophia Wright wrote:
> I am seeing some odd locking behaviour when deleting a parent record
> (Postgres 9.4.4).
Somewhere in the triggers for FK checks we do "SELECT FOR KEY SHARE" of
the PK tuples when the FK tuples are altered; and conversely when we
remove tuples from the PK side we need to ens
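The KEY SHARE interaction Alvaro describes can be observed with the setup
quoted above; the fk_rel insert is an assumption, since the original statement
is cut off:

```sql
-- session 1: inserting (or updating) a referencing row takes
-- FOR KEY SHARE on the referenced pk_rel tuple during the FK check
BEGIN;
INSERT INTO fk_rel VALUES (1, 'a');   -- assumed continuation of the setup

-- session 2: deleting the parent row now blocks until session 1 ends,
-- because DELETE needs a tuple lock that conflicts with KEY SHARE
DELETE FROM pk_rel WHERE id = 1;
```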
On 01-07-2015 13:53, Condor wrote:
Hello,
I have master - slave replication hot standby. Both server are linux
slackware64 current with postgresql 9.4.4.
Today when I logged to check some things on slave server I see on top
memory taken 26%. That was strange for me and I restart server.
Well, I
2015-05-25 15:15 GMT+02:00 Piotr Gasidło :
> 2015-05-25 11:30 GMT+02:00 Guillaume Lelarge :
>
> >> I currently have wal_keep_segments set to 0.
> >> Setting this to higher value will help? As I understand: master won't
> >> delete segment and could stream it to slave on request - so it will
> >> h
2015-05-25 11:30 GMT+02:00 Guillaume Lelarge :
>> I currently have wal_keep_segments set to 0.
>> Setting this to higher value will help? As I understand: master won't
>> delete segment and could stream it to slave on request - so it will
>> help.
>
>
> It definitely helps, but the issue could sti
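The wal_keep_segments advice maps onto a master-side postgresql.conf fragment
like this sketch; the value is purely illustrative and should be sized to the
expected write bursts:

```
# keep this many 16MB WAL segments on the master so a briefly lagging
# standby can still stream them instead of needing a new base backup
wal_keep_segments = 256
```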
2015-05-22 18:36 GMT+02:00 Piotr Gasidło :
> 2015-05-22 6:55 GMT+02:00 Fujii Masao :
> >
> > This problem happens when WAL record is stored in separate two WAL files
> and
> > there is no valid latter WAL file in the standby. In your case, the
> former file
> > is 00044C4D0090 and the