On Tue, Nov 06, 2018 at 04:27:35PM +1100, rob stone wrote:
> Logged in as user postgres and postgres owns the files created by
> initdb, so is this a permissions problem or am I having a brain fade?
Having 0600 as the mode on those files is normal. Do you have any more
logs about the error? You sho
Hello,
Trying to run pg_upgrade from version 10 to version 11 on the test
server and getting the following error:-
could not open version file: /home/postgres/testing/data_v10/PG_VERSION
Failure, exiting
O/S is:- Debian 4.18.10-2 (2018-11-02) x86_64 GNU/Linux
Running pg_upgrade as user postgres
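A first thing to check is whether the postgres user can actually read
that file, and whether pg_upgrade is being pointed at the right
directories. A minimal sketch (Debian-style binary paths assumed; the
new data directory name is a guess):

    ls -l /home/postgres/testing/data_v10/PG_VERSION
    # expect owner postgres, mode 0600; then dry-run the upgrade:
    /usr/lib/postgresql/11/bin/pg_upgrade --check \
        -b /usr/lib/postgresql/10/bin -B /usr/lib/postgresql/11/bin \
        -d /home/postgres/testing/data_v10 \
        -D /home/postgres/testing/data_v11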
I've got a very puzzling problem on 9.6.6 systems we just migrated from
8.4. (The same problem happened on 9.6.9, but we rolled it back so that
prod has the same version as our Staging systems.)
We've got a giant script full of DROP TRIGGER IF EXISTS and CREATE TABLE and
DROP TABLE and CR
On 11/5/18 8:50 AM, Charles Martin wrote:
Daniel said:
This appears to be the case. I ran:
SELECT max(length(docfilecontents::text)) FROM docfile;
and after a very long time, got:
ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
Adrian said:
Ok, thanks for explaining
Hello, thanks for sharing the hint,
I tested it for 10 minutes, but it did not fix the problem.
With more time to wait for the timeout, I ran strace on the wal sender
once more, and now the pattern is different (I probably caught it the
first time before it reached the point of the issue).
There are repeated failing attempt
The first query timed out, but the second one returned this:
0 "623140"
1 "53"
2 "12"
3 "10"
4 "1"
5 "1"
7 "1"
[null] "162"
Not quite sure what that means, but if there is just a small number of
overly-large records, I might be able to delete them. If I can find them.
On Mon, Nov 5, 2018 at 12:5
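One way to locate those rows without tripping the allocation failure is
to sort by octet_length(), which reads the stored bytea size instead of
rendering the value; a sketch, with docfile_pkey taken from the COPY
column list quoted elsewhere in the thread:

    SELECT docfile_pkey, octet_length(docfilecontents) AS bytes
    FROM docfile
    ORDER BY octet_length(docfilecontents) DESC NULLS LAST
    LIMIT 10;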
Charles Martin wrote:
> SELECT max(length(docfilecontents::text)) FROM docfile;
> and after a very long time, got:
> ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
It would mean that at least one row has a "docfilecontents"
close to 0.5GB in size. Or that the size fi
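The raw size can be measured without the failing cast: ::text
hex-encodes the bytea, roughly doubling the allocation, while
octet_length() avoids that. A sketch:

    SELECT max(octet_length(docfilecontents)) FROM docfile;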
Ron said:
>We were expecting the output of the "SHOW SHARED_BUFFERS;" command.
Ok, the result from that command is:
1GB
>The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
of RAM, so I've created a swap file of 1GB.
>Times have changed... (I'd have made a 4GB swap file.
On 11/05/2018 10:50 AM, Charles Martin wrote:
[snip]
The results I pasted were from:
SELECT * FROM pg_settings
Maybe I didn't get it the right way.
We were expecting the output of the "SHOW SHARED_BUFFERS;" command.
The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
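For reference, either of these shows it (the pg_settings form reports
the value as a count of 8kB pages):

    SHOW shared_buffers;
    SELECT name, setting, unit FROM pg_settings
    WHERE name = 'shared_buffers';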
Daniel said:
>It's plausible that, with only 4GB of RAM, the table that fails
to dump has some very large rows that can't be allocated, especially
since both the backend and pg_dump need to have it simultaneously
in memory.
> >pg_dump: The command was: COPY public.docfile (docfile_pkey,
> >docfile
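If the immediate goal is a usable dump while the oversized rows are
dealt with, pg_dump can skip that one table's data; a sketch, database
name assumed:

    pg_dump --exclude-table-data=docfile -Fc -f clientdb.dump clientdb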
On 6/8/18 8:30 PM, Jeremy Finzel wrote:
No I was referring to this from the documentation:
Avoid spuriously marking pages as all-visible (Dan Wood, Pavan
Deolasee, Álvaro Herrera)
This could happen if some tuples were locked (but not deleted).
While queries would still functi
Charles Martin writes:
> Ok, thanks for explaining this. Here is the current value:
> "shared_buffers" "131072" "8kB"
Well, that's 1GB, which might be ambitious inside a VM with a hard
restriction to 4GB total RAM. Postgres can get by with a *lot* less.
Try knocking it down to a tenth of that an
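A tenth of the current 1GB is roughly 100MB; as a sketch (value
rounded, and shared_buffers only takes effect after a restart):

    ALTER SYSTEM SET shared_buffers = '128MB';
    -- then restart, e.g.: pg_ctl restart -D /path/to/data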
Adrian said:
>Tom was referring to this from your previous post:
>(postmaster) total-vm:3068900kB,
>where vm(VM) is Virtual Memory:
>https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
>So what is your shared_buffers:
>https://www.postgresql.org/docs/10/stat
Charles Martin wrote:
> >So where is the server located relative to the pg_dump client?
> >On the same machine?
> >If so is it a virtual machine e.g AWS?
> >Across a local or remote network?
>
>
> I gave the command in a terminal session after SSHing to the server from
> the same network
Tom said:
>That's kind of odd: a COPY shouldn't really consume very much working
>memory. I suspect that much of the process's apparent VM consumption may
>be shared buffers ... what have you got shared_buffers set to on the old
>server? If it's more than half a GB or so, maybe reducing it would
Hello,
I'm trying to use pgAgent on Windows 10 and Windows Server 2012 R2.
My version of PostgreSQL is 9.5.1.14.
I installed and configured pgAgent successfully, but when I try to run
a job, its status is Failed with the message below:
"Couldn't create the primary connection (attempt 1): fe_sendauth: no
password
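fe_sendauth: no password means libpq needed a password for the
connection and had nowhere to get one. Since pgAgent runs as a Windows
service, the usual fix is a pgpass file for the service account; a
sketch with placeholder values:

    # %APPDATA%\postgresql\pgpass.conf for the pgAgent service account
    # hostname:port:database:username:password
    localhost:5432:*:postgres:your_password_here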
Even more difficult in PG functions, as they have no commit/rollback
capability. I haven't played with stored procedures in PG11 yet.
You can simulate Oracle's autonomous transaction feature in Postgres by
connecting to the same database using dblink.
As for implicit passing of error paramete
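A minimal sketch of the dblink approach, assuming the extension is
installed and a hypothetical audit_log table; the INSERT runs on a
second connection, so it commits even when the outer transaction rolls
back:

    CREATE EXTENSION IF NOT EXISTS dblink;

    BEGIN;
    SELECT dblink_exec('dbname=mydb',
        'INSERT INTO audit_log(msg) VALUES (''kept despite rollback'')');
    ROLLBACK;
    -- the audit_log row survives: it was committed by the dblink session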
Hello Community,
I wanted to know whether users can manage their own passwords at login
after their password has expired.
Consider the following case:
postgres=# create user pavan with login password 'password' valid until
'November 5 16:30:00 2018';
CREATE ROLE
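Once that timestamp passes, password authentication for the role fails,
so an administrator would extend or reset it; the relevant forms look
like this (new values assumed):

    ALTER ROLE pavan VALID UNTIL 'infinity';
    -- or set a new password together with a new expiry:
    ALTER ROLE pavan PASSWORD 'newpassword'
        VALID UNTIL 'December 5 16:30:00 2018';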
On 3/11/18 6:02 p.m., Jerry Sievers wrote:
Aleš Zelený writes:
Hello,
we are using logical replication on 10.4 and it now hangs. After
some timeout it is restarted again, replaying 18GB of data and then
hanging (while 7GB of WAL remains to be processed).
Timeout...
Have a look at the 2 setti
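A quick way to review the timeout-related settings on both publisher
and subscriber (a sketch):

    SELECT name, setting, unit FROM pg_settings
    WHERE name LIKE '%timeout%';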
Hello,
I have a database that uses indexes on a different tablespace (NVMe). I
read the documentation about tablespaces and understand that a
tablespace cannot be treated as an autonomous collection of data files.
My question is: is this always true? I mean, if I have a table... okay,
here is an example:
crea
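A minimal sketch of the kind of setup being described, with assumed
names (the location must already exist and be owned by the postgres OS
user):

    CREATE TABLESPACE nvme LOCATION '/mnt/nvme/pg_tblspc';
    CREATE TABLE items (id int PRIMARY KEY, body text);
    CREATE INDEX items_body_idx ON items (body) TABLESPACE nvme;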