On Thu, Apr 2, 2020 at 9:25 PM Kyotaro Horiguchi wrote:
> I provided the subject, and added -hackers.
>
> > Hello,
> > I am running Postgres 11.5 and we were having issues with shared segments.
> > So I increased max_connections as suggested by you guys and reduced my
> > work_mem to 600MB.
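Worth noting when sizing these: work_mem is a per-sort/per-hash limit, applied per backend (and per parallel worker), so 600MB multiplied across many connections can far exceed physical RAM. A minimal sketch for reviewing the settings involved (the GUC list is my own selection, not from the thread):

    -- Review the memory- and connection-related settings discussed above.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('max_connections', 'work_mem',
                   'shared_buffers', 'max_parallel_workers_per_gather');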
On 4/3/20 11:52 AM, Moses Mafusire wrote:
I actually started the installation process as myself. I logged in as user
postgres after trying and getting that version error message; I thought
as user postgres I would be able to install, but that failed too. I get
the same results either way:

[username@dart-centos ~]$ /usr/bin/pg_config
BINDIR = /usr/bin
…
On 4/3/20 10:48 AM, Moses Mafusire wrote:
Adrian,
I really appreciate your help. I was able to open the file in a text
editor in CentOS, but it's read-only. I cannot even replace the file.
Any tricks up your sleeve for me to edit this file?
Thanks,
Moses

Looks like a permissions issue.
In your first post…
yeah I'm on 9.5, but thanks for the info.

On Fri, Apr 3, 2020 at 1:24 PM Adrian Klaver wrote:
> On 4/2/20 9:59 PM, AC Gomez wrote:
> > Granted. But we are where we are, so I'm assuming this is going to be
> > hand-to-hand combat.
>
> Well you could even the odds somewhat by using the below as a starting
> point:
>
> SELECT
>     relname,
>     pg_roles.rolname,
>     acl.*
> FROM
>     pg_class,
>     aclexplode(r…
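A plausible completion of the truncated query above, reconstructed from the visible pieces (the alias and join condition are my reconstruction, not the original message): aclexplode() expands an aclitem[] such as pg_class.relacl into one row per grant, and the grantee OID can be resolved against pg_roles.

    -- Reconstruction: list per-table grants with the grantee's role name.
    SELECT
        relname,
        pg_roles.rolname,
        acl.*
    FROM
        pg_class,
        aclexplode(relacl) AS acl            -- one row per (grantor, grantee, privilege)
        JOIN pg_roles ON acl.grantee = pg_roles.oid;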
On 4/3/20 9:26 AM, Moses Mafusire wrote:
Hi Adrian,
Yes, VERSION = PostgreSQL 12.1 is the only version I have. I am a
novice to Linux and Postgres, so where should I run this:

ifndef MAJORVERSION
MAJORVERSION := $(basename $(VERSION))
endif
$(info MAJORVERSION is $(MAJORVERSION))

and do I run it as is, or do I need to do something else, if…
Does this unnamed monitoring tool (and undefined query) really think higher
percentages are worse, or are you mis-communicating?

Forget about the tool used for monitoring; the important question is whether
monitoring the "cache hit ratio" is worthwhile, or whether we can ignore it.

On Wed, 1 Apr 2020 at 10:44, David G. Johnston wr…
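If the metric in question is the conventional buffer-cache hit ratio, it is typically computed from pg_stat_database; a minimal sketch (this is the standard definition, not a query taken from the thread):

    -- Buffer-cache hit percentage per database, from cumulative statistics.
    SELECT datname,
           round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
    FROM pg_stat_database;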
If you didn't turn it off, you have parallel workers on by default with
v12. If work_mem is set high, memory use may be much higher as each node in
a complex plan could end up executing in parallel.
Also, do you use a connection pooler such as pgbouncer or pgpool? What is
max_connections set to?
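One cheap way to test the parallel-query theory (a sketch, not advice from the thread): disable parallelism for a single session and compare memory behaviour, since each parallel worker can allocate work_mem on its own.

    -- Temporarily disable parallel query for this session only.
    SET max_parallel_workers_per_gather = 0;
    -- And confirm the pooling-related setting asked about above.
    SHOW max_connections;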
On 4/2/20 9:42 PM, James Brauman wrote:
I am using the COPY command to insert 10 million rows from a CSV file
into a database table and I am finding the performance is
unacceptable. When the COPY command is executed the disk I/O is
extremely high, which leads to degraded query performance for other
queries…
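A common mitigation for this pattern (a sketch under assumptions, with hypothetical table, index, and path names; not a fix proposed in the thread): load inside one transaction and defer index maintenance, so the bulk load writes far fewer pages.

    -- Hypothetical names; adjust to the real table, index, and CSV path.
    BEGIN;
    DROP INDEX IF EXISTS big_table_col_idx;               -- skip index updates during the load
    COPY big_table FROM '/tmp/data.csv' WITH (FORMAT csv);
    CREATE INDEX big_table_col_idx ON big_table (col);    -- rebuild once, afterwards
    COMMIT;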
On 4/3/20 5:53 AM, Moses Mafusire wrote:
Hi Adrian,
Thanks for your response. Here is what I'm getting when I run pg_config:
VERSION = PostgreSQL 12.1

So that is the correct version.
Just to be clear, there is only one version of Postgres on this machine,
correct?
Before you run make ag…
Hi Adrian,
Thanks for your response. Here is what I'm getting when I run pg_config:

$ pg_config
BINDIR = /usr/bin
DOCDIR = /usr/share/doc/pgsql
HTMLDIR = /usr/share/doc/pgsql
INCLUDEDIR = /usr/include
PKGINCLUDEDIR = /usr/include/pgsql
INCLUDEDIR-SERVER = /usr/include/pgsql/server
LIBDIR = /usr/lib64
P…
On Fri, 2020-04-03 at 00:59 -0400, AC Gomez wrote:
> Granted. But we are where we are, so I'm assuming this is going to be hand to
> hand combat.
Yes, unfortunately there is no better way.
But DROP ROLE will produce messages that help you along.
Yours,
Laurenz Albe
--
Cybertec | https://www.cyb
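For reference, the usual sequence for this kind of cleanup (standard PostgreSQL commands with hypothetical role names; not quoted from the thread). REASSIGN OWNED and DROP OWNED act per database, so they must be run in every database where the role owns objects or holds grants.

    -- Hand over ownership, revoke remaining grants, then drop the role.
    REASSIGN OWNED BY doomed_role TO new_owner;
    DROP OWNED BY doomed_role;    -- also revokes privileges granted to doomed_role
    DROP ROLE doomed_role;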
On Thu, Apr 2, 2020 at 11:38 PM Andrus wrote:
> Hi!
>
> > Simply replace
> >     SET col = unnest(array_value)
> > with
> >     SET col = array_value[1]
>
> I tried
>
> update temprid set
>     ContactFirstName = xpath(
>         '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactF…
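The likely sticking point: xpath() returns xml[], so the result needs to be subscripted and cast rather than passed through unnest(). A minimal sketch, assuming the truncated path ends in an element matching the target column and that the XML lives in a hypothetical xmldata column:

    -- xpath() yields xml[]; take element 1 and cast it, per the advice above.
    UPDATE temprid
    SET ContactFirstName =
        (xpath('/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',
               xmldata))[1]::text;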