On 17 October 2018 07:39:48 CEST, "Sakai, Teppei"
wrote:
>Hi
>
>We have a question about index scans.
>
>We created a table and indexes and ran the same SQL, but the index
>used varies by day.
>One day the index on c1 was used, and on another day the index on c2
>was used.
> - CREATE TABLE tbl1 (c1
Joshua White wrote:
> > > > > > psql: FATAL: could not open file "base/618720/2610": No such file
> > > > > > or directory
> > >
> > > "pg_index" initially uses file 2610.
> > >
> > > Anyway, your database seems to be quite wrecked, and you'd probably need
> > > an expert to save what can be sa
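[Editor's note: the file that backs a catalog relation can be confirmed from any database that still opens; a minimal check, assuming a working connection:]

```sql
-- Show the on-disk path backing pg_index in the current database;
-- it is base/<database-oid>/2610 unless the relation has been rewritten.
SELECT pg_relation_filepath('pg_index');
```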
Hi
We have a question about index scans.
We created a table and indexes and ran the same SQL, but the index used varies
by day.
One day the index on c1 was used, and on another day the index on c2 was used.
- CREATE TABLE tbl1 (c1 varchar, c2 varchar, c3 timestamptz);
- CREATE INDEX tbl1_idx_c1 O
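[Editor's note: one way to investigate which index the planner picks, and why the choice flips, is to compare plans and refresh statistics. A sketch using the names from the post; the filter values are placeholders:]

```sql
-- Check which index the current plan actually uses:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM tbl1 WHERE c1 = 'x' AND c2 = 'y';

-- Refresh planner statistics; day-to-day plan changes often track stale stats:
ANALYZE tbl1;
```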
>
> > > > Recently I attempted to create a new database in this cluster. The
>> command succeeds,
>> > > > but when I try to connect to the new database, I get a "could not
>> open file" error:
>> > > >
>> > > > psql: FATAL: could not open file "base/618720/2610": No such file
>> or directory
>> >
On Tue, 16 Oct 2018 at 19:13, Colin Coles
wrote:
> On 15/10/2018 10:46, Joshua White wrote:
> > Hi all,
> >
> > I'm hoping someone can point me in the right direction. I've got a
> > PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
> > have full admin rights on this machine
On Tue, 16 Oct 2018 at 18:58, Laurenz Albe wrote:
> Please don't top post.
>
> Joshua White wrote:
> > On Mon, 15 Oct 2018 at 21:13, Laurenz Albe
> wrote:
> > > Joshua White wrote:
> > > > I'm hoping someone can point me in the right direction. I've got a
> PostgreSQL 10 server
> > > > instance
I enabled ODBC trace. The trace log is attached. I had to edit the connection
details, and I snipped a few repetitive lines since I ran the same SQL multiple
times.
Connection Out [DSN=BLUE;UID=*;PWD=***;DATABASE=***;HOST...]
[ODBC][4248][1539734369.611042][SQLGetInfo.c][554]
Entry:
Version 10.5
AWS Linux
Foreign server: Db2 on Linux 10.5.9
carto_odbc
unix_odbc
I am able to build the odbc_fdw extension, register the extension, and create a
foreign server.
I also created a foreign table.
When I run the SQL 'select * from odbctest', postgres crashes and generates a
core file.
2018-10
> On Oct 16, 2018, at 1:01 PM, Tom Lane wrote:
>
> Rob Sargent writes:
>> Should reality be half again as large as the estimated row count?
>> coon=# select count(*) from sui.segment;
>> count
>> --
>> 49,942,837 -- my commas
>> (1 row)
>
>> coon=# vacuum (analyse, verbose) sui.
On 10/16/18 8:36 AM, Shrikant Bhende wrote:
Hi Adrian,
Thanks for your reply.
The OS is CentOS 6.7 on AWS EC2.
This happens when the system starts copying data for the biggest table,
so just to reconfirm I took a pg_dump with -Fp for that single
table and tried to restore it into PG
Rob Sargent writes:
> Should reality be half again as large as the estimated row count?
> coon=# select count(*) from sui.segment;
> count
> --
> 49,942,837 -- my commas
> (1 row)
> coon=# vacuum (analyse, verbose) sui.probandset;
Uh, what does sui.probandset have to do with sui.s
Should reality be half again as large as the estimated row count?
coon=# select count(*) from sui.segment;
  count
----------
 49,942,837   -- my commas
(1 row)
coon=# vacuum (analyse, verbose) sui.probandset;
INFO: vacuuming "sui.probandset"
INFO: scanned index "probandset_pkey" to remove 31
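[Editor's note: the planner's stored estimate can be compared with the actual count directly; a sketch, assuming the schema names from the post:]

```sql
-- Estimated row count kept by the planner (updated by VACUUM/ANALYZE):
SELECT relname, reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'sui.segment'::regclass;

-- Exact (but expensive) count for comparison:
SELECT count(*) AS actual_rows FROM sui.segment;
```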
Hi Adrian,
Thanks for your reply.
The OS is CentOS 6.7 on AWS EC2.
This happens when the system starts copying data for the biggest table, so
just to reconfirm I took a pg_dump with -Fp for that single table and
tried to restore it into a PG cluster, which was successful, and then
again whe
If they are visible in pg_stat_activity, then yes. Likewise, if they are only
visible in pgbouncer, then no.
The last query being 'discard all' means that PgBouncer has returned the
connection to the pool to make it available to the next client that needs it.
So what you're seeing sounds to me l
On 16/10/18 at 12:12, Nicola Contu wrote:
> 2492534808 | dev | 7355 | 1833427130 | pgbouncer |
> | 10.151.2.145 | | 60570 |
> 2018-10-16 14:13:05.151015+00 | |
> 2018-10-16 15:10:40.309993+00 | 2018-10-1
2492534808 | dev | 7355 | 1833427130 | pgbouncer | | 10.151.2.145 | | 60570 |
2018-10-16 14:13:05.151015+00 | | 2018-10-16 15:10:40.309993+00 |
2018-10-16 15:10:40.310038+00 | Client | ClientRead | idle |
OK, thanks, I will try that.
But do you know if there is any way to avoid this?
vm.swappiness? Or anything in the postgres conf?
On Tue, 16 Oct 2018 at 15:17, Bob Jolliffe
wrote:
> I guess you can run swapoff (followed by swapon). That will free up
> whatever is currently swapped.
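[Editor's note: on the postgres side there is no setting for this; the usual lever is the kernel's vm.swappiness. A sketch of a sysctl fragment; the file name and value are conventional choices, not from the thread:]

```
# /etc/sysctl.d/99-swappiness.conf  (hypothetical file name)
# Lower values make the kernel less eager to swap out idle pages.
vm.swappiness = 10
```

Loaded at boot, or applied immediately with `sysctl --system`.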
On 10/16/18 7:29 AM, Shrikant Bhende wrote:
Hi Adrian,
It's a PostgreSQL binary, and the installer was downloaded from the
EnterpriseDB site.
Binary version: psql (PostgreSQL) 9.6.10
The command to restore the dump is:
./psql -p 5434 -d cloud -f
Hmm.
What OS is this?
Does the error always happen in th
Hi Adrian,
It's a PostgreSQL binary, and the installer was downloaded from the
EnterpriseDB site.
Binary version: psql (PostgreSQL) 9.6.10
The command to restore the dump is:
./psql -p 5434 -d cloud -f
Thanks
On Tue, Oct 16, 2018 at 2:55 AM Adrian Klaver
wrote:
> On 10/15/18 8:10 AM, Shrikant Bhende wro
On 16/10/18 at 09:59, Scot Kreienkamp wrote:
> Are you sure they’re actually waiting? Don’t forget 10.5 will show the
> last query executed even if the connection is idle. I believe discard
> all would be the last command the pgbouncer would send to the database
> when the client is done as
On 10/16/18 2:27 AM, csusza wrote:
Hi,
Thanks for the replies and the help, but unfortunately simply adding and
enabling the missing flags, as stated here, did not help my case:
Did you get the same errors?
In order to allow the code to compile with
OpenSSL 1.1.0, all the flags m
I guess you can run swapoff (followed by swapon). That will free up
whatever is currently swapped. Beware: if the system is actively
swapping, swapoff can take some time. But that seems not to be the
case here.
On Tue, 16 Oct 2018 at 10:48, Nicola Contu wrote:
>
> No it is not probably used, because
Are you sure they're actually waiting? Don't forget 10.5 will show the last
query executed even if the connection is idle. I believe 'discard all' would be
the last command pgbouncer sends to the database when the client is
done, as it resets the connection for the next client. So what
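[Editor's note: the point above can be checked directly in pg_stat_activity; a minimal sketch listing only sessions that are not sitting idle, with how long each has been in its current state:]

```sql
SELECT pid, state, wait_event_type, wait_event,
       now() - state_change AS time_in_state,
       query
FROM pg_stat_activity
WHERE state <> 'idle';
```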
Rijo Roy wrote:
> Is there a possibility to disable auto analyze if we set
> autovacuum_analyze_scale_factor
> to 100% and autovacuum_analyze_threshold = 0?
Why would you want that?
You are trying very hard to hurt yourself by disabling autovacuum;
maybe there is a better way to solve your prob
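[Editor's note: if the goal really is to suppress auto-analyze for one table, this can be expressed as per-table storage parameters instead of global settings. A sketch; the table name is hypothetical, and the values are the documented maxima:]

```sql
-- Effectively suppress auto-analyze on a single table:
ALTER TABLE some_table SET (
    autovacuum_analyze_scale_factor = 100,
    autovacuum_analyze_threshold    = 2147483647
);
```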
Hello,
is this normal? can anyone help?
Thanks a lot for your help in advance.
Nicola
On Wed, 10 Oct 2018 at 17:03, Nicola Contu <
nicola.co...@gmail.com> wrote:
> Hello,
> we are running pgbouncer 1.9.1 connected to postgres 10.5
>
> Sometimes we are seeing a lot of waiting con
Thanks, Adrian, for sharing the facts and docs.
Is there a possibility to disable auto-analyze if we set
autovacuum_analyze_scale_factor to 100% and autovacuum_analyze_threshold = 0?
Thanks, Rijo Roy
Sent from Yahoo Mail on Android
On Mon, 15 Oct 2018 at 6:48 pm, Adrian Klaver
wrote: On
No, it is probably not used, because I can't find it in any way, as I said.
I ran your commands:
[root@usnyh-cmd1 ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  2  750
Nicola Contu wrote:
> we are running Postgres 10.5 with master slave replication.
>
> This server is on Centos 7 and the strange thing is that we see a lot of swap
> usage :
>
> [root@usnyh-cmd1 ~]# free -m
>               total        used        free      shared  buff/cache   available
> Mem
Hi,
Thanks for the replies and the help, but unfortunately simply adding and
enabling the missing flags, as stated here, did not help my case:
> In order to allow the code to compile with
> OpenSSL 1.1.0, all the flags mentioned above need to be enabled in
> pg_config.h.win32
https:/
Are you sure that swap is actively in use? Maybe it was just used during a
backup or something.
Look at SwapIn/SwapOut (si/so); they should be '0'.
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo
On 16/10/2018 10:18, Laurenz Albe wrote:
Moreno Andreo wrote:
Now, 2 questions.
1. Is that all, or do I need to adjust something else about permissions,
indexes, vacuuming, etc.?
ALTERing the database namespace means copying its physical files to a new
directory, but is it applied to all objec
Hello,
we are running Postgres 10.5 with master slave replication.
These are our custom params
archive_command = 'pgbackrest --stanza=cmdprod archive-push %p' # command
to use to archive a logfile segment
archive_mode = on # enables archiving; off, on, or always
checkpoint_completion_target = 0.7
Moreno Andreo wrote:
> Now, 2 questions.
> 1. Is that all, or do I need to adjust something else about permissions,
> indexes, vacuuming, etc.?
> ALTERing the database namespace means copying its physical files to a new
> directory, but is it applied to all objects (indexes, triggers, etc.)?
All obj
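[Editor's note: assuming the thread refers to moving a database to another tablespace, the statement under discussion looks like the following; the database and tablespace names are hypothetical:]

```sql
-- Requires that no session (including your own) is connected to mydb:
ALTER DATABASE mydb SET TABLESPACE new_ts;
```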
On 15/10/2018 10:46, Joshua White wrote:
Hi all,
I'm hoping someone can point me in the right direction. I've got a
PostgreSQL 10 server instance on CentOS 6, which I set up and manage. I
have full admin rights on this machine, so I can access logs, etc.
Recently I attempted to create a new
Please don't top post.
Joshua White wrote:
> On Mon, 15 Oct 2018 at 21:13, Laurenz Albe wrote:
> > Joshua White wrote:
> > > I'm hoping someone can point me in the right direction. I've got a
> > > PostgreSQL 10 server
> > > instance on CentOS 6, which I set up and manage. I have full admin righ
On Mon, Oct 15, 2018 at 9:52 PM Adrian Klaver
wrote:
> On 10/15/18 8:56 AM, Durgamahesh Manne wrote:
>
>
> > I request the community members to provide built-in BDR v3
> > replication for the public, as multimaster replication is a high priority
> > compared with other DBMSs
>
> BDR v3 is thir
Thanks for the tip. I've checked, and the on-disk file behind "pg_index"
still exists.
I have existing databases in this cluster that I'd prefer not to drop and
recreate if possible.
I've tried dropping and recreating the new database I want to use, but each
time I get the same type of error.
Kind