On 04/11/2018 05:36 PM, Jonathan Leroy - Inikup wrote:
2018-04-12 1:49 GMT+02:00 Jonathan Leroy - Inikup :
> Maybe I will try to "dissect" pg_wrapper to find where the error
> occurs, but the script is quite complex...
OK, I've found the issue : pg_wrapper is trying to get the value of
the "data_directory" setting by reading the postgresql.conf file.
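The real pg_wrapper is a Perl script, but the failure mode can be sketched in shell. Everything below (the default path, the helper name pg_datadir) is a hypothetical reconstruction for illustration, not pg_wrapper's actual code:

```shell
# Hypothetical sketch of the data_directory lookup: read the setting
# from postgresql.conf, then check that the directory is visible to
# the current user. An unprivileged user for whom either step fails
# would see an error much like "Error: Invalid data directory".
pg_datadir() {
    conf="$1"
    # Extract the quoted value of data_directory, ignoring comments.
    datadir=$(sed -n \
      "s/^[[:space:]]*data_directory[[:space:]]*=[[:space:]]*'\([^']*\)'.*/\1/p" \
      "$conf" 2>/dev/null)
    if [ -z "$datadir" ] || [ ! -d "$datadir" ]; then
        echo "Error: Invalid data directory" >&2
        return 1
    fi
    echo "$datadir"
}
# Usage: pg_datadir /etc/postgresql/9.6/main/postgresql.conf
```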
On 04/11/2018 04:49 PM, Jonathan Leroy - Inikup wrote:
2018-04-12 1:38 GMT+02:00 Adrian Klaver :
Another thought do you have the PGDATA environment variable set for your
user1?
Nope.
Maybe I will try to "dissect" pg_wrapper to find where the error
occurs, but the script is quite complex...
2018-04-12 1:32 GMT+02:00 Adrian Klaver :
> What are the permissions on /usr/bin/psql?
>
> On my Ubuntu 16.04 instance I have:
>
> lrwxrwxrwx 1 root root 37 Feb 8 04:55 /usr/bin/psql ->
> ../share/postgresql-common/pg_wrapper*
I have exactly the same permissions on my side.
I don't think it's a permissions issue.
On 04/11/2018 01:20 PM, karthik kumar wrote:
Db2 server on a Docker container on my laptop: IP address 172.17.0.4.
The Db2 client (where the Postgres server is installed) is on a different
container: IP address 172.17.0.3.
Output of db2 list node directory
Node 2 entry:
Node name
2018-04-12 0:26 GMT+02:00 Tim Cross :
> Check your settings in /etc/postgresql-common/user_clusters. The wrapper
> script uses that file to determine what databases to connect to or what
> is the user default database cluster. It can also be overridden with a
> local ~/.postgresqlrc, so check there
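For reference, a minimal user_clusters sketch. The column layout below is from memory, so verify it against `man 5 user_clusters` before relying on it; the user name and cluster versions are examples only:

```
# /etc/postgresql-common/user_clusters
# USER    GROUP   VERSION  CLUSTER  DATABASE
user1     *       9.6      main     *
*         *       10       main     *
```

A matching line here (or in ~/.postgresqlrc) determines which cluster's binaries pg_wrapper dispatches to.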
Jonathan Leroy - Inikup writes:
> Here's my issue : when I'm logged as an user which is not root or
> postgresql, I can't use any of the commands linked to pg_wrapper:
> user1@server1:~ $ /usr/bin/psql --version
> Error: Invalid data directory
There's no error message with exactly that spelling
Hi,
I'm using multiples versions of PostgreSQL from the postgresql.org
repository (http://apt.postgresql.org/) on Debian Jessie, on multiples
servers.
Each postgresql-client-XX package depends on postgresql-client-common,
which provides the pg_wrapper script
(/usr/share/postgresql-common/pg_wrapper).
> I'm not a networking guru, but it sure looks like something is trying to
> connect with an IPv6 loopback address. To me, this is the smoking gun.
Yes, that much we figured out here. The question is: why is isql able to
connect, but not the FDW via psql? In the end, both isql and the FDW should
On Wed, 11 Apr 2018, karthik kumar wrote:
This is the error we see in the ODBC trace log
DIAG [08001] [IBM][CLI Driver] SQL30081N A communication error has
been detected. Communication
protocol being used: "TCP/IP". Communication API being used: "SOCKETS".
Location where the error
This is the error we see in the ODBC trace log
[ODBC][586][1523477070.240690][__handles.c][450]
Exit:[SQL_SUCCESS]
Environment = 0x556f874716a0
[ODBC][586][1523477070.240741][SQLSetEnvAttr.c][182]
Entry:
Environment = 0x556f874716a0
Attribute = S
On 04/11/2018 11:59 AM, Kumar, Virendra wrote:
Does this apply to SELECT calls as well or only for DMLs.
Easy enough to test:
test=# \d projection
Foreign table "public.projection"
...
Server: fdw_test_server
FDW options: (schema_name 'public', table_name 'projection')
test=# select count
Does this apply to SELECT calls as well or only for DMLs.
I am planning to use postgres_fdw, but if it goes one row at a time there
will be a lot of round trips, which would defeat the purpose.
Regards,
Virendra.
From: Don Seiler [mailto:d...@seiler.us]
Sent: Wednesday, April 11, 2018 2:53 PM
To: Ad
On Wed, Apr 11, 2018 at 1:38 PM, Don Seiler wrote:
>
> Yeah, I saw the same with a 132 row insert. Now imagine that with a
> monthly 50 million row insert or delete. :p Thanks for the confirmation!
>
I went back to look at the postgres logs on my dev server. These logs are
rotated once they hit
On 04/11/2018 11:22 AM, karthik kumar wrote:
Hello Adrian
I built the FDW with the debug option and ran it with debugging enabled.
Here is the output:
postgres=# select * from odbc_testt;
DEBUG: StartTransactionCommand
DEBUG: StartTransaction
DEBUG: name: unnamed; blockState: DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
On Wed, Apr 11, 2018 at 1:33 PM, Adrian Klaver
wrote:
>
> A test case here confirms it sends individual INSERTS:
>
> test_(postgres)# insert into fdw_test_table select * from fdw_test;
> INSERT 0 3
>
> Where fdw_test_table is the remote table and fdw_test is the local one.
>
> postgres-2018-04-11
On Wed, Apr 11, 2018 at 1:13 PM, Don Seiler wrote:
>
> I had considered this as well, as this would allow me to rollback the
> delete (assuming my intel on postgres_fdw transactions was correct, which
> it may not be after all). I wondered if a remote insert would be broken up
> into individual inserts.
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Yeah, I can reproduce this. I suspect it got broken by Stephen's hacking
>> around with default ACLs. A simple example is
> Yes, it's related to the work I did with pg_dump's ACL handling, because
> we're no longer just always in
On Wed, Apr 11, 2018 at 12:58 PM, Adrian Klaver
wrote:
>
> "F.33.3. Transaction Management
>
> During a query that references any remote tables on a foreign server,
> postgres_fdw opens a transaction on the remote server if one is not already
> open corresponding to the current local transaction.
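Applied to the archiving use case in this thread, the quoted behavior means one local transaction can cover both sides. A sketch, assuming (hypothetically) that main_events_fdw is a postgres_fdw foreign table pointing at the main DB and that this runs on the archive DB:

```sql
-- The remote statements run inside a remote transaction tied to the
-- local one, so the copy and the delete commit or roll back together.
BEGIN;
INSERT INTO archive_events
    SELECT * FROM main_events_fdw
    WHERE created_at < now() - interval '60 days';
DELETE FROM main_events_fdw
    WHERE created_at < now() - interval '60 days';
COMMIT;
```

Note this is not two-phase commit: a crash at exactly the wrong moment during COMMIT can still leave the two servers inconsistent, so the delete should be idempotent or verifiable.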
On 04/11/2018 09:15 AM, Don Seiler wrote:
Let's say I have two DBs: main (9.6.6) and archive (10.2).
I have a table in main where I want to archive data older then 60 days.
For various reasons, the table is not partitioned, so for now we must
use DELETE. The destination table in the archive DB
On Wed, Apr 11, 2018 at 9:15 AM, Don Seiler wrote:
> Right now I'm thinking of falling back to the far-less-elegant method of
> dumping the data to a flat file via COPY, running psql to connect to the
> archive DB remotely and running a COPY to load the data (or maybe
> transferring the flat file
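The fallback described above can be sketched with psql's client-side \copy (table, column, and file names here are hypothetical examples); \copy runs on the client, so no server-side file access is needed:

```sql
-- On the main DB (9.6), from psql:
\copy (SELECT * FROM events WHERE created_at < now() - interval '60 days') TO '/tmp/events_old.csv' CSV

-- On the archive DB (10), from psql:
\copy events_archive FROM '/tmp/events_old.csv' CSV
```

The two steps can also be piped together (COPY ... TO STDOUT from one psql into COPY ... FROM STDIN on another) to avoid the intermediate file, at the cost of losing an easy restart point.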
Let's say I have two DBs: main (9.6.6) and archive (10.2).
I have a table in main where I want to archive data older then 60 days. For
various reasons, the table is not partitioned, so for now we must use
DELETE. The destination table in the archive DB is partitioned with the new
Pg10 partitioning
On 11.04.2018 at 15:53, camarillo wrote:
Can I do a restore of a single table or single base using the archive
generated for the basebackup without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?
No, but you can use a spare machine to restore the whole database
(point-in-time-recovery
On 04/10/2018 11:44 PM, Thiemo Kellner wrote:
Hi all
I am trying to execute a function that is not in the schema I connect to
with dblink. Is there a way to tell dblink to set search_path in a
specific way? I have not found a solution in the documentation. I tried
with the SET search_path definition in the function
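One possible workaround, as a sketch: set search_path on the remote session with dblink_exec before invoking the function. The connection string, the schema name logger, and the function name log_message are hypothetical placeholders, and the column list in the AS clause must match the function's actual return type:

```sql
SELECT dblink_connect('conn', 'dbname=target user=act');
-- Adjust the remote session's search_path so unqualified names resolve:
SELECT dblink_exec('conn', 'SET search_path TO logger, public');
SELECT * FROM dblink('conn', 'SELECT log_message(''hello'')') AS t(result text);
SELECT dblink_disconnect('conn');
```

Schema-qualifying the function in the remote query (logger.log_message(...)) avoids the extra round trip entirely.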
Hello, how are you?
Currently I am taking a basebackup of a PostgreSQL 9.5 instance because,
with its growth, a pg_dump takes a long time.
Can I do a restore of a single table or single base using the archive
generated for the basebackup without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?
2018-04-10 19:53 GMT-03:00 Andres Freund :
> On 2018-04-10 08:31:20 -0300, Alexandre Arruda wrote:
>> 2018-04-09 23:51 GMT-03:00 Peter Geoghegan :
>> > On Mon, Apr 9, 2018 at 6:56 PM, Alexandre Arruda
>> > wrote:
>> >> (... and all other indexes returns null too)
>> >>
>> >> I tried with bt_index
Raghavendra Rao J S V wrote:
> Are you suggesting me to keep "autovacuum_vacuum_cost_limit" to zero or
> " autovacuum_vacuum_scale_factor" to zero or both? Please clarify me.
autovacuum_vacuum_cost_limit is effectively 200 by default.
You could raise it to 2000 or more.
If that does not take ca
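Rather than changing the instance-wide defaults, the scale-factor and threshold values under discussion can also be set per table as storage parameters, which avoids affecting the other 1300+ tables and works on old releases such as 9.2 as well. The table name below is a hypothetical placeholder:

```sql
-- Vacuum this busy table once ~150 dead rows accumulate,
-- regardless of the table's total size:
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold    = 150
);
```

A scale factor of 0 with a fixed threshold is mainly useful for small, hot tables; applying it globally would make autovacuum very aggressive on every table.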
Last night I have inexplicably missed 2 conditions /facepalm
Now my JOIN works ok, without multiple records -
CREATE OR REPLACE FUNCTION words_stat_games(
        in_social integer,
        in_sid    text
) RETURNS TABLE (
        out_gid   integer,
Thank you very much for your prompt response.
As I mentioned in my previous mail, I am planning to set
'autovacuum_vacuum_scale_factor' to zero and 'autovacuum_vacuum_threshold'
to 150 in postgresql.conf.
Are you suggesting me to keep "autovacuum_vacuum_cost_limit" to zero or
Quoting Rene Romero Benavides:
What about setting the search path at the user level?
ALTER ROLE act SET search_path = act,logger;
Best.
Thanks for the inspiration. Maybe it is best to create a dedicated
user for logging anyway...
--
Public PGP key:
http://pgp.mit.edu/pks/
Quoting Pavel Stehule:
No, it is not possible to read/write client-side variables from the
server side.
I did not mean that the variable should be replaced by its value at
execution time, but at installation time.
--
Public PGP key:
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F7
Raghavendra Rao J S V wrote:
> We are using postgres 9.2 version on Centos operating system. We have
> around 1300+ tables.
> We have the following autovacuum settings enabled. Still, a few of the
> tables which are always busy are not vacuumed. Because of that, the tables
> are bloating, and we observed a few
Raghavendra Rao J S V wrote:
> We are using postgres 9.2 version on Centos operating system.
That's no good, 9.2 is out of support.
Use v10 or at least 9.6.
> Total RAM available is 80 GB. At present we don't have any connection
> pooling mechanism. The max number of connections allowed is 10
Hi
2018-04-11 8:51 GMT+02:00 Thiemo Kellner :
> Hi all
>
> Is there a way to pass the value of a psql variable into function code? I
> create a schema with the help of a psql variable
>
>    \set SCHEMA_NAME LOGGER
>    create schema :SCHEMA_NAME;
>
> I would like to create a function that has t
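As Pavel notes, server-side code cannot read client-side psql variables, and psql does not interpolate variables inside dollar-quoted function bodies either. One sketch of a workaround: bake the value in at installation time by building the CREATE FUNCTION statement with format() and executing it with \gexec (psql 9.6+). The function name get_log_schema is a hypothetical example:

```sql
\set SCHEMA_NAME logger
CREATE SCHEMA :"SCHEMA_NAME";
-- :'SCHEMA_NAME' expands to a properly quoted literal here because it
-- is outside any quotes; format() then embeds it in the function body.
SELECT format(
    'CREATE FUNCTION %I.get_log_schema() RETURNS text LANGUAGE sql AS $f$ SELECT %L $f$',
    :'SCHEMA_NAME', :'SCHEMA_NAME'
) \gexec
```

The resulting function returns the literal 'logger' that was fixed at creation time; changing the psql variable later has no effect on it.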
Hello David,
Sorry for answering only now, but I just saw your answer.
> To be clear, I'm the original author and primary maintainer of
pgBackRest.
I am very happy to see guys like you to take time to answer me. Thank you
> This a good feature, and one that has been requested for pgBackR
What about setting the search path at the user level?
ALTER ROLE act SET search_path = act,logger;
Best.
2018-04-11 1:44 GMT-05:00 Thiemo Kellner :
> Hi all
>
> I try to execute a function not in the Schema I connect to with dblink. Is
> there way to tell dblink to set search_path in a specific