Hi All,
Currently I'm trying to configure Postgres (on CentOS) to load data from MS
SQL Server (on Windows Server) over an encrypted connection.
I tried tds_fdw, but found that it doesn't support encrypted connections.
Is there any solution to this?
Thanks,
Soni.
Thank you for the information.
I didn't know about the "CREATE ROUTINE MAPPING" thread.
In my development, it may be necessary to push down functions, whether
they exist only on the remote side or on both the remote and local sides.
Now I understand the community's concerns about function pushdown.
I will investigate more, and if needed I will crea
On Thu, Jan 9, 2020, 21:47 github kran wrote:
>
>
> On Wed, Jan 8, 2020 at 11:03 PM Michael Lewis wrote:
>
>> On Wed, Jan 8, 2020 at 8:52 PM github kran wrote:
>>
>>> You are right about RDS, but I believe the problem is on Aurora PostgreSQL,
>>> where pglogical throws an error during installati
Hi Matthias,
On Thu, Jan 9, 2020, 20:21 Matthias Apitz wrote:
> Hello,
>
> We encounter the following problem with ESQL/C: Imagine a table with two
> columns: CHAR(16) and DATE
>
> The CHAR column can contain not only 16 bytes, but 16 Unicode chars,
> which are longer than 16 bytes if one or mor
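The character-versus-byte mismatch is easy to see from SQL itself; a minimal
sketch (the literal is just an illustrative 16-character string containing
multibyte UTF-8 characters):

    -- 16 characters, but more than 16 bytes in a UTF-8 database:
    SELECT char_length('üüüü123456789012')  AS chars,  -- 16
           octet_length('üüüü123456789012') AS bytes;  -- 20

So a host-variable buffer sized as char[16+1] is too small for such values;
it needs to be sized for the maximum byte length, not the character count.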
On 1/10/20 3:54 AM, Soni M wrote:
> Hi All,
> Currently I'm trying to configure Postgres (on CentOS) to load data from
> MS SQL Server (on Windows Server) over an encrypted connection.
> I tried tds_fdw, but found that it doesn't support encrypted connections.
> Is there any solution to this?
Take i
By loading data, do you mean this is a one-time deal, or will it be used to
refresh data stored in the PostgreSQL database?
A possible solution would be to set up a VPN tunnel or an IPsec connection to
the server, then run the FDW through that connection. Not ideal, and it will
slow things down.
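For illustration, an SSH tunnel gives the same effect as a VPN here; a minimal
sketch, assuming a reachable gateway host (the host names, the server name
mssql_svr, and the credentials are all placeholders; servername/port/database
and username/password are the usual tds_fdw options):

    -- First, on the CentOS host, forward the SQL Server port over SSH:
    --   ssh -N -L 1433:mssql.internal:1433 user@gateway.example.com
    -- Then point tds_fdw at the local end of the tunnel:
    CREATE EXTENSION IF NOT EXISTS tds_fdw;
    CREATE SERVER mssql_svr
        FOREIGN DATA WRAPPER tds_fdw
        OPTIONS (servername '127.0.0.1', port '1433', database 'mydb');
    CREATE USER MAPPING FOR CURRENT_USER
        SERVER mssql_svr
        OPTIONS (username 'sa', password 'secret');

The traffic between the two tunnel endpoints is then encrypted even though
tds_fdw itself is not.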
The other option is to u
> On Jan 8, 2020, at 7:52 AM, stan wrote:
>
> On Tue, Jan 07, 2020 at 12:20:12PM -0900, Israel Brewster wrote:
>>> On Jan 7, 2020, at 12:15 PM, Alan Hodgson wrote:
>>>
>>> On Tue, 2020-01-07 at 11:58 -0900, Israel Brewster wrote:
>
Really? Why? With the update I am only changing data
I have two databases that are clustered. One is my primary (DB1) and the
other is my secondary (DB2). Both have the same tables and schemas. Could I
use pg_repack against each of these separately (I want to do this table by
table) to clean up dead space that hasn't been returned? Shoul
On 1/10/20 10:01 AM, dagamier wrote:
> I have two databases that are clustered. One is my primary (DB1) and the
> other is my secondary (DB2). Both have the same tables and schemas. Could I
> use pg_repack against each of these separately (I want to do this table by
> table) to clean up dead