Re: very slow largeobject transfers through JDBC

2018-09-06 Thread Mate Varga
After inlining the data, performance issues have been solved. Thanks for
the help.

On Mon, Sep 3, 2018 at 9:57 PM Mate Varga  wrote:

> Thanks,
> 1) we'll try to move stuff out from LOBs
> 2) we might raise a PR for the JDBC driver
>
> Mate
>
> On Mon, 3 Sep 2018, 19:35 Dave Cramer,  wrote:
>
>>
>>
>> On Mon, 3 Sep 2018 at 13:00, Mate Varga  wrote:
>>
>>> More precisely: when fetching 10k rows, the JDBC driver just does a large
>>> bunch of socket reads. With LOBs, it's ping-pong: one read, one write per
>>> LOB...
>>>
>>>
>> Ok, this is making more sense. In theory we could fetch them all, but
>> since they are LOBs we could run out of memory.
>>
>> Not sure what to tell you at this point. I'd entertain a PR if you were
>> motivated.
>>
>> Dave Cramer
>>
>> da...@postgresintl.com
>> www.postgresintl.com
>>
>>
>>
>>>
>>> On Mon, Sep 3, 2018 at 6:54 PM Mate Varga  wrote:
>>>
 So I have detailed profiling results now. Basically it takes very long
 because, for each blob, the JDBC driver reads from the socket and then
 creates the byte array on the Java side. Then it reads the next blob, and
 so on. I guess this takes many network round trips.
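
One way to avoid the per-object round trips without changing the schema
(a sketch only; the table and column names are hypothetical, and lo_get()
requires PostgreSQL 9.4 or later) is to have the server inline the bytes
into the result set:

    -- single round trip: the server reads each large object and returns
    -- its contents as bytea in the result set, instead of the client
    -- opening and reading every object through separate lo_* calls
    SELECT id, lo_get(lob_oid) AS payload FROM my_table;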

 On Mon, Sep 3, 2018 at 5:58 PM Dave Cramer  wrote:

>
> On Mon, 3 Sep 2018 at 10:48, Mate Varga  wrote:
>
>> That's 1690 msec (1.69 seconds), and that is how long it takes to
>> fetch 20k (small-ish) rows without LOBs (the LOBs are a few lines below
>> on the screenshot).
>>
>
> that sounds high as well!
>
> Something isn't adding up..
>
>
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
>
>
>>
>> On Mon, Sep 3, 2018 at 4:40 PM Dave Cramer  wrote:
>>
>>> the one you have highlighted ~1.69ms
>>>
>>> Dave Cramer
>>>
>>> da...@postgresintl.com
>>> www.postgresintl.com
>>>
>>>
>>> On Mon, 3 Sep 2018 at 10:38, Mate Varga  wrote:
>>>
 Which frame do you refer to?

 On Mon, Sep 3, 2018 at 3:57 PM Dave Cramer 
 wrote:

> Not sure why reading from a socket is taking 1ms ?
>
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
>
> On Mon, 3 Sep 2018 at 09:39, Mate Varga  wrote:
>
>> Hi,
>>
>> https://imgur.com/a/ovsJPRv -- I've uploaded the profiling info
>> (as an image, sorry). It seems this is a JDBC-level problem. I understand
>> that the absolute timing is not meaningful at all because you don't know
>> how large the result set is, but I can tell that this is only a few
>> thousand rows plus a few thousand large objects, each around 1 kB.
>> (Yes, I know this is not a proper use of LOBs -- it's a legacy db
>> structure that's hard to change.)
>>
>> Thanks.
>> Mate
>>
>> On Mon, Sep 3, 2018 at 11:52 AM Mate Varga 
>> wrote:
>>
>>> Hey,
>>>
>>> we'll try to test this with pure JDBC versus hibernate. Thanks!
>>>
>>>
>>> On Mon, Sep 3, 2018 at 11:48 AM Dave Cramer 
>>> wrote:
>>>


 On Mon, 3 Sep 2018 at 03:55, Mate Varga 
 wrote:

> Basically there's a class with a byte[] field, the class is
> mapped to table T and the byte field is annotated with @Lob so it 
> goes to
> the pg_largeobject table.
>

 Ah, so Hibernate is in the mix. I wonder if that is causing
 some challenges?


> The DB is on a separate host but relatively close to the app,
> and I can reproduce the problem locally as well. One interesting bit is
> that turning off SSL between the app and PostgreSQL speeds things up by
> at least 50%.
>
> Ah, one addition -- the binary objects are encrypted, so their
> entropy is very high.
>
> Any chance you could write a simple non-Hibernate test to
 time the code?

 Dave Cramer

 dave.cra...@crunchydata.ca
 www.crunchydata.ca



> Mate
>
> On Sun, Sep 2, 2018 at 12:55 AM Dave Cramer 
> wrote:
>
>>
>>
>>
>> On Fri, 31 Aug 2018 at 10:15, Mate Varga 
>> wrote:
>>
>>> I see -- we could try that, though we're mostly using an ORM
>>> (Hibernate) to do this. Thanks!
>>>
>>> On Fri, Aug 31, 2018 at 3:57 PM Dmitry Igrishin <
>>> dmit...@gma

Re: How to install pgAgent on Windows for postgresql-bigsql-10.5

2018-09-06 Thread Dave Page
Hi

On Thu, Sep 6, 2018 at 6:23 AM, jimmy  wrote:

> I used PostgreSQL-10.5-1-win64-bigsql.exe to install the PostgreSQL
> database. How do I install pgAgent on Windows for postgresql-bigsql-10.5?
> I have been searching articles about installing pgAgent, but they do not
> work.
> I found that there is no Windows version of pgAgent on the website
> www.pgadmin.org, and the PostgreSQL-10.5-1-win64-bigsql.exe installer does
> not include pgAgent either.
> When I execute CREATE EXTENSION pgagent, it throws ERROR:  could not open
> extension control file
> "H:/PostgreSQL/pg10/../pg10/share/postgresql/extension/pgagent.control":
> No such file or directory.
> How can I resolve these problems?
> Thank you.
>

If you use the EDB PostgreSQL installers, you can install pgAgent using
StackBuilder. For BigSQL you'll probably have to build and install it
manually from source (which isn't exactly easy on Windows, and depends on
what is included with BigSQL and how it is packaged - which I know nothing
about).
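
A quick way to check whether the extension files are actually present
before running CREATE EXTENSION (a generic catalog query, not specific to
BigSQL):

    -- lists pgagent if its control file is present in the extension
    -- directory; returns no row when pgagent.control is missing
    SELECT name, default_version, installed_version
    FROM pg_available_extensions
    WHERE name = 'pgagent';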


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Re: How to install pgAgent on Windows for postgresql-bigsql-10.5

2018-09-06 Thread jimmy
Is EnterpriseDB open source and free? Does this database have any limits or
restrictions?








Re: very slow largeobject transfers through JDBC

2018-09-06 Thread Dave Cramer
Hi

Can you be more explicit about how you fixed the problem?

Thanks
Dave Cramer

da...@postgresintl.com
www.postgresintl.com


On Thu, 6 Sep 2018 at 03:46, Mate Varga  wrote:

> After inlining the data, performance issues have been solved. Thanks for
> the help.

Re: Re: How to install pgAgent on Windows for postgresql-bigsql-10.5

2018-09-06 Thread Dave Page
On Thu, Sep 6, 2018 at 9:41 AM, jimmy  wrote:

> Is EnterpriseDB open source and free? Does this database have any limits
> or restrictions?
>

The EDB PostgreSQL installers are free, open source, and have no limits or
restrictions. They are the very first option at
https://www.postgresql.org/download/windows/





-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: very slow largeobject transfers through JDBC

2018-09-06 Thread Mate Varga
Hi,

summarizing:
we had a table that had an OID column referencing an object in
pg_largeobject. This was mapped to a (Java) entity with a byte array field
annotated with @Lob. The problem was that we were fetching thousands of
these entities in one go, and LOB fetching is not batched by Hibernate/JDBC
(each row's LOB is fetched separately). Because we were abusing LOBs (they
were small, often less than 10 kB), we chose to move the binary blobs
from the LO table to a simple bytea column. So the entity that had a byte
array field mapped to an OID column now has a byte array field mapped to a
bytea column, and we manually moved the data from the LO table to the
bytea column. Now Hibernate/JDBC fetches all the content we need in
batches. A quick benchmark: fetching 20k rows used to take 7 seconds (250
msec query execution time, 6.7 sec for transfer) and now it takes 1.5
seconds (250 msec query + 1.3 sec transfer).
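
A sketch of that migration at the SQL level (table and column names are
hypothetical; lo_get() requires PostgreSQL 9.4 or later):

    -- add the inline column and copy each large object's bytes into it
    ALTER TABLE my_table ADD COLUMN payload bytea;
    UPDATE my_table SET payload = lo_get(lob_oid);
    -- once verified, remove the large objects and drop the old OID column
    SELECT lo_unlink(lob_oid) FROM my_table WHERE lob_oid IS NOT NULL;
    ALTER TABLE my_table DROP COLUMN lob_oid;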

Regards,
Mate

On Thu, Sep 6, 2018 at 10:56 AM Dave Cramer  wrote:

> Hi
>
> Can you be more explicit about how you fixed the problem?
>
> Thanks
> Dave Cramer
>
> da...@postgresintl.com
> www.postgresintl.com
>
>
> On Thu, 6 Sep 2018 at 03:46, Mate Varga  wrote:
>
>> After inlining the data, performance issues have been solved. Thanks for
>> the help.

Re: very slow largeobject transfers through JDBC

2018-09-06 Thread Dave Cramer
Hi Mate,

Thanks for the detailed response. This will help others in the same
situation.

Dave Cramer

da...@postgresintl.com
www.postgresintl.com



Re: timestamp arithmetics in C function

2018-09-06 Thread Adrian Klaver

On 09/03/2018 09:11 AM, Lutz Gehlen wrote:

Hello all,

unfortunately, I have so far not received a reply to my question
below. I am well aware that no one has an obligation to reply; I was
just wondering whether I phrased my question badly or whether there
is anything else I could do to improve it.


Caveat: I am not a C programmer, so I cannot comment on the correctness
of the code. The question and its phrasing look alright to me, though.
Your most recent post landed on a holiday (Labor Day) here in the States
and therefore may have gotten lost in the return to work on Tuesday.
Hopefully someone more knowledgeable than I will see this and comment on
the C portion of your post.




Thanks for your help and best wishes,
Lutz


On Friday, 10.08.2018 09:05:40 Lutz Gehlen wrote:

Hello all,

I am trying to implement a C function that accepts a date ("date"
in the sense of a kind of information, not a Postgres datatype)
as a parameter (among others) and returns a certain point in time.
(The background is calculating the time of dawn and dusk on the
given date.) Ideally, I would like to accept a timestamp value
and return another timestamp as the result. I have implemented the
function, but I would like to ask for advice on whether my
implementation is the recommended way to achieve this.

To get started - since this is my first attempt at a C function in
Postgres - I implemented a function that accepts the date as
three separate int32 values for year, month, and day and returns
the time of dawn as a float8 giving the minutes since midnight (this
is what the implemented algorithm internally returns anyway):


PG_FUNCTION_INFO_V1(dawn_utc);

Datum dawn_utc(PG_FUNCTION_ARGS) {
    float8 lat   = PG_GETARG_FLOAT8(0);
    float8 lon   = PG_GETARG_FLOAT8(1);
    int32  year  = PG_GETARG_INT32(2);
    int32  month = PG_GETARG_INT32(3);
    int32  day   = PG_GETARG_INT32(4);
    float8 solar_depression = PG_GETARG_FLOAT8(5);

    // postgres-independent computation goes here
    float8 dawn_utc =
        calc_dawn_utc(lat, lon, year, month, day, solar_depression);

    PG_RETURN_FLOAT8(dawn_utc);
}
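
For reference, a sketch of the SQL declaration that would expose such a
C function (the shared library name 'dawn' is hypothetical):

    CREATE FUNCTION dawn_utc(float8, float8, int, int, int, float8)
    RETURNS float8
    AS 'dawn', 'dawn_utc'   -- shared library, link symbol
    LANGUAGE C STRICT;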


This works fine. However, it would be more convenient if the
function would accept a date or timestamp value and return a
timestamp. So I modified the first part of the function like
this, based on code snippets I found in the postgres source code:


PG_FUNCTION_INFO_V1(dawn_utc);

Datum dawn_utc(PG_FUNCTION_ARGS) {
    float8 lat = PG_GETARG_FLOAT8(0);
    float8 lon = PG_GETARG_FLOAT8(1);
    Timestamp timestamp = PG_GETARG_TIMESTAMP(2);
    float8 solar_depression = PG_GETARG_FLOAT8(3);

    struct pg_tm tt;
    struct pg_tm *tm = &tt;
    fsec_t fsec;

    if (timestamp2tm(timestamp, NULL, tm, &fsec, NULL, NULL) != 0)
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("timestamp out of range")));

    // postgres-independent computation goes here
    float8 dawn_utc =
        calc_dawn_utc(lat, lon, tm->tm_year, tm->tm_mon, tm->tm_mday,
                      solar_depression);


For the second part of the function, I now have to add the
calculated number of minutes to the date portion of the timestamp
variable. One has to be aware that, depending on the geographic
location, dawn_utc can be negative or larger than 1440
(i.e. 24h). I am not sure whether I should construct an interval
value from the number of minutes and add that to the timestamp. I
have not figured out how to do this, but decided to calculate a
new timestamp in a more fundamental way:


    tm->tm_sec  = 0;
    tm->tm_min  = 0;
    tm->tm_hour = 0;
    Timestamp result;
    if (tm2timestamp(tm, 0, NULL, &result) != 0)
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("timestamp out of range")));

#ifdef HAVE_INT64_TIMESTAMP
    /* timestamp is microseconds since 2000 */
    result += dawn_utc * USECS_PER_MINUTE;
#else
    /* timestamp is seconds since 2000 */
    result += dawn_utc * (double) SECS_PER_MINUTE;
#endif

    PG_RETURN_TIMESTAMP(result);


Again this code is based on what I found in the source code. It
seems to work correctly (at least on my development machine), but
I am wondering whether this is a safe and recommended way to
achieve this result or whether it is considered bad practice to
manipulate a timestamp at such a fundamental level.

Thank you for your advice and best wishes,
Lutz







--
Adrian Klaver
adrian.kla...@aklaver.com
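
For comparison, the second half of that computation expressed at the SQL
level (a sketch; 1502.5 stands in for whatever minutes-since-midnight value
calc_dawn_utc produces):

    -- truncate the timestamp to midnight, then add a possibly fractional
    -- (or negative) number of minutes; the result may cross a date boundary
    SELECT date_trunc('day', TIMESTAMP '2018-08-10 09:05:40')
           + interval '1 minute' * 1502.5;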



nested query problem

2018-09-06 Thread David Gauthier
Hi:

I'm having trouble with this query...

select sr.sqf_id, sr.status, fse.perl_sub_name, fse.end_datetime
  from
sqf_runs sr,
(select perl_sub_name, end_datetime from flow_step_events_view where
sqf_id = sr.sqf_id order by 2 limit 1) fse
  where sr.userid='foo';

ERROR:  invalid reference to FROM-clause entry for table "sr"
LINE 4: ...atetime from flow_step_events_view where sqf_id = sr.sqf_id ...
 ^
HINT:  There is an entry for table "sr", but it cannot be referenced from
this part of the query.


If this is indeed a foul, how can I accomplish the same thing?

Thanks in Advance !


Re: nested query problem

2018-09-06 Thread Paul Jungwirth

On 09/06/2018 01:59 PM, David Gauthier wrote:

I'm having trouble with this query...

select sr.sqf_id, sr.status, fse.perl_sub_name, fse.end_datetime
   from
     sqf_runs sr,
     (select perl_sub_name, end_datetime from flow_step_events_view 
where sqf_id = sr.sqf_id order by 2 limit 1) fse

   where sr.userid='foo';

ERROR:  invalid reference to FROM-clause entry for table "sr"
LINE 4: ...atetime from flow_step_events_view where sqf_id = sr.sqf_id ...
                                                              ^
HINT:  There is an entry for table "sr", but it cannot be referenced 
from this part of the query.


This calls for a lateral join:

SELECT  sr.sqf_id, sr.status, fse.perl_sub_name, fse.end_datetime
FROM    sqf_runs sr
LEFT OUTER JOIN LATERAL (
  SELECT  perl_sub_name, end_datetime
  FROM    flow_step_events_view fsev
  WHERE   fsev.sqf_id = sr.sqf_id
  ORDER BY 2
  LIMIT 1
) fse
ON      true
WHERE   sr.userid = 'foo'
;

It's nearly what you had already, but `LATERAL` lets the subquery 
reference columns in the other tables.


A lateral join is conceptually a lot like running your subquery in a for 
loop, looping over all the rows produced by the rest of the query. It 
doesn't have to produce 1 row for each iteration, but saying `LIMIT 1` 
ensures that here.


The `ON true` is just pro forma because you can't have a join without an 
`ON` clause.


You might prefer an INNER JOIN LATERAL, depending on your needs.

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: nested query problem

2018-09-06 Thread Tom Lane
David Gauthier  writes:
> I'm having trouble with this query...

> select sr.sqf_id, sr.status, fse.perl_sub_name, fse.end_datetime
>   from
> sqf_runs sr,
> (select perl_sub_name, end_datetime from flow_step_events_view where
> sqf_id = sr.sqf_id order by 2 limit 1) fse
>   where sr.userid='foo';

> ERROR:  invalid reference to FROM-clause entry for table "sr"
> LINE 4: ...atetime from flow_step_events_view where sqf_id = sr.sqf_id ...
>  ^
> HINT:  There is an entry for table "sr", but it cannot be referenced from
> this part of the query.

If you actually meant to access the outer "sqf_runs" RTE from the
sub-query, you need to mark it LATERAL:

select sr.sqf_id, sr.status, fse.perl_sub_name, fse.end_datetime
  from
sqf_runs sr,
lateral (select perl_sub_name, end_datetime from flow_step_events_view where
sqf_id = sr.sqf_id order by 2 limit 1) fse
  where sr.userid='foo';

regards, tom lane



Re: nested query problem

2018-09-06 Thread David Gauthier
Not quite.  This returns one value. In the actual "sqf_runs" table, there
are many records with user_id = 'foo'.  I want one line for each, where the
fse.perl_sub_name and fse.end_datetime values are the latest values found
in the flow_step_events_view view where the sqf_ids match.
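
Note that `ORDER BY 2` sorts by end_datetime ascending, so the subquery
picks the earliest event per sqf_id; if the latest values are what's
wanted, sort descending -- a sketch based on Paul's query:

    SELECT sr.sqf_id, sr.status, fse.perl_sub_name, fse.end_datetime
    FROM   sqf_runs sr
    LEFT OUTER JOIN LATERAL (
      SELECT perl_sub_name, end_datetime
      FROM   flow_step_events_view fsev
      WHERE  fsev.sqf_id = sr.sqf_id
      ORDER BY end_datetime DESC
      LIMIT 1
    ) fse ON true
    WHERE  sr.userid = 'foo';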



Re: nested query problem

2018-09-06 Thread David Gauthier
Wow, I take that back.  I thought there were many recs with "foo" but there
was only one.
When I ran this against a value that actually had multiple records, it ran
fine.

Sorry for that.
And thanks for this query!




bad url in docs

2018-09-06 Thread Rob Sargent

Version 10

33.18. SSL Support

The pointers off to hp.com seem to have gone away on or about
28 Aug 2018. They also fall under the heading of HP OpenVMS Systems Doc,
which may explain why they've disappeared.






Re: bad url in docs

2018-09-06 Thread Rob Sargent

Sorry.  I didn't see the specific form for documentation issues.











pgbackrest when data/base is symlinked to another volume

2018-09-06 Thread Ron

Hi,

Will pgbackrest properly back up and restore the cluster if data/base,
data/pg_xlog and data/pg_log are symlinks?


PGDATA=/var/lib/pgsql/9.6/data
$PGDATA/base -> /Database/9.6/base
$PGDATA/pg_log -> /Database/9.6/pg_log
$PGDATA/pg_xlog -> /Database/9.6/pg_xlog

(I'm not just defining data_directory because the DBAs are used to looking 
in $PGDATA and seeing all the relevant files.)


Thanks
--
Angular momentum makes the world go 'round.



Re: scram-sha-256 authentication broken in FIPS mode

2018-09-06 Thread Alessandro Gherardi
Hi Michael,

I'm attaching the output of diff for each modified file.
>  If we could prove that sha2-openssl.c is actually
>  unreliable even if FIPS is enabled system-wide with either SCRAM
>  authentication or any of the other hashing functions, then I would be
>  ready to accept a patch.  Now, as far as I can see and heard from other
>  folks for at least Linux, if FIPS is enabled at the OS level, then
>  Postgres would use it automatically and SCRAM is able to work.

Not sure why it works on Linux but not on Windows. That the low-level digest
APIs can't be used when FIPS is enabled is by design; other people have
encountered that problem, e.g.,
http://openssl.6102.n7.nabble.com/Low-Level-Digest-if-Fips-mode-td54983.html

Thanks,
Alessandro


be-secure-openssl.c.diff
Description: Binary data


fe-secure-openssl.c.diff
Description: Binary data


sha2.h.diff
Description: Binary data


sha2_openssl.c.diff
Description: Binary data