Re: What's your experience with using Postgres in IoT-contexts?

2020-10-14 Thread Peter J. Holzer
On 2020-10-13 06:55:52 +0200, chlor wrote:
> > I want to have long term storage and access to individual telegrams
> 
> An IOT is not designed for that. It is used for control or delivery of
> data to a server.

That's a rather dogmatic and narrow-minded point of view. "IOT" means
"Internet of things". There are many things which could benefit
from network connectivity and don't necessarily need a central server, or
may even act as servers for other "things".

It all depends on the application and the "thing".

> Long term storage also means backup and recovery and I don't think you
> have that planned for your IOT.

That depends on how valuable those data are.

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


signature.asc
Description: PGP signature


Re: What's your experience with using Postgres in IoT-contexts?

2020-10-14 Thread Paul Förster
Hi Peter,

> On 14. Oct, 2020, at 14:49, Peter J. Holzer  wrote:
> 
> "IOT" means "Internet of things".

IOT also means "index organized table"...

So much for using abbreviations. :-)

Cheers,
Paul

Parameter value from (mb/gb) to bytes

2020-10-14 Thread Raul Kaubi
Hi

Is there a simple way to dynamically convert, for example, the parameter
"shared_buffers" value (megabytes or gigabytes) to bytes, from a monitoring
perspective?

At the moment, this gives me value in GB.

# psql -U postgres -Atc "show shared_buffers;"
1GB

This value may as well be in MB. So I am looking for a way to dynamically get
the value in bytes.

Regards
Raul


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Thomas Kellerer
Raul Kaubi schrieb am 14.10.2020 um 12:22:
> Is there a simple way to dynamically get for example parameter
> „shared buffers“ value (megabytes or gigabytes) to bytes, for
> monitoring perspective..?>
>  
>
> At the moment, this gives me value in GB.
>
> # psql -U postgres -Atc "show shared_buffers;"
> 1GB
>
> This value may as well be in MB. So I am looking a way to dynamically get the 
> value in bytes.

Instead of using "show" you can use a SELECT with pg_size_bytes():

  select pg_size_bytes(setting)
  from pg_settings
  where name = 'shared_buffers';


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Thomas Kellerer
Thomas Kellerer schrieb am 14.10.2020 um 15:55:
> Raul Kaubi schrieb am 14.10.2020 um 12:22:
>> Is there a simple way to dynamically get for example parameter
>> „shared buffers“ value (megabytes or gigabytes) to bytes, for
>> monitoring perspective..?>
>>  
>>
>> At the moment, this gives me value in GB.
>>
>> # psql -U postgres -Atc "show shared_buffers;"
>> 1GB
>>
>> This value may as well be in MB. So I am looking a way to dynamically get 
>> the value in bytes.
>
> Instead of using "show" you can use a SELECT with pg_size_bytes():
>
>   select pg_size_bytes(setting)
>   from pg_settings
>   where name = 'shared_buffers';

Ah, forgot that shared_buffers is in 8K pages.

So you actually need:

   select pg_size_bytes(setting) * 8192
   from pg_settings
   where name = 'shared_buffers';

bug in PG13?

2020-10-14 Thread Andreas Kretschmer

Hi all,

this looks like a bug to me. I have a partitioned table:


test=*# select version();
                                                              version
-----------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.0 (Ubuntu 13.0-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
(1 row)

test=*# \d+ kunden
                                                Partitioned table "public.kunden"
   Column   |  Type   | Collation | Nullable |              Default               | Storage  | Stats target | Description
------------+---------+-----------+----------+------------------------------------+----------+--------------+-------------
 id         | integer |           | not null | nextval('kunden_id_seq'::regclass) | plain    |              |
 kundenname | text    |           | not null |                                    | extended |              |
 datum      | date    |           | not null |                                    | plain    |              |
Partition key: HASH (kundenname)
Indexes:
    "kunden_pkey" PRIMARY KEY, btree (id, kundenname, datum)
Partitions: kunden_0 FOR VALUES WITH (modulus 4, remainder 0), PARTITIONED,
            kunden_1 FOR VALUES WITH (modulus 4, remainder 1), PARTITIONED,
            kunden_2 FOR VALUES WITH (modulus 4, remainder 2), PARTITIONED,
            kunden_3 FOR VALUES WITH (modulus 4, remainder 3), PARTITIONED

test=*# \d+ kunden_0
                                               Partitioned table "public.kunden_0"
   Column   |  Type   | Collation | Nullable |              Default               | Storage  | Stats target | Description
------------+---------+-----------+----------+------------------------------------+----------+--------------+-------------
 id         | integer |           | not null | nextval('kunden_id_seq'::regclass) | plain    |              |
 kundenname | text    |           | not null |                                    | extended |              |
 datum      | date    |           | not null |                                    | plain    |              |
Partition of: kunden FOR VALUES WITH (modulus 4, remainder 0)
Partition constraint: satisfies_hash_partition('16574'::oid, 4, 0, kundenname)
Partition key: RANGE (datum)
Indexes:
    "kunden_0_pkey" PRIMARY KEY, btree (id, kundenname, datum)
Partitions: kunden_0_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            kunden_0_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            kunden_0_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            kunden_0_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            kunden_0_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            kunden_0_default DEFAULT

test=*#
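For readers who want to reproduce this layout, here is a DDL sketch inferred from the \d+ output above; the sequence, the single year partition shown, and the default partition are assumptions filled in from that output, not the original script:

```sql
-- Sequence backing the id default, as shown in the \d+ output.
CREATE SEQUENCE kunden_id_seq;

-- Top level: hash-partitioned on kundenname. The primary key must
-- include all partition key columns of every level.
CREATE TABLE kunden (
    id         integer NOT NULL DEFAULT nextval('kunden_id_seq'),
    kundenname text    NOT NULL,
    datum      date    NOT NULL,
    PRIMARY KEY (id, kundenname, datum)
) PARTITION BY HASH (kundenname);

-- One hash partition, itself range-partitioned by datum.
CREATE TABLE kunden_0 PARTITION OF kunden
    FOR VALUES WITH (MODULUS 4, REMAINDER 0)
    PARTITION BY RANGE (datum);

-- One year-range leaf plus the default partition.
CREATE TABLE kunden_0_2015 PARTITION OF kunden_0
    FOR VALUES FROM ('2015-01-01') TO ('2016-01-01');
CREATE TABLE kunden_0_default PARTITION OF kunden_0 DEFAULT;
```

The remaining hash partitions (kunden_1 .. kunden_3) and year ranges follow the same pattern.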


This plan seems okay; partition pruning is working as expected:


test=*# explain analyse select * from kunden where kundenname = 'Kunde 11' and datum = current_date;
                                                                                  QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.15..5030.91 rows=6 width=40) (actual time=10.068..14.326 rows=0 loops=1)
   Workers Planned: 1
   Workers Launched: 1
   ->  Parallel Append  (cost=0.15..4030.31 rows=6 width=40) (actual time=0.004..0.005 rows=0 loops=2)
         Subplans Removed: 5
         ->  Parallel Index Only Scan using kunden_0_default_pkey on kunden_0_default kunden_1  (cost=0.15..20.16 rows=1 width=40) (actual time=0.008..0.008 rows=0 loops=1)
               Index Cond: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Heap Fetches: 0
 Planning Time: 0.303 ms
 Execution Time: 14.364 ms
(10 rows)



But if I switch ``parallel_leader_participation`` to off, the plan
changes for the worse:



test=*# set parallel_leader_participation to off;
SET
test=*# explain analyse select * from kunden where kundenname = 'Kunde 11' and datum = current_date;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..4833.46 rows=6 width=21) (actual time=37.188..40.386 rows=0 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Append  (cost=0.00..3832.86 rows=2 width=21) (actual time=23.559..23.562 rows=0 loops=2)
         ->  Seq Scan on kunden_0_2016 kunden_2  (cost=0.00..1446.92 rows=1 width=17) (actual time=12.094..12.094 rows=0 loops=1)
               Filter: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Rows Removed by Filter: 60624
         ->  Seq Scan on kunden_0_2015 kunden_1  (cost=0.00..1445.22 rows=1 width=17) (actual time=10.313..10.313 rows=0 loops=1)
               Filter: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Rows Removed by Filter: 60527
         ->  Seq Scan on kunden_0_2

Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Magnus Hagander
On Wed, Oct 14, 2020 at 3:57 PM Thomas Kellerer  wrote:

> Thomas Kellerer schrieb am 14.10.2020 um 15:55:
> > Raul Kaubi schrieb am 14.10.2020 um 12:22:
> >> Is there a simple way to dynamically get for example parameter
> >> „shared buffers“ value (megabytes or gigabytes) to bytes, for
> >> monitoring perspective..?>
> >>
> >>
> >> At the moment, this gives me value in GB.
> >>
> >> # psql -U postgres -Atc "show shared_buffers;"
> >> 1GB
> >>
> >> This value may as well be in MB. So I am looking a way to dynamically
> get the value in bytes.
> >
> > Instead of using "show" you can use a SELECT with pg_size_bytes():
> >
> >   select pg_size_bytes(setting)
> >   from pg_settings
> >   where name = 'shared_buffers';
>
> Ah, forgot that shared_buffers is in 8K pages.
>
> So you actually need:
>
>select pg_size_bytes(setting) * 8192
>from pg_settings
>where name = 'shared_buffers';
>

Actually, it doesn't have to be in 8k pages, that depends on the build
options. So if you want to be perfectly correct, you should probably
multiply with current_setting('block_size') instead of a hardcoded 8192 :)
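Putting the two corrections together, a self-contained sketch (this assumes, as noted above, that shared_buffers is reported in units of block_size):

```sql
-- Raw setting (number of buffers) times the actual block size,
-- instead of a hardcoded 8192:
SELECT setting::bigint * current_setting('block_size')::bigint
         AS shared_buffers_bytes
FROM pg_settings
WHERE name = 'shared_buffers';
```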

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Tom Lane
Magnus Hagander  writes:
> On Wed, Oct 14, 2020 at 3:57 PM Thomas Kellerer  wrote:
>> select pg_size_bytes(setting) * 8192
>> from pg_settings
>> where name = 'shared_buffers';

> Actually, it doesn't have to be in 8k pages, that depends on the build
> options. So if you want to be perfectly correct, you should probably
> multiply with current_setting('block_size') instead of a hardcoded 8192 :)

It's fairly annoying that this doesn't work:

regression=# select pg_size_bytes(setting||' '||unit) from pg_settings where 
name = 'shared_buffers';
ERROR:  invalid size: "16384 8kB"
DETAIL:  Invalid size unit: "8kB".
HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".

Maybe we should teach pg_size_bytes to cope with that.

regards, tom lane

Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Pavel Stehule
st 14. 10. 2020 v 17:10 odesílatel Tom Lane  napsal:

> Magnus Hagander  writes:
> > On Wed, Oct 14, 2020 at 3:57 PM Thomas Kellerer  wrote:
> >> select pg_size_bytes(setting) * 8192
> >> from pg_settings
> >> where name = 'shared_buffers';
>
> > Actually, it doesn't have to be in 8k pages, that depends on the build
> > options. So if you want to be perfectly correct, you should probably
> > multiply with current_setting('block_size') instead of a hardcoded 8192
> :)
>
> It's fairly annoying that this doesn't work:
>
> regression=# select pg_size_bytes(setting||' '||unit) from pg_settings
> where name = 'shared_buffers';
> ERROR:  invalid size: "16384 8kB"
> DETAIL:  Invalid size unit: "8kB".
> HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
>
> Maybe we should teach pg_size_bytes to cope with that.
>

"8kB" is a strange unit - maybe we can introduce a new one - "page"



> regards, tom lane
>
>
>


PostGIS missing from EDB Windows installer for PG13?

2020-10-14 Thread Anthony DeBarros
Good day,

I just tried a fresh install of PostgreSQL 13 on Windows using the EDB
installer, and I see that StackBuilder does not include an option to
install PostGIS. Anyone have insight on whether that's a permanent change
or not?

Thanks,

Anthony DeBarros
Author, "Practical SQL" from No Starch Press


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Christoph Moench-Tegeder
## Magnus Hagander (mag...@hagander.net):

> Actually, it doesn't have to be in 8k pages, that depends on the build
> options. So if you want to be perfectly correct, you should probably
> multiply with current_setting('block_size') instead of a hardcoded 8192 :)

More self-contained:
  select pg_size_bytes(setting) * pg_size_bytes(unit)
  from pg_settings
  where name = 'shared_buffers';

I guess that's what that unit column is there for.

Regards,
Christoph

-- 
Spare Space




Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Magnus Hagander
On Wed, Oct 14, 2020 at 5:10 PM Tom Lane  wrote:

> Magnus Hagander  writes:
> > On Wed, Oct 14, 2020 at 3:57 PM Thomas Kellerer  wrote:
> >> select pg_size_bytes(setting) * 8192
> >> from pg_settings
> >> where name = 'shared_buffers';
>
> > Actually, it doesn't have to be in 8k pages, that depends on the build
> > options. So if you want to be perfectly correct, you should probably
> > multiply with current_setting('block_size') instead of a hardcoded 8192
> :)
>
> It's fairly annoying that this doesn't work:
>
> regression=# select pg_size_bytes(setting||' '||unit) from pg_settings
> where name = 'shared_buffers';
> ERROR:  invalid size: "16384 8kB"
> DETAIL:  Invalid size unit: "8kB".
> HINT:  Valid units are "bytes", "kB", "MB", "GB", and "TB".
>
> Maybe we should teach pg_size_bytes to cope with that.
>

Actually thinking though, surely *this* particular case can be spelled as:
SELECT  pg_size_bytes(current_setting('shared_buffers'))

Or if doing it off pg_settings:

SELECT setting::bigint * pg_size_bytes(unit) from pg_settings where
name='shared_buffers'

I'm not sure having pg_size_bytes() parse "16384 8kB" is reasonable, I have
a feeling that could lead to a lot of accidental entries giving the wrong
results.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Tom Lane
Magnus Hagander  writes:
> On Wed, Oct 14, 2020 at 5:10 PM Tom Lane  wrote:
>> It's fairly annoying that this doesn't work:
>> regression=# select pg_size_bytes(setting||' '||unit) from pg_settings
>> where name = 'shared_buffers';

> Actually thinking though, surely *this* particular case can be spelled as:
> SELECT  pg_size_bytes(current_setting('shared_buffers'))

Yeah, that might be the most recommendable way.

> Or if doing it off pg_settings:
> SELECT setting::bigint * pg_size_bytes(unit) from pg_settings where
> name='shared_buffers'

No, because that will fail for any unit other than '8kB', eg

regression=# select pg_size_bytes('MB');
ERROR:  invalid size: "MB"

> I'm not sure having pg_size_bytes() parse "16384 8kB" is reasonable, I have
> a feeling that could lead to a lot of accidental entries giving the wrong
> results.

Yeah, that's definitely a risk.  Given that current_setting() already does
what's needed, that's probably a better answer.

regards, tom lane
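The current_setting() approach recommended above generalizes to other memory GUCs whose SHOW output carries a unit; a hedged sketch (the extra setting names are just examples):

```sql
-- current_setting() returns e.g. '1GB' or '4MB', which
-- pg_size_bytes() parses directly.
SELECT s.name,
       pg_size_bytes(current_setting(s.name)) AS bytes
FROM (VALUES ('shared_buffers'),
             ('work_mem'),
             ('maintenance_work_mem')) AS s(name);
```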

Re: What's your experience with using Postgres in IoT-contexts?

2020-10-14 Thread Jonathan Strong
On Wed, Oct 14, 2020 at 8:49 AM Peter J. Holzer  wrote:

> On 2020-10-13 06:55:52 +0200, chlor wrote:
> > > I want to have long term storage and access to individual telegrams
> >
> > An IOT is not designed for that. It is used for control or delivery of
> > data to a server.
>
> That's a rather dogmatic and narrow-minded point of view. "IOT" means
> "Internet of things". There are many things which which could benefit
> from network connectivity and don't necessarily need a central server or
> may even act as servers for other "things".
>
> It all depends on the application and the "thing".
>
> > Long term storage also means backup and recovery and I don't think you
> > have that planned for your IOT.
>
> That depends on how valuable those data are.
>
> hp
>
> --
>_  | Peter J. Holzer|
>
>
Indeed. IoT architecture also raises the question of "when" detailed
historical data may be needed, and how Edge Computing can factor into the
overall solution model. Detailed transactions may live "at the edge" while
needed aggregate / extract info is communicated to a central server to
support real time response. But those left-behind detailed transactions may
(or may not) follow later on via a lower priority / non-real-time path if
relevant and eventually valuable. Some examples I've had to work with:

When calculating real-time Equity / Security Index values, you might
capture tick by tick data for each security in an index valuation formula.
Just one security in an index (e.g., MSFT) could easily generate more than
100,000 ticks per day. One of the Large Cap indices currently has about
3,500 stocks in it. They might not all trade as frequently as MSFT, but you
might see anywhere from 10 million to 100 million data points in a day.
While this differs from IoT in that the data sources aren't physically
separated or as numerous as individual IoT devices, the challenge is
similar in that a good real time architecture makes use of needed data at
various stages in the process flow (and data network flow) and defers
functions that can wait, including perhaps committing full details of every
transaction to centralized long term storage, as long as the computed Index
value can be published in real time.

Years ago we developed an online gaming platform supporting hundreds of
thousands of concurrent users who came in from numerous countries around
the world. Real-time scoring and chat communications posed challenges
similar to the Equity Index solution above. We needed to be able to accept
play data from thousands of concurrent players and have a game (or chat
room) respond in near real time, but full detailed data could be queued up
and gradually transmitted, processed, assimilated and committed to long
term storage.

In health care data collection we see similar challenges: real time IoT
biosensors may capture blood oximetry, glucose, lactate info, heart rate,
etc. Some of this may be critical for real time monitoring and processing.
Some gets processed "at the Edge" - aggregated, filtered, interpreted, etc.
before getting to central / long term storage.

Deciding the level of detail that actually has to reach centralized long
term storage - and when - is typically a non-trivial exercise. When you
look at examples like monitoring a jet engine, gas turbines, or an air
conditioner manufacturer and service company (one of my past clients)
monitoring hundreds of thousands of HVAC units distributed around the
country, data samples go past terabytes to petabytes, exabytes and more.

While you need to figure out how to trim the raw data to amounts that can
reasonably be stored and managed, I've seen too many cases of being overly
aggressive in discarding data thought to be superfluous; thoughtful
analysis is critical here.

- Jon


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Raul Kaubi
Hi

Thanks for all the replies.

So at first, I did this:
select (pg_size_bytes(setting) * (select setting from pg_settings where
name = 'block_size')::int) as shared_buffers from pg_settings where name =
'shared_buffers';

But as I understand it, the preferred way would be this, correct? (At
least it seems a lot simpler, I agree.)
SELECT pg_size_bytes(current_setting('shared_buffers'))

Regards
Raul

Kontakt Tom Lane () kirjutas kuupäeval K, 14. oktoober
2020 kell 18:23:

> Magnus Hagander  writes:
> > On Wed, Oct 14, 2020 at 5:10 PM Tom Lane  wrote:
> >> It's fairly annoying that this doesn't work:
> >> regression=# select pg_size_bytes(setting||' '||unit) from pg_settings
> >> where name = 'shared_buffers';
>
> > Actually thinking though, surely *this* particular case can be spelled
> as:
> > SELECT  pg_size_bytes(current_setting('shared_buffers'))
>
> Yeah, that might be the most recommendable way.
>
> > Or if doing it off pg_settings:
> > SELECT setting::bigint * pg_size_bytes(unit) from pg_settings where
> > name='shared_buffers'
>
> No, because that will fail for any unit other than '8kB', eg
>
> regression=# select pg_size_bytes('MB');
> ERROR:  invalid size: "MB"
>
> > I'm not sure having pg_size_bytes() parse "16384 8kB" is reasonable, I
> have
> > a feeling that could lead to a lot of accidental entries giving the wrong
> > results.
>
> Yeah, that's definitely a risk.  Given that current_setting() already does
> what's needed, that's probably a better answer.
>
> regards, tom lane
>
>
>


Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Magnus Hagander
On Wed, Oct 14, 2020 at 5:23 PM Tom Lane  wrote:

> Magnus Hagander  writes:
> > On Wed, Oct 14, 2020 at 5:10 PM Tom Lane  wrote:
> >> It's fairly annoying that this doesn't work:
> >> regression=# select pg_size_bytes(setting||' '||unit) from pg_settings
> >> where name = 'shared_buffers';
>
> > Actually thinking though, surely *this* particular case can be spelled
> as:
> > SELECT  pg_size_bytes(current_setting('shared_buffers'))
>
> Yeah, that might be the most recommendable way.
>
> > Or if doing it off pg_settings:
> > SELECT setting::bigint * pg_size_bytes(unit) from pg_settings where
> > name='shared_buffers'
>
> No, because that will fail for any unit other than '8kB', eg
>
> regression=# select pg_size_bytes('MB');
> ERROR:  invalid size: "MB"
>

Right, but it would certainly work for *this* case using pg_settings, which is
what I meant.

That said, I think it'd then actually be better to teach pg_size_bytes to
know that "MB" is the same as "1MB" and parse that. That might be something
that would actually be useful in other cases as well -- basically as a way
to get conversion units in general. Basically if the string is "unit only"
then consider that as "1 unit".
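The proposed "bare unit means 1 unit" rule can be sketched outside the server, for example in Python. This is only an illustration of the parsing idea, not PostgreSQL's actual implementation (which, among other things, handles case and fractional values differently):

```python
import re

# Byte multipliers for the units pg_size_bytes() documents.
UNITS = {"bytes": 1, "kB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def size_bytes(text: str) -> int:
    """Parse '<number> <unit>', '<number>', or - per the proposal - a bare
    unit such as 'MB', treated as '1MB'."""
    m = re.fullmatch(r"\s*(-?\d+(?:\.\d+)?)?\s*([A-Za-z]+)?\s*", text)
    if not m or (m.group(1) is None and m.group(2) is None):
        raise ValueError(f'invalid size: "{text}"')
    number = float(m.group(1)) if m.group(1) is not None else 1.0  # bare unit -> "1 unit"
    unit = m.group(2) or "bytes"  # plain number -> bytes
    if unit not in UNITS:
        raise ValueError(f'invalid size unit: "{unit}"')
    return int(number * UNITS[unit])
```

Note that a malformed input like "16384 8kB" still fails here, matching the concern that silently accepting it could give wrong results.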

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: rum index supported on pg13?

2020-10-14 Thread Teodor Sigaev

Thank you, fixed and published.

On 09.10.2020 05:29, John the Scott wrote:

Will the rum index from postgrespro be supported in pg13?
Numerous errors occur when compiling rum in pg13, and
there are no replies on github. The differences from pg12
to pg13 seem to be significant.

  https://github.com/postgrespro/rum

thanks for such a brilliant extension.
perhaps i should jump in and help.

-john

--
Teodor Sigaev  E-mail: teo...@sigaev.ru
  WWW: http://www.sigaev.ru/




Re: rum index supported on pg13?

2020-10-14 Thread Devrim Gündüz

Hi Teodor,

On Wed, 2020-10-14 at 18:49 +0300, Teodor Sigaev wrote:
> Thank you, fixed and published.

Can you please release a new tarball? We need that to build the RPM
packages. I'm still seeing 1.3.6 as the latest version.

Thanks!

Regards,
-- 
Devrim Gündüz
Open Source Solution Architect, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR


signature.asc
Description: This is a digitally signed message part


pgbouncer installation example (Step by step)

2020-10-14 Thread Atul Kumar
Hi Team,

Please share a clean example of installing, configuring and testing pgBouncer.

Your official links are not organized so I need an example of
PgBouncer with organized steps.



Regards,
Atul




Re: pgbouncer installation example (Step by step)

2020-10-14 Thread David G. Johnston
On Wed, Oct 14, 2020 at 10:08 AM Atul Kumar  wrote:

> Please share a clean example of installing, configuring and testing
> pgBouncer.
>
> Your official links are not organized so I need an example of
> PgBouncer with organized steps.
>

As I said on your exact same posting to the -admin list; not the right
place and not enough information provided.

David J.


Re: pgbouncer installation example (Step by step)

2020-10-14 Thread Michael Lewis
On Wed, Oct 14, 2020 at 11:08 AM Atul Kumar  wrote:

> Hi Team,
>
> Please share a clean example of installing,

https://www.pgbouncer.org/install.html


> configuring

https://www.pgbouncer.org/config.html


> and testing pgBouncer.
>
https://www.pgbouncer.org/usage.html


What else do you want to know?
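To complement those links, here is a minimal pgbouncer.ini sketch; the host, paths, and pool sizes below are illustrative assumptions to adapt, not recommendations:

```ini
; Map an incoming database name to a backend server.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 100
default_pool_size = 20
```

After starting pgbouncer with this file, clients connect to port 6432 instead of 5432, and pgbouncer multiplexes those connections onto the backend pool.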


Failed to compile libpq

2020-10-14 Thread Igor Korot
Hi, ALL,
I just tried to compile libpq on the latest Solaris x86 (12.4) with the Solaris
compiler (Solaris Studio) version 12.6.

Configure succeeded, but running "make" failed with

[quote]
make: Fatal error in reader ../../../src/Makefile.global, line 45:
Unexpected end of line seen
[/quote]

Is there a way to fix it?
Below is the complete log from the Terminal session:

[quote]
igor@solaris:~/dbhandlerSol/libpq$ ./configure --enable-debug
--enable-nls --with-openssl
checking build system type... i386-pc-solaris2.11
checking host system type... i386-pc-solaris2.11
checking which template to use... solaris
checking whether to build with 64-bit integer date/time support... yes
checking whether NLS is wanted... yes
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... no
checking for cc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... no
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking whether the C compiler still works... yes
checking how to run the C preprocessor... cc -Xa -E
checking allow thread-safe client libraries... yes
checking whether to build with Tcl... no
checking whether to build Perl modules... no
checking whether to build Python modules... no
checking whether to build with GSSAPI support... no
checking whether to build with PAM support... no
checking whether to build with BSD Authentication support... no
checking whether to build with LDAP support... no
checking whether to build with Bonjour support... no
checking whether to build with OpenSSL support... yes
checking whether to build with SELinux support... no
checking whether to build with systemd support... no
checking for grep that handles long lines and -e... /usr/bin/ggrep
checking for egrep... /usr/bin/ggrep -E
checking for non-GNU ld... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking for ranlib... ranlib
checking for strip... strip
checking whether it is possible to strip libraries... no
checking for ar... ar
checking for a BSD-compatible install... /usr/bin/ginstall -c
checking for tar... /usr/bin/tar
checking whether ln -s works... yes
checking for gawk... gawk
checking for a thread-safe mkdir -p... /usr/bin/gmkdir -p
checking for bison... no
configure: WARNING:
*** Without Bison you will not be able to build PostgreSQL from Git nor
*** change any of the parser definition files.  You can obtain Bison from
*** a GNU mirror site.  (If you are using the official distribution of
*** PostgreSQL then you do not need to worry about this, because the Bison
*** output is pre-generated.)
checking for flex... no
configure: WARNING:
*** Without Flex you will not be able to build PostgreSQL from Git nor
*** change any of the scanner definition files.  You can obtain Flex from
*** a GNU mirror site.  (If you are using the official distribution of
*** PostgreSQL then you do not need to worry about this because the Flex
*** output is pre-generated.)
checking for perl... /usr/bin/perl
configure: using perl 5.22.1
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking if compiler needs certain flags to reject unknown flags...
-Werror -Wunknown-warning-option -Wunused-command-line-argument
checking whether pthreads work with -mt -lpthread... yes
checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
checking if more special flags are required for pthreads... -D_REENTRANT
checking for PTHREAD_PRIO_INHERIT... yes
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking for strerror_r... yes
checking for getpwuid_r... yes
checking for gethostbyname_r... yes
checking whether strerror_r returns int... yes
checking for main in -lm... yes
checking for library containing setproctitle... no
checking for library containing dlopen... none required
checking for library containing socket... none required
checking for library containing shl_load... no
checking for library containing getopt_long... none required
checking for library containing crypt... none required
checking for library containing shm_open... none required
checking for library containing shm_unlink... none required
checking for library containing fdatasync... none required
checking for library containing sched_yield... none required
checking for library containing gethostbyname_r... none required
checking for library containing shmg

Re: Failed to compile libpq

2020-10-14 Thread Tom Lane
Igor Korot  writes:
> Configure succeeded, but running "make" failed with

> [quote]
> make: Fatal error in reader ../../../src/Makefile.global, line 45:
> Unexpected end of line seen
> [/quote]

Kinda looks like you're using some non-GNU make.

regards, tom lane




Re: Failed to compile libpq

2020-10-14 Thread Igor Korot
Tom et al,

On Thu, Oct 15, 2020 at 12:01 AM Tom Lane  wrote:
>
> Igor Korot  writes:
> > Configure succeeded, but running "make" failed with
>
> > [quote]
> > make: Fatal error in reader ../../../src/Makefile.global, line 45:
> > Unexpected end of line seen
> > [/quote]
>
> Kinda looks like you're using some non-GNU make.

Correct.
It is from Solaris Studio compiler.

What should I do?

Thank you.

>
> regards, tom lane

Re: Failed to compile libpq

2020-10-14 Thread Tom Lane
Igor Korot  writes:
> On Thu, Oct 15, 2020 at 12:01 AM Tom Lane  wrote:
>> Kinda looks like you're using some non-GNU make.

> Correct.
> It is from Solaris Studio compiler.

> What should I do?

Try "gmake".  If it's not already on the system, you'll need to
install it.

regards, tom lane
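A small sketch of that check; the fallback to "make" covers systems where GNU make is already the default, and the exact package name for installing GNU make on Solaris varies by release, so it is not shown:

```shell
# Sun's make cannot parse GNU-make syntax in PostgreSQL's
# Makefile.global; use GNU make ("gmake") for the build instead.
MAKE=$(command -v gmake || command -v make)
"$MAKE" --version | head -n 1   # expect a "GNU Make" banner
```

Once the banner confirms GNU make, rerun the build with that command (e.g. `gmake` and `gmake install`) instead of `make`.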