Re: VM Instance to Google Cloud SQL Migration

2018-11-15 Thread Andreas Kretschmer




On 15.11.2018 at 08:54, Sathish Kumar wrote:
We would like to migrate our PostgreSQL VM instance on Google Cloud
Platform to Google Cloud SQL with minimal downtime. As I checked, we
have to export and import the SQL file; our database is large and we
cannot afford long downtime.


Does anyone have a solution to achieve this?


set up replication from one to the other?


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: <-> Operator on Trigram Index

2018-11-15 Thread Arthur Zakirov

Hello,

On 14.11.2018 01:42, Jeffrey Kamei wrote:
I'm trying to get the <-> operator to recognize a trigram index (GiST) 
I've set on a table. Using `EXPLAIN VERBOSE` I can see the query engine 
ignoring the trigram index when using the `<->` operator. However, if I 
use the `%` operator, the index is found and used. Can you explain why 
this is happening? As far as I can tell from the documentation, the 
`<->` operator should be using the index as well.


Yes, the <-> operator should use a GiST index. Can you show your query and 
its plan?
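
For reference, a minimal sketch of this kind of setup (the table and
column names here are invented, not taken from the report). One thing
worth checking: <-> is a distance operator, so the planner only uses a
GiST trigram index for it when it drives an ORDER BY ... LIMIT
(K-nearest-neighbor) query, unlike %, which is a boolean filter usable
in WHERE:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE TABLE docs (id serial PRIMARY KEY, body text);
    CREATE INDEX docs_body_trgm_idx ON docs USING gist (body gist_trgm_ops);

    -- % filters by similarity threshold and can use the index in WHERE:
    EXPLAIN VERBOSE SELECT * FROM docs WHERE body % 'search term';

    -- <-> returns a distance; the index is used when it appears in ORDER BY:
    EXPLAIN VERBOSE
    SELECT id, body <-> 'search term' AS dist
    FROM docs
    ORDER BY body <-> 'search term'
    LIMIT 10;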


--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company



Re: VM Instance to Google Cloud SQL Migration

2018-11-15 Thread Ian Lawrence Barwick
On Thu, 15 Nov 2018 at 17:19, Andreas Kretschmer wrote:
>
> On 15.11.2018 at 08:54, Sathish Kumar wrote:
> > We would like to migrate our PostgreSQL VM instance on Google Cloud
> > Platform to Google Cloud SQL with minimal downtime. As I checked, we
> > have to export and import the SQL file; our database is large and we
> > cannot afford long downtime.
> >
> > Does anyone have a solution to achieve this?
>
> set up replication from one to the other?

Doesn't seem possible at the moment; here:
https://cloud.google.com/sql/docs/postgres/replication/
it says: "Cloud SQL for PostgreSQL does not yet support replication
from an external master or external replicas for Cloud SQL instances."

Looking at the feature list:

  https://cloud.google.com/sql/docs/postgres/features

among the "Unsupported features" are: "Any features that require
SUPERUSER privileges"
(apart from a limited number of extensions), which pretty much rules
out pglogical or similar solutions.


Regards

Ian Barwick

--
 2ndQuadrant - The PostgreSQL Support Company.
 www.2ndQuadrant.com



RE: pg_dump out of memory for large table with LOB

2018-11-15 Thread Daniel Verite
Jean-Marc Lessard wrote:

> Another area where LOB hurts is storage. LOBs are broken up and stored in 2K
> pieces.
> Due to the block header, only three 2K pieces fit in an 8K block, wasting 25%
> of space (in fact pgstattuple reports ~20%).

Yes. bytea stored as TOAST is sliced into pieces of 2000 bytes, versus
2048 bytes for large objects. And that makes a significant difference
when packing these slices, because 2000*4 + (page overhead) +
4*(row overhead) is just under the default size of 8192 bytes per page,
whereas 2048*4 + (page overhead) + 4*(row overhead)
is obviously a bit over 8192, since 2048*4 = 8192.

If the data is compressible, the difference may be less obvious, because
the slices in pg_largeobject are compressed individually
(as opposed to bytea, which gets compressed as a whole),
so more than 3 slices can fit in a page inside pg_largeobject.
The post-compression size can be obtained with pg_column_size(),
versus octet_length(), which gives the pre-compression size.
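
As an illustration, a quick way to compare the two sizes for a bytea
column (the table and column names here are hypothetical):

    -- stored (post-compression) size vs. logical (pre-compression) size
    SELECT id,
           pg_column_size(data) AS stored_bytes,
           octet_length(data)  AS logical_bytes
    FROM images
    ORDER BY logical_bytes DESC
    LIMIT 10;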
 
> Would you recommend bytea over LOB considering that the max LOB size is well
> below 1GB?
> Is bytea preferable in terms of support by the community, performance,
> features, etc.?

For the storage and pg_dump issues, bytea seems clearly preferable
in your case.
As for the performance aspect, large objects are excellent because their
API never requires a binary<->text conversion.
This may be different with bytea. The C API provided by libpq allows
retrieving and sending bytea in binary format, for instance through
PQexecParams(), but most drivers implemented on top of libpq use only
the text representation for all datatypes, because it's simpler for them.
So you may want to compare sending and retrieving your biggest binary
objects stored in a bytea column versus as large objects, with your
particular app/language/framework.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: Java UnsatisfiedLinkError exception when connecting to Postgresql database

2018-11-15 Thread dclark


Rob Sargent wrote:
> 
> On 11/14/18 5:03 PM, dcl...@cinci.rr.com wrote:
> >  Adrian Klaver  wrote:
> >> On 11/14/18 10:24 AM, dcl...@cinci.rr.com wrote:
> >> Please reply to list also.
> >> Ccing list.
> >>>  Adrian Klaver  wrote:
>  On 11/14/18 9:25 AM, dcl...@cinci.rr.com wrote:
> > Hello;
> >
> > I've written a Java program which uses PostgreSQL via JDBC.  The 
> > program works fine on all Red Hat systems I've tested except one, where 
> > it yields an UnsatisfiedLinkError.  Here is the stack trace:
> >
> > sun.misc.VM.latestUserDefinedLoader0(Native
> > Method)
> > sun.misc.VM.latestUserDefinedLoader(VM.java:411)
> > java.io.ObjectInputStream.latestUserDefinedLoader(ObjectInputStream.java:2351)
> > java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:686)
> > java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1866)
> > java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1749)
> > java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
> > java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
> > java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
> > org.postgresql.ds.common.BaseDataSource.readBaseObject(BaseDataSource.java:1210)
> > org.postgresql.ds.common.BaseDataSource.initializeFrom(BaseDataSource.java:1220)
> > org.postgresql.ds.PGPoolingDataSource.initialize(PGPoolingDataSource.java:267)
> > org.postgresql.ds.PGPoolingDataSource.getConnection(PGPoolingDataSource.java:324)
> >
> > Any ideas?
>  What is different about the system that throws the error?
> 
>  For example:
> 
>  OS version
>  JDBC version
>  Postgres version
>  Java version
> >>> Thank you for your reply.
> >>>
> >>> OS on working system: Linux 3.10.0-693.11.6.el7.x86_64 x86_64
> >>> OS on problem system: Linux 3.10.0-693.21.1.el7.x86_64 x86_64
> >>>
> >>> JDBC version on both systems: 9.4.1209
> >>>
> >>> Postgres version on both systems: 9.6.5 on x86_64-redhat-linux-gnu, 
> >>> compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
> >>>
> >>> Java version on both systems:
> >>>
> >>> openjdk version "1.8.0_171"
> >>> OpenJDK Runtime Environment (build 1.8.0_171-b10)
> >>> OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)
> >>>
> >> Hmm.
> >> So what was the UnsatisfiedLinkError message, before the traceback above?
> > java.lang.UnsatisfiedLinkError: 
> > sun.misc.VM.latestUserDefinedLoader0()Ljava/lang/ClassLoader;
> >
> > Thank you.
> >
> >
> Should OpenJDK be looking for a sun class?

Ah ha, that's it.  Part of the deployment on the problem system was outdated.

Thanks to all.




Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Adrian Klaver

On 11/14/18 10:39 PM, Sachin Kotwal wrote:






Looks like no one has a clear idea yet, and the deal is also not completely done.


Barring some regulatory intervention (which I do not see), the deal will 
be completed.



Hope Red Hat community support will continue the same as before.

Let's wait and see whether something is announced by the community in the coming days.


Not sure what announcement you are expecting? From what I see, nothing 
has changed, so there is no need for any.




Thanks all for your inputs.

Regards,
Sachin

--

Thanks and Regards,
Sachin Kotwal



--
Adrian Klaver
adrian.kla...@aklaver.com



db-connections (application architecture)

2018-11-15 Thread Mark Moellering
So, I am working on some system designs for a web application, and I wonder
if there is any definitive answer on how to best connect to a postgres
database.

I could have it so that each time a query, or set of queries, for a
particular request needs to be run, a new connection is opened, queries
are run, and then the connection is closed/dropped.

OR, I could create a persistent connection that will remain open as long as
a user is logged in and then any queries are run against the open
connection.

I can see how, for only a few (hundreds to thousands) of users, the latter
might make more sense but if I need to scale up to millions, I might not
want all of those connections open.

Any idea of how much time/overhead is added by opening and closing a
connection every time?

Any and all information is welcome.

Thanks in advance

-- Mark M


Re: db-connections (application architecture)

2018-11-15 Thread Andreas Kretschmer




Am 15.11.2018 um 16:09 schrieb Mark Moellering:
I can see how, for only a few (hundreds to thousands) of users, the 
latter might make more sense but if I need to scale up to millions, I 
might not want all of those connections open.


consider a connection-pooler like phbouncer.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: db-connections (application architecture)

2018-11-15 Thread Mark Moellering
Oh, excellent.  I knew I was about to reinvent the wheel.
Sometimes, there are just too many new things to keep up on.

Thank you so much!

On Thu, Nov 15, 2018 at 10:16 AM Adrian Klaver wrote:

> On 11/15/18 7:09 AM, Mark Moellering wrote:
> > So, I am working on some system designs for a web application, and I
> > wonder if there is any definitive answer on how to best connect to a
> > postgres database.
> >
> > I could have it so that each time a query, or set of queries, for a
> > particular request needs to be run, a new connection is opened, queries
> > are run, and then the connection is closed/dropped.
> >
> > OR, I could create a persistent connection that will remain open as long
> > as a user is logged in and then any queries are run against the open
> > connection.
> >
> > I can see how, for only a few (hundreds to thousands) of users, the
> > latter might make more sense but if I need to scale up to millions, I
> > might not want all of those connections open.
> >
> > Any idea of how much time/overhead is added by opening and closing a
> > connection every time?
> >
> > Any and all information is welcome.
>
> Connection pooling?
>
> In no particular order:
>
> https://pgbouncer.github.io/
>
> http://www.pgpool.net/mediawiki/index.php/Main_Page
>
> >
> > Thanks in advance
> >
> > -- Mark M
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: db-connections (application architecture)

2018-11-15 Thread Adrian Klaver

On 11/15/18 7:09 AM, Mark Moellering wrote:
So, I am working on some system designs for a web application, and I 
wonder if there is any definitive answer on how to best connect to a 
postgres database.


I could have it so that each time a query, or set of queries, for a 
particular request needs to be run, a new connection is opened, queries 
are run, and then the connection is closed/dropped.


OR, I could create a persistent connection that will remain open as long 
as a user is logged in and then any queries are run against the open 
connection.


I can see how, for only a few (hundreds to thousands) of users, the 
latter might make more sense but if I need to scale up to millions, I 
might not want all of those connections open.


Any idea of how much time/overhead is added by opening and closing a 
connection every time?


Any and all information is welcome.


Connection pooling?

In no particular order:

https://pgbouncer.github.io/

http://www.pgpool.net/mediawiki/index.php/Main_Page



Thanks in advance

-- Mark M



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: db-connections (application architecture)

2018-11-15 Thread Andreas Kretschmer




Am 15.11.2018 um 16:14 schrieb Andreas Kretschmer:



Am 15.11.2018 um 16:09 schrieb Mark Moellering:
I can see how, for only a few (hundreds to thousands) of users, the 
latter might make more sense but if I need to scale up to millions, I 
might not want all of those connections open.


consider a connection-pooler like phbouncer.



typo, should be pgbouncer ;-)



Regards, Andreas



--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




RE: pg_dump out of memory for large table with LOB

2018-11-15 Thread Jean-Marc Lessard
Thanks to Daniel Verite for the nice answer, really helpful :)
It summarizes what I have read in the docs and blogs.

What about updates where the bytea does not change? Will a new copy of the bytea 
be made in the TOAST table, or will the new row point to the original bytea?
> https://www.postgresql.org/docs/current/storage-toast.html says
> The TOAST management code is triggered only when a row value to be stored in 
> a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The TOAST 
> code will compress
> and/or move field values out-of-line until the row value is shorter than 
> TOAST_TUPLE_TARGET bytes (also normally 2 kB, adjustable) or no more gains 
> can be had. During an UPDATE
> operation, values of unchanged fields are normally preserved as-is; so an 
> UPDATE of a row with out-of-line values incurs no TOAST costs if none of the 
> out-of-line values change.

Does it mean that no cost is incurred to generate the out-of-line TOAST data, 
but that a copy of the bytea is still made for the new row?
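
For what it's worth, one rough way to observe this, assuming a
hypothetical table images(id, label text, data bytea), is to watch the
total relation size (which includes the TOAST table) across an UPDATE
that leaves the bytea column untouched:

    -- total size, including the TOAST table, before the update
    SELECT pg_size_pretty(pg_total_relation_size('images'));

    -- update a small column; the bytea column "data" is unchanged
    UPDATE images SET label = label || '_v2';

    -- after: if unchanged out-of-line values are preserved, the TOAST part
    -- should not grow (the heap grows only by the new row versions)
    SELECT pg_size_pretty(pg_total_relation_size('images'));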


Jean-Marc Lessard
Administrateur de base de données / Database Administrator
Ultra Electronics Forensic Technology Inc.
T +1 514 489 4247 x4164
www.ultra-forensictechnology.com


Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Sachin Kotwal
On Thu, 15 Nov 2018 at 7:59 PM, Adrian Klaver wrote:
> On 11/14/18 10:39 PM, Sachin Kotwal wrote:
> >
>
> >
> > Looks like no one has a clear idea yet, and the deal is also not completely done.
>
> Barring some regulatory intervention (which I do not see), the deal will
> be completed.
>
> > Hope Red Hat community support will continue the same as before.
> >
> > Let's wait and see whether something is announced by the community in the coming days.
>
> Not sure what announcement you are expecting? From what I see, nothing
> has changed, so there is no need for any.
>


I know the community itself does not provide PostgreSQL binaries for Windows;
EnterpriseDB and OpenSCG do.

I feel the community has mostly Linux-based instances in its buildfarm for
testing, with perhaps very few Ubuntu-based ones.
I might be wrong here.

Anyway, we can conclude the discussion with: "No need to worry about the Red Hat
acquisition, as PostgreSQL has a really strong position in the market."

Thanks,
Sachin

> >
> > Thanks all for your inputs.
> >
> > Regards,
> > Sachin
> >
> > --
> >
> > Thanks and Regards,
> > Sachin Kotwal
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Adrian Klaver

On 11/15/18 7:59 AM, Sachin Kotwal wrote:



On Thu, 15 Nov 2018 at 7:59 PM, Adrian Klaver wrote:


On 11/14/18 10:39 PM, Sachin Kotwal wrote:
 >
 > Looks like no one has a clear idea yet, and the deal is also not
 > completely done.

Barring some regulatory intervention (which I do not see), the deal will
be completed.

 > Hope Red Hat community support will continue the same as before.
 >
 > Let's wait and see whether something is announced by the community in
 > the coming days.

Not sure what announcement you are expecting? From what I see, nothing
has changed, so there is no need for any.



I know the community itself does not provide PostgreSQL binaries for 
Windows; EnterpriseDB and OpenSCG do.


Actually it does, those companies are part of the community.



I feel the community has mostly Linux-based instances in its buildfarm for 
testing, with perhaps very few Ubuntu-based ones.

I might be wrong here.


Easy enough to see:

https://buildfarm.postgresql.org/cgi-bin/show_members.pl



Anyway, we can conclude the discussion with: "No need to worry about the Red Hat 
acquisition, as PostgreSQL has a really strong position in the market."


Thanks,
Sachin

 >
 > Thanks all for your inputs.
 >
 > Regards,
 > Sachin
 >
 > --
 >
 > Thanks and Regards,
 > Sachin Kotwal


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Alvaro Herrera
On 2018-Nov-15, Sachin Kotwal wrote:

> I feel the community has mostly Linux-based instances in its buildfarm for
> testing, with perhaps very few Ubuntu-based ones.

If you feel the need to run more buildfarm members on Ubuntu, run some
yourself.  It's self-service.

-- 
Álvaro Herrera            https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: VM Instance to Google Cloud SQL Migration

2018-11-15 Thread Christopher Browne
On Thu, 15 Nov 2018 at 07:06, Ian Lawrence Barwick wrote:
>
> On Thu, 15 Nov 2018 at 17:19, Andreas Kretschmer wrote:
> >
> > On 15.11.2018 at 08:54, Sathish Kumar wrote:
> > > We would like to migrate our PostgreSQL VM instance on Google Cloud
> > > Platform to Google Cloud SQL with minimal downtime. As I checked, we
> > > have to export and import the SQL file; our database is large and we
> > > cannot afford long downtime.
> > >
> > > Does anyone have a solution to achieve this?
> >
> > set up replication from one to the other?
>
> Doesn't seem possible at the moment; here:
> https://cloud.google.com/sql/docs/postgres/replication/
> it says: "Cloud SQL for PostgreSQL does not yet support replication
> from an external master or external replicas for Cloud SQL instances."
>
> Looking at the feature list:
>
>   https://cloud.google.com/sql/docs/postgres/features
>
> among the "Unsupported features" are: "Any features that require
> SUPERUSER privileges"
> (apart from a limited number of extensions), which pretty much rules
> out pglogical or similar solutions.

That usually also rules out Slony-I, although there's a possibility...

Slony-I includes a feature called log shipping, which could perhaps be used
for this, assuming that the "source" environment does allow superuser
privileges.  (And I think you're running on a PostgreSQL instance where
that's possible...)

See: http://slony.info/documentation/logshipping.html

-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"



Re: Default Privilege Table ANY ROLE

2018-11-15 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 03:19:00PM +0100, Nicolas Paris wrote:
> Hi
> 
> I d'like my user be able to select on any new table from other users.
> 
> > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" GRANT 
> >  select ON TABLES TO "myuser"
> > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" GRANT 
> >  select ON TABLES TO "myuser"
> > ...
> 
> 
> Do I really have to repeat the command for all users?
> 
> The problem is I have many users able to create tables, and all of them
> have to be able to read each other's tables.
> 

There is apparently no trivial solution. Could the Postgres DCL be
extended with this syntax in the future?

> ALTER DEFAULT PRIVILEGES  FOR  ALL ROLE  IN SCHEMA "myschema" GRANT select ON 
> TABLES TO "myuser"
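
In the meantime, a rough sketch of the current workaround is to loop over
the owning roles and issue one ALTER DEFAULT PRIVILEGES per role (the role
and schema names below are the ones from the commands above):

    DO $$
    DECLARE
        r text;
    BEGIN
        -- repeat the grant once per table-owning role
        FOREACH r IN ARRAY ARRAY['theowner1', 'theowner2'] LOOP
            EXECUTE format(
                'ALTER DEFAULT PRIVILEGES FOR ROLE %I IN SCHEMA myschema
                 GRANT SELECT ON TABLES TO myuser', r);
        END LOOP;
    END $$;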




-- 
nicolas



Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Thomas Munro
On Fri, Nov 16, 2018 at 5:07 AM Adrian Klaver wrote:
> On 11/15/18 7:59 AM, Sachin Kotwal wrote:
> > I feel the community has mostly Linux-based instances in its buildfarm for
> > testing, with perhaps very few Ubuntu-based ones.
> > I might be wrong here.
>
> Easy enough to see:
>
> https://buildfarm.postgresql.org/cgi-bin/show_members.pl

Out of curiosity, here are the current counts for HEAD:

Linux distros:
  Amazon Linux: 1
  Arch Linux: 1
  CentOS: 9
  Debian: 34
  Fedora: 5
  Photon: 1
  Raspbian: 2
  RHEL: 8
  SUSE: 7
  Ubuntu: 7

BSD diaspora:
  DragonflyBSD: 1
  FreeBSD: 6
  NetBSD: 2
  OpenBSD: 1

OpenSolaris diaspora:
  OmniOS: 1
  SmartOS: 1

Other Unixen:
  AIX: 4
  HP-UX: 3
  macOS: 4

Windows:
  Windows: 6
  Cygwin: 1

I wouldn't be too worried about any of these, especially the open
ones.  Closed Solaris, though, is apparently dead to us.  Nobody cares
enough to build HEAD on it anymore, and for example 3a769d82 (reflink
support for pg_upgrade) went in without consideration of Solaris 11.4
reflink().  I've personally moved to the 'acceptance' phase of grief;
all three Unixes that I cut my teeth on in the 90s are now either
formally dead and buried or in this case, a zombie.

From personal observations, I know that we have developers and
committers doing their primary development work on at least Debian,
Fedora, FreeBSD, macOS, Ubuntu and Windows.

-- 
Thomas Munro
http://www.enterprisedb.com



Trouble with postgres_fdw & dblink extensions

2018-11-15 Thread Lukáš Sobotka
Hi guys,

I would be grateful for some help. I am writing to you because I am confused
about using foreign data wrappers and dblink. I attached a simplified script
describing the problem.

What am I trying to do?
I have two databases and I need to copy a table from the local database to
the remote one. The copying is done by a function which has a few parts:

   - loading settings from a foreign table (this part became problematic)
   - creating the destination table on the remote db
   - importing the foreign table
   - inserting data into the foreign table

If a query using the foreign table (with the settings) is performed, the
command for importing the schema does not import the newly created table
(it looks as if the table has not been created yet), so the copying ends
with an error. The second call of the function works (because the
destination table was already created by the first call).
If the function does not use the foreign table, the first call of the
function copies all the data.

Why can the newly created remote table not be imported into the local
database after I have performed a query on another foreign table? What am I
missing?

I am using PG 9.6 (PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit).

Best regards, Lukas


script.sql
Description: application/sql


Re: Trouble with postgres_fdw & dblink extensions

2018-11-15 Thread Tom Lane
Lukáš Sobotka writes:
> I would be grateful for some help. I am writing to you because I am confused
> about using foreign data wrappers and dblink. I attached a simplified script
> describing the problem.

I think what is happening is that postgres_fdw starts a transaction on
its connection as soon as it's asked to do something, and then the CREATE
TABLE executed on dblink's separate connection isn't visible to that
already-in-progress transaction.

That theory only holds up if you are running in serializable mode (which
postgres_fdw would then also use for its remote transaction).  Which you
didn't say, but it's hard to see how it'd fail otherwise.
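
A minimal sketch of that theory (the server, connection, and table names
here are invented for illustration):

    BEGIN ISOLATION LEVEL SERIALIZABLE;

    -- postgres_fdw opens its remote transaction at the first access
    SELECT * FROM settings_ft;

    -- dblink uses a separate connection; this commits remotely right away
    SELECT dblink_exec('test_server_link', 'CREATE TABLE dest (id int)');

    -- this runs inside the already-in-progress postgres_fdw remote
    -- transaction, whose serializable snapshot predates the CREATE TABLE
    IMPORT FOREIGN SCHEMA public LIMIT TO (dest)
        FROM SERVER test_server INTO public;

    COMMIT;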

regards, tom lane



Re: Trouble with postgres_fdw & dblink extensions

2018-11-15 Thread Adrian Klaver

On 11/15/18 3:23 PM, Lukáš Sobotka wrote:

Hi guys,

I would be grateful for some help. I am writing to you because I am 
confused about using foreign data wrappers and dblink. I attached a 
simplified script describing the problem.


What am I trying to do?
I have two databases and I need to copy a table from the local database to 
the remote one. The copying is done by a function which has a few parts:

  * loading settings from a foreign table (this part became problematic)
  * creating the destination table on the remote db
  * importing the foreign table
  * inserting data into the foreign table

If a query using the foreign table (with the settings) is performed, the 
command for importing the schema does not import the newly created table 
(it looks as if the table has not been created yet), so the copying ends 
with an error. The second call of the function works (because the 
destination table was already created by the first call).
If the function does not use the foreign table, the first call of the 
function copies all the data.


Why can the newly created remote table not be imported into the local 
database after I have performed a query on another foreign table? What am I 
missing?


Should this:

dblink('test_server_link' ...)

not be:

dblink_exec('test_server_link' ...)
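
For context, a small sketch of the difference (the remote table name here
is invented): dblink() runs a query and returns rows, so it needs a column
definition list, while dblink_exec() runs a utility command and returns
its command status:

    SELECT dblink_exec('test_server_link', 'CREATE TABLE dest (id int)');

    SELECT *
    FROM dblink('test_server_link', 'SELECT id FROM dest') AS t(id int);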



I am using PG 9.6 (PostgreSQL 9.6.10 on x86_64-pc-linux-gnu, compiled by 
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit).


Best regards, Lukas




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Sachin Kotwal
On Fri, 16 Nov 2018 at 1:06 AM, Alvaro Herrera wrote:
> On 2018-Nov-15, Sachin Kotwal wrote:
>
> > I feel the community has mostly Linux-based instances in its buildfarm for
> > testing, with perhaps very few Ubuntu-based ones.
>
> If you feel the need to run more buildfarm members on Ubuntu, run some
> yourself.  It's self-service.
>


Missed doing reply-all, so replying here.

I would love to do it. Just help me if I find any issues while doing it;
I know the community always does that, which is why I love PostgreSQL and
its community.

We already have 120+ production instances running on Ubuntu.
I will start testing new releases from source on Ubuntu as well.

Thanks,
Sachin


> --
> Álvaro Herrera            https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-15 Thread Sachin Kotwal
On Fri, 16 Nov 2018 at 3:36 AM, Thomas Munro wrote:
> On Fri, Nov 16, 2018 at 5:07 AM Adrian Klaver wrote:
> > On 11/15/18 7:59 AM, Sachin Kotwal wrote:
> > > I feel the community has mostly Linux-based instances in its buildfarm
> > > for testing, with perhaps very few Ubuntu-based ones.
> > > I might be wrong here.
> >
> > Easy enough to see:
> >
> > https://buildfarm.postgresql.org/cgi-bin/show_members.pl
>
> Out of curiosity, here are the current counts for HEAD:
>
> Linux distros:
>   Amazon Linux: 1
>   Arch Linux: 1
>   CentOS: 9
>   Debian: 34
>   Fedora: 5
>   Photon: 1
>   Raspbian: 2
>   RHEL: 8
>   SUSE: 7
>   Ubuntu: 7
>
> BSD diaspora:
>   DragonflyBSD: 1
>   FreeBSD: 6
>   NetBSD: 2
>   OpenBSD: 1
>
> OpenSolaris diaspora:
>   OmniOS: 1
>   SmartOS: 1
>
> Other Unixen:
>   AIX: 4
>   HP-UX: 3
>   macOS: 4
>
> Windows:
>   Windows: 6
>   Cygwin: 1
>
> I wouldn't be too worried about any of these, especially the open
> ones.  Closed Solaris, though, is apparently dead to us.  Nobody cares
> enough to build HEAD on it anymore, and for example 3a769d82 (reflink
> support for pg_upgrade) went in without consideration of Solaris 11.4
> reflink().  I've personally moved to the 'acceptance' phase of grief;
> all three Unixes that I cut my teeth on in the 90s are now either
> formally dead and buried or in this case, a zombie.
>
> From personal observations, I know that we have developers and
> committers doing their primary development work on at least Debian,
> Fedora, FreeBSD, macOS, Ubuntu and Windows.
>

Thanks for the detailed head count and explanation.
It has cleared things up for me, and I believe it should be clear to all new
community members that this topic doesn't have any effect on PostgreSQL's future.

Regards,
Sachin


> --
> Thomas Munro
> http://www.enterprisedb.com
>