Problem with COPY command on pg_dump

2017-12-09 Thread Blake McBride
Greetings,

I am using pg_dump on version 9.5.10 and trying to import it on version
9.3.20.  Many (or all) COPY commands are failing with messages like:

psql:16504.db:16874: ERROR:  missing data for column "street"
CONTEXT:  COPY address, line 1: ""

Is there an easy way for me to fix this (like export with some
compatibility flag)?

Thanks!

Blake McBride


Re: Problem with COPY command on pg_dump

2017-12-09 Thread Rob Sargent


> On Dec 9, 2017, at 8:42 AM, Blake McBride  wrote:
> 
> Greetings,
> 
> I am using pg_dump on version 9.5.10 and trying to import it on version 
> 9.3.20.  Many (or all) COPY commands are failing with messages like:  
> 
> psql:16504.db:16874: ERROR:  missing data for column "street"
> CONTEXT:  COPY address, line 1: ""
> 
> Is there an easy way for me to fix this (like export with some compatibility 
> flag)?
> 
> Thanks!
> 
> Blake McBride

> 
Is there in fact ‘street’ data in the file?  It sounds like you have the wrong
number of columns. Can you show the actual dump and restore commands you used
here?
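
For reference, something along these lines would help (host and database names
purely illustrative; the dump file name is taken from your error message):

  pg_dump -h server95 -U postgres -f 16504.db mydb
  psql -h server93 -U postgres -d mydb -f 16504.db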




Windows XP to Win 10 migration issue

2017-12-09 Thread Dale Seaburg
I have a database on a Windows XP machine (old PC), PostgreSQL 8.4.5, which
needs to be migrated to Windows 10 Pro (new PC).


Now for the bad news: when I powered the XP PC down, before doing any 
backup or database unload, it would not power back up.  I will be 
attempting to get that PC fixed.  In the meantime, I removed the 
old PC's HD and installed it in a SATA-to-USB enclosure, and I installed the same 
version of PostgreSQL (8.4.5) on the Win 10 machine, thinking I could 
simply copy the data folder from the XP's HD into the 
appropriate folder on the Win 10's HD (C:/.../8.4.5/).  I did the copy of the data folder 
(after renaming the new, unused data folder to preserve it): I stopped 
the postgres service, made the copy, and attempted to restart the 
postgres service.  No go!  It would not start.  As a double-check, I 
renamed the copied data folder to something else, renamed the 
preserved data folder back to its original name, and the service 
restarted just fine.


So, without being able to back up or unload (dump?) the original database 
on the original PC, what would be the best way to move the data to the new 
PC?  Or IS there any way to do this without reviving the old XP PC?


Dale



Re: Problem with COPY command on pg_dump

2017-12-09 Thread Tom Lane
Rob Sargent  writes:
>> On Dec 9, 2017, at 8:42 AM, Blake McBride  wrote:
>> I am using pg_dump on version 9.5.10 and trying to import it on version 
>> 9.3.20.  Many (or all) COPY commands are failing with messages like:  
>> psql:16504.db:16874: ERROR:  missing data for column "street"
>> CONTEXT:  COPY address, line 1: ""

> Is there in fact ‘street’ data in the file?   Sounds like you have the wrong 
> number of columns. Do you have your actual dump and restore commands to show 
> here?

While pg_dump certainly does sometimes produce output that doesn't reload
into older servers, we try to minimize such problems --- and in any case,
the actual COPY data shouldn't be a source of issues.  I'm suspicious that
the true problem occurred earlier; maybe a table drop/creation failed
and now COPY is trying to load into a pre-existing table that has the
right name and the wrong column list.

I'd counsel looking at the *first* error message and resolving that,
then trying again.  Anything later could just be cascaded failures.
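
One way to spot that first error is to restore with ON_ERROR_STOP, so psql
halts at the first failure instead of reporting a cascade (the dump file name
is from the error above; the database name is illustrative):

  psql -v ON_ERROR_STOP=1 -f 16504.db mydb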

regards, tom lane



Re: Problem with COPY command on pg_dump

2017-12-09 Thread Blake McBride
I suppose it's something I'm doing.  I'm pg_dump'ing the schema with the
data, so I can't be missing any columns.  However, I wrote a program to
filter out certain data, and I believe that is where the problem is.  I
wrote it a long, long time ago, before --exclude-table-data existed.  When
I use --exclude-table-data instead, it works.
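
For the archives, the working invocation is roughly of this form (table
pattern and database name illustrative):

  pg_dump --exclude-table-data='big_log_table' -f 16504.db mydb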

Thanks.

Blake

On Sat, Dec 9, 2017 at 9:53 AM, Rob Sargent  wrote:

>
>
> > On Dec 9, 2017, at 8:42 AM, Blake McBride  wrote:
> >
> > Greetings,
> >
> > I am using pg_dump on version 9.5.10 and trying to import it on version
> 9.3.20.  Many (or all) COPY commands are failing with messages like:
> >
> > psql:16504.db:16874: ERROR:  missing data for column "street"
> > CONTEXT:  COPY address, line 1: ""
> >
> > Is there an easy way for me to fix this (like export with some
> compatibility flag)?
> >
> > Thanks!
> >
> > Blake McBride
>
> >
> Is there in fact ‘street’ data in the file?   Sounds like you have the
> wrong number of columns. Do you have your actual dump and restore commands
> to show here?
>
>


Future of PlPython2

2017-12-09 Thread Boshomi DeWiki
Python 2.7 will not be maintained past 2020.[1]

Python 2 is still the default for Postgres 10: CREATE EXTENSION PLPYTHONU
results in installation of PLPYTHON2U.

As of now SUSE does not support PLPYTHON3U. (This will change soon.)

Are there any plans to declare Python 3 the default for Postgres 11 or a
later release?

[1] https://www.python.org/dev/peps/pep-0373/


Re: Future of PlPython2

2017-12-09 Thread Joshua D. Drake

On 12/09/2017 09:15 AM, Boshomi DeWiki wrote:

Python 2.7 will not be maintained past 2020.[1]

Python2 is still default for Postgres 10. CREATE EXTENSION PLPYTHONU 
results in installation of PLPYTHON2U.


As of now SUSE does not support PLPYTHON3U. (This will change soon)

Are there any plans for declaring Python3 as default for Postgres 11 
or a later Release?


[1] https://www.python.org/dev/peps/pep-0373/


This is probably more of a -hackers question, but I would +1 the python3 
version being the default in 11.
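
In the meantime, the Python 3 variant can already be requested explicitly
wherever the plpython3u package is installed, e.g.:

  CREATE EXTENSION plpython3u;

  CREATE FUNCTION pymax(a integer, b integer) RETURNS integer AS $$
    return max(a, b)
  $$ LANGUAGE plpython3u;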


JD


--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.org
* Unless otherwise stated, opinions are my own.   *




PG Schema to be used as log and monitoring store

2017-12-09 Thread Stefan Keller
Hi,

Given this kind of sensor (Internet-of-Things) logging and monitoring scenario:

* There are 3 production machines, monitored every few seconds for the
forthcoming (~2) years.
* Machine m1 emits 20 boolean and 20 float4 values captured by sensors
(m1s1..m1s40).
* Machine m2 has the same attributes as m1 plus 10+10 more (m2s1..m2s20).
* Machine m3: like m2, but half of the attributes are different.
* Queries happen about once a day, like:
  SELECT m1s1,m1s2 FROM m1 WHERE logged BETWEEN '2017-11-01' AND '2017-11-30'.

So this is a kind of "immutable DB" where there are
* a rather static schema, with sources that have overlapping attributes,
* heavy writes,
* periodic reads.

Would you also model this schema like my proposal below, which saves
space but makes inserts/updates a little more complex because of the
arrays?

  create table m1 (
id bigint,
created timestamp,
b20 bit(20) default b'',
farr20 float8[20]
  );

:Stefan



Re: PG Schema to be used as log and monitoring store

2017-12-09 Thread James Keener
My initial inclination is always to build the simplest-to-understand system 
first. Space is cheap and pg is pretty efficient; engineering time is expensive, 
and debugging time doubly so, with a side of anxiety when production goes down. 
Also, it will allow more flexibility later on to describe your current setup 
semantically rather than in a condensed form.

I would suggest building a simpler schema and benchmarking insert times and 
storage space.

If you go with your condensed form, I would suggest writing functions to 
extract any data that is more complicated than one expression involving one 
field.
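
For instance, a minimal sketch (function and column names hypothetical,
assuming the bit(20)/float8[20] layout from your proposal):

  -- return the n-th boolean sensor out of the packed bit string (1-based)
  create function flag_at(b bit, n int) returns boolean
    language sql immutable
    as $$ select substring(b from n for 1) = B'1' $$;

  -- usage:
  -- SELECT id, flag_at(b20, 3) AS m1s3, farr20[3] AS m1s23
  -- FROM m1 WHERE created BETWEEN '2017-11-01' AND '2017-11-30';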

Just my 2¢.

Best wishes,

Jim

On December 9, 2017 2:22:02 PM EST, Stefan Keller  wrote:
>Hi,
>
>Given this kind of sensors (Internet-of-Things) log and monitoring
>scenario:
>
>* There are 3 production machines monitored every few seconds for
>forthcoming (~2) years.
>* Machine m1 is emitting 20 boolean and 20 float4 captured in sensors
>(m1s1..m1s40).
>* Machine m2 has same attributes as m1 plus 10+10 more (m2s1..m2s20).
>* Machine m3: like m2 but half of the attributes are different.
>* Queries are happening once every day, like:
>SELECT m1s1,m1s2 FROM m1 WHERE logged BETWEEN '2017-11-01' AND
>'2017-11-30'".
>
>So this is a kind of an "Immutable DB" with where there are
>* rather static schema with sources which have overlapping attributes
>* heavy writes,
>* periodic reads
>
>Would you model this schema also like my proposition, which saves
>place but makes it little bit more complex to insert/update due to the
>arrays?
>
>  create table m1 (
>id bigint,
>created timestamp,
>b20 bit(20) default b'',
>farr20 float8[20]
>  );
>
>:Stefan

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Windows XP to Win 10 migration issue

2017-12-09 Thread Scott Mead


> On Dec 9, 2017, at 11:11, Dale Seaburg  wrote:
> 
> I have a database in Windows XP (old PC), postgresql 8.4.5 which needs to be 
> migrated to Windows 10 Pro (new PC).
> 
> Now, for the bad news, the XP PC, when I powered down, before doing any 
> backup or database unload, would not power back up.  I will be attempting to 
> get the PC fixed for that.  In the meantime, I removed the old PC HD, 
> installing in a SATA to USB box and I installed the same version of 
> Postgresql (8.4.5) on the Win 10 machine, thinking I could simply copy the 
> data folder from the XP's HD to the Win 10's HD into the appropriate folder 
> (C:/.../8.4.5/).  I did the copy of the data folder, after renaming the new, 
> unused data folder to preserve it).  I stopped the postgres service, made the 
> copy, and attempted to restart the postgres service.  No Go!  Would not 
> start.  As a double-check, I renamed the copied data folder to something else 
> and renamed the preserved data folder back to it's original name, and the 
> service restarted just fine.
> 
> So, without being able to backup or unload (dump?) the original database on 
> the original PC, what would be the best way to move data to the new PC?  Or, 
> IS there any way to do this, without reviving the old XP, PC?

I’m guessing that the old machine was 32-bit and the new one is 64?  They have 
to match. You could always set up a 32-bit VM; it must be 32-bit Windows (if 
that’s what the original was).


> 
> Dale
> 



Re: Future of PlPython2

2017-12-09 Thread Peter Eisentraut
On 12/9/17 12:15, Boshomi DeWiki wrote:
> Python 2.7 will not be maintained past 2020.[1] 
> 
> Python2 is still default for Postgres 10. CREATE EXTENSION PLPYTHONU
> results in installation of PLPYTHON2U.
> 
> As of now SUSE does not support PLPYTHON3U. (This will change soon)
> 
> Are there any plans for declaring Python3 as default for Postgres 11 or
> a later Release?

For me, this depends on whether there will be an update to PEP 394
sometime for guidance.



Re: PG Schema to be used as log and monitoring store

2017-12-09 Thread Steven Lembark
On Sat, 9 Dec 2017 20:22:02 +0100
Stefan Keller  wrote:

>   create table m1 (
> id bigint,
> created timestamp,
> b20 bit(20) default b'',
> farr20 float8[20]
>   );

In general this is a bad idea *unless* you have benchmarked the 
database and found that the amount of space saved really does make
some difference. Using the packed format makes most SQL a lot harder
to write and makes indexing impossible (or at least messy and rather
error prone). This also makes adding add'l fields harder.

If you were really intent on doing this I'd add a few million records
in both formats on a database tuned to handle the load and see if 
the packed bits really do make a difference. My guess is that you
won't see all that much difference in storage, and the query speed
with effective indexing is going to be decent.

Using this database might be a lot simpler with a few views that 
break the sub-fields out, or with indexes on the sub-fields within 
the packed data. 
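
For example (sensor positions hypothetical), expression indexes on individual
packed sub-fields would look like:

  CREATE INDEX m1_s23_idx ON m1 ((farr20[3]));
  CREATE INDEX m1_s3_idx  ON m1 ((substring(b20 from 3 for 1)));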


-- 
Steven Lembark   1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com   +1 888 359 3508



Re: Windows XP to Win 10 migration issue

2017-12-09 Thread John R Pierce

On 12/9/2017 1:03 PM, Scott Mead wrote:

I’m guessing that the old machine was 32 but and the New is 64?  They have to 
match. You could always setup a 32 bit VM. It must be 32 bit windows (if that’s 
what the original was )


you should be able to run 32-bit postgresql on 64-bit windows.

it's a bit trickier than just the same version, however...  they should 
be from the same distribution too... a version built with GCC and Cygwin 
might not be binary compatible with a version built with Microsoft 
Visual C (such as the EnterpriseDB versions) due to differences in 
runtime libraries.



--
john r pierce, recycling bits in santa cruz




Re: PG Schema to be used as log and monitoring store

2017-12-09 Thread Stefan Keller
Hi

Thanks James and Steven!
I hoped somebody would advise me not to do this.
I was just bothered by NoSQL databases.
Even TimescaleDB made me wonder, because it says it "scales Postgres" [1]
for IoT, which implies that Postgres does not scale...

2017-12-09 23:01 GMT+01:00 Steven Lembark :
> In general this is a bad idea *unless* you have benchmarked the
> database and found that the amount of space saved really does make
> some difference.

I actually made some tests on my own (using generate_series) and did
not find any disk space or performance issues yet.
I've also found this paper from 2012 about "Sensor Data Storage
Performance: SQL or NoSQL, Physical or Virtual"  [2] which confirms my
observations.

Now, you have to know that there are about 100 attributes for the
machines/tables - not only 40 - so I initially thought it would be easier
to set up the schema using bit(50) and float8[50].

Below I re-modeled it to a relational schema as you suggested and also
tried to utilize the INHERITS feature.
Does that look better?

:Stefan

[1] https://blog.timescale.com/choose-postgresql-for-iot-19688efc60ca
[2] 
https://www.ceid.upatras.gr/webpages/faculty/vasilis/Courses/SpatialTemporalDM/Papers/SQLorNoSQL2012.pdf


/*  Pure relational logging and monitoring schema */

  create table m_meta (
id int primary key,
name text
  );

  drop table if exists m cascade;
  create table m (
id bigint primary key,
gid int references m_meta not null,
created timestamp,
b1 bit,
b2 bit,
b3 bit,
    -- b4 .. b20
f1 float8,
f2 float8,
f3 float8
--f4 ... f20
  );

  create table m1 (
b21 bit,
-- b22 .. b50 bit,
f21 float8,
    -- f22 .. f50 float8,
primary key (id),
foreign key (gid) references m_meta
  ) inherits (m);

  --create table m2 ( ... ) inherits (m);  -- likewise m3

/* end */
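
A typical daily query would then go against the parent table (which, with
inheritance, includes the child tables' rows by default); column choice
illustrative:

  SELECT gid, created, f1, f2
  FROM m
  WHERE created BETWEEN '2017-11-01' AND '2017-11-30';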


2017-12-09 23:01 GMT+01:00 Steven Lembark :
> On Sat, 9 Dec 2017 20:22:02 +0100
> Stefan Keller  wrote:
>
>>   create table m1 (
>> id bigint,
>> created timestamp,
>> b20 bit(20) default b'',
>> farr20 float8[20]
>>   );
>
> In general this is a bad idea *unless* you have benchmarked the
> database and found that the amount of space saved really does make
> some difference. Using the packed format makes most SQL a lot harder
> to write and makes indexing impossible (or at least messy and rather
> error prone). This also makes adding add'l fields harder.
>
> If you were really intent on doing this I'd add a few million recods
> with both formats on a database tuned to handle the load and see if
> the packed bits really do make a difference. My guess is that you
> won't see all that much difference in storage and the query speed
> with effective indexing is going to be decent.
>
> Using this database might be a lot simpler with a few that
> breaks the sub-fields out, or which has indexes on the sub
> -fields within the packed data.
>
>
> --
> Steven Lembark   1505 National Ave
> Workhorse Computing Rockford, IL 61103
> lemb...@wrkhors.com+1 888 359 3508
>



Re: PG Schema to be used as log and monitoring store

2017-12-09 Thread John R Pierce

On 12/9/2017 5:46 PM, Stefan Keller wrote:

Below I re-modeled it to a relational schema as you suggested and also
tried to utilize the INHERITS feature.
Does that look better?



I believe I would use boolean, not bit.
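
i.e. something along these lines (columns abbreviated, reusing the m_meta
table from upthread):

  create table m (
    id bigint primary key,
    gid int references m_meta not null,
    created timestamp,
    b1 boolean,
    b2 boolean,
    b3 boolean,
    -- b4 .. b20 boolean
    f1 float8,
    f2 float8,
    f3 float8
    -- f4 .. f20 float8
  );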


--
john r pierce, recycling bits in santa cruz