Re: [GENERAL] how to do merge in postgres ("with upsert as" not supported)

2015-02-27 Thread Jim Nasby

On 2/26/15 2:23 PM, Tong Michael wrote:

I saw some people use "with upsert as", but my pgAdmin version(1.8)
doesn't support it. Anyone has any ideas how to do merge in postgres?


Actually, this feature is in active development and will hopefully make 
it into 9.5.


In the meantime, take a look at Example 40-2 in the Trapping Errors 
section of the plpgsql docs for how you can do this today: 
http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
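For reference, the technique in that example boils down to the following (adapted from Example 40-2 in the docs; a table db(a int primary key, b text) is assumed):

```sql
CREATE OR REPLACE FUNCTION merge_db(k int, data text) RETURNS void AS
$$
BEGIN
    LOOP
        -- First try to update the key.
        UPDATE db SET b = data WHERE a = k;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert the key.  If someone else inserts
        -- the same key concurrently, we get a unique-key failure...
        BEGIN
            INSERT INTO db (a, b) VALUES (k, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- ...in which case do nothing, and loop to retry the UPDATE.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Unlike the writable-CTE approach, this retry loop is safe under concurrent inserts.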

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: [pgadmin-support] Issue with a hanging apply process on the replica db after vacuum works on primary

2015-02-27 Thread Jim Nasby

On 2/26/15 12:25 AM, Sergey Shchukin wrote:

Hi Radovan!

Thank you for the reply. The point is that this table is not
subject to massive updates/deletes.

Are there any additional tracing tools, besides perf or pg_top, to see what
the replica is doing at the particular moment when we are lagging in replay?
To see locks or spins or sleeps, etc.?


Please don't top-post.

What version is this? What is max_standby_streaming_delay set to?
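For anyone following along, the numbers in question can be pulled with plain SQL (column names as of 9.x; sent_location/replay_location were renamed in 10):

```sql
-- On the standby: how far behind replay is, and whether it is advancing
SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;

-- On the master: WAL positions per connected standby
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;

-- The setting asked about above
SHOW max_standby_streaming_delay;
```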


Thank you!

-

Best regards,
Sergey Shchukin

24.02.2015 19:05, Radovan Jablonovsky writes:

This looks more like an issue for the pgsql-general mailing list.

Possible solutions:
1) Set specific autovacuum parameters on the big table. Autovacuum could
then vacuum the table over multiple runs based on the threshold and cost
settings.
An example of setting specific autovacuum and analyze values for a
table; it should be adjusted for your system, workload, table usage, etc.:
alter table "my_schema"."my_big_table" set (fillfactor = 80,
autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,
autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =
0.05, autovacuum_analyze_scale_factor = 0.005,
autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 5000);

2) Partition the large table on the master and vacuum it
partition by partition.

On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin
<shchukin@gmail.com> wrote:

Hi all!

Could someone help me with an issue in the apply process on the
replica? We have streaming replication, and after vacuum finishes
working on a big table we get a "freeze" in applying data on the
replica database. It looks like this:

Tue Feb 24 15:04:51 MSK 2015 Stream:
MASTER-masterdb:79607136410456 SLAVE:79607136410456
Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 seconds)
Tue Feb 24 15:04:52 MSK 2015 Stream:
MASTER-masterdb:79607137892672 SLAVE:79607137715392
Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 seconds)
Tue Feb 24 15:04:53 MSK 2015 Stream:
MASTER-masterdb:79607139327776 SLAVE:79607139241816
Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 seconds)
Tue Feb 24 15:04:54 MSK 2015 Stream:
MASTER-masterdb:79607141134776 SLAVE:79607141073344
Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 seconds)
Tue Feb 24 15:04:55 MSK 2015 Stream:
MASTER-masterdb:79607143085176 SLAVE:79607143026440
Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 seconds)
Tue Feb 24 15:04:56 MSK 2015 Stream:
MASTER-masterdb:79607145111280 SLAVE:79607145021384
Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 seconds)
Tue Feb 24 15:04:57 MSK 2015 Stream:
MASTER-masterdb:79607146564424 SLAVE:79607146478336
Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 seconds)
Tue Feb 24 15:04:58 MSK 2015 Stream:
MASTER-masterdb:79607148160680 SLAVE:79607148108352
Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 seconds)
Tue Feb 24 15:04:59 MSK 2015 Stream:
MASTER-masterdb:79607150220688 SLAVE:79607150159632
Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 seconds)
Tue Feb 24 15:05:00 MSK 2015 Stream:
MASTER-masterdb:79607152365360 SLAVE:79607152262696
Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 seconds)
Tue Feb 24 15:05:02 MSK 2015 Stream:
MASTER-masterdb:79607154049848 SLAVE:79607154012624
Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 seconds)
Tue Feb 24 15:05:03 MSK 2015 Stream:
MASTER-masterdb:79607155229992 SLAVE:79607155187864
Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 seconds)
Tue Feb 24 15:05:04 MSK 2015 Stream:
MASTER-masterdb:79607156833968 SLAVE:79607156764128
Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 seconds)
Tue Feb 24 15:05:05 MSK 2015 Stream:
MASTER-masterdb:79607158419848 SLAVE:79607158344856
Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 seconds)
Tue Feb 24 15:05:06 MSK 2015 Stream:
MASTER-masterdb:79607160004776 SLAVE:79607159962400
Replay:7960715998 :: REPLAY 16 KBytes (00:00:00.003162 seconds)
*--here apply process just stops*

Tue Feb 24 15:05:07 MSK 2015 Stream:
MASTER-masterdb:79607161592048 SLAVE:79607161550576
Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds)
Tue Feb 24 15:05:08 MSK 2015 Stream:
MASTER-masterdb:79607163272840 SLAVE:79607163231384
Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 seconds)
Tue Feb 24 15:05:09 MSK 2015 Stream:
MASTER-masterdb:79607164958632 SLAVE:79607164904448
Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 seconds)
Tue Feb 24 15:05:10 MSK 2015 Stream:
MASTER-masterdb:79607166819560 SLAVE:7960716612
Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 seconds)
Tue Feb 24 15:05:11 MSK 2015 Stream:
MASTER-masterdb:79607168595280 SLAVE:

Re: [GENERAL] how to do merge in postgres ("with upsert as" not supported)

2015-02-27 Thread Thomas Kellerer
Tong Michael schrieb am 26.02.2015 um 21:23:
> 
> hey, guys, I came across a merge statement when I'm trying to convert stored 
> procedures from Mysql to Postgres:
> 
> 
> merge into db.ChargePeriod d
> using (
> select ba.ClientID
> ...
> ...

That can't be MySQL - MySQL does not have a MERGE statement. 
That looks much more like SQL Server's T-SQL (MERGE, @ style variables, 
convert() function...)

Here are several ways to do it: http://stackoverflow.com/q/1109061/330315

> I saw some people use "with upsert as", but my pgAdmin version(1.8) doesn't 
> support it. 

pgAdmin supports whatever your Postgres version supports. 
All supported/maintained Postgres versions support common table expressions.

What exactly do you mean with "doesn't support it"? What was the exact SQL 
statement you tried? 
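For completeness, the "with upsert as" idiom people refer to is an ordinary writable CTE, roughly like this (table and column names are made up; note this is not race-free under concurrent inserts, unlike the feature planned for 9.5):

```sql
WITH upsert AS (
    UPDATE mytable
       SET val = 'new value'
     WHERE key = 1
    RETURNING *
)
INSERT INTO mytable (key, val)
SELECT 1, 'new value'
WHERE NOT EXISTS (SELECT 1 FROM upsert);
```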





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regarding "Point-in-time Recovery" feature

2015-02-27 Thread Jim Nasby

On 2/23/15 5:39 AM, Saurabh Gupta A wrote:

Hello,

Has anybody used the online backup feature of PostgreSQL? The precise
PostgreSQL term is "Point-in-Time Recovery" (PITR).
This means enabling the following additional options in the config:
---
archive_mode = on
archive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f' # This is
only example path
---

If yes then may I know how it is used and how it impacts database
performance?


Other than the overhead of the copy itself, it generally doesn't. There 
are a very limited number of shortcuts we can take when wal_level is set 
to minimal, such as not WAL logging the full contents of data inserted 
into a table that was created/truncated in the same transaction, but 
generally archiving doesn't impact performance (assuming you keep it 
running correctly so pg_xlog doesn't fill up ;)
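A minimal working setup, as of the 9.x docs, looks roughly like this (paths are examples; note the first option is archive_mode, not archive_command):

```sql
-- postgresql.conf (requires a restart):
--   wal_level = archive
--   archive_mode = on
--   archive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f'

-- Then take a base backup, e.g. with the low-level API:
SELECT pg_start_backup('nightly');
-- ... copy the data directory with rsync/tar ...
SELECT pg_stop_backup();
```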

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] express composite type literal as text

2015-02-27 Thread Jim Nasby

On 2/21/15 7:35 PM, Eric Hanson wrote:

Hi,

I'm trying to use a composite type in a WHERE clause, as described here:

http://www.postgresql.org/docs/9.4/static/rowtypes.html

Just pasting in the examples I get:

CREATE TYPE complex AS (
 r   double precision,
 i   double precision
);

CREATE TYPE inventory_item AS (
 nametext,
 supplier_id integer,
 price   numeric
);

CREATE TABLE on_hand (
 item  inventory_item,
 count integer
);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);


Now I want to query for that row, specifying the item in the WHERE
clause.  I can't use the ROW() notation, because all values need to be
represented as text over a REST api.  But I can't seem to get the
text-based syntax to work:

select * from on_hand where item='("fuzzy dice",42,1.99)';

yields

ERROR:  input of anonymous composite types is not implemented

I've tried various forms of quote escaping and dollar quoting as the
docs suggest, but they all produce that same error:

select * from on_hand where item='(\\\"fuzzy dice\\\",42,1.99)';
select * from on_hand where item=$$("fuzzy dice",42,1.99)$$;

How do I express a composite type literal as text?

Thanks,
Eric


select * from on_hand where item='("fuzzy dice",42,1.99)'::inventory_item;
          item          | count
------------------------+-------
 ("fuzzy dice",42,1.99) |  1000
(1 row)
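Two related forms that may be useful: comparing a single field (the parentheses keep the parser from reading item as a table name), and building the comparison value with a ROW() constructor instead of a text literal:

```sql
SELECT * FROM on_hand WHERE (item).name = 'fuzzy dice';

SELECT * FROM on_hand
 WHERE item = ROW('fuzzy dice', 42, 1.99)::inventory_item;
```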


--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: [pgadmin-support] Issue with a hanging apply process on the replica db after vacuum works on primary

2015-02-27 Thread Sergey Shchukin

27.02.2015 11:52, Jim Nasby writes:

On 2/26/15 12:25 AM, Sergey Shchukin wrote:

Hi Radovan !

Thank you for the reply. The question is that this table is not a
subject for a massive updates/deletes.

Is there any additional traces except from perf or pg_top to trace what
replica is doing at the particular moment when we are lagging in replay?
To see locks or spins or sleeps etc..


Please don't top-post.

What version is this? What is max_standby_streaming_delay set to?


Thank you!

-

Best regards,
Sergey Shchukin

24.02.2015 19:05, Radovan Jablonovsky writes:

This looks like more issue for pgsql-general mailing list.

Possible solutions
1) Set specific autovacuum parameters on the big table. The autovacuum
could vacuum table on multiple runs based on the thresholds and cost
settings
Example of setting specific values of autovacuum and analyze for
table. It should be adjusted for your system, work load, table 
usage, etc:

alter table "my_schema"."my_big_table" set (fillfactor = 80,
autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,
autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =
0.05, autovacuum_analyze_scale_factor = 0.005,
autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 
5000);


2) Could be to partition the large table on master site and vacuum it
partition by partition.

On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin
<shchukin@gmail.com> wrote:

Hi all!

May someone help me with the issue in the apply process on the
replica. We have a stream replication and after vacuum stops
working with a big table we get a "freeze" in applying data on the
replica database. It looks like this:

[...]

Re: [GENERAL] Re: [pgadmin-support] Issue with a hanging apply process on the replica db after vacuum works on primary

2015-02-27 Thread Sergey Shchukin

27.02.2015 14:11, Sergey Shchukin writes:

27.02.2015 11:52, Jim Nasby writes:

On 2/26/15 12:25 AM, Sergey Shchukin wrote:

Hi Radovan !

Thank you for the reply. The question is that this table is not a
subject for a massive updates/deletes.

Is there any additional traces except from perf or pg_top to trace what
replica is doing at the particular moment when we are lagging in 
replay?

To see locks or spins or sleeps etc..


Please don't top-post.

What version is this? What is max_standby_streaming_delay set to?


Thank you!

-

Best regards,
Sergey Shchukin

24.02.2015 19:05, Radovan Jablonovsky writes:

This looks like more issue for pgsql-general mailing list.

Possible solutions
1) Set specific autovacuum parameters on the big table. The autovacuum
could vacuum table on multiple runs based on the thresholds and cost
settings
Example of setting specific values of autovacuum and analyze for
table. It should be adjusted for your system, work load, table 
usage, etc:

alter table "my_schema"."my_big_table" set (fillfactor = 80,
autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,
autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =
0.05, autovacuum_analyze_scale_factor = 0.005,
autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 
5000);


2) Could be to partition the large table on master site and vacuum it
partition by partition.

On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin
<shchukin@gmail.com> wrote:

Hi all!

May someone help me with the issue in the apply process on the
replica. We have a stream replication and after vacuum stops
working with a big table we get a "freeze" in applying data on the
replica database. It looks like this:

[...]

[GENERAL] Performance on DISABLE TRIGGER

2015-02-27 Thread gmb
Hi all

I'm doing some maintenance - which is done quite often, and I've never had
this problem before - which requires me to disable triggers, run some
updates, and then re-enable the triggers.
Where the whole process normally takes 30 sec, it took much longer today and
I cancelled it after 5 minutes.

After running the statements individually to pinpoint the issue, I
identified that the problem is on the first line:
 >> ALTER TABLE tab DISABLE TRIGGER trig;
I have not been able to run this successfully -- after 10 minutes the
connection (to the remote DB) got severed.
Any ideas on where I can start to look for the cause of the problem?

Thanks
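[Editor's note: ALTER TABLE ... DISABLE TRIGGER needs an ACCESS EXCLUSIVE lock on the table, so the usual first suspect is another session holding any lock on it, even an idle-in-transaction one. While the statement hangs, something like this from another session may show the blocker (pg_stat_activity columns as of 9.x):]

```sql
-- Ungranted lock requests, with the relation they are waiting on
SELECT l.pid, c.relname, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;

-- What every non-idle backend is doing, and whether it is waiting
SELECT pid, usename, waiting, state, query
FROM pg_stat_activity
WHERE state <> 'idle';
```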



--
View this message in context: 
http://postgresql.nabble.com/Performance-on-DISABLE-TRIGGER-tp5839727.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] range type expression syntax

2015-02-27 Thread John Turner
On Fri, 27 Feb 2015 02:57:15 -0500, Marc Mamin   
wrote:



postgres=# select concat('[', now()::date, ',', now()::date,
']')::daterange testrange;


There are range specific functions for this:
select daterange(now()::date, now()::date, '[]')

regards,
Marc Mamin



Marc, thanks - indeed it's right there in 8.17.6!  As you might suspect,  
I'm rushing through things more than I'd care to, so I'm grateful for this  
list and the dose of sanity it provides :)


Cheers,
John

___
From: pgsql-general-ow...@postgresql.org
[pgsql-general-ow...@postgresql.org] on behalf of John
Turner [jjtur...@energi.com]

Sent: Thursday, 26 February 2015 21:17
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] range type expression syntax

On Thu, 26 Feb 2015 15:11:28 -0500, John Turner 
wrote:


Seems I'm missing a trick trying to get rangetypes working:

No problem building the string:
select concat('''[', now()::date, ',', now()::date, ']''') testrange;
         testrange
---------------------------
 '[2015-02-26,2015-02-26]'
(1 row)

Bombed-out trying to turn this into a daterange:
postgres=# select concat('''[', now()::date, ',', now()::date,
']''')::daterange testrange;
ERROR:  malformed range literal: "'[2015-02-26,2015-02-26]'"
DETAIL:  Missing left parenthesis or bracket.

Is there a specific casting I need to apply in order to render a literal
daterange from parameterized range elements?

/john



Sorry - too quick to post, I realize there was no need to wrap the
expression in extra quotes:

postgres=# select concat('[', now()::date, ',', now()::date,
']')::daterange testrange;
        testrange
-------------------------
 [2015-02-26,2015-02-27)
(1 row)



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Pass a URI as a pgAdmin parameter

2015-02-27 Thread Clodoaldo Neto
I can connect to Heroku with psql passing a URI as generated by
http://api.postgression.com:

$ psql 
postgres://hcnkrxukvhqiww:lryfxzcmlboytgtxpsvla8h...@ec2-107-21-93-97.compute-1.amazonaws.com:5432/dearu2qcqcmi7t

I want to do the same with pgAdmin:

$ pgadmin3 -qc 
postgres://hcnkrxukvhqiww:lryfxzcmlboytgtxpsvla8h...@ec2-107-21-93-97.compute-1.amazonaws.com:5432/dearu2qcqcmi7t

but I get the error message:

Unknown token in connection string: 

Is it possible to pass a URI as parameter to pgAdmin? Fedora 21,
pgadmin3_94.x86_64 1.20.0-2.f21 from pgdg-94-fedora.repo

Regards, Clodoaldo


Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Melvin Call
On 2/26/15, Vick Khera  wrote:
> On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call 
> wrote:
>
>> Montreal where the e is an accented e. The output ends up in the text
>> file
>> as
>> Montr\xe9al, where the xe9 is a single character. When I try to copy that
>> into
>> my PostgreSQL table, I get an error "ERROR:  invalid byte sequence for
>> encoding
>>
>
> Character code E9 is not UTF8. Don't tell Postgres you're importing UTF8 if
> you're not.
>
Thank you Vick, adding the ENCODING 'latin1' option to the COPY command worked
perfectly.
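For the archives, the fix amounts to declaring the file's real encoding in the COPY itself (table and path here are examples):

```sql
COPY cities FROM '/tmp/mysql_export.txt'
    WITH (FORMAT text, ENCODING 'latin1');

-- The same conversion can be seen directly: byte E9 is é in latin1
SELECT convert_from('\xe9'::bytea, 'latin1');
```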

If you don't mind a follow up to your reply, I have tried to understand the
different character sets and collations, but I guess I still have a lot to
learn. Your suggestion did not even come close to crossing my mind because the
MySQL table and database are encoded in UTF8. I assume the conversion to latin1
happened because I was putting the MySQL query output into a locally stored
text file? Regardless, can you point me to some reading that would have clued
me in that e9 is not a UTF8 character? Or is the clue the fact that it was not
preceded with 0x00?

Regards,
Melvin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Melvin Call
On 2/27/15, Adam Hooper  wrote:
> On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call 
> wrote:
>
>> So my question is, how do I sanitize the hex character in the middle of a
>> word
>> to be able to copy in Montreal with an accented e? Or am I going about
>> this at
>> the wrong point?
>
> Hi Melvin,
>
> This is not a Postgres problem, and it is not a regex problem. So yes,
> you're going about it at the wrong point: you're trying to modify a
> _character_ at a time, but you _should_ be trying to modify a _byte_
> at a time. Text replacement cannot do what you want it to do.
>
> If you're on Linux or Mac, iconv will work -- for instance, `iconv
> --from-code=windows-1252 --to-code=utf-8 < input-file.txt >
> output-file.txt`
>
> Otherwise, you can use a text editor. Be sure to open the file
> properly (such that é appears) and then save it as utf-8.
>
> Alternatively, you could tell Postgres to use your existing encoding
> -- judging from the \xe9, any of "windows-1252", "iso-8859-15" or
> "iso-8859-1" will be accurate. But I always prefer my data to be
> stored as "utf-8", and you should, too.
>
> Read up on character sets here:
> http://www.joelonsoftware.com/articles/Unicode.html
>
> Enjoy life,
> Adam


Thank you Adam. I was able to make this work by adding the ENCODING 'latin1'
option to the COPY command per Vick's suggestion, and as you correctly pointed
out as well. However iconv would probably do the trick too, now that I know
where the problem actually lies. I failed to realize that I was not dealing
with UTF8 because the MySQL data is encoded in UTF8, but you saw what I wasn't
seeing. Your suggested reading is also most appreciated. Maybe one of these
days I will actually make sense of this encoding issue. Thanks for the
link.

Regards,
Melvin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Adrian Klaver

On 02/27/2015 06:39 AM, Melvin Call wrote:

On 2/26/15, Vick Khera  wrote:

On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call 
wrote:


Montreal where the e is an accented e. The output ends up in the text
file
as
Montr\xe9al, where the xe9 is a single character. When I try to copy that
into
my PostgreSQL table, I get an error "ERROR:  invalid byte sequence for
encoding



Character code E9 is not UTF8. Don't tell Postgres you're importing UTF8 if
you're not.


Thank you Vick, adding the ENCODING 'latin1' option to the COPY command worked
perfectly.

If you don't mind a follow up to your reply, I have tried to understand the
different character sets and collations, but I guess I still have a lot to
learn. Your suggestion did not even come close to crossing my mind because the
MySQL table and database are encoded in UTF8. I assume the conversion to latin1
happened because I was putting the MySQL query output into a locally stored
text file? Regardless, can you point me to some reading that would have clued
me in that e9 is not a UTF8 character? Or is the clue the fact that it was not
preceded with 0x00?


For UTF8 characters see here:

http://www.utf8-chartable.de/


For the MySQL part, are you going to detail how you got the data out?



Regards,
Melvin





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Melvin Call
On Fri, Feb 27, 2015 at 8:58 AM, Adam Hooper  wrote:
> On Fri, Feb 27, 2015 at 9:39 AM, Melvin Call  wrote:
>> On 2/26/15, Vick Khera  wrote:
>
>>> Character code E9 is not UTF8. Don't tell Postgres you're importing UTF8 if
>>> you're not.
>>>
>> Thank you Vick, adding the ENCODING 'latin1' option to the COPY command worked
>> perfectly.
>>
>> Your suggestion did not even come close to crossing my mind because the
>> MySQL table and database are encoded in UTF8. I assume the conversion to 
>> latin1
>> happened because I was putting the MySQL query output into a locally stored
>> text file?
>
> The reality is confusing: when moving text around, every step of the
> process involves a character set conversion.
>
> I'm guessing your MySQL client is defaulting to character_set_client =
> 'iso-8859-1' or some-such. Depending on your client, that could be
> because your _terminal_ is set to iso-8859-1 encoding. (If you're on
> Unix, type `locale` and if you don't see lots of "UTF-8"s your
> terminal probably isn't using UTF-8.)

Ah, that makes perfect sense.

>
> But really, there are so many variables it's only an issue if you're
> trying to change the way the MySQL client is behaving. And I don't
> think this is the list for that.

Agreed. I didn't realize that the MySQL client would be the issue, but I think
you've hit it. This is in preparation for moving away from MySQL so no need to
pursue it much further. I just need to get the information out once, and y'all
have helped me get there. Thanks!

>
> MySQL's encoding logic is written up here:
> http://dev.mysql.com/doc/refman/5.7/en/charset-connection.html. (The
> way the data is stored is in a completely separate web page:
> http://dev.mysql.com/doc/refman/5.7/en/charset-applications.html ...
> which only has a tiny hint at the bottom of the page about UTF-8 in
> the MySQL command-line client.)
>
> Enjoy life,
> Adam
>
> --
> Adam Hooper
> +1-613-986-3339
> http://adamhooper.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Melvin Call
On Fri, Feb 27, 2015 at 9:03 AM, Adrian Klaver
 wrote:
> On 02/27/2015 06:39 AM, Melvin Call wrote:
>>
>> On 2/26/15, Vick Khera  wrote:
>>>
>>> On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call 
>>> wrote:
>>>
 Montreal where the e is an accented e. The output ends up in the text
 file
 as
 Montr\xe9al, where the xe9 is a single character. When I try to copy
 that
 into
 my PostgreSQL table, I get an error "ERROR:  invalid byte sequence for
 encoding

>>>
>>> Character code E9 is not UTF8. Don't tell Postgres you're importing UTF8
>>> if
>>> you're not.
>>>
>> Thank you Vick, adding the ENCODING 'latin1' option to the COPY command
>> worked
>> perfectly.
>>
>> If you don't mind a follow up to your reply, I have tried to understand
>> the
>> different character sets and collations, but I guess I still have a lot to
>> learn. Your suggestion did not even come close to crossing my mind because
>> the
>> MySQL table and database are encoded in UTF8. I assume the conversion to
>> latin1
>> happened because I was putting the MySQL query output into a locally
>> stored
>> text file? Regardless, can you point me to some reading that would have
>> clued
>> me in that e9 is not a UTF8 character? Or is the clue the fact that it was
>> not
>> preceded with 0x00?
>
>
> For UTF8 characters see here:
>
> http://www.utf8-chartable.de/

Thank you for the link. Bookmarked.

>
>
> For the MySQL part, are you going to detail how you got the data out?

This is in preparation for moving away from MySQL. I inherited this MySQL
database and it is in horrible shape, no referential integrity, no constraints
other than arbitrarily chosen VARCHAR lengths, no indexes, and inconsistent
entity and attribute naming. I have to pull the fields out through a query that
is being redirected to a local file, as opposed to a dump, because I'm having
to filter out a lot of useless rows (TestCity in a production system!?). I just
realized I could have put the usable data into similar tables and then used
mysqldump with the encoding specified, but no need now. I have my extraction
and import done, so this should be the last I need to touch the MySQL system.

>
>>
>> Regards,
>> Melvin
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: [GENERAL] Triggers on foreign Postgres 9.3 tables in Postgres 9.4

2015-02-27 Thread MattF
Thank you Michael! I will let the admin know then!



--
View this message in context: 
http://postgresql.nabble.com/Triggers-on-foreign-Postgres-9-3-tables-in-Postgres-9-4-tp5839559p5839749.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Adrian Klaver

On 02/27/2015 07:55 AM, Melvin Call wrote:
> Thank you Vick, adding the ENCODING 'latin1' option to the COPY command
> worked perfectly.
>
> [...]
>
> This is in preparation of moving away from MySQL. I inherited this MySQL
> database and it is in horrible shape, no referential integrity, no constraints
> other than arbitrarily chosen VARCHAR lengths, no indexes, and inconsistent
> entity and attribute naming. I have to pull the fields out through a query that
> is being redirected to a local file, as opposed to a dump, because I'm having
> to filter out a lot of useless rows (TestCity in a production system!?). I just
> realized I could have put the usable data into similar tables and then used
> mysqldump with the encoding specified, but no need now. I have my extraction
> and import done, so this should be the last I need to touch the MySQL system.

Gotcha, I recently did something similar.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Performance on DISABLE TRIGGER

2015-02-27 Thread Greg Sabino Mullane




[GENERAL] 'missing' data on replicate

2015-02-27 Thread Alan Nilsson
I have 4 machines in a cluster: 1 master & 2 replicates (hot standby mode) on 
OSX, 1 replicate (hot standby mode) on Linux (centos6.5).  All replicates were 
created with pg_basebackup (+ X option).

I have a table that returns no results when queried by the primary key on the 
replicate running on Linux.  The same query returns data on all other machines 
in the cluster.  The data is there; if I query on anything other than the PK, I 
get the results I expect.

a) Is this a corrupt index?  Something entirely different?
b) How can I rebuild the index on the slave?  Rebuilding on the master did not fix it.
c) How could it have gotten into this state?

I’ve never seen this before and I’m not really sure how to proceed.  I can take 
the Linux machine offline and rebuild the whole thing but I would rather know 
how this happened & fix it in place.  We are in the process of migrating 
everything to Linux and this box is a test bed.  Before I commit everything to 
a new Linux cluster I'd like to understand what I am seeing.

thanks
alan






Re: [GENERAL] 'missing' data on replicate

2015-02-27 Thread David G Johnston
Alan Nilsson wrote
> I have 4 machines in a cluster: 1 master & 2 replicates (hot standby mode)
> on OSX, 1 replicate (hot standby mode) on Linux (centos6.5).  All
> replicates were created with pg_basebackup (+ X option).
> 
> I have a table that returns no results when queried by the primary key on
> the replicate running on Linux.

I was under the impression that binary replication requires that the master
and replicas both have the same underlying operating system (or at least
quite similar).  I'm guessing the OSX and Centos6.5 are not similar enough.

https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial

David J.







Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Adam Hooper
On Fri, Feb 27, 2015 at 9:39 AM, Melvin Call  wrote:
> On 2/26/15, Vick Khera  wrote:
>> On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call 
>> wrote:
>>
>>> I get an error "ERROR:  invalid byte sequence for
>>> encoding "UTF8": 0xe9616c"
>>
>> Character code E9 is not UTF8. Don't tell Postgres you're importing UTF8 if
>> you're not.
>
> Regardless, can you point me to some reading that would have clued
> me in that e9 is not a UTF8 character? Or is the clue the fact that it was not
> preceeded with 0x00?

The error message Postgres gave you is a pretty darned good clue :).

But your question has an error, and it's worth expounding a bit. 0xe9
is not a UTF8 character because 0xe9 is not a _character_. 0xe9 is a
_byte_. Characters are not bytes. Characters can be _encoded_ into
bytes, and that's not the same thing.

UTF-8 encoding is a bit confusing: any byte in the range [0x00-0x7f]
represents the same character as in ASCII encoding (an extremely
popular encoding). Any byte in the range [0x80-0xff] is the "leading
byte" in a sequence of bytes that represents a single character.
"Continuation" bytes are in the range [0x80-0xbf]. (Why not the range
[0x00-0xff]? Because UTF-8 was designed to generate errors when fed
non-UTF8 byte sequences.) The first four bits of the leading byte
describe how many continuation bytes there are. If you care to read up
on the how and why of UTF-8 (a red herring in this discussion), see
http://en.wikipedia.org/wiki/UTF-8

Back to 0xe9. 0xe9 is '1110 1001' in binary. Postgres' UTF-8 decoder
sees that initial '1110' and determines that it needs to inspect three
bytes to read one character. The second byte is 0x61, which is not in
the range [0x80-0xbf], so Postgres reports an invalid byte sequence.
Hooray: it produces exactly the error message it should.
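The failure is easy to reproduce outside Postgres. A minimal Python sketch, using the byte sequence from the error message above:

```python
# The bytes from the error message: 0xe9 0x61 0x6c ("éal" in latin1).
raw = b"Montr\xe9al"

# As UTF-8, 0xe9 (binary 1110 1001) announces a 3-byte sequence, but the
# next byte 0x61 ('a') is not a valid continuation byte, so decoding
# fails -- the same root cause as Postgres' "invalid byte sequence" error.
try:
    raw.decode("utf-8")
    decoded_as_utf8 = True
except UnicodeDecodeError:
    decoded_as_utf8 = False

# As latin1 (ISO-8859-1), every byte maps to exactly one character,
# so the same bytes decode cleanly.
print(decoded_as_utf8)       # False
print(raw.decode("latin1"))  # Montréal
```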

You don't need to concern yourself with the complications of UTF-8.
You only need to know that bytes are not characters; if you don't know
the encoding of a sequence of bytes, you've made a logic error.
Postgres told you the error, though it didn't hint at how to fix it.
(Text editors and web browsers use heuristics to guess file encodings,
and they're usually right, though it's impossible to create a perfect
heuristic. See 
http://stackoverflow.com/questions/4198804/how-to-reliably-guess-the-encoding-between-macroman-cp1252-latin1-utf-8-and
for further discussion there.)

If you're looking for take-away lessons, the main one is: "read the
error message" :).

Next time you have the "wrong encoding" problem, you have two options:
1) figure out the encoding and tell Postgres; or 2) regenerate the
file in the correct encoding (UTF-8). The "why" is here:
http://www.joelonsoftware.com/articles/Unicode.html
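Option 2 can be a one-pass re-encode of the export before loading it. A sketch (the file names are placeholders):

```python
# Re-encode a latin1 (ISO-8859-1) text file to UTF-8, line by line, so a
# plain COPY can read it without an ENCODING override. File names in any
# calls to this function are hypothetical.
def recode_latin1_to_utf8(src_path, dst_path):
    with open(src_path, "r", encoding="latin1", newline="") as src, \
         open(dst_path, "w", encoding="utf-8", newline="") as dst:
        for line in src:
            dst.write(line)

# Roughly equivalent to the classic shell one-liner:
#   iconv -f latin1 -t utf-8 export.txt > export.utf8.txt
```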

We on this list jumped straight to option 1. We've memorized 0xe9 in
particular, because we've been through your pain before. In the
Americas and Western Europe, if a file contains the byte 0xe9 it
probably contains the character "é" encoded as
windows-1252/ISO-8859-1/ISO-8859-15. That's very common. MySQL in
particular is a ghastly Internet denizen, in that it defaults to
ISO-8859-15 in an apparent crusade against globalization and modern
standards.

Enjoy life,
Adam

-- 
Adam Hooper
+1-613-986-3339
http://adamhooper.com




Re: [GENERAL] 'missing' data on replicate

2015-02-27 Thread Adrian Klaver

On 02/27/2015 10:04 AM, David G Johnston wrote:
> Alan Nilsson wrote
>> I have 4 machines in a cluster: 1 master & 2 replicates (hot standby mode)
>> on OSX, 1 replicate (hot standby mode) on Linux (centos6.5).  All
>> replicates were created with pg_basebackup (+ X option).
>>
>> I have a table that returns no results when queried by the primary key on
>> the replicate running on Linux.
>
> I was under the impression that binary replication requires that the master
> and replicas both have the same underlying operating system (or at least
> quite similar).  I'm guessing the OSX and Centos6.5 are not similar enough.
>
> https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial

To add to the above, even if they are similar enough, I seem to remember
index corruption if the character sets are not the same. You might want to
check the system character sets on your machines.
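The kind of disagreement at stake can be sketched in a few lines of Python (a self-contained illustration only; real collation behavior comes from the OS's libc, which the hand-rolled sort key below merely approximates):

```python
import unicodedata

words = ["côte", "cpu"]

# Bytewise / code-point order (what a "C"-collation index effectively uses):
# 'ô' (U+00F4) sorts after 'p', so "cpu" comes first.
codepoint_order = sorted(words)

# A crude linguistic order: compare accent-stripped base letters first,
# the way many libc collations interleave accented variants.
def base(s):
    return unicodedata.normalize("NFD", s).encode("ascii", "ignore").decode()

linguistic_order = sorted(words, key=lambda s: (base(s), s))

print(codepoint_order)   # ['cpu', 'côte']
print(linguistic_order)  # ['côte', 'cpu']
```

An index built under one ordering and searched by a server assuming the other can step past the rows it is looking for, which matches the symptom Alan describes.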









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 'missing' data on replicate

2015-02-27 Thread Alan Nilsson
Thank you for that, makes sense.  Another fine example of RTFM!

alan

> On Feb 27, 2015, at 10:04 AM, David G Johnston  
> wrote:
> 
> Alan Nilsson wrote
>> I have 4 machines in a cluster: 1 master & 2 replicates (hot standby mode)
>> on OSX, 1 replicate (hot standby mode) on Linux (centos6.5).  All
>> replicates were created with pg_basebackup (+ X option).
>> 
>> I have a table that returns no results when queried by the primary key on
>> the replicate running on Linux.
> 
> I was under the impression that binary replication requires that the master
> and replicas both have the same underlying operating system (or at least
> quite similar).  I'm guessing the OSX and Centos6.5 are not similar enough.
> 
> https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
> 
> David J.
> 
> 
> 





Re: [GENERAL] Hex characters in COPY input

2015-02-27 Thread Melvin Call
On Fri, Feb 27, 2015 at 12:02 PM, Adam Hooper  wrote:
> On Fri, Feb 27, 2015 at 9:39 AM, Melvin Call  wrote:
>> On 2/26/15, Vick Khera  wrote:
>>> On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call 
>>> wrote:
>>>
 I get an error "ERROR:  invalid byte sequence for
 encoding "UTF8": 0xe9616c"
>>>
>>> Character code E9 is not UTF8. Don't tell Postgres you're importing UTF8 if
>>> you're not.
>>
>> Regardless, can you point me to some reading that would have clued
>> me in that e9 is not a UTF8 character? Or is the clue the fact that it was 
>> not
>> preceded with 0x00?
>
> The error message Postgres gave you is a pretty darned good clue :).
>
> But your question has an error, and it's worth expounding a bit. 0xe9
> is not a UTF8 character because 0xe9 is not a _character_. 0xe9 is a
> _byte_. Characters are not bytes. Characters can be _encoded_ into
> bytes, and that's not the same thing.
>
> UTF-8 encoding is a bit confusing: any byte in the range [0x00-0x7f]
> represents the same character as in ASCII encoding (an extremely
> popular encoding). Any byte in the range [0x80-0xff] is the "leading
> byte" in a sequence of bytes that represents a single character.
> "Continuation" bytes are in the range [0x80-0xbf]. (Why not the range
> [0x00-0xff]? Because UTF-8 was designed to generate errors when fed
> non-UTF8 byte sequences.) The first four bits of the leading byte
> describe how many continuation bytes there are. If you care to read up
> on the how and why of UTF-8 (a red herring in this discussion), try:
> See http://en.wikipedia.org/wiki/UTF-8
>
> Back to 0xe9. 0xe9 is '1110 1001' in binary. Postgres' UTF-8 decoder
> sees that initial '1110' and determines that it needs to inspect three
> bytes to read one character. The second byte is 0x61, which is not in
> the range [0x80-0xbf], so Postgres reports an invalid byte sequence.
> Hooray: it produces exactly the error message it should.
>
> You don't need to concern yourself with the complications of UTF-8.
> You only need to know that bytes are not characters; if you don't know
> the encoding of a sequence of bytes, you've made a logic error.
> Postgres told you the error, though it didn't hint at how to fix it.
> (Text editors and web browsers use heuristics to guess file encodings,
> and they're usually right, though it's impossible to create a perfect
> heuristic. See 
> http://stackoverflow.com/questions/4198804/how-to-reliably-guess-the-encoding-between-macroman-cp1252-latin1-utf-8-and
> for further discussion there.)
>
> If you're looking for take-away lessons, the main one is: "read the
> error message" :).
>
> Next time you have the "wrong encoding" problem, you have two options:
> 1) figure out the encoding and tell Postgres; or 2) regenerate the
> file in the correct encoding (UTF-8). The "why" is here:
> http://www.joelonsoftware.com/articles/Unicode.html

Timing is everything: I was just starting to read this in fact.

Thank you for taking the time to provide this information. It is proving to be
most useful. And yes, I did misspeak. One thing I have learned from all of
this is that more than one byte is required to represent the majority of
characters out there. So yeah, I did know that 0xe9 was a byte, but I
appreciate you re-iterating that for me. Hoping to represent each character
with a single byte is an artifact of most early computer development being
dominated by English-speaking persons way back when, eh?

I must say, at least as far as the MySQL outputting goes, I like option 1. It
was easy enough to find and fix that way once I knew what to look for. The
MySQL documentation is sometimes difficult to wade through. But no
documentation is ever perfect, is it?

One of the things that has endeared me to PostgreSQL is the fact that it seems
to strive for global usefulness while remaining standards compliant. Or are
those one and the same?

Regards

>
> We on this list jumped straight to option 1. We've memorized 0xe9 in
> particular, because we've been through your pain before. In the
> Americas and Western Europe, if a file contains the byte 0xe9 it
> probably contains the character "é" encoded as
> windows-1252/ISO-8859-1/ISO-8859-15. That's very common. MySQL in
> particular is a ghastly Internet denizen, in that it defaults to
> ISO-8859-15 in an apparent crusade against globalization and modern
> standards.
>
> Enjoy life,
> Adam
>
> --
> Adam Hooper
> +1-613-986-3339
> http://adamhooper.com




Re: [GENERAL] 'missing' data on replicate

2015-02-27 Thread Adrian Klaver

On 02/27/2015 10:14 AM, Alan Nilsson wrote:
> Thank you for that, makes sense.  Another fine example of RTFM!

To go with my previous post, I knew this sounded familiar:

http://www.postgresql.org/message-id/cadyruxojwgkh4ghxw2heqetfapiwzayj9uwg34bhyifxlrm...@mail.gmail.com

in particular this response:

http://www.postgresql.org/message-id/15503.1408548...@sss.pgh.pa.us









--
Adrian Klaver
adrian.kla...@aklaver.com

