[GENERAL] Re: ERROR: missing chunk number 0 for toast value while using logical decoder.

2016-07-18 Thread sudalai
@Michael,
Thanks for your help. It is working now.
But now I'm getting a deadlock when I run peek_changes or get_changes on the
live system.

db=#select count(*) from pg_logical_slot_peek_changes('slot',NULL,100); 
/* pid : 25833 */
ERROR:  deadlock detected
DETAIL:  Process 25833 waits for ShareLock on relation 248486 of database
16385; blocked by process 18904.
Process 18904 waits for RowExclusiveLock on relation 257419 of database
16385; blocked by process 25833.
HINT:  See server log for query details.
CONTEXT:  slot "slot", output plugin "rep_slot", in the change callback,
associated LSN CD/EF693310

 Please help me, I am using PostgreSQL 9.5.0.
 I don't know much about Postgres internals. Why is peek_changes trying
to take a share lock on 248486 (the primary key of one table)?
 Is it the Postgres logical decoder that takes the share lock, or the
output plugin?
 Is there any way I can avoid it? Please help me.
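[Editor's note: one way to see who holds what while the decoder call is blocked is to query pg_locks from another session. A diagnostic sketch; the relation OIDs come from the deadlock DETAIL above:]

```sql
-- Run in a second session while pg_logical_slot_peek_changes() is blocked.
-- Lists the backends holding or waiting for locks on the two relations
-- named in the deadlock message (248486 and 257419).
SELECT l.pid, l.relation::regclass AS relation, l.mode, l.granted
FROM pg_locks l
WHERE l.relation IN (248486, 257419)
ORDER BY l.relation, l.granted DESC;
```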
  
-Sudalai




-
sudalai
--
View this message in context: 
http://postgresql.nabble.com/ERROR-missing-chunk-number-0-for-toast-value-while-using-logical-decoder-tp5909194p5912389.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange array_to_table / unnest result

2016-07-18 Thread Johann Spies
> Unnest is evaluated before LIMIT.
> You need to use a derived table:
>
> postgres=# select unnest(a) from (select a from x limit 1) si
>

Thank you!

Regards
Johann
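[Editor's note: a minimal sketch of the behavior discussed above; table and column names are illustrative. Without the derived table, LIMIT applies to the rows produced by unnest, not to the rows of the table:]

```sql
CREATE TABLE x (a int[]);
INSERT INTO x VALUES (ARRAY[1,2,3]), (ARRAY[4,5,6]);

-- LIMIT caps the unnested output, so only a single element comes back:
SELECT unnest(a) FROM x LIMIT 1;

-- Limit the table first in a derived table, then unnest:
-- this returns every element of one row's array.
SELECT unnest(a) FROM (SELECT a FROM x LIMIT 1) s;
```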


Re: [GENERAL] Re: ERROR: missing chunk number 0 for toast value while using logical decoder.

2016-07-18 Thread Michael Paquier
On Mon, Jul 18, 2016 at 4:31 PM, sudalai  wrote:
>  Please help me, I am using PostgreSQL 9.5.0.
>  I don't know much about Postgres internals. Why is peek_changes trying
> to take a share lock on 248486 (the primary key of one table)?
>  Is it the Postgres logical decoder that takes the share lock, or the
> output plugin?
>  Is there any way I can avoid it? Please help me.

There is no way to say, as that's your plugin code. I think you should
look at your code and grep for ShareLock to find out why it is taken.
My guess is that a lower lock level could be used; ShareLock is a strong
lock, taken for example when you want to be sure that there are no
concurrent data changes.
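[Editor's note: the point about lock strength can be checked directly. SHARE mode conflicts with ROW EXCLUSIVE, the mode taken by INSERT/UPDATE/DELETE, so any writer blocks against it. A minimal two-session sketch; the table name is illustrative:]

```sql
-- Session 1:
BEGIN;
LOCK TABLE t IN SHARE MODE;   -- the same level as ShareLock

-- Session 2 (blocks until session 1 commits):
INSERT INTO t VALUES (1);     -- needs ROW EXCLUSIVE, which conflicts

-- Session 1:
COMMIT;                       -- session 2's INSERT now proceeds
```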
-- 
Michael




[GENERAL] Re: ERROR: missing chunk number 0 for toast value while using logical decoder.

2016-07-18 Thread sudalai
Thank you.
Why does (select * from pg_logical_slot_peek_changes('slot',NULL,1)) take a
share lock?
Any idea?

-Sudalai



-
sudalai




Re: [GENERAL] Re: ERROR: missing chunk number 0 for toast value while using logical decoder.

2016-07-18 Thread Michael Paquier
On Mon, Jul 18, 2016 at 7:08 PM, sudalai  wrote:
> Thank you.
> Why does (select * from pg_logical_slot_peek_changes('slot',NULL,1)) take a
> share lock?
> Any idea?

My guess is that this lock comes from your plugin code. Please double-check it.
-- 
Michael




[GENERAL] Re: ERROR: missing chunk number 0 for toast value while using logical decoder.

2016-07-18 Thread sudalai
Got it. Thank you very much. :)
-Sudalai



-
sudalai




[GENERAL] Congratulations (EDB) on U.S. Security Technical Implementation Guide.

2016-07-18 Thread John McKown
May be of interest to some here. EDB Postgres Advanced Server is a
commercial enhancement of PostgreSQL.

http://www.enterprisedb.com/news-events/press-releases/us-government-publishes-edb-postgres-advanced-server-security-technical-i

Release Date:
Jul 14, 2016

*EDB Postgres Advanced Server STIG becomes first open source-based database
STIG to be published by the Department of Defense*

*Government-validated security guide increases speed and security for U.S.
agencies’ deployment of EDB Postgres*

​I think that calling EDB an "open source-based database" may be
technically correct, but perhaps a bit misleading. It is a commercial
database derived from an open source database. And they do contribute back
to the PostgreSQL community. I'm just being picky.

The actual site containing the paper, and others, is here:
http://iase.disa.mil/stigs/app-security/database/Pages/index.aspx
​


-- 
"Worry was nothing more than paying interest on a loan that a man may never
borrow"

From: "Quest for the White Wind" by Alan Black

Maranatha! <><
John McKown


Re: [GENERAL] MongoDB 3.2 beating Postgres 9.5.1?

2016-07-18 Thread Kisung Kim
Hi,
I recently ran the YCSB benchmark too.
But contrary to my expectation, PG (9.5) was slower than MongoDB 3.2.
Paul said that creating the table with the no-logging (UNLOGGED) option
improved the performance, and that this might match MongoDB's behavior.
But the MongoDB documentation says it writes a journal too,
so I think turning off WAL logging is not a fair comparison.
Am I wrong about MongoDB's behavior?
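[Editor's note: for reference, the UNLOGGED option discussed in this thread skips WAL for the table at the cost of crash safety, and since PostgreSQL 9.5 an existing table can be switched either way with ALTER TABLE. A sketch; the table definition is illustrative of a YCSB-style schema:]

```sql
-- No WAL is written for an unlogged table, so it is truncated
-- after a crash and is not replicated to standbys.
CREATE UNLOGGED TABLE usertable (ycsb_key varchar PRIMARY KEY, field0 text);

-- PostgreSQL 9.5+ can convert in place
-- (SET LOGGED rewrites the table and WAL-logs the rewrite):
ALTER TABLE usertable SET LOGGED;
ALTER TABLE usertable SET UNLOGGED;
```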





(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : ks...@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


2016-03-19 5:05 GMT+09:00 :

>
>
> On Tuesday, March 15, 2016 7:39 PM, "p...@cmicdo.com" 
> wrote:
>
>  > Your results are close enough to mine, I think, to prove the point.
>  > And, I agree that the EDB benchmark is not necessary reflective of a
>  > real-world scenario.
>  >
>  > However, the cache I'm referring to is PG's shared_buffer cache.
>  > You can see the first run of the select causing a lot of disk reads.
>  > The second identical run, reads purely from shared_buffers.
>  >
>  > What I don't understand is, why does a slightly different select from
>  > the *same* table during the same session cause shared_buffers to be
>  > blown out and re-read??
>  >
>  > I will see if I can try YCSB next week (I'm in workshops all week...)
>  >
>  > Thanks!
>
> I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
> was running 4 times slower than Mongo.  Then I remembered about unlogged
> tables (which I think is the way Mongo is all the time.), and remade
> the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
> PG ran in 0.62 of the time of Mongo.
>
> PG Load:
> 
> [OVERALL], RunTime(ms), 104507.0
> [OVERALL], Throughput(ops/sec), 9568.737022400413
> [CLEANUP], Operations, 1.0
> [CLEANUP], AverageLatency(us), 293.0
> [CLEANUP], MinLatency(us), 293.0
> [CLEANUP], MaxLatency(us), 293.0
> [CLEANUP], 95thPercentileLatency(us), 293.0
> [CLEANUP], 99thPercentileLatency(us), 293.0
> [INSERT], Operations, 100.0
> [INSERT], AverageLatency(us), 101.329235
> [INSERT], MinLatency(us), 88.0
> [INSERT], MaxLatency(us), 252543.0
> [INSERT], 95thPercentileLatency(us), 121.0
> [INSERT], 99thPercentileLatency(us), 141.0
> [INSERT], Return=OK, 100
>
> PG Run:
> ---
> [OVERALL], RunTime(ms), 92763.0
> [OVERALL], Throughput(ops/sec), 10780.16019318047
> [READ], Operations, 499922.0
> [READ], AverageLatency(us), 79.1722428698877
> [READ], MinLatency(us), 69.0
> [READ], MaxLatency(us), 19935.0
> [READ], 95thPercentileLatency(us), 94.0
> [READ], 99thPercentileLatency(us), 112.0
> [READ], Return=OK, 499922
> [CLEANUP], Operations, 1.0
> [CLEANUP], AverageLatency(us), 222.0
> [CLEANUP], MinLatency(us), 222.0
> [CLEANUP], MaxLatency(us), 222.0
> [CLEANUP], 95thPercentileLatency(us), 222.0
> [CLEANUP], 99thPercentileLatency(us), 222.0
> [UPDATE], Operations, 500078.0
> [UPDATE], AverageLatency(us), 98.96430156895525
> [UPDATE], MinLatency(us), 83.0
> [UPDATE], MaxLatency(us), 26655.0
> [UPDATE], 95thPercentileLatency(us), 127.0
> [UPDATE], 99thPercentileLatency(us), 158.0
> [UPDATE], Return=OK, 500078
>
> Mongo Load:
> ---
> [OVERALL], RunTime(ms), 133308.0
> [OVERALL], Throughput(ops/sec), 7501.425270801452
> [CLEANUP], Operations, 1.0
> [CLEANUP], AverageLatency(us), 1822.0
> [CLEANUP], MinLatency(us), 1822.0
> [CLEANUP], MaxLatency(us), 1822.0
> [CLEANUP], 95thPercentileLatency(us), 1822.0
> [CLEANUP], 99thPercentileLatency(us), 1822.0
> [INSERT], Operations, 100.0
> [INSERT], AverageLatency(us), 130.830678
> [INSERT], MinLatency(us), 90.0
> [INSERT], MaxLatency(us), 7147519.0
> [INSERT], 95thPercentileLatency(us), 159.0
> [INSERT], 99thPercentileLatency(us), 226.0
> [INSERT], Return=OK, 100
>
> Mongo Run:
> -
> [OVERALL], RunTime(ms), 149150.0
> [OVERALL], Throughput(ops/sec), 6704.65973851827
> [READ], Operations, 500837.0
> [READ], AverageLatency(us), 98.13153980237084
> [READ], MinLatency(us), 69.0
> [READ], MaxLatency(us), 28271.0
> [READ], 95thPercentileLatency(us), 166.0
> [READ], 99thPercentileLatency(us), 186.0
> [READ], Return=OK, 500837
> [CLEANUP], Operations, 1.0
> [CLEANUP], AverageLatency(us), 2387.0
> [CLEANUP], MinLatency(us), 2386.0
> [CLEANUP], MaxLatency(us), 2387.0
> [CLEANUP], 95thPercentileLatency(us), 2387.0
> [CLEANUP], 99thPercentileLatency(us), 2387.0
> [UPDATE], Operations, 499163.0
> [UPDATE], AverageLatency(us), 195.21505600375028
> [UPDATE], MinLatency(us), 118.0
> [UPDATE], MaxLatency(us), 4513791.0
> [UPDATE], 95thPercentileLatency(us), 211.0
> [UPDATE], 99thPercentileLatency(us), 252.0
> [UPDATE], Return=OK, 499163
>
>
>  >
>  >
>  > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthali...@gmail.com>
> wrote:
>  >
>  >
>  > Hi, Paul
>  >
>  > I agree with Oleg, EDB benchmarks are strange sometimes. I did the same
> benchmarks several months ago. I never noticed the cache influence back
> then, so I tried to reproduce your situation now (on a 5*10^6 records

Re: [GENERAL] MongoDB 3.2 beating Postgres 9.5.1?

2016-07-18 Thread Sameer Kumar
On Fri, 11 Mar 2016, 9:39 p.m. Paul Jones,  wrote:

> I have been running the EDB benchmark that compares Postgres and MongoDB.
> I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
> against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
> JSON records generated by the benchmark.  It looks like Mongo is winning,
> and apparently because of its cache management.
>
> The first queries on both run in ~30 min.  And, once PG fills its cache,
> it whips Mongo on repeats of the *same* query (vmstat shows no disk
> reads for PG).
>
> However, when different query on the same table is issued to both,
> vmstat shows that PG has to read the *entire* table again, and it takes
> ~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
> it stops reading and completes the query, most likely because it is
> using its cache very effectively.
>
> Host:   Virtual Machine
> 4 CPUs
> 16 Gb RAM
> 200 Gb Disk
> RHEL 6.6
>
> PG: 9.5.1 compiled from source
> shared_buffers = 7GB
> effective_cache_size = 12GB
>
> Mongo:  3.2 installed with RPM from Mongo
>
> In PG, I created the table by:
>
> CREATE TABLE json_tables
> (
> data    JSONB
> );
>
> After loading, it creates the index:
>
> CREATE INDEX json_tables_idx ON json_tables USING GIN (data
> jsonb_path_ops);
>

This would create a GIN index, which is going to be somewhat larger than
a usual btree index on a specific JSON field, and slower too. I suggest
that you create an index on a specific expression using the JSON
operators. In my opinion such an index would be much closer to MongoDB's
indexes.
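[Editor's note: a sketch of the suggestion above, using the table from the quoted benchmark. A btree index on one extracted field is matched by an equality test on the same expression, rather than by the @> containment operator:]

```sql
-- Btree index on a single extracted JSON field:
CREATE INDEX json_tables_name_idx ON json_tables ((data->>'name'));

-- The query must use the same expression for the index to be usable:
SELECT data FROM json_tables WHERE data->>'name' = 'AC3 Case Red';
```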



> After a lot of experimentation, I discovered that the benchmark was not
> using PG's index, so I modified the four queries to be:
>
> SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic
> Plan"}';
> SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
> SELECT data FROM json_tables WHERE data @> '{"type": "service"}';
>
> Here are two consecutive explain analyze for PG, for the same query.
> No functional difference in the plans that I can tell, but the effect
> of PG's cache on the second is dramatic.
>
> If anyone has ideas on how I can get PG to more effectively use the cache
> for subsequent queries, I would love to hear them.
>
> ---
>
> benchmark=# explain analyze select data from json_tables where data @>
> '{"name": "AC3 Case Red"}';
>
>QUERY PLAN
>
>
>
> -
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1
> width=1261)
> (actual time=2157.118..1259550.327 rows=909091 loops=1)
>Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>Rows Removed by Index Recheck: 4360296
>Heap Blocks: exact=37031 lossy=872059
>->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=1
> width =0) (actual time=2141.250..2141.250 rows=909091 loops=1)
>  Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 291.932 ms
> Execution time: 1259886.920 ms
> (8 rows)
>
> Time: 1261191.844 ms
>
> benchmark=# explain analyze select data from json_tables where data @>
> '{"name": "AC3 Case Red"}';
>   QUERY PLAN
>
>
> ---
>
> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1
> width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
>Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>Rows Removed by Index Recheck: 4360296
>Heap Blocks: exact=37031 lossy=872059
>->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00 rows=1
> width =0) (actual time=769.081..769.081 rows=909091 loops=1)
>  Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
> Planning time: 33.967 ms
> Execution time: 29869.381 ms
>
> (8 rows)
>
> Time: 29987.122 ms
>
>
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
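[Editor's note: two things worth trying for the cache behavior described in the quoted message, offered as suggestions rather than verified fixes. pg_prewarm (a contrib module, 9.4+) can load a relation into shared_buffers up front, and the large "lossy" heap-block count in the plans (lossy=872059) suggests work_mem is too small for the bitmap scan to stay exact:]

```sql
-- Load the table and its index into shared_buffers ahead of the query:
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('json_tables');
SELECT pg_prewarm('json_tables_idx');

-- A larger work_mem lets the bitmap hold exact tuple pointers instead
-- of degrading to lossy per-page entries, cutting recheck work:
SET work_mem = '256MB';
```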


Re: [GENERAL] MongoDB 3.2 beating Postgres 9.5.1?

2016-07-18 Thread Kisung Kim
Of course, I did not create a GIN index.
Maybe the problem is related to checkpoints and WAL.
I don't know how to make the comparison with MongoDB fair enough.
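[Editor's note: one way to quantify the checkpoint/WAL suspicion is to record the WAL insert position before and after a benchmark run and take the difference. The function names below are the 9.5-era ones; they were renamed (pg_current_wal_insert_lsn, pg_wal_lsn_diff) in PostgreSQL 10. This uses psql's \gset:]

```sql
-- Before the run, remember the current WAL insert position:
SELECT pg_current_xlog_insert_location() AS start_lsn \gset

-- ... run the benchmark ...

-- After the run: bytes of WAL generated during the interval.
SELECT pg_xlog_location_diff(pg_current_xlog_insert_location(),
                             :'start_lsn') AS wal_bytes;
```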





(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : ks...@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


2016-07-19 11:23 GMT+09:00 Sameer Kumar :

>
>
> On Fri, 11 Mar 2016, 9:39 p.m. Paul Jones,  wrote:
>
>> I have been running the EDB benchmark that compares Postgres and MongoDB.
>> I believe EDB ran it against PG 9.4 and Mongo 2.6.  I am running it
>> against PG 9.5.1 and Mongo 3.2 with WiredTiger storage using 10,000,000
>> JSON records generated by the benchmark.  It looks like Mongo is winning,
>> and apparently because of its cache management.
>>
>> The first queries on both run in ~30 min.  And, once PG fills its cache,
>> it whips Mongo on repeats of the *same* query (vmstat shows no disk
>> reads for PG).
>>
>> However, when different query on the same table is issued to both,
>> vmstat shows that PG has to read the *entire* table again, and it takes
>> ~30 min.  Mongo does a lot of reads initially but after about 5 minutes,
>> it stops reading and completes the query, most likely because it is
>> using its cache very effectively.
>>
>> Host:   Virtual Machine
>> 4 CPUs
>> 16 Gb RAM
>> 200 Gb Disk
>> RHEL 6.6
>>
>> PG: 9.5.1 compiled from source
>> shared_buffers = 7GB
>> effective_cache_size = 12GB
>>
>> Mongo:  3.2 installed with RPM from Mongo
>>
>> In PG, I created the table by:
>>
>> CREATE TABLE json_tables
>> (
>> data    JSONB
>> );
>>
>> After loading, it creates the index:
>>
>> CREATE INDEX json_tables_idx ON json_tables USING GIN (data
>> jsonb_path_ops);
>>
>
> This would create a GIN index, which is going to be somewhat larger than
> a usual btree index on a specific JSON field, and slower too. I suggest
> that you create an index on a specific expression using the JSON
> operators. In my opinion such an index would be much closer to MongoDB's
> indexes.
>
>
>
>> After a lot of experimentation, I discovered that the benchmark was not
>> using PG's index, so I modified the four queries to be:
>>
>> SELECT data FROM json_tables WHERE data @> '{"brand": "ACME"}';
>> SELECT data FROM json_tables WHERE data @> '{"name": "Phone Service Basic
>> Plan"}';
>> SELECT data FROM json_tables WHERE data @> '{"name": "AC3 Case Red"}';
>> SELECT data FROM json_tables WHERE data @> '{"type": "service"}';
>>
>> Here are two consecutive explain analyze for PG, for the same query.
>> No functional difference in the plans that I can tell, but the effect
>> of PG's cache on the second is dramatic.
>>
>> If anyone has ideas on how I can get PG to more effectively use the cache
>> for subsequent queries, I would love to hear them.
>>
>> ---
>>
>> benchmark=# explain analyze select data from json_tables where data @>
>> '{"name": "AC3 Case Red"}';
>>
>>QUERY PLAN
>>
>>
>>
>> -
>>
>> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1
>> width=1261)
>> (actual time=2157.118..1259550.327 rows=909091 loops=1)
>>Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>>Rows Removed by Index Recheck: 4360296
>>Heap Blocks: exact=37031 lossy=872059
>>->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00
>> rows=1 width =0) (actual time=2141.250..2141.250 rows=909091 loops=1)
>>  Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>> Planning time: 291.932 ms
>> Execution time: 1259886.920 ms
>> (8 rows)
>>
>> Time: 1261191.844 ms
>>
>> benchmark=# explain analyze select data from json_tables where data @>
>> '{"name": "AC3 Case Red"}';
>>   QUERY PLAN
>>
>>
>> ---
>>
>> Bitmap Heap Scan on json_tables  (cost=113.50..37914.64 rows=1
>> width=1261) (actual time=779.261..29815.262 rows=909091 loops=1)
>>Recheck Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>>Rows Removed by Index Recheck: 4360296
>>Heap Blocks: exact=37031 lossy=872059
>>->  Bitmap Index Scan on json_tables_idx  (cost=0.00..111.00
>> rows=1 width =0) (actual time=769.081..769.081 rows=909091 loops=1)
>>  Index Cond: (data @> '{"name": "AC3 Case Red"}'::jsonb)
>> Planning time: 33.967 ms
>> Execution time: 29869.381 ms
>>
>> (8 rows)
>>
>> Time: 29987.122 ms
>>
>>
>>
> --
> --
> Best Regards
> Sameer Kumar 

[GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-18 Thread Tatsuo Ishii
I found the following comment about using PostgreSQL with MediaWiki:

https://www.mediawiki.org/wiki/Compatibility#Database

"Anything other than MySQL or MariaDB is not recommended for
production use at this point."

This is a sad and disappointing statement for us. Should we help the
MediaWiki community to improve this?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [GENERAL] MediaWiki + PostgreSQL is not ready for production?

2016-07-18 Thread John R Pierce

On 7/18/2016 9:14 PM, Tatsuo Ishii wrote:

I found following comment for using PostgreSQL with MediaWiki:

https://www.mediawiki.org/wiki/Compatibility#Database

"Anything other than MySQL or MariaDB is not recommended for
production use at this point."

This is a sad and disappointed statement for us. Should we help
MediaWiki community to enhance this?


The classic problem with any of these sorts of open source projects is
that while you can convert the core system to Postgres, there's a huge
community of contributed plugins, and many of those authors have zero
interest in anything but their default database, MySQL/MariaDB. I ran
into this with Drupal, Wordpress, and a couple of different forum projects.
Drupal even tried to offer a database API so plugin developers wouldn't
touch SQL directly, but too many ignored it.




--
john r pierce, recycling bits in santa cruz


