Re: [GENERAL] [pgadmin-support] Database has been killed after increasing memory

2013-12-23 Thread Dinesh Kumar
Hi,

+Adding the postgresql general list. Hope someone will provide more
information.

On Mon, Dec 23, 2013 at 1:56 PM, kolsze...@gmail.com wrote:

> Hi
>
> I have a problem with "postmaster" processes which unexpectedly consume a very
> large amount of memory, about 200MB(!), and as a consequence exhaust the whole
> available memory in Linux, and as a consequence Linux kills the postmaster processes :(
>
> my configuration:
> 8 Core
> 8 GB RAM
> max_connections = 60
> work_mem = 32MB
> shared_buffers = 2G
>
> After an investigation, I suspect that some of my queries, which contain
> many INNER JOINs (about 30, and one of them about 60), probably cause the
> memory problem.
>
> What are the postgres limitations on JOINs in one query?
> How can I estimate the memory needed by postgres?
>

Using EXPLAIN (ANALYZE, BUFFERS).
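
As a rough upper bound (a back-of-the-envelope sketch, not from the original
thread): each sort or hash node in a plan may use up to work_mem on its own,
so with the settings quoted above

  ~30 sort/hash nodes x 32MB work_mem ~= 960MB in a single backend
  60 backends at that worst case      >> 8GB of RAM

which is consistent with the OOM kills described.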

> How can I protect my postgres server when I don't know exactly how
> complicated the queries my customers create are, e.g. in reports?
>

Using the pgBadger/pgFouine tools, you will get a detailed statistical report.


Regards,
Dinesh



Re: [GENERAL] pg_upgrade & tablespaces

2013-12-23 Thread Joseph Kregloh
>
> And /usr/local/pgsql was re-initdbed  with a 9.0 cluster, because
> previously it was the 9.3 cluster?
>
> And you are sure /usr/local/bin has the 9.3 binaries?
>
> Personally I would say at this point the relationships between versions
> are so confused it would seem best to start from scratch.
>
> My suggestions:
>
> 1) Create a new jail with a copy of the test 9.0 cluster located in
> /usr/local/pgsql.
>
> 2) In that jail install a new 9.3 cluster using the --prefix= switch to
> configure to have it install in a different location in the jail.
>
> 3) Use pg_upgrade.
>
>
>
Before every test I do a ZFS rollback, which resets all data on the disk
back to the last snapshot, so essentially every run is dealing with a
fresh install. However, I will try the suggestions you had in your next
email (I can see into the future) and report back.

-Thanks


Re: [GENERAL] pg_upgrade & tablespaces

2013-12-23 Thread Joseph Kregloh
On Fri, Dec 20, 2013 at 7:42 PM, John R Pierce  wrote:

> On 12/20/2013 4:14 PM, Adrian Klaver wrote:
>
>>
>> Personally I would say at this point the relationships between versions
>> are so confused it would seem best to start from scratch.
>>
>
> the 80 tablespaces aren't helping this one bit.
>
>
> I am really curious what led to creating that many tablespaces? It reminds
> me of 1990s Oracle databases, where disks were small so you used lots of
> them, and spread your tables and indexes across many different
> drives/mirrors because the RAID of the time had performance bottlenecks.
>
>
>
Well, the original architect started out in the 80s with banking databases,
and they just kept that model without revisiting whether it works well or
not; that might explain it a little bit. But also, given the size of our
tables, we use the physical disks and filesystem advantages to improve the
speed and performance of the application, though not as often as I would
like. We have a pretty big database.

-Joseph


[GENERAL] FATAL: index contains unexpected zero page at block

2013-12-23 Thread markus . holmberg
After the restart of a PostgreSQL 9.3.1 hot standby doing streaming 
replication, the database would not come up again and the logs show 
"index contains unexpected zero page at block 0" errors as shown below:


  LOG:  entering standby mode
  LOG:  redo starts at 6/55A16990
  LOG:  consistent recovery state reached at 6/56D5FFF0
  LOG:  database system is ready to accept read only connections
  LOG:  invalid record length at 6/56D5FFF0
  LOG:  started streaming WAL from primary at 6/5600 on timeline 1

  state=XX002,user=repmgr,db=repmgr FATAL:  index 
"pg_amproc_fam_proc_index" contains unexpected zero page at block 0

  state=XX002,user=repmgr,db=repmgr HINT:  Please REINDEX it.

  state=XX002,user=postgres,db=foo ERROR:  index "foo_pkey" contains 
unexpected zero page at block 0

  state=XX002,user=postgres,db=foo HINT:  Please REINDEX it.

  WARNING:  page 1 of relation base/37706/11821 is uninitialized
  CONTEXT:  xlog redo vacuum: rel 1663/37706/11821; blk 2, 
lastBlockVacuumed 0

  PANIC:  WAL contains references to invalid pages
  CONTEXT:  xlog redo vacuum: rel 1663/37706/11821; blk 2, 
lastBlockVacuumed 0


What could cause "index contains unexpected zero page at block 0" 
errors as shown above on a hot standby?


As this happened only on a standby, there is no need to recover any 
data.  Instead, the point is to understand what could cause this and 
prevent it from happening again (on a master, at least).  Any hints 
on how to investigate this?  Could the source of the error be on the 
master side (shipping invalid WAL)?  Or might it be an issue outside 
PostgreSQL's control on the standby (such as the filesystem)?


Also, should one be concerned about log messages such as "invalid 
record length at 6/56D5FFF0", as shown at the beginning of the log 
snippet above?  Searching the mailing list archives suggests that such 
log messages might just indicate that PostgreSQL reached the end of the 
local set of WAL files and will start streaming from the master (see for 
example ), but I couldn't find confirmation of this elsewhere.





Re: [GENERAL] [pgadmin-support] Database has been killed after increasing memory

2013-12-23 Thread Dinesh Kumar
+Adding postgres general


Hi,


On Mon, Dec 23, 2013 at 9:34 PM, Krzysztof Olszewski wrote:

> Hi
>
> thanks for your answer
>
> "Using EXPLAIN ANALYZE BUFFERS"
> how does it help me with "How can I estimate memory needed for postgres"?
>

Using this information, I believe we can estimate the memory usage of a
query; see the documentation for EXPLAIN's ANALYZE and BUFFERS options.
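
For example (a sketch; the tables and the filter are hypothetical):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT o.id, c.name
  FROM orders o
  INNER JOIN customers c ON c.id = o.customer_id
  WHERE o.created_at > now() - interval '1 day';

Sorts reported as "external merge  Disk: ..." and large temp buffer
read/written counts in the output are signs that a node overflowed work_mem.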


> "Using pgbadger/pgfounie"
> how it helps me with  "How can I protect my postgres server, when i don't
> know exactly ..." ?
>
By configuring the required log information at the server level, we can
get the complete statistics information using pgBadger. Find a sample
pgbadger report here.
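
A minimal postgresql.conf logging setup for pgBadger might look like this (a
sketch; the duration threshold is an assumption, tune it to your workload):

  log_min_duration_statement = 250ms   # log statements slower than this
  log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
  log_checkpoints = on
  log_connections = on
  log_disconnections = on
  log_lock_waits = on
  log_temp_files = 0                   # log every temp file created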

Thanks,
Dinesh



Re: [GENERAL] pg_upgrade & tablespaces

2013-12-23 Thread Adrian Klaver

On 12/23/2013 06:45 AM, Joseph Kregloh wrote:



>> And /usr/local/pgsql was re-initdbed with a 9.0 cluster, because
>> previously it was the 9.3 cluster?
>>
>> And you are sure /usr/local/bin has the 9.3 binaries?
>>
>> Personally I would say at this point the relationships between
>> versions are so confused it would seem best to start from scratch.
>>
>> My suggestions:
>>
>> 1) Create a new jail with a copy of the test 9.0 cluster located in
>> /usr/local/pgsql.
>>
>> 2) In that jail install a new 9.3 cluster using the --prefix= switch
>> to configure to have it install in a different location in the jail.
>>
>> 3) Use pg_upgrade.



> Before every test I do a ZFS rollback which resets all data on the disk
> back to the last snapshot. So essentially every time it's dealing with a
> new install.


And that has repeatedly been proven not to work :) I was suggesting to go
back even further and not start from the snapshot, but from a totally new
installation where both instances are in the same jail.
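
For step 3, the invocation would then look something like this (a sketch;
the 9.3 prefix path is hypothetical):

  /usr/local/pgsql93/bin/pg_upgrade \
    --old-bindir=/usr/local/bin \
    --new-bindir=/usr/local/pgsql93/bin \
    --old-datadir=/usr/local/pgsql/data \
    --new-datadir=/usr/local/pgsql93/data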


> However I will try the suggestions you had in your next email
> (I can see into the future) and report back.


Great, let me know what the answer is :)



-Thanks




--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade & tablespaces

2013-12-23 Thread John R Pierce

On 12/23/2013 6:50 AM, Joseph Kregloh wrote:




> Well, the original architect started out in the 80s with banking
> databases, and they just kept that model without revisiting whether it
> works well or not; that might explain it a little bit. But also, given
> the size of our tables, we use the physical disks and filesystem
> advantages to improve the speed and performance of the application,
> though not as often as I would like. We have a pretty big database.


I've found that these days you're usually better off just striping all your 
mirrors into one big RAID 10 and letting statistics load-balance your 
IO.   I've got stripe sets of as many as 20 small, fast drives, totalling 
several terabytes, using XFS (Linux), ZFS (Solaris, BSD), or JFS2 
(AIX), all of which seem to handle the large filesystem quite efficiently.


--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] Multi Master Replication

2013-12-23 Thread pax
Some time ago I was looking for something like this, and because XC was a
little baby at the time, I tried installing Bucardo, but I gave up when I got
stuck fighting with Perl modules. So, after testing some other solutions, I
decided to make my own, just "touching" the trigger part of the pyreplica
project and building the daemon as a compiled binary (no script) for better
performance. It's working well so far, with some tables and a few nodes (12
nodes deployed across long distances), but I was lucky then because the
database and the systems were still in the design and development phases.
There are many things you have to keep in mind, as everyone is warning
you. I tell you, it can be done, but you MUST redesign your existing
database for the multi-master paradigm, where the key is avoiding node-level
concurrency, and your client database apps must THINK about what to do when
nodes are down. If you are not ready to redesign your database, or you do
not have access to the source code of your client database apps, then
you have problems.
I suggest you read this
http://www.dbspecialists.com/files/presentations/mm_replication.html and
this
http://scale-out-blog.blogspot.com/2012/04/if-you-must-deploy-multi-master.html.
Finally, I can mention another possible solution that I never tried but that
may help you: SymmetricDS.
Good luck.






Re: [GENERAL] Multi Master Replication

2013-12-23 Thread Anand Kumar, Karthik
We use SymmetricDS pretty extensively, across Oracle and Postgres
databases. It has its flaws and its strengths. It shines when there are
multiple database platforms involved and the volume of transactions is
not too high, and it supports multi-master. It's optimized for WAN topologies,
so it's great if you have small amounts of data over longer distances, not
as great when there are several gigabytes needing transfer within the same
datacenter. It also has built-in insert-vs-update conflict resolution.

On the flip side, as the volume of writes increases, it can quickly lead
to significant database bloat and high CPU usage. It's also not as fast as
some other tools (like Slony, or ora2pg) for initial loads of tables.

It exposes all of its switches and knobs (and there are a *lot* of
tunables), which makes it both very flexible and prone to breakage if
you fiddle with it too much. It's in active development and we've found the
community forums to be pretty helpful when we run into undocumented bugs.

Let me know if you need help with it.

Thanks,
Karthik





Re: [GENERAL] to_json(now()) result doesn't have 'T' separator

2013-12-23 Thread Joe Van Dyk
This has been brought up a few times in the past:
http://www.postgresql.org/message-id/CAAZKuFZF5=raA=rlncqeg_8gsj9vi4_e-fi1aomk4zp+dxc...@mail.gmail.com
http://www.postgresql.org/message-id/ec26f5ce-9f3b-40c9-bf23-f0c2b96e3...@gmail.com

Any chance it could be fixed? I can't figure out a way to easily let
javascript applications parse json timestamps generated by postgresql in
row_to_json() statements.


On Fri, Dec 20, 2013 at 6:27 PM, Joe Van Dyk  wrote:

> On Fri, Dec 20, 2013 at 4:24 PM, Joe Van Dyk  wrote:
>
>>
>>
>> On Fri, Dec 20, 2013 at 4:18 PM, Joe Van Dyk  wrote:
>>
>>> # select to_json(now());
>>>              to_json
>>> ---------------------------------
>>>  "2013-12-20 15:53:39.098204-08"
>>> (1 row)
>>>
>>> I'd like to see it output "2013-12-20T15:53:39.098204-08" so it's
>>> interchangeable with more systems.
>>>
>>
>> Actually, "2013-12-20T15:53:39.44553-08:00" would be ideal.
>>
>
> I dug into the docs some more, and I found this at
> http://www.postgresql.org/docs/9.3/static/datatype-datetime.html:
> "Note: ISO 8601 specifies the use of uppercase letter T to separate the
> date and time. PostgreSQL accepts that format on input, but on output it
> uses a space rather than T, as shown above. This is for readability and for
> consistency with RFC 3339 as well as some other database systems."
>
> So I looked up RFC 3339 (http://www.ietf.org/rfc/rfc3339.txt) and read:
>  "NOTE: ISO 8601 defines date and time separated by "T".  Applications
> using this syntax may choose, for the sake of readability, to specify a
> full-date and full-time separated by (say) a space character."
>
> It doesn't seem like including the 'T' separator would be inconsistent with
> RFC 3339, then?
>
> I'm sending the output of to_json(now()) to web browsers. Most browsers
> aren't able to parse the date strings if they are missing the 'T'
> separator. If datetime strings could include the 'T' time separator and the
> full timezone, that would make generating json that worked with web
> browsers much simpler.
>
> Joe
>
>
>
>>
>>>
>>>
>>> http://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations
>>> says 'T' can be omitted "by mutual agreement".
>>>
>>> I'm working with javascript/json systems that expect the 'T' to be there
>>> however, so there's no mutual agreement happening.
>>>
>>> Thoughts? I know I can hack around it by specifying my own date format,
>>> but I'd really like to be able to use row_to_json and other functions
>>> without specifying custom date formats everywhere.
>>>
>>> Joe
>>>
>>
>>
>


Re: [GENERAL] to_json(now()) result doesn't have 'T' separator

2013-12-23 Thread Joe Van Dyk
I tried making some cast functions as follows, but it doesn't seem to work
properly:

create or replace function iso_timestamp(timestamptz) returns json as $$
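  -- xmlelement renders the value in ISO 8601: <x>2013-12-23T17:37:08.825935-08:00</x>
  -- substring(... from 4 for 32) strips the <x></x> wrapper (assumes six fractional digits)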
  select ('"' ||
  substring(xmlelement(name x, $1)::text from 4 for 32) || '"'
)::json
$$ language sql immutable;

create cast (timestamptz as json) with function iso_timestamp (timestamptz)
as implicit;

create function to_json(timestamptz) returns json as $$
  select $1::json
$$ language sql immutable;

create table t (id serial primary key, created_at timestamptz default
now());
insert into t values (default);
select row_to_json(t) from t;

                      row_to_json
--------------------------------------------------------
 {"id":1,"created_at":"2013-12-23 17:37:08.825935-08"}




Re: [GENERAL] to_json(now()) result doesn't have 'T' separator

2013-12-23 Thread Adrian Klaver
On Mon, Dec 23, 2013 at 5:37 PM, Joe Van Dyk  wrote:
> I tried making some cast functions as follows, but it doesn't seem to work
> properly:
>
> create or replace function iso_timestamp(timestamptz) returns json as $$
>   select ('"' ||
>   substring(xmlelement(name x, $1)::text from 4 for 32) || '"'
> )::json
> $$ language sql immutable;
>
> create cast (timestamptz as json) with function iso_timestamp (timestamptz)
> as implicit;
>
> create function to_json(timestamptz) returns json as $$
>   select $1::json
> $$ language sql immutable;
>
> create table t (id serial primary key, created_at timestamptz default
> now());
> insert into t values (default);
> select row_to_json(t) from t;
>
>                       row_to_json
> --------------------------------------------------------
>  {"id":1,"created_at":"2013-12-23 17:37:08.825935-08"}
>
>


The closest I could get using the above is:

test=# select to_json(created_at) from t;
              to_json
------------------------------------
 "2013-12-23T19:09:33.886092-08:00"

Seems row_to_json bypasses casts whereas to_json does not.
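
For completeness, a plain formatting workaround is also possible (a sketch,
not from the thread; it normalizes to UTC so the zone can be written as a
literal 'Z'):

  -- format a timestamptz as ISO 8601 with the 'T' separator
  select to_char(now() at time zone 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"');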
-- 
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] PG replication across DataCenters

2013-12-23 Thread Sameer Kumar
Though I will agree that Slony is a nice and great tool for replication
(specifically selective replication), I would disagree on the points below:


* Cascading replication chains (a really big deal when you want
  multiple slaves in the secondary facility and don't want to hog
  your bandwidth)

Really? Which version of Postgres are we talking about? Cascading
replication has been available since v9.2:
http://www.postgresql.org/docs/9.2/static/warm-standby.html#CASCADING-REPLICATION
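
Cascading is configured by pointing a standby's recovery.conf at another
standby instead of at the master; a sketch (host name hypothetical):

  standby_mode = 'on'
  # connect to an existing standby rather than to the master
  primary_conninfo = 'host=standby1.example.com port=5432 user=replicator'
  recovery_target_timeline = 'latest'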

* Quick and easy movement of the master to any of the databases in
  the cluster without destroying replication.

Again, which version? Re-mastering is made simple in v9.3.

* Seeding of new slaves without interrupting existing nodes (assuming
  your hardware has a little free capacity)

AFAIK, streaming replication does not cause any interruption while you add
a new node.

* Selective replication of tables, potentially in complex arrangements
  where some tables are replicated only to A and some only to B
  and some to A and B, etc, etc.

Agree.


In general I do not like trigger-based (replication) solutions for huge
clusters [this is my personal opinion and does not necessarily reflect my
employer's opinion ;-)], or for databases which have a huge write volume,
specifically if you do bulk insert/delete/update operations.
I think whether to choose Slony or streaming replication will depend on the
factors below:


1) How much of your total change set does the part you want to replicate
contribute? E.g., on a per-minute basis, if it's 70% or above, I would
recommend you go for streaming replication.

2) Do you have too many tables to add to the replication set? Let's say
above 70% of your total tables need to be replicated (unless the remaining
30% have heavy write operations); then go for streaming replication.

3) Do you have many bulk operations happening on the tables which need to
be replicated?

4) To some extent your choice will be influenced by the motivation behind
replication: DR, HA, or a reporting application (especially if you are
particular about replicating only selective tables for reports).

There are a few easier ways of managing a Slony cluster:

1)
http://shoaibmir.wordpress.com/2009/08/05/setting-up-slony-cluster-with-perltools/

2) I think even pgAdmin supports Slony replication (not sure if it's Slony-I
or Slony-II)


Regards
Sameer


Re: [GENERAL] Some good news ([i...@arin.net: [arin-announce] ARIN Database Migration Completed])

2013-12-23 Thread Sameer Kumar
On Sun, Dec 15, 2013 at 11:57 PM, Andrew Sullivan wrote:

> I thought people would like to know about this.  For those who don't know,
> ARIN is the Regional Internet Registry for North America and the
> Caribbean.  That is, if you have an IP address and you are operating
> in that region, you depend on ARIN.
>
> Hurray!
>
> A
>
> - Forwarded message from ARIN  -
>
> Date: Sat, 14 Dec 2013 17:15:41 -0500
> From: ARIN 
> To: arin-annou...@arin.net
> Subject: [arin-announce] ARIN Database Migration Completed
> List-Id: ARIN Announcements and Notices 
>
> We are pleased to announce that ARIN successfully completed migration
> of its database from Oracle to PostgreSQL. All ARIN systems are now
> operating normally. We thank you for your patience as we continue to
> strive to improve our services.
>
> Please send any questions, comments, or issues to hostmas...@arin.net.
>
> Regards,
>
> Mark Kosters
> Chief Technical Officer
> American Registry for Internet Numbers (ARIN)
>
> - End forwarded message -
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
>
>


+ pg-advocacy