[GENERAL] Text,Citext column and Btree index

2016-09-01 Thread Viswanath
Hi,
Why is a citext column not using a B-tree index for a LIKE operation, even
without any wildcard character? It can use a B-tree index if there is no
wildcard character, or if the wildcard is only at the end, right?
Also, a text column uses the index when there is no wildcard character, but
it does not use it when a wildcard is present at the end.







Re: [GENERAL] Text,Citext column and Btree index

2016-09-01 Thread Vik Fearing
On 09/01/2016 09:34 AM, Viswanath wrote:
> Hi,
> Why is a citext column not using a B-tree index for a LIKE operation, even
> without any wildcard character?

Because it doesn't know how.

> It can use a B-tree index if there is no wildcard character, or if the
> wildcard is only at the end, right?

No; use pg_trgm for this.

> Also, a text column uses the index when there is no wildcard character, but
> it does not use it when a wildcard is present at the end.

Did you declare your index with text_pattern_ops?
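
For the archives, a minimal sketch of both cases (table and column names
invented for illustration):

-- In a non-C locale a plain B-tree cannot serve LIKE 'abc%';
-- text_pattern_ops lets the planner turn the prefix into a range scan:
CREATE TABLE t (name text);
CREATE INDEX t_name_pattern_idx ON t (name text_pattern_ops);

-- pg_trgm handles LIKE/ILIKE with wildcards in any position, including
-- case-insensitive matching, which is the usual answer for citext:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX t_name_trgm_idx ON t USING gin (name gin_trgm_ops);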
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




[GENERAL] Array element foreign keys

2016-09-01 Thread Emrul
Hi,

I know a while ago someone posted a patch to support this:
http://blog.2ndquadrant.com/postgresql-9-3-development-array-element-foreign-keys/
but it seems that it has not made it in as of 9.5.

Does anyone know why it wasn't included, or whether it is planned for a
(near) future release?









[GENERAL] COL unique (CustomerID) plus COL unique (COUNT) inside CustomerID

2016-09-01 Thread Michelle Konzack
Hello to all,

after a period of silence from Debian, Courier, PHP and PostgreSQL I  am
half back and running into a problem...  :-/

I need a table with a UNIQUE CustomerID, which is working fine...

...BUT I also need a second column with a count, which must be UNIQUE
within each CustomerID.

In clear this:

CustID   Count
 1   1
 1   2
 1   3

 2   1
 2   2

 3   1
 3   2
 3   3
 ...

How to do this?

Thanks in advance

-- 
Michelle Konzack    ITSystems
GNU/Linux Developer 0033-6-61925193




Re: [GENERAL] COL unique (CustomerID) plus COL unique (COUNT) inside CustomerID

2016-09-01 Thread Szymon Lipiński
Hi,
you can use the UNIQUE constraint with two columns: UNIQUE(CustID, Count).
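
A minimal sketch (table name invented, and the count column renamed since
"count" is better avoided):

CREATE TABLE customer_counts (
    cust_id  integer NOT NULL,
    count_er integer NOT NULL,
    UNIQUE (cust_id, count_er)
);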


regards,
Szymon Lipiński


On 1 September 2016 at 11:28, Michelle Konzack 
wrote:

> Hello to all,
>
> after a period of silence from Debian, Courier, PHP and PostgreSQL I  am
> half back and running into a problem...  :-/
>
> I need a table with a UNIQUE CustomerID, which is working fine...
>
> ...BUT I also need a second column with a count, which must be UNIQUE
> within each CustomerID.
>
> In clear this:
>
> CustID   Count
>  1   1
>  1   2
>  1   3
>
>  2   1
>  2   2
>
>  3   1
>  3   2
>  3   3
>  ...
>
> How to do this?
>
> Thanks in advance
>
> --
> Michelle Konzack    ITSystems
> GNU/Linux Developer 0033-6-61925193
>
>


Re: [GENERAL] Do function calls get cached?

2016-09-01 Thread Daniel Caldeweyher
Thanks David,

Lateral did the trick:

CREATE VIEW with_keywords AS
SELECT x, y, z, keywords.a, keywords.b, keywords.c
FROM large_table l, LATERAL extract_keywords(l.*) AS keywords(a, b, c);

Regards,
Daniel

On Wed, Aug 31, 2016 at 6:46 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Aug 30, 2016 at 4:15 PM, Daniel Caldeweyher 
> wrote:
>
>>
>> select x,y,z, (extract_keywords(l.*)).*
>>
>> ​[...]​
>
>
>> Does this mean the function gets called three time?
>>
>
> Yes.
>
> (function_call(...)).*
>
> syntax is problematic.  You should avoid it via one of two options:
>
> LATERAL (new way, preferred)
> or
> CTE (old way)
>
> In the CTE version you make the call in the CTE but do "(col).*" in the
> main query.  This way the function is only called once to generate a
> composite output, then the composite output is exploded.
>
> With LATERAL the system is smart enough to do it the right way.
>
> David J.
>
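
For reference, a sketch of the CTE variant described above, reusing Daniel's
names and assuming extract_keywords returns a composite of (a, b, c):

-- the function is called once per row inside the CTE;
-- the composite is exploded only in the outer query:
WITH kw AS (
    SELECT x, y, z, extract_keywords(l.*) AS k
    FROM large_table l
)
SELECT x, y, z, (k).a, (k).b, (k).c
FROM kw;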


Re: [GENERAL] Array element foreign keys

2016-09-01 Thread Tom Lane
Emrul  writes:
> I know a while ago someone posted a patch to support this:
> http://blog.2ndquadrant.com/postgresql-9-3-development-array-element-foreign-keys/
> but it seems that it has not made it in as of 9.5.

> Does anyone know why it wasn't included, or whether it is planned for a
> (near) future release?

According to the thread at
https://www.postgresql.org/message-id/flat/1343842863.5162.4.camel%40greygoo.devise-it.lan
it was bounced back for rework because of serious semantic and performance
deficiencies.  Since that was four years ago and nothing new has been
submitted, I wouldn't hold my breath.

regards, tom lane




Re: [GENERAL] UPDATE OR REPLACE?

2016-09-01 Thread Mike Sofen
On Thu, Sep 1, 2016 at 12:10 PM, dandl  wrote:
> Sqlite has options to handle an update that causes a duplicate key. Is 
> there anything similar in Postgres?
> This is not an UPSERT. The scenario is an UPDATE that changes some key 
> field so that there is now a duplicate key. In Sqlite this is handled as:
> UPDATE OR IGNORE table SET 
> UPDATE OR REPLACE table SET 
>
> And so on
>
> See https://www.sqlite.org/lang_update.html.
>
> Can Postgres do this?

I would propose that this effectively violates referential integrity and 
shouldn't be a valid design pattern.

In my mind primary keys are supposed to be static, stable, non-volatile...aka 
predictable.  It feels like an alien invading my schema, to contemplate such an 
activity.  I hope PG never supports that.

Postgres allows developers incredible freedom to do really crazy things.  That 
doesn't mean that they should.

Mike Sofen (USA)





Re: [GENERAL] Clustered index to preserve data locality in a multitenant application?

2016-09-01 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Nicolas Grilly
Sent: Wednesday, August 31, 2016 6:32 PM
To: Kenneth Marshall 
Cc: Vick Khera ; pgsql-general 
Subject: Re: [GENERAL] Clustered index to preserve data locality in a 
multitenant application?

On Tue, Aug 30, 2016 at 8:17 PM, Kenneth Marshall  wrote:
We have been using the extension pg_repack to keep a table groomed into
cluster order. With an appropriate FILLFACTOR to keep updates on the same
page, it works well. The issue is that it needs space to rebuild the new
index/table. If you have that, it works well.

In DB2, it seems possible to define a "clustering index" that determines how 
rows are physically ordered in the "table space" (the heap).

The documentation says: "When a table has a clustering index, an INSERT 
statement causes DB2 to insert the records as nearly as possible in the order 
of their index values."

It looks like a kind of "continuous CLUSTER/pg_repack". Is there something 
similar available or planned for PostgreSQL?


Don’t know about plans to implement clustered indexes in PostgreSQL.

Not sure if this was mentioned, MS SQL Server has clustered indexes, where heap 
row is just stored on the leaf level of the index.
Oracle also has similar feature: IOT, Index Organized Table.

It seems to me (maybe I'm wrong) that in PostgreSQL it would be much harder
to implement a clustered index (with the heap row stored in the index leaf)
because of the way MVCC is implemented: multiple row versions are stored in
the table itself (whereas e.g. Oracle keeps the table “clean” and stores
old row versions in the UNDO tablespace/segment).

Regards,
Igor Neyman




Re: [GENERAL] COL unique (CustomerID) plus COL unique (COUNT) inside CustomerID

2016-09-01 Thread rob stone

On Thu, 2016-09-01 at 12:28 +0300, Michelle Konzack wrote:
> Hello to all,
> 
> after a period of silence from Debian, Courier, PHP and PostgreSQL
> I  am
> half back and running into a problem...  :-/
> 
> I need a table with a UNIQUE CustomerID, which is working fine...
>
> ...BUT I also need a second column with a count, which must be UNIQUE
> within each CustomerID.
> 
> In clear this:
> 
> CustID   Count
>  1   1
>  1   2
>  1   3
> 
>  2   1
>  2   2
> 
>  3   1
>  3   2
>  3   3
>  ...
> 
> How to do this?
> 
> Thanks in advance
> 
> -- 
> Michelle Konzack    ITSystems
> GNU/Linux Developer 0033-6-61925193
> 
> 


Hi,

(I wouldn't name a column "count").


SELECT MAX(count_er) FROM my_customer_table WHERE cust_id = $1;

Add one to the result. Do INSERT.

As SELECT MAX is an aggregate function it returns NULL (not zero) if the
cust_id doesn't exist yet, so wrap it in COALESCE(..., 0).
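
A sketch of that pattern as a single statement; COALESCE supplies the zero,
and concurrent inserts for the same cust_id still need a lock or a retry on
the unique constraint to be safe:

INSERT INTO my_customer_table (cust_id, count_er)
SELECT $1, COALESCE(MAX(count_er), 0) + 1
FROM my_customer_table
WHERE cust_id = $1;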

HTH,

Rob




Re: [GENERAL] COL unique (CustomerID) plus COL unique (COUNT) inside CustomerID

2016-09-01 Thread Andreas Kretschmer
Michelle Konzack  wrote:

> Hello to all,
> 
> after a period of silence from Debian, Courier, PHP and PostgreSQL I  am
> half back and running into a problem...  :-/
> 
> I need a table with a UNIQUE CustomerID, which is working fine...
>
> ...BUT I also need a second column with a count, which must be UNIQUE
> within each CustomerID.
> 
> In clear this:
> 
> CustID   Count
>  1   1
>  1   2
>  1   3
> 
>  2   1
>  2   2
> 
>  3   1
>  3   2
>  3   3
>  ...
> 
> How to do this?

Don't store the Count column at all; compute it with row_number() over
(partition by CustId) instead?
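
A sketch; the ORDER BY column is an assumption, any stable per-customer
ordering works:

SELECT cust_id,
       row_number() OVER (PARTITION BY cust_id ORDER BY created_at) AS count
FROM my_customer_table;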

Btw.: Greetings, how are you? ;-)


Regards, Andreas Kretschmer
-- 
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] UPDATE OR REPLACE?

2016-09-01 Thread dandl
> > Sqlite has options to handle an update that causes a duplicate key.
> > Is there anything similar in Postgres?
> > This is not an UPSERT. The scenario is an UPDATE that changes some
> > key field so that there is now a duplicate key. In Sqlite this is
> > handled as:
> > UPDATE OR IGNORE table SET 
> > UPDATE OR REPLACE table SET 
> >
> > And so on
> >
> > See https://www.sqlite.org/lang_update.html.
> >
> > Can Postgres do this?
> 
> I would propose that this effectively violates referential integrity
> and shouldn't be a valid design pattern.
> 
> In my mind primary keys are supposed to be static, stable, non-
> volatile...aka predictable.  It feels like an alien invading my
> schema, to contemplate such an activity.  I hope PG never supports
> that.

It's an interesting proposition, but one that I fear will not find universal
support. The relational model itself has no such requirement, and there are
perfectly valid tables that have no primary key but use a constraint to forbid
duplicates. A link table implementing an N:M relationship is one such.
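
For example, a hypothetical link table with no primary key, where duplicates
are forbidden by a constraint:

CREATE TABLE person_group (
    person_id integer NOT NULL,
    group_id  integer NOT NULL,
    UNIQUE (person_id, group_id)
);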

In my particular situation the case I care about is when the result of an 
UPDATE is two identical rows. All I really want is a DISTINCT option.

> Postgres allows developers incredible freedom to do really crazy
> things.  That doesn't mean that they should.

To the best of my knowledge and belief that statement could be made about every 
serious programming language I've ever used. Why should Postgres SQL be any 
different?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] UPDATE OR REPLACE?

2016-09-01 Thread Adrian Klaver

On 09/01/2016 07:37 AM, dandl wrote:
>>> Sqlite has options to handle an update that causes a duplicate key.
>>> Is there anything similar in Postgres?
>>> This is not an UPSERT. The scenario is an UPDATE that changes some
>>> key field so that there is now a duplicate key. In Sqlite this is
>>> handled as:
>>> UPDATE OR IGNORE table SET 
>>> UPDATE OR REPLACE table SET 
>>>
>>> And so on
>>>
>>> See https://www.sqlite.org/lang_update.html.
>>>
>>> Can Postgres do this?
>>
>> I would propose that this effectively violates referential integrity
>> and shouldn't be a valid design pattern.
>>
>> In my mind primary keys are supposed to be static, stable,
>> non-volatile...aka predictable.  It feels like an alien invading my
>> schema, to contemplate such an activity.  I hope PG never supports
>> that.
>
> It's an interesting proposition, but one that I fear will not find
> universal support. The relational model itself has no such requirement,
> and there are perfectly valid tables that have no primary key but use a
> constraint to forbid duplicates. A link table implementing an N:M
> relationship is one such.
>
> In my particular situation the case I care about is when the result of
> an UPDATE is two identical rows. All I really want is a DISTINCT option.

Assuming I am following correctly, what you want is that the result of an
UPDATE not be two identical rows.

>> Postgres allows developers incredible freedom to do really crazy
>> things.  That doesn't mean that they should.
>
> To the best of my knowledge and belief that statement could be made
> about every serious programming language I've ever used. Why should
> Postgres SQL be any different?
>
> Regards
> David M Bennett FACS
>
> Andl - A New Database Language - andl.org


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Rackspace to RDS using DMS (Postgres 9.2)

2016-09-01 Thread Joshua D. Drake

On 08/31/2016 03:41 PM, Patrick B wrote:

Hi guys, I posted this question on the ADMIN list but will post here as
well so more people can comment...
https://www.postgresql.org/message-id/CAJNY3it_AfxJhmwMHtpiAbHG47GS5rJOAUgfHw%2BGm5OXCbUm7w%40mail.gmail.com

I've got a 2.3TB Database running at Rackspace... We'll be migrating it
to RDS PostgreSQL 9.5 very soon...

We already have an EC2 Instance at Amazon running PostgreSQL 9.2 as
streaming replication from Rackspace.

I'll have to upgrade the version of Postgres on that instance before
starting to use the DMS service.

*Question:*
Has anybody ever used that service? I'm just trying to find out how much
time it will take to perform the migration...


It is 2.3TB, it is going to take a long time no matter what service you 
are running.


No, I have not used DMS. Frankly, with all respect to AWS/RDS the idea 
of running a 2.3TB instance that will get any level of performance 
sounds ridiculously expensive.


Sincerely,

JD




Thanks!
Patrick



--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.




Re: [GENERAL] Rackspace to RDS using DMS (Postgres 9.2)

2016-09-01 Thread Mike Sofen
From: Joshua D. Drake  Sent: Thursday, September 01, 2016 9:29 AM
On 08/31/2016 03:41 PM, Patrick B wrote:

> Hi guys, I posted this question on the ADMIN list but will post here
> as well so more people can comment...
> https://www.postgresql.org/message-id/CAJNY3it_AfxJhmwMHtpiAbHG47GS5rJOAUgfHw%2BGm5OXCbUm7w%40mail.gmail.com
>
> I've got a 2.3TB Database running at Rackspace... We'll be migrating
> it to RDS PostgreSQL 9.5 very soon...
>
> We already have an EC2 Instance at Amazon running PostgreSQL 9.2 as
> streaming replication from Rackspace.
>
> I'll have to upgrade the version of Postgres on that instance before
> starting to use the DMS service.
>
> *Question:*
> Has anybody ever used that service? I'm just trying to find out how
> much time it will take to perform the migration...

It is 2.3TB, it is going to take a long time no matter what service you
are running.

No, I have not used DMS. Frankly, with all respect to AWS/RDS, the idea
of running a 2.3TB instance that will get any level of performance
sounds ridiculously expensive.

Sincerely,

JD


I currently have an EC2 instance in AWS – an m4.xlarge (4 cores, 16gb, 3tb 
SSDs) and it’s pretty cheap, about $620/mo ($210/mo for the compute, $410 for 
the storage).  The performance of this setup rivals in-house Cisco UCS server 
that we are demoing that costs ~$100k, as long as our batch sizes don’t exceed 
available memory – that’s where the larger Cisco pulls ahead.  The $620/mo is 
the on-demand price, btw…the reserved price is much lower.

 

$100k / $620 = 161 months of operation before cost parity.

 

Mike S



Re: [GENERAL] COL unique (CustomerID) plus COL unique (COUNT) inside CustomerID

2016-09-01 Thread Kevin Grittner
On Thu, Sep 1, 2016 at 4:28 AM, Michelle Konzack
 wrote:

> I need a table with a UNIQUE CustomerID, which is working fine...
>
> ...BUT I also need a second column with a count, which must be UNIQUE
> within each CustomerID.

Just to be clear, you probably have a customer table with "CustomerId"
as its key and are talking about a "child" of that where you want a
2-column key?

> In clear this:
>
> CustID   Count
>  1   1
>  1   2
>  1   3
>
>  2   1
>  2   2
>
>  3   1
>  3   2
>  3   3
>  ...
>
> How to do this?

If, for example, the child table is a list of customer contacts,
you might add a "LastContactNo" column to the customer table,
defaulting to zero on customer insert, and which you increment to
get values for the second key column in the contact table.
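
A sketch of that design (all names assumed):

CREATE TABLE customer (
    customer_id     integer PRIMARY KEY,
    last_contact_no integer NOT NULL DEFAULT 0
);

CREATE TABLE customer_contact (
    customer_id integer NOT NULL REFERENCES customer,
    contact_no  integer NOT NULL,
    PRIMARY KEY (customer_id, contact_no)
);

-- Allocate the next per-customer number; the row lock taken by UPDATE
-- serializes concurrent allocations for the same customer:
UPDATE customer
   SET last_contact_no = last_contact_no + 1
 WHERE customer_id = 1
RETURNING last_contact_no;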

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Clustered index to preserve data locality in a multitenant application?

2016-09-01 Thread Eduardo Morras
On Wed, 31 Aug 2016 17:33:18 +0200
Nicolas Grilly  wrote:

> Eduardo Morras wrote:
> 
> 
> > Check BRIN indexes, they are "designed for handling very large
> > tables in which certain columns have some natural correlation with
> > their physical location within the table", I think they fit your
> > needs.
> 
> 
> Yes, a BRIN index on the tenant ID would be very useful if the rows
> in the heap were naturally sorted by the tenant ID, but they are not.
> They are naturally sorted by their order of insertion, which is
> completely unrelated. The first step in solving this is to find a way
> to keep rows belonging to the same tenant close to each other. The
> second step could be to use a BRIN index.

Then you can make multiple column partial indexes:

CREATE INDEX CONCURRENTLY tenant_01_idx ON big_tenant_table 
(the_columns_with_data_you_need, tenant_id) WHERE tenant_id = 1;
CREATE INDEX CONCURRENTLY tenant_02_idx ON big_tenant_table 
(the_columns_with_data_you_need, tenant_id) WHERE tenant_id = 2;

This way each index holds the data for one tenant, is updated only when that
tenant's data is updated, and each index has its own files, so you can reindex
to clean out index bloat:

REINDEX INDEX tenant_01_idx;

Or group them if there are too many indexes:
CREATE INDEX CONCURRENTLY tenant_01_idx ON big_tenant_table 
(the_columns_with_data_you_need, tenant_id) WHERE tenant_id <= 300;
CREATE INDEX CONCURRENTLY tenant_02_idx ON big_tenant_table 
(the_columns_with_data_you_need, tenant_id) WHERE tenant_id > 300 AND tenant_id 
<= 600;
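
And once rows for a tenant really are grouped physically, the BRIN index
mentioned earlier in the thread becomes an option; a sketch, useful only if
heap order actually correlates with tenant_id:

CREATE INDEX big_tenant_table_brin ON big_tenant_table USING brin (tenant_id);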

---   ---
Eduardo Morras 




Re: [GENERAL] Rackspace to RDS using DMS (Postgres 9.2)

2016-09-01 Thread Vick Khera
On Thu, Sep 1, 2016 at 1:44 PM, Mike Sofen  wrote:
> $100k/ $620 = 161 months of operation before cost parity.

You ought to maybe compare what you can buy from an outfit like
Silicon Mechanics for ~$15k. I suspect you could get 16-core, 256GB
RAM, and several TB of disk. And then your parity would be much
shorter and you'd have higher performance.




Re: [GENERAL] Rackspace to RDS using DMS (Postgres 9.2)

2016-09-01 Thread Patrick B
>
>
>
> It is 2.3TB, it is going to take a long time no matter what service you
> are running.
>
>
I know that, Joshua! I just wanna get an idea from someone who has used the
DMS service...


>
>
> No, I have not used DMS. Frankly, with all respect to AWS/RDS the idea of
> running a 2.3TB instance that will get any level of performance sounds
> ridiculously expensive.
>
>
A business that has a database as large as 2.3TB is not worried about costs...
The system has been running fine since 2009 and now it's time to change the
infrastructure to something where changes are easier to make when needed. We
also have lots of customers in Australia and the USA, so the availability at
Amazon will help with the latency for sure.


>
>
> Sincerely,
>
>
>
> JD
>
>
>
> I currently have an EC2 instance in AWS – an m4.xlarge (4 cores, 16gb, 3tb
> SSDs) and it’s pretty cheap, about $620/mo ($210/mo for the compute, $410
> for the storage).  The performance of this setup rivals in-house Cisco UCS
> server that we are demoing that costs ~$100k, as long as our batch sizes
> don’t exceed available memory – that’s where the larger Cisco pulls ahead.
> The $620/mo is the on-demand price, btw…the reserved price is much lower.
>
>
>
> $100k/ $620 = 161 months of operation before cost parity.
>
>
>
> Mike S
>


We'll be using RDS, not an EC2 Instance.


Re: [GENERAL] UPDATE OR REPLACE?

2016-09-01 Thread dandl
>> In my particular situation the case I care about is when the result
>> of an UPDATE is two identical rows. All I really want is a DISTINCT
>> option.
>
> Assuming I am following correctly, what you want is that the result of
> an UPDATE not be two identical rows.

Correct. In practice I don't care whether the action is IGNORE or REPLACE (in
Sqlite terms); the outcome is the same.

Obviously two different records that share the same primary key are a bad thing
and worth an error. Two identical records are just boring.
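
For what it's worth, a rough emulation of Sqlite's UPDATE OR IGNORE in
Postgres (table and values invented for illustration); the update is simply
skipped when it would collide with an existing row:

UPDATE t
   SET k = 42
 WHERE id = 1
   AND NOT EXISTS (SELECT 1 FROM t WHERE k = 42 AND id <> 1);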

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









[GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-01 Thread Patrick B
Hi guys,

I'll be performing a migration on my production master database server,
which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
I've got some questions about it, and it would be nice if you guys could
share your experiences/thoughts:

*SCENARIO:*

I currently have one MASTER and two Streaming Replication Slave servers...

master01
> slave01 (Streaming replication + wal_files)
> slave02 (Streaming replication + wal_files)


...Postgres is mounted on: /var/lib/pgsql/... The SSD disks will be
installed only on my Master server, because my main problem is Writes and
not reads.

The new SSD volume will be mounted on /var/lib/pgsql2/



   - The slave02 server will lose the streaming replication connection to
   the master. Once slave01 becomes the new master, a new timeline will be
   established. Will slave02 be able to connect to the slave01 server for
   streaming replication?




*MIGRATION OPTIONS:*

*Migration Option 1:* *I know this option will work*


   1. Mount the new volume */var/lib/pgsql2/ *on the master01 server
   2. Turn slave01 into a master server
   3. once I can confirm everything is working fine, I can go to step 4
   4. Stop postgres on master01, start copying the DB using
   pg_basebackup from slave02 to master01 (I will have to edit postgres to
   use /var/lib/pgsql2/ instead of /var/lib/pgsql - is that possible? Or
   would I have to create a symbolic link?)
   5. Start postgres on master01 server and check if all goes well as
   streaming replication server (Will test it for days)
   6. Turn master01 into a master server and I'll have to re-copy the DB
   into slave01 to make it a streaming replication server again


*Migration Option 2:* *I don't know if this is possible - IS THIS
POSSIBLE*

   1. Mount the new volume */var/lib/pgsql2/* on the master01 server
   2. Stop postgres on the server (I won't stop postgres on the slave so
   the users will be able to use the server as read-only)
   3. Copy the data from /var/lib/pgsql/ to /var/lib/pgsql2/
   4. Configure postgres to start using the new volume(/var/lib/pgsql2/)


What do you guys think? Is option 2 possible? If so it would be much easier :)
Thanks!


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-01 Thread Venkata B Nagothi
On Fri, Sep 2, 2016 at 12:48 PM, Patrick B  wrote:

> Hi guys,
>
> I'll be performing a migration on my production master database server,
> which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
> I've got some questions about it, and it would be nice if you guys could
> share your experiences/thoughts:
>
> *SCENARIO:*
>
> I currently have one MASTER and two Streaming Replication Slaves servers...
>
> master01
>> slave01 (Streaming replication + wal_files)
>> slave02 (Streaming replication + wal_files)
>
>
> ...Postgres is mounted on: /var/lib/pgsql/... The SSD disks will be
> installed only on my Master server, because my main problem is Writes and
> not reads.
>
> The new SSD volume will be mounted on /var/lib/pgsql2/
>
>
>
>- The slave02 server will lose the streaming replication connection
>to the master. Once slave01 becomes the new master, a new timeline will be
>established. Will slave02 be able to connect to the slave01 server for
>streaming replication?
>
Yes, slave01 becomes the new master with a new timeline id. Cascading
replication is supported in 9.2, but the dependency on WAL archives is a
bit heavy. You need to ensure the .history file is copied over to slave02. I
think you have WAL archiving enabled, so it should be fine.

*MIGRATION OPTIONS:*
>
> *Migration Option 1:* *I know this option will work*
>
>
>1. Mount the new volume */var/lib/pgsql2/ *on the master01 server
>2. Turn slave01 into a master server
>3. once I can confirm everything is working fine, I can go to step 4
>4. Stop postgres on the master01, start copying the DB using
>pg_basebackup from slave02 to master01 (Will have to edit postgres to
>use /var/lib/pgsql2/ instead of /var/lib/pgsql - is that possible? Or
>would I have to create a symbolic link?)
>5. Start postgres on master01 server and check if all goes well as
>streaming replication server (Will test it for days)
>6. Turn master01 into a master server and I'll have to re-copy the DB
>into slave01 to make it a streaming replication server again
>
@ Step 4, you can consider making master01 a slave directly by building a
new recovery.conf file and copying over slave02's history file; by doing
that, you can avoid rebuilding streaming replication from scratch.
When you say "edit postgres", do you mean changing postgresql.conf? If yes,
changing the parameter in postgresql.conf to use the new location should
not be a problem.

@ Step 6, once you turn master01 (new slave) back into the master server, you
can consider making slave01 (new master) a slave again by copying over the
.history files and required WALs. You do not have to rebuild replication from
scratch.
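
For reference, a sketch of the recovery.conf described above (9.2-era
syntax; the host name, user and archive path are assumptions):

standby_mode = 'on'
primary_conninfo = 'host=slave01 port=5432 user=replicator'
restore_command = 'cp /path/to/wal_archive/%f %p'
recovery_target_timeline = 'latest'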

*Migration Option 2:* *I don't know if this is possible - IS THIS
> POSSIBLE*
>
>1. Mount the new volume */var/lib/pgsql2/* on the master01 server
>2. Stop postgres on the server (I won't stop postgres on the slave so
>the users will be able to use the server as read-only)
>3. Copy the data from /var/lib/pgsql/ to /var/lib/pgsql2/
>4. Configure postgres to start using the new volume(/var/lib/pgsql2/)
>
This looks pretty straightforward. The only issue would be that users
will not be able to do writes. If you are bringing down the master and
starting it up again, it should not be a problem; the slaves should be able
to catch up again.

> What do you guys think? Is option 2 possible? If so it would be much easier :)

Well, both options can work; it depends on your expectations, application
requirements on downtime, SLAs, etc.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-01 Thread Patrick B
2016-09-02 15:36 GMT+12:00 Venkata B Nagothi :

>
> On Fri, Sep 2, 2016 at 12:48 PM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> I'll be performing a migration on my production master database server,
>> which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
>> I've got some questions about it, and it would be nice if you guys could
>> share your experiences/thoughts:
>>
>> *SCENARIO:*
>>
>> I currently have one MASTER and two Streaming Replication Slaves
>> servers...
>>
>> master01
>>> slave01 (Streaming replication + wal_files)
>>> slave02 (Streaming replication + wal_files)
>>
>>
>> ...Postgres is mounted on: /var/lib/pgsql/... The SSD disks will be
>> installed only on my Master server, because my main problem is Writes and
>> not reads.
>>
>> The new SSD volume will be mounted on /var/lib/pgsql2/
>>
>>
>>
>>- The slave02 server will lose the streaming replication connection
>>to the master. Once slave01 becomes the new master, a new timeline will be
>>established. Will slave02 be able to connect to the slave01 server for
>>streaming replication?
>>
> Yes, slave01 becomes the new master with a new timeline id. Cascading
> replication is supported in 9.2, but the dependency on WAL archives is a
> bit heavy. You need to ensure the .history file is copied over to slave02. I
> think you have WAL archiving enabled, so it should be fine.
>


Hmmm... are the .history files located in pg_xlog? I can't see any...
Are they only generated when a new timeline id is created?
If so, I think it will be fine as they're already being shipped to the
slaves.



>
> *MIGRATION OPTIONS:*
>>
>> *Migration Option 1:* *I know this option will work*
>>
>>
>>1. Mount the new volume */var/lib/pgsql2/ *on the master01 server
>>2. Turn slave01 into a master server
>>3. once I can confirm everything is working fine, I can go to step 4
>>4. Stop postgres on the master01, start copying the DB using
>>pg_basebackup from slave02 to master01 (Will have to edit postgres to
>>use /var/lib/pgsql2/ instead /var/lib/pgsql - Is that possible? Or I'd 
>> have
>>to create a symbolic link?)
>>5. Start postgres on master01 server and check if all goes well as
>>streaming replication server (Will test it for days)
>>6. Turn master01 into a master server and I'll have to re-copy the DB
>>into slave01 to make it a streaming replication server again
>>
> @ Step 4, you can consider making master01 a slave directly by building a
> new recovery.conf file and copying over slave02's history file; by doing
> that, you can avoid rebuilding streaming replication from scratch.
> When you say "edit postgres", do you mean changing postgresql.conf? If yes,
> changing the parameter in postgresql.conf to use the new location should
> not be a problem.
>
> @ Step 6, once you turn master01 (new slave) back into the master server,
> you can consider making slave01 (new master) a slave again by copying over
> the .history files and required WALs. You do not have to rebuild replication
> from scratch.
>
> *Migration Option 2:* *I don't know if this is possible - IS THIS
>> POSSIBLE*
>>
>>1. Mount the new volume */var/lib/pgsql2/* on the master01 server
>>2. Stop postgres on the server (I won't stop postgres on the slave so
>>the users will be able to use the server as read-only)
>>3. Copy the data from /var/lib/pgsql/ to /var/lib/pgsql2/
>>4. Configure postgres to start using the new volume(/var/lib/pgsql2/)
>>
> This looks pretty straightforward. The only issue would be that users
> will not be able to do writes. If you are bringing down the master and
> starting it up again, it should not be a problem; the slaves should be able
> to catch up again.
>
> What do you guys think? Is option 2 possible? If so it would be much
> easier :)
>>
>
> Well, both options can work; it depends on your expectations, application
> requirements on downtime, SLAs, etc.
>
>
So is that really possible? Just copy the data between folders? If so, I'll
probably choose option 2!
Even though it is 2.5TB I don't think the copy will take longer than 20
minutes... and I'd still be able to perform reads...

I'll do some tests to see if option 2 can be done :)

Thanks !!! :D