[GENERAL] Partitioning V schema

2013-09-19 Thread Dave Potts

Hi List

I am looking for some general advice about the best way of splitting a
large data table. I have 2 different choices: partitioning or different
schemas.


The data table holds the houses that can be included in a city; as
such, there is a large number of records.



I am wondering whether, if I decided to partition the table, the
update/access speed might be faster than just declaring a different
schema per city.


Under partitioning the data table would appear to be smaller, so I
should get an increase in speed, but the database still has to do some
sort of indexing.


If I used different schemas, it would resolve data protection issues,
but doing a backup might become a nightmare.


In general, which is the faster access method?

regards


Dave.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using ODBC and VBA to pull data from a large object

2013-09-19 Thread Vincent Veyron
Le mardi 17 septembre 2013 à 12:25 -0400, Adam C Falkenberg a écrit :
> Sorry about that.  Here's the driver information and some code. 


> Driver Name: PostgreSQL ANSI 
> Version: 9.02.01.00 
> 
> constr = "Driver={PostgreSQL ANSI}; Server=servername; Port=5432;
> Database=databasename; Uid=username; Pwd=password;" 
> With conn 
> .ConnectionString = (constr) 
> .Open 
> End With 
> 
> SQL = "SELECT data FROM pg_largeobject WHERE loid = " & id & " ORDER
> BY pageno" 
> rs.Open SQL, conn 
> 
> stream.Type = adTypeBinary 
> stream.Open 
> 
> ' Loop through the recordset and write the binary data to the stream 
> While Not rs.EOF 
> stream.Write rs.Fields("data").Value 
> rs.MoveNext 
> Wend 


You don't say where you use that recordset (Excel, Access?)

A Google search will return several discussions about this problem; here
is an example:

http://social.msdn.microsoft.com/Forums/en-US/32b64a3f-3e7a-4e02-a7ef-824cacfea57a/256-char-limit-on-ado-recordset-field



-- 
Salutations, Vincent Veyron
http://gdlc.fr/logiciels
Applications de gestion des contentieux juridiques et des dossiers de sinistres 
assurance





[GENERAL] the new checksum feature

2013-09-19 Thread Torsten Förtsch
Hi,

is it possible to turn on checksums in an existing database? Or do I
have to dump/initdb/restore?

Thanks,
Torsten




[GENERAL] Partitioning table with billion row

2013-09-19 Thread nuklea
I have a big table that is referenced by other tables, and I need to split
it into several parts.

   1. I create table by CREATE TABLE product_part0 () INHERITS (product);
   2. Move some rows to product_part0 by INSERT INTO product_part0 SELECT *
   FROM ONLY product ORDER BY id LIMIT 200;
   3. Now I need to remove duplicates from master-table. Run DELETE FROM
   ONLY product WHERE id IN (SELECT id FROM product_part0);

With result:

ERROR: update or delete on table "product" violates foreign key constraint
"product_id_refs" on table "searchcache"
DETAIL: Key (id)=(13375) is still referenced from table "searchcache".


So, how do I delete the duplicate rows?
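For what it's worth, a sketch of one possible workaround (a guess, not a verified fix; table and constraint names are taken from the post and the error message). With inheritance partitioning, a foreign key on the master table cannot "see" rows moved into child tables, so the constraint has to be dropped before de-duplicating, and referential integrity enforced some other way afterwards:

```sql
BEGIN;
-- The FK only checks rows in the master table, so it must go before
-- rows are moved into the child partition.
ALTER TABLE searchcache DROP CONSTRAINT product_id_refs;

INSERT INTO product_part0
SELECT * FROM ONLY product ORDER BY id LIMIT 200;

DELETE FROM ONLY product
WHERE id IN (SELECT id FROM product_part0);

-- Note: re-adding the FK against "product" would fail again for ids now
-- living in product_part0; cross-partition integrity needs a trigger.
COMMIT;
```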


Re: [GENERAL] the new checksum feature

2013-09-19 Thread Patrick Dung
Hi Torsten,

According to the PostgreSQL 9.3 wiki:
http://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.3#Data_Checksums

the checksum feature can only be enabled during initdb.
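In other words, the route is dump/initdb/restore. A rough sketch (paths are placeholders; `-k`, also spelled `--data-checksums`, is the 9.3 initdb switch):

```
pg_dumpall > /tmp/cluster.sql            # dump the existing cluster
initdb -k -D /path/to/new/data           # new cluster, checksums on
pg_ctl -D /path/to/new/data start
psql -d postgres -f /tmp/cluster.sql     # restore into the new cluster
```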


Regards,
Patrick Dung




 From: Torsten Förtsch 
To: pgsql-general@postgresql.org 
Sent: Thursday, September 19, 2013 5:32 PM
Subject: [GENERAL] the new checksum feature
 

Hi,

is it possible to turn on checksums in an existing database? Or do I
have to dump/initdb/restore?

Thanks,
Torsten



Re: [GENERAL] Using ODBC and VBA to pull data from a large object

2013-09-19 Thread Adam C Falkenberg
I'm using Excel.  I needed to set the MAXVARCHARSIZE parameter in the 
connection string to take care of my issue (MAXVARCHARSIZE=2048 for me). 
That allowed the defined size of the field to equal the actual size. 
Thanks everyone for your help!

Adam



From:   Vincent Veyron 
To: Adam C Falkenberg , 
Cc: pgsql-general@postgresql.org
Date:   09/19/2013 04:59 AM
Subject:Re: [GENERAL] Using ODBC and VBA to pull data from a large 
object
Sent by:pgsql-general-ow...@postgresql.org



Le mardi 17 septembre 2013 à 12:25 -0400, Adam C Falkenberg a écrit :
> Sorry about that.  Here's the driver information and some code. 


> Driver Name: PostgreSQL ANSI 
> Version: 9.02.01.00 
> 
> constr = "Driver={PostgreSQL ANSI}; Server=servername; Port=5432;
> Database=databasename; Uid=username; Pwd=password;" 
> With conn 
> .ConnectionString = (constr) 
> .Open 
> End With 
> 
> SQL = "SELECT data FROM pg_largeobject WHERE loid = " & id & " ORDER
> BY pageno" 
> rs.Open SQL, conn 
> 
> stream.Type = adTypeBinary 
> stream.Open 
> 
> ' Loop through the recordset and write the binary data to the stream 
> While Not rs.EOF 
> stream.Write rs.Fields("data").Value 
> rs.MoveNext 
> Wend 


You don't say where you use that recordset (Excel, Access?)

A google search will return several discussions about this problem; here
is an example :

http://social.msdn.microsoft.com/Forums/en-US/32b64a3f-3e7a-4e02-a7ef-824cacfea57a/256-char-limit-on-ado-recordset-field




-- 
Salutations, Vincent Veyron
http://gdlc.fr/logiciels
Applications de gestion des contentieux juridiques et des dossiers de 
sinistres assurance






Re: [GENERAL] Partitioning V schema

2013-09-19 Thread Agustin Larreinegabe
If I were you I would use partitioning. In my experience, partitioning is
easier and more transparent: I just had to set it up, and from then on I
refer to just one table and I'm done.
About speed: if you set constraint_exclusion = partition, Postgres will
examine constraints only for inheritance child tables and UNION ALL
subqueries, which will improve the performance of your query.
http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html
http://www.postgresql.org/docs/9.2/static/runtime-config-query.html
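A minimal sketch of what that looks like (hypothetical table and city ids):

```sql
CREATE TABLE houses (city_id int NOT NULL, address text);

-- One child per city; the CHECK constraint is what constraint_exclusion uses.
CREATE TABLE houses_city1 (CHECK (city_id = 1)) INHERITS (houses);
CREATE TABLE houses_city2 (CHECK (city_id = 2)) INHERITS (houses);

SET constraint_exclusion = partition;
-- The plan should scan only houses_city1 (plus the empty master table).
EXPLAIN SELECT * FROM houses WHERE city_id = 1;
```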


On Thu, Sep 19, 2013 at 3:02 AM, Dave Potts  wrote:

> Hi List
>
> I am looking for some general advice about the best way of splitting a
> large data table. I have 2 different choices: partitioning or different
> schemas.
>
> The data table holds the houses that can be included in a city; as such,
> there is a large number of records.
>
>
> I am wondering whether, if I decided to partition the table, the
> update/access speed might be faster than just declaring a different
> schema per city.
>
> Under partitioning the data table would appear to be smaller, so I should
> get an increase in speed, but the database still has to do some sort of
> indexing.
>
> If I used different schemas, it would resolve data protection issues, but
> doing a backup might become a nightmare.
>
> In general, which is the faster access method?
>
> regards
>
>
> Dave.
>
>
>
>
>
>



-- 
Gracias
-
Agustín Larreinegabe


[GENERAL] reading csv logs with pgadmin queries

2013-09-19 Thread Dave Cramer
pgAdmin uses multi-line queries, and COPY FROM the csv logs cannot read them.

Does anyone have a workaround or other solution?

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


Re: [GENERAL] Something Weird Going on with VACUUM ANALYZE

2013-09-19 Thread Shaun Thomas

On 09/18/2013 11:50 AM, Jeff Janes wrote:


That line is the final line of a multi-line log entry.  To know what
it is about, you need to look at the lines before it in the logfile.
Perhaps it is failing to obtain a lock or something.


Thanks for the input, Jeff. The full chunk for that PID (22078) is:

|2013-09-17 03:20:08 CDT|LOG:  automatic vacuum of table 
"table_redacted": could not (re)acquire exclusive lock for truncate scan

2013-09-17 03:20:08 CDT|STATEMENT:  VACUUM (VERBOSE, ANALYZE);
2013-09-17 03:20:37 CDT|LOG:  sending cancel to blocking autovacuum PID 
23000
DETAIL:  Process 22078 waits for ShareUpdateExclusiveLock on relation 
18745 of database 16384.

2013-09-17 03:20:37 CDT|STATEMENT:  VACUUM (VERBOSE, ANALYZE);
2013-09-17 03:37:31 CDT|LOG:  duration: 2246467.567 ms  statement: 
VACUUM (VERBOSE, ANALYZE);


Which suggests an autovacuum was screwing with the launch time of the 
database-wide VACUUM VERBOSE, or at least interfered with that one 
table. What that doesn't explain is why the VACUUM VERBOSE would finish 
much sooner when this happens: 17 minutes in this case (if it was 
blocking, or 37 if it was just stuck on that one table) instead of the 
usual 55. I could see it starting later, but it should still take 55 
minutes.


It did it again today, but this time it "finished" in 25 minutes and 
didn't send a cancel to the other autovacuum. The fact that it's so much 
faster after encountering the truncate-scan error makes me wonder whether 
the analyze portion (high CPU usage) just plain stops working after that.


The first line also seems extremely misleading. A manually launched 
vacuum is not an autovacuum, so why is it complaining about an 
autovacuum? Shouldn't the PID doing the autovacuum emit the log message? 
This all seems a little sketchy.



--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email




Re: [GENERAL] How to evaluate if a query is correct?

2013-09-19 Thread David Johnston
Juan Daniel Santana Rodés wrote
> Hi my friends...
> I wrote in the last post a question similiar to this. But in this post I 
> clarify better the previous question.
> I need know how to evaluated if a query is correct without execute it. 
> When I say if a query is correct, is that if I run the query, it did not 
> throw an exception.
> For example...
> 
> create or replace function is_correct(query text) returns boolean as
> $body$
>   Declare
>   Begin
>-- Here I check if the query is correct, but I can't execute this 
> query, because it may make changes in the data base
>   End;
> $body$
> language 'plpgsql';

Some errors only manifest themselves if there is data (like a
division-by-zero error). Others depend on who is running the query. Maybe
the query uses DBLink/FDW and the target is unavailable/missing. These are
impossible to check for reliably; indeed, the exact same query can fail or
succeed depending on external state.

A savepoint/rollback option is possible though some queries can cause
irreversible actions to occur.

As mentioned before you can try something like:

EXECUTE 'EXPLAIN ' || query; RETURN true;  EXCEPTION ... RETURN false;
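Fleshed out as a minimal sketch (assuming a single plain statement as input; all the caveats above about external state still apply, and EXPLAIN only handles plannable statements, not DDL):

```sql
create or replace function is_correct(query text) returns boolean as
$body$
begin
  -- Planning without executing: surfaces syntax errors, missing
  -- relations/columns, etc., but runs nothing.
  execute 'EXPLAIN ' || query;
  return true;
exception when others then
  return false;
end;
$body$ language plpgsql;

-- SELECT is_correct('SELECT 1');            -- true
-- SELECT is_correct('SELECT * FROM nope');  -- false
```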

Try and make the concept work and post a specific question, with your
attempt, if you cannot.  Or wait and see if someone else gives you a working
answer.  Either way there are limitations to this and since you provide no
context as to how this function will be used it is hard to know whether
those limitations matter.  Most people here try to give more than just
"here's how you do it" answers since even if something technically works it
may not be doing what you think.

David J.







--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-evaluate-if-a-query-is-correct-tp5771568p5771607.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Something Weird Going on with VACUUM ANALYZE

2013-09-19 Thread Kevin Grittner
Shaun Thomas  wrote:

> The first line also seems extremely misleading. a manually
> launched vacuum is not an autovacuum, so why is it complaining
> about an autovacuum? Shouldn't the PID doing the autovacuum emit
> the log message? This all seems a little sketchy.

The table truncation in autovacuum had severe problems in some
conditions which were causing production down time.  The fix
accidentally introduced some messages which are confusing, and
caused the statistics from autovacuum to fail to be generated more
often than had previously been the case.  These problems should be
fixed in the next minor release.  Until then, if you have a
workload which triggers these issues, the workaround is to schedule
ANALYZE jobs.

Where things are now running faster, it's hard to say how much is
from missing the analyze step, how much is due to caching issues,
and how much is actual gain from the fixes for the original
problems.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Something Weird Going on with VACUUM ANALYZE

2013-09-19 Thread Shaun Thomas

On 09/19/2013 08:42 AM, Kevin Grittner wrote:


The fix accidentally introduced some messages which are confusing,
and caused the statistics from autovacuum to fail to be generated
more often than had previously been the case.  These problems should
be fixed in the next minor release.  Until then, if you have a
workload which triggers these issues, the workaround is to schedule
ANALYZE jobs.


Thanks Kevin, that explanation totally makes sense. I can switch to a 
separate VACUUM and ANALYZE step to avoid this until the fix comes through.
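A sketch of that split, for reference:

```sql
-- Run the phases as separate statements so a failed truncate scan in
-- VACUUM cannot silently take the ANALYZE step down with it.
VACUUM VERBOSE;
ANALYZE VERBOSE;
```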


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] reading csv logs with pgadmin queries

2013-09-19 Thread Adrian Klaver

On 09/19/2013 05:47 AM, Dave Cramer wrote:

pgadmin uses multi line queries and copy from 'cvslogs' cannot read them.

Does anyone have a work around or other solution ?


It worked here for me:

test=# INSERT INTO
big_int_test
VALUES
(6.7);

COPY postgres_log FROM 
'/usr/local/pgsql/data/pg_log/postgresql-2013-09-19_070152.csv' WITH csv;



-[ RECORD 7 ]--+---
log_time   | 2013-09-19 07:04:42.422-07
user_name  | aklaver
database_name  | test
process_id | 8281
connection_from| [local]
session_id | 523b0401.2059
session_line_num   | 4
command_tag| idle
session_start_time | 2013-09-19 07:02:41-07
virtual_transaction_id | 2/30
transaction_id | 0
error_severity | LOG
sql_state_code | 0
message| statement: INSERT INTO
   | big_int_test
   | VALUES
   | (6.7);
detail |
hint   |
internal_query |
internal_query_pos |
context|
query  |
query_pos  |
location   |
application_name   | psql



What are the errors you are seeing?



Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] How to evaluate if a query is correct?

2013-09-19 Thread Rob Sargentg

On 09/19/2013 12:13 AM, Juan Daniel Santana Rodés wrote:

Hi my friends...
I wrote a similar question in my last post, but in this post I clarify 
the previous question better.
I need to know how to evaluate whether a query is correct without 
executing it. When I say a query is correct, I mean that if I ran it, 
it would not throw an exception.

For example...

create or replace function is_correct(query text) returns boolean as
$body$
 Declare
 Begin
  -- Here I check if the query is correct, but I can't execute this 
query, because it may make changes in the data base

 End;
$body$
language 'plpgsql';

Greetings
__
"Todos el 12 de Septiembre con una Cinta Amarilla"
FIN A LA INJUSTICIA, LIBERENLOS YA!!
http://www.antiterroristas.cu http://justiciaparaloscinco.wordpress.com


Is this an academic exercise, or are you actually planning on checking 
SQL and then executing it if it's OK (according to your function)? I love 
plpgsql, but I don't think it's the best option here; just to get started, 
parsing the incoming SQL text would be one nasty regexp :)


rjs





Re: [GENERAL] How to evaluate if a query is correct?

2013-09-19 Thread David Johnston
lup wrote
> On 09/19/2013 12:13 AM, Juan Daniel Santana Rodés wrote:
>> Hi my friends...
>> I wrote in the last post a question similiar to this. But in this post 
>> I clarify better the previous question.
>> I need know how to evaluated if a query is correct without execute it. 
>> When I say if a query is correct, is that if I run the query, it did 
>> not throw an exception.
>> For example...
> 
> Is this an academic exercise or are you actually planning on checking 
> sql then executing it if it's ok (according to your function).  I love 
> plpgsql but I don't think it's the best option for, just to get started, 
> parsing the incoming sql text - that would be one nasty regexp :)

There isn't a need to parse the text; the internal parser will do that for
you - you just need to make sure that the executor never gets ahold of the
resultant execution plan.

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-evaluate-if-a-query-is-correct-tp5771568p5771624.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] How to evaluate if a query is correct?

2013-09-19 Thread Pavel Stehule
Hello

you can use a PREPARE statement.
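A sketch of that approach (hypothetical table name; PREPARE parses and analyzes the statement without executing it):

```sql
-- If PREPARE succeeds, the statement is syntactically valid and all
-- referenced objects exist; DEALLOCATE discards it without execution.
PREPARE chk AS SELECT * FROM some_table WHERE id = $1;
DEALLOCATE chk;
```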

Pavel


2013/9/19 Rob Sargentg 

> On 09/19/2013 12:13 AM, Juan Daniel Santana Rodés wrote:
>
>> Hi my friends...
>> I wrote in the last post a question similiar to this. But in this post I
>> clarify better the previous question.
>> I need know how to evaluated if a query is correct without execute it.
>> When I say if a query is correct, is that if I run the query, it did not
>> throw an exception.
>> For example...
>>
>> create or replace function is_correct(query text) returns boolean as
>> $body$
>>  Declare
>>  Begin
>>   -- Here I check if the query is correct, but I can't execute this
>> query, because it may make changes in the data base
>>  End;
>> $body$
>> language 'plpgsql';
>>
>> Greetings
>> __
>> "Todos el 12 de Septiembre con una Cinta Amarilla"
>> FIN A LA INJUSTICIA, LIBERENLOS YA!!
>> http://www.antiterroristas.cu
>> http://justiciaparaloscinco.wordpress.com
>>
>>
>>  Is this an academic exercise or are you actually planning on checking
> sql then executing it if it's ok (according to your function).  I love
> plpgsql but I don't think it's the best option for, just to get started,
> parsing the incoming sql text - that would be one nasty regexp :)
>
> rjs
>
>
>
>


[GENERAL] How to failover from Primary to Standby and Set the old Primary as a new Standby

2013-09-19 Thread ascot.m...@gmail.com
Hi,

I use PG 9.2.4 with streaming replication. What would be the manual 
procedure to fail over from the Primary to the Standby and then set the 
old Primary up as a new standby?


step 1:  standby's recovery.conf :
# Specifies a trigger file whose presence should cause streaming replication to 
end (i.e., failover).
trigger_file = '/var/lib/postgresql/main/trigger'

step 2:  To trigger a smart failover, create a trigger file containing the 
word smart, or just create it and leave it empty, e.g.:
vi  /var/lib/postgresql/main/trigger
smart

step 3: wait until the failover is completed; the server is brought up 
after applying all WAL files available in the archive.
How do I check that the failover is completed and the new Primary is ready?

step 4: once the failover is done,
do I need to edit the new primary's postgresql.conf and restart postgresql? 
e.g. comment out hot_standby = on

step 5: how do I quickly set the old primary up as a new standby?
SELECT pg_start_backup('replbackup');
tar cfP /home/postgres/pg_backup.tar $PG_DATA
SELECT pg_stop_backup();
send /home/postgres/pg_backup.tar to the old primary
unpack the tar file into $PG_DATA and delete postmaster.pid
create the recovery.conf
edit postgresql.conf to enable "hot_standby = on"
start postgresql
If $PG_DATA holds 600GB of data, is there a quicker way to set the old 
primary up as a new standby?


regards 



[GENERAL] Upgrading from postgreSQL 9.1.6 to 9.3

2013-09-19 Thread fburgess


Are there any showstoppers/recommendations/experiences with
upgrading from Postgres 9.1.6 / PostGIS 1.5.3 to Postgres 9.3 and the
associated PostGIS version XX on RHEL 6.4 that will help the process go
smoothly? Thanks





Re: [GENERAL] reading csv logs with pgadmin queries

2013-09-19 Thread Dave Cramer
The errors are extra lines after  with a query like this

2013-09-12 10:33:19.145
WST,"user","dbname",14581,"192.168.1.22:58840",523126d3.38f5,1,"SELECT",2013-09-12
10:28:35 WST,6/503023,0,ERROR,42P01,"relation ""dds_stores"" does not
exist",,"WITH RECURSIVE recursive_stores(id, name) AS (
SELECT id, name FROM customer_store WHERE id IN (1, 280, 864, 2376)
UNION ALL
SELECT ss.id, ss.name FROM recursive_stores sss, customer_store
ss WHERE sss.id = ss.parent_store_id
) SELECT
to_char(i.timestamp, 'Mon-YY') AS ""Month Name"",
s.name AS ""Target"",
COUNT(DISTINCT i.id) / (SELECT COUNT(dds_stores.id) FROM
dds_stores) AS ""Ticket Count""
FROM
customer_store s
INNER JOIN printdata_workstation w ON s.id = w.store_id AND s.id
IN (SELECT recursive_stores.id FROM recursive_stores)
INNER JOIN printdata_report r ON w.id = r.workstation_id AND
r.package_id IS NOT NULL
INNER JOIN printdata_page p ON r.id = p.report_id
INNER JOIN printdata_item i ON p.id = i.page_id
WHERE
r.timestamp >= '2012-09-01' AND r.timestamp <= '2013-08-31'
GROUP BY
""Month Name"",
""Target""",367,,"pgAdmin III - Query Tool"


Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On Thu, Sep 19, 2013 at 10:14 AM, Adrian Klaver wrote:

> On 09/19/2013 05:47 AM, Dave Cramer wrote:
>
>> pgadmin uses multi line queries and copy from 'cvslogs' cannot read them.
>>
>> Does anyone have a work around or other solution ?
>>
>
> It worked here for me:
>
> test=# INSERT INTO
> big_int_test
> VALUES
> (6.7);
>
> COPY postgres_log FROM
> '/usr/local/pgsql/data/pg_log/postgresql-2013-09-19_070152.csv' WITH csv;
>
>
> -[ RECORD 7 ]--+---
> log_time   | 2013-09-19 07:04:42.422-07
> user_name  | aklaver
> database_name  | test
> process_id | 8281
> connection_from| [local]
> session_id | 523b0401.2059
> session_line_num   | 4
> command_tag| idle
> session_start_time | 2013-09-19 07:02:41-07
> virtual_transaction_id | 2/30
> transaction_id | 0
> error_severity | LOG
> sql_state_code | 0
> message| statement: INSERT INTO
>| big_int_test
>| VALUES
>| (6.7);
> detail |
> hint   |
> internal_query |
> internal_query_pos |
> context|
> query  |
> query_pos  |
> location   |
> application_name   | psql
>
>
>
> What are the errors you are seeing?
>
>
>
>> Dave Cramer
>>
>> dave.cramer(at)credativ(dot)ca
>> http://www.credativ.ca
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Unary Operators

2013-09-19 Thread Andreas Ulbrich

On 18.09.2013 02:17, David Johnston wrote:

Andreas Ulbrich wrote

create operator ^- (leftarg = float, procedure = reciproce);
works too, but
create operator / (leftarg = float, procedure = reciproce);
not.

Do you mean the "^" operator or the "^-" operator?  Rowan claims that "^"
does not in fact work here...


--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Unary-Operators-tp5770983p5771360.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


I'd like to use the /-operator as the inverse element of an element with 
respect to multiplication, analogous to the '-'-operator for the inverse 
with respect to addition.
As a compromise, because a left unary / does not work, I took '^-' for 
"to the power of minus (one)".


It sounds logical that native operators (4.1.6, Table 4-2) cannot be 
redefined with a different number of arguments (% doesn't work as a left 
unary either...). But then I miss some documentation hint about which 
operators are native.


Thanks

Andreas




Re: [GENERAL] Upgrading from postgreSQL 9.1.6 to 9.3

2013-09-19 Thread Paul Ramsey
Recommendation: do one at a time. First postgis 1.5 to 2.1, then pg 9.1 to 9.3. 
P.

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net


On Thursday, September 19, 2013 at 4:34 PM, fburg...@radiantblue.com wrote:

> 
> 
> Are there any showstoppers/recommendations/experiences with upgrading from 
> Postgres 9.1.6 Postgis 1.5.3 to PostGres 9.3 and associated PostGIS version 
> XX on rhel 6.4 that will help the process go smoothly.
> 
> 
> 
> thanks 






Re: [GENERAL] How to failover from Primary to Standby and Set the old Primary as a new Standby

2013-09-19 Thread Vick Khera
On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com  wrote:

> I use PG 9.2.4 with streaming replication.  What will be the manual
> procedure to failover from Primary to Standby and Set the old Primary as a
> new standby?
>

From what I understand, you start over by setting up the old primary as a
new standby from scratch.


Re: [GENERAL] Upgrading from postgreSQL 9.1.6 to 9.3

2013-09-19 Thread fburgess
1.) We have to upgrade four 9.1 database instances, so is using pg_upgrade still the way to go? Our prior upgrade methodology, when we moved from pg 8.4.3 to 9.1.6, was to use the hard-links install option. We also have our data spread across storage mediums: fiber, NAS. Do these things factor in?

2.) We also have to stand up a fresh install of pg 9.3 and restore the data from a 9.1.6 dump. Do you recommend pg_dump or pg_dumpall? And since we have our data spread across storage locations on fiber and NAS, will we have to replicate the file-system structure on the target OS to be identical to that of the source 9.1.6 dump file system?

3.) In general, if the backup takes approximately 5-7 days, will the restore to the fresh install environment take as long, or longer? Trying to gauge downtime requirements.

thanks
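For question 1, the link-mode invocation looks roughly like this (a sketch only; the bindir/datadir paths below are placeholders for your actual layout):

```
pg_upgrade -k \
    -b /usr/pgsql-9.1/bin  -B /usr/pgsql-9.3/bin \
    -d /var/lib/pgsql/9.1/data  -D /var/lib/pgsql/9.3/data
```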


 Original Message 
Subject: Re: [GENERAL] Upgrading from postgreSQL 9.1.6 to 9.3
From: Paul Ramsey 
Date: Thu, September 19, 2013 10:51 am
To: fburg...@radiantblue.com
Cc: pgsql-general@postgresql.org

Recommendation: do one at a time. First postgis 1.5 to 2.1, then pg 9.1 to 9.3. 
P.

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net


On Thursday, September 19, 2013 at 4:34 PM, fburg...@radiantblue.com wrote:

> 
> 
> Are there any showstoppers/recommendations/experiences with upgrading from Postgres 9.1.6 Postgis 1.5.3 to PostGres 9.3 and associated PostGIS version XX on rhel 6.4 that will help the process go smoothly.
> 
> 
> 
> thanks 








Re: [GENERAL] How to failover from Primary to Standby and Set the old Primary as a new Standby

2013-09-19 Thread John R Pierce

On 9/19/2013 1:29 PM, Vick Khera wrote:


On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com 
 > wrote:


I use PG 9.2.4 with streaming replication.  What will be the
manual procedure to failover from Primary to Standby and Set the
old Primary as a new standby?


From what I understand, you start over by setting up the old primary 
as a new standby from scratch.


if you use rsync for the base backup of new master to old, it should go 
fairly quickly as relatively few files should have changed assuming not 
much time has elapsed.
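A rough sketch of that resync (hypothetical host and paths; bracket it with pg_start_backup()/pg_stop_backup() on the new master, and create a recovery.conf on the old primary before starting it as a standby):

```
# on the stopped old primary: pull only changed files from the new master
rsync -av --delete \
    --exclude postmaster.pid \
    newmaster:/var/lib/postgresql/main/ /var/lib/postgresql/main/
```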




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] How to failover from Primary to Standby and Set the old Primary as a new Standby

2013-09-19 Thread Michael Nolan
Assuming the database hasn't changed much since the failover, doing an
rsync from the new primary back to the old primary should be fairly
quick.
--
Mike Nolan

On 9/19/13, Vick Khera  wrote:
> On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com
> > wrote:
>
>> I use PG 9.2.4 with streaming replication.  What will be the manual
>> procedure to failover from Primary to Standby and Set the old Primary as
>> a
>> new standby?
>>
>
> From what I understand, you start over by setting up the old primary as a
> new standby from scratch.
>




Re: [GENERAL] How to failover from Primary to Standby and Set the old Primary as a new Standby

2013-09-19 Thread Michael Nolan
On 9/19/13, John R Pierce  wrote:
> On 9/19/2013 1:29 PM, Vick Khera wrote:
>>
>> On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com
>>  > > wrote:
>>
>> I use PG 9.2.4 with streaming replication.  What will be the
>> manual procedure to failover from Primary to Standby and Set the
>> old Primary as a new standby?
>>
>>
>> From what I understand, you start over by setting up the old primary
>> as a new standby from scratch.
>
> if you use rsync for the base backup of new master to old, it should go
> fairly quickly as relatively few files should have changed assuming not
> much time has elapsed.

Of course, before you do anything, you should spend some time figuring
out WHY the old master failed.  There could be issues that need to be
resolved before putting it back online, and fixing them could affect
how much work you have to do to get the physical files back in sync.
--
Mike Nolan




Re: [GENERAL] reading csv logs with pgadmin queries

2013-09-19 Thread Adrian Klaver

On 09/19/2013 08:26 AM, Dave Cramer wrote:

The errors are extra lines after  with a query like this

2013-09-12 10:33:19.145
WST,"user","dbname",14581,"192.168.1.22:58840",523126d3.38f5,1,"SELECT",2013-09-12
10:28:35 WST,6/503023,0,ERROR,42P01,"relation ""dds_stores"" does not
exist",,"WITH RECURSIVE recursive_stores(id, name) AS (


I'm going to assume you mean after the ,, and are referring to the 
WITH query.


Several questions:

1) Are you using the example table from the docs?:

http://www.postgresql.org/docs/9.3/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG

If not what is the table definition?


2) What is the actual error message you get when running COPY?

--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] reading cvs logs with pgadmin queries

2013-09-19 Thread Dave Cramer
Adrian,

1) yes I am using the example table

CREATE TABLE postgres_log
(
  log_time timestamp(3) with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text,
  PRIMARY KEY (session_id, session_line_num)
);
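
(For reference, the example in those docs loads the file with a COPY like the following; the file path here is hypothetical, and the file must be readable by the server process:)

```sql
-- Import a CSV-format log file into the postgres_log table.
COPY postgres_log FROM '/var/log/postgresql/postgresql-2013-09-12.csv' WITH csv;
```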


2) The exact error message is  ERROR:  extra data after last expected column


Considering ~39000 lines go in before this line, I am fairly certain it
is the line.


Dave


Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca


On Thu, Sep 19, 2013 at 8:25 PM, Adrian Klaver wrote:

> On 09/19/2013 08:26 AM, Dave Cramer wrote:
>
>> The errors are extra lines after  with a query like this
>>
>> 2013-09-12 10:33:19.145 WST,"user","dbname",14581,"192.168.1.22:58840",523126d3.38f5,1,"SELECT",2013-09-12
>> 10:28:35 WST,6/503023,0,ERROR,42P01,"relation ""dds_stores"" does not
>> exist",,"WITH RECURSIVE recursive_stores(id, name) AS (
>>
>
> I'm going to assume you mean after the ,, and are referring to the WITH
> query.
>
> Several questions:
>
> 1) Are you using the example table from the docs?:
>
> http://www.postgresql.org/docs/9.3/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG
>
> If not what is the table definition?
>
>
> 2) What is the actual error message you get when running COPY?
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Why does this array query fail?

2013-09-19 Thread Ken Tanzer
>
> Is this a quarterly report because that is how long it takes to run?


It takes about 7 seconds to run.  I suppose if I optimized it I could save
a minute every couple of years.

I usually get concerned about performance issues when they're actually
causing problems.  I'm generally more concerned about how long it takes to
write queries, and how cumbersome the SQL involved is.  And since arrays
are relatively new to me, I've been trying to understand generally the best
ways to query information out of them, or when their behavior just doesn't
make sense to me.  I'll say I answer 99.99% of my own questions before they
ever make it to the list, and by the time they do I invariably have read
the documentation as best as I can.  This has been my favorite list ever to
read, as people are invariably helpful, patient and polite to each other.

> I would suggest considering how to use functions to encapsulate some
> of the "medical code collecting" logic.  And consider WITH/CTE
> constructs as well, like I used in my last message, to effectively
> create temporary named tables for different parts of the query.
>


> Might want to move the whole thing into function and pass in the various
> parameters - namely the date range - instead of hard-coding the values
> into the view.


Thanks for these constructive suggestions.  I see benefits both ways.  And
the dates are actually parsed in by an app at run-time.  (I stripped that
part out to avoid confusion--I find it hard to know when submitting a list
item how much to just dump a full real example, and how much to simplify
down to a test case that illustrates the specific issue.)
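
As a sketch of combining both of those suggestions (the table and column names below are hypothetical, not from the real report): a SQL function that takes the date range as parameters, with a CTE naming the intermediate step:

```sql
-- Hypothetical sketch: parameterize the date range in a function
-- instead of hard-coding it into the view.
CREATE FUNCTION medical_code_report(p_start date, p_end date)
RETURNS TABLE (client_id int, codes text[]) AS $$
    WITH collected AS (
        -- aggregate each client's codes for the requested period
        SELECT m.client_id AS cid, array_agg(m.code) AS agg_codes
        FROM medical_codes m
        WHERE m.assigned_date BETWEEN p_start AND p_end
        GROUP BY m.client_id
    )
    SELECT c.cid, c.agg_codes FROM collected c;
$$ LANGUAGE sql STABLE;

-- Example call for one quarter:
-- SELECT * FROM medical_code_report('2013-07-01', '2013-09-30');
```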

> On a smaller scale I've written queries like this.  I enrolled in a
> university database design course shortly thereafter...


> until someone more knowledgeable (like a future you probably) comes back and


I'm not sure what the point of either of these comments was, but perhaps
they made you feel better.  Either way, thanks for taking the time to look
my stuff over and for the other comments and explanations you made.

Ken


Re: [GENERAL] Why does this array query fail?

2013-09-19 Thread David Johnston
Ken Tanzer wrote
>>
>> Is this a quarterly report because that is how long it takes to run?
> 
> 
> It takes about 7 seconds to run.  I suppose if I optimized it I could save
> a minute every couple of years.

It was meant to be tongue-in-cheek... though I was curious about the real
answer. Agreed on optimizing only when necessary.


> On a smaller scale I've written queries like this.  I enrolled in a
>> university database design course shortly thereafter...
> 
> until someone more knowledgeable (like a future you probably) comes back
> and
> 
> I'm not sure what the point of either of these comments were, but perhaps
> they made you feel better.  Either way, thanks for taking the time to look
> my stuff over and for the other comments and explanations you made.
> 
> Ken

Mostly that SQL, and regular expressions (which I use a lot too), tend to
result in stuff that makes perfect sense as you build it up from scratch,
but if you have to come back and modify it later it's very difficult to
make changes, as opposed to building up a new query with the new needs and
experience.

David J.









Re: [GENERAL] Partitioning V schema

2013-09-19 Thread Julian
Hi Dave,
How many rows of data are we talking here and how much information? (GiB)

Are you able to provide the table definition? (can normalisation
partition off some of this data?).

Have you considered dedicated options for lookup data, tuning the database
appropriately, and keeping that single table?

With postgres we have schemas, so it can provide some convenience and
design options.

So look at it in terms of how your query will look.

SELECT schema.table.column FROM schema.table;

vs

SELECT schema_table.column FROM schema_table;

Not much in it?

However, I tend to go with partitions when they need to be generated on
demand dynamically and automatically (which probably isn't the case
here). SCHEMAs have other uses: they provide a level of security (GRANT)
and are useful in design when partitioning off blocks of related datasets
completely.
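
For illustration, a minimal sketch of the on-demand partition approach using inheritance, the partitioning mechanism available in PostgreSQL 9.x (table and city names are hypothetical):

```sql
-- Parent table holds no rows itself; one child table per city.
CREATE TABLE houses (
    id      serial,
    city    text NOT NULL,
    address text
);

-- Each child carries a CHECK constraint so that, with
-- constraint_exclusion = partition, queries filtered on city
-- only scan the matching child table.
CREATE TABLE houses_london (
    CHECK (city = 'London')
) INHERITS (houses);

-- Queries against the parent transparently include all children:
-- SELECT count(*) FROM houses WHERE city = 'London';
```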

Regards,
Julian


On 19/09/13 17:02, Dave Potts wrote:
> Hi List
>
> I am looking for some general advice about the best way of splitting
> a large data table. I have 2 different choices: partitioning or
> different schemas.
>
> The data table refers to the number of houses that can be included in a
> city; as such there is a large number of records.
>
>
> I am wondering, if I decided to partition the table, whether the update
> speed/access might be faster than just declaring a different schema
> per city.
>
> Under the partition the data table would appear to be smaller, so I
> should get an increase in speed, but the database still has to do
> some sort of indexing.
>
> If I used different schemas, it resolves data protection issues, but
> doing a backup might become a nightmare.
>
> In general, which is the faster access method?
>
> regards
>
>
> Dave.





Re: [GENERAL] reading cvs logs with pgadmin queries

2013-09-19 Thread Adrian Klaver

On 09/19/2013 06:04 PM, Dave Cramer wrote:

Adrian,

1) yes I am using the example table





2) The exact error message is  ERROR:  extra data after last expected column


Considering ~39000 lines go in before this line, I am fairly certain it is the
line.


I would tend to agree, I just can't see why. The number of values 
matches the number of fields. It's the end of the day; I will take 
another look in the morning with fresh eyes.






Dave



--
Adrian Klaver
adrian.kla...@gmail.com

