Re: [GENERAL] Duplicated IDs

2014-08-12 Thread Alexis Bernard
Hi,

Thanks for your help Alban !

Alexis.


2014-08-09 18:46 GMT+02:00 Alban Hertroys :

> On 09 Aug 2014, at 13:24, Alexis Bernard  wrote:
>
> > Primary key definition: "tracks_pkey" PRIMARY KEY, btree (id)
> > select version(): PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu, compiled
> by gcc (Ubuntu 4.8.2-16ubuntu6) 4.8.2, 64-bit
> > Ubuntu trusty with kernel 3.13.0-29-generic.
> >
> > I have an autovacuum process running and configuration is by default: ps
> aux | grep vacuum | grep -v grep
> > postgres   587  0.0  0.0 1174304 3720 ?Ss   juil.30   0:05
> postgres: autovacuum launcher process
> >
> > I tried to vacuum manually.
>
> What’s the output of: VACUUM VERBOSE tracks;
> Anything that looks abnormal?
>
>
INFO:  vacuuming "public.tracks"
INFO:  scanned index "tracks_pkey" to remove 25777 row versions
DETAIL:  CPU 0.03s/0.12u sec elapsed 0.87 sec.
INFO:  scanned index "index_tracks_on_permalink" to remove 25777 row
versions
DETAIL:  CPU 0.07s/0.40u sec elapsed 5.30 sec.
INFO:  "tracks": removed 25777 row versions in 5374 pages
DETAIL:  CPU 0.00s/0.14u sec elapsed 0.66 sec.
INFO:  index "tracks_pkey" now contains 1657345 row versions in 4594 pages
DETAIL:  17739 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "index_tracks_on_permalink" now contains 1657345 row versions
in 6143 pages
DETAIL:  25715 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "tracks": found 17560 removable, 1657349 nonremovable row versions
in 35586 out of 35586 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 64749 unused item pointers.
0 pages are entirely empty.
CPU 0.14s/0.86u sec elapsed 7.30 sec.
INFO:  vacuuming "pg_toast.pg_toast_17228"
INFO:  index "pg_toast_17228_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "pg_toast_17228": found 0 removable, 0 nonremovable row versions in
0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM


> What do below queries return? They’re from the manual at:
> http://www.postgresql.org/docs/9.3/static/routine-vacuuming.html
>
> SELECT c.oid::regclass as table_name,
>greatest(age(c.relfrozenxid),age(t.relfrozenxid)) as age
> FROM pg_class c
> LEFT JOIN pg_class t ON c.reltoastrelid = t.oid
> WHERE c.relkind IN ('r', 'm');
>
>
 table_name |   age
+--
 pg_statistic   | 37049759
 pg_type| 37049759
 artists| 37042311
 pg_authid  | 37049759
 pg_attribute   | 37049759
 pg_proc| 37049759
 pg_class   | 37049759
 pg_user_mapping| 37049759
 pg_constraint  | 37049759
 pg_inherits| 37049759
 pg_index   | 37049759
 pg_operator| 37049759
 pg_opfamily| 37049759
 pg_opclass | 37049759
 pg_am  | 37049759
 pg_amop| 37049759
 pg_amproc  | 37049759
 pg_language| 37049759
 pg_largeobject_metadata| 37049759
 pg_database| 37049759
 pg_rewrite | 37049759
 pg_trigger | 37049759
 pg_event_trigger   | 37049759
 pg_description | 37049759
 pg_cast| 37049759
 pg_enum| 37049759
 pg_namespace   | 37049759
 pg_conversion  | 37049759
 pg_depend  | 37049759
 explorers  | 36847481
 pg_db_role_setting | 37049759
 pg_tablespace  | 37049759
 pg_pltemplate  | 37049759
 pg_auth_members| 37049759
 pg_shdepend| 37049759
 pg_shdescription   | 37049759
 pg_ts_config   | 37049759
 pg_ts_config_map   | 37049759
 pg_ts_dict | 37049759
 pg_ts_parser   | 37049759
 pg_ts_template   

Re: [GENERAL] Duplicated IDs

2014-08-12 Thread Alexis Bernard
The vacuum full tracks just finished and I still have the duplicated IDs.
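A sketch of how one might list every duplicated id straight from the heap, keeping the planner away from the suspect primary key index (the enable_* settings are only diagnostic and are undone by the ROLLBACK):

BEGIN;
SET LOCAL enable_indexscan = off;
SET LOCAL enable_indexonlyscan = off;
SET LOCAL enable_bitmapscan = off;

-- Every id that appears more than once, with the physical row locations.
SELECT id, count(*) AS copies, array_agg(ctid::text) AS physical_rows
FROM tracks
GROUP BY id
HAVING count(*) > 1;

ROLLBACK;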


2014-08-12 9:17 GMT+02:00 Alexis Bernard :

> Hi,
>
> Thanks for your help Alban !
>
> Alexis.
>
>
> 2014-08-09 18:46 GMT+02:00 Alban Hertroys :
>
> On 09 Aug 2014, at 13:24, Alexis Bernard  wrote:
>>
>> > Primary key definition: "tracks_pkey" PRIMARY KEY, btree (id)
>> > select version(): PostgreSQL 9.3.4 on x86_64-unknown-linux-gnu,
>> compiled by gcc (Ubuntu 4.8.2-16ubuntu6) 4.8.2, 64-bit
>> > Ubuntu trusty with kernel 3.13.0-29-generic.
>> >
>> > I have an autovacuum process running and configuration is by default: ps
>> aux | grep vacuum | grep -v grep
>> > postgres   587  0.0  0.0 1174304 3720 ?Ss   juil.30   0:05
>> postgres: autovacuum launcher process
>> >
>> > I tried to vacuum manually.
>>
>> What’s the output of: VACUUM VERBOSE tracks;
>> Anything that looks abnormal?
>>
>>
> INFO:  vacuuming "public.tracks"
> INFO:  scanned index "tracks_pkey" to remove 25777 row versions
> DETAIL:  CPU 0.03s/0.12u sec elapsed 0.87 sec.
> INFO:  scanned index "index_tracks_on_permalink" to remove 25777 row
> versions
> DETAIL:  CPU 0.07s/0.40u sec elapsed 5.30 sec.
> INFO:  "tracks": removed 25777 row versions in 5374 pages
> DETAIL:  CPU 0.00s/0.14u sec elapsed 0.66 sec.
> INFO:  index "tracks_pkey" now contains 1657345 row versions in 4594 pages
> DETAIL:  17739 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  index "index_tracks_on_permalink" now contains 1657345 row versions
> in 6143 pages
> DETAIL:  25715 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "tracks": found 17560 removable, 1657349 nonremovable row versions
> in 35586 out of 35586 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 64749 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.14s/0.86u sec elapsed 7.30 sec.
> INFO:  vacuuming "pg_toast.pg_toast_17228"
> INFO:  index "pg_toast_17228_index" now contains 0 row versions in 1 pages
> DETAIL:  0 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "pg_toast_17228": found 0 removable, 0 nonremovable row versions in
> 0 out of 0 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 0 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> VACUUM
>
>
>> What do below queries return? They’re from the manual at:
>> http://www.postgresql.org/docs/9.3/static/routine-vacuuming.html
>>
>> SELECT c.oid::regclass as table_name,
>>greatest(age(c.relfrozenxid),age(t.relfrozenxid)) as age
>> FROM pg_class c
>> LEFT JOIN pg_class t ON c.reltoastrelid = t.oid
>> WHERE c.relkind IN ('r', 'm');
>>
>>
>  table_name |   age
> +--
>  pg_statistic   | 37049759
>  pg_type| 37049759
>  artists| 37042311
>  pg_authid  | 37049759
>  pg_attribute   | 37049759
>  pg_proc| 37049759
>  pg_class   | 37049759
>  pg_user_mapping| 37049759
>  pg_constraint  | 37049759
>  pg_inherits| 37049759
>  pg_index   | 37049759
>  pg_operator| 37049759
>  pg_opfamily| 37049759
>  pg_opclass | 37049759
>  pg_am  | 37049759
>  pg_amop| 37049759
>  pg_amproc  | 37049759
>  pg_language| 37049759
>  pg_largeobject_metadata| 37049759
>  pg_database| 37049759
>  pg_rewrite | 37049759
>  pg_trigger | 37049759
>  pg_event_trigger   | 37049759
>  pg_description | 37049759
>  pg_cast| 37049759
>  pg_enum| 37049759
>  pg_namespace   | 37049759
>  pg_conversion  | 37049759
>  pg_depend  | 37049759
>  explorers  | 36847481
>  pg_db_role_setting | 37049759
>  pg_tablespace  | 37049759
>  pg_pltemplate  | 37049759
>  pg_auth_members| 37049759
>  pg_shdepend| 

Re: [GENERAL] Upgrading 9.0 to 9.3 - permission denied to pg_upgrade_utility.log

2014-08-12 Thread Russell Keane


-Original Message-
From: Bruce Momjian [mailto:br...@momjian.us] 
Sent: 07 August 2014 17:43
To: Russell Keane
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Upgrading 9.0 to 9.3 - permission denied to 
pg_upgrade_utility.log

On Thu, Aug  7, 2014 at 09:42:13AM +0100, Russell Keane wrote:
> Hi,
> 
>  
> 
> We’re trying to upgrade a 9.0 database to 9.3 using pg_upgrade. The 
> upgrade must be automated so a manual pg_dump / pg_restore is not an option.
> 
> We use the following command:
> 
> D:\PostgreSQL\9.3\Data>"c:\Program Files (x86)\PostgreSQL\9.3\bin\ 
> pg_upgrade.exe" -d "D:\PostgreSQL\Data" -D "D:\PostgreSQL\9.3\Data" -b 
> "C:\ Program Files (x86)\PostgreSQL\9.0\bin" -B "C:\Program Files 
> (x86)\PostgreSQL\ 9.3\bin" -u postgres -c -P 5432
> 
>  
> 
> This is on a Windows 7 64 bit environment.
> 
> We’ve given the postgres windows user full permissions to all folders 
> involved and are running the upgrade in a cmd prompt opened with runas  
> /user:postgres
> 
>  
> 
> The upgrade works all the way up until the final hurdle where, in the 
> pg_upgrade_utility.log the following statement appears:
> 
> command: ""C:\Program Files (x86)\PostgreSQL\9.3\bin/initdb" 
> --sync-only "D:\ PostgreSQL\9.3\Data" >> "pg_upgrade_utility.log" 2>&1"
> 
> syncing data to disk ... initdb: could not open file 
> "D:/PostgreSQL/9.3/Data/
> pg_upgrade_utility.log": Permission denied

Uh, it would appear you are running pg_upgrade from _inside_ the 9.3 data 
directory.  That should work, but it would probably be better to run it in 
another directory where you also have write permission.  I think the problem is 
that initdb --sync-only is syncing those files to disk as you are writing to 
the log file.

We have had Windows problems with two processes writing to the same file, but
that is usually a different error message, e.g.

 * For some reason, Windows issues a file-in-use error if we write data to
 * the log file from a non-primary thread just before we create a
 * subprocess that also writes to the same log file.  One fix is to sleep
 * for 100ms.  A cleaner fix is to write to the log file _after_ the
 * subprocess has completed, so we do this only when writing from a
 * non-primary thread.  fflush(), running system() twice, and pre-creating
 * the file do not seem to help.

I think that returns a "share violation" error.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


Excellent! That's fixed it for us.

Thanks Bruce,

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_advisory_lock problem

2014-08-12 Thread Rémi Cura
2014-08-11 22:48 GMT+02:00 Kevin Grittner :

> Rémi Cura  wrote:
>
> > as you (both?) suggested it works using advisory lock used at the
> > beginning and end of the transaction. This way there is no upsert
> > at all if the element is locked? (I used general advisory lock but
> > in the same way as transactional lock)
>
> This is too vague to comment on.
>
> > The issue in this case is simple : I have to use about 100k
> > advisory locks, which is a big memory requirement for my hardware
> > :-(
>
> ... and that doesn't seem to make any sense.  Either you are not
> understanding advisory locks or you are doing something very, very
> unusual.
>
> > Merlin I'm afraid I don't understand what is "vanilla" LOCK
> > TABLE.
>
> See the LOCK TABLE command.
>
> http://www.postgresql.org/docs/current/interactive/sql-lock.html
>
> http://www.postgresql.org/docs/current/interactive/explicit-locking.html
>
> > I can't really use a lock table because each query upsert
> > sequentially into 3 tables, doing lots of computing between.
>
> Now *that* I understand.  :-)  It's not an unusual requirement,
> but can be a challenge when using snapshot isolation (where writes
> don't block reads and reads don't block anything).  There are two
> main approaches -- introduce blocking to serialize some of the
> operations, or use the SERIALIZABLE transaction isolation level to
> ensure that the behavior of all concurrent transactions is
> consistent with the behavior you would see if they were run one at
> a time.  The latter approach doesn't introduce any new blocking,
> but it can cause transactions to get an ERROR with a SQLSTATE of
> 40001 at just about any point, so you need to be prepared to
> recognize that and retry those transactions from the beginning (not
> just the last statement of the transaction), ignoring any data read
> during the failed attempt.
>
> You may want to read the entire chapter on concurrency control:
>
> http://www.postgresql.org/docs/current/interactive/mvcc.html
>
> If you are considering using SERIALIZABLE transactions, you should
> probably review the examples in the Wiki, to get an idea of how it
> behaves in various cases:
>
> http://wiki.postgresql.org/wiki/SSI
>
> > I use parallel query to compute faster (load dividing). I guess
> > it would be very slow with about 8 parallel queries with locks.
>
> Well, if you introduce blocking you reduce your parallelism, but if
> you use serializable transactions and there are actually a lot of
> conflicts you can see poor performance because of the errors
> rolling back transactions and the need to retry them from the
> start.  The techniques used to implement serializable transactions
> in PostgreSQL are basically a refinement of the Optimistic
> Concurrency Control (OCC) techniques, but generally with far fewer
> retries needed -- the point being that it optimistically assumes
> that there will not be a conflict so that concurrency is better,
> but has to cancel things if that optimism proves to be unfounded.
>
> To make related to changes to multiple tables and maintain coherent
> data, you probably will need to do one or the other.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Hey, thanks for your detailed answer.

The particularity here is that I use a big query with CTEs instead of a more
procedural approach.
I do sophisticated geometric computing using PostGIS. I guess it is a hack
of both Postgres and PostGIS.

Let me explain better the pg_advisory lock uses I have tried.

*First classic use of pg_advisory, not working:*
CTE_1 (find what rows will be upserted in table_1)
CTE_2 (find what rows will be upserted in table_2)
CTE_3 (find what rows will be upserted in table_3)
CTE_4 (compute the result to be upserted into table_1)
CTE_5 (upsert into table_1 using custom upsert_function)
CTE_6 (compute the result to be upserted into table_2)
CTE_7 (upsert into table_2 using custom upsert_function)
CTE_8 (compute the result to be upserted into table_2)
CTE_9 (upsert into table_2 using custom upsert_function)
CTE_10 (end of query)
each of the upserting functions is plpgsql and does something like
pg_advisory_lock(table_number, id of row to be upserted)
with updating AS (update table)
insert into table if not updated
pg_advisory_unlock(table_number,id of row to be upserted)

According to what the list said, it doesn't work because of visibility
issues: the locking works, so we know the processes will upsert the same
thing sequentially. However it fails because each process has no
visibility on the inserts done by the others. So when the second process
upserts the same thing, it will try to insert, and we get 2 inserts for the
same row.
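For reference, a sketch of a transaction-scoped variant that avoids exactly this visibility problem: the lock is taken with pg_advisory_xact_lock() and only released at commit, so by the time a second session obtains the lock the first session's insert is already committed and visible to the UPDATE. Table and function names below are illustrative, not the actual code from this thread.

-- table_1(id int primary key, val numeric) is a hypothetical target table.
CREATE OR REPLACE FUNCTION upsert_table_1(p_id int, p_val numeric)
RETURNS void AS
$$
BEGIN
    -- Held until the surrounding transaction commits or rolls back;
    -- no explicit unlock, unlike pg_advisory_lock()/pg_advisory_unlock().
    PERFORM pg_advisory_xact_lock(1, p_id);

    UPDATE table_1 SET val = p_val WHERE id = p_id;
    IF NOT FOUND THEN
        INSERT INTO table_1(id, val) VALUES (p_id, p_val);
    END IF;
END;
$$ LANGUAGE plpgsql;

Note that this relies on READ COMMITTED taking a fresh snapshot per statement; under REPEATABLE READ or SERIALIZABLE the visibility problem described above comes back and the retry-on-40001 approach is needed instead.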

*Second, non-classic use of pg_advisory, working but too taxing on resources*
CTE_1 (find what rows will be upserted in table_1*,
pg_try_advisory_lock(1,id1)*  )
CTE_2 (find what rows will be upserted in table_2,
*pg_try_advisory_lock(2,id2)*  )
CTE_3 (find what rows will be upserte

Re: [GENERAL] pg_advisory_lock problem

2014-08-12 Thread Merlin Moncure
On Tue, Aug 12, 2014 at 5:07 AM, Rémi Cura  wrote:
>
>
>
> 2014-08-11 22:48 GMT+02:00 Kevin Grittner :
>
>> Rémi Cura  wrote:
>>
>> > as you (both?) suggested it works using advisory lock used at the
>> > beginning and end of the transaction. This way there is no upsert
>> > at all if the element is locked? (I used general advisory lock but
>> > in the same way as transactional lock)
>>
>> This is too vague to comment on.
>>
>> > The issue in this case is simple : I have to use about 100k
>> > advisory locks, which is a big memory requirement for my hardware
>> > :-(
>>
>> ... and that doesn't seem to make any sense.  Either you are not
>> understanding advisory locks or you are doing something very, very
>> unusual.
>>
>> > Merlin I'm afraid I don't understand what is "vanilla" LOCK
>> > TABLE.
>>
>> See the LOCK TABLE command.
>>
>> http://www.postgresql.org/docs/current/interactive/sql-lock.html
>>
>> http://www.postgresql.org/docs/current/interactive/explicit-locking.html
>>
>> > I can't really use a lock table because each query upsert
>> > sequentially into 3 tables, doing lots of computing between.
>>
>> Now *that* I understand.  :-)  It's not an unusual requirement,
>> but can be a challenge when using snapshot isolation (where writes
>> don't block reads and reads don't block anything).  There are two
>> main approaches -- introduce blocking to serialize some of the
>> operations, or use the SERIALIZABLE transaction isolation level to
>> ensure that the behavior of all concurrent transactions is
>> consistent with the behavior you would see if they were run one at
>> a time.  The latter approach doesn't introduce any new blocking,
>> but it can cause transactions to get an ERROR with a SQLSTATE of
>> 40001 at just about any point, so you need to be prepared to
>> recognize that and retry those transactions from the beginning (not
>> just the last statement of the transaction), ignoring any data read
>> during the failed attempt.
>>
>> You may want to read the entire chapter on concurrency control:
>>
>> http://www.postgresql.org/docs/current/interactive/mvcc.html
>>
>> If you are considering using SERIALIZABLE transactions, you should
>> probably review the examples in the Wiki, to get an idea of how it
>> behaves in various cases:
>>
>> http://wiki.postgresql.org/wiki/SSI
>>
>> > I use parallel query to compute faster (load dividing). I guess
>> > it would be very slow with about 8 parallel queries with locks.
>>
>> Well, if you introduce blocking you reduce your parallelism, but if
>> you use serializable transactions and there are actually a lot of
>> conflicts you can see poor performance because of the errors
>> rolling back transactions and the need to retry them from the
>> start.  The techniques used to implement serializable transactions
>> in PostgreSQL are basically a refinement of the Optimistic
>> Concurrency Control (OCC) techniques, but generally with far fewer
>> retries needed -- the point being that it optimistically assumes
>> that there will not be a conflict so that concurrency is better,
>> but has to cancel things if that optimism proves to be unfounded.
>>
>> To make related to changes to multiple tables and maintain coherent
>> data, you probably will need to do one or the other.
>>
>> --
>> Kevin Grittner
>> EDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>
>
>
> Hey, thanks for your detailed answer.
>
> The particularity here is that I use a big query with CTEs instead of a more
> procedural approach.
> I do sophisticated geometric computing using PostGIS. I guess it is a hack
> of both Postgres and PostGIS.
>
> Let me explain better the pg_advisory lock uses I have tried.
>
> First classic use of pg_advisory, not working :
> CTE_1 (find what rows will be upserted in table_1)
> CTE_2 (find what rows will be upserted in table_2)
> CTE_3 (find what rows will be upserted in table_3)
> CTE_4 (compute the result to be upserted into table_1)
> CTE_5 (upsert into table_1 using custom upsert_function)
> CTE_6 (compute the result to be upserted into table_2)
> CTE_7 (upsert into table_2 using custom upsert_function)
> CTE_8 (compute the result to be upserted into table_2)
> CTE_9 (upsert into table_2 using custom upsert_function)
> CTE_10 (end of query)
> each of the upserting functions is plpgsql and does something like
> pg_advisory_lock(table_number, id of row to be upserted)
> with updating AS (update table)
> insert into table if not updated
> pg_advisory_unlock(table_number,id of row to be upserted)
>
> According to what the list said, it doesn't work because of visibility
> issues: the locking works, so we know the processes will upsert the same
> thing sequentially. However it fails because each process has no
> visibility on the inserts done by the others. So when the second process
> upserts the same thing, it will try to insert, and we get 2 inserts for the
> same row.
>
> Second, non-classic use of pg_advisory, working but too taxing on resources
> CTE_1 (f

[GENERAL] Inserting large binary data into lo type table

2014-08-12 Thread Jose Moreira
I guess this is an easy question for the gurus, but I'm afraid I cannot get
the answer!


I have this table:

aif_test=# \d sbox;
 Table "public.sbox"
 Column |  Type  | Modifiers
++---
 id | character varying(255) | not null
 data   | lo |
Indexes:
"sbox_pkey" PRIMARY KEY, btree (id)


And I am trying to insert a binary file, of which I show only the first lines:

Emitidas/10001.doc|\\x3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E0A3C41434B43464520786D6C6E733D22687474703A2F2F6366652E646769
2E6775622E75792220786D6C6E733A6E73323D22687474703A2F2F772E77332E6F72672F323030302F30392F786D6C64736967232220786D6C6E733A7873693D22687474703A2F2F772E77332E6F726
72F323030312F584D4C536368656D612D696E7374616E6365222076657273696F6E3D22312E3

When running a simple insert I get:

[postgres@dbfactura01dmz  /tmp]$ psql -d aif_test -U aif < 
subida_postgres_copys_sbox.sql

Password for user aif:
ERROR:  invalid input syntax for type oid: 
\x3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E0A3C41434B43464520786D6C6E733D22687474703A2F2F6366652E6.


sql script:
[postgres@dbfactura01dmz  /tmp]$ more subida_postgres_copys_sbox.sql
COPY sbox FROM '/tmp/TXT/SBOX.txt' WITH DELIMITER '|' NULL '';

I am not able to insert a simple binary text into a "lo" type column!
I read the docs and it should work, though. Any help please?











[GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Jimmy Thrasibule
Hello,

I'm looking into gathering information about geographical locations:

- Coordinates
- Place name
- Pictures
- etc.

Since a place can be anything, a specific place may  need any kind of data
type to be described.

I'm therefore looking into using the same method as the semantic Web and
trying to describe a place with triples.

1. The "triples" table will reference a subject, attribute and value.
2. The "attributes" table will have an attribute name, a type and mauve a
table name.
3. For each attribute type, a new table is created to store the values.
4. A "resources" table list all the available resources (location, picture,
...).

Using this, I can easily add a new resource and describe it by adding new
triples. Every item is identified using UUIDs as primary key.
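A minimal sketch of the layout described above, with illustrative names and UUID keys (one value table per attribute type):

CREATE TABLE resources (
    id uuid PRIMARY KEY
);

CREATE TABLE attributes (
    id          uuid PRIMARY KEY,
    name        text NOT NULL UNIQUE,
    value_type  text NOT NULL,   -- e.g. 'text', 'numeric', 'geometry'
    value_table text NOT NULL    -- which per-type table holds the values
);

CREATE TABLE triples (
    subject   uuid NOT NULL REFERENCES resources(id),
    attribute uuid NOT NULL REFERENCES attributes(id),
    value     uuid NOT NULL,     -- points into the per-type value table
    PRIMARY KEY (subject, attribute, value)
);

-- One value table per attribute type, e.g. plain text values:
CREATE TABLE values_text (
    id    uuid PRIMARY KEY,
    value text NOT NULL
);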

This seems too simple to be true, so I'm falling back to you in case you see
any pitfalls here.

The "triples" table will grow very fast in this setup and may become a
bottleneck. However, I don't see any other way to store information about
something as wide as places around the world.

Regards,
Jimmy


Re: [GENERAL] Inserting large binary data into lo type table

2014-08-12 Thread Adrian Klaver

On 08/12/2014 06:20 AM, Jose Moreira wrote:

I guess this is an easy question for the gurus, but I'm afraid I cannot get
the answer!

I have this table:

aif_test=# \d sbox;
  Table "public.sbox"
  Column |  Type  | Modifiers
++---
  id | character varying(255) | not null
  data   | lo |
Indexes:
 "sbox_pkey" PRIMARY KEY, btree (id)


And I am trying to insert a binary file, of which I show only the first lines:

Emitidas/10001.doc|\\x3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E0A3C41434B43464520786D6C6E733D22687474703A2F2F6366652E646769
2E6775622E75792220786D6C6E733A6E73323D22687474703A2F2F772E77332E6F72672F323030302F30392F786D6C64736967232220786D6C6E733A7873693D22687474703A2F2F772E77332E6F726
72F323030312F584D4C536368656D612D696E7374616E6365222076657273696F6E3D22312E3

When running a simple insert I get:

[postgres@dbfactura01dmz  /tmp]$ psql -d aif_test -U aif <
subida_postgres_copys_sbox.sql
Password for user aif:
ERROR:  invalid input syntax for type oid:
\x3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E0A3C41434B43464520786D6C6E733D22687474703A2F2F6366652E6.

sql script:
[postgres@dbfactura01dmz  /tmp]$ more subida_postgres_copys_sbox.sql
COPY sbox FROM '/tmp/TXT/SBOX.txt' WITH DELIMITER '|' NULL '';

I am not able to insert a simple binary text into a "lo" type column!
I read the docs and it should work, though. Any help please?


I think the type you are looking for is bytea:

http://www.postgresql.org/docs/9.0/interactive/datatype-binary.html

If you really want to use large objects(lo) see here:

http://www.postgresql.org/docs/9.0/interactive/largeobjects.html
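For what it's worth, a sketch of the bytea route with the existing data file: the \\x escape in a text-format COPY file is exactly how a hex bytea value is written, so the same file should load once the column type is changed (names as in the original post; untested sketch):

CREATE TABLE sbox (
    id   varchar(255) PRIMARY KEY,
    data bytea
);

COPY sbox FROM '/tmp/TXT/SBOX.txt' WITH DELIMITER '|' NULL '';

-- A large object ("lo") column instead stores an OID, so the input would have
-- to be an OID returned by lo_import(), not raw hex data.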













--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql referencing and creating types as record

2014-08-12 Thread Merlin Moncure
On Thu, Aug 7, 2014 at 11:50 PM, vpmm2007  wrote:
> create or replace package CUM_A_TYPES
> as
>  type LT_PAYMENT is record
> (BASICnumber,
>   DPnumber,
>   GRADE_PAYnumber
> );
> TYPE TYPE_CALC_TAX is record
>  (
>   FIN_ROLE_ID   number(8),
>   CALC_FOR_ROLE_CODE  number(4));

Looks like something like:

CREATE TYPE LT_PAYMENT AS
(
  BASIC numeric,
  DP numeric,
  GRADE_PAY numeric
);

CREATE TYPE TYPE_CALC_TAX AS
(
  FIN_ROLE_ID numeric(8),
  CALC_FOR_ROLE_CODE numeric(8)
);

CREATE OR REPLACE FUNCTION some_function() RETURNS LT_PAYMENT  AS
$$
...
$$ LANGUAGE plpgsql;

I'd be studying the pl/pgsql documentation and the data type
differences (for number, you'd want to use int, numeric, or float8
depending on circumstances).
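For illustration, a minimal body such a function could have (values are made up):

CREATE OR REPLACE FUNCTION some_function() RETURNS LT_PAYMENT AS
$$
DECLARE
    result LT_PAYMENT;
BEGIN
    result.BASIC     := 15600;   -- hypothetical values
    result.DP        := 0;
    result.GRADE_PAY := 5400;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- SELECT (some_function()).*;   -- expands the composite into columns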

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Chris Travers
Is there a reason why hstore or json is not an option?  That may work a lot
better than this approach.

Another approach I have had is a set of additional catalog tables and
dynamically adding/removing columns from an extended attributes table.
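A sketch of the hstore variant on 9.3 (json would also work, but hstore has GIN operator support out of the box there); table and column names are illustrative:

CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE places (
    id         uuid PRIMARY KEY,
    attributes hstore   -- free-form key/value attributes per place
);

CREATE INDEX places_attributes_idx ON places USING gin (attributes);

-- Example lookup: places that have a 'picture' attribute.
SELECT id FROM places WHERE attributes ? 'picture';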



On Tue, Aug 12, 2014 at 6:19 AM, Jimmy Thrasibule <
thrasibule.ji...@gmail.com> wrote:

> Hello,
>
> I'm looking into gathering information about geographical locations:
>
> - Coordinates
> - Place name
> - Pictures
> - etc.
>
> Since a place can be anything, a specific place may  need any kind of data
> type to be described.
>
> I'm therefore looking into using the same method as the semantic Web and
> trying to describe a place with triples.
>
> 1. The "triples" table will reference a subject, attribute and value.
> 2. The "attributes" table will have an attribute name, a type and mauve a
> table name.
> 3. For each attribute type, a new table is created to store the values.
> 4. A "resources" table list all the available resources (location,
> picture, ...).
>
> Using this, I can easily add a new resource and describe it by adding new
> triples. Every item is identified using UUIDs as primary key.
>
> This seems too simple to be true, so I'm falling back to you in case you see
> any pitfalls here.
>
> The "triples" table will grow very fast in this setup and may become a
> bottleneck. However, I don't see any other way to store information about
> something as wide as places around the world.
>
> Regards,
> Jimmy
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Jonas Xie

We are currently working on in-database semantic reasoning.
However, if you do not need any T-Box or A-Box reasoning, maybe JSON is 
a better choice in your case.


Am 12.08.2014 15:19, schrieb Jimmy Thrasibule:


Hello,

I'm looking into gathering information about geographical locations:

- Coordinates
- Place name
- Pictures
- etc.

Since a place can be anything, a specific place may need any kind of 
data type to be described.


I'm therefore looking into using the same method as the semantic Web 
and trying to describe a place with triples.


1. The "triples" table will reference a subject, attribute and value.
2. The "attributes" table will have an attribute name, a type and 
maybe a table name.

3. For each attribute type, a new table is created to store the values.
4. A "resources" table list all the available resources (location, 
picture, ...).


Using this, I can easily add a new resource and describe it by adding 
new triples. Every item is identified using UUIDs as primary key.


This seems too simple to be true, so I'm falling back to you in case you 
see any pitfalls here.


The "triples" table will grow very fast in this setup and may become a 
bottleneck. However, I don't see any other way to store information 
about something as wide as places around the world.


Regards,
Jimmy





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgcluu

2014-08-12 Thread Ramesh T
Hi,
  I don't have root permission for yum.
Secondly, when I try to install, it returns the following, the same as the previous
error/msg:

 tar xvzf Test-Harness-3.32.tar.gz
 cd Test-Harness-3.32

-bash-4.1$ *perl Makefile.PL*

Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
Makefile.PL line 8.
BEGIN failed--compilation aborted at Makefile.PL line 8.

-bash-4.1$ make
make: *** No targets specified and no makefile found.  Stop.
-bash-4.1$ make test
make: *** No rule to make target `test'.  Stop.
-bash-4.1$

Any help? Please let me know.



On Mon, Aug 11, 2014 at 9:07 PM, David Carpio 
wrote:

>  Hello
>
> You must install the harness.pm module
>
> You may do it via yum
>
> yum install perl-Test-Harness
>
> or downloaded the tar package and install it
>
> http://search.cpan.org/~leont/Test-Harness-3.32/lib/Test/Harness.pm
>
> I hope this help you.
>
> David
>
>
> On 08/11/2014 08:52 AM, Ramesh T wrote:
>
>  Hello ,
>
>   I specified the correct method; I got an error/message at perl Makefile.PL:
> tar xzf pgcluu-2.0.tar.gz
> -bash-4.1$ cd pgcluu-2.0
> *-bash-4.1$ perl Makefile.PL*
> Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
> /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
> /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
> Makefile.PL line 1.
> BEGIN failed--compilation aborted at Makefile.PL line 1.
> -bash-4.1$
>
>  then I tried to install ExtUtils/MakeMaker; at the same place, perl
> Makefile.PL is erroring/notifying:
>
>   tar xzf ExtUtils-MakeMaker-6.99_07.tar.gz
> -bash-4.1$ cd ExtUtils-MakeMaker-6.99_07
> -bash-4.1$ perl Makefile.PL
> Using included version of CPAN::Meta (2.120351) because it is not already
> instal
>led.
> Using included version of ExtUtils::Install (1.54) because it is not
> already ins
>talled.
> Using included version of CPAN::Meta::YAML (0.008) because it is not
> already ins
>talled.
> Using included version of CPAN::Meta::Requirements (2.120351) because it
> is not
> already installed.
> Using included version of File::Copy::Recursive (0.38) because it is not
> already
> installed.
> Using included version of Parse::CPAN::Meta (1.4405) because it is not
> already i
>  nstalled.
> Using included version of JSON::PP (2.27203) because it is not already
> installed
>  .
> Using included version of JSON::PP::Compat5006 (1.09) because it is not
> already
> installed.
> Using included version of ExtUtils::Manifest (1.60) because it is not
> already in
>stalled.
> Generating a Unix-style Makefile
> Writing Makefile for ExtUtils::MakeMaker
> Writing MYMETA.yml and MYMETA.json
> Can't locate Test/Harness.pm in @INC (@INC contains: bundled/CPAN-Meta
> bundled/E
>  xtUtils-Install bundled/CPAN-Meta-YAML
> bundled/CPAN-Meta-Requirements bundled/Fi
>  le-Copy-Recursive
> bundled/Parse-CPAN-Meta bundled/JSON-PP bundled/version bundle
>
>  d/Scalar-List-Utils bundled/JSON-PP-Compat5006 bundled/ExtUtils-Command
> bundled/
>  ExtUtils-Manifest bundled/File-Temp lib .
> /usr/local/lib64/perl5 /usr/local/shar
>e/perl5
> /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/per
>
>  l5 /usr/share/perl5) at Makefile.PL line 142.
>
>
>  when I try to install Test::Harness it is also not installing.
>
>  let me know how to fix the issue?
>
>  thanks,
> rao
>
>
> On Sat, Aug 9, 2014 at 9:43 AM, Adrian Klaver 
> wrote:
>
>> On 08/08/2014 06:40 AM, Ramesh T wrote:
>>
>>> Hi,
>>> i want install pgcluu on postgres 9.3 and i'm putty tool to
>>> connect pg database when i ran .
>>>tar xzf pgcluu-2.0.tar.gz
>>>  cd pgcluu-2.0.tar/
>>> perl Makefile.PL
>>>  make && sudo make install
>>>
>>> it's return like..
>>> bash-4.1$ tar xzf pgcluu-2.0.tar.gz
>>> tar (child): pgcluu-2.0.tar.gz: Cannot open: No such file or directory
>>> tar (child): Error is not recoverable: exiting now
>>> tar: Child returned status 2
>>> tar: Error is not recoverable: exiting now
>>>
>>
>>  Well at this point you are dead in the water, all the other steps are
>> bound to fail.
>>
>> Are you sure pgcluu-2.0.tar.gz is there?
>> If it is, then it may be corrupted, so try downloading it again.
>>
>>
>>
>>  -bash-4.1$ cd pgcluu-2.0.tar/
>>> -bash: cd: pgcluu-2.0.tar/: No such file or directory
>>> -bash-4.1$perl Makefile.PL
>>> Can't open perl script "Makefile.PL": No such file or directory
>>> -bash-4.1$ make && sudo make install
>>>
>>> where do i run the tar file let me know..
>>> thanks in advance..
>>> R
>>>
>

Re: [GENERAL] pgcluu

2014-08-12 Thread Ramesh T
Where do I need to install Test-Harness-3.32 or ExtUtils/MakeMaker.pm: in the
pgcluu-2.0 folder or in /usr/local/lib64/perl?

please let me know

advance thanks..


On Tue, Aug 12, 2014 at 3:20 PM, Ramesh T 
wrote:

> Hi,
>   I don't  have the root permission for yum.
> secondly,When i try to install it's return following same as previous
> error/msg
>
>  tar xvzf Test-Harness-3.32.tar.gz
>  cd Test-Harness-3.32
>
> -bash-4.1$ *perl Makefile.PL*
>
> Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
> /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
> /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
> Makefile.PL line 8.
> BEGIN failed--compilation aborted at Makefile.PL line 8.
>
> -bash-4.1$ make
> make: *** No targets specified and no makefile found.  Stop.
> -bash-4.1$ make test
> make: *** No rule to make target `test'.  Stop.
> -bash-4.1$
>
> let me know any help..?
>
>
>
> On Mon, Aug 11, 2014 at 9:07 PM, David Carpio 
> wrote:
>
>>  Hello
>>
>> You must install the harness.pm module
>>
>> You may do it via yum
>>
>> yum install perl-Test-Harness
>>
>> or downloaded the tar package and install it
>>
>> http://search.cpan.org/~leont/Test-Harness-3.32/lib/Test/Harness.pm
>>
>> I hope this help you.
>>
>> David
>>
>>
>> On 08/11/2014 08:52 AM, Ramesh T wrote:
>>
>>  Hello ,
>>
>>   I specified correct method i got error/message  at perl Makefile.PL
>> tar xzf pgcluu-2.0.tar.gz
>> -bash-4.1$ cd pgcluu-2.0
>> *-bash-4.1$ perl Makefile.PL*
>> Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
>> /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
>> /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
>> Makefile.PL line 1.
>> BEGIN failed--compilation aborted at Makefile.PL line 1.
>> -bash-4.1$
>>
>>  then i trying to  install ExtUtils/MakeMaker  same place perl
>> Makefile.PL error/notifying
>>
>>   tar xzf ExtUtils-MakeMaker-6.99_07.tar.gz
>> -bash-4.1$ cd ExtUtils-MakeMaker-6.99_07
>> -bash-4.1$ perl Makefile.PL
>> Using included version of CPAN::Meta (2.120351) because it is not already
>> instal
>>led.
>> Using included version of ExtUtils::Install (1.54) because it is not
>> already ins
>>talled.
>> Using included version of CPAN::Meta::YAML (0.008) because it is not
>> already ins
>>talled.
>> Using included version of CPAN::Meta::Requirements (2.120351) because it
>> is not
>> already installed.
>> Using included version of File::Copy::Recursive (0.38) because it is not
>> already
>> installed.
>> Using included version of Parse::CPAN::Meta (1.4405) because it is not
>> already i
>>  nstalled.
>> Using included version of JSON::PP (2.27203) because it is not already
>> installed
>>  .
>> Using included version of JSON::PP::Compat5006 (1.09) because it is not
>> already
>> installed.
>> Using included version of ExtUtils::Manifest (1.60) because it is not
>> already in
>>stalled.
>> Generating a Unix-style Makefile
>> Writing Makefile for ExtUtils::MakeMaker
>> Writing MYMETA.yml and MYMETA.json
>> Can't locate Test/Harness.pm in @INC (@INC contains: bundled/CPAN-Meta
>> bundled/E
>>  xtUtils-Install bundled/CPAN-Meta-YAML
>> bundled/CPAN-Meta-Requirements bundled/Fi
>>  le-Copy-Recursive
>> bundled/Parse-CPAN-Meta bundled/JSON-PP bundled/version bundle
>>
>>  d/Scalar-List-Utils bundled/JSON-PP-Compat5006 bundled/ExtUtils-Command
>> bundled/
>>  ExtUtils-Manifest bundled/File-Temp lib .
>> /usr/local/lib64/perl5 /usr/local/shar
>>e/perl5
>> /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/per
>>
>>  l5 /usr/share/perl5) at Makefile.PL line 142.
>>
>>
>>  wheni trying test Harness it is also not installing..
>>
>>  let me know how to fix issue..?
>>
>>  thanks,
>> rao
>>
>>
>> On Sat, Aug 9, 2014 at 9:43 AM, Adrian Klaver 
>> wrote:
>>
>>> On 08/08/2014 06:40 AM, Ramesh T wrote:
>>>
 Hi,
 i want install pgcluu on postgres 9.3 and i'm putty tool to
 connect pg database when i ran .
tar xzf pgcluu-2.0.tar.gz
  cd pgcluu-2.0.tar/
 perl Makefile.PL
  make && sudo make install

 it's return like..
 bash-4.1$ tar xzf pgcluu-2.0.tar.gz
 tar (child): pgcluu-2.0.tar.gz: Cannot open: No such file or directory
 tar (child): Error is not recoverable: exiting now
 tar: Child returned status 2
 tar: Error is not recoverable: exiting now

>>>
>>>  Well at this point you are dead in the water, all the other steps are
>>> bound to fail.
>>>
>>> Are you sure pgcluu-2.0.tar.gz is there?
>>> If it is, then it may be corrupted, so try download

Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Jimmy Thrasibule
> Is there a reason why hstore or json is not an option?  That may work a lot
> better than this approach.

I don't want to move away from common SQL features, so I can have a
test environment using SQLite and deploy on PostgreSQL. This approach
looks elegant and simple to me. Using a new table per attribute type
or even per attribute for the values, I can easily index them for
quick lookups.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgresql 9.3 tuning advice

2014-08-12 Thread dushy
Hello all,

I am running a PostgreSQL 9.0.13 master/slave instance with a write-heavy
workload.

The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database (around
250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
io PCIe MLC SSDs as frontend and a MD3200 based RAID10 (14*1TB SATA disks)
as backend. OS is centos 6.2 with kernel 2.6.32-431.20.3.el6.x86_64.

Currently, the performance-related configuration is mostly default, i.e.
shared_buffers,
effective_cache_size. The only directive that seems different is
checkpoint_segments = 96

I am moving to PostgreSQL 9.3 shortly and planning to tune the above
directives as below..

effective_cache_size = 100GB # free+buffers is pretty consistent around 110
to 120GB and pg_oscache_total is around 80GB
checkpoint_segments = 32 # 96 seems too long and all flushes seem to be only
due to checkpoint_timeout with the older value

Additionally I am turning off THP defrag on the OS as suggested by some
posts on the
lists, though my initial pgbench testing doesn't seem to indicate any
issues with THP defrag turned on/off.

I am not sure about shared_buffers and wal_buffers for this HW/OS & DB
combination - I am inclined to leave
them at the defaults. But based on this article
(http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html)
it looks like there will be some advantages in tuning them.

What would be a good value (to start with at least) for shared_buffers and
wal_buffers ?

Please let me know if additional information will help.

TIA
dushy


Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Raymond O'Donnell
On 12/08/2014 15:57, Jimmy Thrasibule wrote:
>> Is there a reason why hstore or json is not an option?  That may work a lot
>> better than this approach.
> 
> I don't want to move away from SQL common features so I can have a
> test environment using SQLite and deploy on PostgreSQL. This approach
> looks elegant and simple to me. Using a new table per attribute type
> or even per attribute for the values, I can easily index them for
> quick lookups.

What is your test environment? Postgres works quite happily on my rather
low-powered Windows 7 laptop, and so I have all the bells and whistles
available for development.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Bill Moran
On Tue, 12 Aug 2014 16:57:32 +0200
Jimmy Thrasibule  wrote:

> > Is there a reason why hstore or json is not an option?  That may work a lot
> > better than this approach.
> 
> I don't want to move away from SQL common features so I can have a
> test environment using SQLite and deploy on PostgreSQL. This approach
> looks elegant and simple to me. Using a new table per attribute type
> or even per attribute for the values, I can easily index them for
> quick lookups.

Couple of things to keep in mind:
* SQLite and PostgreSQL are not 100% compatible.  It's cheap and easy to
set up PostgreSQL for testing/development, so it seems silly (to me) to test
on something that might behave differently than the production environment.

* Any setup where the application can execute DDL is a potential security
concern.  If the code can manipulate tables then a bug in the code can
allow an accidental or malicious user to quickly and easily destroy data.

-- 
Bill Moran
I need your help to succeed:
http://gamesbybill.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgcluu

2014-08-12 Thread David Carpio

Hello

When you install pgcluu, the program needs some Perl modules that
are not normally installed in the OS.


For this reason the installation sometimes stops.

You must search the internet for the required modules and
install them.


I hope this help you

David

On Tue 12 Aug 2014 08:57:39 AM PET, Ramesh T wrote:

where need to install Test-Harness-3.32. or  ExtUtis/MakeMaker.pm  in
pg cluu -2.0 folder or /usr/local/lib64/perl..

please let me know

advance thanks..


On Tue, Aug 12, 2014 at 3:20 PM, Ramesh T
mailto:rameshparnandit...@gmail.com>>
wrote:

Hi,
  I don't  have the root permission for yum.
secondly,When i try to install it's return following same as
previous error/msg

 tar xvzf Test-Harness-3.32.tar.gz
 cd Test-Harness-3.32

-bash-4.1$ *perl Makefile.PL*
*
*
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5
/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 8.
BEGIN failed--compilation aborted at Makefile.PL line 8.

-bash-4.1$ make
make: *** No targets specified and no makefile found.  Stop.
-bash-4.1$ make test
make: *** No rule to make target `test'.  Stop.
-bash-4.1$
let me know any help..?



On Mon, Aug 11, 2014 at 9:07 PM, David Carpio
mailto:dav...@consistentstate.com>>
wrote:

Hello

You must install the harness.pm  module

You may do it via yum

yum install perl-Test-Harness

or downloaded the tar package and install it

http://search.cpan.org/~leont/Test-Harness-3.32/lib/Test/Harness.pm

I hope this help you.

David


On 08/11/2014 08:52 AM, Ramesh T wrote:

Hello ,

 I specified correct method i got error/message  at perl
Makefile.PL
tar xzf pgcluu-2.0.tar.gz
-bash-4.1$ cd pgcluu-2.0
*-bash-4.1$ perl Makefile.PL*
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains:
/usr/local/lib64/perl5 /usr/local/share/perl5
/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
BEGIN failed--compilation aborted at Makefile.PL line 1.
-bash-4.1$

then i trying to  install ExtUtils/MakeMaker  same place perl
Makefile.PL error/notifying

 tar xzf ExtUtils-MakeMaker-6.99_07.tar.gz
-bash-4.1$ cd ExtUtils-MakeMaker-6.99_07
-bash-4.1$ perl Makefile.PL
Using included version of CPAN::Meta (2.120351) because it is
not already instal
 led.
Using included version of ExtUtils::Install (1.54) because it
is not already ins  talled.
Using included version of CPAN::Meta::YAML (0.008) because it
is not already ins  talled.
Using included version of CPAN::Meta::Requirements (2.120351)
because it is not already installed.
Using included version of File::Copy::Recursive (0.38)
because it is not already installed.
Using included version of Parse::CPAN::Meta (1.4405) because
it is not already i  nstalled.
Using included version of JSON::PP (2.27203) because it is
not already installed
   .
Using included version of JSON::PP::Compat5006 (1.09) because
it is not already installed.
Using included version of ExtUtils::Manifest (1.60) because
it is not already in  stalled.
Generating a Unix-style Makefile
Writing Makefile for ExtUtils::MakeMaker
Writing MYMETA.yml and MYMETA.json
Can't locate Test/Harness.pm in @INC (@INC contains:
bundled/CPAN-Meta bundled/E  xtUtils-Install
bundled/CPAN-Meta-YAML bundled/CPAN-Meta-Requirements
bundled/Fi  le-Copy-Recursive bundled/Parse-CPAN-Meta
bundled/JSON-PP bundled/version bundle
 d/Scalar-List-Utils bundled/JSON-PP-Compat5006
bundled/ExtUtils-Command bundled/  ExtUtils-Manifest
bundled/File-Temp lib . /usr/local/lib64/perl5
/usr/local/shar  e/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5
/usr/share/perl5) at Makefile.PL line 142.


wheni trying test Harness it is also not installing..

let me know how to fix issue..?

thanks,
rao


On Sat, Aug 9, 2014 at 9:43 AM, Adrian Klaver
mailto:adrian.kla...@aklaver.com>> wrote:

On 08/08/2014 06:40 AM, Ramesh T wrote:

Hi,
i want install pgcluu on postgres 9.3 and
i'm putty tool to
connect pg database when i ran .
   tar xzf pgcluu-2.0.tar.gz
 cd pgcluu-2.0.tar/
   

Re: [GENERAL] Duplicated IDs

2014-08-12 Thread John R Pierce

On 8/12/2014 12:32 AM, Alexis Bernard wrote:

The vacuum full tracks just finished and I still have the duplicated IDs.


vacuum full does a reindex; the reindex should have errored out on the
duplicate IDs.
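If it did not, forcing the check by hand should make the problem explicit; a sketch (it takes a lock on the table and should fail if both heap rows are really there):

REINDEX INDEX tracks_pkey;
-- expected to fail with something like:
--   ERROR:  could not create unique index "tracks_pkey"
--   DETAIL:  Key (id)=(331804) is duplicated.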




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread John R Pierce

On 8/12/2014 6:19 AM, Jimmy Thrasibule wrote:


I'm looking into gathering information about geographical locations:

- Coordinates
- Place name
- Pictures
- etc.

Since a place can be anything, a specific place may need any kind of 
data type to be described.


I'm therefore looking into using the same method as the semantic Web 
and trying to describe a place with triples.


1. The "triples" table will reference a subject, attribute and value.
2. The "attributes" table will have an attribute name, a type and 
maybe a table name.

3. For each attribute type, a new table is created to store the values.
4. A "resources" table list all the available resources (location, 
picture, ...).


Using this, I can easily add a new resource and describe it by adding 
new triples. Every item is identified using UUIDs as primary key.




that is a variation of an anti-pattern known as EAV 
(Entity-Attribute-Value).


it's impossible to implement well-performing queries with this, as you'll
need to make several sequential queries each time, since table names
can't be 'dynamic' in a SQL query by design (query triples, join
attributes, case on the attribute type, query the appropriate attribute
table and possibly resources).


http://mikesmithers.wordpress.com/2013/12/22/the-anti-pattern-eavil-database-design/
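To make that concrete, a single attribute lookup against the hypothetical layout sketched earlier in this digest already looks like the following, and the right value table still has to be chosen in application code per attribute type:

SELECT v.value
FROM triples t
JOIN attributes a ON a.id = t.attribute
JOIN values_text v ON v.id = t.value
WHERE t.subject = '00000000-0000-0000-0000-000000000000'::uuid  -- placeholder UUID
  AND a.name = 'place_name';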


--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql 9.3 tuning advice

2014-08-12 Thread Soni M
General advice is to set shared_buffers to 25% of total RAM, leaving 75% of RAM
for the OS cache.
In my case (1.5 TB database, 145 GB RAM), setting shared_buffers bigger
than 8GB gave no significant performance impact.
In some cases, setting it low would be an advantage:
http://www.depesz.com/2007/12/05/shared-buffers-and-their-impact-on-performance/
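One way to sanity-check a chosen shared_buffers value is to look at how the cache is actually used; a sketch, assuming the contrib pg_buffercache extension is installed (CREATE EXTENSION pg_buffercache):

SELECT count(*)                                          AS buffers_total,
       sum(CASE WHEN relfilenode IS NOT NULL THEN 1 END) AS buffers_used,
       sum(CASE WHEN isdirty THEN 1 END)                 AS buffers_dirty,
       sum(CASE WHEN usagecount >= 3 THEN 1 END)         AS buffers_hot
FROM pg_buffercache;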


On Tue, Aug 12, 2014 at 10:25 PM, dushy  wrote:

> Hello all,
>
> Iam running a postgresql 9.0.13 master/slave instance in a write heavy
> workload.
>
> The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database
> (around
> 250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
> io PCIe MLC SSDs as frontend and a MD3200 based RAID10 (14*1TB SATA disks)
> as backend. OS is centos 6.2 with kernel 2.6.32-431.20.3.el6.x86_64.
>
> Currently, the performance related configuration is mostly default i,e
> shared_buffers,
> effective_cache_size. The only directive that seems different is
> checkpoint_segments = 96
>
> Iam moving to postgresql 9.3 shortly and planning to tune the above
> directives as below..
>
> effective_cache_size = 100GB # free+buffers is pretty consistent around
> 110
> to 120GB and pg_oscache_total is around 80GB
> checkpoint_segments = 32 # 96 seems to long and all flushes seem to be
> only
> due to checkpoint_timeout with the older value
>
> Additionally iam turning off THB defrag on the OS as suggested by some
> posts on the
> lists. Though, My initial pgbench testing doesn't seem to indicate any
> issues with THB defrag turned on/off.
>
> Iam not sure about shared_buffers and wal_buffers for this HW/OS & DB
> combination - iam inclined to leave
> them to defaults. But based on this article
> (http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html
> )
> it looks there will be some advantages in tuning it
>
> What would be a good value (to start with atleast) for shared_buffers and
> wal_buffers ?
>
> Please let me know if additional information will help.
>
> TIA
> dushy
>
>


-- 
Regards,

Soni Maula Harriz


Re: [GENERAL] Duplicated IDs

2014-08-12 Thread David G Johnston
Alexis Bernard wrote
> Hi,
> 
> I am having regurlarly duplicated rows with the same primary key.
> 
> => select id, created_at, updated_at from tracks where created_at =
> '2014-07-03 15:09:16.336488';
>id   | created_at | updated_at
> ++
>  331804 | 2014-07-03 15:09:16.336488 | 2014-07-03 15:37:55.253729
>  331804 | 2014-07-03 15:09:16.336488 | 2014-07-03 15:37:55.253801
> 
> => select id from tracks where id = 331804;
>  id
> 
> (0 rows)
> 
> => delete from tracks where created_at = '2014-07-03 15:09:16.336488' and
> updated_at = '2014-07-03 15:37:55.253801';
> ERROR:  update or delete on table "tracks" violates foreign key constraint
> "fk_sources_on_track_id" on table "sources"
> DETAIL:  Key (id)=(331804) is still referenced from table "sources".
> 
> All colums are identical except the updated_at.
> 
> PostgreSQL version is 9.3 and fsync is true.
> 
> Any idea how I can clean my table? And then how I can stop this?
> 
> Cheers,
> Alexis

What happens if you try and delete the older row instead of the newer one?

How does "updated_at" get populated/changed?

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Duplicated-IDs-tp5814311p5814615.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgcluu

2014-08-12 Thread Adrian Klaver

On 08/12/2014 06:57 AM, Ramesh T wrote:

where need to install Test-Harness-3.32. or  ExtUtis/MakeMaker.pm  in pg
cluu -2.0 folder or /usr/local/lib64/perl..

please let me know

advance thanks..



Two ways:

1) Quick and dirty

Go here:
http://search.cpan.org/~bingos/ExtUtils-MakeMaker-6.98/lib/ExtUtils/MakeMaker.pm

Download MakeMaker, untar and then follow instructions in INSTALL file 
for doing a local install:


For just you:

perl Makefile.PL INSTALL_BASE=~
make
make test
make install

2) More setup, but probably better if you end up needing more Perl 
modules set up as user(not system)


Go here:
http://www.cpan.org/modules/INSTALL.html

Follow instructions for local::lib in particular:

https://metacpan.org/pod/local::lib#The-bootstrapping-technique






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [GENERAL] wired problem for a 9.1 slave:receive wal but do not replay it?

2014-08-12 Thread Soni M
Do you run intensive read queries on the slave?
If yes, query conflicts can cause that:
http://www.postgresql.org/docs/9.1/static/hot-standby.html#HOT-STANDBY-CONFLICT
On a conflict, the xlog stream will be saved in the xlog dir on the slave instead
of being replayed. This happens until the slave has the opportunity to apply all
the xlog.
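A quick sketch for checking this from the standby side on 9.1; both the conflict counters and the receive/replay positions are standard:

-- Are standby queries conflicting with recovery?
SELECT datname, confl_snapshot, confl_lock, confl_bufferpin, confl_deadlock
FROM pg_stat_database_conflicts;

-- How far behind is replay compared to what has been received?
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed,
       pg_last_xact_replay_timestamp() AS last_replayed_at;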


On Mon, Aug 11, 2014 at 5:24 PM, Jov  wrote:

> Today, our monitoring reported that a pg slave instance's disk space usage
> reached 96%. I logged in to the machine and found that the pg_xlog dir takes
> up more than 2TB, which is abnormal.
> The number of WAL files in the pg_xlog dir is more than 130k, while we set
> the WAL keep number to 8192.
> I think something is stopping the replay, so I checked
> pg_stat_activity, pg_prepare_statement, pg_xact etc., but found everything normal.
> I run:
> ps auxwww | grep postgres
> and can see that the wal receiver and streaming receiver work happily, because
> the wal file name and the streaming log id changed.
>
> So I have no idea.
>
> I then restarted the slave PG, and found it recovering from a very old wal
> from one month ago.
> We are now setting up a new slave for the master while letting the recovery
> from this slave continue.
>
> the PG version is 9.1.9,OS is CentOS 6 x86-64.
>
> Jov
> blog: http:amutu.com/blog 
>



-- 
Regards,

Soni Maula Harriz


[GENERAL] Postgres 9.3 tuning advice

2014-08-12 Thread dushy
Hello all,

I am running a PostgreSQL 9.0.13 master/slave instance with a write-heavy
workload.

The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database (around 
250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
io PCIe MLC SSDs as frontend and a MD3200 based RAID10 (14*1TB SATA disks) 
as backend. OS is centos 6.2 with kernel 2.6.32-431.20.3.el6.x86_64.

Currently, the performance-related configuration is mostly default, i.e.
shared_buffers,
effective_cache_size. The only directive that seems different is 
checkpoint_segments = 96

I am moving to PostgreSQL 9.3 shortly and planning to tune the above
directives as below..

effective_cache_size = 100GB # free+buffers is pretty consistent around 110 
to 120GB and pg_oscache_total is around 80GB consistently
checkpoint_segments = 32 # 96 seems too long and all flushes seem to be only
due to checkpoint_timeout 

Additionally I am turning off THP defrag as suggested by some posts on the
lists, though my initial pgbench testing doesn't seem to indicate any
issues with THP defrag turned on/off.

I am not sure about shared_buffers and wal_buffers - I am inclined to leave
them at the defaults. But based on this article
(http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html)
it looks like there will be some advantages in tuning them.

What would be a good value for shared_buffers and wal_buffers ?

Please let me know if additional information will help.

TIA
dushy





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database block lifecycle

2014-08-12 Thread pinker
Yesterday I had an interesting discussion with my colleague about the shared
buffers size for our new server. This machine (dedicated to the db) has got
512GB of RAM and the database size is about 80GB, so he assumes that the db will
never have to read from disk, and that there is therefore no point in adjusting
the read-ahead setting, because every block is going to be read from RAM. As I've
read in Greg Smith's book, once a block is changed it will be written to disk and
the buffer page is marked as clean - which would mean that changes occur in the
same page as before? What if a dirty page doesn't have enough space for another
row and the row has to be written to another page? Does that still happen in RAM?
If that's true, do all updates of the FSM occur in RAM as well?

What about buffers_clean and pg_clog then? Are those maintained completely
in RAM as well, without any direct reads from disk?

To be precise, does the path to update and then read an updated row look like
a) or b)?
a) clean page (shared buffers) -> dirty page (shared buffers) -> to disk ->
read from disk -> shared buffers -> query
b) clean page (shared buffers) -> dirty page (shared buffers) -> to disk
& dirty page (shared buffers) -> clean page (shared buffers) -> query

btw. is 512MB a reasonable value if we assume up to 600 connections?




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Database-block-lifecycle-tp5814627.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] OIDs for jsonb type

2014-08-12 Thread Daniele Varrazzo
Hello,

I'm going to add support to the jsonb data type in psycopg2, in order
to have the type behaving like json currently does
(http://initd.org/psycopg/docs/extras.html#json-adaptation).

Is it correct that oid and arrayoid for the type will be 3802 and 3807
and that they won't change before the 9.4 final release?

Thank you very much

-- Daniele


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] OIDs for jsonb type

2014-08-12 Thread Vik Fearing
On 08/12/2014 11:49 PM, Daniele Varrazzo wrote:
> Hello,
> 
> I'm going to add support to the jsonb data type in psycopg2, in order
> to have the type behaving like json currently does
> (http://initd.org/psycopg/docs/extras.html#json-adaptation).
> 
> Is it correct that oid and arrayoid for the type will be 3802 and 3807
> and that they won't change before the 9.4 final release?

That is correct.
-- 
Vik


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread John R Pierce

On 8/12/2014 2:41 PM, pinker wrote:

btw. 512MB if we assume up to 600 connection is a reasonable value?


that's an insanely high connection count if you actually expect those 
connections to be executing concurrent queries, unless you have 
something north of 100 CPU cores.


you'd be much better off with a MUCH smaller connection count, using a 
connection pooler such as pgbouncer in transaction mode... let the 600 
client threads connect to the pooler, but have the pooler share maybe 4X 
your CPU core/thread count of actual connections for the transactions in 
progress.






--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread pinker
Yes, I know the count is quite high. It is the maximum value we've 
estimated; on an average day it will probably be 100-200, and yes, we 
use pgpool.


On 13.08.2014 00:09, John R Pierce wrote:

On 8/12/2014 2:41 PM, pinker wrote:

btw. 512MB if we assume up to 600 connection is a reasonable value?


thats an insanely high connection count, if you actually expect those 
connections to be executing concurrent queries, unless you have 
something north of 100 CPU cores.


you'd be much better to have a MUCH smaller connection count, and use 
a connection pooler such as pgbouncer, in transaction mode... let 600 
client htreads connect to the pooler, but have the pooler share maybe 
4X your CPU core/thread count of actual connections for transactions 
in progress.










--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread John R Pierce

On 8/12/2014 3:29 PM, pinker wrote:
yes, I know the count is quite high. It is the max value we've 
estimated, but probably on average day it will be 100-200, and yes we 
use pgpool. 



if you're using a pooler, then why would you be using 200 concurrent 
connections, unless you have 50 or 100 CPU cores/threads?


if you have 1000 transactions to execute on a 32-core server and you 
try to do 200 at once, it will take longer than if you do 64 at a time 
and let the rest queue up.


--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread pinker
OK, I wasn't precise enough, you are right. It's a brand new server, 
nothing is configured yet and we don't even have the OS installed. The 
number was the overall count we expect for the whole cluster.


But the main question is: is it possible to completely avoid disk reads 
if there is a huge amount of RAM available?
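
(One way to check how close you get in practice, as a sketch - note that
blks_hit only counts hits in shared buffers, not hits in the OS page cache:)

-- shared-buffer hit ratio per database
SELECT datname, blks_hit, blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database;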



On 13.08.2014 00:39, John R Pierce wrote:

On 8/12/2014 3:29 PM, pinker wrote:
yes, I know the count is quite high. It is the max value we've 
estimated, but probably on average day it will be 100-200, and yes we 
use pgpool. 



if you're using a pooler, then why would you be using 200 concurrent 
connections, unless you have a 50 or 100 CPU cores/threads ?


if you have 1000 transactions to execute on a 32 core server, and you 
try and do 200 at once, it will take longer than if you do 64 at a 
time and let the rest queue up.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread John R Pierce

On 8/12/2014 3:52 PM, pinker wrote:
Ok,  I wasn't precisely enough, you are right. It's brand new server, 
nothing is yet configured and we have not even os installed. The 
number was the overall count we expect for a whole cluster.


But the main question is: is it possible to completely avoid disk read 
if there is huge amount of RAM available?


the OS file cache will ensure that.

--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread Tatsuo Ishii
> Yesterday I had an interesting discussion with my colleague about shared
> buffers size for our new server. This machine (is dedicated for db) has got
> 512GB of RAM and database size is about 80GB, so he assumes that db will
> never have to read from disk, so there is no point to adjust read ahead
> setting, because every block gonna be read from RAM. As I've red in Greg
> Smith book, once a block is changed it will be written to a disk and buffers
> page is marked as clean, which would mean than changes occur in the same
> page as before? What if dirty page doesn't have enough space for another row
> and row has to be written to another page? Is it still occurs in RAM? If
> that's true all updates of FSM occurs in RAM as well?
> 
> What about buffers_clean and pg_clog then? Are those maintained completely
> in RAM as well without direct read from disk at all? 

As long as they are in shared buffers, they are read from the buffers,
not from disk.

> To be precise, does the path to update and read updated row looks like a or
> b?:
> a). clean page (shared buffers) -> dirty page (shared buffers) -> to disk ->
> read from disk -> shared buffers -> query
> b). clean page (shared buffers) -> dirty page (shared buffers) -> to disk
> & dirty page (shared buffers) -> clean page (shared buffers) -> query

I'm not sure exactly what you mean by a) or b), but both look incorrect
to me. A reader can read a page from shared buffers even if it's
dirty. So:

clean page (shared buffers) -> dirty page (shared buffers) ->  query

is closer to reality. Note that the dirty page will be written out by the
bgwriter process at some later time.

Also note that I am completely ignoring locking and the buffer
replacement algorithm. Please read
src/backend/storage/buffer/README for more precise information.
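
If you want to see this in action, the pg_buffercache contrib module can show
how many shared buffers are currently dirty (a sketch; it requires
CREATE EXTENSION pg_buffercache first):

-- count clean vs. dirty pages currently sitting in shared buffers
SELECT isdirty, count(*) AS buffers
FROM pg_buffercache
GROUP BY isdirty;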

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database block lifecycle

2014-08-12 Thread Jeff Janes
On Tuesday, August 12, 2014, pinker  wrote:

> Yesterday I had an interesting discussion with my colleague about shared
> buffers size for our new server. This machine (is dedicated for db) has got
> 512GB of RAM and database size is about 80GB, so he assumes that db will
> never have to read from disk,


Do you ever plan on restarting this server?  Doing maintenance?  Applying
security patches?
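
(If restarts are a concern: on 9.4 and later the pg_prewarm extension can
reload a relation into shared buffers after a restart; on older releases a
plain sequential scan is a cruder substitute. This is only a sketch, and
big_table is a placeholder name:)

-- 9.4+: pull a table back into shared buffers after a restart
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('big_table');   -- 'big_table' is a placeholder table name

-- older releases: a full scan at least warms the OS cache
SELECT count(*) FROM big_table;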


> so there is no point to adjust read ahead
> setting, because every block gonna be read from RAM. As I've red in Greg
> Smith book, once a block is changed it will be written to a disk and
> buffers
> page is marked as clean, which would mean than changes occur in the same
> page as before? What if dirty page doesn't have enough space for another
> row
> and row has to be written to another page? Is it still occurs in RAM? If
> that's true all updates of FSM occurs in RAM as well?
>

None of that should need to read from disk regularly once the
database is warmed up.

>
> What about buffers_clean and pg_clog then? Are those maintained completely
> in RAM as well without direct read from disk at all?
>
> To be precise, does the path to update and read updated row looks like a or
> b?:
> a). clean page (shared buffers) -> dirty page (shared buffers) -> to disk
> ->
> read from disk -> shared buffers -> query
> b). clean page (shared buffers) -> dirty page (shared buffers) -> to disk
> & dirty page (shared buffers) -> clean page (shared buffers) -> query
>

More like b), but you are missing all the states that involve "clean in
shared_buffers, dirty in FS cache" and such.


>
> btw. 512MB if we assume up to 600 connection is a reasonable value?
>

Reasonable value for what?

Cheers,

Jeff


Re: [GENERAL] PostgreSQL as a triple store

2014-08-12 Thread Chris Travers
On Tue, Aug 12, 2014 at 8:33 AM, Bill Moran 
wrote:

> On Tue, 12 Aug 2014 16:57:32 +0200
> Jimmy Thrasibule  wrote:
>
> > > Is there a reason why hstore or json is not an option?  That may work
> a lot
> > > better than this approach.
> >
> > I don't want to move away from SQL common features so I can have a
> > test environment using SQLite and deploy on PostgreSQL. This approach
> > looks elegant and simple to me. Using a new table per attribute type
> > or even per attribute for the values, I can easily index them for
> > quick lookups.
>
> Couple of things to keep in mind:
> * SQLite and PostgreSQL are not 100% compatible.  It's cheap and easy to
> set up PostgreSQL for testing/development, so it seems silly (to me) to
> test
> on something that might behave differently than the production environment.
>
> * Any setup where the application can execute DDL is a potential security
> concern.  If the code can manipulate tables then a bug in the code can
> allow an accidental or malicious user to quickly and easily destroy data.
>

This is a good point I probably should have mentioned specifically.  Making
my solution secure means restricting access to the catalog, and to the
catalog-to-DDL features, to a very small subset of trusted users (ideally not
through a standard application connection).  In this case, one might as well
restrict it to those who are already db superusers unless there is a clear
need to broaden it.

BTW, a specific case in point...  DDL can't be parameterized since it
doesn't have a query plan.  This means you are doing string concatenation
to create your DDL queries.  If you aren't careful, someone can add an
attribute like:

'; DROP TABLE things; --

In practice this means a great deal of manual review and restriction on
which users can access this feature.
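
A minimal sketch of the kind of defensive quoting meant here, assuming a
table named "things" as in the example above: build the DDL with format()
and its %I identifier specifier instead of raw concatenation, so a hostile
attribute name becomes a (strange but harmless) quoted column name rather
than injected SQL.

DO $$
DECLARE
    attr_name text := '; DROP TABLE things; --';   -- hostile input
BEGIN
    -- %I quotes the value as an identifier, neutralizing the injection
    EXECUTE format('ALTER TABLE things ADD COLUMN %I text', attr_name);
END
$$;

This does not remove the need to restrict who can reach the DDL path, but it
closes the obvious concatenation hole.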

Best Wishes,
Chris Travers

>
> --
> Bill Moran
> I need your help to succeed:
> http://gamesbybill.com
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] OIDs for jsonb type

2014-08-12 Thread Michael Paquier
On Wed, Aug 13, 2014 at 6:54 AM, Vik Fearing  wrote:
> On 08/12/2014 11:49 PM, Daniele Varrazzo wrote:
>> Hello,
>>
>> I'm going to add support to the jsonb data type in psycopg2, in order
>> to have the type behaving like json currently does
>> (http://initd.org/psycopg/docs/extras.html#json-adaptation).
>>
>> Is it correct that oid and arrayoid for the type will be 3802 and 3807
>> and that they won't change before the 9.4 final release?
>
> That is correct.
Any modifications to this stuff would require a bump of
CATALOG_VERSION_NO, something unlikely to happen after beta2.
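
For reference, the values can be double-checked against a running 9.4 server
(a sketch; typarray is the OID of the corresponding array type):

SELECT oid, typarray FROM pg_type WHERE typname = 'jsonb';
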
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Duplicated IDs

2014-08-12 Thread Alexis Bernard
I re-tried both vacuum and reindex:

=> vacuum full verbose tracks;
INFO:  vacuuming "public.tracks"
INFO:  "tracks": found 0 removable, 1662221 nonremovable row versions in
34274 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.27s/0.68u sec elapsed 2.44 sec.
VACUUM
=> reindex index tracks_pkey;
ERROR:  could not create unique index "tracks_pkey"
DETAIL:  Key (id)=(793935) is duplicated.
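
(A sketch of how to look at the offending rows before deciding what to delete;
ctid and xmin are system columns giving each row version's physical location
and inserting transaction:)

SELECT ctid, xmin, xmax, id
FROM tracks
WHERE id = 793935;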





2014-08-12 20:21 GMT+02:00 John R Pierce :

> On 8/12/2014 12:32 AM, Alexis Bernard wrote:
>
>> The vacuum full tracks just finished and I still have the duplicated IDs.
>>
>
> vacuum full does a reindex, the reindex should have errored out on the
> duplicate ID's.
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Postgres 9.3 tuning advice

2014-08-12 Thread Albe Laurenz
dushy wrote:
> Iam running a postgresql 9.0.13 master/slave instance in a write heavy
> workload.
> 
> The hardware is a Dell 720, E5530 - 8 core, 128GB RAM. The database (around
> 250g with indexes/bloat etc) is sitting on flashcache device with 2 fusion-
> io PCIe MLC SSDs as frontend and a MD3200 based RAID10 (14*1TB SATA disks)
> as backend. OS is centos 6.2 with kernel 2.6.32-431.20.3.el6.x86_64.
> 
> Currently, the performance related configuration is mostly default i,e
> shared_buffers,
> effective_cache_size. The only directive that seems different is
> checkpoint_segments = 96
> 
> Iam moving to postgresql 9.3 shortly and planning to tune the above
> directives as below..
> 
> effective_cache_size = 100GB # free+buffers is pretty consistent around 110
> to 120GB and pg_oscache_total is around 80GB consistently
> checkpoint_segments = 32 # 96 seems to long and all flushes seem to be only
> due to checkpoint_timeout
> 
> Additionally iam turning off THB defrag as suggested by some posts on the
> lists. Though, My initial pgbench testing doesn't seem to indicate any
> issues with THB defrag turned on/off.
> 
> Iam not sure about shared_buffers and wal_buffers - iam inclined to leave
> them to defaults. But based on this article
> (http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html)
> it looks there will be some advantages in tuning it
> 
> What would be a good value for shared_buffers and wal_buffers ?
> 
> Please let me know if additional information will help.

The frequently heard advice for setting shared_buffers is 25% of RAM, but
with memory as big as that, it may be too much (it can lead to checkpoint
I/O spikes and greater overhead in managing shared buffers).
Try something like 8 or 16 GB.
Ideally you should test: use pg_buffercache to inspect shared buffers
and see what setting works best for you.
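
For example, after CREATE EXTENSION pg_buffercache, something like this
(a sketch) shows which relations of the current database occupy the most
shared buffers:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;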

Set wal_buffers to 16MB so that a whole WAL segment will fit.

The best tuning strategy would be to stuff another 128 GB RAM into
the machine and have your DB in RAM.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general