[BUGS] Postgresql ACID bug?

2011-08-30 Thread Jan Snelders


Introduction
I wrote a small Java program that performs some SELECTs in one transaction (and thread) and one DELETE in another transaction and thread, on the same table, which initially holds one record.

ENVIRONMENT
OS: CentOS 5.5, kernel 2.6.18-194.32.1.el5
PostgreSQL: 9.0.4
JDBC driver: 9.0-801 JDBC 4

RESULTS AND EXPECTATIONS:
- The program starts with a select (SELECT transaction) and prints one record ('id' and 'address' from the RelationAddresses table).
- The program then deletes the record in another transaction (DELETE transaction).
- The program again selects and prints the records from the SELECT transaction. (Result as expected: one record printed.)
- The DELETE transaction commits.
- The program once more selects and prints the records from the SELECT transaction. (0 records printed; we expected one record, since we are still within the SELECT transaction, which started while this record was still available.)

[code]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionTest {
    /**/
    private static final String driver = "org.postgresql.Driver";
    private static final String connection = "jdbc:postgresql://192.168.0.1/sampledb";
    private static final String username = "postgres";
    private static final String password = "secret";
    private static final String escapeChar = "\"";
    /**/

    /*
    private static final String driver = "com.mysql.jdbc.Driver";
    private static final String connection = "jdbc:mysql://192.168.0.1/sampledb";
    private static final String username = "root";
    private static final String password = "secret";
    private static final String escapeChar = "`";
    /**/

    private TransactionTest() {
        Select select = new Select();
        select.start();

        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) { /*IGNORE*/ }

        Delete delete = new Delete();
        delete.start();
    }

    public static void main(String[] args) {
        new TransactionTest();
    }

    Object block = new Object();

    class Select extends Thread {
        @Override
        public void run() {
            try {
                Class.forName(driver).newInstance();
                Connection conn = DriverManager.getConnection(connection, username, password);
                conn.setAutoCommit(false);

                System.out.println("");
                System.out.println("SELECT TRANSACTION: List all addresses:");
                String query = "SELECT " + escapeChar + "relationAddressesId" + escapeChar + ", "
                        + escapeChar + "address" + escapeChar
                        + " FROM " + escapeChar + "RelationAddresses" + escapeChar;
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(query);
                while (rs.next()) {
                    System.out.println("relationAddressesId: " + rs.getInt(1)
                            + " - address: " + rs.getString(2));
                }
                rs.close();
                stmt.close();
                System.out.println("");

                synchronized (block) {
                    try {
                        block.wait();
                    } catch (InterruptedException e) { /*IGNORE*/ }
                }

                System.out.println("");
                System.out.println("SELECT TRANSACTION: List all addresses before DELETE TRANSACTION commit():");
                stmt = conn.createStatement();
                rs = stmt.executeQuery(query);
                while (rs.next()) {
                    System.out.println("relationAddressesId: " + rs.getInt(1)
                            + " - address: " + rs.getString(2));
                }
                rs.close();
                stmt.close();
                System.out.println("");

                synchronized (block) {
                    try {
                        block.wait();
                    } catch (InterruptedException e) { /*IGNORE*/ }
                }

                System.out.println("");
                System.out.println("SELECT TRANSACTION: List all addresses after DELETE TRANSACTION commit():");
                stmt = conn.createStatement();
                rs = stmt.executeQuery(query);
                while (rs.next()) {
                    System.out.println("relationAddressesId: " + rs.getInt(1)
                            + " - address: " + rs.getString(2));
                }
                rs.close();
                stmt.close();
                System.out.println("");

                conn.commit();
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            } catch (SQLException e) {
                e.printStackTrace();
            } catch (InstantiationException e) {
                e.printStackTrace();
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            }
        }
    }

    class Delete extends Thread {
        @Override
        public void run() {
            try {
                Class.forName(driver).newInstance();
                Connection conn = DriverManager.getConnection(connection, username, password);
                conn.setAutoCommit(false);

                System.out.println("");
                System.out.println("DELETE TRANSACTION: Deleting addresses");
                System.out.println("");
                String query = "DELETE FROM " + escapeChar + "RelationAddresses" + escapeChar;
                Statement stmt = conn.createStatement();
                stmt.executeUpdate(query);
                stmt.close();

                synchronized (block) {
                    block.notifyAll();
                }

                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) { /*IGNORE*/ }
                conn.commit();

                synch

Re: [BUGS] Postgresql ACID bug?

2011-08-30 Thread Kevin Grittner
Jan Snelders  wrote:
 
> I wrote small java program which performs some selects in one
> transaction (and thread) and one delete in another transaction and
> thread on the same table holding one record initially. 
 
> The DELETE transaction commits.
> The program now selects and
> prints the records from the SELECT transaction. (0 records
> printed, we expected one record since we are still within the
> SELECT transaction which started while this record was still
> available)
 
You are probably running at the READ COMMITTED transaction isolation
level.  The behavior you describe is allowed at that isolation level,
both by the standard and by PostgreSQL.  You seem to be expecting the
default transaction isolation level to be SERIALIZABLE (which is not
altogether unreasonable, since that is what the standard requires);
but PostgreSQL defaults to the READ COMMITTED level.  You can edit
your postgresql.conf file to specify
default_transaction_isolation = 'serializable' to get standard
conforming behavior.
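A minimal sketch of both options (using the table from the report; with the JDBC driver the same thing can also be done per connection via conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE)):

```sql
-- postgresql.conf (cluster-wide default):
--   default_transaction_isolation = 'serializable'

-- or per transaction, issued before the first query:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT "relationAddressesId", "address" FROM "RelationAddresses";
-- repeated reads in this transaction now see the snapshot taken at the
-- first query, even after a concurrent DELETE commits
COMMIT;
```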
 
Note that there are still some corner cases where you don't get full
ACID behavior at the SERIALIZABLE level in PostgreSQL version 9.0;
this has been enhanced to fully compliant behavior in the upcoming
9.1 release.
 
For 9.0 behavior, see:
 
http://www.postgresql.org/docs/9.0/static/transaction-iso.html
 
For behavior in the soon-to-be-released 9.1 version, see:
 
http://www.postgresql.org/docs/9.1/static/transaction-iso.html
 
-Kevin

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs


Re: [BUGS] Postgresql ACID bug?

2011-08-30 Thread Craig Ringer

On 30/08/2011 9:18 PM, Jan Snelders wrote:


The program now selects and prints the records from the SELECT
transaction. (0 records printed, we expected one record since we are
still within the SELECT transaction which started while this record was
still available)


This isn't a bug. You're using READ COMMITTED mode, and you appear to 
expect SERIALIZABLE mode. See:


http://www.postgresql.org/docs/current/interactive/transaction-iso.html

--
Craig Ringer




[BUGS] BUG #6184: Inconsistencies in log messages

2011-08-30 Thread Ding Yuan

The following bug has been logged online:

Bug reference:  6184
Logged by:  Ding Yuan
Email address:  ding.yuan...@hotmail.com
PostgreSQL version: 8.4.8
Operating system:   All
Description:Inconsistencies in log messages
Details: 

Hi Postgres developers,
(I reported this a day ago and for some reason did not see my message appear
in the archive, so I am resending it; I apologize if it is duplicated.)

I am a student doing research on improving software diagnosability by
improving error messages. Recently I ran a static checker on
postgres-8.4.8 and found some inconsistencies in log messages in
similar situations. While we suspect some of them should be fixed, we are
not 100% sure, since we do not have domain expertise in the PostgreSQL code.

I am reporting 10 such cases in this report. We truly appreciate your
feedback on whether our findings make sense.

Thanks,
Ding

==== Report 1 ====
The two elog messages cover similar situations but use different verbosity
levels.
Pattern: 7478
 src/backend/executor/nodeResult.c -
@@ line: 171 @@
171:void
172:ExecResultMarkPos(ResultState *node)
173:{
174:PlanState  *outerPlan = outerPlanState(node);
175:
176:if (outerPlan != NULL)
177:ExecMarkPos(outerPlan);
178:else
179:            elog(DEBUG2, "Result nodes do not support mark/restore");
 src/backend/executor/nodeResult.c -
@@ line: 186 @@
186:void
187:ExecResultRestrPos(ResultState *node)
188:{
189:PlanState  *outerPlan = outerPlanState(node);
190:
191:if (outerPlan != NULL)
192:ExecRestrPos(outerPlan);
193:else
194:            elog(ERROR, "Result nodes do not support mark/restore");



==== Report 2 ====
The verbosity levels of the error reports for CreateEvent failures are not
consistent.
Pattern: 6490
 src/backend/port/win32/socket.c -
@@ line: 128 @@
128:waitevent = CreateEvent(NULL, TRUE, FALSE, NULL);
129:
130:if (waitevent == INVALID_HANDLE_VALUE)
131:ereport(ERROR,
132:            (errmsg_internal("Failed to create socket waiting event: %i", (int) GetLastError())));
 src/backend/port/win32/signal.c -
@@ line: 83 @@
83: pgwin32_signal_event = CreateEvent(NULL, TRUE, FALSE,
NULL);
84: if (pgwin32_signal_event == NULL)
85: ereport(FATAL,
86:             (errmsg_internal("failed to create signal event: %d", (int) GetLastError())));


==== Report 3 ====
The log message in the first snippet (line 3487) has ERROR level, while
PANIC is logged in other similar situations.
Pattern: 4207
 src/backend/access/heap/heapam.c -
@@ line: 3483 @@
3483:   if (PageGetMaxOffsetNumber(page) >= offnum)
3484:   lp = PageGetItemId(page, offnum);
3485:
3486:   if (PageGetMaxOffsetNumber(page) < offnum ||
!ItemIdIsNormal(lp))
3487:   elog(ERROR, "heap_inplace_update: invalid lp");
3488:
3489:   htup = (HeapTupleHeader) PageGetItem(page, lp);
3490:
3491:   oldlen = ItemIdGetLength(lp) - htup->t_hoff;
 src/backend/access/heap/heapam.c -
@@ line: 4732 @@
4732:   if (PageGetMaxOffsetNumber(page) >= offnum)
4733:   lp = PageGetItemId(page, offnum);
4734:
4735:   if (PageGetMaxOffsetNumber(page) < offnum ||
!ItemIdIsNormal(lp))
4736:   elog(PANIC, "heap_inplace_redo: invalid lp");
4737:
4738:   htup = (HeapTupleHeader) PageGetItem(page, lp);
4739:
4740:   oldlen = ItemIdGetLength(lp) - htup->t_hoff;
 src/backend/access/heap/heapam.c -
@@ line: 4268 @@
4268:   if (PageGetMaxOffsetNumber(page) >= offnum)
4269:   lp = PageGetItemId(page, offnum);
4270:
4271:   if (PageGetMaxOffsetNumber(page) < offnum ||
!ItemIdIsNormal(lp))
4272:   elog(PANIC, "heap_delete_redo: invalid lp");
4273:
4274:   htup = (HeapTupleHeader) PageGetItem(page, lp);
 src/backend/access/heap/heapam.c -
@@ line: 4469 @@
4469:   if (PageGetMaxOffsetNumber(page) >= offnum)
4470:   lp = PageGetItemId(page, offnum);
4471:
4472:   if (PageGetMaxOffsetNumber(page) < offnum ||
!ItemIdIsNormal(lp))
4473:   elog(PANIC, "heap_update_redo: invalid lp");
4474:
4475:   htup = (HeapTupleHeader) PageGetItem(page, lp);
 src/backend/access/heap/heapam.c 

[BUGS] BUG #6185: Segmentation fault with NULL string columns

2011-08-30 Thread Isaac Jurado

The following bug has been logged online:

Bug reference:  6185
Logged by:  Isaac Jurado
Email address:  ijur...@econcept.es
PostgreSQL version: 8.4.8
Operating system:   Ubuntu 10.04.03 LTS (x86_64)
Description:Segmentation fault with NULL string columns
Details: 

Here's the script to make it fail:

CREATE DATABASE killme;
\c killme
CREATE FUNCTION to_ascii(bytea, name) RETURNS text AS 'to_ascii_encname'
LANGUAGE internal;
CREATE TABLE mytable (id INTEGER PRIMARY KEY, mytext TEXT);
INSERT INTO mytable (id, mytext) VALUES (1, '');
SELECT * FROM mytable WHERE to_ascii(convert_to(mytext, 'latin1'), 'latin1')
= to_ascii(convert_to('nicetry', 'latin1'), 'latin1');
UPDATE mytable SET mytext=NULL WHERE id=1;
SELECT * FROM mytable WHERE to_ascii(convert_to(mytext, 'latin1'), 'latin1')
= to_ascii(convert_to('nicetry', 'latin1'), 'latin1');

After the second query, the database server segfaults.  Note that it only
fails for NULL values.  The to_ascii/convert_to combination is a recipe for
diacritics elimination found on a PostgreSQL forum.



Re: [BUGS] BUG #6185: Segmentation fault with NULL string columns

2011-08-30 Thread Heikki Linnakangas

On 30.08.2011 11:51, Isaac Jurado wrote:


The following bug has been logged online:

Bug reference:  6185
Logged by:  Isaac Jurado
Email address:  ijur...@econcept.es
PostgreSQL version: 8.4.8
Operating system:   Ubuntu 10.04.03 LTS (x86_64)
Description:Segmentation fault with NULL string columns
Details:

Here's the script to make it fail:

CREATE DATABASE killme;
\c killme
CREATE FUNCTION to_ascii(bytea, name) RETURNS text AS 'to_ascii_encname'
LANGUAGE internal;
CREATE TABLE mytable (id INTEGER PRIMARY KEY, mytext TEXT);
INSERT INTO mytable (id, mytext) VALUES (1, '');
SELECT * FROM mytable WHERE to_ascii(convert_to(mytext, 'latin1'), 'latin1')
= to_ascii(convert_to('nicetry', 'latin1'), 'latin1');
UPDATE mytable SET mytext=NULL WHERE id=1;
SELECT * FROM mytable WHERE to_ascii(convert_to(mytext, 'latin1'), 'latin1')
= to_ascii(convert_to('nicetry', 'latin1'), 'latin1');

After the second query, the database server segfaults.  Note that it only
fails for NULL values.  The to_ascii/convert_to combination is a recipe for
diacritics elimination found on a PostgreSQL forum.


to_ascii_encname is marked as STRICT, which means that it returns NULL 
on NULL input, without ever calling the underlying C function. The 
"to_ascii" function that you created is not marked as STRICT, so the 
to_ascii_encname C function is called on NULL input. It's not prepared 
for that, and crashes, because it's not supposed to be called on NULL input.


Why do you create your own 'internal' language function like that in the 
first place? Just use the existing to_ascii function.
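In SQL terms, the fix Heikki describes is just adding STRICT to the definition from the report (a sketch of the same workaround function, not the recommended built-in path):

```sql
-- STRICT: on NULL input, return NULL without ever calling the C function
CREATE FUNCTION to_ascii(bytea, name) RETURNS text
    AS 'to_ascii_encname' LANGUAGE internal STRICT;
```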


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [BUGS] BUG #6185: Segmentation fault with NULL string columns

2011-08-30 Thread Alvaro Herrera
Excerpts from Isaac Jurado's message of Tue Aug 30 05:51:55 -0300 2011:
> 
> The following bug has been logged online:
> 
> Bug reference:  6185
> Logged by:  Isaac Jurado
> Email address:  ijur...@econcept.es
> PostgreSQL version: 8.4.8
> Operating system:   Ubuntu 10.04.03 LTS (x86_64)
> Description:Segmentation fault with NULL string columns
> Details: 
> 
> Here's the script to make it fail:
> 
> CREATE DATABASE killme;
> \c killme
> CREATE FUNCTION to_ascii(bytea, name) RETURNS text AS 'to_ascii_encname'
> LANGUAGE internal;

Seems you neglected to mark the function STRICT.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[BUGS] BUG #6186: out of memory while analyze

2011-08-30 Thread Lampa

The following bug has been logged online:

Bug reference:  6186
Logged by:  Lampa
Email address:  lamp...@gmail.com
PostgreSQL version: 9.0.4/8.4.8
Operating system:   Linux Debian testing 32bit
Description:out of memory while analyze
Details: 

Trying analyze table (ANALYZE TABLE table or VACUUM ANALYZE table) with
3051316 rows (displayed size 2521MB in \dt+)

Log:
TopMemoryContext: 42800 total in 5 blocks; 5304 free (7 chunks); 37496 used
  Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks); 6392
used
  TopTransactionContext: 8192 total in 1 blocks; 7696 free (0 chunks); 496
used
  MessageContext: 8192 total in 1 blocks; 5688 free (1 chunks); 2504 used
  Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); 4344
used
  smgr relation table: 8192 total in 1 blocks; 2816 free (0 chunks); 5376
used
  TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks);
16 used
  Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
  PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 1008 free (13 chunks); 16
used
  Analyze: 2236767256 total in 279 blocks; 18080 free (423 chunks);
2236749176 used
  Vacuum: 8192 total in 1 blocks; 7872 free (0 chunks); 320 used
  Relcache by OID: 8192 total in 1 blocks; 2856 free (0 chunks); 5336 used
  CacheMemoryContext: 667696 total in 20 blocks; 135264 free (1 chunks);
532432 used
i_table_zamid: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_zamestnanci_id: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_zacatek_delka: 1024 total in 1 blocks; 280 free (0 chunks); 744
used
i_table_zacatek: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_tymy_instance_id: 1024 total in 1 blocks; 344 free (0 chunks);
680 used
i_table_tymy_id: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_strid: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_stavukonu_id: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_stavcas: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_stav: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_splneno_cas: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_skupinyzamesntnacu_id: 1024 total in 1 blocks; 344 free (0
chunks); 680 used
i_table_sablonyukonu_id: 1024 total in 1 blocks; 344 free (0 chunks);
680 used
i_table_prostredky_id: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_priorita: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_pacienti_id: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_odbid: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_objednano_id: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_nadrazeny_ukon: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
i_table_maxid: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_kpid_spl: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_kpid_obj: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_klientskekarty_id: 1024 total in 1 blocks; 344 free (0 chunks);
680 used
i_table_delka: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
i_table_adresareukonu_id: 1024 total in 1 blocks; 344 free (0 chunks);
680 used
table_pkey: 1024 total in 1 blocks; 344 free (0 chunks); 680 used
pg_index_indrelid_index: 1024 total in 1 blocks; 304 free (0 chunks);
720 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 240 free (0
chunks); 784 used
pg_user_mapping_user_server_index: 1024 total in 1 blocks; 280 free (0
chunks); 744 used
pg_user_mapping_oid_index: 1024 total in 1 blocks; 344 free (0 chunks);
680 used
pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720
used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 280 free (0 chunks);
744 used
pg_ts_template_oid_index: 1024 total in 1 blocks; 344 free (0 chunks);
680 used
pg_ts_template_tmplname_index: 1024 total in 1 blocks; 280 free (0
chunks); 744 used
pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0 chunks);
744 used
pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0 chunks);
744 used
pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680
used
pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0 chunks);
744 used
pg_ts_config_map_index: 1024 total in 1 blocks; 192 free (0 chunks); 832
used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 280 free (0
chunks); 744 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 240 free (0 chunks);
784 used
pg_proc_oid_index: 1024 total in 1 bloc

[BUGS] BUG #6187: Select query stuck

2011-08-30 Thread Bhavesh Dangi

The following bug has been logged online:

Bug reference:  6187
Logged by:  Bhavesh Dangi
Email address:  dangi.bhaves...@gmail.com
PostgreSQL version: PostgreSQL8.0.1
Operating system:   i686-pc-linux-gnu
Description:Select query stuck
Details: 

Hello,

Here the problem is that a SELECT query takes much longer than expected
(days) to return a result, and sometimes it stops responding; it hangs
completely. We have been suffering from this issue for the last year. When
it happens, we find the query in pg_stat_activity and cancel it, but that
is not a solution to the problem.
We have also found that almost all SELECT queries get stuck. We also
continuously run vacuumdb before reaching 2 billion transactions.
I hope that you can provide some way to solve this.

Thanks & Regards,
Bhavesh Dangi



Re: [BUGS] BUG #6187: Select query stuck

2011-08-30 Thread Heikki Linnakangas

On 30.08.2011 14:28, Bhavesh Dangi wrote:

PostgreSQL version: PostgreSQL8.0.1


That version is over 6 years old. The PostgreSQL 8.0 major version is no
longer supported; see
http://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy. Even
if it were, the latest (and last) minor release in the 8.0 series was
8.0.26, and there were a lot of important bug fixes between 8.0.1 and that.



Here the problem is that a SELECT query takes much longer than expected
(days) to return a result, and sometimes it stops responding; it hangs
completely. We have been suffering from this issue for the last year. When
it happens, we find the query in pg_stat_activity and cancel it, but that
is not a solution to the problem.
We have also found that almost all SELECT queries get stuck. We also
continuously run vacuumdb before reaching 2 billion transactions.


That's unlikely to be a bug. See
http://wiki.postgresql.org/wiki/SlowQueryQuestions. But upgrading is the
first thing you need to do.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [BUGS] BUG #6186: out of memory while analyze

2011-08-30 Thread Heikki Linnakangas

On 30.08.2011 14:20, Lampa wrote:

Trying analyze table (ANALYZE TABLE table or VACUUM ANALYZE table) with
3051316 rows (displayed size 2521MB in \dt+)

Log:
...
   Analyze: 2236767256 total in 279 blocks; 18080 free (423 chunks);
2236749176 used

> ...

Hmm, that looks like a memory leak in ANALYZE. What does the table look
like? "\d table" would be helpful.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [BUGS] BUG #6186: out of memory while analyze

2011-08-30 Thread Tom Lane
"Lampa"  writes:
> Trying analyze table (ANALYZE TABLE table or VACUUM ANALYZE table) with
> 3051316 rows (displayed size 2521MB in \dt+)

What have you got maintenance_work_mem set to?  shared_buffers might
also be interesting.

regards, tom lane



Re: [BUGS] BUG #6185: Segmentation fault with NULL string columns

2011-08-30 Thread Isaac Jurado
On Tue, Aug 30, 2011 at 8:21 PM, Heikki Linnakangas
 wrote:
>
> to_ascii_encname is marked as STRICT, which means that it returns NULL
> on NULL input, without ever calling the underlying C function. The
> "to_ascii" function that you created is not marked as STRICT, so the
> to_ascii_encname C function is called on NULL input. It's not prepared
> for that, and crashes, because it's not supposed to be called on NULL
> input.

I figured that out 5 minutes after reporting.  I tried to send a reply
email, but it seems it didn't arrive.  Thanks anyway.

> Why do you create your own 'internal' language function like that in
> the first place? Just use the existing to_ascii function.

The default to_ascii expects a string as its first argument.  As our DB
encoding is UTF8, we need to filter everything through
convert_to(..., 'LATIN1').  Because the convert_to function returns a
"bytea" type, we cannot use to_ascii in its vanilla flavour.

We are not too familiar with the PostgreSQL specifics (and the explicit
::text cast didn't work), so we found that function definition on a
wiki and it worked.

But now, unless you know a more correct way to do it, it works like a
charm after appending "STRICT" to the function definition.

Best regards and sorry for the noise.

-- 
Isaac Jurado
Internet Busines Solutions eConcept



Re: [BUGS] BUG #6185: Segmentation fault with NULL string columns

2011-08-30 Thread Isaac Jurado
On Tue, Aug 30, 2011 at 8:22 PM, Alvaro Herrera
 wrote:
>
> Seems you neglected to mark the function STRICT.

Yes, that was it.  Sorry for the invalid bug report.

Best regards.

-- 
Isaac Jurado
Internet Busines Solutions eConcept



Re: [BUGS] BUG #6186: out of memory while analyze

2011-08-30 Thread Lampa
Hello,

table design http://pastebin.com/RW6vLAVP

And configuration:

ssl = false
shared_buffers = 900MB                  # min 128kB
work_mem = 100MB                        # min 64kB
maintenance_work_mem = 524MB            # min 1MB
fsync = off                             # turns forced synchronization on or off
synchronous_commit = off                # immediate fsync at commit
full_page_writes = off                  # recover from partial page writes
effective_cache_size = 1GB
default_statistics_target = 8000        # range 1-10000


I also tried changing maintenance_work_mem to 2G, with the same result.
It is impossible to set it over 2GB (FATAL:  3145728 is outside the valid
range for parameter "maintenance_work_mem" (1024 .. 2097151)).

The problem seems to be in default_statistics_target values from
approximately 6300 upward. Up to 6300, ANALYZE works; but on another
(64-bit) machine a target of 10000 works without problems on the same but
larger table (more data).
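A back-of-the-envelope estimate (my own arithmetic, not from this thread; it assumes ANALYZE samples roughly 300 × the statistics target rows, and treats the full average row width as held in memory) shows why a target of 8000 nearly pulls this whole table into the Analyze context:

```java
public class AnalyzeSampleEstimate {
    public static void main(String[] args) {
        // ANALYZE's sample size is about 300 * statistics_target rows
        int statsTarget = 8000;                 // default_statistics_target from this report
        long sampleRows = 300L * statsTarget;   // 2,400,000 rows sampled

        long tableRows  = 3051316L;             // row count from this report
        long tableBytes = 2521L * 1024 * 1024;  // ~2521 MB as shown by \dt+
        long avgRowBytes = tableBytes / tableRows;
        long sampleBytes = sampleRows * avgRowBytes;

        System.out.println("rows sampled:        " + sampleRows);
        System.out.println("approx. sample size: " + sampleBytes + " bytes");
        // roughly 2e9 bytes, the same order of magnitude as the
        // 2236767256-byte "Analyze" context in the reported memory dump
    }
}
```

On a 32-bit build the process address space tops out near 2-3 GB, which would explain why the same target succeeds on the 64-bit machine.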

2011/8/30 Tom Lane :
> "Lampa"  writes:
>> Trying analyze table (ANALYZE TABLE table or VACUUM ANALYZE table) with
>> 3051316 rows (displayed size 2521MB in \dt+)
>
> What have you got maintenance_work_mem set to?  shared_buffers might
> also be interesting.
>
>                        regards, tom lane
>



-- 
Lampa



Re: [BUGS] BUG #6186: out of memory while analyze

2011-08-30 Thread John R Pierce

On 08/30/11 10:25 PM, Lampa wrote:

Also tried change maintenance_work_mem to 2G and same result.
Imposible to set over 2GB (FATAL:  3145728 is outside the valid range
for parameter "maintenance_work_mem" (1024 .. 2097151))


is this a 32-bit postgres?



--
john r pierceN 37, W 122
santa cruz ca mid-left coast

