[GENERAL] SELECT is immediate but the UPDATE takes forever

2010-12-07 Thread Raimon Fernandez
Hi,


I want to understand why one of my postgresql functions takes an eternity to 
finish.

Here's an example:

UPDATE comptes SET belongs_to_compte_id=42009 WHERE (codi_compte LIKE '1%' 
AND empresa_id=2 AND nivell=11); // takes forever to finish

QUERY PLAN 

 Seq Scan on comptes  (cost=0.00..6559.28 rows=18 width=81)
   Filter: (((codi_compte)::text ~~ '1%'::text) AND (empresa_id = 2) AND (nivell = 11))
(2 rows)


but the same SELECT count is immediate:

SELECT count(id) FROM comptes WHERE codi_compte LIKE '1%' AND empresa_id=2 
AND nivell=11;


What am I doing wrong?

thanks,

regards,

r.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SELECT is immediate but the UPDATE takes forever

2010-12-07 Thread Raimon Fernandez

On 7dic, 2010, at 15:45 , Michał Roszka wrote:

> Quoting Raimon Fernandez :
> 
>> I want to understand why one of my postgresql functions takes an
>> eternity to finish.
>> 
>> Here's an example:
>> 
>> UPDATE comptes SET belongs_to_compte_id=42009 WHERE (codi_compte LIKE
>> '1%' AND empresa_id=2 AND nivell=11); // takes forever to finish
> 
> [...]
> 
>> but the same SELECT count, it's immediate:
>> 
>> SELECT count(id) FROM comptes WHERE codi_compte LIKE '1%' AND
>> empresa_id=2 AND nivell=11;
> 
> Maybe there is a check or constraint on comptes.belongs_to_compte_id that
> might take longer?

no, there's no check or constraint (no foreign key, ...) on this field.

I'm now using another database with the same structure and data, and the delay 
doesn't exist there; there must be something wrong in my current development 
database.

I'm checking this now ...

thanks,

r.


> 
> Cheers,
> 
>   -Mike
> 
> --
> Michał Roszka
> m...@if-then-else.pl
> 
> 





Re: [GENERAL] SELECT is immediate but the UPDATE takes forever

2010-12-08 Thread Raimon Fernandez

On 8dic, 2010, at 18:18 , Vick Khera wrote:

> 2010/12/7 Raimon Fernandez :
>> I'm now using another database with the same structure and data, and the delay 
>> doesn't exist there; there must be something wrong in my current development 
>> database.
>> 
> 
> does autovacuum run on it?

no

> is the table massively bloated?  

no

> is your disk system really, really slow to allocate new space?

no


now:

well, after a VACUUM things are going faster ... I'm still trying to analyze 
the function, as it seems there are other bottlenecks, but at least the first 
update is now faster than before ...
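For context, the pattern in this thread (SELECT fast, UPDATE slow, a manual VACUUM fixes it) is the classic sign of dead-tuple bloat left behind by earlier UPDATEs/DELETEs. A hedged sketch of how one might check, assuming the statistics collector is enabled (view and column names per the 8.3/9.0-era catalogs, table name taken from the thread):

```sql
-- Dead row versions accumulated since the last vacuum
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'comptes';

-- Reclaim the dead space and refresh planner statistics
VACUUM ANALYZE comptes;
```

Enabling autovacuum (off in this thread) would keep the bloat from building up in the first place.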

thanks,

r.



Re: [GENERAL] SELECT is immediate but the UPDATE takes forever

2010-12-08 Thread Raimon Fernandez

On 7dic, 2010, at 16:37 , Tom Lane wrote:

>> Quoting Raimon Fernandez :
>>> I want to understand why one of my postgresql functions takes an
>>> eternity to finish.
> 
>> Maybe there is any check or constraint on belongs_to_compte_id.comptes that
>> might take longer?
> 
> Or maybe the UPDATE is blocked on a lock ... did you look into
> pg_stat_activity or pg_locks to check?

no, there's no lock and nothing blocked ... I'm the only user connected to my 
development test database and I'm sure there are no locks, and even more sure 
after looking at pg_locks :-)
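For reference, a query along these lines can confirm that no session is stuck waiting on a lock; a sketch assuming a pre-9.2 server of this era (where pg_stat_activity still exposes procpid and current_query):

```sql
-- Ungranted lock requests, joined to the session waiting on each one
SELECT a.procpid, a.current_query, l.locktype, l.mode, l.relation::regclass
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```

An empty result means no UPDATE is blocked behind another transaction's lock.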

thanks,

r.



[GENERAL] use a variable name for an insert in a trigger for an audit

2010-12-08 Thread Raimon Fernandez
Hello,

I have to audit all the changes for all rows of one database.

I have a trigger that executes BEFORE any update or delete, and simply copies the 
row (INSERT INTO) into the replicated table.

For example, every table has the same name plus '_audit' at the end and belongs 
to the schema audit:

table public.persons => audit.persons_audit

I don't want to create specific triggers/functions for every table, so I want 
to modify the table name in the INSERT INTO using TG_TABLE_NAME, but I 
can't make it work.

Also I can't see a working solution in the archives, and some examples are quite 
messy, so maybe I have to rethink how I'm doing things, or just create a 
specific trigger for each table.

Here is my function; for now I'm only testing the INSERT:

...
DECLARE
    tableRemote varchar;
BEGIN

    IF TG_TABLE_NAME = 'assentaments' THEN
        tableRemote := 'audit.' || TG_TABLE_NAME || '_audit';
    END IF;

    --
    -- Create a row in table_audit to reflect the operation performed on emp,
    -- make use of the special variable TG_OP to work out the operation.
    --

    IF (TG_OP = 'DELETE') THEN
        EXECUTE 'INSERT INTO audit.assentaments_audit SELECT CURRVAL(''audit_id_seq''), 5, OLD.*';
        RETURN OLD;
    ELSIF (TG_OP = 'UPDATE') THEN
        -- this is the failing statement: a variable can't stand in for a table name
        INSERT INTO tableRemote SELECT CURRVAL('audit.audit_id_seq'), 3, OLD.*;
        RETURN OLD;
    END IF;
    RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
...
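For what it's worth, PL/pgSQL cannot take a table name from a variable in a plain INSERT; the usual workaround is to build the statement as a string and EXECUTE it. A hedged sketch of one generic audit trigger, assuming PostgreSQL 8.4+ (for EXECUTE ... USING); the sequence name and the 3/5 operation codes are taken from the thread:

```sql
CREATE OR REPLACE FUNCTION audit.process_audit() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        -- Build the target table name dynamically: audit.<table>_audit
        EXECUTE 'INSERT INTO audit.' || quote_ident(TG_TABLE_NAME || '_audit')
             || ' SELECT currval(''audit.audit_id_seq''), '
             || CASE WHEN TG_OP = 'UPDATE' THEN '3' ELSE '5' END
             || ', ($1).*'
          USING OLD;   -- pass the old row as a parameter, not by string-splicing
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```

The same function could then be attached to every audited table with CREATE TRIGGER ... BEFORE UPDATE OR DELETE ... FOR EACH ROW EXECUTE PROCEDURE audit.process_audit();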

thanks,

regards,






Re: [GENERAL] SELECT is immediate but the UPDATE takes forever

2010-12-09 Thread Raimon Fernandez

On 9dic, 2010, at 14:32 , Vick Khera wrote:

>> well, after a VACUUM things are going faster ... I'm still trying to analyze 
>> the function, as it seems there are other bottlenecks, but at least the 
>> first update is now faster than before ...
>> 
> 
> If that's the case then your 'no' answer to "is the table bloated" was 
> probably incorrect,

here you may be right

> and your answer to "is your I/O slow to grow a file" is also probably 
> incorrect.

not sure, as I'm not experiencing any slowness on the same machine with other 
PostgreSQL databases of more or less the same size; I'm still a real 
newbie ...

thanks!

regards,

raimon



Re: [GENERAL] use a variable name for an insert in a trigger for an audit

2010-12-09 Thread Raimon Fernandez

On 9dic, 2010, at 04:40 , Raimon Fernandez wrote:

> Hello,
> 
> I have to audit all the changes for all rows of one database.
> 
> [...]
> 

finally I've moved all the audit tables to a new schema called audit, and the 
audited tables now have the same name as the 'master' tables.

In the trigger function I want to change the default schema to audit so I can use 
the same table name, but it seems that I can't change the schema inside the function.

Also, as now the audit tables belong to the audit schema and have the same 
name, I'm trying to use just the TG_TABLE_NAME as this:

 INSERT INTO TG_TABLE_NAME  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;

but that isn't allowed either ...

Do I always have to specify a fixed table name for the INSERT INTO to work?

If I use:

 INSERT INTO assentaments  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;

this works perfectly: as the trigger function belongs to the audit schema, I 
can use the same table name. But I can't use TG_TABLE_NAME, so I have only 
two options:

- use the same trigger function with IF/ELSIF to test which table invoked the 
trigger function 
- or just write a different trigger function for each table.

Which is the best option?

thanks for your guide!

regards,

r.








[GENERAL] Getting number of affected rows after DELETE FROM

2010-12-17 Thread Raimon Fernandez
Hi,

I'm trying to solve what I think must be a really trivial question.

When I use psql, after every DELETE FROM table WHERE id= I get how many rows 
were affected, in this case deleted.

I've also implemented the full frontend/backend protocol v3, and there, after a 
CommandComplete, I also receive how many rows were affected.

But now I'm using REALstudio (www.realsoftware.com) with their plugin, and I 
can't get the rows affected.

I can send a simple DELETE FROM table WHERE id=  and all I get is 
nothing: no rows, no result set, no info, even if the action didn't delete any row 
because the id was wrong.

They say that if the DELETE gives an empty string, it means that PostgreSQL isn't 
returning anything, and that I have to get those values through some special 
mechanism, like return parameters.

In PL/pgSQL I've sometimes used GET DIAGNOSTICS  = ROW_COUNT or 
FOUND with great success, but I really can't make them work outside their main 
function.

Is there something like SELECT lastval(); but for affected rows?
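One server-side workaround, in case the client library never exposes the command tag: wrap the DELETE in a PL/pgSQL function and return the count from GET DIAGNOSTICS. A sketch under stated assumptions (mytable and its id column are placeholders):

```sql
CREATE OR REPLACE FUNCTION delete_and_count(p_id integer) RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    DELETE FROM mytable WHERE id = p_id;
    GET DIAGNOSTICS n = ROW_COUNT;  -- rows affected by the previous statement
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- SELECT delete_and_count(42);  -- returns the number of rows deleted
```

The client then reads the count as an ordinary single-row result, which any driver can handle.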

thanks in advance,

regards,

r.



Re: [GENERAL] Getting number of affected rows after DELETE FROM

2010-12-19 Thread Raimon Fernandez

On 19dic, 2010, at 10:33 , Jasen Betts wrote:

> On 2010-12-17, Raimon Fernandez  wrote:
>> Hi,
>> 
>> I'm trying to solve what I think must be a real trivial question.
>> 
>> [...]
>> 
>> 
>> There is something like select lastval(); but for rows affected ?
>> 
>> thanks in advance,
> 
> The easiest work-around is to add "returning true" at the end of your delete; 
> then the delete will return some row(s) when it succeeds.
> 
> 

thanks for your idea ...

I've tried adding RETURNING TRUE after my DELETE FROM x: when it 
succeeds I get only a True; when PostgreSQL can't delete the row because it can't 
find it, I get NIL.

Just to be sure: RETURNING TRUE is not mandatory to get how many rows were 
affected after an INSERT/DELETE/UPDATE, is it?

I'm not sure if REALstudio uses libpq in their plugin. Does libpq return how 
many rows were affected, or at least have some option to return those values?

Last year I did some PostgreSQL work for iPhone and compiled libpq, but only 
for SELECTs, not INSERTs or DELETEs, and I'm not sure about this option.
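For the record, libpq does expose the affected-row count: PQcmdTuples() returns it (as a string) from the command tag of a completed INSERT/UPDATE/DELETE. A fragment, not runnable on its own, assuming an open PGconn *conn and a placeholder table mytable:

```
/* Rows affected by a data-modifying command, read from the command tag */
PGresult *res = PQexec(conn, "DELETE FROM mytable WHERE id = 42");

if (PQresultStatus(res) == PGRES_COMMAND_OK)
    printf("rows deleted: %s\n", PQcmdTuples(res)); /* e.g. "1" or "0" */

PQclear(res);
```

So if the plugin is built on libpq, the information is available to it; whether it is surfaced to REALstudio code is up to the plugin.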

> Best solution is to get realstudio to fix their plugin or use a different 
> framework.

Of course, but those 'commercial frameworks' that are not really interested in 
doing professional front ends for professional databases like PostgreSQL are 
lazy; first I have to demonstrate that they are doing something wrong in their 
plugin, or at least that they have a simple option to implement this, which I 
consider mandatory for professional databases.

Also I'm open to other frameworks but there are few that can deploy the same 
code to Windows, OS X and Linux.

Thanks again,

regards,

r.







[GENERAL] pg_restore 8.x to postgreSQL 9.x functions and triggers aren't created

2010-12-20 Thread Raimon Fernandez
Hello,

We have two PostgreSQL servers running the latest 9.x for testing, but when 
we use pg_dump and pg_restore, our functions and triggers are never copied to 
the PostgreSQL 9.x server.

This is how we restore:

data=`date +%Y_%m_%d`
pg_restore -c -i -h 192.168.0.9 -p 5432 -U postgres -d globalgest -v 
"/Users/montx/documents/BackUp/globalgest/globalgest_"$data


we have only two warnings:
...
pg_restore: dropping FUNCTION rowsaffected()
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 32; 1255 36705 FUNCTION 
rowsaffected() postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  function 
public.rowsaffected() does not exist
Command was: DROP FUNCTION public.rowsaffected();
pg_restore: dropping FUNCTION repairassentamentsnumero(integer)
pg_restore: dropping FUNCTION process_audit()
pg_restore: dropping FUNCTION increment_lock_version()
pg_restore: dropping FUNCTION increment(integer)
pg_restore: dropping FUNCTION getserialnumber(integer, integer)
pg_restore: dropping FUNCTION comptesrepair()
pg_restore: dropping FUNCTION rowsaffected()
pg_restore: [archiver (db)] Error from TOC entry 31; 1255 36704 FUNCTION 
rowsaffected() postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  function 
menus.rowsaffected() does not exist
Command was: DROP FUNCTION menus.rowsaffected();
pg_restore: dropping FUNCTION process_audit()
pg_restore: dropping PROCEDURAL LANGUAGE plpgsql
pg_restore: dropping COMMENT SCHEMA public
pg_restore: dropping SCHEMA public
pg_restore: dropping SCHEMA menus
pg_restore: dropping SCHEMA audit
pg_restore: creating SCHEMA audit
pg_restore: creating SCHEMA menus
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating PROCEDURAL LANGUAGE plpgsql
pg_restore: creating FUNCTION process_audit()
pg_restore: creating FUNCTION rowsaffected()
pg_restore: creating FUNCTION comptesrepair()
pg_restore: creating FUNCTION getserialnumber(integer, integer)
pg_restore: creating FUNCTION increment(integer)
pg_restore: creating FUNCTION increment_lock_version()
pg_restore: creating FUNCTION process_audit()
pg_restore: creating FUNCTION repairassentamentsnumero(integer)
pg_restore: creating FUNCTION rowsaffected()
pg_restore: creating FUNCTION updateallcomptes(integer)
pg_restore: creating FUNCTION updatecompte(integer)
pg_restore: creating FUNCTION updatecompte11(integer)
pg_restore: creating FUNCTION updatecompte3_5(integer)
pg_restore: creating TABLE assentaments
pg_restore: creating TABLE audit
...


when restoring the same file to any of our postgreSQL Servers 8.x we have no 
problems.

thanks,

r.



Re: [GENERAL] pg_restore 8.x to postgreSQL 9.x functions and triggers aren't created [solved]

2010-12-20 Thread Raimon Fernandez
ok, solved. It was a bug in the application that interfaces with PostgreSQL ...

sorry,

regards,

r.


On 20dic, 2010, at 09:28 , Raimon Fernandez wrote:

> Hello,
> 
> [...]





[GENERAL] libpq ASYNC with PQgetResult and PQisBusy

2010-12-20 Thread Raimon Fernandez
Hello,


I'm trying to create a plugin using the libpq.

Almost everything is working, and now I want to implement asynchronous 
queries.

I send the SQL using PQsendQuery, and my interface doesn't block; great.

Now, every time I check for PQgetResult, my interface gets blocked.

So now I'm using PQisBusy to check whether PostgreSQL is still busy, so I can 
safely call PQgetResult without blocking, or just wait *some time* before 
sending a new PQisBusy.

Before every PQisBusy call, I call PQconsumeInput to update the status.

So, in pseudo code:


1. PQsendQuery (a really slow SELECT, just to test the asynchronous path)

2. From a timer, every 0.2 seconds, I call:
2.1 PQconsumeInput
2.2 PQisBusy
2.3 evaluate => if it's busy => sleep and start again from 2; if it's not busy, continue
2.4 call PQgetResult

Using PQisBusy isn't working: it takes much longer just to return 0 
(not busy), and at that moment PQgetResult is null.

If I force a call to PQgetResult just one second after the PQsendQuery, I 
get the result without testing PQisBusy.


here is my montxPG_isBusy:

static long montxPG_isBusy(void)
{
    int execStatus;
    int consumeeVar;

    consumeeVar = PQconsumeInput(gPGconn);

    if (consumeeVar == 0)
        return (long) PGRES_FATAL_ERROR;

    execStatus = PQisBusy(gPGconn);

    return (long) execStatus;
}


thanks,

regards,

r.



Re: [GENERAL] libpq ASYNC with PQgetResult and PQisBusy

2010-12-20 Thread Raimon Fernandez

On 20dic, 2010, at 18:48 , Tom Lane wrote:

>> So now I'm using PQisBusy to check whether PostgreSQL is still busy, so I can 
>> safely call PQgetResult without blocking, or just wait *some time* 
>> before sending a new PQisBusy.
> 
> Your proposed code is still a busy-wait loop.

This is how all the examples I've found are written.

Last year I was implementing the FE/BE protocol v3, and there I used what you 
propose: a TCP socket with events, no loops and no waits, just events.

> What you should be doing is waiting for some data to arrive on the socket.  

Where do I have to wait: in a function inside my plugin, or in the framework 
that uses my plugin?

> Once you see
> read-ready on the socket, call PQconsumeInput, then check PQisBusy to
> see if the query is complete or not.  If not, go back to waiting on the
> socket.  Typically you'd use select() or poll() to watch for both data
> on libpq's socket and whatever other events your app is interested in.

Here is what I've found:

extern int  PQsocket(const PGconn *conn);

There are some examples in the postgreSQL documentation:

/*
 * Sleep until something happens on the connection.  We use select(2)
 * to wait for input, but you could also use poll() or similar
 * facilities.
 */

int     sock;
fd_set  input_mask;

sock = PQsocket(conn);

if (sock < 0)
    break;                          /* shouldn't happen */

FD_ZERO(&input_mask);
FD_SET(sock, &input_mask);

if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
{
    fprintf(stderr, "select() failed: %s\n", strerror(errno));
    exit_nicely(conn);
}

/* Now check for input */
PQconsumeInput(conn);

while ((notify = PQnotifies(conn)) != NULL)
{
    fprintf(stderr, "ASYNC NOTIFY of '%s' received from backend pid %d\n",
            notify->relname, notify->be_pid);
    PQfreemem(notify);
}


Is the select(2) you say it uses to wait this line: if (select(sock + 1, 
&input_mask, NULL, NULL, NULL) < 0)?

I can't see where it is 'sleeping'.


And is the approach you're referring to the only way to avoid blocking between 
the plugin calls and PostgreSQL?



>> here is my montxPG_isBusy
> 
>> static long montxPG_isBusy()
> 
>> {int execStatus;
>>  int consumeeVar;
>   
>>  consumeeVar = PQconsumeInput(gPGconn);
>   
>>  if (consumeeVar == 0) return (long) PGRES_FATAL_ERROR;
>   
>>  execStatus = PQisBusy(gPGconn);
>   
>>  return (long) execStatus;
>   
>> }
> 
> This code seems a bit confused.  PQisBusy returns a bool (1/0), not a
> value of ExecStatusType.

yes, here execStatus is just the name of the int, and yes, I know, a bad name ...

thanks again,

regards,

r.



Re: [GENERAL] libpq ASYNC with PQgetResult and PQisBusy

2010-12-20 Thread Raimon Fernandez

On 21dic, 2010, at 00:56 , Alban Hertroys wrote:

> On 20 Dec 2010, at 21:49, Raimon Fernandez wrote:
> 
>> The select(2) that says that are using for wait is this line ? if 
>> (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0) {
>> 
>> I can't see where is 'sleeping'
> 
> 
> See man 2 select.
> If you're on an OS without manual pages (Windows is pretty much the only 
> exception I know of), there are plenty of online versions of man pages 
> available. I'd suggest looking at the FreeBSD ones, as in my experience they 
> tend to be pretty keen on proper documentation.

thanks, I didn't know this was from the OS level.

I'm reading the documentation now ...

regards,

r.
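The "sleeping" in question happens inside select() itself: the kernel suspends the process until the descriptor becomes readable. A stdlib-only sketch of the idea (no libpq; a pipe stands in for the descriptor that PQsocket(conn) would return):

```c
#include <sys/select.h>
#include <unistd.h>

/* Block until fd is readable -- this is where the process sleeps.
 * With libpq, fd would come from PQsocket(conn).
 * Returns the number of ready descriptors (1 here), or -1 on error. */
static int wait_readable(int fd)
{
    fd_set input_mask;

    FD_ZERO(&input_mask);
    FD_SET(fd, &input_mask);

    /* The kernel suspends the caller until fd has data (or an error) */
    return select(fd + 1, &input_mask, NULL, NULL, NULL);
}
```

In the libpq loop, a successful return would be followed by PQconsumeInput(conn) and a PQisBusy(conn) check, as described in the replies above.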



Re: [GENERAL] libpq ASYNC with PQgetResult and PQisBusy

2010-12-21 Thread Raimon Fernandez
hi again,

On 20dic, 2010, at 18:48 , Tom Lane wrote:

>> So, now I'm using the PQisBusy to check if postgre is still busy and I can 
>> safely call the PQgetResult wihtout blocking, or just wait *some time* 
>> before sending a new PQisBusy.
> 
> Your proposed code is still a busy-wait loop.  What you should be doing
> is waiting for some data to arrive on the socket.  Once you see
> read-ready on the socket, call PQconsumeInput, then check PQisBusy to
> see if the query is complete or not.  If not, go back to waiting on the
> socket.  Typically you'd use select() or poll() to watch for both data
> on libpq's socket and whatever other events your app is interested in.

As I'm doing this as an exercise (and who knows), I'm still playing with my 
previous approach; the same question, put a different way:

Why is PQisBusy telling me that it's still busy, if I can send PQgetResult and 
obtain the result at that same moment?

Now I'm not in a loop: I just send the PQsendQuery from a button, and then I 
click another button that simply checks PQconsumeInput and PQisBusy. I'm 
clicking it each second and it always returns busy, but if I send PQgetResult 
I get the result.

So why isn't libpq updating its status?

thanks again,

r.



[GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-26 Thread Raimon Fernandez



Hello,


I'm trying to implement the frontend protocol over TCP from REALbasic  
to PostgreSQL.


I understand most of the PostgreSQL docs, but there are some points that may  
need more clarification.


Does anyone have experience with this, so I can start asking questions?

:-)


The front-end tool is REALbasic, but it could be any tool that has TCP/IP  
communication, so that part is irrelevant here.


Actually I can connect to the PostgreSQL server, get and parse some  
parameters, and send some SELECTs, but I don't like how I'm doing it, so  
any guidance, wiki, blog or how-to where I can get more information  
would be perfect...


thanks for your time,

regards,

r.









Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-27 Thread Raimon Fernandez


On 27/10/2009, at 0:17, John R Pierce wrote:


Alvaro Herrera wrote:

I'm trying to implement the front-end protocol with TCP from
REALbasic to PostgreSQL.



That sounds the most difficult way to do it.  Can't you just embed
libpq?



yah, seriously.   the binary protocol is not considered stable; it  
can change in subtle ways in each version.  libpq handles the  
current version and all previous versions, and exposes all methods.


Well, if I specify that I'm using protocol 3.0, it should work and  
be stable, no?


REALbasic has a plugin for PostgreSQL, but it is synchronous and  
freezes the GUI when interacting with PG. Normally this is not a  
problem, as the SELECTs/UPDATEs/... are fast enough, but sometimes  
we need to fetch 1000, 5000 or more rows and the application stops  
responding; I can't have a progress bar because everything is frozen  
until all data has come from PG, so we need a better way.


I found someone who created what I'm trying to do, with the same  
language, the same protocol and the same version, but it's a  
commercial app and we need the source code. The communication is made  
through TCP/IP, really fast and always asynchronous; the application  
is always responsive.


I know almost nothing about C; implementing it would be too much  
work, and maybe we would have the same problem.


Anyway, I'll try to go further with the binary implementation, at  
least as a learning exercise ...


:-)

thanks,


regards,


r.




Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-27 Thread Raimon Fernandez


On 27/10/2009, at 8:29, John R Pierce wrote:


Raimon Fernandez wrote:
REALbasic has a plugin for PostgreSQL, but it is synchronous and  
freezes the GUI when interacting with PG. [...]


I would think the better solution would be to get the vendor to fix  
its native plugin to support an asynchronous mode.   Or does this  
REALbasic support any sort of client-server or multi-task type  
programming?  If so, have a separate task or thread do the SQL  
operations that your interactive program interfaces with...


The plugin is from the same company, REALbasic, and it's free.

They don't have any plans to support asynchronous mode, maybe only in  
the plugin for their own database, REALserver, which serves an SQLite  
database.


REALbasic supports threads (multitasking), but they also freeze when  
using the plugin while waiting for a complete answer from the plugin  
call ...


Thanks,

regards,


raimon




Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-27 Thread Raimon Fernandez


On 27/10/2009, at 14:00, Alvaro Herrera wrote:


Raimon Fernandez wrote:


REALbasic has a plugin for PostgreSQL, but it is synchronous and  
freezes the GUI when interacting with PG. [...] sometimes we need to  
fetch 1000, 5000 or more rows and the application stops responding [...]


If you need to fetch large numbers of rows, perhaps it would be better
to use a cursor and fetch a few at a time, moving the progress bar in
the pauses.  So instead of

SELECT * FROM sometab;

you would do
DECLARE foo CURSOR FOR SELECT * FROM sometab;

and then, repeatedly,
FETCH 50 FROM foo

Until there are no more rows.

This can still freeze your app in certain cases, but it will probably  
be a lot better than what you currently have.  And it will be MUCH  
easier/cheaper to do than working with the FE/BE protocol yourself.


Yes, I'm aware of this possibility but it's a lot of extra work also.

The initial idea of TCP/IP still remains in my brain ...

:-)

thanks,


raimon




Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-27 Thread Raimon Fernandez

Hello,


As this thread it's alive, I'm going to ask more specific questions:


After sending the startup sequence, I receive the parameter list. I don't
need to send Authentication as I'm using a Trust user, to make
things easier.


I receive string data, I suppose it's text data.

I can parse the data received, search for a B.

I don't know if it's better to transform the data into Hex.

After the S I found three char(0)s and later the size of the packet, and
later the name + char(0) (the separator between name and value), the
value, and so on.


Why do I find those three char(0)s after the S and before the packet
length?


Or is it because the Int32 has 4 bytes?

thanks,

regards,

raimon



Documentation:
-
ParameterStatus (B)
Byte1(’S’)
Identifies the message as a run-time parameter status report.
Int32
Length of message contents in bytes, including self.
String
The name of the run-time parameter being reported.
String
The current value of the parameter.







Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-27 Thread Raimon Fernandez


On 27/10/2009, at 14:41, Alvaro Herrera wrote:


Raimon Fernandez wrote:


After the S I found three char(0)s and later the size of the packet,
and later the name + char(0) (the separator between name and
value), the value, and so on.

Why I found those three char(0) after the S and before the packet
length?


Because the length is an int32.  There are 3 zeros because the packet
length is less than 256.


here is where I'm doing a bad parsing.

How do I know where the length ends?

I know it starts after the S, and in the parsing that I have, the
length is always 4 chars.


Do I have to evaluate them one by one?

thanks,

r.



--
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support








Re: [GENERAL] Implementing Frontend/Backend Protocol TCP/IP

2009-10-27 Thread Raimon Fernandez


On 27/10/2009, at 15:06, Alvaro Herrera wrote:


Raimon Fernandez wrote:


how I know where the length ends ?


You count 4 bytes.


thanks,

I'm now parsing the resulting string as a binary string and it's all
making sense ...


thanks for your help,

raimon



--
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.








Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez

Hello,


From what I've read in the manuals, this operation is only valid before
PostgreSQL has finished processing the SELECT statement?


If, for example, I send a SELECT * from myTable, it has 2 rows, and
postgres starts sending the rows, how can I cancel this operation?


I thought Cancelling Requests would be perfect for this ... the
workaround is closing and reopening the connection, but I don't
like this approach ...


thanks,

regards,


raimon




Re: [GENERAL] DataRow Null values Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez

Here I'm again ...


I'm parsing the DataRow(B), and I'm having problems with NULL values.

In the docs I can read that they have a -1 value, and no bytes follow
for the value.


But I'm getting a 1020 value instead of -1:

Int32  The length of the column value, in bytes (this count does not  
include itself). Can be zero.
As a special case, -1 indicates a NULL column value. No value bytes  
follow in the NULL case.


byte 1: 255 &HFF
byte 2: 255 &HFF
byte 3: 255 &HFF
byte 4: 255 &HFF
----
1020 decimal (summing the bytes), or

&HFFFFFFFF = 4294967295 unsigned,

but never -1.

If I change my code to be aware of the 1020 instead of -1, I can  
extract the next field/values without problems.
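A quick check in Python (the 1020 presumably comes from summing the four 255s rather than decoding them as one integer): reading the same four 0xFF bytes with a signed unpack gives the -1 the docs describe.

```python
import struct

raw = b"\xff\xff\xff\xff"  # the four 255 bytes seen before the next field

as_unsigned = struct.unpack("!I", raw)[0]  # 4294967295 (MAXUINT32)
as_signed = struct.unpack("!i", raw)[0]    # -1, the NULL marker

print(as_unsigned, as_signed)
```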



thanks,


regards,


raimon




Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 10:29, Craig Ringer wrote:


On 2/11/2009 5:15 PM, Raimon Fernandez wrote:

For what I've read in the manuals, this operation is only valid  
before

PostgreSQL has finished processing the SELECT statement ?

If for example I send  a SELECT * from myTable, it has 2 rows,  
and

postgre starts sending the rows, how I can cancel this operation ?


Assuming you're asking "is there any way to cancel a running query  
using

the postgresql tcp/ip protocol" then, as you've noted, you can cancel
the request until you start getting data.


yes,



After that, you can still cancel the query by establishing another
connection to the server and calling pg_cancel_backend( ) at the SQL
level. This does, unfortunately, involve the overhead of setting up  
and

tearing down a connection and the associated backend.


I assume the PID is the same as the process_id that I get from  
BackendKeyData ?



BackendKeyData (B)
Byte1(’K’) Identifies the message as cancellation key data. The  
frontend must save these values if it  wishes to be able to issue  
CancelRequest messages later.

Int32(12)  Length of message contents in bytes, including self.
Int32  The process ID of this backend.
Int32  The secret key of this backend.

process_id= 22403

I can send a SELECT, and while it's running, I open a new connection  
with psql and send a SELECT pg_cancel_backend(22403)   and postgresql  
returns t (true), but I'm still receiving rows in the first process ...


thanks,

regards,

r.



Re: [GENERAL] DataRow Null values Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 10:37, Craig Ringer wrote:


On 2/11/2009 5:21 PM, Raimon Fernandez wrote:

Here I'm again ...


I'm parsing the DataRow(B), and I'm having problems with NULL values.

In the docs I can read they have a -1 value, an no bytes follow  
them for

the value.

But I'm getting a 1020 value instead of -1


You're using RealBasic or something, right?

If so, you're probably doing something funky with signed/unsigned
integer handling and the representation of integers.

-1 is 0xFFFFFFFF as a _signed_ 32 bit integer, same in little-endian or
big-endian form. The same hex value as an unsigned integer is
4294967295.


...

I don't know where you're getting the 1020, but 4294967295 is  
MAXUINT32

and suggests you're treating the data as an unsigned rather than a
signed 32 bit integer.


yes, you're right, I had old legacy code that was incorrectly
processing the received data as a string directly instead of getting
the binary ...


what I see is that it was working all the time, except for the -1 ...


now it returns -1 ...

thanks !

regards,


raimon





Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 15:38, Tom Lane wrote:


Craig Ringer  writes:

On 2/11/2009 5:15 PM, Raimon Fernandez wrote:
If for example I send  a SELECT * from myTable, it has 2 rows,  
and

postgre starts sending the rows, how I can cancel this operation ?


Assuming you're asking "is there any way to cancel a running query  
using

the postgresql tcp/ip protocol" then, as you've noted, you can cancel
the request until you start getting data.



After that, you can still cancel the query by establishing another
connection to the server and calling pg_cancel_backend( ) at the SQL
level. This does, unfortunately, involve the overhead of setting up  
and

tearing down a connection and the associated backend.


The above distinction is nonsense.  Query cancel works the same way
whether you have started receiving data or not --- it will open a
transient connection in any case.  Otherwise there would be race
condition problems if the backend is just about to start sending data.


So my approach of sending only the CancelRequest should work?

Always from a new connection: before sending the StartUpSequence, just
open, connect and send it?
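For reference, the CancelRequest packet itself is tiny. A sketch (Python; the PID is the one from this thread, the secret key is made up) of what goes out on the new connection in place of a startup message:

```python
import struct

def cancel_request(process_id, secret_key):
    """CancelRequest: Int32(16) length, Int32(80877102) request code,
    then the PID and secret key saved from BackendKeyData."""
    return struct.pack("!iiii", 16, 80877102, process_id, secret_key)

pkt = cancel_request(22403, 0x12345678)  # secret key here is hypothetical
assert len(pkt) == 16
```

The connection carrying this packet is then simply closed; no startup message is ever sent on it.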



Quoted from the documentation:

"The cancellation signal might or might not have any effect — for  
example, if it arrives after the
backend has finished processing the query, then it will have no  
effect. If the cancellation is effective,
it results in the current command being terminated early with an error  
message. "


Here I understand that maybe it will have NO effect, so postgresql
will still send rows and rows and rows ...



thanks,

regards,

raimon



Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 15:12, John DeSoi wrote:



On Nov 2, 2009, at 4:15 AM, Raimon Fernandez wrote:

If for example I send  a SELECT * from myTable, it has 2 rows,  
and postgre starts sending the rows, how I can cancel this  
operation ?


I thought Cancelling Requests would be perfect for this ... the  
workaround is closing and opening again the connection but I don't  
like this approach ...


A cleaner solution is to use the extended query protocol to limit
the total number of rows returned. For example, in my application I
limit the result set to 1000 rows. If I have not received all of the
results, I switch to a server side cursor but still have the first
1000 rows for immediate display to the end user.


This is another option, but at least I want to make it to work the  
CancelRequest ...


:-)

As an experiment, I'm doing with LIMIT and OFFSET instead of cursors  
at this moment ...


thanks,


regards,


raimon




Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 17:35, Tom Lane wrote:


Raimon Fernandez  writes:

Quoted from the documentation:
"The cancellation signal might or might not have any effect — for
example, if it arrives after the
backend has finished processing the query, then it will have no
effect.



Here I understand that maybe it will have NO effect, so postgresql
will still sending rows and rows and rows ...


If you're too late, the backend has already sent all the rows.  There's
not much we can do about data that's already in flight.  There probably
won't be that much of it though, as TCP stacks don't buffer infinite
amounts of data.


Does the sentence 'backend has finished processing the query' mean that
postgresql has finished processing the select and has also sent all
the rows?


I thought it meant only processing the request, with the rows not yet
all sent.


If the rows have been sent, and there are data in the TCP buffer,  
that's another story ...


thanks,

raimon



Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 17:53, Tom Lane wrote:


Raimon Fernandez  writes:

The sentence 'backend has finished processing the query' means that
postgresql has finished processing the select and also has sent all
the rows ?


There is no distinction; rows are sent as they are generated.


Yes, but  a SELECT can return 5 rows, and as you say, postgresql  
sends the rows as they are generated.


My question still remain unclear to me:

when has postgres finished processing the select: just before sending
the first row (1), in the middle (2), or at the end (3), when the last
row has been sent?


If I send the CancelRequest when postgres is at point 3, I'm too late,
but if postgres is at 1 or 2, the CancelRequest will have some effect.


Am I still wrong here?

thanks for clarifying the concept!

regards,

raimon



Re: [GENERAL] Cancelling Requests Frontend/Backend Protocol TCP/IP

2009-11-02 Thread Raimon Fernandez


On 02/11/2009, at 20:01, John DeSoi wrote:



On Nov 2, 2009, at 12:17 PM, Raimon Fernandez wrote:

when postgres has finished processing the select, just before  
sending the first row(1), in the middle(2), or at the end(3), when  
the last row has been sent ?


If I send the CancelRequest when postgres is in point 3, I'm too  
late, but if postgres is in 1 or 2, the CancelRequest will have  
some effect.


I'm still wrong here ?

thanks for clarification the concept!


Yes, it will have some effect in cases 1 and 2. You will know it  
worked because you'll get error 57014 - canceling statement due to  
user request.


An easy way to test this out is to call pg_sleep with a big number  
and then cancel the query on another connection. You won't have to  
worry about the timing of receiving all rows or not.


thanks!

Now I can cancel them using the FrontEnd or pg_cancel_backend. I
had an error in my FrontEnd function; now it is working ...


:-)

thanks for all,

regards,

raimon




[GENERAL] Where I can find "SSL specification"?

2009-11-04 Thread Raimon Fernandez

Hello,


I want to implement SSL in my Frontend implementation with TCP/IP.

The manual just says, after receiving an S:

"To continue after S, perform an SSL startup handshake (not described  
here, part of the SSL specification) with the server."


I can't find it in the manual or in the postgresql web page.

thanks,

regards,


raimon



Re: [GENERAL] MD5 Authentication

2009-11-05 Thread Raimon Fernandez

I'm blocked ...


On 06/11/2009, at 6:27, John DeSoi wrote:



On Nov 5, 2009, at 12:35 PM, Raimon Fernandez wrote:

at least, my first md5 (psw+user) is the same as the pg_shadow
(without the 'md5') ...


should I md5 the first md5 as I get it, as a string (like the
username), or byte by byte ?


As far as I know, a string. But it is unclear to me what happens  
when the user or database name has non-ascii characters. The client  
encoding is not established until after authentication.


I asked about that a while ago but did not get any responses.


After reading all the emails about it, I'm blocked; maybe someone can
see where the error is and shed some light on it ...


user: postgres (test values)
psw: postgres (test values)

first md5("postgrepostgres") ==> 44965A835F81EC252D83961D2CC9F3E1

salt: A6B76060


second md5("44965A835F81EC252D83961D2CC9F3E1"+"A6B76060") ==>   
34F74BEF877202D4399092F97EFE8712



send to server:  header + length +  
"md5"+"34F74BEF877202D4399092F97EFE8712" ==> Fatal error, password  
Authentication failed for user postgres ...


thanks,

regards,


raimon




Re: [GENERAL] MD5 Authentication

2009-11-06 Thread Raimon Fernandez


On 06/11/2009, at 8:48, Raimon Fernandez wrote:


I'm blocked ...


On 06/11/2009, at 6:27, John DeSoi wrote:



On Nov 5, 2009, at 12:35 PM, Raimon Fernandez wrote:

at least, my first md5 (psw+user) is the same as the pg_shadow
(without the 'md5') ...


should I md5 the first md5 as I get it as string (like username)  
or byte by byte ?


As far as I know, a string. But it is unclear to me what happens  
when the user or database name has non-ascii characters. The client  
encoding is not established until after authentication.


I asked about that a while ago but did not get any responses.


After reading all the emails about it, I'm blocked, maybe someone  
can see where the error is and shade some light on it ...


user: postgres (test values)
psw:postgres (test values)

first md5("postgrepostgres") ==> 44965A835F81EC252D83961D2CC9F3E1

salt: A6B76060


second md5("44965A835F81EC252D83961D2CC9F3E1"+"A6B76060") ==>   
34F74BEF877202D4399092F97EFE8712



send to server:  header + length +  
"md5"+"34F74BEF877202D4399092F97EFE8712" ==> Fatal error, password  
Authentication failed for user postgres ...


I've created a tcpdump with all information:

server =>

52 (R)
00 00 00 0C (12 length)
00 00 00 05 (5 => md5)
C8 C3 57 17 (token)



psql sends =>

70 00 00 00 28 6D 64 35 33 38 38 35 30 37 37 39 31 39 64 38 30 63 39  
35 62 33 32 34 65 39 63 36 38 65 39 64 37 66 64 63 00 => binary

p(md53885077919d80c95b324e9c68e9d7fdc => string


user: postgres
psw: postgre

I can't create an identical HASH with those values, because:

the first md5 is easy:  44965a835f81ec252d83961d2cc9f3e1c8c35717

Now we have to MD5 this one with the token:

1. 44965a835f81ec252d83961d2cc9f3e1c8c35717C8C35717 (uppercase and  
lowercase)

2. 44965a835f81ec252d83961d2cc9f3e1c8c35717c8c35717 (lowercase)
3. 44965a835f81ec252d83961d2cc9f3e1c8c35717 + &HC8 + &HC3 + &H57 + &H17
4. ??

which one is correct?
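For what it's worth, the documented recipe is variant 3 with lowercase hex: the inner digest is kept as 32 lowercase hex characters of *text*, and the 4 raw salt bytes (not their hex spelling) are appended before hashing again. A Python sketch, using the user/password/salt from the tcpdump above:

```python
import hashlib

def pg_md5_password(user, password, salt):
    """Response for AuthenticationMD5Password:
    'md5' + md5(md5(password + user) + salt), where the inner digest is
    lowercase hex text and the salt is the raw 4 bytes from the server."""
    inner = hashlib.md5(password.encode() + user.encode()).hexdigest()
    return "md5" + hashlib.md5(inner.encode() + salt).hexdigest()

# user postgres / password postgre / salt C8 C3 57 17, as in the dump
print(pg_md5_password("postgres", "postgre", b"\xc8\xc3\x57\x17"))
```

With these inputs it should reproduce the md53885... hash that psql sent in the dump.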

thanks,

regards,


raimon





Re: [GENERAL] Where I can find "SSL specification"?

2009-11-10 Thread Raimon Fernandez

Hello,


More on this ...


To be clear: just after receiving the S confirmation that PostgreSQL
can handle SSL connections, I have to switch my TCPSocket into SSL.


Immediately, I receive some errors, depending on my configuration:


0 - SSLv2: SSL (Secure Sockets Layer) version 2.  ERROR => 102
1 - SSLv23: SSL version 3, but can roll back to 2 if needed.  ERROR => 336031996
2 - SSLv3: SSL version 3.  ERROR => 336130315
3 - TLSv1: TLS (Transport Layer Security) version 1.  ERROR => 336150773



Navicat for PostgreSQL can connect and establish an SSL connection with my
PostgreSQL server.

pgAdmin III can also connect using SSL.

So, the problem must be in my code?

thanks,

regards,

raimon




[GENERAL] Encoding using the Frontend/Backend Protocol TCP/IP

2009-11-19 Thread Raimon Fernandez
Hello,


I'm trying to send some strings that have chars outside standard ASCII,
like çñàèó.

Once connected, the client and server both use UTF8 encoding.

And I'm sending all the strings encoded in UTF8.

At least the received ones are working, as I get the text exactly as it is, 
with special chars.

But when I'm trying to update a row using some of them, I'm getting an error: 

ERROR
08P01
Invalid string in message
pqformat.c
691
pq_getmstring
Invalid Front End message type 0
postgres.c
408
socketbackend
you have been disconected

How should I encode 


thanks,

regards,


r.



Re: [GENERAL] Encoding using the Frontend/Backend Protocol TCP/IP

2009-11-19 Thread Raimon Fernandez

On 19/11/2009, at 17:27, Kovalevski Andrei wrote:

> Hi
> 
> could it be that you have errors in your UTF8 string? For example you might 
> use UTF16 encoding, it can explain why some characters force errors but 
> others are not.

It only happens with values like àéïçñ. I think UTF8 can handle these ...



> Can you post here the string and its' encoded version?
> 

Original string:
QFUpdate transactions set description='Test ValuesdÇ' where id=113

UTF: 

QFUpdate transactions set description='Test ValuesdÇ' where id=113
510046557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133

It has also the header Q and the length ...

thanks,

regards,

r.


> Raimon Fernandez wrote:
>> Hello,
>> 
>> 
>> I'm trying to send some strings that have chars outside from standar ascii, 
>> like çñàèó
>> 
>> Once I'm connected, the client and server both uses UT8Encoding.
>> 
>> And I'm sending all the strings encoded in UTF8.
>> 
>> At least the received ones are working, as I get the text exactly as it is, 
>> with special chars.
>> 
>> But when I'm trying to update a row using some of them, I'm getting an 
>> error: 
>> ERROR
>> 08P01
>> Invalid string in message
>> pqformat.c
>> 691
>> pq_getmstring
>> Invalid Front End message type 0
>> postgres.c
>> 408
>> socketbackend
>> you have been disconected
>> 
>> How should I encode 
>> 
>> 
>> thanks,
>> 
>> regards,
>> 
>> 
>> r.
>> 
>>  
> 
> 





Re: [GENERAL] Encoding using the Frontend/Backend Protocol TCP/IP

2009-11-19 Thread Raimon Fernandez

On 19/11/2009, at 18:13, Raimon Fernandez wrote:

> 
> On 19/11/2009, at 17:27, Kovalevski Andrei wrote:
> 
>> Hi
>> 
>> could it be that you have errors in your UTF8 string? For example you might 
>> use UTF16 encoding, it can explain why some characters force errors but 
>> others are not.
> 
> It only happens with values like àéïçñ I think UTF8 can handle this ...


yes, it can handle it ...

if I do the encoding by hand in a very simple update, it works, so there's 
something with my UTF8 conversion that doesn't work ...

for example, instead of sending Ç I send its equivalent in UTF8, &HC3+&H87, and 
it works ...
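A two-line sanity check (Python): Ç really does encode to the two bytes C3 87, and it is the encoded byte length, not the character count, that the message length must cover.

```python
s = "Update transactions set description='Test ValuesdÇ' where id=113"
data = s.encode("utf-8")

assert b"\xc3\x87" in data      # Ç encodes to the two bytes C3 87
assert len(data) == len(s) + 1  # one byte longer than the character count
```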

thanks,

regards,






Re: [GENERAL] Encoding using the Frontend/Backend Protocol TCP/IP

2009-11-19 Thread Raimon Fernandez

On 19/11/2009, at 21:21, Kovalevski Andrei wrote:

> Hi,
> 
> the string is ok, but the problem is inside the message. The length of the 
> message is incorrect:
> 
> your message:
> 510046557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133
> it should be:
> 510045557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133

ok, thanks.

Finally it's working; there was a mistake on my part in sending the encoding ...

:-)

regards,

raimon

[GENERAL] Extended Query using the Frontend/Backend Protocol 3.0

2009-12-17 Thread Raimon Fernandez
Hello again,


I'm trying to integrate the extended query protocol with my libraries.

I'm sending a simple SELECT to validate the method, but I'm getting an Invalid 
Message Format.


50 => P
00 00 00 29 => length
6D7973746174656D656E74 00 => mystatement + null
73656C656374202A2066726F6D206D797461626C653B 00 => select * from mytable; + null
00 00 => number of parameters, zero

any idea ?

thanks and regards,


raimon






Re: [GENERAL] Extended Query using the Frontend/Backend Protocol 3.0

2009-12-18 Thread Raimon Fernandez

On 18/12/2009, at 2:26, John DeSoi wrote:

> 
> On Dec 17, 2009, at 11:13 AM, Raimon Fernandez wrote:
> 
>> I'm trying to integrate the extended query protocol with my libraries.
>> 
>> I'm sending a simple SELECT to validate the method, but I'm getting an 
>> Invalid Message Format.
> 
> I did not add up your byte count, but maybe this will help:
> 
> 
> (write-byte p stream)
> (write-int32 (+ int32-length (length name) 1 (length sql-string) 1 
> int16-length (* int32-length param-count)) stream) 
> (write-cstring name stream)
> (write-cstring sql-string stream)
> (write-int16 param-count stream)

I'm doing as you say:

mystatement => 11
select * from mytable; => 22

> (write-byte p stream) 
50 => P


> (write-int32 (+ int32-length (length name) 1 (length sql-string) 1 
> int16-length (* int32-length param-count)) stream) 

4 + 11 + 1 + 22 + 1 + 2 + 0 (param count=0) => 41

00 00 00 29 => length

> (write-cstring name stream)

6D7973746174656D656E74 00 => mystatement + null

> (write-cstring sql-string stream)

73656C656374202A2066726F6D206D797461626C653B 00 => select * from mytable; + null

> (write-int16 param-count stream)


00 00 => number of parameters, zero
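The counts above do add up to 0x29, so the framing itself looks right. A sketch (Python; the builder name is mine) that reproduces exactly this message, for comparison against the wire dump:

```python
import struct

def parse_message(name, query, param_oids=()):
    """Build a Parse ('P') message: the Int32 length counts everything
    after the type byte, including the length field itself."""
    body = (name.encode() + b"\x00"
            + query.encode() + b"\x00"
            + struct.pack("!h", len(param_oids))
            + b"".join(struct.pack("!i", oid) for oid in param_oids))
    return b"P" + struct.pack("!i", 4 + len(body)) + body

msg = parse_message("mystatement", "select * from mytable;")
assert msg[1:5] == struct.pack("!i", 0x29)  # 41, the length computed above
```

If the bytes match and the server still reports an invalid message, the problem is likely elsewhere in the stream (e.g. a stray byte before or after this message).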


any idea ???

thanks,

regards,

r.







[GENERAL] Extended Query, flush or sync ?

2009-12-18 Thread Raimon Fernandez
Hello,

It's not clear to me whether I have to issue a Flush or a Sync after each step 
of an extended query.

It's almost working for me only when I send a Sync, but not when I send a 
Flush. With the Flush, the connection seems frozen, or at least, I don't get 
any data from postgres.


- Send the Parse command
- Sync
- Receive the ParseComplete
- Sync
- Send the Bind
- Sync
- Receive the BindComplete
- Send the Execute 
- Receive an error => "portal xxx does not exist"


thanks,

regards,


r.



Re: [GENERAL] Extended Query, flush or sync ?

2009-12-21 Thread Raimon Fernandez
Hello,

On 19/12/2009, at 4:31, John DeSoi wrote:

> 
> On Dec 18, 2009, at 4:44 PM, Raimon Fernandez wrote:
> 
>> It's not clear for me if I have to issue a flush or sync after each process 
>> of an extended query.
>> 
>> It's almost working for me only when I send a sync, but not when I send a 
>> flush. With the flush, the connection seems freezed, or at least, I don't 
>> get any data from postgre.
>> 
>> 
>> - Send the parse command
>> - sync
>> - Receive the ParseComplete
>> -sync
>> - Send the Bind
>> - sync
>> - Receive the BincComplete
>> - send the Execute 
>> - receive an error => "portal xxx does not exist"
> 
> 
> I send:
> 
> parse
> bind
> describe
> execute
> sync
> 
> and then loop on the connection stream to receive the responses.

And do you get the ParseComplete after sending the Parse or after sending the 
Sync?

I'm not getting ParseComplete / BindComplete if I don't send a Sync after each 
command.

If I follow your advice, after the Sync I get the ParseComplete, BindComplete, 
and PortalSuspended (because I've reached the max rows).

I don't know if your approach is the correct one, but why send a Bind if we 
don't know whether the Parse has been successfully created ... 

From the docs:

 "A Flush must be sent after any extended-query command except Sync, if the 
frontend wishes to examine the results of that command before issuing more 
commands.
Without Flush, messages returned by the backend will be combined into the 
minimum possible number of packets to minimize network overhead."

Ok, I see that both approaches should work, but for me, sending a Flush after 
each extended-query command like Parse, Bind, ... does nothing ...


And also from the docs:

"If Execute terminates before completing the execution of a portal (due to 
reaching a nonzero result- row count), it will send a PortalSuspended message; t
he appearance of this message tells the frontend that another Execute should be 
issued against the same portal to complete the operation. "

If I Execute with a row limit of 1000, and I know there are more than 1000 
rows, I get the PortalSuspended as described.

But if I issue a new Execute, postgresql says that myPortal doesn't exist 
anymore.

How can I get the rest of the rows?


thanks !

regards,


raimon






Re: [GENERAL] Extended Query, flush or sync ?

2009-12-21 Thread Raimon Fernandez

On 18/12/2009, at 22:55, Tom Lane wrote:

> Raimon Fernandez  writes:
>> It's not clear for me if I have to issue a flush or sync after each process 
>> of an extended query.
> 
> Basically, you send one of these at the points where you're going to
> wait for an answer back.  Sync is different from Flush in that it also
> provides a resynchronization point after an error: when the backend hits
> an error while processing a given message, it ignores following messages
> up to the next Sync.

So I have to send one of these after sending a Parse command, a Bind command, 
and an Execute?

Is it normal that I don't receive anything if I send a Flush instead of a Sync?
 
regards and thanks,


raimon



Re: [GENERAL] Extended Query, flush or sync ?

2009-12-21 Thread Raimon Fernandez
Hi John,


I'm not seeing my e-mails on the PostgreSQL General List ...

??

On 19/12/2009, at 16:32, John DeSoi wrote:

> 
> On Dec 19, 2009, at 2:40 AM, Raimon Fernandez wrote:
> 
>>> I send:
>>> 
>>> parse
>>> bind
>>> describe
>>> execute
>>> sync
>>> 
>>> and then loop on the connection stream to receive the responses.
>> 
>> And do you get the parseComplete after sending the parse or after sending 
>> the sync ?
> 
> I don't really know or care. I send the entire sequence above and then read 
> the results handling each possible case. In other words, I don't read 
> anything after each message; I only read after sending the sync.

I see. I don't know why I was sending each command in a separate round trip; 
I can pack all of them and send them at the same time, except the Parse, which 
will go at the beginning of the connection in my case.
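The whole pipelined sequence John describes can be framed and written in one send. A sketch (Python; 'mystatement' and 'myPortal' are just the names used in this thread):

```python
import struct

def msg(type_byte, body=b""):
    """Frame one message: type byte + Int32 length (including itself) + body."""
    return type_byte + struct.pack("!i", 4 + len(body)) + body

def cstr(s):
    return s.encode() + b"\x00"

# Parse + Bind + Describe + Execute + Sync in a single send; the client
# then reads responses until ReadyForQuery.
packet = (
    msg(b"P", cstr("mystatement") + cstr("select * from mytable;")
        + struct.pack("!h", 0))                           # no parameter OIDs
    + msg(b"B", cstr("myPortal") + cstr("mystatement")
          + struct.pack("!hhh", 0, 0, 0))                 # no formats, no params
    + msg(b"D", b"P" + cstr("myPortal"))                  # Describe the portal
    + msg(b"E", cstr("myPortal") + struct.pack("!i", 0))  # 0 = no row limit
    + msg(b"S")                                           # Sync
)
```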


>> And also from the docs:
>> 
>> "If Execute terminates before completing the execution of a portal (due to 
>> reaching a nonzero result- row count), it will send a PortalSuspended 
>> message; t
>> he appearance of this message tells the frontend that another Execute should 
>> be issued against the same portal to complete the operation. "
>> 
>> If I execute with a row limit of 1000, and I know there are more than 1000 
>> rows, I get the portalSuspended as described.
>> 
>> But if I issue a new Execute, PostgreSQL says that myPortal doesn't exist 
>> anymore.
>> 
>> How can I get those 1000 rows?
> 
> Are you using a named portal? Are you reading all responses until you receive 
> a ready for query response? There are a lot of details - it really helped me 
> to look at the psql source.

I'm using portals with my own names; I'll give it a shot later ...

thanks !

regards,

r.



Re: [GENERAL] Extended Query, flush or sync ?

2009-12-22 Thread Raimon Fernandez

On 19/12/2009, at 16:32, John DeSoi wrote:

>> If I execute with a row limit of 1000, and I know there are more than 1000 
>> rows, I get the portalSuspended as described.
>> 
>> But if I issue a new Execute, PostgreSQL says that myPortal doesn't exist 
>> anymore.
>> 
>> How can I get those 1000 rows?
> 
> Are you using a named portal? Are you reading all responses until you receive 
> a ready for query response? There are a lot of details - it really helped me 
> to look at the psql source.

Yes, I'm using a named portal.

The new question is:

When I get the PortalSuspended I have the first 1000 rows, and to fetch the 
others I have to send a new Execute against the same portal:

"If Execute terminates before completing the execution of a portal (due to 
reaching a nonzero result-row count), it will send a PortalSuspended message; 
the appearance of this message tells the frontend that another Execute should 
be issued against the same portal to complete the operation. "

But isn't the portal destroyed after a Sync?

I'm getting a "Portal 'myPortal' doesn't exist" error when sending the next 
Execute ...



1. Parse the SELECT with some $1, $2

2. Send Bind + Describe + Execute + Sync

3. Receive the PortalSuspended

4. Send another Execute

5. Receive the error "Portal 'myPortal' doesn't exist"
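The resume-after-PortalSuspended loop the docs describe could be sketched as follows. `conn.send_execute` and `conn.read_message` are hypothetical wrappers around the socket I/O; only the loop shape matters here:

```python
def fetch_all(conn, portal: str, batch: int):
    """Keep issuing Execute against the same portal until CommandComplete.

    Each Execute should be followed by a Flush, not a Sync: outside an
    explicit transaction, Sync ends the implicit transaction and the
    portal with it.
    """
    rows = []
    while True:
        conn.send_execute(portal, batch)
        while True:
            kind, payload = conn.read_message()
            if kind == b"D":        # DataRow
                rows.append(payload)
            elif kind == b"s":      # PortalSuspended: more rows pending
                break               # loop around and Execute again
            elif kind == b"C":      # CommandComplete: portal exhausted
                return rows
            # other message types ignored in this sketch
```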


thanks,


regards,


raimon



[GENERAL] Extended Query vs Simple Query

2009-12-22 Thread Raimon Fernandez
Hello again,


Now that I have the extended query flow working with the frontend/backend 
protocol 3.0, I'm getting better results with simple queries than with 
extended queries.


table comptes: 

Simple query:

 select * from comptes WHERE codi_empresa = '05' AND nivell=11 and clau_compte 
like '05430%' => 0.0273 seconds for 14 rows



Extended Query: 111074 rows

All three columns are indexed.


Parse: select * from comptes WHERE codi_empresa = $1 AND nivell=$2 and 
clau_compte like $3

Bind + Execute + Sync in the same packet: 05, 11, 05430% => 0.1046 seconds 
for 10 rows

I measure the time when binding + executing + Sync.

I'm using prepared named statement and portals.

The difference is really big ...


In the docs I understand that the unnamed prepared statement with parameters 
is planned during the Bind phase, but I'm using a named prepared statement ...

And later, in a Note, I can read:

Note: Query plans generated from a parameterized query might be less efficient 
than query plans generated from an equivalent query with actual parameter 
values substituted. The query planner cannot make decisions based on actual 
parameter values (for example, index selectivity) when planning a parameterized 
query assigned to a named prepared-statement object. This possible penalty is 
avoided when using the unnamed statement, since it is not planned until actual 
parameter values are available. The cost is that planning must occur afresh for 
each Bind, even if the query stays the same.
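The Note explains the slowdown above: with a named prepared statement the LIKE pattern is unknown at plan time, so the planner (on servers of that era, before 9.2's generic/custom plan logic) cannot use the '05430%' prefix for an index range scan. A sketch of how to observe this at the SQL level, using the thread's query (statement name is invented):

```python
# Named prepared statement: planned once, without parameter values, so the
# LIKE $3 prefix cannot contribute to index selectivity.
named = (
    "PREPARE s(text, int, text) AS "
    "SELECT * FROM comptes "
    "WHERE codi_empresa = $1 AND nivell = $2 AND clau_compte LIKE $3"
)
generic_plan = "EXPLAIN EXECUTE s('05', 11, '05430%')"

# Literal query: the planner sees '05430%' and can pick an index range scan.
literal_plan = (
    "EXPLAIN SELECT * FROM comptes "
    "WHERE codi_empresa = '05' AND nivell = 11 AND clau_compte LIKE '05430%'"
)
```

Comparing the two EXPLAIN outputs should show the generic plan falling back to a sequential scan while the literal one can use the index.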

And now I'm not sure I understand it at all ...

What are the advantages of using the extended query?

thanks,

regards,


raimon



Re: [GENERAL] Extended Query, flush or sync ?

2009-12-22 Thread Raimon Fernandez

On 22/12/2009, at 18:15, Tom Lane wrote:

> Raimon Fernandez  writes:
>> But isn't the portal destroyed after a Sync?
> 
> Not directly by a Sync, no.

ok,


>> I'm getting a "Portal 'myPortal' doesn't exist" error when sending the next 
>> Execute ...
> 
> End of transaction would destroy portals --- are you holding a
> transaction open for this?  It's basically just like a cursor.

not that I'm aware of ...

I'll investigate it further ...

thanks!


regards,


raimon



Re: [GENERAL] Extended Query, flush or sync ?

2009-12-23 Thread Raimon Fernandez

On 22/12/2009, at 18:15, Tom Lane wrote:

> Raimon Fernandez  writes:
>> But isn't the portal destroyed after a Sync?
> 
> Not directly by a Sync, no.
> 
>> I'm getting a "Portal 'myPortal' doesn't exist" error when sending the next 
>> Execute ...
> 
> End of transaction would destroy portals --- are you holding a
> transaction open for this?  It's basically just like a cursor.


OK, after re-reading your email and the docs again and again, I see that 
portals must live inside a transaction; now it's working ...

Here are my steps:

- Parse the SELECTs
...
- start a transaction
- Bind using a prepared statement name and a portal name
- Execute, n times
- close the transaction
...



Is this the correct way?
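Those steps, rendered as frontend messages (a sketch; the portal/statement names are invented, and BEGIN/COMMIT are sent as simple queries):

```python
import struct

def msg(t: bytes, payload: bytes = b"") -> bytes:
    # Frontend message: type byte + int32 length (including itself) + payload
    return t + struct.pack("!I", 4 + len(payload)) + payload

def cstr(s: str) -> bytes:
    return s.encode() + b"\x00"

seq = [
    msg(b"Q", cstr("BEGIN")),                         # explicit transaction
    msg(b"B", cstr("p1") + cstr("stmt1")
        + struct.pack("!HHH", 0, 0, 0)),              # Bind portal to statement
    msg(b"E", cstr("p1") + struct.pack("!I", 1000)),  # at most 1000 rows;
                                                      # resend while PortalSuspended arrives
    msg(b"S"),       # Sync: inside the explicit transaction the portal survives it
    msg(b"Q", cstr("COMMIT")),  # ends the transaction and destroys the portal
]
```

The key point from the thread is the BEGIN/COMMIT pair: inside an explicit transaction a Sync no longer destroys the portal, so later Executes against it keep working.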

And in the case where I limit the Execute, how can I get the pending rows?

I'm using a CURSOR with the portal just created, and it works perfectly.

With a new Execute I get the previous rows again plus the new ones, while with 
the CURSOR I get only the pending rows ...

Is this the correct way?


And where can I get more info about when it's better to use an extended query, 
a portal, a cursor, or a simple query?

thanks!


regards,



raimon
