[GENERAL] Too much logging

2010-08-27 Thread Mike Christensen
Hi all -

I've noticed my log files for Postgres are getting way too big, since
every single SQL statement being run ends up in the log.  However,
nothing I change in postgresql.conf seems to make a bit of
difference..  I've tried restarting postgres, deleting all the
existing logs, etc.  No matter what I do, every statement is logged.

What I want is to only log SQL statements that result in errors.
Here's my config options:

log_destination = 'stderr'
logging_collector = on
client_min_messages = error
log_min_messages = error
log_error_verbosity = default
log_min_error_statement = error
log_min_duration_statement = 3000
log_statement = 'all'

Pretty much everything else log related is commented out..  What am I
doing wrong?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Too much logging

2010-08-27 Thread Thomas Kellerer

Mike Christensen, 27.08.2010 11:39:

Hi all -

I've noticed my log files for Postgres are getting way too big, since
every single SQL statement being run ends up in the log.  However,
nothing I change in postgresql.conf seems to make a bit of
difference..  I've tried restarting postgres, deleting all the
existing logs, etc.  No matter what I do, every statement is logged.

What I want is to only log SQL statements that result in errors.
Here's my config options:

log_destination = 'stderr'
logging_collector = on
client_min_messages = error
log_min_messages = error
log_error_verbosity = default
log_min_error_statement = error
log_min_duration_statement = 3000
log_statement = 'all'

Pretty much everything else log related is commented out..  What am I
doing wrong?  Thanks!



log_statement = 'all'

should be

log_statement = 'none'

Regards
Thomas
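
For reference, the full error-only setup the OP is after would then be (same settings as posted, with only log_statement changed):

```
log_statement = 'none'                # stop logging every statement
log_min_error_statement = error       # still log the text of statements that fail
log_min_messages = error
```

log_min_duration_statement = 3000 can stay if statements slower than 3 seconds should still be logged.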




Re: [GENERAL] Too much logging

2010-08-27 Thread Mike Christensen
Yup, someone emailed me privately - this fixed the problem..  Thanks!

On Fri, Aug 27, 2010 at 2:55 AM, Thomas Kellerer  wrote:
> Mike Christensen, 27.08.2010 11:39:
>>
>> Hi all -
>>
>> I've noticed my log files for Postgres are getting way too big, since
>> every single SQL statement being run ends up in the log.  However,
>> nothing I change in postgresql.conf seems to make a bit of
>> difference..  I've tried restarting postgres, deleting all the
>> existing logs, etc.  No matter what I do, every statement is logged.
>>
>> What I want is to only log SQL statements that result in errors.
>> Here's my config options:
>>
>> log_destination = 'stderr'
>> logging_collector = on
>> client_min_messages = error
>> log_min_messages = error
>> log_error_verbosity = default
>> log_min_error_statement = error
>> log_min_duration_statement = 3000
>> log_statement = 'all'
>>
>> Pretty much everything else log related is commented out..  What am I
>> doing wrong?  Thanks!
>
>
> log_statement = 'all'
>
> should be
>
> log_statement = 'none'
>
> Regards
> Thomas
>
>



Re: [GENERAL] Is your data okay?

2010-08-27 Thread Richard Harley
That really helped me, thanks - although I wish someone had told me 
about that *before* I tried to run a nuclear reactor using MSSQL


On 27/08/10 07:30, Mike Christensen wrote:

I found this tool pretty helpful for validating my architectural decisions..

http://www.howfuckedismydatabase.com/





Re: [GENERAL] Too much logging

2010-08-27 Thread Richard Harley


Won't log_statement = all catch everything?

Richard

On 27/08/10 10:39, Mike Christensen wrote:

Hi all -

I've noticed my log files for Postgres are getting way too big, since
every single SQL statement being run ends up in the log.  However,
nothing I change in postgresql.conf seems to make a bit of
difference..  I've tried restarting postgres, deleting all the
existing logs, etc.  No matter what I do, every statement is logged.

What I want is to only log SQL statements that result in errors.
Here's my config options:

log_destination = 'stderr'
logging_collector = on
client_min_messages = error
log_min_messages = error
log_error_verbosity = default
log_min_error_statement = error
log_min_duration_statement = 3000
log_statement = 'all'

Pretty much everything else log related is commented out..  What am I
doing wrong?  Thanks!

Mike






[GENERAL] Scalar subquery

2010-08-27 Thread Vyacheslav Kalinin
Hi,

Apparently a scalar subquery, when used as part of a SELECT statement and when
it does not depend on outer query columns,
is executed only once per statement, e.g.:

postgres=# select i, (select random()) rand from generate_series(1, 3) i;
 i |   rand
---+---
 1 | 0.992319826036692
 2 | 0.992319826036692
 3 | 0.992319826036692
(3 rows)

(Though the term "depend" is subtle; compare these:

postgres=# select i, (select random() + case when false then i else 0 end )
rand from generate_series(1, 3) i;
 i |   rand
---+---
 1 | 0.806265413761139
 2 | 0.806265413761139
 3 | 0.806265413761139
(3 rows)


postgres=# select i, (select random() where i=i ) rand from
generate_series(1, 3) i;
 i |   rand
---+---
 1 | 0.426443862728775
 2 | 0.133071997668594
 3 | 0.751982506364584
(3 rows)


postgres=# select i, (select random() where i=i or i is null ) rand from
generate_series(1, 3) i;
 i |   rand
---+---
 1 | 0.320982406847179
 2 | 0.996762252878398
 3 | 0.076554249972105
(3 rows)

It looks like the dependence is gone if PG is smart enough to simplify the
boolean expression.)

Anyway, as some older PG versions and Oracle behave similarly, I suppose this
result is expected and desired (correct?),
but unfortunately it does not seem to be well documented (did I miss where it is mentioned?).
Can anyone shed some light on this and/or perhaps update the docs?
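
(For contrast with the examples above: a volatile function called directly in the select list, rather than wrapped in a scalar subquery, is evaluated once per output row:

```sql
postgres=# select i, random() as rand from generate_series(1, 3) i;
```

each row gets a different value here.)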

P.S.
I got bitten by a statement like this:
  select (select nextval('someseq') * a + b from somefunc()), col1, 
with a and b being OUT parameters of somefunc().


Re: [GENERAL] Too much logging

2010-08-27 Thread Andreas Schmitz


http://www.postgresql.org/docs/8.4/interactive/runtime-config-logging.html#GUC-LOG-STATEMENT


[...]

Valid values are none, ddl, mod, and all.


regards

andreas


On 08/27/2010 03:01 PM, Richard Harley wrote:


Won't log state = all catch everything?

Richard

On 27/08/10 10:39, Mike Christensen wrote:

Hi all -

I've noticed my log files for Postgres are getting way too big, since
every single SQL statement being run ends up in the log.  However,
nothing I change in postgresql.conf seems to make a bit of
difference..  I've tried restarting postgres, deleting all the
existing logs, etc.  No matter what I do, every statement is logged.

What I want is to only log SQL statements that result in errors.
Here's my config options:

log_destination = 'stderr'
logging_collector = on
client_min_messages = error
log_min_messages = error
log_error_verbosity = default
log_min_error_statement = error
log_min_duration_statement = 3000
log_statement = 'all'

Pretty much everything else log related is commented out..  What am I
doing wrong?  Thanks!

Mike









[GENERAL] createdb process hangs when batched

2010-08-27 Thread Scott Bradley
Hello -

I'm relatively new to the postgresql world - primarily using it to load up a 
backup of our hosted Jive community databases (which use postgresql).

To that end, we've created and scheduled a batch file (yes, we're using a 
windows server...) to download the backups (*.dmp files) from the hosted 
service and reload our local databases each night.

The process appears to work fine (i.e., the databases are loaded as expected), 
but every day there is another lingering instance of "createdb.exe" in the task 
manager.  This does not appear to happen when I run the batch file manually 
from the command line - only when it runs as a scheduled task.

Here's a trimmed down version of the batch file - it's relatively simplistic at 
this point.  Has anyone run into this issue, or have any suggestions for where 
to start...?



rem Remove previously downloaded backup files
del *.dmp

rem Use wget to download 2 backup files
"C:\Program Files\WinWget\wget\wget.exe" --http-user=%USER% --http-passwd=%PASS% "%ANALYTICS_DATABASE%.dmp"
"C:\Program Files\WinWget\wget\wget.exe" --http-user=%USER% --http-passwd=%PASS% "%FULL_BACKUP%.dmp"

rem Restart the windows service to ensure no connected users - otherwise dropdb will fail
net stop postgresql-8.4
net start postgresql-8.4

rem Drop existing databases
"c:\Program Files\PostgreSQL\8.4\bin\dropdb.exe" -U postgres -w jive_analytics
"c:\Program Files\PostgreSQL\8.4\bin\dropdb.exe" -U postgres -w jive_full

rem Recreate Analytics database
"c:\Program Files\PostgreSQL\8.4\bin\createdb" -U postgres jive_analytics
"c:\Program Files\PostgreSQL\8.4\bin\pg_restore" -U postgres -O -d jive_analytics "%ANALYTICS_DATABASE_FILE%.dmp"

rem Recreate Full backup database
"c:\Program Files\PostgreSQL\8.4\bin\createdb" -U postgres jive_full
"c:\Program Files\PostgreSQL\8.4\bin\pg_restore" -U postgres -O -d jive_full "%FULL_BACKUP_FILE%.dmp"

rem Grant access to all tables to jive_local user for both databases
"c:\Program Files\PostgreSQL\8.4\bin\psql.exe" -U postgres -d jive_full -qAt -c "select 'grant select on ' || tablename || ' to jive_local;' from pg_tables where schemaname = 'public'" | "c:\Program Files\PostgreSQL\8.4\bin\psql.exe" -U postgres -d jive_full

"c:\Program Files\PostgreSQL\8.4\bin\psql.exe" -U postgres -d jive_analytics -qAt -c "select 'grant select on ' || tablename || ' to jive_local;' from pg_tables where schemaname = 'public'" | "c:\Program Files\PostgreSQL\8.4\bin\psql.exe" -U postgres -d jive_analytics
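
One detail worth checking (a guess, not a confirmed diagnosis): the dropdb calls pass -w (never prompt for a password), but the createdb calls do not. If the scheduled task's environment has no usable password, createdb may sit waiting on a password prompt it can never answer, which would match the lingering createdb.exe. Adding -w would make it fail fast instead:

```
"c:\Program Files\PostgreSQL\8.4\bin\createdb" -U postgres -w jive_analytics
"c:\Program Files\PostgreSQL\8.4\bin\createdb" -U postgres -w jive_full
```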


Re: [GENERAL] error while autovacuuming

2010-08-27 Thread tamanna madaan
Hi All

Any idea about this problem ??

Thanks..
Tamanna


-Original Message-
From: tamanna madaan
Sent: Fri 8/20/2010 11:54 AM
To: pgsql-general@postgresql.org
Subject: error while autovacuuming
 
Hi All

I am using a cluster setup with two nodes in it. The postgres version is 8.1.2 and 
slony 1.1.5 is being used for replication.
I have autovacuum enabled, but it was giving the following error while 
vacuuming the template0 database.

2010-08-18 18:36:14 UTC LOG: autovacuum: processing database "template0"
2010-08-18 18:36:14 UTC ERROR: could not access status of transaction 3222599682
2010-08-18 18:36:14 UTC DETAIL: could not open file "pg_clog/0C01": No such 
file or directory

Because of this error, autovacuum is not able to process the other databases in 
postgres, including my database 'my_database',
which is causing the database to slow down.

I googled this problem and found that it may be because of some data 
corruption. But I don't see any source of data corruption
in my database. Can anyone please let me know what could be causing this error. 
We generally don't use the template0 database,
so what could be causing this data corruption? 

The following steps have solved the problem for now:

 1. Set 'datallowconn' to true for the template0 database in the pg_database table.
 2. Stop postgres.
 3. Create a zero-filled 0C01 file in the /var/lib/pgsql/data/pg_clog folder.
 4. Start postgres.
 5. Execute `vacuum freeze` for all the databases, i.e. template0, template1, 
postgres and my_database.
 6. Set 'datallowconn' to false for the template0 database in the pg_database table.
 
But I doubt whether the problem has been solved permanently or whether it can reappear. 
If it has been solved permanently, then how?
I just created a zero-filled 0C01 file of 256 K size. It doesn't have any 
meaningful data. How can this resolve the problem permanently??
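
For reference, the zero-filled file from step 3 can be created like this (a sketch; the real path in the post is /var/lib/pgsql/data/pg_clog/0C01, a scratch directory is used here so the snippet runs anywhere):

```shell
# Create a zero-filled 256 kB clog segment as described in step 3.
CLOG_DIR=$(mktemp -d)
dd if=/dev/zero of="$CLOG_DIR/0C01" bs=256k count=1 2>/dev/null
ls -l "$CLOG_DIR/0C01"
```

The zero bytes make every transaction in that clog range read as "not committed", which is why the subsequent `vacuum freeze` matters: it replaces the old transaction IDs so the bogus status is never consulted again.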

Please reply ...

Thanks...
Tamanna



[GENERAL] Index File locations

2010-08-27 Thread Callum Scott
Hi All,

I am looking for a way to define where index files should be located.

I am wanting to store the index files on a separate SSD from the database
itself.  I can see that there is a way to define this from within the
database.  Is there a way to do this at a system level within the
configuration file?

Regards
Callum


Re: [GENERAL] WAL Archive Log

2010-08-27 Thread Sam Nelson
Alright, well, we'll probably do something with the archive command, then,
like either echoing %f to a log file or sending that to syslog (and then,
after the echo, doing the actual cp or scp or rsync or whatever).  That way,
we should be able to get some form of timestamp of when each WAL file is
copied to the archive directory (using the archive_command), right?
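
Sketched as a postgresql.conf fragment (the log file path and archive directory are made up for illustration):

```
archive_command = 'echo "$(date): archiving %f" >> /var/log/pg_wal_archive.log && cp %p /mnt/archive/%f'
```

%p is the path of the WAL file to archive and %f its file name, so this stamps one line per archived segment before doing the actual copy.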

Is there any way we could somehow get postgres to log a line for us, so that
we get that line in the postgres log file?  Or are we going to have to use a
separate file?

Thanks much.

-Sam

On Thu, Aug 26, 2010 at 5:33 PM, Alvaro Herrera
wrote:

> Excerpts from Sam Nelson's message of Thu Aug 26 19:24:00 -0400 2010:
> > Is there a way to get postgres to write a line to the log file when it
> > creates a WAL file?  We wrote a script that tries to grab the times
> between
> > WAL file creation and ingestion without stopping to make absolutely sure
> > that postgres actually logs the WAL file creation, and so we're kinda
> stuck
> > staring at our screens and blinking.
>
> Not currently.
>
> WAL files are created well in advance of them being used, so perhaps
> there is no point, depending on what you want to do with the
> information.
>
> File recycling (which is the usual mechanism for them being created) is
> logged, sort of: it happens during checkpoint.
>
> --
> Álvaro Herrera 
> The PostgreSQL Company - Command Prompt, Inc.
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>


Re: [GENERAL] Index File locations

2010-08-27 Thread A. Kretschmer
In response to Callum Scott :
> Hi All,
> 
> I am looking for a way to define where index files should be located.
> 
> I am wanting to store the index files on a separate SSD from the database
> itself.  I can see that there is a way to define this from within the
> database.  Is there a way to do this at a system level within the 
> configuration
> file?

You have to define your own tablespace, and then you can create new indexes
in this tablespace.

test=# \h create tablespace
Command: CREATE TABLESPACE
Description: define a new tablespace
Syntax:
CREATE TABLESPACE tablespacename [ OWNER username ] LOCATION 'directory'

test=# \h create index
Command: CREATE INDEX
Description: define a new index
Syntax:
CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] name ON table [ USING method ]
( { column | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS {
FIRST | LAST } ] [, ...] )
[ WITH ( storage_parameter = value [, ... ] ) ]
[ TABLESPACE tablespace ]
[ WHERE predicate ]
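
Put together for the SSD case asked about, this might look like the following (the directory, table, and index names are examples only):

```sql
CREATE TABLESPACE ssd_space LOCATION '/mnt/ssd/pg_indexes';
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE ssd_space;
```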


-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] error while autovacuuming

2010-08-27 Thread Amitabh Kant
On Fri, Aug 27, 2010 at 3:21 PM, tamanna madaan <
tamanna.ma...@globallogic.com> wrote:

>  Hi All
>
> Any idea about this problem ??
>
> Thanks..
> Tamanna
>
>
> -Original Message-
> From: tamanna madaan
> Sent: Fri 8/20/2010 11:54 AM
> To: pgsql-general@postgresql.org
> Subject: error while autovacuuming
>
> Hi All
>
> I am using a cluster setup with two nodes in it . postgres version is 8.1.2
> and slony 1.1.5 is being used for replication.
> I have autovacuum enabled. But it was  giving the following error while
> vacuuming template0 database.
>
> 2010-08-18 18:36:14 UTC LOG: autovacuum: processing database "template0"
> 2010-08-18 18:36:14 UTC ERROR: could not access status of transaction
> 3222599682
> 2010-08-18 18:36:14 UTC DETAIL: could not open file "pg_clog/0C01": No such
> file or directory
>
> Because of this error autovacuum is not able to process other databses in
> postgres including my databse 'my_database'.
> which is causing database to slow down.
>
> I googled about this problem and found that this may be because of some
> data corruption. But I don't see any source of data corruption
> in my database. Can anyone please let me know what could be causing this
> error. We generally don't use template0 database.
> So what could be causing this data corruption.
>
> Following below steps have solved the problem  for now :
>
>  1.  set 'datallowconn' to true for template0 database in pg_database
> table.
>  2. Stop postgres
>  3. create a zero filled 0C01 file in /var/lib/pgsql/data/pg_clog folder.
>  4. start postgres
>  5. Execute `vacuum freeze` for all the databases i.e template0 ,
> template1, postgres and my_database
>  6. set 'datallowconn' to false for template0 database in pg_database
> table.
>
> But I doubt if the problem has been solved permanently or it can reapper
> again. If it has been solved permanenlty then how.
> I just created a zero filled 0C01 file of 256 K size. It doesn't have any
> meaningful data. How this can resolve the problem permanently??
>
> Please reply ...
>
> Thanks...
> Tamanna
>

Any chance you can upgrade your version to the 8.4.x series? If I am not
wrong, there has been a significant change in autovacuum between the
versions. In case you cannot, I would still suggest you update your
version to 8.1.21, which should at least cover any known bugs.

With regards

Amitabh


Re: [GENERAL] Is your data okay?

2010-08-27 Thread Alban Hertroys
On 27 Aug 2010, at 12:43, Richard Harley wrote:

> That really helped me, thanks - although I wish someone had told me about 
> that before I tried to run a nuclear reactor using MSSQL

Gosh, seems like someone picked up on my joke about when the world will end - namely 
at the release date of Microsoft Visual Nuclear Power Plant Designer.

> On 27/08/10 07:30, Mike Christensen wrote:
>> I found this tool pretty helpful for validating my architectural decisions..
>> 
>> http://www.howfuckedismydatabase.com/

Interesting tool :)

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.







Re: [GENERAL] error while autovacuuming

2010-08-27 Thread Alvaro Herrera
Excerpts from tamanna madaan's message of Fri Aug 27 05:51:16 -0400 2010:

> I am using a cluster setup with two nodes in it . postgres version is 8.1.2 
> and slony 1.1.5 is being used for replication.
> I have autovacuum enabled. But it was  giving the following error while 
> vacuuming template0 database.
> 
> 2010-08-18 18:36:14 UTC LOG: autovacuum: processing database "template0"
> 2010-08-18 18:36:14 UTC ERROR: could not access status of transaction 
> 3222599682
> 2010-08-18 18:36:14 UTC DETAIL: could not open file "pg_clog/0C01": No such 
> file or directory

At the very least, update to 8.1.21.  I think this may have been caused
by a bug fixed years ago in 8.1.6.

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



[GENERAL] are there any documents describe the index mechanic?

2010-08-27 Thread sunpeng
Are there any documents that describe the index mechanics? For example, how
is the B-tree stored in tables on the hard disk?
Thanks!
peng


[GENERAL] Surge 2010 Early Registration ends Tuesday!

2010-08-27 Thread Jason Dixon
Early Bird Registration for Surge Scalability Conference 2010 ends next
Tuesday, August 31.  We have a killer lineup of speakers and architects
from across the Internet.  Listen to experts talk about the newest
methods and technologies for scaling your Web presence.

http://omniti.com/surge/2010/register

This year's event is all about the challenges faced (and overcome) in
real-life production architectures.  Meet the engineering talent from
some of the best and brightest throughout the Internet:

John Allspaw, Etsy
Theo Schlossnagle, OmniTI
Bryan Cantrill, Joyent
Rasmus Lerdorf, creator of PHP
Tom Cook, Facebook
Benjamin Black, fast_ip
Christopher Brown, Opscode
Artur Bergman, Wikia
Baron Schwartz, Percona
Paul Querna, Cloudkick

Surge 2010 takes place at the Tremont Grand Historic Venue on Sept 30
and Oct 1, 2010 in Baltimore, MD.  Register NOW for the Early Bird
discount and guarantee your seat to this year's event!


-- 
Jason Dixon
OmniTI Computer Consulting, Inc.
jdi...@omniti.com
443.325.1357 x.241



Re: [GENERAL] Index File locations

2010-08-27 Thread John R Pierce

 On 08/27/10 8:43 AM, A. Kretschmer wrote:

In response to Callum Scott :

Hi All,

I am looking for a way to define where index files should be located.

I am wanting to store the index files on a separate SSD from the database
itself.  I can see that there is a way to define this from within the
database.  Is there a way to do this at a system level within the configuration
file?

You have to define a own tablespace and then you can create new indexes
on this tablespace.

test=# \h create tablespace
Command: CREATE TABLESPACE
Description: define a new tablespace
Syntax:
CREATE TABLESPACE tablespacename [ OWNER username ] LOCATION 'directory'

test=# \h create index
Command: CREATE INDEX
Description: define a new index
Syntax:
CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] name ON table [ USING method ]
 ( { column | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS {
FIRST | LAST } ] [, ...] )
 [ WITH ( storage_parameter = value [, ... ] ) ]
 [ TABLESPACE tablespace ]
 [ WHERE predicate ]



you can also ALTER INDEX indexname SET TABLESPACE tablespacename;

this will move the existing index to the specified tablespace.

note that you want to do this when your applications are quiescent as 
this will require an exclusive access lock for the duration of the move.






[GENERAL] plpgsql question

2010-08-27 Thread Jon Griffin
 I am trying to calculate a value from a current record in a query and 
can't seem to get it working.


Here is the shortened query;
SELECT
  s.id,
  r.the_date_time,
  s.open_price,
  s.high_price,
  s.low_price,
  s.close_price,
  thesheet_onepair.symbol,
  r.buy_long,
  r.buy_stop,
  r.sell_cl,
  r.stop_sell,
  r.sell_short,
  r.sell_stop,
  r.buy_os,
  r.stop_buy,
  check_long_profit() as tst
  FROM
public.thesheet_dailystats s,
public.thesheet_recommendation r,
public.thesheet_onepair
  WHERE
s.one_pair_id = thesheet_onepair.id AND
s.the_date = r.the_date_time::date AND
r.one_pair_id = thesheet_onepair.id
  ORDER BY
r.the_date_time DESC,
thesheet_onepair.id ASC;


check_long_profit is the function. I basically want to do some 
calculations on this record and return some calculated fields.


I can't seem to figure out how to pass the row.

Here is my header for the function
create or replace function check_long_profit () returns integer AS $$

I know I am missing something easy.

Thanks





Re: [GENERAL] plpgsql question

2010-08-27 Thread Pavel Stehule
Hello

you used the wrong syntax

see 
http://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#Usage_PL.2FpgSQL_function_with_parametres_of_type_table

Regards

Pavel Stehule
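
In other words, declare the function to take the row as a parameter of the table's composite type, roughly like this (a sketch; the profit logic itself is invented for illustration):

```sql
create or replace function check_long_profit(r thesheet_recommendation)
returns integer as $$
begin
  -- the row's columns are available through the parameter
  return case when r.buy_long > r.sell_short then 1 else 0 end;
end;
$$ language plpgsql;
```

and call it from the query as check_long_profit(r) as tst.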

2010/8/28 Jon Griffin :
>  I am trying to calculate a value from a current record in a query and can't
> seem to get it working.
>
> Here is the shortened query;
> SELECT
>      s.id,
>      r.the_date_time,
>      s.open_price,
>      s.high_price,
>      s.low_price,
>      s.close_price,
>      thesheet_onepair.symbol,
>      r.buy_long,
>      r.buy_stop,
>      r.sell_cl,
>      r.stop_sell,
>      r.sell_short,
>      r.sell_stop,
>      r.buy_os,
>      r.stop_buy,
>      check_long_profit ()as tst
>      FROM
>        public.thesheet_dailystats s,
>        public.thesheet_recommendation r,
>        public.thesheet_onepair
>      WHERE
>        s.one_pair_id = thesheet_onepair.id AND
>        s.the_date = r.the_date_time::date AND
>        r.one_pair_id = thesheet_onepair.id
>      ORDER BY
>        r.the_date_time DESC,
>        thesheet_onepair.id ASC;
>
>
> check_long_profit is the function. I basically want to do some calculations
> on this record and return some calculated fields.
>
> I can't seem to figure out how to pass the row.
>
> Here is my header for the function
> create or replace function check_long_profit () returns integer AS $$
>
> I know I am missing something easy.
>
> Thanks
>
>
>
