[SQL] Error: Template Id should be the identifier of a template - help

2012-08-17 Thread Alex


Hi, All!

When I execute this:

UPDATE HTMLPAGE SET PAGE_URL = REPLACE(PAGE_URL, '.dot', '.html')  
WHERE PAGE_URL LIKE '%.dot';


I get the following error from psql.  Could you tell me what is wrong  
or how to fix it?


Thanks,

Alex
Failed to execute SQL : SQL UPDATE HTMLPAGE SET PAGE_URL =  
REPLACE(PAGE_URL, '.dot', '.html') WHERE PAGE_URL LIKE '%.dot';  
failed : ERROR: Template Id should be the identifier of a template


[SQL] Simple search question

2000-06-12 Thread Alex

Hi,
 after running a script which performs an insert, a new tuple is
created, and a serial number is generated for it. I want to write the
new tuple data back to the screen, including the new serial number.
 My question is, do I have to do a search for the tuple just inserted in
order to get the data back again? I am thinking there must be a faster,
more efficient way.
Thanks,
Alex




[SQL] Re: Simple search question

2000-06-12 Thread Alex

Hi,

> What language is your "script" written in? Based on some previous posts,
> I guess you're talking about a PHP script. How did you add the tuple?
> I'd guess with a pg_Exec(connect_id,"INSERT "). From reading the doc,
> I think that you can get the row just inserted by using the pg_fetch_row()
> function, passing it the result from the pg_Exec and asking for row 0.

OK, I just tried pg_fetch_array() and the above didn't work.

> I have not yet gotten anything running with PHP (lack of time to "play"),

For someone who hasn't had time to play, you certainly know a lot about php :)

If you need any examples, I'd be willing to post them, time permitting.
I thought libpq was easy to use, but this is very straightforward. I have been
'playing' with it for a few days, and it is my development tool of choice
now. Of course, this is because of faster development time; don't expect
Yahoo! to replace their cgi with PHP just yet.

> so I can't test this. If it doesn't work, I'd try using pg_GetLastOid()
> to get the OID of the inserted row. Then use the pg_Exec and SELECT
> * WHERE OID=oid-value, followed by pg_fetch_row().

Thanks John, and the other person that replied to my email (I know it is a
bit of a stupid question, but in such an unpopulated list, I don't think
there's any reason to post privately).
 This leads to another question. If someone adds another row during this,
what will happen?

Thanks,
Alex
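
For reference, the usual way to get the just-assigned serial value is the
sequence's currval(); it is local to the current session, so other sessions
inserting at the same time do not affect it (which also answers the concurrency
question above). A minimal sketch, assuming the serial column's sequence is
named mytable_id_seq:

INSERT INTO mytable (name) VALUES ('foo');
-- returns the serial value assigned by *this* session's insert
SELECT currval('mytable_id_seq');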




[SQL] Re: Simple search question

2000-06-15 Thread Alex

Hi,

> I had the same problem but was using Java, not PHP (I guess that
> whatever I can do in JDBC, you can do in PHP ;-)).
>
> SELECT last_value FROM <sequence_name>;

This brings me back to another unanswered question recently posted up, maybe
it is impossible...
I declared a new table with one of the column types as serial (which is really
just a sort of macro, I believe, that automates the creation of a few things for
your convenience), and this initialises the last_value of the relation (the
sequence itself) to 1. Therefore, upon INSERTing my first row, the serial number
began at 1, the next was 2, then 3, and so forth.
 My question is: is it possible to alter the sequence's last_value, as
I need the serial number to begin at 1000?
Thanks,
Alex
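
For reference, the sequence behind a serial column can be repositioned with
setval(); a minimal sketch, assuming the sequence is named mytable_id_seq:

SELECT setval('mytable_id_seq', 999);
-- the next nextval() call, and therefore the next inserted serial, returns 1000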




[SQL] using INTERSECT and UNION in IN clause

2000-08-22 Thread Alex Guryanow

Hi,

PostgreSQL 7.0.2. Why, when executing the following query,

select * from magazine
where id in (
  select mag_id from dict where word = 'akademie' intersect
  select mag_id from dict where word = 'der' intersect
  select mag_id from dict where word = 'klasse' )

I receive the following error:

ERROR: parse error at or near 'intersect'

while the query

  select mag_id from dict where word = 'akademie' intersect
  select mag_id from dict where word = 'der' intersect
  select mag_id from dict where word = 'klasse'

is executed successfully.

Is it possible to use INTERSECT and UNION keywords in subqueries?


Regards,
Alex
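
If the running server does not accept set operations inside an IN subquery, one
workaround that returns the same rows is to AND several IN clauses together; a
sketch against the tables from the question:

select * from magazine
where id in (select mag_id from dict where word = 'akademie')
  and id in (select mag_id from dict where word = 'der')
  and id in (select mag_id from dict where word = 'klasse');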





[SQL] sorting in UNICODE table

2000-08-24 Thread Alex Guryanow

Hi,

I'm using postgresql-7.0.2. It's compiled with unicode support
(./configure --enable-multibyte=UNICODE ...)
I have a table which contains both latin and non-latin letters. They are all in
UTF-8 encoding. When I try to sort the rows

( SELECT * FROM my_table ORDER BY sort_field )

I get a strange result: the rows that begin with ASCII symbols are ordered,
while all the others are not!

Why does this happen?

Regards,
Alex





Re: [GENERAL] Re: [SQL] Query never returns ...

2001-02-08 Thread Alex Pilosov

Um, no.

You should run vacuum analyze AFTER you loaded up the data, otherwise,
your table statistics will be all wrong (it'll contain 'empty table'
statistics).

-alex
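
Concretely (table names taken from the quoted message below):

-- run after the bulk COPY has finished, so the planner sees real statistics
VACUUM ANALYZE tblsidedruglink;
VACUUM ANALYZE tblmedcond;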
On Thu, 8 Feb 2001, Brice Ruth wrote:

> Stephan,
> 
> Here is what EXPLAIN shows:
> 
> NOTICE:  QUERY PLAN:
> 
> Sort  (cost=0.02..0.02 rows=1 width=64)
>   ->  Nested Loop  (cost=0.00..0.01 rows=1 width=64)
> ->  Seq Scan on tblmedcond  (cost=0.00..0.00 rows=1 width=36)
> ->  Seq Scan on tblsidedruglink  (cost=0.00..0.00 rows=1 width=28)
> 
> As for vacuum analyze - prior to running into these problems, I deleted
> all data from the database (using delete from <table>) and then ran
> vacuumdb -a, after which I loaded the data into the tables using 'copy
> ... from' - there have been no updates to the database since then -
> merely selects.
> 
> -Brice
> 
> Stephan Szabo wrote:
> > 
> > What does explain show for the query and have you run
> > vacuum analyze recently on the tables?
> > 
> > On Thu, 8 Feb 2001, Brice Ruth wrote:
> > 
> > > The following query:
> > >
> > > SELECT
> > >   tblSIDEDrugLink.DrugID,
> > >   tblSIDEDrugLink.MedCondID,
> > >   tblMedCond.PatientName AS MedCondPatientName,
> > >   tblMedCond.ProfessionalName AS MedCondProfessionalName,
> > >   tblSIDEDrugLink.Frequency,
> > >   tblSIDEDrugLink.SeverityLevel
> > > FROM
> > >   tblSIDEDrugLink,
> > >   tblMedCond
> > > WHERE
> > >   (tblSIDEDrugLink.DrugID IN ('DG-18698')) AND
> > >   (tblSIDEDrugLink.MedCondID = tblMedCond.MedCondID)
> > > ORDER BY
> > >   tblSIDEDrugLink.DrugID,
> > >   tblSIDEDrugLink.Frequency,
> > >   tblSIDEDrugLink.SeverityLevel,
> > >   tblSIDEDrugLink.MedCondID;
> > >
> > > seems to not be liked by PostgreSQL.  Table 'tblSIDEDrugLink' has the
> > > following structure:
> > >
> > > CREATE TABLE TBLSIDEDRUGLINK
> > > (
> > > DRUGID  VARCHAR(10) NOT NULL,
> > > MEDCONDID   VARCHAR(10) NOT NULL,
> > > FREQUENCY   INT2,
> > > SEVERITYLEVEL   INT2,
> > > CONSTRAINT PK_TBLSIDEDRUGLINK PRIMARY KEY (DRUGID, MEDCONDID)
> > > );
> > >
> > > with the following index:
> > > CREATE INDEX IX_TBLSIDEDRUGLINK1 ON TBLSIDEDRUGLINK (MEDCONDID);
> > >
> > > This table has 153,288 rows.
> > >
> > > Table 'tblMedCond' has the following structure:
> > >
> > > CREATE TABLE TBLMEDCOND
> > > (
> > > MEDCONDID   VARCHAR(10) NOT NULL,
> > > PROFESSIONALNAMEVARCHAR(58),
> > > PATIENTNAME VARCHAR(58),
> > > CONSTRAINT PK_TBLMEDCOND PRIMARY KEY (MEDCONDID)
> > > );
> > >
> > > This table has 1,730 rows.
> > >
> > > The query above is made by a third-party API that I don't have the
> > > source for, so I can't modify the query in the API, though the
> > > third-party has been quite willing to help out - they may even ship me a
> > > 'special' version of the API if there's something in this query that
> > > PostgreSQL for some reason doesn't implement efficiently enough.
> > >
> > > If it would help anyone to see the query plan or such - I can modify the
> > > logs to show that, just let me know.
> > >
> > > Btw - I've let this query run for a while & I haven't seen it complete
> > > ... s ... I don't know if it would ever complete or not.
> > >
> > > Any help at all is as always, appreciated.
> > >
> > > Sincerest regards,
> > > --
> > > Brice Ruth
> > > WebProjkt, Inc.
> > > VP, Director of Internet Technology
> > > http://www.webprojkt.com/
> > >
> 
> 




Re: [SQL] Help retrieving lastest record

2001-02-16 Thread Alex Pilosov

Get an SQL book.

select * from basket
where date=(select max(date) from basket where fruit='Apples')
and fruit='Apples';
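
Another formulation that avoids the subquery is to sort and take the first row:

select * from basket
where fruit='Apples'
order by date desc
limit 1;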

On Thu, 15 Feb 2001, Steve Meynell wrote:

> Ok what I am trying to do is select out of the database the latest
> record meeting a certain criteria.
> 
> Example:
> 
> Number | Fruit   | Date
> 15     | Apples  | July 20, 1999
> 20     | Oranges | June 7, 2000
> 13     | Pears   | Jan 31, 2000
> 17     | Apples  | April 10, 1999
> Now what I need to do is select the oranges out because the date is the
> latest one, something like:
> 
> select * from basket where max(date);
> This would yield me:
> 20     | Oranges | June 7, 2000
> 
> I know this doesn't work but I need something like it.
> or something like
> 
> select * from basket where max(date) and fruit='Apples';
> This would yield me:
> 15     | Apples  | July 20, 1999
> 
> Thank you in advance,
> 
> 
> --
> Steve Meynell
> Candata Systems
> 
> 
> 




Re: [SQL] pl/Perl

2001-02-21 Thread Alex Pilosov

On Wed, 21 Feb 2001, Jeff MacDonald wrote:

> 1: can you call other stored procedures from within pl/Perl
No.

> 2: from within a pl/Perl script , can i do a select etc..
>i'm assuming no, because you cannot use DBI.. but just wondering
>if there is a way..
Not currently.

> 3: installing it.. i installed postgres 7.0.3 from ports on my fbsd system.
>when i tried to install pl/perl i get this..
> 
> cd /usr/ports/databases/postgresql7/work/postgresql-7.0.2/src/pl/plperl/
> perl Makefile.pl
> make
Try using gmake instead of make (cd /usr/ports/devel/gmake, make)

> 
> "../../../src/Makefile.global", line 135: Need an operator
> "../../../src/Makefile.global", line 139: Missing dependency operator
> "../../../src/Makefile.global", line 143: Need an operator
> "../../../src/Makefile.global", line 144: Missing dependency operator
> "../../../src/Makefile.global", line 148: Need an operator
> "../../../src/Makefile.global", line 149: Need an operator
> "../../../src/Makefile.global", line 150: Need an operator
> "../../../src/Makefile.port", line 1: Need an operator
> "../../../src/Makefile.port", line 3: Need an operator
> "../../../src/Makefile.port", line 6: Need an operator
> "../../../src/Makefile.port", line 8: Need an operator
> "../../../src/Makefile.port", line 16: Need an operator
> "../../../src/Makefile.global", line 246: Missing dependency operator
> "../../../src/Makefile.global", line 247: Could not find ../../../src/Makefile.custom
> "../../../src/Makefile.global", line 248: Need an operator
> "../../../src/Makefile.global", line 253: Missing dependency operator
> "../../../src/Makefile.global", line 255: Need an operator
> "../../../src/Makefile.global", line 284: Missing dependency operator
> "../../../src/Makefile.global", line 286: Need an operator
> "../../../src/Makefile.global", line 288: Missing dependency operator
> "../../../src/Makefile.global", line 290: Need an operator
> "../../../src/Makefile.global", line 292: Missing dependency operator
> "../../../src/Makefile.global", line 294: Need an operator
> "../../../src/Makefile.global", line 296: Need an operator
> "../../../src/Makefile.global", line 299: Need an operator
> "../../../src/Makefile.global", line 301: Need an operator
> "../../../src/Makefile.global", line 304: Need an operator
> make: fatal errors encountered -- cannot continue
> 
> any tips ?
> 
> Jeff MacDonald,
> 
> -
> PostgreSQL Inc| Hub.Org Networking Services
> [EMAIL PROTECTED]| [EMAIL PROTECTED]
> www.pgsql.com | www.hub.org
> 1-902-542-0713| 1-902-542-3657
> -
> Facsimile : 1 902 542 5386
> IRC Nick  : bignose
> PGP Public Key : http://bignose.hub.org/public.txt
> 
> 




Re: [SQL] distinguishing different database connections

2001-06-22 Thread Alex Pilosov

You can use backend's PID from a trigger, it is unique.
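
A rough sketch of that logon-table idea, keyed by the backend PID (all names
below are made up, and obtaining the PID inside the trigger -- e.g. getpid() in
a C function -- is left to the implementation):

CREATE TABLE connection_users (
    backend_pid  integer PRIMARY KEY,
    nt_username  text NOT NULL
);

-- the "logon" function records (pid, NT user) once per connection
CREATE FUNCTION logon(integer, text) RETURNS integer AS '
    INSERT INTO connection_users (backend_pid, nt_username) VALUES ($1, $2);
    SELECT 1;
' LANGUAGE 'sql';

-- a trigger can then look up the user for the current backend:
--   SELECT nt_username FROM connection_users WHERE backend_pid = <current pid>;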

On Fri, 22 Jun 2001, Markus Wagner wrote:

> Hi,
> 
> can I access information on the current connection from within a trigger
> function?
> I need to identify different server connections somehow. Something like
> a "connection id" would be enough.
> 
> The background:
> 
> When a user starts a frontend application (NT, Access) he starts a new
> connection to our Linux database server. But for all users the "postgres
> user" is the same, since the ODBC connection is hard linked into the
> application. The problem is that we need the NT user name within our
> trigger (C) functions.
> 
> The idea:
> Let's execute a function "logon" whenever an instance of the application
> is started. This function will get the NT user name as a parameter. It
> will recognize the current connection id and it would store the pair
> (connection id, NT user name) in a table. Then, whenever some trigger
> needs to know the user name (for logging actions), it could lookup the
> user name with the current connection id.
> 
> Can we do this? Please help.
> 
> Thank you very much,
> 
> Markus
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
> 
> 

-- 
--
Alex Pilosov| http://www.acedsl.com/home.html
CTO - Acecape, Inc. | AceDSL:The best ADSL in Bell Atlantic area
325 W 38 St. Suite 1005 | (Stealth Marketing Works! :)
New York, NY 10018  |


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Returning multiple Rows from PL/pgSQL-Function

2001-07-09 Thread Alex Pilosov

Currently, this is not possible.

It will be possible in 7.2, or with a patch I'm working on...
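
For reference, here is roughly what such a function looks like once
set-returning plpgsql functions became available in later releases (via RETURN
NEXT); the emotions table name is taken from the question quoted below:

CREATE FUNCTION get_emotions (timestamp) RETURNS SETOF emotions AS '
DECLARE
    r emotions%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM emotions WHERE date <= $1 LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';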

On Mon, 9 Jul 2001, Alvar Freude wrote:

> Hi,
> 
> I want to create a function (PL/pgSQL), which return multiple rows. But it
> fails -- when Creating the function, I get a notice:
> 
>   NOTICE:  ProcedureCreate: return type 'records' is only a shell
> 
> 
> When executing it, this error:
> 
>   ERROR:  fmgr_info: function 0: cache lookup failed
> 
> 
> How should I do this?
> 
> 
> The function is simple:
> 
>CREATE FUNCTION foo (timestamp) RETURNS SETOF records AS 
> '
> BEGIN
>RETURN SELECT * FROM table;
> END;
> ' LANGUAGE 'plpgsql';
>
> 
> 
> or, in more detail the exact function:
> 
> 
>CREATE FUNCTION get_emotions (timestamp) RETURNS SETOF records AS 
>  '
>  DECLARE 
> start ALIAS FOR $1;
> end_id int4;
>  BEGIN
> SELECT emotion_id FROM emotions 
>  WHERE date <= start 
>  LIMIT 1 
>   INTO end_id;
> RETURN SELECT emotion_id, emotion1, [...]
>   FROM  emotions 
>   WHERE emotion_id BETWEEN end_id-3000 AND end_id
>   ORDER BY date_epoch + full_rating*(3600*12)
>   LIMIT 300;
>  END;
>  '
>   LANGUAGE 'plpgsql'; 
> 
> 
> 
> Thanx for any help!
> 
> 
> Ciao
>   Alvar
> 
> 
> 
> 



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] can we write to a flat file from Postgresql procedure

2001-07-11 Thread Alex Pilosov

On Wed, 11 Jul 2001, Peter Eisentraut wrote:

> R Vijayanath writes:
> 
> > It would be great if you can tell me if I can write a
> > procedure that can write the output to the OS(Linux
> > OS) file.
> 
> You could try out PL/sh for that.
> 
> http://www.postgresql.org/~petere/plsh.html
Well, as long as we all keep plugging our favorite languages, in 7.2, you
can use pl/perlu ;)
-alex
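
And for completeness: if the goal is only to dump a table's contents to a
server-side file, plain COPY run as a superuser already does that without any
procedural language (the path below is made up):

COPY my_table TO '/tmp/my_table.txt';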


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[SQL] RE: can we write to a flat file from Postgresql procedure

2001-07-11 Thread Alex Pilosov

It's done and committed to CVS, and will be in the 7.2 release.

Also done is DBD::PgSPI, interface to access postgres from within your
stored procedure. (www.formenos.org/PgSPI)

-alex

On Wed, 11 Jul 2001, Jeff Eckermann wrote:

> I haven't seen anything on this list to say that pl/perlu is being worked
> on.  Is it?
> 
> > -Original Message-----
> > From:   Alex Pilosov [SMTP:[EMAIL PROTECTED]]
> > Sent:   Wednesday, July 11, 2001 10:03 AM
> > To: Peter Eisentraut
> > Cc: R Vijayanath; [EMAIL PROTECTED]
> > Subject:Re: can we write to a flat file from Postgresql procedure
> > 
> > On Wed, 11 Jul 2001, Peter Eisentraut wrote:
> > 
> > > R Vijayanath writes:
> > > 
> > > > It would be great if you can tell me if I can write a
> > > > procedure that can write the output to the OS(Linux
> > > > OS) file.
> > > 
> > > You could try out PL/sh for that.
> > > 
> > > http://www.postgresql.org/~petere/plsh.html
> > Well, as long as we all keep plugging our favorite languages, in 7.2, you
> > can use pl/perlu ;)
> > -alex
> > 
> > 
> > ---(end of broadcast)---
> > TIP 3: if posting/reading through Usenet, please send an appropriate
> > subscribe-nomail command to [EMAIL PROTECTED] so that your
> > message can get through to the mailing list cleanly
> 
> 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] Problem with aggregate functions and GROUP BY

2001-08-08 Thread Alex Page

I'm having real trouble with aggregate functions. I have a table which
tracks the value of a field in another table over time - it contains a
foreign key to the row in the other table, the value, and a timestamp to
sort by. I'm trying to write a query that returns the rows containing the
most recent values for each foreign key.

The table looks like this: (fixed width fonts required, and this is a test
table):

# SELECT * FROM test ORDER BY fk, sortby;
 fk | value | sortby
----+-------+--------
  1 |     2 |      1
  1 |     0 |      2
  1 |     4 |      3
  2 |     2 |      1

Now obviously, what I'm aiming for is:

 fk | value | sortby
----+-------+--------
  1 |     4 |      3
  2 |     2 |      1

which is the highest value of 'sortby' for each value of 'fk'.

I would think that the SQL to achieve this is:

SELECT * FROM test GROUP BY fk HAVING sortby = MAX(sortby);

but running this in psql gives:

ERROR:  Attribute test.value must be GROUPed or used in an aggregate
function

No matter what I do to the query, it gives me this message for test.value
and test.sortby, unless I GROUP BY all three of them, which doesn't give me
the result I want. I'm completely stumped - can anyone help?

Thanks in advance,

Alex
--
Alex Page, IT Department, Solid State Logic
E-Mail: [EMAIL PROTECTED]
Phone: +44 (0) 1865 842 300
Web: http://www.solid-state-logic.com
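
For reference, PostgreSQL's DISTINCT ON extension answers exactly this
newest-row-per-group question; a sketch against the test table above:

SELECT DISTINCT ON (fk) fk, value, sortby
FROM test
ORDER BY fk, sortby DESC;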




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Function returning an array

2001-08-23 Thread Alex Pilosov

On Thu, 23 Aug 2001, Tom Lane wrote:

> "Josh Berkus" <[EMAIL PROTECTED]> writes:
> >> What's the correct syntax to return two VARCHAR(255)?
> 
> > There isn't one.  At this time, functions cannot return arrays.
> 
> Well, they *can*, it's just that there's no syntax to support
> constructing an array value in SQL or plpgsql languages.
Most importantly, there's no syntax to support deconstructing an array ;)

I have patches to support "select * from function(args)" almost done; I
just need to merge against -current. But unfortunately RL took
most of my time and I was unable to work on pg-related things for the last
few months. The patches contain quite a lot of changes to core structures
(RangeTblEntry and others) and I want to get them in before 7.2 gets
frozen. Hopefully this weekend I'll have some time to do the merge and
cleanup.

Also included in the patch is ability to do 'select * from cursor foo'.


-alex


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



pl/sh (was Re: [SQL] calling a shell script from pl/pgsql)

2001-09-10 Thread Alex Pilosov

On Mon, 10 Sep 2001, Stephan Szabo wrote:

> On Mon, 10 Sep 2001, Jeff Barrett wrote:
> 
> > How can I call a shell script from within a pl/pgsql function that is called
> > as from a trigger. I do not want to interact with the script I just want it
> > to run. I do want the trigger to wait for the script it called to complete
> > before returning a value.
> > 
> > Any ideas would be greatly appreciated.
> 
> I don't think you can do that from within pl/pgsql.  You'd probably need a
> function in pl/tclu or c.
Or pl/perlu! (Sorry, had to say it).


Actually, I remember that Jan once mentioned something about pl/SH. I
don't know what's the status of it?


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: pl/sh (was Re: [SQL] calling a shell script from pl/pgsql)

2001-09-10 Thread Alex Pilosov

On Mon, 10 Sep 2001, Stephan Szabo wrote:

> 
> On Mon, 10 Sep 2001, Alex Pilosov wrote:
> 
> > On Mon, 10 Sep 2001, Stephan Szabo wrote:
> > 
> > > On Mon, 10 Sep 2001, Jeff Barrett wrote:
> > > 
> > > > How can I call a shell script from within a pl/pgsql function that is called
> > > > as from a trigger. I do not want to interact with the script I just want it
> > > > to run. I do want the trigger to wait for the script it called to complete
> > > > before returning a value.
> > > > 
> > > > Any ideas would be greatly appreciated.
> > > 
> > > I don't think you can do that from within pl/pgsql.  You'd probably need a
> > > function in pl/tclu or c.
> > Or pl/perlu! (Sorry, had to say it).
> 
> I thought perlu was added after 7.1?  I considered mentioning it, but
> realized that it wouldn't help if it wasn't there yet.
Sorry, my fault, I thought pl/tclu is also a new language, but it
apparently is in 7.1...Yes, for existing postgres, its pltclu or C,
sorry :)

-alex


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] outer joins strangeness

2001-09-23 Thread Alex Pilosov

On Sun, 23 Sep 2001, Stephan Szabo wrote:

> On Sun, 23 Sep 2001, Alex Pilosov wrote:
> 
> > It may be just me, or I am grossly misunderstanding syntax of outer joins,
> > but I see that plans for my queries are different depending on how I place
> > join conditions and sometimes even on order of the tables.
> > 
> > Example:
> > 1:
> > explain select * from customers c,orders o left outer join adsl_orders ao
> > on ao.order_id=o.order_id
> > where c.cust_id=o.cust_id
> > and c.cust_id=152

> > 
> > explain select * from customers c join orders o on c.cust_id=o.cust_id
> > left outer join adsl_orders ao on ao.order_id=o.order_id
> > where c.cust_id=152

> Postgres treats join syntax as an explicit definition of what order to
> joins in.  So, I'd guess it sees the first as: do the LOJ and then join
> that to the separate table.  
Yeah, I figure that's how it sees it, but that's pretty stupid for
performance reasons :P)

It _should_ realize that a left outer join only constrains the join order
between the two tables in the outer join, and joins to all other tables should
still be treated normally.

I'm going to CC this to -hackers, maybe someone will shed a light on the
internals of this. 

> And for right outer join (for example), those two queries would not
> be equivalent if I read the ordering correctly.  The former syntax
> would mean outer first and then the inner, whereas the second would
> be inner first then the outer, and that could have different results.
True. But this is not a right outer join, it's a left outer join...:)

Postgres should understand that a left outer join does not constrain the join
order...

-alex


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[SQL] outer joins strangeness

2001-09-23 Thread Alex Pilosov

It may be just me, or I am grossly misunderstanding syntax of outer joins,
but I see that plans for my queries are different depending on how I place
join conditions and sometimes even on order of the tables.

Basically, if I mix ANSI-syntax outer joins (a left outer join b on
a.id=b.id) and "where-syntax" joins (from a,b where a.id=b.id) in the same
query, things get strange.

Example:
1:
explain select * from customers c,orders o left outer join adsl_orders ao
on ao.order_id=o.order_id
where c.cust_id=o.cust_id
and c.cust_id=152


Nested Loop  (cost=94.23..577.47 rows=2 width=290)
  ->  Index Scan using customers_pkey on customers c  (cost=0.00..2.02
rows=1 width=125)
  ->  Materialize  (cost=501.65..501.65 rows=5904 width=165)
->  Hash Join  (cost=94.23..501.65 rows=5904 width=165)
  ->  Seq Scan on orders o  (cost=0.00..131.04 rows=5904
width=58)
  ->  Hash  (cost=86.18..86.18 rows=3218 width=107)
->  Seq Scan on adsl_orders ao  (cost=0.00..86.18
rows=3218 width=107)

Query 2:

explain select * from customers c join orders o on c.cust_id=o.cust_id
left outer join adsl_orders ao on ao.order_id=o.order_id
where c.cust_id=152

Nested Loop  (cost=0.00..9.30 rows=2 width=290)
  ->  Nested Loop  (cost=0.00..5.06 rows=2 width=183)
->  Index Scan using customers_pkey on customers c
(cost=0.00..2.02 rows=1 width=125)
->  Index Scan using orders_idx1 on orders o  (cost=0.00..3.03
rows=1 width=58)
  ->  Index Scan using adsl_orders_pkey on adsl_orders ao
(cost=0.00..2.02 rows=1 width=107)

To me, both queries seem exactly identical in meaning, and should generate
the same plans. However, in my experience, if I use an outer join anywhere in
the query, I must use "JOIN" syntax to join all other tables as well;
otherwise, my query plans are _extremely_ slow.

Any hints? Or am I grossly misunderstanding outer join semantics?

-alex


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] PlPerl

2001-10-07 Thread Alex Pilosov

On Mon, 8 Oct 2001, Aasmund Midttun Godal wrote:

> 1. Is there a plperlu? ... unsafe version of perl?
In 7.2, there will be. In 7.1, you have to apply patches manually.

> 2. Is it possible to select which packages the plperl should have access to?
In 7.2, plperlu can do anything. plperl in either version is very
restricted. (use Safe qw(:default :math))

> 3. What exactly can you and can't you do.
See above.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] temporary views

2001-10-05 Thread Alex Pilosov

On Fri, 5 Oct 2001, Bruce Momjian wrote:

> 
> What actually should happen is that the view should go away at the end
> of the session.  However, unlike indexes, we can have several tables
> involved in a view so it is hard to know exactly how to handle this.
> 
> Seems like a TODO item, at least.  What we could do is to create views
> as TEMP if they use temp tables and drop the view as soon as the session
> ends .  You of course would have to recreate the view each time but
> because it is a _temp_ view, it could be done reliably by multiple
> backends at the same time.


Didn't someone suggest dropping saving of parsed plans with OIDs
altogether, and saving the underlying query instead? The point was that
parser and planner are fast enough to make it unnecessary to save plans. 

I don't remember what the disposition of that idea was.

-alex



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] parse error at or near "(" -- Huh???

2004-12-12 Thread Alex Beamish
Hi,

I'm getting a frustrating error

  ERROR:  parser: parse error at or near "(" at character 201

in a CREATE TABLE statement from an SQL script that I'm running from
within a Perl script. When I run the same script from the command
line, either as a regular user or as root, it works fine.

The SQL is
-
CREATE SEQUENCE "users_id"
START 1 INCREMENT 1 MAXVALUE 2147483647 MINVALUE 1 CACHE 1;

CREATE TABLE "users" (
"u_id" integer DEFAULT nextval('users_id'::text) NOT NULL,
"u_name" text NOT NULL,
"u_password" text NOT NULL,
"u_console_flag" integer DEFAULT 0,
Constraint "users_pkey" Primary Key ("u_id")
);
-

The 'console_flag' field was recently added and is close-ish to the
point where I think the parser is failing. And speaking of that, how am I
to interpret 'character 201' -- should I collapse the SQL into its
minimal state (least number of spaces) and go from that?

Thanks.

Alex

ps Tried to join the list using 
http://webmail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&func=lists-long-full&extra=pgsql-sql
but got a server timeout. :(

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] parse error at or near "(" -- Huh???

2004-12-14 Thread Alex Beamish
On Sun, 12 Dec 2004 19:18:00 -0700, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 09, 2004 at 03:19:56PM -0500, Alex Beamish wrote:
> 
> > I'm getting a frustrating error
> >
> >   ERROR:  parser: parse error at or near "(" at character 201
> >
> > in a CREATE TABLE statement from an SQL script that I'm running from
> > within a Perl script. When I run the same script from the command
> > line, either as a regular user or as root, it works fine.
> 
> Your SQL statements ran fine for me, so please show us a small but
> complete Perl script that duplicates the problem.

Michael, Tom,

Thank you both for your responses. I discovered that, while I was
editing what I thought was the running script, I was actually running
an earlier (wrong) version extracted a moment earlier from the version
control system, in a parallel directory. This became obvious when I
deleted the offending line and the error remained exactly the same.

However, I am intrigued to find out about the character offset that
appears in the error. How is that calculated? This would be useful
diagnostic information to have in the future.

Thanks!

Alex

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-25 Thread Alex Turner
I am also very interested in this very question. Is there any way to
declare a persistent cursor that remains open between pg sessions?
This would be better than a temp table because you would not have to
do the initial select and insert into a fresh table and incur those IO
costs, which are often very heavy and are the reason why one would want
to use a cursor.

Alex Turner
NetEconomist


On Thu, 20 Jan 2005 15:20:59 +, Richard Huxton  wrote:
> Andrei Bintintan wrote:
> >> If you're using this to provide "pages" of results, could you use a
> >> cursor?
> >
> > What do you mean by that? Cursor?
> >
> > Yes I'm using this to provide "pages", but If I jump to the last pages
> > it goes very slow.
> 
> DECLARE mycursor CURSOR FOR SELECT * FROM ...
> FETCH FORWARD 10 IN mycursor;
> CLOSE mycursor;
> 
> Repeated FETCHes would let you step through your results. That won't
> work if you have a web-app making repeated connections.
> 
> If you've got a web-application then you'll probably want to insert the
> results into a cache table for later use.
> 
> --
>Richard Huxton
>Archonet Ltd
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>   subscribe-nomail command to [EMAIL PROTECTED] so that your
>   message can get through to the mailing list cleanly
>

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-31 Thread Alex Turner
As I read the docs, a temp table doesn't solve our problem, as it does
not persist between sessions.  With a web page there is no guarantee
that you will receive the same connection between requests, so a temp
table doesn't solve the problem.  It looks like you either have to
create a real table (which is undesirable because it has to be
physically synced, and TTFB will be very poor) or create an application
tier in between the web tier and the database tier to allow data to
persist between requests tied to a unique session id.

Looks like the solution to this problem is not an RDBMS, IMHO.

Alex Turner
NetEconomist


On Wed, 26 Jan 2005 12:11:49 +0200, Andrei Bintintan <[EMAIL PROTECTED]> wrote:
> The problems still stays open.
> 
> The thing is that I have about 20 - 30 clients that are using that SQL query
> where the offset and limit are involved. So, I cannot create a temp table,
> because that means that I'll have to make a temp table for each session...
> which is a very bad ideea. Cursors somehow the same. In my application the
> Where conditions can be very different for each user(session) apart.
> 
> The only solution that I see in the moment is to work at the query, or to
> write a more complex where function to limit the results output. So no
> replace for Offset/Limit.
> 
> Best regards,
> Andy.
> 
> 
> - Original Message -
> From: "Greg Stark" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Cc: "Richard Huxton" ; "Andrei Bintintan"
> <[EMAIL PROTECTED]>; ;
> 
> Sent: Tuesday, January 25, 2005 8:28 PM
> Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???
> 
> 
> >
> > Alex Turner <[EMAIL PROTECTED]> writes:
> >
> >> I am also very interesting in this very question.. Is there any way to
> >> declare a persistant cursor that remains open between pg sessions?
> >> This would be better than a temp table because you would not have to
> >> do the initial select and insert into a fresh table and incur those IO
> >> costs, which are often very heavy, and the reason why one would want
> >> to use a cursor.
> >
> > TANSTAAFL. How would such a persistent cursor be implemented if not by
> > building a temporary table somewhere behind the scenes?
> >
> > There could be some advantage if the data were stored in a temporary table
> > marked as not having to be WAL logged. Instead it could be automatically
> > cleared on every database start.
> >
> > --
> > greg
> >
> >
> 
>

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] plperlu user function.

2010-05-18 Thread Alex Hunsaker
On Tue, May 18, 2010 at 15:29, David Harel  wrote:
> Greetings,
>
> I am trying to write a user function on the server to retrieve image files.
> Currently I wrote the following:
> CREATE OR REPLACE FUNCTION perl_getfile(text)
>   RETURNS OID AS

I think you meant returns bytea or text or something... else

-- 
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] TABLE PARTITION

2011-02-01 Thread NEVIN ALEX

Hi,
  I am Nevin Alex and I have been using the PostgreSQL database for a year. But I 
haven't used table partitions. Please help me to do it in a better way. I gathered 
from the documentation that it works through the inheritance capability and a 
trigger, but how can I implement that for dynamic data?
 
Thanks in advance 
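
A minimal sketch of the inheritance-plus-trigger pattern from the manual; the
table, column, and date range below are made up:

CREATE TABLE measurement (
    logdate date NOT NULL,
    value   integer
);

-- one child table per range, with a CHECK constraint so the planner can use
-- constraint exclusion
CREATE TABLE measurement_2011 (
    CHECK (logdate >= DATE '2011-01-01' AND logdate < DATE '2012-01-01')
) INHERITS (measurement);

-- routing trigger on the parent: send each row to the matching child
CREATE OR REPLACE FUNCTION measurement_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2011-01-01' AND NEW.logdate < DATE '2012-01-01' THEN
        INSERT INTO measurement_2011 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.logdate;
    END IF;
    RETURN NULL;   -- the row is not stored in the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_insert
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();

For "dynamic" data the usual options are to pre-create partitions ahead of time
from a maintenance job, or to build the INSERT target name in the trigger with
EXECUTE.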

[SQL] Question regarding modelling for time series

2012-09-04 Thread Alex Grund
Hi there,

I want to use a database for storing economic time series.

An economic time series can be thought of as something like this:

NAME            | RELEASE_DATE | REPORTING_DATE | VALUE
----------------+--------------+----------------+-------
Unemployment US | 2011/01/01   | 2010/12/01     | xxx
Unemployment US | 2011/02/01   | 2011/01/01     | xxx
Unemployment US | 2011/03/01   | 2011/02/01     | xxx

The release date is the date on which the data provider published the
value and the reporting date is the date to which the value refers
(read: In Dec, 2010 the unemployment was X but this has not been known
until 2011/01/01).

However, that's not the whole story. On each "release date" not only
ONE value is released but in some cases the values for previous
reporting_dates are changed.

So, the table could read like this:

NAME            | RELEASE_DATE | REPORTING_DATE | VALUE
----------------+--------------+----------------+-------
Unemployment US | 2011/01/01   | 2010/12/01     | xxx
Unemployment US | 2011/01/01   | 2010/11/01     | xxx
Unemployment US | 2011/01/01   | 2010/10/01     | xxx

Unemployment US | 2011/02/01   | 2010/10/01     | xxx
Unemployment US | 2011/02/01   | 2010/11/01     | xxx
Unemployment US | 2011/02/01   | 2010/12/01     | xxx
Unemployment US | 2011/02/01   | 2011/01/01     | xxx

[...]

So, there are now mainly three questions to be answered:

1) "get me the time series [reporting_date, value] of unemployment as
it is now seen", so give all reporting_date,value tuples with the most
recent release_date.

2) "get me the time series [reporting_date, value] as it was
published/known to the market", so that means, in this case, give this
list:
Unemployment US | 2011/01/01   | 2010/12/01 | xxx
Unemployment US | 2011/02/01   | 2011/01/01 | xxx
Unemployment US | 2011/03/01   | 2011/02/01 | xxx

3) the same as (1) but with one enhancement: if the most recent
release has a history of N months, but all releases together have a history of
N+X months, the time series from the most recent release should be
delivered, plus the older values (in terms of reporting_dates) from the
second most recent release, plus the still older values from the third
most recent release, and so on.


So, I thought of a relational data base model like that:

TABLE 'ts' (TimeSeries)
PK:id | name

TABLE 'rs' (ReleaseStages)
PK:id | FK:ts_id | release_date

TABLE 'r' (Releases)
PK:id | FK:rs_id | reporting_date | value

Is this an appropriate model?

If yes, how could I answer the three questions above in terms of
SQL/Stored Procedures?

If no, what would you suggest?


And: if the datasets grow further, will an RDBMS be the right model
for time series storage? Any ideas on what else I could use?
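
As a starting point, here are hedged sketches of the three queries against the
proposed ts/rs/r model (serial/sequence primary keys and ts_id = 1 for the
series of interest are assumed):

-- (1) only the rows from the most recent release
SELECT r.reporting_date, r.value
  FROM r
  JOIN rs ON rs.id = r.rs_id
 WHERE rs.ts_id = 1
   AND rs.release_date = (SELECT max(release_date) FROM rs WHERE ts_id = 1)
 ORDER BY r.reporting_date;

-- (2) "as known to the market": for each release, the newest reporting_date
--     published in that release
SELECT DISTINCT ON (rs.release_date)
       rs.release_date, r.reporting_date, r.value
  FROM r
  JOIN rs ON rs.id = r.rs_id
 WHERE rs.ts_id = 1
 ORDER BY rs.release_date, r.reporting_date DESC;

-- (3) the most recent value per reporting_date, falling back to older releases
--     for reporting_dates the latest release no longer carries
SELECT DISTINCT ON (r.reporting_date)
       r.reporting_date, r.value
  FROM r
  JOIN rs ON rs.id = r.rs_id
 WHERE rs.ts_id = 1
 ORDER BY r.reporting_date, rs.release_date DESC;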



Thank you very much!


--Alex


-- 
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Question regarding modelling for time series

2012-09-05 Thread Alex Grund
Sergey,

thank you very much for your hints, I will play a bit with that and
maybe come back to the list.

Just for clarification, I attached some more explanation and examples below.

2012/9/5 Sergey Konoplev :
> I am not quite understand what is meant here. Could you please provide
> more explanation and some examples.

Imagine, the time series is published monthly, at the first day of a
month with the value for the previous month, such like:

Unemployment; release: 2011/12/01; reporting: 2011/11/01; value: 1
Unemployment; release: 2011/11/01; reporting: 2011/10/01; value: 2
Unemployment; release: 2011/10/01; reporting: 2011/09/01; value: 3
Unemployment; release: 2011/09/01; reporting: 2011/08/01; value: 4

Now, imagine, that on every release, the value for the previous month
is revised, such like:

Unemployment; release: 2011/12/01; reporting: 2011/11/01; value: 1
Unemployment; release: 2011/12/01; reporting: 2011/10/01; value: 2.5

Unemployment; release: 2011/11/01; reporting: 2011/10/01; value: 2
Unemployment; release: 2011/11/01; reporting: 2011/09/01; value: 3.5

Unemployment; release: 2011/10/01; reporting: 2011/09/01; value: 3
Unemployment; release: 2011/10/01; reporting: 2011/08/01; value: 4.5

Unemployment; release: 2011/09/01; reporting: 2011/08/01; value: 4
Unemployment; release: 2011/09/01; reporting: 2011/07/01; value: 5.5

So, what I have now is a time series from 2011/07/01 to 2011/11/01.

The most recent observation (release) ex-post is:  [1]
Unemployment; release: 2011/12/01; reporting: 2011/11/01; value: 1
Unemployment; release: 2011/12/01; reporting: 2011/10/01; value: 2.5

Since the data is not revised further than one month behind, the whole
series ex-post would look like that: [3]
Unemployment; release: 2011/12/01; reporting: 2011/11/01; value: 1
Unemployment; release: 2011/12/01; reporting: 2011/10/01; value: 2.5

Unemployment; release: 2011/11/01; reporting: 2011/09/01; value: 3.5

Unemployment; release: 2011/10/01; reporting: 2011/08/01; value: 4.5

Unemployment; release: 2011/09/01; reporting: 2011/07/01; value: 5.5

Whereas, the "known-to-market"-series would look like that: [2]

Unemployment; release: 2011/12/01; reporting: 2011/11/01; value: 1
Unemployment; release: 2011/11/01; reporting: 2011/10/01; value: 2
Unemployment; release: 2011/10/01; reporting: 2011/09/01; value: 3
Unemployment; release: 2011/09/01; reporting: 2011/08/01; value: 4

That are the series I want to get from the db.


--Alex


-- 
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] pl/pgsql and error handling

2008-02-28 Thread Alex Hochberger
Is it possible to grab access to the actual user-friendly error  
message?  I have a bunch of files that come into my database, each  
slightly different.  The submitter fills out a form that records the  
info, and sticks it into a table.


If the file is malformed (wrong number of columns in a row) from a  
missing comma or something, I want to record the result to store in  
the database.  I assume that I should exception handle the COPY FROM  
line, and store the results, but I can't for the life of me figure out  
how to get the text like "invalid line on line 470352" or whatever  
psql shows.  Is that possible to obtain in plpgsql?


I've done extensive plpgsql programming before, but this is the first  
time I need it to validate the external data in question.


Alex

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] pl/pgsql and error handling

2008-02-28 Thread Alex Hochberger
I do not know.  It sounds like the Oracle PL/SQL behavior is what I  
want.  Does PostgreSQL's pl/pgsql have such a feature?  It's not in  
the documentation anywhere.  I'll check it from the office tomorrow.


Alex
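
For reference, recent plpgsql does have this: an EXCEPTION block can catch the
error and SQLERRM holds the message text. A minimal sketch, assuming a staging
table and a server-readable file path (note the "line 470352" detail is part of
the error CONTEXT, which may not be included in SQLERRM itself):

CREATE OR REPLACE FUNCTION load_file(p_path text) RETURNS text AS $$
BEGIN
    -- server-side COPY; the file must be readable by the server process
    EXECUTE 'COPY staging_table FROM ' || quote_literal(p_path) || ' WITH CSV';
    RETURN 'OK';
EXCEPTION WHEN OTHERS THEN
    -- SQLERRM carries the primary message, e.g. "missing data for column ..."
    RETURN 'load failed: ' || SQLERRM;
END;
$$ LANGUAGE plpgsql;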

On Feb 28, 2008, at 11:43 PM, Tom Lane wrote:


Alex Hochberger <[EMAIL PROTECTED]> writes:

Is it possible to grab access to the actually user-friendly error
message?


Doesn't the SQLERRM variable do what you want?

regards, tom lane

---(end of  
broadcast)---

TIP 4: Have you searched our list archives?

  http://archives.postgresql.org



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


[SQL] INSERT waiting under heavy load

2006-01-06 Thread alex-lists-pgsql

After digging through all the discussions of "INSERT waiting" problems I am
still not clear about the concensus about solving it.



I am running a 6:1 SELECT:INSERT ratio (each insert fires an UPDATE trigger
that hits a column in a table holding keys used by SELECT). I am looking at
doing about 2,000 INSERT/UPDATE per second, with possible peaks at 10,000
INSERT/UPDATE per second (i.e. 60,000 SELECTs).


(table 1)

The table holding primary keys is expected to grow to around 10,000 rows.
This is the table that gets 50% of SELECTs and 100% of UPDATES. This is the
owner status table. It is optimized so with a single SELECT against this
table all information needed for real-time clients would be accessible.

(table 2)

The number of rows in the second table is expected to be around 100
times the number of rows in the 1st table. Each entry in this table uses the
first table's column as a foreign key to avoid unlinked entries. It also has
foreign key dependencies on some other tables that, for the purposes of the
application, are never updated. This table gets the other 50% of SELECTs.

(table 3)

Finally, the 3rd table (audit log) is expected to have an arbitrary number of
entries (measured in millions). It gets virtually no SELECT activity in
normal operations. If the data from this table is needed, a snapshot of this
table gets pulled into a different table (most likely on a different
database) and gets processed there. The table gets cleaned up at specific
intervals using a DROP TABLE/CREATE TABLE sequence, performed by the
management application (non-real time). The only thing that I do not
particularly like is that every INSERT into this table has to adjust a counter
column in a corresponding row of (table1) via the (table3->table2->table1) path.



The server is configured to accept about 200 connections from clients. The
problem is that after first couple of hours of working normally, as the
table (3) grows, the backend indicates that more and more INSERTs into table
3 are held up in the "INSERT waiting" state.

It happens even when table 1 contains only one row, table 2 contains 4 rows.


Is there anything that can be done to diagnose why the "INSERT waiting" state
becomes so prevalent?

Would pulling the counter from table 1 into a table (4) that contains only a
reference to the appropriate table (1) row and a counter value make it better?


Thanks,
Alex




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] INSERT waiting under heavy load

2006-01-06 Thread alex-lists-pgsql
> > After digging through all the discussions of "INSERT waiting" problems I am
> > still not clear about the concensus about solving it.
> > ...
> > The only thing that I do not particulary like is that every INSERT
> > into this table has to adjust a counter column in a corresponding row of the
> > (table1) via (table3->table2->table1) path.
> 
> Well, if there are only a few rows in table1, then this design is
> inherently going to lose big.  Any two transactions trying to update the
> same table1 row are going to conflict and one will have to wait for the
> other to complete.  Rethink the need for those counters.

I appreciate that it is most likely not the best design, though I expect a
reasonable distribution of UPDATE hits against the first table when the
number of rows increases.

What I do not understand is this: 

if the problem is caused by the acquire lock -> modify column -> release
lock cycle on table 1, then why does it increase significantly as the
number of entries in table 3 grows? The simulation maintains a pretty much
constant rate of new requests coming to table 3.

Alex
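
One common way around that contention (sketched below with made-up names) is to
stop updating the shared counter row on every INSERT and instead append a delta
row, rolling the deltas up from a periodic maintenance job:

CREATE TABLE counter_deltas (
    owner_id integer NOT NULL REFERENCES table1,
    delta    integer NOT NULL DEFAULT 1
);

-- periodic rollup
BEGIN;
-- (under READ COMMITTED this simple form can miss deltas inserted between the
--  two statements; a stricter version would mark or lock the rows it rolls up)
UPDATE table1
   SET counter = counter + d.sum_delta
  FROM (SELECT owner_id, sum(delta) AS sum_delta
          FROM counter_deltas
         GROUP BY owner_id) d
 WHERE table1.owner_id = d.owner_id;
DELETE FROM counter_deltas;
COMMIT;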



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] stored procedures for complex SELECTs

2006-01-18 Thread alex-lists-pgsql

Are there performance advantages that can be achieved by wrapping a 
complex SELECT into a stored procedure? 

Alex

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[SQL] executing external command

2006-03-16 Thread alex-lists-pgsql


Is there a way to execute an external i.e. system command from inside a
pl/pgsql function?

Alex

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] How to find entries missing in 2nd table?

2006-07-11 Thread alex-lists-pgsql
Hi,
I realize I probably lost my marbles but I've been having a god
awful time with a single query:

control:


controller_id   pk;


datapack:

controller_id   fk;




I need to get all entries from the table control that are not listed in
datapack.


Thanks,
Alex
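
Two standard formulations, using the column name given above:

SELECT c.*
  FROM control c
  LEFT JOIN datapack d ON d.controller_id = c.controller_id
 WHERE d.controller_id IS NULL;

-- or, equivalently
SELECT *
  FROM control c
 WHERE NOT EXISTS (SELECT 1
                     FROM datapack d
                    WHERE d.controller_id = c.controller_id);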

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster