Re: [GENERAL] Better way to process boolean query result in shell-like situations?

2015-10-29 Thread David

On 10/28/2015 09:42 PM, Tim Landscheidt wrote:

Hi,

I regularly run into the problem that I want to query a
PostgreSQL database in a script/program and depending on a
boolean result do one thing or the other.  A typical example
would be a Puppet Exec that creates a user only if it does
not exist yet.

But unfortunately psql always returns with the exit code 0
if the query was run without errors.  In a shell script I
can use a query that returns an empty string for failure and
something else for success and then test that à la:

| if [ -n "$(psql -Atc "[…]")" ]; then echo Success.; fi

but in Puppet, for example, this requires wrapping the test in
'/bin/bash -c "[…]"' with yet another level of quoting.

The best idea I had so far was to cause a runtime error
(here with the logic reversed: if the user exists, psql returns
failure, otherwise success):

| [tim@passepartout ~]$ psql -c "SELECT usename::INT FROM pg_user WHERE usename = 
'tim';"; echo $?
| ERROR:  invalid input syntax for integer: "tim"
| 1
| [tim@passepartout ~]$ psql -c "SELECT usename::INT FROM pg_user WHERE usename = 
'does-not-exist';"; echo $?
|  usename
| ---------
| (0 rows)

| 0
| [tim@passepartout ~]$

But this could in theory fail if usename happened to be convertible
to a number, and a constant such as 'a'::INT will always fail (it is
evaluated even when no rows match).

Are there better ways?  The environment I am most interested
in is 9.3 on Ubuntu Trusty.


Good morning Tim,

I solved what I think is a similar problem to what you are trying to do
by storing the query output into a shell variable. For instance:

[dnelson@dave1:~/development]$ output=$(psql -U readonly -d postgres -h 
dev_box -p 55433 -Atc "SELECT TRUE FROM pg_roles WHERE rolname = 
'readonly'")

[dnelson@dave1:~/development]$ echo $output
t

Obviously you can manipulate the query to return false when the role
does not exist. Hopefully that helps?
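For what it's worth, the same idea can be sketched so the exit status itself carries the boolean, which avoids the extra /bin/bash -c quoting layer in callers like a Puppet Exec. This is only a sketch: role_exists is an invented wrapper, the psql call in the comment is the assumed real body, and the function is stubbed here so it runs without a database.

```shell
# Sketch: make the exit status itself the boolean, so callers such as
# a Puppet Exec 'unless' need no extra /bin/bash -c layer.
# role_exists is an invented wrapper; against a real server its body
# would be something like:
#   psql -Atc "SELECT 1 FROM pg_roles WHERE rolname = '$1'" | grep -q 1
# Stubbed here (only "readonly" "exists") so the sketch runs standalone.
role_exists() {
  case "$1" in
    readonly) echo 1 ;;
  esac | grep -q 1
}

role_exists readonly && echo "role exists"    # exit status 0
role_exists nobody   || echo "role missing"   # exit status 1
```

The pipe into grep -q is what turns "some output" versus "no output" into exit status 0 versus 1.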

Dave



Tim

P. S.: I /can/ write providers or inline templates for Puppet
in Ruby to deal with these questions; but here I am only
looking for a solution that is more "universal" and relies
solely on psql or another utility that is already installed.







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Better way to process boolean query result in shell-like situations?

2015-10-29 Thread David

On 10/29/2015 08:27 AM, Adrian Klaver wrote:

On 10/29/2015 06:07 AM, David wrote:


Good morning Tim,

I solved what I think is a similar problem to what you are trying to do
by storing the query output into a shell variable. For instance:

[dnelson@dave1:~/development]$ output=$(psql -U readonly -d postgres -h
dev_box -p 55433 -Atc "SELECT TRUE FROM pg_roles WHERE rolname =
'readonly'")
[dnelson@dave1:~/development]$ echo $output
t

A variation of the above:

test=> select * from users;

 id |     name
----+---------------
  1 | Adrian Klaver
  3 | Yogi Berra
  2 | Mickey Mouse
(3 rows)



test=> select case when count(*) = 0 then 'f' else 't' end AS user from
users where name = 'Dog';
 user
------
 f
(1 row)

test=> select case when count(*) = 0 then 'f' else 't' end AS user from
users where name = 'Adrian Klaver';
 user
------
 t
(1 row)




Nice way to get either condition Adrian!

The use case that led me to discover this trick was a bit different
than that of the OP. I was automating the testing of SQL statements
that I expected to fail. At first the psql exit code seemed perfect
until I realized that the exit code would be 1 whether the failure
was due to, say, the foreign key violation that I was expecting, or
to a syntax error. That's when I hit upon capturing the output
into a variable and grepping for the SQL error code to verify that
the failure was for the expected reason. Right now I just send that
output to the console and visually inspect it, but my next step is
to perform the comparison programmatically.
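That grep step can be sketched as follows, under stated assumptions: run_sql is an invented stub standing in for a real psql invocation (something like `psql -c "$1" 2>&1`), and it emits a representative foreign-key violation message so the sketch runs without a server.

```shell
# Sketch: grep the captured output for the specific error expected, so
# an accidental syntax error is not mistaken for the intended failure.
# run_sql is an invented stub standing in for something like:
#   psql -c "$1" 2>&1
run_sql() {
  echo 'ERROR:  insert or update on table "child" violates foreign key constraint "child_parent_fkey"'
  return 1
}

out=$(run_sql 'INSERT INTO child VALUES (99);')
if printf '%s\n' "$out" | grep -q 'violates foreign key constraint'; then
  echo "failed for the expected reason"
else
  echo "unexpected failure: $out"
fi
```

Matching on the constraint-violation text (or, with VERBOSITY=verbose, on the SQLSTATE code) is what distinguishes the intended failure from any other nonzero exit.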

Dave


















[GENERAL] Looking for pure C function APIs for server extension: language handler and SPI

2016-02-29 Thread david
What I need (to find or create) is a ‘pure’ C language API to support a 
Postgres server extension. By ‘pure’ I mean one that has no knowledge of 
Postgres internals and that could be called by a generic interface provided by 
some other tool that can support C language APIs.

 

The reason is that I’m looking to integrate a new language (Andl) into 
Postgres. To do that I need to find or create a ‘pure’ C-language API to 
interface with:

1.   The generic language call interface (pl_language_handler), which must 
handle conversion of incoming language call parameters and the outgoing language 
call return value.

2.   The backend query execution interface (SPI), which must handle 
conversion of outgoing query parameters and incoming query result values.

 

There are 5 generic data types: boolean, binary (blob), number 
(decimal/real/integer), text (string/varchar), time (date/datetime). Each data 
type needs to be converted between the Postgres internal data types (Datum) and 
some intermediate data type that can be passed through a pure C API. In my case 
that will be C# (MS or Mono), but others might want to use Java or whatever.

 

These conversion tasks are identical to those needed to implement an ODBC/JDBC 
or similar interface, but one bound directly to the server and not going 
through a client (socket) connection. I have already done this successfully 
with SQLite, which already provides a pure C server API and excellent 
documentation to go with it, so I know what it might look like.

 

FYI apart from these two APIs (and their 4x5 conversions), the only other thing 
needed is some SQL code generation and I expect to have a working language of 
considerable power.

 

Any help, suggestions, pointers much appreciated.

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 



Re: [GENERAL] Looking for pure C function APIs for server extension: language handler and SPI

2016-03-02 Thread david
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John McKown
Sent: Wednesday, 2 March 2016 1:03 PM
To: da...@andl.org
Cc: pgsql-general-owner+M220260=david=andl@postgresql.org; Postgres General 

Subject: Re: [GENERAL] Looking for pure C function APIs for server extension: 
language handler and SPI

 

On Mon, Feb 29, 2016 at 5:55 PM, da...@andl.org wrote:

What I need (to find or create) is a ‘pure’ C language API to support a 
Postgres server extension. By ‘pure’ I mean one that has no knowledge of 
Postgres internals and that could be called by a generic interface provided by 
some other tool that can support C language APIs.

 

Well, since nobody else has replied yet, have you read: 
http://www.postgresql.org/docs/9.5/interactive/libpq.html

 

Thanks for noticing! Yes, but this is a server extension and libpq is client 
side. I want to use the generic language call handler and SPI query interface 
on the server, not client side.

 

libpq is the C callable API which communicates with the PostgreSQL server. 
There is a "shared object" and a normal library which can be "statically 
linked". But that's really all that _I_ know about it.

 


 




[GENERAL] CStringGetTextDatum and other conversions in server-side code

2016-03-02 Thread david
Writing a language handler: pl_language_handler. Need to do a variety of data 
conversions. One of them is char* C-string to and from Text/Varchar.

 

The include file postgres.h has the macro CStringGetDatum but this is of no 
use: it’s just a cast.

 

There is a builtin macro CStringGetTextDatum which calls a function and looks 
like it might work. Questions:

1.   Is this the right function?

2.   Is it OK to use, or are there restrictions?

3.   Does it have friends: are there other conversion functions like this 
for other data types (decimal, time in particular)?

4.   Is there any particular documentation I can read?

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 



Re: [GENERAL] CStringGetTextDatum and other conversions in server-side code

2016-03-02 Thread david
(please do not post HTML to these lists; see:
https://wiki.postgresql.org/wiki/Mailing_Lists)
[dmb>] I checked the list first: it looks like about 25-50% HTML. Happy to 
oblige, but I think you've got your work cut out.

> and looks like it might work. Questions:
> 
> 1.   Is this the right function?
> 2.   Is it OK to use, or are there restrictions?
> 3.   Does it have friends: are there other conversion functions like
> this for other data types (decimal, time in particular)?
> 4.   Is there any particular documentation I can read?

Your best bet is to look at examples. The code in the source tree under contrib 
has many examples, and more specifically src/pl/* should have even more 
relevant examples. Remember, grep is your friend -- get a copy of the source 
tree and try:
[dmb>] 
[dmb>] I have the source tree, I use grep and other tools, and I've read 
thousands of lines of code. There is a great deal of inconsistency, and much of 
the code looks like "Summer of Code", or was written a number of years ago, or 
talks about issues I know have been resolved, or says it's just 'legacy 
support'. The upshot is: I haven't yet been able to figure out the 'right' way 
to do things, or what might go wrong if I choose the 'wrong' way. 

Basically any symbol exported from the postgres backend can be used by your PL. 
There is not a lot of documentation other than in the source code itself. Look 
at 
examples, see what they do, emulate it.
[dmb>] That was my plan. But I do prefer to emulate code that is 'right'.

[dmb>] So how would I go about finding a set of useful conversion functions for 
basic types (real, decimal, time, etc)?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] CStringGetTextDatum and other conversions in server-side code

2016-03-02 Thread david
> [dmb>] So how would I go about finding a set of useful conversion functions 
> for basic types (real, decimal, time, etc)?

the basic SQL to C mappings are defined by the H files listed here, 
http://www.postgresql.org/docs/current/static/xfunc-c.html#XFUNC-C-TYPE-TABLE
[dmb>] 
[dmb>] Yes, thanks, I found those. Most useful too. What I was looking for was 
the preferred set of functions that could be used to convert between those 
types and native C types (that could be used externally with no Postgres 
includes).

[dmb>] This is fairly easy for ints and reals, but is particularly a problem 
for all the variable length types (eg text, time and decimal).

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] CStringGetTextDatum and other conversions in server-side code

2016-03-03 Thread david
all the text types are simply a 32-bit length and an array of 
characters.   you need to be aware of the encoding, and null is just 
another character.
[dmb>] Yes, I can see that. What I need is a function that will convert to and 
from whatever the actual encoding happens to be into Unicode, period. I can 
handle UTF8/16/32 as long as I know which.

single and double real/floats are stored in standard x86 IEEE floating point.
[dmb>] Apart from the x86/x64 oddity that seems easy enough (I'm using x64).

the date and time types are all fixed lengths, mostly a long int or two.
[dmb>] Again, I just need a function to convert to and from one single 
date/time format that covers everything. I use 2xint64 and no TZ. It covers 
years down to milliseconds and then some.

numeric in postgres is a bit tricky.  its a decimal number stored in 
base 10000 as a series of short integers, and can represent very high 
precision values.   does your language even have a decimal or BCD or 
similar numeric type?
[dmb>] Of course. I have decimal which is also 2xint64: it equates to about 28 
digits. For business purposes I think that's enough, but I may go for something 
bigger in future. Most likely the simplest solution will be to use string as an 
intermediate.

[dmb>] The point is that none of my code is in C, so I can't get Postgres 
macros or data types and my conversion routines into the same scope. That's why 
I'm looking for basic routines that support basic C data types as the go 
between, so I don't have to write too much C code. 


Regards
David M Bennett FACS

Andl - A New Database Language - andl.org










Re: [GENERAL] CStringGetTextDatum and other conversions in server-side code

2016-03-03 Thread david
FWIW here are examples from PL/R for incoming (argument) and outgoing
(result) conversions of scalar values:

  https://github.com/jconway/plr/blob/master/pg_conversion.c#L632
  https://github.com/jconway/plr/blob/master/pg_conversion.c#L1002

That same file also has routines for conversions of more complex data types.
[dmb>] 
[dmb>] Thank you for the links. Obviously you've put a lot of effort into this 
work, and equally obviously you know far more about this than I do. But I think 
the problem I face is a little different.

This code shows a programming environment in which both the Postgres and R 
declarations are in scope, and the code references both. I can't do that. I 
don't have any C code or any C API that I can bring into a Postgres scope, and 
Postgres does not provide a 'pure' C API that can be referenced from another 
language.

So what I'm trying to do is to write that 'pure' interface, using a thin layer 
of C and a chosen set of intermediate data types. [Take a look at 
https://www.sqlite.org/c3ref/funclist.html to see what I mean by a pure 
interface.]

For the fixed types, the Datum format is simple enough, but the variable types 
are much harder. Text comes as char* with an encoding, and I need Unicode. 
Decimal and time are proprietary formats of some kind, which I would prefer not 
to have to access directly. And so on.

I'm making progress, but it's slow. I've got everything else working: SPI 
queries, decoding tuples, etc. It's just these conversions that have me stuck.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









[GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-06 Thread david
Given that a language handler would be expected to be persistent, and to 
support concurrent (and reentrant) calls within a single database, is there a 
unique context or session identifier available?

Is there a place to store data, to be retrieved later in the same session?

Is there a place to store data (a cache?) that has been retrieved from the 
database for use by concurrent sessions using that database?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org






Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread david
Thank you for your response.

 

Yes, I was aware of GD and SD. My question is about what facilities Postgres 
provides for implementing such a thing. Where is the proper place for the root 
of the SD/GD? What does an implementation use to determine that two calls 
belong to the same session?

 

I’m not finding that easy to understand by reading source code.

 

Regards

David M Bennett FACS


MD Powerflex Corporation, creators of PFXplus

To contact us, please call +61-3-9548-9114 or go to www.pfxcorp.com/contact.htm

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G. Johnston
Sent: Monday, 7 March 2016 4:28 PM
To: da...@andl.org
Cc: pgsql-general-owner+M220479=david=andl@postgresql.org; pgsql-general 

Subject: Re: [GENERAL] Does a call to a language handler provide a 
context/session, and somewhere to keep session data?

 

On Sun, Mar 6, 2016 at 10:21 PM, da...@andl.org wrote:

Given that a language handler would be expected to be persistent, and to 
support concurrent (and reentrant) calls within a single database, is there a 
unique context or session identifier available?

Is there a place to store data, to be retrieved later in the same session?

Is there a place to store data (a cache?) that has been retrieved from the 
database for use by concurrent sessions using that database?

 

http://www.postgresql.org/docs/9.5/interactive/plpython-sharing.html

 

PL/R also has an implementation for this kind of thing.

 

David J.

 



Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread david
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce



Yes, I was aware of GD and SD. My question is about what facilities Postgres 
provides for implementing such a thing. Where is the proper place for the root 
of the SD/GD? What does an implementation use to determine that two calls 
belong to the same session?

the process ID is unique for each active session.   of course, the OS can 
recycle a PID when a process/connection terminates

[dmb>] Thanks for the idea, but I’m wary of using PID for that purpose.

[dmb>] In the Python implementation the GD appears to just be stored as a 
simple variable at file scope in the DLL. Would I be right in saying that the 
language handler DLL is loaded exactly once for each session (when the language 
is first used)? If so, then any unique identifier allocated in _PG_init (such as 
a GUID or timestamp or counter) would seem to serve the purpose. I just 
wondered if there was something clever I hadn’t found out about yet.

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 



Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread david
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jan de Visser

One thing that's probably key here is that pgsql isn't multi-threaded. 
Individual connections are handled by forked backends, which share a 
shared-memory cache that's not accessible by SQL-land code (which includes 
language handlers). So I think your problem goes away once you realize that all 
the data you have is tied to a single connection anyway.

[dmb>] 
[dmb>] Thanks. Yes, I knew that. My problem is a strategy to start up the 
language engine once (per session), load stuff from the database once, and then 
keep the per-call cost as low as possible. I know that palloc() memory goes 
away; that it has a lifetime longer than a function invocation and should span 
a recursive function call, but definitely not longer than a transaction. So I 
need to get some memory per session and keep a pointer to it per session (which 
it seems is per process).

You cannot use multi-threaded code (which touches the database) in language 
handlers or other "plug-in" code.
[dmb>] That's good -- I hate threads. I've never been able to figure out how to 
test them adequately. No loss there.

Also, trying to outsmart the db engine's cache by building your own is usually 
an exercise in futility and often counter-productive. I speak from experience 
:-P
[dmb>] Isn't cache invalidation the second hardest thing in computing (after 
naming things)?

[dmb>] Seems like DLL static memory with allocation from process memory (or 
even malloc()) is "the simplest thing that could possibly work".

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org








Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread david
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce

this stuff you're loading from the database once, that's just data about your 
language plugin's configuration, or is it user data, or what?
[dmb>] It's the catalog for Andl. It contains defined functions, types, 
persistent scalar (non table) data values and links to tables. 

if it's just a few global settings, you should consider using custom 
settings variables, rather than database tables.   for instance, pljava has a 
setting,
pljava.libjvm_location='/usr/lib/jvm/java-1.8.0/lib/libjvm.so' or whatever, 
which it uses to find the Java native calls interface library...
[dmb>] Andl has something similar, but that problem is already solved.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-07 Thread david
From: Jan de Visser [mailto:j...@de-visser.net] 

You're being pretty oblique about what it is you're trying to achieve.
[dmb>] Sorry you see it that way. I know exactly where I'm trying to get to,
but it would take many pages to explain and I don't want to unduly trouble
other busy people.

To go back to one of your earlier emails: the hardest problem in computing
isn't cache invalidation. It is clearly explaining what the problem at hand
is.
[dmb>] What would you like to know? You can find out about Andl by following
the link in my footer. You can find out about me from various links. I have
a long history of experience in C/C++ in multiple environments. Andl is
written in C#, and I've been doing that since pre 1.0. Much of my life has
been writing compilers and related tools for developers, never for end
users. I don't have problems that look anything like the others I see on
this list.

I know from experience that writing long questions is a good way to get
ignored. So I try to pick out one hard question and ask it as briefly as
possible, in the hope that someone with deep Postgres knowledge will
understand what I need and help me find it. At the moment I have two
pressing problems.

One is the start-up phase: getting the Andl runtime stoked up, loading its
catalog, and setting up its execution environment (including its own type
system), ready for business. That process in Postgres seems to be undocumented,
but I think I have it sorted (barring memory lifetime issues down the track).

The other is type conversions: incoming and outgoing. That is undocumented
too, and that's disappointing. Anyone writing functions or a language
handler will really need this. I'm finding it hard to pick a good path right
now.

The third would be queries, but that doesn't look too hard. SPI is quite
well documented.


Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-08 Thread david
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jan de Visser

OK, still not quite sure what your architecture is. Is it


(1)

  +---+
  | Client system |
  +---+
  |
  v
  +---+
  |  andl |
  +---+
  |
  v (SQL)
  +---+
  | pgsql |
  +---+

Or (2)

  +---+
  | Client system |
  +---+
  |
  v (SQL)
  +---++--+
  |  driver/pgsql | -> |   andl   |
  +---++--+
  

In case (1), you're writing a driver: you abstract out the actual datastore
from your client program using andl. In case (2) you have a procedural
language handler where your client program still consciously connects to a
pgsql database, and within that database some/all data processing is
delegated to andl.

[dmb>] It's a fair question, and not obvious at various levels. The plan at
this stage is to continue to use as much of the Postgres client
infrastructure as possible, which means sticking with (2). The client will
definitely connect with a Postgres database and could use ODBC/JDBC/libpq or
whatever.

[dmb>] But the logical model is more like (1). Andl compiles into byte code
which is stored as functions in its catalog (which BTW is just another
Postgres table). The only useful SQL query is: "SELECT func(args);", which
executes an Andl function with some arguments and returns a result set. It
is not intended to ever embed an Andl function call into an SQL query as I
think is done with other languages.

[dmb>] But it would be equally possible to implement (1) directly, using the
Thrift interface (or something else that I haven't tripped over yet). I have
an Sqlite implementation that works like that.

The reason I'm asking is to set terminology. I've browsed some of your
website, and I'm still not clear which of the two options you're after. It
could even be both I think. So let's set some parameters.
[dmb>] The website always lags behind. I only started with Postgres about 2
weeks ago, and I really am only now getting a clear enough idea to be able
to write coherently about it.

Depending on where that goes, you should get pgsql-hackers involved. 
[dmb>] Love to. It takes time to learn the "culture of the lists" too.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org








Re: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-08 Thread david
> ow...@postgresql.org] On Behalf Of Adrian Klaver

> On that subject. I have followed this thread, but only sort of as the
> quoting your email client is doing tends to obscure the flow. I know
> for me that is making it hard to follow your ideas.
> 
> Is that something that can be changed?

I use Outlook, and while it has lots of options, I can only pick one. I've
now switched to what seems to work best for plain text mailing lists, but it
looks terrible on anything else. Very annoying.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: Email address VERP problems (was RE: [GENERAL] Does a call to a language handler provide a context/session, and somewhere to keep session data?

2016-03-09 Thread david
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Magnus Hagander
Sent: Thursday, 10 March 2016 6:18 AM
To: Alvaro Herrera 
Cc: David Bennett ; pgsql-general@postgresql.org
Subject: Re: Email address VERP problems (was RE: [GENERAL] Does a call to a 
language handler provide a context/session, and somewhere to keep session data?

 

On Wed, Mar 9, 2016 at 7:03 PM, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:

David Bennett wrote:
> > ow...@postgresql.org] On Behalf Of Alvaro Herrera
>
> > On that subject.  I noticed that Outlook seems to add the "return
> > path"
> > addresses (sometimes called bounce address or envelope sender) to the
> > CC header, which sets a new record in the stupidity scale.  Since we
> > use VERP, each message gets a different return path address, so with
> > each reply you make, Outlook adds a new address to the CC.
>
> Interesting. I use a lot of mailing lists and I've not run across one
> actually using VERP before. Is it becoming more frequent?

Not sure if it's becoming more frequent -- I only manage *this* list
server and we enabled VERP several years ago.  I thought it was common
practice ... the idea of manually managing addresses that bounce seems
completely outdated now.
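[As background, not from the thread: VERP (Variable Envelope Return Path) folds each recipient's address into the envelope sender, so a bounce message alone identifies the failing subscriber. A minimal sketch of the encoding, in Python with invented names:]

```python
def verp_sender(local, domain, recipient):
    """Build a VERP envelope sender: the recipient's address is encoded
    into the bounce address so a bounce identifies who failed."""
    user, _, rdomain = recipient.partition("@")
    return "%s+%s=%s@%s" % (local, user, rdomain, domain)

def verp_recipient(sender):
    """Recover the original recipient from a VERP bounce address."""
    local = sender.split("@", 1)[0]        # e.g. "owner+tim=example.com"
    _, _, encoded = local.partition("+")   # "tim=example.com"
    user, _, rdomain = encoded.partition("=")
    return "%s@%s" % (user, rdomain)

addr = verp_sender("pgsql-general-owner", "postgresql.org", "tim@example.com")
print(addr)                  # pgsql-general-owner+tim=example.com@postgresql.org
print(verp_recipient(addr))  # tim@example.com
```

[This also shows why each message gets a different return path: the sender address varies per recipient, which is what confuses MUAs that copy it into the CC header.]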

 

 

It's been frequent for quite some time.

 

 

> I checked the headers. It seems this list is using a VERP address for both
> the Return-path and the Reply-To, and only the Sender identifies the list
> directly.

I'm pretty sure our list server is not setting the VERP address in
Reply-To.  That would be insane, wouldn't it.  We don't touch the
Reply-To header at all.  Maybe some other program along the way modifies
the email before Outlook gets it?

 

Yeah, same here.

 

However, if you look at the thread, it seems the VERP address was added to the 
*original email*. In the To field. Perhaps that's what confused the MUA into 
adding *another* VERP address on the reply?

 

That happens when I hit reply to.

 

Then AFAICT in 
http://www.postgresql.org/message-id/raw/001601d17852$7bea9e80$73bfdb80$@pfxcorp.com
a second VERP address was added to the mail (the 485 one).

 

To me it looks like this was definitely done by the MTA or MUA at pfxcorp.com. 
The archived copy (which is delivered the exact same way as a general email, it 
doesn't have any shortcut) does not contain this address anywhere; it was only 
used as an envelope sender. Possibly it got confused by the other VERP address 
in the initial email, which AFAICT is a manual mistake.

 

That’s possible. We have a non-standard mail server (as you can see from the 
headers). I’ll check.

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 

 

 



[GENERAL] How to find configuration data in _PG_init()

2016-03-10 Thread david
My language handler requires some configuration information to start up, which 
I would like to get from the file system (but it could be set up by an SQL 
query SET). Currently I’m using hard-coded paths, which doesn’t seem like a 
good solution. Is there perhaps a way to get the path that was used to load the 
DLL? I couldn’t find one in the existing language implementations.

 

Having got started it then needs to query the database. Is that OK in 
_PG_init()?

 

Any hints much appreciated.

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 



Re: [GENERAL] How to find configuration data in _PG_init()

2016-03-10 Thread david
> ow...@postgresql.org] On Behalf Of Adrian Klaver
> > My language handler requires some configuration information to start
> > up, which I would like to get from the file system (but it could be
> > set up by an SQL query SET). Currently I’m using hard-coded paths,
> > which doesn’t seem like a good solution. Is there perhaps a way to
> get
> > the path that was used to load the DLL? I couldn’t find one in the
> > existing language implementations.
> 
> There is pg_config:
> 
> http://www.postgresql.org/docs/9.5/interactive/app-pgconfig.html

Thanks for the response, but this doesn't help. This is a standalone exe and it 
relies on the command line to find its own path.

I need something that works in _PG_init(), which has no arguments passed in, 
and can only make calls to the C api.

> At the moment it is only available from the above command. In 9.6:
> 
> http://www.depesz.com/2016/02/29/waiting-for-9-6-add-new-system-view-
> pg_config/
> 
> it will also be available in a system view.

So what C api calls does that implementation make to find this information?

> 
> There are the following systems catalogs that give you information
> about
> languages:
> 
> http://www.postgresql.org/docs/9.5/interactive/catalog-pg-
> language.html
> 
> http://www.postgresql.org/docs/9.5/interactive/catalog-pg-
> pltemplate.html
> http://www.postgresql.org/docs/9.5/interactive/catalog-pg-proc.html

Yes, I'm familiar with all those. One possibility is to retrieve 
pg_proc.probin, which appears to always contain the relevant path.

> 
> >
> > Having got started it then needs to query the database. Is that OK
> in
> > _PG_init()?


Regards
David M Bennett FACS

Andl - A New Database Language - andl.org








[GENERAL] How to detoast a column of type BYTEAOID

2016-04-17 Thread david
I am attempting to create a new language implementation. The language is
Andl (andl.org), so the handler is plandl.
I am having a bit of trouble executing SPI queries from inside plandl. 

The particular problem arises after calling SPI_cursor_fetch() and then
SPI_getbinval() on a column of type BYTEA (BYTEAOID 17). I was expecting to
get back a Datum of type bytea, but the length is crazy. From the look of
the data, I suspect that what I have is a compressed TOAST, and I don't know
how to handle those. 

1. Does SPI_getbinval() indeed return compressed TOAST values?
2. If so, what should I do (in general) to ensure the values I retrieve are
de-toasted?


Regards
David M Bennett FACS

Andl - A New Database Language - andl.org






Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-18 Thread david
I am attempting to create a new language implementation. The language is Andl 
(andl.org), so the handler is plandl.
This is a question about executing SPI queries from inside plandl.

The documentation makes it clear that SPI allows nested queries; that in some 
instances it will be necessary to call SPI_push() and SPI_pop(), but in others 
this will be handled automatically. See 
http://www.postgresql.org/docs/9.5/interactive/spi-spi-push.html.

It is an important design feature of plandl to allow nested queries.

My question is: where are the transaction boundaries if the inner/outer query 
do or do not contain BEGIN/ABORT/COMMIT? Do they nest, or does an inner COMMIT 
finish a transaction started by an outer BEGIN, or is it ignored?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] How to detoast a column of type BYTEAOID

2016-04-18 Thread david
OK, got it.

I really wasn't expecting to have to deal with TOASTs in what looks like a 
rather ordinary query -- perhaps there might be a note in the documentation?

But thanks, that works just fine. Problem solved.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org

> -Original Message-
> From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
> Sent: Monday, 18 April 2016 6:05 PM
> To: 'da...@andl.org *EXTERN*' ; pgsql-general@postgresql.org
> Subject: RE: [GENERAL] How to detoast a column of type BYTEAOID
> 
> da...@andl.org wrote:
> > I am attempting to create a new language implementation. The language
> > is Andl (andl.org), so the handler is plandl.
> > I am having a bit of trouble executing SPI queries from inside plandl.
> >
> > The particular problem arises after calling SPI_cursor_fetch() and
> > then
> > SPI_getbinval() on a column of type BYTEA (BYTEAOID 17). I was
> > expecting to get back a Datum of type bytea, but the length is crazy.
> > From the look of the data, I suspect that what I have is a compressed
> > TOAST, and I don't know how to handle those.
> >
> > 1. Does SPI_getbinval() indeed return compressed TOAST values?
> > 2. If so, what should I do (in general) to ensure the values I
> > retrieve are de-toasted?
> 
> You should use the PG_DETOAST_DATUM* macros from fmgr.h; see the
> documentation there.
> 
> Yours,
> Laurenz Albe







Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-18 Thread david
> From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
> 
> > I am attempting to create a new language implementation. The language
> > is Andl (andl.org), so the handler is plandl.
> > This is a question about executing SPI queries from inside plandl.
> >
> > The documentation makes it clear that SPI allows nested queries; that
> > in some instances it will be necessary to call SPI_push() and
> > SPI_pop(), but in others this will be handled automatically. See
> http://www.postgresql.org/docs/9.5/interactive/spi-spi-push.html.
> >
> > It is an important design feature of plandl to allow nested queries.
> >
> > My question is: where are the transaction boundaries if the
> > inner/outer query do or do not contain BEGIN/ABORT/COMMIT? Do they
> > nest, or does an inner COMMIT finish a transaction started by an outer
> BEGIN, or is it ignored?
> 
> You cannot have BEGIN or COMMIT inside a function.

Are you sure you meant it like that? I already have BEGIN/COMMIT inside a 
function and it works perfectly. If it did not, then it would be impossible to 
use BEGIN/COMMIT in any language handler, since every call to a language 
handler is a call to a function.

Did you mean 'inside a nested function'? Or something else?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org











Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-20 Thread david
> From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
> > > > I am attempting to create a new language implementation. The
> > > > language is Andl (andl.org), so the handler is plandl.
> > > > This is a question about executing SPI queries from inside plandl.
> > > >
> > > > The documentation makes it clear that SPI allows nested queries;
> > > > that in some instances it will be necessary to call SPI_push() and
> > > > SPI_pop(), but in others this will be handled automatically. See
> > > http://www.postgresql.org/docs/9.5/interactive/spi-spi-push.html.
> > > >
> > > > It is an important design feature of plandl to allow nested queries.
> > > >
> > > > My question is: where are the transaction boundaries if the
> > > > inner/outer query do or do not contain BEGIN/ABORT/COMMIT? Do they
> > > > nest, or does an inner COMMIT finish a transaction started by an
> > > > outer
> > > BEGIN, or is it ignored?
> > >
> > > You cannot have BEGIN or COMMIT inside a function.
> >
> > Are you sure you meant it like that? I already have BEGIN/COMMIT
> > inside a function and it works perfectly. If it did not, then it would
> > be impossible to use BEGIN/COMMIT in any language handler, since every call
> to a language handler is a call to a function.
> >
> > Did you mean 'inside a nested function'? Or something else?
> 
> I guess I'm out of my depth when it comes to language handlers...
> 
> But I cannot see how you can have BEGIN or COMMIT called from inside one.
> 
> Doesn't it look like that:
> 
> BEGIN;
> SELECT my_andl_function();
> COMMIT;
> 
> Since there are no autonomous transactions in PostgreSQL, how can you have
> BEGIN and COMMIT called from the code that is invoked by "SELECT
> my_andl_function()"?

I really don't know. But I have code that does this (no explicit BEGIN):

SELECT * FROM COMPILE($$
V6 := {{ abo:=true, abi:=b'DEADBEEF', anu:=123456789.987654321, ate:='abcdef', 
ati:=t'2015-12-31 23:59:58.' },
   { abo:=false, abi:=b'DEADBEEF', anu:=987654321.123456789, 
ate:='ghijklmno', ati:=t'2016-12-31 23:59:58.' }}
V6
$$);

And the generated code (which executes without error):
BEGIN;
DROP TABLE IF EXISTS "V6" ;
CREATE TABLE "V6" ( "abo" BOOLEAN, "abi" BYTEA, "anu" NUMERIC, "ate" TEXT, 
"ati" TIMESTAMP, UNIQUE ( "abo", "abi", "anu", "ate", "ati" ) );
COMMIT;
INSERT INTO "V6" ( "abo", "abi", "anu", "ate", "ati" ) VALUES ( $1, $2, $3, $4, 
$5 );
SELECT "abo", "abi", "anu", "ate", "ati" FROM "V6";

Maybe the generated BEGIN/COMMIT are ignored? I haven't tried an ABORT yet.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org






Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-20 Thread david
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> 
> > I really don't know. But I have code that does this (no explicit BEGIN):
> 
> > SELECT * FROM COMPILE($$
> > V6 := {{ abo:=true, abi:=b'DEADBEEF', anu:=123456789.987654321,
> ate:='abcdef', ati:=t'2015-12-31 23:59:58.' },
> >{ abo:=false, abi:=b'DEADBEEF', anu:=987654321.123456789,
> > ate:='ghijklmno', ati:=t'2016-12-31 23:59:58.' }}
> > V6
> > $$);
> 
> > And the generated code (which executes without error):
> 
> > BEGIN;
> > DROP TABLE IF EXISTS "V6" ;
> > CREATE TABLE "V6" ( "abo" BOOLEAN, "abi" BYTEA, "anu" NUMERIC, "ate"
> > TEXT, "ati" TIMESTAMP, UNIQUE ( "abo", "abi", "anu", "ate", "ati" ) );
> > COMMIT; INSERT INTO "V6" ( "abo", "abi", "anu", "ate", "ati" ) VALUES
> > ( $1, $2, $3, $4, $5 ); SELECT "abo", "abi", "anu", "ate", "ati" FROM
> > "V6";
> 
> Define "executes".  You could shove those lines in via the wire protocol,
> sure, but SPI won't take them.

Now you really have me puzzled. What I provided is an extract from the log
of generated SQL commands sent to the SPI interface. [Obviously there are also
values bound to the parameters which do not show up here.]

The code executes without error and produces exactly the output I expected.

I'll do some more checking to see if I missed something, but so far it just
works.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







Re: [GENERAL] How do BEGIN/COMMIT/ABORT operate in a nested SPI query?

2016-04-20 Thread david
> >> Define "executes".  You could shove those lines in via the wire
> >> protocol, sure, but SPI won't take them.
> 
> > Now you really have me puzzled. What I provided is an extract from the
> > log of generated SQL commands sent to the SPI interface. [Obviously
> > there also values bound to the parameters which do not show up here.]
> 
> > The code executes without error and produces exactly the output I
expected.
> 
> Are you remembering to check the result code from SPI_execute_whatever?

Of course. Every SPI call is checked and any unexpected error is trapped,
resulting eventually in a call to elog(ERROR).

The sequence I provided is a CREATE TABLE followed by an INSERT. The table
is successfully created with the correct contents.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org








[GENERAL] Is it possible to call Postgres directly?

2016-04-20 Thread david
I am attempting to create a new language implementation. The language is
Andl (see http://www.andl.org/2016/04/postgres-meet-andl/). 

I would like to be able to execute SQL queries by a direct call into
Postgres, without going through either (1) the language call handler or (2)
a 'wire' protocol. 

Re (1): At present I can do it like this:

SELECT * FROM COMPILE($$

$$);

But I would like to avoid the outer SQL wrapper.

Re (2): Those queries will contain Andl functions, which require a callback
into the same session of the language 'engine'.

I guess what I'm trying to do is provide a substitute for existing wire
protocols, using either a Thrift server or a Web server calling directly
into Postgres.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







Re: [GENERAL] Is it possible to call Postgres directly?

2016-04-20 Thread david
> > I would like to be able to execute SQL queries by a direct call into
> > Postgres, without going through either (1) the language call handler
> > or (2) a 'wire' protocol.
> 
> What would you consider to be "directly"?  What advantage would you hope to
> gain from a different API?

The aim is to execute Andl code at the top level, not inside SQL. So instead
of:

>>>SELECT * FROM AndlFunc(args);

I need

>>>AndlFunc(args)

> FWIW, it seems unlikely to me that we would consider anything much lower-
> level than SPI to be an officially-supported query interface.
> You could no doubt run a query by calling assorted backend functions directly
> from a PL implementation, but it would be mostly on your own head whether
> that worked and kept working across releases.

No, SPI is quite low enough level for me. I really don't want or need to go
lower than that.

So the question is: Can a C program link to the Postgres DLL and call SPI
directly, rather than through a language function?

Is there a way to launch a Thrift server or a Web server and call SPI
directly?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org






Re: [GENERAL] Is it possible to call Postgres directly?

2016-04-21 Thread david
That's OK. I can set things up so that the Thrift or Web servers call the
Andl runtime directly, pass generated SQL queries in through libpq, and call
the Andl runtime recursively from the plandl handler as needed. It's just
one more API to deal with.

Thanks for the suggestion, but I already did an Sqlite implementation. It
was the obvious place to start, but now I need a real server.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org


> -Original Message-
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Thursday, 21 April 2016 11:57 PM
> To: da...@andl.org
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Is it possible to call Postgres directly?
> 
>  writes:
> > So the question is: Can a C program link to the Postgres DLL and call
> > SPI directly, rather than through a language function?
> 
> No, there is no provision for non-server usage of the code.  If that's what
> you're after you might be happier with SQLite or something similar.
> 
>   regards, tom lane





Re: [GENERAL] Proper relational database?

2016-04-22 Thread david
Andl is a "proper relational database" language.

Andl is an original implementation of the language D described in The Third 
Manifesto. http://www.dcs.warwick.ac.uk/~hugh/TTM/. 

I have a working implementation of Andl on Postgres. See 
http://www.andl.org/2016/04/postgres-meet-andl/. This version is not ready for 
release, but there is a download that supports Sqlite, Thrift and REST.

The syntax of Andl is quite different from Tutorial D, or SQL. It does what SQL 
does, including any relational query you can think of, and a few you haven't! 
Check out the web site for some code samples.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org

> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Paul Jungwirth
> Sent: Saturday, 23 April 2016 5:38 AM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Proper relational database?
> 
> On 04/21/2016 01:36 PM, Guyren Howe wrote:
> > Anyone familiar with the issue would have to say that the tech world
>  > would be a significantly better place if IBM had developed a real  >
> relational database with an elegant query language
> 
> I'm surprised no one yet has mentioned Tutorial D by C. J. Date. His book
> _Database in Depth_ is pretty much an extended argument for how superior it
> is to SQL. RelDB is apparently an open source implementation of it, and D4 is
> a commercial one. That's about all I know in terms of practically using it
> for something. But Date & Tutorial D seems like a good place to start if SQL
> isn't relational enough for you. The book I mentioned is short and easy to
> read.
> 
> Paul
> 
> 
> 
> 
> 
> 
> 





Re: [GENERAL] Proper relational database?

2016-04-22 Thread david
Eventual consistency is not part of the language, so outside the scope of Andl.

 

Easy distribution depends on a standardised language. SQL is a definite fail. 
There is only one Andl and it works identically on all platforms. That should 
help.

 

Why schema-on-demand? Can you explain what you mean by that?

 

Regards

David M Bennett FACS


Andl - A New Database Language - andl.org

 

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Guyren Howe
Sent: Saturday, 23 April 2016 5:54 AM
To: PostgreSQL General 
Subject: Re: [GENERAL] Proper relational database?

 

The SQL language is terrible but we can live with it.

 

But the answer to "Are there any relational data stores that offer eventual 
consistency, easy distribution, schema-on-demand or any such things a large 
modern application can use?" appears to be no. And that's just awful.

 

On Apr 22, 2016, at 12:40, David G. Johnston <david.g.johns...@gmail.com> wrote:

 

On Fri, Apr 22, 2016 at 12:25 PM, Raymond Brinzer <ray.brin...@gmail.com> wrote:

So, let's just flat-out ask.

Dear Important People:  would the PostgreSQL project consider
supporting other query languages? Or creating a plug-in mechanism for
them, so that alternative interface languages could be added without
changing the base code?

 

If by important you mean possessing a commit-bit then I don't count...but for 
me, such a project would have to gain significant adoption as a fork of the 
PostgreSQL code base before it would ever be considered for take-over by the 
mainline project.

David J.

 

 



Re: [GENERAL] Proper relational database?

2016-04-22 Thread david
> Why is starting at a low level important?  A database is truly relational to
> the extent that it implements the relational model. If you don't want the
> database to allow tables without keys, or to allow null values, don't let
> people create them. If the underlying machinery allows them, that seems like
> a mere performance issue; worrying about that from the outset seems like a
> perfect example of premature optimization. If PostgreSQL's performance is
> acceptable now, why wouldn't it be acceptable with a different interface
> language?

Agreed.

> There are other aspects of what would make a truly relational database, of
> course. Codd's 0th rule, for instance, that the "system must be able to
> manage data bases entirely through its relational capabilities" to me says
> that there should be no data definition language, except as syntactic sugar
> for relational operations. So you'd create users (thousands in one command,
> if you liked) by adding tuples to a base relation.

Yes, maybe, but that makes it not part of the language. You can't apply rule 0 
to things like creating a type or operator in a language.

> But which things are important? I think a good many of the things one might
> would be lower-hanging fruit than that. Just having a clean query language
> would alleviate a lot of (my) discomfort.

Andl is that.

> > I don't know if Postgres exposes the lower-level stuff to plugins or
> > not — it would be nice if this could be an alternative query language
> > for Postgres itself,
> 
> Well, the parser doesn't, but as best I can tell it's also somewhat loosely
> coupled from the system. It doesn't do table access, for instance.  It builds
> and returns a parse tree.  There's no reason you couldn't parse a different
> language and return a tree of the same type.  Or you could just translate
> your input language into SQL, and pass it along to the existing parser.

I looked into that, and it's too hard as a starting place. There is too much of 
the tree and the subsequent query planning that is hooked into specific 
features of SQL. Instead, Andl generates a small subset of SQL. Once each query 
has been generated and parsed, the prepared statements can be cached and you 
get most of the benefits.
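[The generate-a-small-subset-of-SQL-and-cache approach described above can be sketched as follows. This is purely illustrative Python with invented names, not Andl's actual implementation: a tiny relational-algebra tree is rendered to SQL text once, then cached so repeated queries reuse the same (conceptually prepared) statement.]

```python
# Cache of generated SQL, keyed by the source expression.
sql_cache = {}

def translate(expr):
    """Render a toy relational-algebra tree to a small SQL subset."""
    op = expr[0]
    if op == "table":                        # ("table", name)
        return 'SELECT * FROM "%s"' % expr[1]
    if op == "restrict":                     # ("restrict", child, condition)
        return "SELECT * FROM (%s) t WHERE %s" % (translate(expr[1]), expr[2])
    if op == "project":                      # ("project", child, [columns])
        cols = ", ".join('"%s"' % c for c in expr[2])
        return "SELECT DISTINCT %s FROM (%s) t" % (cols, translate(expr[1]))
    raise ValueError("unknown operator: %s" % op)

def compile_query(expr):
    """Translate on first use only; later calls hit the cache."""
    key = repr(expr)
    if key not in sql_cache:
        sql_cache[key] = translate(expr)
    return sql_cache[key]

q = ("project",
     ("restrict", ("table", "pg_user"), "usename = 'tim'"),
     ["usename"])
print(compile_query(q))
```

[A real translator would of course emit parameter placeholders and hand the text to prepared statements; the point is only that translation happens once per distinct query.]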

> > but the assumptions about the two worlds (SQL vs a properly relational
> > store) are probably too different.

> Are there relational algebra expressions, or other operations necessary to a
> truly relational database, which cannot be translated into SQL?  I'm not
> aware that there are, but I'd be interested to hear of it.  If there were,
> there's a good chance you wouldn't be able to translate them into the parse
> tree, either.

Absolutely not. SQL is a (nearly) full implementation of the relational 
algebra, plus other non-relational stuff. The only thing it really can't handle 
is a table with no columns! (I have to fake that)

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org








Re: [GENERAL] Proper relational database?

2016-04-22 Thread david
> On Fri, Apr 22, 2016 at 10:45 PM, John R Pierce  wrote:
> > a table with no columns would have no primary key...  doesn't that
> > violate one of the fundamental tenets of the relational model ?
> 
> Not as I understand it.  A relation must have at least one candidate key.
> That will be the set of all the fields, if no proper subset qualifies.
> Calling one key "primary" is merely convention, so far as I am aware (talking
> relational theory, here, not how databases regard primary keys).

This is a 'soft' requirement. If there is no other key, then the set of all 
attributes is the key.

> In a table with no columns, the only candidate key is the set of all fields,
> which is the empty set.  If you want to call that the primary key, it
> shouldn't be a problem.  The tuples (all 0 of them) are guaranteed to be
> unique.

The relation with no attributes may have a tuple, which itself has no 
attributes. Thus there are two such relations: one empty, and one whose body is 
that single empty tuple (cardinality one). They can be referred to as DUM and 
DEE, or as false and true. See http://c2.com/cgi/wiki?TableDum for example.

Many experienced users of SQL are aware of situations where they are useful. In 
Andl they are automatically available as literals.
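[To make DUM and DEE concrete, here is a toy model (illustrative Python, not from the thread) of relations as sets of attribute-tuples. With zero attributes the only possible tuple is the empty tuple, so exactly two relations exist, and natural join on them behaves as logical AND:]

```python
# A relation is modelled as a frozenset of tuples; the tuple with no
# attributes is ().  The two zero-attribute relations:
DUM = frozenset()        # no tuples  -> plays the role of FALSE
DEE = frozenset({()})    # one tuple  -> plays the role of TRUE

def join(r, s):
    """Natural join restricted to zero-attribute operands: DEE is the
    identity, DUM annihilates -- i.e. logical AND."""
    return frozenset(a + b for a in r for b in s)

assert join(DEE, DEE) == DEE   # true AND true  = true
assert join(DEE, DUM) == DUM   # true AND false = false
assert join(DUM, DUM) == DUM   # false AND false = false
```

[This is the sense in which "the maths requires it": the algebra's identities only close if both zero-attribute relations exist.]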

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Proper relational database?

2016-04-22 Thread david
> So, let's just flat-out ask.
> 
> Dear Important People:  would the PostgreSQL project consider supporting
> other query languages? Or creating a plug-in mechanism for them, so that
> alternative interface languages could be added without changing the base
> code?

I very much doubt it. The use case has to be established first.

As it happens, the existing PL extension capability provides enough to get an 
alternative query language (such as Andl) to work. That's why I chose Postgres.

Making it 'native' would not be so hard if there is real demand.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
> ow...@postgresql.org] On Behalf Of Guyren Howe
> Sent: Saturday, 23 April 2016 4:04 AM
> To: Raymond Brinzer 
> Subject: Re: [GENERAL] Proper relational database?
> 
> On Apr 22, 2016, at 10:45 , Raymond Brinzer  wrote:

> The fundamental storage model needs to at least be a bit different. In
> particular, relations can't allow duplicates. You could have nulls (Codd
> proposed two different forms of null IIRC: a single null value and two
> different null values), although they should be more principled than the mess
> they are in SQL.

Andl has no nulls. I have read Codd's later work, but IMHO the consequences
of multi-valued logic do not justify that conclusion.

The standard storage engines used for SQL can handle tables with no nulls
perfectly well.

> I am no expert on database optimization, but I understand that it is
> significantly easier to do query optimization in a properly relational
> database, as it forms a reasonably simple algebra, which can be optimized
> much as you would optimize evaluation of a numeric expression.

I would venture a guess that advanced query planners already take into
account whether columns have nulls or not. Whatever can be done for a 'pure'
RA can already be done as a special case for SQL, and probably has been.

> Major gains from a proper relational store would be:
> 
> - a better language, easier to parse, read and generate. Perhaps multiple
> equivalent query languages;

Check. Andl is that, and I know several others.

> - other storage models (distributed and eventually consistent, say);
> - simpler (in implementation and use);

Not sure whether this is a reasonable consequence.
> 
> We may also get some degree of speedup and other good things. It also might be
> implemented in such a way that it can run as a server or more like SQLite.

Andl does that. It provides 3 native servers: Thrift, Web API and REST.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
> This is a relevant project: https://github.com/agentm/project-m36

Thanks -- I didn't know about that one. I'll add it to my list.

It's quite unlike other implementations. I have some reading to do.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
> From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
 
> FWIW standard SQL may not allow it but Postgres does, and it's even possible
> to exclude duplicates by using an expression that references the whole row.

Thank you. I didn't know that.

I'll use it if I can verify it works right. It's not that important -- Andl can 
emulate it quite easily.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org











Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
> > Absolutely not. SQL is a (nearly) full implementation of the relational
> > algebra, plus other non-relational stuff. The only thing it really can't
> > handle is a table with no columns! (I have to fake that)
>
> A table with no columns would have no primary key... doesn't that violate
> one of the fundamental tenets of the relational model?
>
> john r pierce, recycling bits in santa cruz

The relational model requires a key, but the key can be empty (no attributes).
Such a relation can itself be empty, or it can have a single tuple as its body.
The maths requires it, and it works just fine.
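The two possible values of a relation with an empty heading are often called TABLE_DUM (empty body) and TABLE_DEE (body of one empty tuple). A toy sketch of the maths in Python (the names DUM/DEE and the join helper are illustrative, not from Andl or any product):

```python
# Model a relation as a frozenset of tuples; with an empty heading,
# every tuple is the 0-tuple (), so only two relation values exist.
DUM = frozenset()        # empty heading, empty body ("false")
DEE = frozenset({()})    # empty heading, body = one empty tuple ("true")

def join(r, s):
    # cross-concatenate tuples; with DEE/DUM this keeps or empties r
    return frozenset(a + b for a in r for b in s)

r = frozenset({(1,), (2,)})
assert join(r, DEE) == r      # DEE is the identity for join
assert join(r, DUM) == DUM    # DUM annihilates
```

This is why the maths "works just fine": DEE and DUM behave like TRUE and FALSE under join.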

Regards

David M Bennett FACS

Andl - A New Database Language - andl.org



Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Guyren Howe

 

Why schema-on-demand? Can you explain what you mean by that?

 

Something that is attractive, for beginners or perhaps when prototyping, is
that you don't have to declare a table. You can just insert tuples into a
predicate whose name you provide; they go in, and you've defined a relation
just by using it.
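A rough sketch of that "defined by use" behaviour, assuming nothing about how any real store implements it:

```python
from collections import defaultdict

# A toy "database" where inserting into a named predicate creates it on demand.
db = defaultdict(set)

def insert(predicate, **attrs):
    # storing tuples as sorted (attribute, value) pairs keeps them hashable
    db[predicate].add(tuple(sorted(attrs.items())))

# No CREATE TABLE step: the relation exists as soon as it is first used.
insert("likes", person="alice", thing="sql")
insert("likes", person="bob", thing="andl")

assert len(db["likes"]) == 2
```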

 

The problem is which side of the fence you're writing code on. If you write
code on the app side, it's a hassle and every table is a cost. If you write
on the SQL side, you can create temp tables with little effort.

 

So the answer to that one is: in a relational language (like Andl) it's so
easy to create a relation (table) you just do it.

 

Much of my point in raising this discussion is that there are features that
the NoSQL folks are implementing that are useful in some cases. Things they
are doing like eventually consistent distributed stores are really required
at sufficient scale, but there are other great ideas. SQL's storage model is
not the only way, nor should it be. We shouldn't have to abandon the
relational model to get such features, but we *do* have to abandon SQL to
get them. And good riddance.

 

Agreed.

 

I would like to have relational stores providing such features before some
monstrosity like Mongo or CouchDB becomes so entrenched we'll never be rid
of it.

 

They do provide some genuinely useful non-relational features, but yes.
Friendly relational is what I'm working on.

 

Regards

David M Bennett FACS

Andl - A New Database Language - andl.org



Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
> ow...@postgresql.org] On Behalf Of Eric Schwarzenbach

> >> If I had a few $million to spend in a philanthropical manner, I would
> >> hire some of the best PG devs to develop a proper relational database
> server.
> >> Probably a query language that expressed the relational algebra in a
> >> scheme-like syntax, and the storage model would be properly
> >> relational (eg no duplicate rows).

If there were someone to pay the bills, would you work on it?

> >> It's an enormous tragedy that all the development effort that has
> >> gone into NoSQL database has pretty much all gotten it wrong: by all
> >> means throw out SQL, but not the relational model with it. They're
> >> all just rehashing the debate over hierarchical storage from the 70s.
> >> Comp Sci courses should feature a history class.
> >>
> >> It's a bit odd to me that someone isn't working on such a thing.

Several people are, but without the few $million...

> > Well when IBM were first developing relational databases there were
> > two different teams.  One in California which produced System-R which
> > became what we now know as DB2 and spawned SQL, and the other in
> > Peterlee in the UK which was called PRTV (the Peterlee Relational Test
> > Vehicle).  PRTV rather died but bits of it survived.  

And many of the people who worked on it are still around.

> > According to the Wikipedia page it did have a language (ISBL) but from
> > what I recall (and it was nearly 40 years ago) there were a series of
> > PL/1 function calls we used rather than encoding the request as a
> > string as SQL systems require.

Ditto. Including Hugh Darwen.

> One of the people involved in that was Hugh Darwen, who is one of the authors
> of The Third Manifesto, which is an attempt to define what a properly
> relational language and system should look like. So you could say the
> experience of ISBL vs SQL has been folded into that effort.

See http://www.thethirdmanifesto.com/. 

Hugh worked for some years for IBM on the SQL Committee, but eventually left 
over a major disagreement in direction. TTM is based on the work he's done 
since (with Chris Date). Andl derives from that.

I would say that very little of PRTV/ISBL experience was added to SQL once it 
had been standardised, even with Hugh doing his best.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org












Re: [GENERAL] Proper relational database?

2016-04-23 Thread david
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-


> This turns out to be true in many areas of language design, multi-user
> system security, virtually everything to do with networking, and application
> deployment.  I was at an IETF meeting some years ago where someone talking
> about "Internet of Things" stuff was going on at length about how nobody
> around the IETF really understood constrained systems.  Standing behind him
> at the mic was an assortment of grey-bearded men who'd worked directly on the
> original IMPs (which were 16-bit Honeywells that ran at like 5MHz and had
> IIRC 16Kwords of memory).

Amen to that. I started on embedded systems with a >1 usec cycle time and
16 KB of memory. Machine code, no assembler. But I never want to do that again
-- be pleased someone still does!

> It's also true that crappy interfaces that are good enough stick around
> anyway.  The UNIX Haters' Handbook is full of evidence of how much less good
> UNIX was, but even Apple gave in.  Also, many of the historical compromises
> turn out, once you start to try to make different ones, to be more obviously
> defensible.

Amen to that. Replacing SQL is easy when you look at SQL's faults, but not
so easy when you realise its strengths.

> Most of the NoSQL trend was not a hatred of SQL the language but
> a carelessness about the relational syntax or a view that promises about
> consistency are dumb.  Then the first credit card number gets lost in an
> eventually-consistent system, and people suddenly understand viscerally why
> transaction semantics are so hard.

But there is goodness there, and NoSQL is now just as hard to replace.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: On the right tool (was Re: [GENERAL] Proper relational database?)

2016-04-23 Thread david
> ... the path by which we learn to make new mistakes, as opposed to the same
> mistake over again.

Most people do very little tool shaping. Most people will use a hammer to
drive a screw if that's what they see other people doing. I'm a toolmaker
and I think we can do better than SQL, but it sure is hard to get there.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Proper relational database?

2016-04-24 Thread david
> ow...@postgresql.org] On Behalf Of Thomas Munro

> So what incremental improvements could we steal from "properly relational"
> query languages?

Here is my list of deficiencies in some or all dialects of SQL.
- Relation and tuple as data types
- Globally exclude NULLs and duplicate columns
- Relation with no columns, null key
- Natural antijoin
- Tuple join operations
- Tuple tests for equality, superset, subset
- Tuple test for set membership of relation
- Relation tests for equality, superset, subset
- Relation set operations include symmetric difference
- User-defined functions of arbitrary complexity
- User-defined aggregation functions of arbitrary complexity
- User-defined ordered queries of arbitrary complexity
- Iterative/recursive queries (when)
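Several of these items (tuple membership, relation equality/subset tests, symmetric difference) fall out naturally once a relation is modelled as a set of tuples, as this illustrative Python sketch shows (names are invented):

```python
# Relations as frozensets of tuples of (attribute, value) pairs.
def tup(**attrs):
    return tuple(sorted(attrs.items()))

r = frozenset({tup(s="S1"), tup(s="S2")})
s = frozenset({tup(s="S2"), tup(s="S3")})

assert tup(s="S1") in r                 # tuple test for set membership
assert r != s                           # relation equality test
assert frozenset({tup(s="S2")}) <= r    # relation subset test
# symmetric difference: tuples in exactly one of the two relations
assert r ^ s == frozenset({tup(s="S1"), tup(s="S3")})
```

Each of these is a one-liner here; the complaint is that SQL has no direct spelling for most of them.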

> Here's one I've thought about, trivial as it may be.  I noticed that Tutorial
> D (and apparently Andl too) includes a [NOT] MATCHING operator (alternatively
> spelled SEMIJOIN and SEMIMINUS) corresponding to the ⋉ (semi-join) and ▷
> (anti-join) operators from relational algebra.  In SQL you write [NOT] EXISTS
> or [NOT] IN in the WHERE clause, rather than something explicit in a join
> clause, though experienced users often talk explicitly about semi- and
> anti-joins, both because of the theory and because the terms show up in query
> plans.

Yes, there are two joins (join and antijoin). Semijoin is just one of a number 
of projections following a join, but antijoin is a quite different algorithm.

Antijoin is quite hard to write in SQL in such a way that the query planner 
will do the right thing. There is a lot of variation between dialects.
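The difference between the two is easy to see in a toy implementation (rows as dicts, join keys passed explicitly; illustrative only, not how any planner does it):

```python
def semijoin(r, s, on):
    # keep rows of r that have at least one match in s
    keys = {tuple(row[k] for k in on) for row in s}
    return [row for row in r if tuple(row[k] for k in on) in keys]

def antijoin(r, s, on):
    # keep rows of r that have NO match in s -- a different algorithm,
    # not just a projection applied after a join
    keys = {tuple(row[k] for k in on) for row in s}
    return [row for row in r if tuple(row[k] for k in on) not in keys]

students = [{"student_id": 1, "name": "Ann"}, {"student_id": 2, "name": "Ben"}]
exams = [{"student_id": 1, "exam_date": "today"}]

assert semijoin(students, exams, ["student_id"]) == [{"student_id": 1, "name": "Ann"}]
assert antijoin(students, exams, ["student_id"]) == [{"student_id": 2, "name": "Ben"}]
```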

> A recent blog post[1] argues that SQL should have a SQL92-style JOIN syntax
> for this and points at a couple of products that have added one[2][3].  I
> guess it might be hard to convince the Postgres community to add support for
> a non-standard syntax that doesn't give you anything you can't already do,
> but as an idea I find it interesting and it seems to be in the spirit of the
> part of the Third Manifesto that says: "support[ing] the usual operators of
> the relational algebra [..].  All such operators shall be expressible without
> excessive circumlocution."

The purpose is simply that explicit syntax allows for explicit query 
optimisation.
> 
> For example, say we want all students who have one or more exam today:
> 
>   SELECT s.student_id, s.name
> FROM student s
>WHERE EXISTS (SELECT 1
>FROM exam e
>   WHERE e.student_id = s.student_id
> AND e.exam_date = CURRENT_DATE)
> 
> I don't know Tutorial D, but I think it might express that with something
> like:
> 
>   ( student MATCHING exam
> WHERE exam_date = CURRENT_DATE )
>   { student_id, name }
> 
> With 10 minutes of prototype hacking I convinced Postgres to accept SEMI and
> ANTI like this:
> 
>   SELECT s.student_id, s.name
> FROM student s SEMI JOIN exam e USING (student_id)
>WHERE e.exam_date = CURRENT_DATE
> 
> I guess a real version should accept (or require?) LEFT or RIGHT before
> SEMI/ANTI.

Andl has semijoin, rsemijoin, ajoin and rajoin. They are trivial to add once
you have the basic algorithm, but Tutorial D does not include them all.

> When using this hypothetical syntax I think you should be allowed
> to refer to e.exam_date in the WHERE clause but not in the SELECT list
> (Impala apparently does allow you to see data from exam, and returns values
> from an arbitrary matching row, but that doesn't seem right to me).  But the
> EXISTS syntax is correspondingly strange in that it requires you to provide a
> SELECT list which is entirely discarded, so people often write "*" or "1"
> (erm, OK, I guess you can use an empty select list in recent Postgres).

SQL has an implicit ordering of query evaluation -- you will often need to 
write a nested subquery or correlated query for what should be very 
straightforward situations. That's another thing that's easy to fix, if allowed.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









[GENERAL] Background worker plus language handler for Andl: OK?

2016-04-24 Thread david
It seems to me that it should be possible to create a language front end for
Andl by creating a background worker and a language handler. The BGW will
need to implement the Andl language and a listener, and submit generated SQL
to SPI. The PL will get called by query functions and pass them to the BGW
(via a callback) for execution. AFAIK the BGW and the PL run in the same
process, and this should work.

Apart from reading the documentation (I have), reading source code
(worker_spi.c) and heeding warnings about the risk of breaking the server,
is there any reason why this would not work? There is a fair bit of work,
and it would be nice to know what to watch out for.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org






Re: [GENERAL] Background worker plus language handler for Andl: OK?

2016-04-25 Thread david
> ow...@postgresql.org] On Behalf Of David Wilson

> I've been reading your posts over the past few days and while I find it fun
> to follow, I can't help but wonder why there is urgency in
> reimplementing a protocol within PG itself.

I think it's an interesting problem -- glad you find it so.

No, I don't plan to implement any more protocols. The problem here is
callbacks, and probably transaction boundaries.

Andl is designed to be a relational language filling a similar niche to SQL
with PLSQL or SQL/PSM. It contains a full implementation of the relational
algebra, but is also a general purpose programming language. [The code is
all compiled and the front end is RPC, nothing like libpq or ODBC.]

A query is a relational expression and may evaluate arbitrary expressions.
Example:

// Q8. Get all shipments where the quantity is in the range 300 to 750
inclusive.
// SQL> select spj.* from spj where spj.QTY>=300 and spj.QTY<=750; 
Andl: SPJ .where(QTY>=300 and QTY<=750)

The JOIN can be generated as SQL but the where predicate requires a callback
into the Andl runtime. I would be quite happy to run queries through libpq,
but I can see no way to handle callbacks without running in-process.

[Yes, in some cases the query planner will replace this by an operation on
an index, but this is about the general case.]
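To make the cost argument concrete: with an arbitrary predicate the host must call back into the language runtime once per candidate row, so the callback sits on the innermost loop of the scan. A schematic sketch (pure Python, invented row data echoing the Q8 example):

```python
def scan_with_callback(rows, predicate):
    # predicate stands in for an opaque callback into the language runtime;
    # it runs once per row, so its invocation cost dominates the scan
    calls = 0
    out = []
    for row in rows:
        calls += 1
        if predicate(row):
            out.append(row)
    return out, calls

spj = [{"QTY": 100}, {"QTY": 300}, {"QTY": 750}, {"QTY": 800}]
# Q8: quantity in the range 300 to 750 inclusive
matched, calls = scan_with_callback(spj, lambda r: 300 <= r["QTY"] <= 750)

assert [r["QTY"] for r in matched] == [300, 750]
assert calls == len(spj)   # one cross-boundary call per candidate row
```

If each call crosses a process boundary instead of staying in-process, that per-row cost multiplies by the row count, which is the author's point.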

I have it working as a PL extension, but then the entire query has to be
embedded inside a PL function call which is (a) messy (b) cannot manage
transaction boundaries.

> It seems to me this is a much larger undertaking than you realize, for
> example, you would at least need to reinvent PG's existing authentication and
> authorization mechanisms, or perhaps patch PG somehow to expose them usefully
> to your own code.

It's a useful point, but I'm not sure it applies. Andl is not intended to
have an SQL-like execution or security model. Perhaps this is something that
needs some more thought, but it's unlikely to be a critical factor.

> Is there a hard requirement that this stuff be in-process? Most of the cost
> of a SQL query will be lost in planning and execution, the actual time spent
> copying some strings around and context switching will be pretty minimal for
> a query of any significance.

See above. The cost of setting up the query is trivial compared to the cost
of a callback every time an expression is evaluated, perhaps many times per
row.
> 
> If I were you I'd start with building a robust proxy server first, serving up
> your custom protocol and rewriting it to a PG client connection internally,
> and only then look at how that might be merged in-process if indeed there was
> a real need for it.

If there really is another way to go, I'm happy to hear about it. I think
this is not a job for a proxy server -- the external interface for Andl is
RPC, not shipping query text around. That's all working -- it's the backend
I need.


Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







[GENERAL] Problems running the WorkerSpi sample

2016-04-25 Thread david
I’ve been trying to get the WorkerSpi sample working on a Windows installation, 
VS 2015, x64. So far, the following problems.

 

1.   Does not compile OOB. Added the following lines.

PGDLLEXPORT void _PG_init(void);

PGDLLEXPORT Datum worker_spi_launch(PG_FUNCTION_ARGS);

PGDLLEXPORT void worker_spi_main(Datum);

2.   Two warnings about int usage. Ignored.

3.   Documentation is a bit light on, but copying the control file into the 
right folder makes the extension visible.

4.   Extension installs but does nothing. No log messages. No schema. No 
table.

The variable worker_spi.naptime appears in pg_settings but 
worker_spi.total_workers does not.

5.   Extension uninstalls without error, but does not release the DLL.

6.   Tried to restart server with pg_ctl but it does nothing. No error, no 
log message, nada. [Is this a known problem with Windows?]

7.   Restarted server using Task Manager. And so back to the beginning and 
try again.

 

The worker_spi.c code contains this sequence:

 

 DefineCustomIntVariable("worker_spi.naptime",
 "Duration between each check (in seconds).",



 if (!process_shared_preload_libraries_in_progress) 

   return;

 DefineCustomIntVariable("worker_spi.total_workers",

 "Number of workers.",

 

From this I deduce that process_shared_preload_libraries_in_progress was
false. So this appears to be an extension that must be preloaded. That does
not seem to be documented anywhere.

 

It would be helpful to get some suggestions about what changes would be needed 
to allow it to be loaded on demand.

 

Regards

David M Bennett FACS

Andl - A New Database Language - andl.org



Re: [GENERAL] Problems running the WorkerSpi sample

2016-04-26 Thread david
Solved. The sample can indeed be loaded at startup (although it emits some 
strange LOG messages).

 

But to load it dynamically requires this SQL:

 

CREATE OR REPLACE FUNCTION worker_spi_launch(i INT) RETURNS INT

AS '' LANGUAGE C;

SELECT worker_spi_launch(1);

 

It would be helpful to add this to the documentation. From this point on, it 
looks pretty straightforward, but getting here was not easy.

 

I’m still wondering how to do a clean restart of the server without pg_ctl. 
Restarting the service is a bit brutal.

 

Regards

David M Bennett FACS

Andl - A New Database Language - andl.org




Re: [GENERAL] intermittent issue with windows 7 service manager not able to correctly determine or control postgresql 9.4

2016-04-30 Thread david
I have the same problem routinely on Windows 10.

 

The postgresql-x64-9.5 service shows up in Task Manager as Stopped, but is 
actually running just fine.

 

BTW pg_ctl does nothing – silently. The only way to restart the server is to 
kill off a process or two.

 

Regards

David M Bennett FACS

Andl - A New Database Language - andl.org

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Hodder
Sent: Sunday, 1 May 2016 12:36 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] intermittent issue with windows 7 service manager not able 
to correctly determine or control postgresql 9.4

 

Hi All,

I've got several machines running windows 7 which have postgresql 9.4 installed 
as a service, and configured to start automatically on boot. I am monitoring 
these services with zabbix and several times a week I get a notification that 
the postgresql-x64-9.4 service has stopped.

When I login to the machine, the service does appear to be stopped;



However when I check the database, I can query it ok;

C:\Program Files\PostgreSQL\9.4>bin\psql.exe -U postgres -c "SELECT count(*) 
from media;" association
Password for user postgres:
  count
-
 1167846
(1 row)



If I try to start the service from the service manager, I see the following 
error in the logs;

2016-04-30 05:03:13 BST FATAL:  lock file "postmaster.pid" already exists
2016-04-30 05:03:13 BST HINT:  Is another postmaster (PID 2556) running in data 
directory "C:/Program Files/PostgreSQL/9.4/data"?

The pg_ctl tool seems to correctly query the state of the service and return 
the correct PID;

C:\Program Files\PostgreSQL\9.4>bin\pg_ctl.exe -D "C:\Program 
Files\PostgreSQL\9.4\data" status
pg_ctl: server is running (PID: 2556)
The other thing that seems to happen is the pgadmin3 tool seems to have lost 
the ability to control the service as all the options for start/stop are greyed 
out;



The only option to get the control back is to kill the processes in the task 
manager or reboot the machine.
Any suggestions on what might be causing this?
Thanks,
Tom






Re: [GENERAL] pgpool II question

2007-09-12 Thread David
On 9/12/07, sharmi Joe <[EMAIL PROTECTED]> wrote:
> Hi,
> Im a newbie to postgres. Can you explain what pgpool is ?
> Thanks
>
>

http://pgpool.projects.postgresql.org/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] General Q's

2007-12-12 Thread David
Hello

 

I have some questions regarding pgsql and how it may apply to other open
source software.  Could you advise which forum is best suited to answer
these questions.

 

Background

I am undertaking a development which typically relies on the type of
functionality that can be found at Joomla (www.joomla.com) and Drupal
(www.drupel.com) - account management, user content development,
social networking, etc.  I want to have it developed using PostgreSQL and NOT
MySQL due to the inherent scalability of PostgreSQL.

 

Both Joomla / Drupal are making their databases work with PostgreSQL using an
abstraction layer.

 

Questions - re MySQL / PostgreSQL / Joomla / Drupal

Could you advise on the following?

*   Is there a project to create MySQL compatibility for PostgreSQL? I
am not technical so you will need to explain as if I am a 3-year-old!
*   What other Open Source solutions, supporting functionality similar to
Joomla and Drupal, work with PostgreSQL?

 

 

David Brown

 

   

 

Eclipse Consulting

M (UK)   +447939544481

F  (UK)   +442071932590

E: [EMAIL PROTECTED]

 

 

 

 



[GENERAL] Insert Rewrite rules

2008-04-20 Thread David
I am currently attempting to migrate one of our customers databases to
partitioned tables.  This database is used to store firewall logs and
is currently in the range of 600GB (thats 90 days worth).

I am having problems with the rewrite rules, though: it seems to be skipping over
any rule that has a WHERE clause in it, i.e.

CREATE OR REPLACE RULE firewall_test AS ON INSERT TO firewall DO INSTEAD INSERT 
INTO firewall_y2008m04d21 VALUES(NEW."time");
INSERT INTO firewall (time) VALUES('2008-04-21 12:00:00');
INSERT 1029459 1

works but

CREATE OR REPLACE RULE firewall_test AS ON INSERT TO firewall WHERE TRUE DO 
INSTEAD INSERT INTO firewall_y2008m04d21 VALUES(NEW."time");
INSERT INTO firewall (time) VALUES('2008-04-21 12:00:00');
ERROR:  No inserting to firewall please

doesn't. I have placed a trigger on the table to prevent anything from 
inserting into the top level table hence the error.

an example of the full rule we are trying to use that doesn't work is

firewall_y2008m04d21_insert AS
ON INSERT TO firewall
   WHERE new."time" >= '2008-04-21 00:00:00'::timestamp without time zone AND 
new."time" < '2008-04-22 00:00:00'::timestamp without time zone DO INSTEAD  
INSERT INTO firewall_y2008m04d21 ("time", firewall, pri, sent, rcvd, lsent, 
lrcvd, duration, src, dst, arg, msg, ref, dstname, srcname, agent, server, 
srcclass, dstclass, rule, username, proto, op, result, vpn, type, cache, 
direction, content, fwdfor, coninfo, tcpflags, method, action, policy, service, 
engine, state, fwid, block, authprofile, summarised, realm, clientmac, account, 
count, interface) 
  VALUES (new."time", new.firewall, new.pri, new.sent, new.rcvd, new.lsent, 
new.lrcvd, new.duration, new.src, new.dst, new.arg, new.msg, new.ref, 
new.dstname, new.srcname, new.agent, new.server, new.srcclass, new.dstclass, 
new.rule, new.username, new.proto, new.op, new.result, new.vpn, new.type, 
new.cache, new.direction, new.content, new.fwdfor, new.coninfo, new.tcpflags, 
new.method, new.action, new.policy, new.service, new.engine, new.state, 
new.fwid, new.block, new.authprofile, new.summarised, new.realm, new.clientmac, 
new.account, new.count, new.interface)

There is one of these for each day with only the times changing.

Am I missing something or is this just broken?




Re: [GENERAL] Insert Rewrite rules

2008-04-21 Thread David
I dropped the trigger again and now it works... but it is really slow --
it was maxing out the CPU doing 26 rows/s. Not that this is a big
problem, because the firewall code won't be relying on the rewrite rules
to put the data in the correct spot; it will just insert it into the
correct table, and I have modified my data loader to do the same to get
the 600GB of data back in in partitioned tables.

And as far as I know it was a row-level trigger, but I could be wrong.
On Mon, Apr 21, 2008 at 05:32:29PM +1000, Klint Gore wrote:
> David wrote:
> >I am having problems with the rewrite rules though it seems to be skipping 
> >over any rule that has a where statement in it ie
> >
> >CREATE OR REPLACE RULE firewall_test AS ON INSERT TO firewall DO INSTEAD 
> >INSERT INTO firewall_y2008m04d21 VALUES(NEW."time");
> >INSERT INTO firewall (time) VALUES('2008-04-21 12:00:00');
> >INSERT 1029459 1
> >
> >works but
> >
> >CREATE OR REPLACE RULE firewall_test AS ON INSERT TO firewall WHERE TRUE 
> >DO INSTEAD INSERT INTO firewall_y2008m04d21 VALUES(NEW."time");
> >INSERT INTO firewall (time) VALUES('2008-04-21 12:00:00');
> >ERROR:  No inserting to firewall please
> >
> >doesn't. I have placed a trigger on the table to prevent anything from 
> >inserting into the top level table hence the error.
> >
> >an example of the full rule we are trying to use that doesn't work is
> >
> >firewall_y2008m04d21_insert AS
> >ON INSERT TO firewall
> >   WHERE new."time" >= '2008-04-21 00:00:00'::timestamp without time zone 
> >   AND new."time" < '2008-04-22 00:00:00'::timestamp without time zone DO 
> >   INSTEAD  INSERT INTO firewall_y2008m04d21 ("time", firewall, pri, sent, 
> >   rcvd, lsent, lrcvd, duration, src, dst, arg, msg, ref, dstname, 
> >   srcname, agent, server, srcclass, dstclass, rule, username, proto, op, 
> >   result, vpn, type, cache, direction, content, fwdfor, coninfo, 
> >   tcpflags, method, action, policy, service, engine, state, fwid, block, 
> >   authprofile, summarised, realm, clientmac, account, count, interface) 
> >  VALUES (new."time", new.firewall, new.pri, new.sent, new.rcvd, 
> >  new.lsent, new.lrcvd, new.duration, new.src, new.dst, new.arg, new.msg, 
> >  new.ref, new.dstname, new.srcname, new.agent, new.server, new.srcclass, 
> >  new.dstclass, new.rule, new.username, new.proto, new.op, new.result, 
> >  new.vpn, new.type, new.cache, new.direction, new.content, new.fwdfor, 
> >  new.coninfo, new.tcpflags, new.method, new.action, new.policy, 
> >  new.service, new.engine, new.state, new.fwid, new.block, 
> >  new.authprofile, new.summarised, new.realm, new.clientmac, new.account, 
> >  new.count, new.interface)
> >
> >There is one of these for each day with only the times changing.
> >
> >Am I missing something or is this just broken?
> >  
> Have you got a statement trigger or a row trigger?
> 
> klint.
> 
> -- 
> Klint Gore
> Database Manager
> Sheep CRC
> A.G.B.U.
> University of New England
> Armidale NSW 2350
> 
> Ph: 02 6773 3789  
> Fax: 02 6773 3266
> EMail: [EMAIL PROTECTED]
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 



[GENERAL] Database design questions

2008-06-18 Thread David
One of my previous projects was to
reliably migrate data from one database to another, where the 2
databases had problems like:

- Many foreign keys weren't enforced

- Some fields needed special treatment (eg: should be unique, or
behave like a foreign key ref, even if db schema doesn't specify it.
In other cases they need to be updated during the migration).

- Most auto-incrementing primary keys (and related foreign key
references) needed to be updated during migration, because they are
already used in the destination database for other records.

- Many tables are undocumented, some fields have an unknown purpose

- Some tables didn't have fields that can be used as a 'natural' key
for the purpose of migration (eg: tables which only exist to link
together other tables, or tables where there are duplicate records).

I wrote a Python script (using SQLAlchemy and Elixir) to do the above
for our databases.

Are there any existing migration tools which could have helped with
the above? (it would have required a *lot* of user help).

Are there recommended ways of designing tables so that synchronization
is easier?

The main thing I've read about is ensuring that all records have a
natural key of some kind, eg GUID. Also, your migration app needs to
have rules for conflict resolution.
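The auto-incrementing-key problem described above reduces to building an old-id-to-new-id map per table and rewriting foreign-key references through it; a simplified sketch (all table and column names invented):

```python
# Rows to migrate: their primary keys may collide with the destination.
src_users = [{"id": 1, "name": "ann"}, {"id": 2, "name": "ben"}]
src_orders = [{"id": 1, "user_id": 1}, {"id": 2, "user_id": 2}]

dest_next_user_id = 100  # pretend the destination's sequence is here

# Pass 1: assign fresh keys and remember the old-to-new mapping.
user_id_map = {}
for row in src_users:
    user_id_map[row["id"]] = dest_next_user_id
    row["id"] = dest_next_user_id
    dest_next_user_id += 1

# Pass 2: rewrite foreign-key references through the map.
for row in src_orders:
    row["user_id"] = user_id_map[row["user_id"]]

assert src_users[0]["id"] == 100
assert src_orders[1]["user_id"] == 101
```

The awkward cases in the list above (unenforced or implicit foreign keys, tables with no natural key) are exactly the ones where building `user_id_map` needs manual help.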

==

Thanks in advance for any suggestions :-)

David.



Re: [GENERAL] Controlling write access to a table

2008-06-18 Thread David
On Wed, Jun 18, 2008 at 11:04 AM, Dave Coventry <[EMAIL PROTECTED]> wrote:

[...]

> The marks (or grades) of the students are a different matter and we
> want to restrict changes to this data to a very few people.
>

How about setting up separate database users/groups (aka roles in
newer postgresql versions), and only granting update permissions to
the users/groups who should have it?

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Controlling write access to a table

2008-06-18 Thread David
Hi Dave.

Did you intentionally mail me off-list? On-list is generally better so
other people can give suggestions.

On Wed, Jun 18, 2008 at 12:04 PM, Dave Coventry <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 18, 2008 at 11:33 AM, David <[EMAIL PROTECTED]> wrote:
>> How about setting up separate database users/groups (aka roles in
>> newer postgresql versions), and only granting update permissions to
>> the users/groups who should have it?
>>
>> David.
>
> Thanks, David.
>
> So would you advise a separate database?

Only if you need separate databases for a good reason (simple
permissions isn't a good reason). Splitting into other databases will
make existing apps more complicated (they need to connect to 2
databases instead of 1, etc).

>
> I have just found a reference work that suggests using views:
> http://www.archonet.com/pgdocs/chap-access.html
>

Views are good, if you want more fine-grained control over what data
users can view in the database.

Your request was for a way to limit what users can update. I assume
that in your case, if users are allowed to SELECT from a table, that
they are meant to be able to view all columns. If not, then views may
be a good idea.

> My understanding is that Postgres doesn't support pulling in data from
> multiple databases.

Not directly, but there are automated data replication systems for
PostgreSQL if you need them. I don't think this is applicable in your
case.

>
> I'm a noob, so I could easily be wrong...
>

I assume that all users are currently logging in as the admin database
user, 'postgres'? I also assume that you don't have full control
(ability to update source code, etc.) over the software which users use
to manipulate the database. So students could update marks if you
don't set up the PostgreSQL permissions correctly.

You should do something like this:

1) Create new users, eg:

CREATE USER student WITH PASSWORD '';
CREATE USER teacher WITH PASSWORD '';

These users are by default locked down, and can't SELECT, UPDATE, etc
on any tables.

2) Grant permissions, eg:

For tables where all users are allowed to do anything:

GRANT ALL ON sandbox TO teacher, student;

For tables where teachers can do everything, but where students can select:

GRANT ALL ON marks TO teacher;
GRANT SELECT ON marks to student;

Whether students should be able to view marks of other students is a
separate issue.

3) Update pg_hba.conf so the new users can login over the network. eg,
add lines like this:

host  your_db_name  student  192.168.0.0  255.255.255.0  md5
host  your_db_name  teacher  192.168.0.0  255.255.255.0  md5

(using an example subnet of 192.168.0.0 with netmask 255.255.255.0)

If you know that students and teachers will be connecting from
different IP ranges, then you should update the networking details
appropriately.

4) Clients use the new accounts instead of postgres.

eg: Update frontend configuration

eg: Tell users what their new logins are.

5) Lock down the postgresql account

eg: Edit your pg_hba.conf file, and make sure you have a line like this:

# Database administrative login by UNIX sockets
local   all postgres  ident sameuser

The above line means that to login as the postgres admin user, you
need to first login to the server (eg: with Putty), then change to the
postgres system user (eg: 'su postgres' under Linux) before you can
login to the database as postgres user (eg: 'psql your_database').

Also make sure that all network logins for postgres user are commented
out. Or, if you still need to login to the database over the network
then change the postgres password.

6) Restart the postgresql service, and make sure that everything still
works for the users

Also make sure that users can no longer login as the postgres user.

You will probably need to tweak some configuration and run a few more
GRANT lines. See your postgresql log files for more info.

I haven't tested the above steps, but they should work.

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design questions

2008-06-18 Thread David
On Wed, Jun 18, 2008 at 12:25 PM, Jorge Godoy <[EMAIL PROTECTED]> wrote:
> On Wednesday 18 June 2008 05:43:25 David wrote:
>> * Should I split this into separate threads instead of 1 thread for
>> all my questions?
>
> I would submit all of the questions in separate messages.  It is tiresome to
> read everything, you'll lose a lot of context after one or two levels of
> reply, and people won't read the email because of its size.
>

Thanks for your reply.

Should I post a 'please disregard this thread' message to this thread,
and start some new threads instead?

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design questions

2008-06-18 Thread David
Hi list.

I'm closing this thread, and will re-post as separate questions.

I agree with Jorge that smaller mails will be easier to read.

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database design: Storing app defaults

2008-06-18 Thread David
Hi list.

If you have a table like this:

table1
 - id
 - field1
 - field2
 - field3

table2
 - id
 - table1_id
 - field1
 - field2
 - field3

table1 & table2 are set up as 1-to-many.

If I want to start providing user-customizable defaults in the
database (ie, we don't want apps to update the database schema), is it
acceptable database design to add a table2 record with a NULL table1_id
field?

In other words, if table1 has no matching table2 record, then the app
will use the table2 record with a NULL table1_id field to get
defaults.

This looks messy however. Is there a better way to do it?

A few other ways I can think of:

1) Have an extra table1 record (with string fields containing
'DEFAULT'), against which the extra table2 record is linked.

2) Have a new table, just for defaults, like this:

table2_defaults
 - field1
 - field2
 - field3

Which is the cleanest way? Is there another method I should use instead?
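For what it's worth, a minimal sketch of the NULL-table1_id approach, using SQLite from Python purely for illustration (table2 is trimmed to a single field; the same SELECT works in PostgreSQL):

```python
import sqlite3

# Sketch of the "NULL table1_id means this is the defaults record" design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE table2 (
    id INTEGER PRIMARY KEY,
    table1_id INTEGER REFERENCES table1(id),  -- NULL = the defaults record
    field1 TEXT
);
INSERT INTO table1 (id, name) VALUES (1, 'has overrides'), (2, 'uses defaults');
INSERT INTO table2 (table1_id, field1) VALUES (NULL, 'default value');
INSERT INTO table2 (table1_id, field1) VALUES (1, 'custom value');
""")

def settings_for(table1_id):
    # Prefer the row matching table1_id; fall back to the NULL defaults row.
    row = conn.execute("""
        SELECT field1 FROM table2
        WHERE table1_id = ? OR table1_id IS NULL
        ORDER BY table1_id IS NULL   -- matching row (0) sorts before defaults (1)
        LIMIT 1
    """, (table1_id,)).fetchone()
    return row[0]

print(settings_for(1))  # custom value
print(settings_for(2))  # default value
```

The ORDER BY ... LIMIT 1 makes a matching override row win over the NULL "defaults" row in a single query.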

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database design: Backwards-compatible field addition

2008-06-18 Thread David
Hi list.

If you have an existing table, and apps which use it, then how do you
add new fields to the table (for new apps), but which might affect
existing apps negatively?

eg: I start with a table like this:

table1
 - id
 - field1
 - field2
 - field3

Later, I want to add a use case, where there is new behaviour, if a
new field is set in the table, like this:

table1
 - id
 - field1
 - field2
 - field3
 - field4 - NEW - if unset, do old behaviour. if set, do something else

The problem is, that existing apps (besides your new app) won't know
about field4, so they will keep using the old behaviour for new
records (where field4 is set), which you don't want.

The most obvious thing to do is to update all apps using table1, so
they also check the value of field4.

Is there another, more backwards-compatible way to add field4 for the
new behaviour, without having to update all the apps?

A few things I can think of:

1) table1 becomes a view of an updated table, with a 'WHERE field4 IS
NULL' clause.

Problem with this is that some RDBMS (Postgresql specifically) don't
let you run update statements on views.

2) Apps use stored procedures for all database access.

Maybe ok for new apps, not so much for existing apps which use regular SQL.

3) All apps use the same library for accessing database

Then you update the library and all apps automagically know about the
extra field. Again, maybe ok for new apps, not so much for existing
apps.

4) Make a new table (copy of the old one), with the extra field.

Then your app checks both tables, or just the new one if applicable.

This can work, but you may end up with a lot of app-specific tables,
where the main difference between the tables is extra columns, and
which apps use the tables.

5) Have a 'db version' column in the table. Older apps only operate on
records at or before the version the programmer knew about at the
time.

This can work, but it seems like a very non-standard, hackish way of
designing database tables. Also it's a pain for all apps to have to
hardcode a db version number.

6) Find a clever way to use table inheritance

I haven't thought it through, but here are some docs I've read on the subject:

http://www.postgresql.org/docs/8.1/static/ddl-inherit.html

Any other ideas?

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database design: Temporal databases

2008-06-18 Thread David
Hi list.

Some background information on the subject:

http://en.wikipedia.org/wiki/Temporal_database

I haven't used them before, but I like the idea of never
deleting/updating records so you have a complete history (a bit like
source code version control).

How well do temporal databases work? Do RDBMS (ie Postgresql) need
add-ons to make it effective, or can you just add extra temporal
columns to all your tables and add them to your app queries? Does this
increase app complexity and increase server load a lot?

Are there Python libraries which simplify this? (eg: add-ons for
Elixir or SQLAlchemy).

Or should apps all implement their own 'temporal data access' module,
which transparently uses the current date & time until queried for
historical data?

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database design: Data synchronization

2008-06-18 Thread David
Hi list.

2 cases I'm interested in:

1) Migrating data from one database to another

2) Distributing data over many databases, and later merging

In what ways can you design tables to easier facilitate the above cases?

I am aware of multi-master replication software, as described here:

http://en.wikipedia.org/wiki/Multi-master_replication

For this question, I'm more interested in schema design, so that a
home-brewed database synchronization can perform synchronization.

I have some experience with this. One of my previous projects was to
reliably migrate data from one database to another, where the 2
databases had problems like:

- Many foreign keys weren't enforced

- Some fields needed special treatment (eg: should be unique, or
behave like a foreign key ref, even if db schema doesn't specify it.
In other cases they need to be updated during the migration).

- Most auto-incrementing primary keys (and related foreign key
references) needed to be updated during migration, because they are
already used in the destination database for other records.

- Many tables are undocumented, some fields have an unknown purpose

- Some tables didn't have fields that can be used as a 'natural' key
for the purpose of migration (eg: tables which only exist to link
together other tables, or tables where there are duplicate records).

I wrote a Python script (using SQLAlchemy and Elixir) to do the above
for our databases.

Are there any existing migration tools which could have helped with
the above? (it would have required a *lot* of user help).

Are there recommended ways of designing tables so that synchronization
is easier?

The main thing I've read about is ensuring that all records have a
natural key of some kind, eg GUID. Also, your migration app needs to
have rules for conflict resolution.
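A toy sketch of the GUID idea: each row carries a stable GUID natural key alongside its local auto-increment id, so a merge can match rows on guid and never collide on ids. SQLite from Python for illustration; INSERT OR IGNORE is the SQLite spelling (PostgreSQL would use ON CONFLICT DO NOTHING or a WHERE NOT EXISTS check):

```python
import sqlite3
import uuid

def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE person (
        id INTEGER PRIMARY KEY AUTOINCREMENT,  -- local surrogate key
        guid TEXT UNIQUE NOT NULL,             -- stable natural key
        name TEXT)""")
    return conn

src, dst = make_db(), make_db()
src.execute("INSERT INTO person (guid, name) VALUES (?, ?)",
            (str(uuid.uuid4()), "Alice"))
dst.execute("INSERT INTO person (guid, name) VALUES (?, ?)",
            (str(uuid.uuid4()), "Bob"))

# Merge: copy any row whose guid the destination has not seen yet.
# Local ids may diverge between databases, but guids stay stable.
for guid, name in src.execute("SELECT guid, name FROM person"):
    dst.execute("INSERT OR IGNORE INTO person (guid, name) VALUES (?, ?)",
                (guid, name))

print(dst.execute("SELECT COUNT(*) FROM person").fetchone()[0])  # 2
```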

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design: Storing app defaults

2008-06-18 Thread David
On Wed, Jun 18, 2008 at 2:20 PM, Karsten Hilbert
<[EMAIL PROTECTED]> wrote:
> On Wed, Jun 18, 2008 at 02:03:05PM +0200, David wrote:
>
>> If I want to start providing user-customizable defaults to the
>> database (ie, we don't want apps to update database schema), is it ok
>> database design to add a table2 record, with a NULL table1_id field?
>>
>> In other words, if table1 has no matching table2 record, then the app
>> will use the table2 record with a NULL table1_id field to get
>> defaults.
> Or have a trigger on table1 transparently fetching defaults
> when necessary.
>

I'm a n00b when it comes to triggers. Can you point me to an example of this?

I'm looking for this kind of logic (I should have clarified this in my
original post):

1) App writes some default settings (to table2 or wherever)

2) App writes a record to table1 (but no corresponding table2 record)

3) App later reads table1 record, and automatically gets default values

4) App writes updated defaults to table2

5) App later reads table1 record, and automatically gets updated default values.

6) App writes a table2 record for table1

7) App later reads table1 record, and gets (non-default) values from table2

Usually I do this by doing a SELECT join between table1 & table2,
where the foreign key matches, or table2.table1_id is NULL (or some
other condition to pull in the default value record).

This has worked for me, but I'm wondering if there is a tidier method
for defaults. Abusing NULL values in table2.table1_id to mean 'this is
a default values record' seems untidy :-)

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design: Storing app defaults

2008-06-19 Thread David
On Wed, Jun 18, 2008 at 3:24 PM, Jonathan Bond-Caron <[EMAIL PROTECTED]> wrote:
> Application defaults go in the application code not in the database (my
> opinion).

That's fine, until you want the defaults to be customizable without
releasing a new app version. That's what my question is about :-)

>
> If you wants user, group, whatever customizable defaults, they belong in the
> database schema i.e. table user_prefs or role_prefs
>

These settings (or more precisely, app-customizable default values)
aren't user or group-specific. Also, I don't want to give apps
permission to update table schema unnecessarily.

A more concrete (toy) example to help clarify what I mean. If this
example doesn't work so well then I'll post another one :-)

employee
 - id
 - name
 - job_id
 - salary (if NULL, then use defaults for the job)
 - benefits_id (if NULL, then use defaults for the job)

job
 - id
 - description
 - default_salary
 - default_benefits_id

benefits
 - id
 - benefit_description

One of the (many) dubious things with the above schema is that a NULL
employee.salary or employee.benefits_id means that apps should use a
default from somewhere else (but this is not immediately obvious from
the schema alone). So I would probably use a COALESCE and sub-query to
get the salary or benefits in one query.

This isn't exactly the same as my original post (where a 2nd table
uses NULLs to mean 'this is a default record'), but the principle is
similar.
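A sketch of the COALESCE idea just described (written with a plain join rather than a sub-query, since each employee has exactly one job); SQLite from Python for illustration, with the schema following the toy example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE job (
    id INTEGER PRIMARY KEY,
    description TEXT,
    default_salary INTEGER
);
CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    name TEXT,
    job_id INTEGER REFERENCES job(id),
    salary INTEGER          -- NULL means "use the job's default"
);
INSERT INTO job VALUES (1, 'Developer', 50000);
INSERT INTO employee VALUES (1, 'Alice', 1, 60000);  -- explicit salary
INSERT INTO employee VALUES (2, 'Bob',   1, NULL);   -- falls back to default
""")

# COALESCE picks the first non-NULL value: the employee's own salary
# if set, otherwise the job-wide default.
rows = conn.execute("""
    SELECT e.name, COALESCE(e.salary, j.default_salary) AS effective_salary
    FROM employee e JOIN job j ON j.id = e.job_id
    ORDER BY e.id
""").fetchall()
print(rows)  # [('Alice', 60000), ('Bob', 50000)]
```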

> For your question about "backwards compatible database", in most cases apps
> and databases schemas are upgraded at the same time.
> If you have a requirement that old & new apps have to work on the same
> database schema then don't make database schemas changes that will not be
> backwards compatible / break older apps.

That's the obvious answer :-) But what if you need a feature before
there is time to update all the apps? And how would you design your
tables if you were expecting this to be the norm?

>
> Put those changes on hold until both apps & databases can be upgraded. Some
> solutions which may help you java (hibernate) adds a version column to each
> table, rails adds a schema_info table with database version.
>

I haven't used those before. I've mainly worked with apps which use
SQL directly. More recently I've started working with SQLAlchemy and
Elixir in Python. Do those libraries you mention automatically ignore
records which have an unexpectedly high version number? (And what if
that isn't the correct thing to do in all cases?)

Could you provide links so I can read how those schemes work? (so that
I can look into borrowing their logic for my hand-coded tables &
application SQL).

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design: Storing app defaults

2008-06-19 Thread David
On Wed, Jun 18, 2008 at 9:30 PM, Shane Ambler <[EMAIL PROTECTED]> wrote:
> David wrote:
>>
>> Hi list.
>>
>> If you have a table like this:
>>
>> table1
>>  - id
>>  - field1
>>  - field2
>>  - field3
>>
>> table2
>>  - id
>>  - table1_id
>>  - field1
>>  - field2
>>  - field3
>>
>> table1 & table2 are setup as 1-to-many.
>>
>> If I want to start providing user-customizable defaults to the
>> database (ie, we don't want apps to update database schema), is it ok
>> database design to add a table2 record, with a NULL table1_id field?
>
> Yes - Foreign key constraints will ensure that a value in table1_id exists
> in table1 - it does allow null values unless you specify that column as NOT
> NULL or UNIQUE

My problem isn't that NULLs are allowed. My problem is that the
schema feels a bit unnatural/hackish if you use them in the way I
described. I'm looking for a cleaner, more elegant table schema.

>
>
>>
>> This looks messy however. Is there a better way to do it?
>>
> Sounds back to front to me. table1 would be defaults with table2 user
> defined overrides (I'd also add a user_id column)

That schema was a bit unnatural. See my previous mail in this thread
for a more realistic example.

>
>> A few other ways I can think of:
>>
>> 1) Have an extra table1 record (with string fields containing
>> 'DEFAULT'), against which the extra table2 record is linked.
>
> Create a view returning default values when the column is null?
>

This is possible, but there are a few problems (from my pov).

1) How do you make the views writable? (so you can update/delete/insert)

Another poster mentioned triggers, but I don't know how to use those.
Also, adding triggers increases the overall complexity. I'm looking
for a database & app logic/schema which is:

- As simple as possible
- Elegant
- Not hackish

See the Zen of Python for a better idea of what I mean:

http://www.python.org/dev/peps/pep-0020/

Are there any docs on the internet which give guidelines for good db design?

2) You may need to push a lot of logic from app logic (fetching
defaults from various tables depending on the situation) into your
view logic.

You can end up with a monster view, with complex supporting triggers &
stored procedures :-) I'd like to avoid that if possible.

>>
>> Which is the cleanest way? Is there another method I should use instead?
>>
>
> I would think that the app defines default behaviour which it uses if no
> values are stored in the db. The db only holds non-default options.
>

Sometimes your defaults need to be user-configurable. See my previous
post for more info.

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design: Backwards-compatible field addition

2008-06-19 Thread David
>> Problem with this is that some RDBMS (Postgresql specifically) don't
>> let you run update statements on views.
>
> Given 1) the view will be "fairly uncomplicated" and hence
> "fairly straightforward" ON INSERT/UPDATE/DELETE rule can
> likely be added to it allowing for an apparently writable
> view.
>

Thanks for the info.

I had the mistaken idea that rules and triggers were only for real
tables & not views (I've never used them before).

I would like to avoid using them if possible (extra complexity, and
don't want to rename tables & make new views each time I need to make
backwards-incompatible app updates), but they are there if I need
them.

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database design: Backwards-compatible field addition

2008-06-19 Thread David
Thanks for your reply.

On Wed, Jun 18, 2008 at 9:15 PM, Shane Ambler <[EMAIL PROTECTED]> wrote:
> David wrote:
>>
>> Hi list.
>>
>> If you have an existing table, and apps which use it, then how do you
>> add new fields to the table (for new apps), but which might affect
>> existing apps negatively?
>>
>
> If you know you are going to add a column then add it now and just not have
> your app do anything with any data there.
>

I don't have a problem with this case. The problem is when older apps
need to treat records differently (usually: ignore them) when they
have certain values (usually anything non-NULL) in the new fields.

Simple (toy) example. You have a table like this:

employee
 - id
 - name
 - ...etc..

You have a lot of software which uses this table.

Later, you need to add an 'employed' boolean field, to reflect whether
an employee is still working at the company.

Your new apps know the difference between employed and former
employees, but old apps all assume that all employees in the table are
currently employed, and will want to send them pay checks, emails,
etc.

Furthermore, assume that this kind of change happens fairly often.

Would you make more views & rules each time the requirements change?

Would you need to update all the apps each time too?

Or are there other methods (version columns, etc) which can reduce the
work required in cases like this?
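One hedged sketch of how option 1 from the earlier thread (rename the real table, give old apps a filtering view under the original name) would look here. SQLite from Python for illustration, and note that in PostgreSQL 8.x the view would also need ON INSERT/UPDATE/DELETE rules before old apps could write through it; all names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee_v2 (
    id INTEGER PRIMARY KEY,
    name TEXT,
    employed INTEGER DEFAULT 1    -- new field; 1 = still employed
);
-- Old apps keep querying "employee" and only ever see employed staff.
CREATE VIEW employee AS
    SELECT id, name FROM employee_v2 WHERE employed = 1;
INSERT INTO employee_v2 (name, employed) VALUES ('Alice', 1), ('Bob', 0);
""")

# An unmodified old app sees only currently-employed people.
print(conn.execute("SELECT name FROM employee").fetchall())  # [('Alice',)]
```

New apps query employee_v2 directly and get both rows; the view keeps the old behaviour without touching old code.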

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Inter-app communication via DB

2008-06-19 Thread David
Hi list.

One pattern I've used is for apps to communicate events to each other
through the database.

ie:

- App 1 sets a boolean field to True
- App 2 queries the field every 10s, sets the value to False, and does
something.

Is this reasonable, or should apps avoid this pattern?

I have seen the NOTIFY and LISTEN SQL statements. However:

1) App 2 might not be running at the time (eg: it's launched from
cron, or it was temporarily stopped), and the expectation is that App
2 will run the special logic when it is started.

2) App 2 is usually single-threaded, and needs to do other things in
its main thread besides wait for a DB notification.

I also know of RPC, but haven't used it before, and don't see a need
if you can use the above pattern.

I would use RPC (or unix signals if on the same host) if App 2 needed
to respond quickly, and I didn't want to hammer the DB & network with
constant polling.
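A minimal sketch of the flag-polling pattern described above, with the check-and-clear done in a single UPDATE so it stays atomic even with several pollers. SQLite from Python for illustration; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT PRIMARY KEY, pending INTEGER)")
conn.execute("INSERT INTO events VALUES ('rebuild_cache', 0)")

def signal(name):                      # App 1
    conn.execute("UPDATE events SET pending = 1 WHERE name = ?", (name,))
    conn.commit()

def claim(name):                       # App 2, called from its poll loop
    # UPDATE ... WHERE pending = 1 both tests and clears the flag in one
    # statement, so two pollers cannot both claim the same event.
    cur = conn.execute(
        "UPDATE events SET pending = 0 WHERE name = ? AND pending = 1", (name,))
    conn.commit()
    return cur.rowcount == 1           # True exactly once per signal

signal("rebuild_cache")
print(claim("rebuild_cache"))  # True  -> do the expensive work
print(claim("rebuild_cache"))  # False -> nothing to do this cycle
```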

Any comments?

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inter-app communication via DB

2008-06-19 Thread David
On Thu, Jun 19, 2008 at 11:25 AM, Karsten Hilbert
<[EMAIL PROTECTED]> wrote:
> On Thu, Jun 19, 2008 at 11:09:12AM +0200, David wrote:
[...]
>
>> One pattern I've used is for apps to communicate events to each other
>> through the database.
>
> Works nicely with LISTEN/NOTIFY. We use it a lot in GNUmed.
>
>> - App 1 sents a boolean value to True
>> - App 2 queries the field every 10s, sets the value to False, and does
>> something.
>>
>> Is this reasonable, or should apps avoid this pattern?
> Add more app instances and you'll have a lot of polling.
>

Good point.

>> I have seen the NOTIFY and LISTEN SQL statements. However:
>>
>> 1) App 2 might not be running at the time (eg: it's launched from
>> cron, or it was temporarily stopped), and the expectation is that App
>> 2 will run the special logic when it is started.
>
> That will happen anyway, no matter what the message
> transport is like. Apps will have to read state at startup
> anyway, no ?

Another good point.

I have a small problem with this. If app1 wants to tell app2 to
perform an expensive operation (which you don't want app2 to do each
time it starts up), in the original pattern it could just set a
boolean variable. Now it needs to both set a boolean variable (in case
app2 isn't running at the moment) and use NOTIFY (for when it is),
which seems a bit redundant.

>
>> 2) App 2 is usually single-threaded, and needs to do other things in
>> it's main thread besides wait for a DB notification.
> Well, threads in Python aren't *that* hard to get right. But
> you could also write a small listener demon which gets
> started on behalf of the local app instance which writes to
> a local watch file which is being polled by the local app
> instance. Takes the poll pressure off the database and
> avoids having to thread the app, too.

I don't mind threads in Python. It's mostly for C and C++ apps where I
don't want to add threading and an extra db connection to the code
just to be able to receive notifications from other apps.

For cases like that, your idea of a separate listener daemon will be
useful if there are a lot of instances that want to poll at the same
time :-)

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Dump and restore problem

2008-06-19 Thread David
On Wed, Jun 18, 2008 at 11:41 PM, Stuart Luppescu <[EMAIL PROTECTED]> wrote:

[...]

> pg_restore: [tar archiver] could not open TOC file for input: No such
> file or directory

It sounds like the tar file is no longer being created.

Try manually running the commands, and verify that the dump, restore,
and rsync still work correctly at each step.

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Dump and restore problem

2008-06-22 Thread David
On Sun, Jun 22, 2008 at 3:11 AM, Stuart Luppescu <[EMAIL PROTECTED]> wrote:
> On 木, 2008-06-19 at 11:57 +0200, David wrote:
>> > pg_restore: [tar archiver] could not open TOC file for input: No
>> such
>> > file or directory
>>
>> It sounds like the tar file is no longer being created.
>>
>> Try manually running the commands, and verify that the dump, restore,
>> and rsync still work correctly at each step.
>
> It looks like it's working:
> file postgresql_database-db-backup
> postgresql_database-db-backup: tar archive
>
> tar tfv postgresql_database-db-backup
> -rw--- 2048/1024 62764 2008-06-20 03:00 toc.dat
> -rw--- 2048/1024   4590795 2008-06-20 03:00 1765.dat
> -rw--- 2048/1024391040 2008-06-20 03:00 1771.dat
> -rw--- 2048/1024   1262932 2008-06-20 03:00 1770.dat
> -rw--- 2048/1024  4472 2008-06-20 03:00 1769.dat
> -rw--- 2048/1024 5 2008-06-20 03:00 1796.dat
> -rw--- 2048/1024 5 2008-06-20 03:00 1787.dat
> -rw--- 2048/1024  5456 2008-06-20 03:00 1762.dat
> -rw--- 2048/1024435981 2008-06-20 03:00 1767.dat
> -rw--- 2048/1024 54185 2008-06-20 03:00 1774.dat
> -rw--- 2048/1024   109 2008-06-20 03:00 1764.dat
>
> etc.
>
> The only weird thing is the ownership. Where did that
> 2048/1024 come from?
>

I'm not sure about that. But if your manual process is working, then
your script should work fine too. Try adding more logging and error
checking/capturing to your mirroring scripts to pinpoint where the
problem is occurring. Debug lines like 'echo "TESTING ($LINENO)"' that
trace which parts of the scripts get run, and when, can also help.

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] maintaining referential integrity

2009-06-05 Thread David
On Fri, Jun 5, 2009 at 6:27 PM, Brandon
Metcalf wrote:
> What would be the best way to maintain referential integrity in the
> following situation?   Let's say I have the following table
>
>  CREATE TABLE workorder (
>      workorder_id INTEGER  NOT NULL,
>      part_id      INTEGER  DEFAULT NULL,
>      generic      BOOLEAN  DEFAULT FALSE,
>
>      PRIMARY KEY (workorder_id)
>  );
>
> and another
>
>  CREATE TABLE generic (
>      generic_id   INTEGER NOT NULL,
>      workorder_id INTEGER,
>
>      PRIMARY KEY (generic_id),
>
>      FOREIGN KEY (workorder_id)
>          REFERENCES workorder
>          ON DELETE RESTRICT
>          ON UPDATE CASCADE
>  );
>
> This is straight forward.
>
> What if a generic_id can reference more than one workorder_id?  If I
> knew the upper limit on the number a generic_id could reference and
> that number was small, I suppose I could define workorder_id1,
> workorder_id2, etc and defined foreign keys for each.  However, I
> don't know this.
>

You probably want a third table, generic_workorder, that links tables
generic and work_order together in a many-to-many relationship.
Something like:

CREATE TABLE generic_workorder (
    generic_workorder_id SERIAL PRIMARY KEY,
    generic_id INTEGER NOT NULL REFERENCES generic(generic_id),
    workorder_id INTEGER NOT NULL REFERENCES workorder(workorder_id)
);

(I'm not sure if the above syntax is 100% correct), and then possibly
drop the generic.workorder_id column.

The new table, generic_workorder, will link generic and workorder
records together in a many-to-many relationship, and also enforce
referential integrity.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] When to use cascading deletes?

2009-06-11 Thread David
Hi there.

When is a good time to use cascading deletes?

Usually, for safety reasons, I prefer to not ever use cascading
deletes. But that can lead to some complex code that uses topological
sorts etc to manually delete records in the right order, when a
cascading delete is needed.

Ideally, I'd like postgresql to not do cascading deletes, *except*
when I tell it to, and the rest of the time fail when the user didn't
explicitly "opt in" for cascading deletes. When it comes to enabling
cascading deletes, I don't really like the idea that deleting or
updating a row from one table can have a possibly unexpected (to the
app programmer, using the database) chain reaction to other tables.

I don't know, maybe I have the wrong mindset, and cascading is
preferable (in terms of object model) in some cases? I'd like to read
more on this subject (general best practices for what types of
cascading are appropriate to use when).
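As a point of reference, foreign keys already behave as "no cascade unless I opt in": the default referential action (NO ACTION / RESTRICT) rejects a delete that would orphan child rows, and cascading only happens where the schema explicitly says ON DELETE CASCADE. A small sketch, using SQLite from Python for illustration (the schema is made up; PostgreSQL behaves the same way without the PRAGMA):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite needs this; PG is always on
conn.executescript("""
CREATE TABLE author (id INTEGER PRIMARY KEY);
CREATE TABLE book (            -- default (no cascade): delete is blocked
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES author(id));
CREATE TABLE draft (           -- explicit opt-in: children go with the parent
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES author(id) ON DELETE CASCADE);
INSERT INTO author VALUES (1);
INSERT INTO book VALUES (1, 1);
INSERT INTO draft VALUES (1, 1);
""")

try:
    conn.execute("DELETE FROM author WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("blocked:", e)       # book still references author 1

conn.execute("DELETE FROM book WHERE id = 1")
conn.execute("DELETE FROM author WHERE id = 1")  # now allowed; draft cascades
print(conn.execute("SELECT COUNT(*) FROM draft").fetchone()[0])  # 0
```

So "fail unless I explicitly opted in" is already the default; the design decision is only about which individual foreign keys deserve the CASCADE opt-in.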

Any tips?

Thanks,

David.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When to use cascading deletes?

2009-06-11 Thread David
Thanks for the tips, those make sense.

I was thinking through this some more after sending my mail, and came
to similar conclusions.

It would be nice though if this info was more available to people
doing research on the subject. Where did you pick up these ideas? At
least this thread should start turning up in Google at some point :-)

Also, probably some of my confusion comes from being used to
app-centric logic, where the app (and programmer/sysadmin/etc.) likes
to be in control of all the database updates, rather than having
additional deletes/updates/etc. take place behind the scenes. I guess
it's a kind of trade-off: handing off more logic to the db to simplify
db interaction, at the expense of the database sometimes doing
unexpected/unwanted things.

For instance, it's possible to make the db rules *too* clever. That
kind of thing can bite you later, when the db makes updates in the
background that you don't want for some reason. Then you need to work
around them, and you can't take those rules out permanently, since that
breaks other apps that depend on the db being intelligent. You need to
think really carefully about when to add db intelligence, which could
have unwanted consequences later...

For people like me, I think it helps to think of the data as living in
its own world inside the DB, with extra rules there that apps don't
always know or care about, where the users & apps need to be aware of
the conventions around each table. As opposed to the DB being a simple
data store for apps (with integrity checks). For similar reasons I've
avoided stored procedures and rules, preferring a simpler database
schema, but more complex logic to handle it in apps.

Another consideration, is revision controlling of the extra rules/etc.
My current approach is to have all db schema setup & updates etc in a
Python script during development, which is revision controlled (and
then later re-used for remote db installs/upgrades).

David.



[GENERAL] Arrays and LIKE

2009-08-08 Thread David
Done a bit of hunting and can't seem to find an answer as to whether this sort 
of thing is possible:

SELECT * FROM mail WHERE recipients ILIKE 'david%';

Where recipients is a VARCHAR(128)[]

The above doesn't work, but that's the sort of thing I want to do...
If this is possible, and can use an index as well, that would be wonderful...



Re: [GENERAL] Arrays and LIKE

2009-08-08 Thread David
Thanks, all. Normally I would have gone with a linked table, but since support 
for arrays has improved in pg lately I thought I would give them a go again. I 
guess they are still not ready for what I want.

I did think of another solution overnight, though, that still uses arrays but 
also a subtable: I add each address to another table with an id sequence and 
then store the sequence ids in the array. Then I can do 
the LIKE on the subtable, returning ids, and use the array of ids it returns to 
compare against the recipient ids in the array (this should be able to use a 
GIN index, since array overlap is listed on the page 
http://www.postgresql.org/docs/8.3/interactive/functions-array.html). This has 
the added bonus that I can store stats about each email address seen. 
(Haven't tried it yet; next on the list to do.)
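
Something like this is what I have in mind (a rough, untested sketch; the
names are invented):

```sql
-- Addresses get ids; mail stores an integer array of those ids.
CREATE TABLE address (
    id    serial PRIMARY KEY,
    email text UNIQUE NOT NULL
);

CREATE TABLE mail (
    msgid         text PRIMARY KEY,
    recipient_ids integer[] NOT NULL
);

-- A GIN index supports the && (array overlap) operator.
CREATE INDEX mail_recipients_idx ON mail USING gin (recipient_ids);

-- Find messages whose recipient ids overlap the matching addresses.
SELECT m.*
FROM mail m
WHERE m.recipient_ids && ARRAY(
    SELECT id FROM address WHERE email ILIKE 'david%'
);
```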

On Sat, Aug 08, 2009 at 02:10:18PM +0100, Sam Mason wrote:
> On Sat, Aug 08, 2009 at 05:04:29PM +0930, David wrote:
> > Done a bit of hunting and can't seem to find an answer as to if this
> > sort of thing is possible:
> > 
> > SELECT * FROM mail WHERE recipients ILIKE 'david%';
> > 
> > Where recipients is a VARCHAR(128)[]
> 
> It's a bit of a fiddle:
> 
>   CREATE FUNCTION flipilike(text,text)
>   RETURNS boolean IMMUTABLE LANGUAGE SQL
>   AS $$ SELECT $2 ILIKE $1; $$;
>   CREATE OPERATOR ~~~ (
> leftarg  = text,
> rightarg = text,
> procedure = flipilike
>   );
> 
> PG now understands:
> 
>   SELECT 'x%' ~~~ 'fred';
> 
> To be the same as:
> 
>   SELECT 'fred' ILIKE 'x%';
> 
> So you can solve your original problem as:
> 
>   SELECT * FROM mail WHERE 'david%' ~~~ ANY(recipients);
> 
> > The above doesn't work but thats the sort of thing I want to do...
> > If this is possible and can use an index as well that would be wonderful...
> 
> No idea about that, but I'd look to a GIN index to start with.  I think
> you really want to stop using arrays and do it "properly" with a
> relation:
> 
>   CREATE TABLE mailaddrs (
> msgid TEXT REFERENCES mail,
> ord INTEGER,
>   PRIMARY KEY (msgid, ord),
> type TEXT CHECK (type IN ('to','from','cc','bcc')),
> address TEXT
>   );
>   CREATE INDEX mailaddrs_address_idx ON mailaddrs (address);
> 
> then you can do:
> 
>   SELECT DISTINCT msgid
>   FROM mailaddrs
>   WHERE address ILIKE 'david%';
> 
> and it should do the right thing.  Not sure if you have this
> flexibility though.
> 
> -- 
>   Sam  http://samason.me.uk/
> 


Re: [GENERAL] 110,000,000 rows

2010-05-26 Thread david
At work I have one table with 32 million rows, not quite the size you
are talking about, but to give you an idea of the performance, the
following query returns 14,659 rows in 405ms:

SELECT * FROM farm.frame
WHERE process_start > '2010-05-26'; 

process_start is a timestamp without time zone column, and is covered by
an index. Rows are relatively evenly distributed over time, so the index
performs quite well. 

A between select also performs well:
SELECT * FROM farm.frame
WHERE process_start 
  BETWEEN '2010-05-26 08:00:00' 
AND '2010-05-26 09:00:00';

fetches 1,350 rows in 25ms. 

I also have a summary table that is maintained by triggers, which is a
bit of denormalization, but speeds up common reporting queries. 
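
For the curious, the summary table is maintained along these lines (a
simplified, illustrative sketch, not our actual schema):

```sql
-- Keep a per-day row count current via an insert trigger.
CREATE TABLE farm.frame_daily (
    day       date PRIMARY KEY,
    row_count bigint NOT NULL
);

CREATE OR REPLACE FUNCTION farm.frame_daily_trg() RETURNS trigger AS $$
BEGIN
    UPDATE farm.frame_daily
       SET row_count = row_count + 1
     WHERE day = NEW.process_start::date;
    IF NOT FOUND THEN
        -- Note: two concurrent first-inserts for a day can race here;
        -- real code needs to handle the resulting unique violation.
        INSERT INTO farm.frame_daily (day, row_count)
        VALUES (NEW.process_start::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER frame_daily
    AFTER INSERT ON farm.frame
    FOR EACH ROW EXECUTE PROCEDURE farm.frame_daily_trg();
```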

On 22:29 Wed 26 May , John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many  
> applications out there have 110,000,000 row tables?  I recently  
> multiplied 85,000 by 1,400 and said no way, José.
> 
> Thanks,
> 
> John Gage
> 


[GENERAL] start/stop error message

2008-11-19 Thread David
I am unable to solve the following problem.

When I start or stop PostgreSQL using pg_ctl (without the -D option) the
system works fine. No errors.  I have the PGDATA environment variable
set and it points to my cluster.

When I run the /etc/init.d/postgresql-8.2 script with either of "stop"
or "start" as parameters I get the following error:


[EMAIL PROTECTED]:/etc/init.d$ sudo /etc/init.d/postgresql-8.2 start
 * Starting PostgreSQL 8.2 database server
 * Error: specified cluster does not exist
 [fail]

and the same error when passing "stop"

The cluster clearly exists and is recognized by pg_ctl. As a result
PostgreSQL does not start with the system and must be started and
stopped manually.

I am running Ubuntu linux (very close to Debian).

There must be a configuration error but I cannot find it.
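
If it helps others searching the archives: as far as I can tell, the
Debian/Ubuntu wrapper scripts don't look at PGDATA at all; they only know
about clusters registered under /etc/postgresql/. A sketch of the tools
involved (version and cluster name here are just examples):

```
# The init script only manages clusters registered under /etc/postgresql/.
pg_lsclusters                     # list the clusters the wrapper knows about

# Clusters are normally registered by pg_createcluster, e.g.:
#   sudo pg_createcluster 8.2 main
# which writes /etc/postgresql/8.2/main/ and records the data directory.

# Start/stop a registered cluster through the wrapper:
sudo pg_ctlcluster 8.2 main start
sudo pg_ctlcluster 8.2 main stop
```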

David



[GENERAL] pgAdmin error

2008-11-21 Thread David
I am trying to use pgAdmin 1.8.4 to edit the pg_hba.conf file on a
PostgreSQL 8.3 database running on Ubuntu 8.10.  I get the following
error message:

An error has occurred:

ERROR: absolute path not allowed
CONTEXT: SQL function "pg_file_length" statement 1

then...

Backend Access Configuration Editor -
/etc/postgresql/8.3/main/pg_hba.conf on Kepler...

with a blank screen.

My configuration seems ok.  Any help?


from the pgadmin.log

2008-11-22 01:20:21 INFO   : Server name: kepler.fdu.edu (resolved to:
132.238.16.3)
2008-11-22 01:20:21 INFO   : Opening connection with connection string:
host='kepler.fdu.edu' hostaddr=132.238.16.3 dbname='postgres'
user='david' password='XX' port=5432 sslmode=require
2008-11-22 01:20:24 QUERY  : Set query (kepler.fdu.edu:5432): SET
DateStyle=ISO;SELECT oid, pg_encoding_to_char(encoding) AS encoding,
datlastsysoid
  FROM pg_database WHERE oid = 11511
2008-11-22 01:20:24 INFO   : Setting client_encoding to 'UNICODE'
2008-11-22 01:20:24 QUERY  : Set query (kepler.fdu.edu:5432): SELECT
proname, pronargs, proargtypes[0] AS arg0, proargtypes[1] AS arg1,
proargtypes[2] AS arg2
  FROM pg_proc
  JOIN pg_namespace n ON n.oid=pronamespace
 WHERE proname IN ('pg_tablespace_size', 'pg_file_read',
'pg_logfile_rotate', 'pg_postmaster_starttime', 'pg_terminate_backend',
'pg_reload_conf', 'pgstattuple', 'pgstatindex')
   AND nspname IN ('pg_catalog', 'public')
2008-11-22 01:20:24 QUERY  : Scalar query (kepler.fdu.edu:5432): SHOW
hba_file
2008-11-22 01:20:24 QUERY  : Query result:
/etc/postgresql/8.3/main/pg_hba.conf
2008-11-22 01:20:24 QUERY  : Scalar query (kepler.fdu.edu:5432): SELECT
pg_file_read('/etc/postgresql/8.3/main/pg_hba.conf', 0,
pg_file_length('/etc/postgresql/8.3/main/pg_hba.conf'))
2008-11-22 01:20:24 ERROR  : ERROR:  absolute path not allowed
CONTEXT:  SQL function "pg_file_length" statement 1




[GENERAL] Sequence Incrementing by 2 instead of 1

2009-04-27 Thread David
Our internal task database is doing something odd: a sequence is 
incrementing by 2 instead of 1, and I can't find any reason why. I have checked 
the sequence itself to see if it had somehow got set to increment by 2, but no. 
The table in question has a number of both before and after triggers that do 
various bits of business logic, as well as some rewrite rules to track field 
changes. Using 8.3.0. Can anyone think of a reason why?



Re: [GENERAL] Sequence Incrementing by 2 instead of 1

2009-04-27 Thread David
Looked for that; can't find anything, and there is no reason why someone would 
have added that.

On Tue, Apr 28, 2009 at 08:24:46AM +0200, A. Kretschmer wrote:
> In response to David :
> > Our internal task database is doing something odd in that the sequence is 
> > incrementing by 2 instead of 1 and I can't find any reason why I have 
> > checked the 
> > sequence itself to see if it had somehow got set to increment by 2 but no. 
> > The table in question has a number of both before and after triggers that 
> > do 
> > various bits of business logic as well as some rewrite rules to track field 
> > changes. Using 8.3.0. Can anyone think of a reason why?
> 
> Wild guess: there are an other call to nextval() in one of the triggers.
> But without the code it is hard to guess...
> 
> 
> Regards, Andreas
> -- 
> Andreas Kretschmer
> Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
> GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net
> 


Re: [GENERAL] Sequence Incrementing by 2 instead of 1

2009-04-28 Thread David
OK, got permission to send the code. I think I have got all the relevant bits.

On Tue, Apr 28, 2009 at 01:01:39AM -0600, Scott Marlowe wrote:
> On Tue, Apr 28, 2009 at 12:38 AM, David  wrote:
> > Looked for that can't find anything and there is no reason why someone 
> > would have added that.
> 
> Well, unless you've got some example code for us to look at, it's
> kinda hard to tell you what's wrong.
> 
> Often, you see people do something like this:
> 
> insert into autoincrementingtable values (DEFAULT, 'a', 500);
> select nextval('autoinctable_seq');
> yada yada yada
> 
> or something like that.  i.e. there's an implicit call to the nextval
> with the insert, and another explicit one somewhere in your code.
> 

-- 
SET client_encoding = 'SQL_ASCII';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = true;
CREATE TABLE currentcall (
cadc_taskno integer DEFAULT 
nextval(('currentcall_cadc_taskno_seq'::text)::regclass) NOT NULL,
coversight character varying(4),
cprojman character varying(4),
cbarcode integer,
clink integer,
cdatemod date,
cdatein date DEFAULT date(now()) NOT NULL,
ctimein smallint DEFAULT ((date_part('hour'::text, now()) * (100)::double 
precision) + date_part('minute'::text, now())),
cloggedby character varying(4),
clogging_id character varying(4) NOT NULL,
clogging_id2 character varying(5) NOT NULL,
ccustname character varying(6) NOT NULL,
cbilling_id character varying(6) NOT NULL,
csubcont_to character varying(6),
ccustomer character varying(44),
caddress character varying(44),
csuburb character varying(20),
cfloor character varying(2),
cequipmod character varying(30) NOT NULL,
cextras character varying(40),
cserial character varying(20) NOT NULL,
cfprior smallint,
cfrespond_time smallint NOT NULL,
cjobtype character varying(1) NOT NULL,
cjob_loc character varying(1) NOT NULL,
cspecial character varying(1),
cexinclusions character varying(1),
cloanunit character varying(1),
clock character varying(1) NOT NULL,
cpercent double precision,
ccallback character varying(1),
cname2 character varying(22),
cphone2 character varying(14),
cfault character varying(120),
cfcode character varying(2),
ccustcomment character varying(64),
cdatealloc date,
ctimemod smallint,
ctechalloc character varying(4),
callocby character varying(4),
cdateclear date,
ctimeclear smallint,
cfollowup character varying(1),
cfollupdate date,
cfollowwho character varying(4),
cstatus character varying(1) NOT NULL,
ctimeoutst integer DEFAULT 0,
cresptime numeric(10,5),
ctaxexempt character varying(1),
ctaxnumber character varying(12),
ccustorder character varying(20),
cequipcode character varying(8),
csubchrg numeric(16,4),
cneg_cost numeric(16,4),
ctimevalue numeric(16,4),
cpartvalue numeric(16,4),
ctax_total numeric(16,4),
cfreight numeric(16,4),
cmileage numeric(16,4),
ctotvalue numeric(16,4),
csubcost numeric(16,4),
cfixcost numeric(16,4),
ctimecost numeric(16,4),
cpartcost numeric(16,4),
ctaxcost numeric(16,4),
cmilecost numeric(16,4),
cfrtcost numeric(16,4),
ctotcost numeric(16,4),
cfixcode character varying(4),
cest_complete smallint,
cpmdone character varying(1),
clastinit character varying(3),
cloanser character varying(15),
cloanused character varying(1),
ctot_time numeric(6,1),
cchrg_time numeric(6,1),
cext_num character varying(4),
cequip_type character varying(1),
cmrate numeric(16,4),
carate numeric(16,4),
cbrate numeric(16,4),
ccrate numeric(16,4),
cdrate numeric(16,4),
ctrate numeric(16,4),
ctmacost numeric(16,4),
ctmbcost numeric(16,4),
ctmccost numeric(16,4),
ctmtcost numeric(16,4),
cmlcost numeric(16,4),
cminhrsa double precision,
cminhrsb double precision,
cminhrsc double precision,
cfax_no character varying(30),
cfax_stat character varying(1),
cucredit character varying(1),
cuclass character varying(1) NOT NULL,
cusubinfo character varying(80),
csubjobno character varying(15),
cstate character varying(4),
cpostcode character varying(4),
cwarinv character varying(20),
cwardate date,
cwarsup character varying(6),
copfnum integer,
cestcomplete date,
cnodeid character varying(64),

Re: [GENERAL] Sequence Incrementing by 2 instead of 1

2009-04-28 Thread David
Cool, thanks, that makes sense. It was hard to pin down because we didn't 
notice for a while.
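
For the archives, here's a stripped-down illustration (hypothetical names,
not our real schema) of how a rule can cause the double increment:

```sql
CREATE TABLE task (
    id    serial PRIMARY KEY,   -- DEFAULT nextval('task_id_seq')
    title text
);

CREATE TABLE task_log (
    task_id integer,
    note    text
);

-- Rules work by macro expansion: in the rule body, NEW.id is replaced
-- by the expression the INSERT supplied for id, here the column
-- DEFAULT, nextval('task_id_seq'), so the sequence is bumped a second
-- time on every insert.
CREATE RULE task_audit AS ON INSERT TO task
    DO ALSO INSERT INTO task_log (task_id, note) VALUES (NEW.id, 'created');
```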

On Tue, Apr 28, 2009 at 09:31:26AM +0200, Daniel Verite wrote:
>   David wrote:
> 
> >the sequence is incrementing by 2 instead of 1 and I can't
> >find any reason why I have checked the 
> >sequence itself to see if it had somehow got set to increment
> >by 2 but no.
> >The table in question has a number of both before and after triggers
> >that do various bits of business logic as well as some rewrite rules 
> >to track field changes.
> >Using 8.3.0. Can anyone think of a reason why?
> 
> I'd say rewrite rules are your prime suspect.
> Check out for example:
> http://archives.postgresql.org/pgsql-sql/2007-03/msg00334.php
> 
> Cordialement,
> -- 
> Daniel
> 


[GENERAL] Getting ID of last-inserted row

2009-05-20 Thread David
Hi there.

I never found an adequate (simple and efficient) method for getting
the primary key ID of the just-inserted row, and usually used
transactions and "select last value, ordered by id"-type queries to
get the last id value, or other ugly logic.

That was until I found how SQLalchemy[1] handles it for PostgreSQL.
What they do is:

1) First, get the next value from the sequence, eg:

SELECT nextval('clients_id_seq');

2) Then, run an insert statement, where the retrieved value is
explicitly given, rather than automatically assigned, eg:

INSERT INTO clients (id, name) VALUES (12345, 'John Smith');

(Where 12345 is the id retrieved from the previous query).

I wanted to add this info to the wiki[2], but there doesn't seem to be
a way to sign up.

Anyway, I thought that other people might find this info useful.

David.

[1] http://www.sqlalchemy.org/
[2] http://wiki.postgresql.org/wiki/Main_Page



Re: [GENERAL] Can I pause the pg_dump or pg_restore

2009-05-20 Thread David
2009/5/20 Grzegorz Jaśkiewicz :
> restore you can, kind of pause, by pausing data feed to it. (assuming
> you have a control over pipe, and you are using text dump format).
> as for pg dump, I don't think there's a way.
>

pg_dump should be pauseable if you use your OS's job-control tools.
For instance, on Linux, you can press "Ctrl+Z" while running a
program, and then run "fg" later to resume it. You can also script
this with various command-line utilities.
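
For example, sketched with `sleep` standing in for a long-running pg_dump
(the signals are standard POSIX job control, so this should apply to any
process):

```shell
# Suspend and resume a long-running process (here `sleep` stands in for
# pg_dump) using job-control signals from another shell.
sleep 30 &
pid=$!

kill -STOP "$pid"                         # suspend it
sleep 0.2                                 # give the kernel a moment
stopped_state=$(ps -o stat= -p "$pid" | tr -d ' ')
echo "while stopped: $stopped_state"      # typically "T"

kill -CONT "$pid"                         # resume it
sleep 0.2
running_state=$(ps -o stat= -p "$pid" | tr -d ' ')
echo "after resume:  $running_state"

kill "$pid" 2>/dev/null                   # clean up
```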



Fwd: [GENERAL] Getting ID of last-inserted row

2009-05-20 Thread David
>> I never found an adequate (simple and efficient) method for getting
>> the primary key ID of the just-inserted row, and usually used
>> transactions and "select last value, ordered by id"-type queries to
>> get the last id value, or other ugly logic.
>
> use currval() instead, see
> http://www.postgresql.org/docs/current/static/functions-sequence.html
> --

I think I researched that before also, but I wasn't sure at the time
how safe it was against race conditions. Although I see now (reading
the docs again) that it is tracked per session, so it should be
safe. There might also be issues (for instance, where you are using
a complex database-handling library) where you can't always guarantee
that your currval() call is made at the right time. But most of the
time it should be fine. Thanks for the reminder.
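
For the archives, the currval() pattern looks like this (hypothetical
table and sequence names):

```sql
-- Insert normally, letting the column DEFAULT call nextval() ...
INSERT INTO clients (name) VALUES ('John Smith');

-- ... then read back the value nextval() produced in THIS session.
-- currval() is session-local, so concurrent inserts in other sessions
-- do not affect the result.
SELECT currval('clients_id_seq');
```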

> Also, you can do insertreturning... (as of version 8.2, I think):
>
> INSERT INTO clients (id, name)
>  VALUES (nextval('clients_id_seq'), 'John Smith')
>  RETURNING id;

Thanks. I think I saw that too, not too long ago, but forgot about it.
Some of the PostgreSQL services I use are on older versions, so I need
to use older syntax. But, this will be useful when the db version is
guaranteed to be recent. Thanks also for your reminder.



Re: [GENERAL] Suggested way of associating PG user metadata with record

2009-05-20 Thread David
On Wed, May 20, 2009 at 12:28 PM, Peter Geoghegan
 wrote:
> Hello,
>
> I'd like to associate the PG user who created certain sorts of records
> with that record in my application. The obvious way to do this is by
> creating a text column with a default value of current_user in the
> record's table, or perhaps something similar with a trigger. However,
> because what I've described strikes me as a fairly common requirement,
> I suspect there may be a purpose-built, more direct and efficient
> feature I could use, such as a referential user datatype or something
> like that.
>
> Is there? If not, does the obvious approach seem reasonable?
>
> Regards,
> Peter Geoghegan
>

You should be careful about linking records to PostgreSQL user
accounts. If the PostgreSQL accounts are removed, there could be
problems with the foreign key references, if you don't implement it
carefully.

That said, you could also use the value in pg_user.usesysid as a
unique ID, rather than the account name (if you want to distinguish
between different users with the same login name, over a period of
time where users were removed and re-added).

http://www.postgresql.org/docs/8.2/interactive/view-pg-user.html

David.



Re: [GENERAL] Suggested way of associating PG user metadata with record

2009-05-20 Thread David
>>
>> That said, you could also use the value in pg_user.usesysid as a
>> unique ID, rather than the account name (if you want to distinguish
>> between different users with the same login name, over a period of
>> time where users were removed and re-added).
>
> Perhaps I should just not use a foreign key, and accept that users may
> be removed, and do a LEFT JOIN to pg_user and SELECT
> COALESCE(pg_use.username, 'user removed'). Since I imagine that
> removing users will rarely or never occur in practice, this may be the
> thing to do. Then again, perhaps I'm "prematurely pessimising" by not
> just doing the simple and obvious thing and using a text column as
> described in my original mail. I'd like to do whatever is considered
> canonical, but I'm not sure what that is in this case.
>

A text column should be perfectly fine, and is the simplest
implementation. But it does have a few possible issues that you need
to keep in mind (besides things like users being removed). For
instance, if PostgreSQL users are renamed (eg: ALTER USER name RENAME
TO newname), then your text column becomes incorrect.

If you want to record additional user info (besides what PostgreSQL
already tracks), I'd suggest creating your own users table, and inside
there add a (non foreign key reference) entry for pg_user.usesysid, as
well as any other application-specific details. Then, you link other
tables to your table (with foreign key reference), instead of using
text columns.
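
Roughly like this (the names are just for illustration):

```sql
-- Application-level users table; usesysid is copied from pg_user but
-- is deliberately NOT a foreign key (you can't reference a system view).
CREATE TABLE app_user (
    id        serial PRIMARY KEY,
    usesysid  oid,          -- matches pg_user.usesysid at creation time
    usename   text,         -- snapshot of the login name
    full_name text
);

-- Other tables get a real foreign key to your table instead of a
-- free-form text column.
CREATE TABLE document (
    id         serial PRIMARY KEY,
    created_by integer NOT NULL REFERENCES app_user (id),
    body       text
);
```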



Re: [GENERAL] plpython function problem workaround

2005-03-15 Thread David
On Tue, Mar 15, 2005 at 08:09:47AM +0200, Sim Zacks wrote:
> I don't think that this type of solution should be discussed as an official
> patch.
> If it was, I would recommend solving the problem in source code when the
> function is passed to the translator. That way each platform could fix the
> code to work with as is needed and the code would be portable.
> 
> I ran into this problem about half a year ago and it didn't go anywhere,
> then I saw a long discussion about it that also didn't go anywhere. I had
> given up on using plpython until now because I actually need it.

I'd like to insert one note here.  While I'm not particularly familiar
with either Perl or Python, when I encountered the referred-to thread
regarding Windows/Unix newline incompatibilities in Python, I ran some
tests on my Linux system.  I wrote some very simple scripts in Perl and
Python and tested them.  They ran OK when I saved them with native Linux
newlines, but if I saved them in DOS mode (vim), neither the Python nor
the Perl scripts would run.  I noted that some of the writers in that
thread were critical of Python for not taking care of the conversions,
and it may be that the Perl interfaces in PostgreSQL take care of this
(I have not tried it), but it seems that my tests, if valid, indicate
that this problem is not restricted to Python alone as far as the
interpreters are concerned.

This observation may have nothing to do with the pl/Perl and pl/Python
interfaces, but I thought I'd pass this along FWIW.

> So I
> figured out how to make it work and thought that it would be helpful to
> others, but I couldn't figure out how to automate the fix.
> 
> "Tom Lane" <[EMAIL PROTECTED]> wrote in message

> > I don't know Python at all, so I don't know how complicated its lexical
> > structure is, but ISTM you'd at least need enough smarts to distinguish
> > literals from unquoted whitespace.
> >
> > The other small fly in the ointment is that when the server is running
> > on Windows, I suppose we would have to *put in* rather than remove CRs.
> > Sim probably doesn't care about that case, but we couldn't accept an
> > official patch that doesn't handle it.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] plpython function problem workaround

2005-03-16 Thread David
On Tue, Mar 15, 2005 at 02:50:06PM -0700, Michael Fuhr wrote:
> On Tue, Mar 15, 2005 at 03:41:37PM +, Ragnar Hafstað wrote:
> 
> > actually, perl scripts with \r\n line endings will run just fine in
> > unix/linux.
> 
> Indeed, and PL/Perl doesn't care.  I just tested several PLs with
> PostgreSQL 8.0.1 on Solaris 9 and here are the results:
> 
> PL/pgSQL   CRLF ok
> PL/PerlCRLF ok
> PL/RubyCRLF ok
> PL/Tcl CRLF ok
> PL/Python  CRLF fails
> PL/R   CRLF fails
> 
> Details:

Thanks for the detailed test you ran.  This is something I should have
done before spouting off.

So, really, AIUI, one of the "selling points" for each of the
above-mentioned languages is their portability.  It would appear to me
that this newline incompatibility ought to be considered a major bug in
Python.  (Maybe I'm spouting off half-cocked again, though.)



Re: [GENERAL] plpython function problem workaround

2005-03-16 Thread David
On Tue, Mar 15, 2005 at 03:41:37PM +, Ragnar Hafstað wrote:
> On Tue, 2005-03-15 at 07:33 -0600, David wrote:
> 
> > [about the line-termination problem in plpython]
> 
> > I'd like to insert one note here.  While I'm not particularly familiar
> > with either perl or python, when I encountered the referred-to thread
> > regarding Windows/Unix newline incompatibilities in Python, I ran some
> > tests on my Linux system.  I wrote some very simple scripts in Perl and
> > Python and tested them.  They ran OK when I saved them in native Linux
> > newlines, they ran, but if I saved them in DOS mode (vim), neither the
> > Python nor the Perl scripts would run.
> 
> actually, perl scripts with \r\n line endings will run just fine in
> unix/linux.

Yes, I believe Michael Fuhr demonstrated that quite admirably in another
reply to this message.

> what you might have been experiencing, is the fact that the
> unix shell is expecting \n lineendings, and the #! line could have been
> failing to run the perl executable.

Yes, I'm now certain you are correct.  On retesting, I caught something
I didn't when I first ran them - the error message was identical for
each language.  The message was
": bad interpreter: No such file or directory"

If the error had been internal to the language, I'm sure the messages
from each would have been different.

I guess this shows what happens when you shoot off your mouth when you
don't know what you're talking about.




[GENERAL] Replication fell out of sync

2015-03-02 Thread David Kerr
Howdy,

I had an instance where a replica fell out of sync with the master.

Now it's in a state where it's unable to catch up, because the master has 
already removed the WAL segment.

(logs)
Mar  2 23:10:13 db13 postgres[11099]: [3-1] user=,db=,host= LOG:  streaming 
replication successfully connected to primary
Mar  2 23:10:13 db13 postgres[11099]: [4-1] user=,db=,host= FATAL:  could not 
receive data from WAL stream: FATAL:  requested WAL segment 
0006047C001F has already been removed


I was under the impression that when you set up streaming replication, if you 
specify a restore command like restore_command = 'cp /arch/%f %p',

then even if the slave falls out of sync and the master removes the WAL 
segment, as long as you can still retrieve the WAL files, it can bring 
itself back into sync.


But that doesn't seem to be happening.

The restore_command is working
# Slave's $PGDATA/pg_xlog/
-rw--- 1 postgres postgres 16777216 Mar  2 21:29 0006047C001F
-rwx-- 1 postgres postgres 16777216 Mar  2 23:13 RECOVERYXLOG

I'm on PG 9.2.7, which I know is old, but I'm upgrading shortly.

recovery.conf:
standby_mode  = 'on'
primary_conninfo  = 'host=pgmaster port=5432'
restore_command   = 'cp /arch/%f %p'

relevant info from postgresql.conf:
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
hot_standby = on
hot_standby_feedback = true
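
For completeness, the two commands are paired like this (the
archive_command shown is the typical example from the docs, not
necessarily what's on our master):

```
# master, postgresql.conf: ship each completed segment to /arch
archive_mode    = on
archive_command = 'test ! -f /arch/%f && cp %p /arch/%f'

# slave, recovery.conf: fetch archived segments when streaming falls behind
restore_command = 'cp /arch/%f %p'
```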


I know that to avoid this entirely I need to set wal_keep_segments higher, 
although in this particular case it wouldn't have mattered, because a rogue 
program slammed the database and basically 32/64/128 WAL segments went by in a 
short span of time.

However, I really thought that as long as PG could get the archived logs I'd 
be able to recover. 

Was I wrong in that assertion, or did I just run into a bug?

Thanks




Re: [GENERAL] Replication fell out of sync

2015-03-02 Thread David Kerr
On Mon, Mar 02, 2015 at 03:33:22PM PDT, Joshua D. Drake wrote:
> 
> On 03/02/2015 03:25 PM, David Kerr wrote:
> >
> >Howdy,
> >
> >I had an instance where a replica fell out of sync with the master.
> >
> >Now it's in in a state where it's unable to catch up because the master has 
> >already removed the WAL segment.
> >
> >(logs)
> >Mar  2 23:10:13 db13 postgres[11099]: [3-1] user=,db=,host= LOG:  streaming 
> >replication successfully connected to primary
> >Mar  2 23:10:13 db13 postgres[11099]: [4-1] user=,db=,host= FATAL:  could 
> >not receive data from WAL stream: FATAL:  requested WAL segment 
> >0006047C001F has already been removed
> >
> >
> >I was under the impression that when you setup streaming replication if you 
> >specify a restore command like : restore_command = 'cp /arch/%f %p'
> >
> >Then even if the slave falls out of sync, and the master removes the WAL 
> >segment, as long as you can still retrieve the WAL files, then it can bring 
> >itself back into sync.
> 
> If the archive command is also set so that the restore command has a
> file to retrieve, then yes it will work that way.

Yeah, it is; it's actually pulling the file down. 

Glad that's how it's supposed to work. I'd rather be unlucky than crazy.  =)





Re: [GENERAL] Replication fell out of sync

2015-03-02 Thread David Kerr
On Mon, Mar 02, 2015 at 04:06:02PM PDT, Adrian Klaver wrote:
> On 03/02/2015 03:25 PM, David Kerr wrote:
> >Howdy,
> >
> >I had an instance where a replica fell out of sync with the master.
> >
> >Now it's in a state where it's unable to catch up because the master has 
> >already removed the WAL segment.
> >
> >(logs)
> >Mar  2 23:10:13 db13 postgres[11099]: [3-1] user=,db=,host= LOG:  streaming 
> >replication successfully connected to primary
> >Mar  2 23:10:13 db13 postgres[11099]: [4-1] user=,db=,host= FATAL:  could 
> >not receive data from WAL stream: FATAL:  requested WAL segment 
> >0006047C001F has already been removed
> >
> >
> >I was under the impression that when you setup streaming replication if you 
> >specify a restore command like : restore_command = 'cp /arch/%f %p'
> >
> >Then even if the slave falls out of sync, and the master removes the WAL 
> >segment, as long as you can still retrieve the WAL files, then it can bring 
> >itself back into sync.
> >
> >
> >But that doesn't seem to be happening.
> >
> >The restore_command is working
> ># Slave's $PGDATA/pg_xlog/
> >-rw--- 1 postgres postgres 16777216 Mar  2 21:29 0006047C001F
> >-rwx-- 1 postgres postgres 16777216 Mar  2 23:13 RECOVERYXLOG
> 
> Trying to figure out why the error occurred at Mar  2 23:10:13 and
> the file shows a time stamp of Mar  2 21:29, especially since you
> say the WAL segments flew past?
> 
> Are there any other WAL files in the slave ~/pg_xlog?

Turns out just that file had gotten corrupted on copy. When I re-pulled it from 
source, life was good.

(phew!)

Thanks all for looking.
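A copy-and-verify wrapper in the archive step would catch this kind of corrupted copy at archive time rather than at recovery. A minimal sketch (names and paths are illustrative, not from the original setup):

```shell
# Sketch: copy a WAL segment and verify the copy byte-for-byte, so a
# corrupted transfer fails loudly instead of poisoning the archive.
archive_verify() {
    src=$1; dest=$2
    cp "$src" "$dest" || return 1
    if cmp -s "$src" "$dest"; then
        return 0
    else
        rm -f "$dest"   # discard the bad copy so a retry starts clean
        return 1        # non-zero exit tells PostgreSQL archiving failed
    fi
}

# Demo on a throwaway file:
tmp=$(mktemp -d)
printf 'pretend WAL bytes' > "$tmp/seg"
archive_verify "$tmp/seg" "$tmp/seg.archived" && echo "archive OK"
```

Used as the body of `archive_command`, the non-zero return makes PostgreSQL retry the segment later instead of considering it archived.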




Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

2015-03-27 Thread Day, David
Hi,

Update: a story with a happy ending.

I have not seen this segmentation fault since converting the plperl functions 
to Python within the FreeBSD 9.x environment, so I believe Guy Helmer's 
suggested cause was likely spot on.
Given our inability to reproduce the issue on demand, there is a small chance 
this is not the root cause, but I'll let the current empirical health of the 
system speak loudest on this matter.

We are in the process of migrating development efforts to 10.x, so the choice of 
Perl versus Python should become a non-issue.

Thanks all who assisted me in figuring this out.

Best Regards


Dave

From: Day, David
Sent: Wednesday, February 18, 2015 8:07 AM
To: 'Guy Helmer'
Cc: 'pgsql-general@postgresql.org'
Subject: RE: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu 
related ?

Update/Information sharing: ( FreeBSD 10.0 (amd64) – Postgres 9.3.5 – Perl 5.18 
)

I have converted our Postgres plperlu functions to plpython2u to see if the 
postgres segmentation faults disappear.
Lacking a known way to reproduce the error on demand, I will have to wait a few 
weeks for the absence of the symptom before I can conclude that this bug 
reported to me by Guy Helmer was the issue.  Migration/upgrade to FreeBSD 
10.1 was not an immediate option.


Regards

Dave



Guy,

No I had not seen that bug report before.  ( 
https://rt.perl.org/Public/Bug/Display.html?id=122199 )

We did migrate from FreeBSD 9.x (2?), and I think it is true
that we were not experiencing the problem at that time,
so it might be a good fit/explanation for our current experience.

There were a couple of suggestions to follow up on.
I’ll keep the thread updated.

Thanks, a  good start to my  Friday the 13th.


Regards


Dave Day






From: Guy Helmer [mailto:ghel...@palisadesystems.com]
Sent: Thursday, February 12, 2015 6:19 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] segmentation fault postgres 9.3.5 core dump perlu 
related ?


On Feb 12, 2015, at 3:21 PM, Day, David <d...@redcom.com> wrote:

Update/Information sharing on my pursuit of  segmentation faults

FreeBSD 10.0-RELEASE-p12 amd64
Postgres version 9.3.5

Below are three postgres core files generated on two different machines 
(Georgia and Alabama) on Feb 11.
These cores would not have been caused by the environment-update issue that I 
last suspected of causing the segfaults,
so I am kind of back to square one in terms of thinking about what is occurring.

I am not sure that I understand the associated time events in the postgres 
log file output.  Is this whatever happens to be running in the other forked 
postgres processes when the cored process was detected?
If so, I have probably been reading too much into the content of the postgres 
log file at the time of the core; it probably just represents collateral damage 
to routine transactions in other forked processes at the time one process cored.

Therefore I would now just assert that postgres has a sporadic segmentation 
problem with no known way to reliably trigger it,
and I am uncertain how to proceed to resolve it.

. . .

 Georgia-Core 8:38 -  Feb 11
[New process 101032]
[New Thread 802c06400 (LWP 101032)]
Core was generated by `postgres'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00080c4b6d51 in Perl_hfree_next_entry () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
(gdb) bt
#0  0x00080c4b6d51 in Perl_hfree_next_entry () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#1  0x00080c4cab49 in Perl_sv_clear () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#2  0x00080c4cb13a in Perl_sv_free2 () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#3  0x00080c4e5102 in Perl_free_tmps () from 
/usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
#4  0x00080bcfedea in plperl_destroy_interp () from 
/usr/local/lib/postgresql/plperl.so
#5  0x00080bcfec05 in plperl_fini () from 
/usr/local/lib/postgresql/plperl.so
#6  0x006292c6 in ?? ()
#7  0x0062918d in proc_exit ()
#8  0x006443f3 in PostgresMain ()
#9  0x005ff267 in PostmasterMain ()
#10 0x005a31ba in main ()
(gdb) info threads
  Id   Target Id                     Frame
* 2    Thread 802c06400 (LWP 101032) 0x00080c4b6d51 in Perl_hfree_next_entry () 
       from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18
* 1    Thread 802c06400 (LWP 101032) 0x00080c4b6d51 in Perl_hfree_next_entry () 
       from /usr/local/lib/perl5/5.18/mach/CORE/libperl.so.5.18


Given two of the coredumps are in down in libperl and this is FreeBSD 10.0 
amd64, have you seen this?

https://rt.perl.org/Public/Bug/Display.html?id=122199

Michael Moll suggested trying setting vm.pmap.pcid_enabled to 0 but I don’t 
recall seeing if that helped.

Guy




[GENERAL] unexpected error " tables can have at most 1600 columns"

2015-04-13 Thread Day, David
Situation

I have a co-developer installing a new Virtual Machine and encountering a 
postgres error during the installation.
One of our  SQL patch files is failing unexpectedly.

The patch is attempting to add columns to a table.  The table involved 
currently has only 2 columns.
Interactively, I can reproduce the same error in his current state.

psql -h ohio -U redcom ace_db
psql (9.3.6)
Type "help" for help.

ace_db=# select * from log.conference_history;
 conf_id | max_size
---------+----------
(0 rows)

ace_db=# ALTER TABLE log.conference_history ADD talker_limit integer DEFAULT 0;
ERROR:  tables can have at most 1600 columns
ace_db=#
ace_db=#



Puzzled ?


Any thoughts ?


Regards


Dave Day


Re: [GENERAL] unexpected error " tables can have at most 1600 columns"

2015-04-13 Thread Day, David
Pavel,

Thanks so much. This seems to help explain the  problem.

I can say that the patch file had a later statement with an error related to 
search_path setup.
Given that the logic that applies patches repeats its attempts for an extended 
period of time,
I speculate that each rollback of the patch leaves these invisible columns 
behind?
Otherwise I fail to see where these columns originate.

When I drop the database and recreate it with the schema-reference issues 
resolved (i.e. the issues related to search_path setup), the database builds 
without issue.

These patches were not an issue for ongoing developers because at some point
in the installation the default search_path gets set up, so this error in the
patch writing is masked.
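For anyone hitting the same wall: since dropped (and never-visible) columns still count toward the 1600-column limit, the slots can be reclaimed without dropping the whole database by rebuilding just the affected table. A sketch, assuming the two-column layout shown above (adjust indexes, constraints, and permissions to match the real table):

```
-- Rebuild log.conference_history to shed the ~1598 dropped-column
-- entries accumulated in pg_attribute (sketch, not the actual patch).
BEGIN;
CREATE TABLE log.conference_history_new AS
    SELECT conf_id, max_size FROM log.conference_history;
DROP TABLE log.conference_history;
ALTER TABLE log.conference_history_new RENAME TO conference_history;
COMMIT;
```

The new relation starts with fresh attnums, so subsequent ALTER TABLE ... ADD COLUMN statements succeed again.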


Thanks so much.



Regards


Dave Day



select attname from pg_attribute where 
attrelid='log.conference_history'::regclass and attnum > 0;
    attname
----------------
 pg.dropped.11
 pg.dropped.13
 pg.dropped.14
 pg.dropped.15
 pg.dropped.41
 pg.dropped.56
 pg.dropped.42
 pg.dropped.43 ..

select count(*) from pg_attribute where 
attrelid='log.conference_history'::regclass and attnum > 0 and attisdropped;
 count
-------
  1598

From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: Monday, April 13, 2015 12:06 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] unexpected error " tables can have at most 1600 columns"



2015-04-13 17:57 GMT+02:00 Day, David <d...@redcom.com>:
Situation

I have a co-developer installing a new Virtual Machine and encountering a 
postgres error during the installation.
One of our  SQL patch files is failing unexpectedly.

The patch is attempting to add columns to a table.  The table involved 
currently has only 2 columns.
Interactively, I can reproduce the same error in his current state.

psql -h ohio -U redcom ace_db
psql (9.3.6)
Type "help" for help.

ace_db=# select * from log.conference_history;
 conf_id | max_size
---------+----------
(0 rows)

ace_db=# ALTER TABLE log.conference_history ADD talker_limit integer DEFAULT 0;
ERROR:  tables can have at most 1600 columns
ace_db=#
ace_db=#

There can be removed (invisible) columns:

select attname from pg_attribute where attrelid = 'test'::regclass and attnum > 
0;

postgres=# select attname from pg_attribute where attrelid = 'test'::regclass 
and attnum > 0;
┌─┐
│ attname │
╞═╡
│ a   │
│ c   │
│ d   │
└─┘
(3 rows)



alter table test drop column a, drop column c;


postgres=# select attname from pg_attribute where attrelid = 'test'::regclass 
and attnum > 0;
┌──┐
│   attname│
╞══╡
│ pg.dropped.1 │
│ pg.dropped.2 │
│ d│
└──┘
(3 rows)

postgres=# select count(*) from pg_attribute where attrelid = 'test'::regclass 
and attnum > 0 and attisdropped;
┌───┐
│ count │
╞═══╡
│ 2 │
└───┘
(1 row)
So maybe that is the reason for this issue?
Pavel






Puzzled ?


Any thoughts ?


Regards


Dave Day



[GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Day, David
Hi,



Postgres version 9.3.9


What is wrong with my usage of the plpgsql "select into" concept?
I have a function that looks into a calendar table to find the first and
last weekend dates of a month.

In this simplified concept function I end up with NULL for the first or last 
weekend variable.


create or replace function sys.time_test ()
returns date as 
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, 
last_weekend::date FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');

 RETURN( COALESCE(last_weekend,'01-jun-2014'));
   
END
$$
LANGUAGE plpgsql volatile;


If I execute the same select logic from a psql shell I get the correct result.  


ace_db=# SELECT MIN(CAL_DATE),MAX(CAL_DATE) FROM sys.calendar cal
         WHERE cal.month_of_year = (extract(MONTH FROM current_date))::int AND
               cal.year_of_date = (extract(YEAR FROM current_date))::int AND
               cal.day_of_week IN ( 'Sat','Sun');
    min     |    max
------------+------------
 2015-06-06 | 2015-06-28
(1 row)


If I simplify to a single variable it works. i.e


create or replace function sys.time_test ()
returns date as 
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE) INTO first_weekend::date FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');

 RETURN( COALESCE(first_weekend,'01-jun-2014'));
   
END
$$
LANGUAGE plpgsql volatile;



I suppose I can adjust my actual function to use two selects, one for 
each variable.
However, I thought that according to the documentation the targets could/must 
match the result columns for SELECT INTO?


Thoughts


Thanks


Dave Day






Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-29 Thread Day, David

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Monday, June 29, 2015 4:03 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] plpgsql question: select into multiple variables ?

On 06/29/2015 12:07 PM, Day, David wrote:
> Hi,
>
>
>
> Postgres version 9.3.9
>
>
> What is wrong with my usage of the plpgsql  "select into" concept I 
> have a function to look into a calendar table to find the first and 
> Last weekend date of a month.
>
> In this simplified concept function I end up with a NULL for first or last 
> weekend variable.
>
>
> create or replace function sys.time_test () returns date as $$ DECLARE
> first_weekend date;
> last_weekend date;
> BEGIN
>
>SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, 
> last_weekend::date FROM sys.calendar
>WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
>year_of_date = (extract(YEAR FROM current_date))::int AND
>   day_of_week IN ( 'Sat','Sun');
>
>   RETURN( COALESCE(last_weekend,'01-jun-2014'));
>
> END
> $$
> LANGUAGE plpgsql volatile;

The ::date casts seem to be the problem. When I tried a version of the function 
here with them, I got the same output; eliminating them got the correct output. 
They are redundant, as you already DECLAREd first_weekend and last_weekend to be 
of DATE type. So:

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend, last_weekend .
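Put together, a corrected version of the concept function would look like this (a sketch, assuming the sys.calendar layout from the original post):

```
create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN
  -- No casts on the INTO targets: they are already DECLAREd as date.
  SELECT MIN(cal_date), MAX(cal_date) INTO first_weekend, last_weekend
  FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
        year_of_date = (extract(YEAR FROM current_date))::int AND
        day_of_week IN ('Sat','Sun');

  RETURN COALESCE(last_weekend, '01-jun-2014');
END
$$
LANGUAGE plpgsql VOLATILE;
```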
>
>
> If I execute the same select logic from a psql shell I get the correct result.
>
>
> (1 row)
>
> ace_db=# ace_db=#  SELECT MIN(CAL_DATE),MAX(CAL_DATE) FROM sys.calendar cal   
>   
> WHERE cal.month_of_year = 
> (extract(MONTH FROM current_date))::int AND   
>
> cal.year_of_date = (extract(YEAR FROM current_date))::int AND 
>   
>  cal.day_of_week IN ( 'Sat','Sun');
>  min |max
> +
>   2015-06-06 | 2015-06-28
> (1 row)
>
>
> If I simplify to a single variable it works. i.e
>
>
> create or replace function sys.time_test () returns date as $$ DECLARE
> first_weekend date;
> last_weekend date;
> BEGIN
>
>SELECT MIN(CAL_DATE) INTO first_weekend::date FROM sys.calendar
>WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
>year_of_date = (extract(YEAR FROM current_date))::int AND
>   day_of_week IN ( 'Sat','Sun');
>
>   RETURN( COALESCE(first_weekend,'01-jun-2014'));
>
> END
> $$
> LANGUAGE plpgsql volatile;
>
>
>
> I suppose I can adjust to write my actual function to have 2 selects; one for 
> each variable.
> However, I thought according to the documentation the targets could/must 
> match the result columns for select into ?
>
>
> Thoughts
>
>
> Thanks
>
>
> Dave Day
>
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com



I agree with your evaluation.
I originally had that, but in playing around with the function I had added the 
casts, with no benefit and seemingly no harm either.
I later noted that I was comparing to 'SAT','SUN' rather than 'Sat','Sun' in my 
calendar table.
After repairing that I forgot to back out the date casts.

Although the casts were redundant, as you pointed out, I am not quite sure why 
they made it not work.
Nonetheless, I am happy to move on to other issues.


Thanks very much for your assistance.



Dave Day





Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-30 Thread Day, David
Hi Yari,

Thanks for the response.
You did make the “simplified concept” function more rational.

However, this was kind of a nonsense function to demonstrate the problem I was 
having with the “select fields” and the “into variables”.
As pointed out by Adrian Klaver and Tom Lane, the real problem was that the 
casts I was using were confusing the parser and were unnecessary.

Appreciate your thought and effort.


Regards


Dave



From: Yasin Sari [mailto:yasinsar...@googlemail.com]
Sent: Tuesday, June 30, 2015 3:26 AM
To: Day, David
Subject: Re: [GENERAL] plpgsql question: select into multiple variables ?

Hi David,

this works for me.

CREATE OR REPLACE FUNCTION sys.time_test (
  out first_weekend date,
  out last_weekend date
)
RETURNS SETOF record AS
$body$
BEGIN


  SELECT 
COALESCE(MIN(CAL_DATE),'01-jun-2014'),COALESCE(MAX(CAL_DATE),'01-jun-2014')
  into first_weekend,last_weekend
  FROM sys.calendar
WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');
return next;

END
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100 ROWS 1000;

On Mon, Jun 29, 2015 at 10:07 PM, Day, David <d...@redcom.com> wrote:
Hi,



Postgres version 9.3.9


What is wrong with my usage of the plpgsql  "select into" concept
I have a function to look into a calendar table to find the first and
Last weekend date of a month.

In this simplified concept function I end up with a NULL for first or last 
weekend variable.


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE),MAX(CAL_DATE) INTO first_weekend::date, 
last_weekend::date FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');

 RETURN( COALESCE(last_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;


If I execute the same select logic from a psql shell I get the correct result.


(1 row)

ace_db=# ace_db=#  SELECT MIN(CAL_DATE),MAX(CAL_DATE) FROM sys.calendar cal 

WHERE cal.month_of_year = 
(extract(MONTH FROM current_date))::int AND 
 
cal.year_of_date = (extract(YEAR FROM current_date))::int AND   

 cal.day_of_week IN ( 'Sat','Sun');
min |max
+
 2015-06-06 | 2015-06-28
(1 row)


If I simplify to a single variable it works. i.e


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE) INTO first_weekend::date FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');

 RETURN( COALESCE(first_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;



I suppose I can adjust to write my actual function to have 2 selects; one for 
each variable.
However, I thought according to the documentation the targets could/must match 
the result columns for select into ?


Thoughts


Thanks


Dave Day






