Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread Steve Atkins

On Apr 2, 2015, at 5:09 PM, Octavi Fors  wrote:

> 
> And second, because I need the database to be accessible from two computers 
> in the same LAN.

If you do this, you will destroy your database[1].

Why not have the database running on one machine, all the time, potentially 
with a real disk subsystem, and just access it from wherever on the LAN you 
need to? PostgreSQL is a client-server database, and you can access it over the 
network as easily as you can from the machine it's running on.
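
As a sketch of what that looks like in practice (host names, subnet, and user
names here are placeholders, not from the original thread):

```
# postgresql.conf on the database server
listen_addresses = '*'            # or a specific interface address

# pg_hba.conf -- allow clients on the LAN subnet, with password auth
host    all    all    192.168.1.0/24    md5
```

After a reload, any machine on the LAN can connect with e.g.
`psql -h dbserver -U appuser mydb`, or the equivalent connection string from
any client library.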

Cheers,
  Steve

[1] Almost certainly, unless you make absolutely sure postgresql is only 
started on one machine at a time, amongst other things.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql Development Options

2015-04-05 Thread Steve Atkins

On Apr 5, 2015, at 1:21 PM, Ray Madigan  wrote:

> I have been using postgresql in java off and on for many years.   I now have 
> an assignment where I have to build a very straightforward networked 
> application that needs to be able to insert Array data types from a Windows 
> environment to a remote Linux PostgreSQL database.  
> 
> My first attempt was to use Qt.  There is a PostgreSQL driver, but I found out 
> the hard way that it doesn't support the Array data type.  I need a small UI 
> so the user can set options for the application.   My question is: are there 
> other UI options that I can use to develop this application?

The Qt database driver is not great for general use. Where it shines is when 
you want to do simple CRUD queries and to have them connected to widgets with 
minimal work. It should support arrays, though, with a little data 
transformation.

If you're looking to use C++ then Qt is an excellent framework for a GUI app - 
one that you won't beat for cross-platform work - but you might consider 
whether using libpqxx or libpq to connect to the database might suit your needs 
better.
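
One hedged sketch of the "little data transformation" mentioned above:
PostgreSQL accepts an array as a single text literal, so a driver with no
native array support can bind the value as an ordinary string (table and
column names are illustrative):

```sql
-- '{1,2,3}' is bound as a plain string parameter; the cast turns it
-- into a real integer[] on the server side.
INSERT INTO measurements (readings)
VALUES ('{1,2,3}'::integer[]);
```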

Cheers,
  Steve





Re: [GENERAL] Moving Specific Data Across Schemas Including FKs

2015-04-23 Thread Steve Atkins

On Apr 23, 2015, at 10:09 AM, Cory Tucker  wrote:

> I have the need to move a specific set of data from one schema to another.  
> These schemas are on the same database instance and have all of the same 
> relations defined.  The SQL to copy data from one table is relatively 
> straightforward:
> 
> INSERT INTO schema_b.my_table 
> SELECT * FROM schema_a.my_table WHERE ...

Would ALTER TABLE ... SET SCHEMA do what you need? A schema is mostly just a 
name space, so there's no need to create new tables or copy data around.
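
For example, using the table names from the question (note this moves the
whole table, which is only appropriate if all of its rows should move):

```sql
-- Renames the table into schema_b; data, indexes, and constraints
-- all come along, and no rows are copied.
ALTER TABLE schema_a.my_table SET SCHEMA schema_b;
```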

Cheers,
  Steve

> 
> What I am trying to figure out is that if I also have other relations that 
> have foreign keys into the data I am moving, how would I also move the data 
> from those relations and maintain the FK integrity?  
> 
> The tables are set up to use BIGSERIAL values for the id column, which is the 
> primary key, and the foreign keys reference these id columns.  Ideally each 
> schema would use its own serial for the ID values, but I'm open to clever 
> alternatives.
> 
> I am using PG 9.3.5 (Amazon RDS), but with an option to move to 9.4 should 
> the situation call for it.
> 
> thanks
> --Cory





Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Steve Atkins

> On Jun 10, 2016, at 1:01 PM, Ken Tanzer  wrote:
> 
> Hi.  I was hoping this list might be able to offer some 
> help/advice/suggestions/opinions about feasibility for something I want to 
> implement, namely converting Postgres constraints into PHP logic.  Here's the 
> context and explanation:
> 
> I work on a PHP web app using Postgres.  When possible, we try to build as 
> much logic as possible directly into the DB.  The app already automatically 
> reads NOT NULL and foreign key constraints from the DB, and enforces them 
> through the UI thus preventing people from getting ugly database errors.  It 
> doesn't do that with check constraints and table constraints though, which 
> means we either end up duplicating the constraint logic in PHP, or else 
> sometimes get lazy/expedient and only put the constraint into PHP.  Obviously 
> neither of those is ideal.
> 
> What would be ideal is for the app to handle those constraints automatically. 
>  It looks like I can pull them out (as SQL) from 
> information_schema.check_constraints, with the remaining issue being how to 
> make them usable in PHP.
> 
> I'm wondering if anyone has done this already, or if there is some kind of 
> library available for this purpose?
> 
> If not, and absent any better suggestions,

You could name the check constraints, catch the errors and use a client-side 
mapping between constraint name and a friendly error message for display in the 
web interface.

You could implement the checks in PHP inside the database itself, using PL/php. 
https://public.commandprompt.com/projects/plphp/wiki

You could look at one of the existing SQL parsers implemented in PHP, and use 
those to parse each constraint into a tree from which you could generate the 
equivalent PHP.

I'd go for the first one, if possible: it's robust, and has zero overhead in 
the happy path.
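
A sketch of the first approach (table, column, and constraint names are
illustrative):

```sql
ALTER TABLE clients
    ADD CONSTRAINT clients_age_must_be_adult CHECK (age >= 18);

-- A violating insert fails with the constraint name in the error, e.g.:
--   ERROR:  new row for relation "clients" violates check constraint
--           "clients_age_must_be_adult"
-- The PHP layer catches the error and maps the name
-- 'clients_age_must_be_adult' to a friendly message for the UI.
```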

> I'm looking at trying to parse/search/replace.  This might well be imperfect, 
> and error-prone.  But if I can get something that at least works in a lot of 
> cases, that would help a lot.  So as a simple example, converting from

Cheers,
  Steve



Re: [GENERAL] PostgresSQL and HIPAA compliance

2016-06-17 Thread Steve Atkins

> On Jun 17, 2016, at 3:03 AM, Alex John  wrote:
> 
> Hello, I have a few questions regarding the use of PostgreSQL and HIPAA
> compliance. I work for a company that plans on storing protected health
> information (PHI) on our servers. We have looked at various solutions for 
> doing
> so, and RDS is a prime candidate except for the fact that they have explicitly
> stated that the Postgres engine is *not* HIPAA compliant.

There's nothing fundamental to PostgreSQL that would make HIPAA compliance
difficult, and *probably* nothing major with the way it's deployed on RDS.
Actual certification takes time and money, though.

> 
> Users on the IRC channel generally say that the guidelines are more catered
> towards building better firewalls and a sane access policy, but I would like 
> to
> know if there is anything within the implementation of Postgres itself that
> violates said compliance.
> 
> If anyone works at a similar company and utilizes postgresql to store PHI,
> please let me know.

EnterpriseDB are helping provide HIPAA compliant postgresql on AWS; it
might be worth having a chat with them.

http://www.enterprisedb.com/postgres-plus-edb-blog/fred-dalrymple/postgres-meets-hipaa-cloud
http://www.slideshare.net/EnterpriseDB/achieving-hipaa-compliance-with-postgres-plus-cloud-database

Cheers,
  Steve





Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Steve Atkins

> On Sep 6, 2016, at 12:08 PM, Scott Marlowe  wrote:
> 
> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep  wrote:
>> 
>> max_connections = 100
>> shared_buffers = 512MB
>> effective_cache_size = 24GB
>> work_mem = 110100kB
> 
> This is WAY too high for work_mem. Work_mem is how much memory a
> single sort can grab at once. Each query may run > 1 sort, and you
> could have 100 queries running at once.
> 
> This setting is 110GB. That's about 109.9GB too high for safety. When
> things go wrong with this too big, they go very wrong, sending the
> machine into a swap storm from which it may not return.

It's an oddly spelled 110MB (110100kB is roughly 108MB), which doesn't seem unreasonable.

> 
> It's far more likely that you've just got poorly written queries. I'd
> make a post with explain analyze output etc. Here's a good resource
> for reporting slow queries:
> 
> https://wiki.postgresql.org/wiki/Slow_Query_Questions

+1

Cheers,
  Steve




Re: [GENERAL] Integer fields and auto-complete clients

2016-10-26 Thread Steve Atkins

> On Oct 26, 2016, at 6:59 AM, Tim Smith  wrote:
> 
> Hi,
> 
> I'm curious as to what the current advice would be in relation to
> auto-complete type applications (e.g. "AJAX"-style JavaScript
> "guess as you type" applications).
> 
> In relation to text fields, I know the general suggestion is gin_trgm_ops.
> 
> Is there much point even thinking about using gin_trgm_ops on integers?
> I'm thinking perhaps the 'prefix' extension is better suited? Or is
> there something else altogether I should be considering to support
> such applications?

It depends on whether you want to return results that have the typed value
as a prefix or results that include the typed value as a substring. i.e. where
foo like 'bar%' vs where foo like '%bar%'.

If the latter, pg_trgm is the way to go. If the former, then a regular btree
index on the (case-folded text form of the) value, possibly using
text_pattern_ops, is the right thing.
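
A sketch of the prefix-matching case (names are illustrative):

```sql
-- Expression index over the case-folded value; text_pattern_ops keeps
-- the index usable for LIKE 'foo%' even in non-C locales.
CREATE INDEX users_name_prefix_idx
    ON users (lower(name) text_pattern_ops);

-- Served by the index above:
SELECT name FROM users WHERE lower(name) LIKE 'bar%';
```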

The prefix module isn't what you want - it's for matching, e.g., an entire 
phone number
against a table of possible prefixes, not a prefix against a table of possible 
matches.

Cheers,
  Steve





[GENERAL] Hardware recommendations?

2016-11-02 Thread Steve Atkins
I'm looking for generic advice on hardware to use for "mid-sized" postgresql 
servers, $5k or a bit more.

There are several good documents from the 9.0 era, but hardware has moved on 
since then, particularly with changes in SSD pricing.

Has anyone seen a more recent discussion of what someone might want for 
PostgreSQL in 2017?

Cheers,
  Steve





Re: [GENERAL] Index size

2016-12-03 Thread Steve Atkins

> On Dec 3, 2016, at 3:57 PM, Samuel Williams  
> wrote:
> 
> Thanks everyone for your feedback so far. I've done a bit more digging:
> 
> MySQL in MBytes (about 350 million rows):
> 
> index_user_event_on_what_category_id_created_at_latlng | 22806.00
> index_user_event_for_reporting | 18211.00
> index_user_event_on_created_at | 9519.00
> index_user_event_on_user_id | 6884.00
> index_user_event_on_poi_id | 4891.00
> index_user_event_on_deal_id | 3979.00
> 
> Postgres (about 250 million rows):
> 
> index_user_event_on_what_category_id_created_at_latlng | 25 GB
> index_user_event_for_reporting | 19 GB
> index_user_event_on_created_at | 7445 MB
> index_user_event_on_user_id | 7274 MB
> index_user_event_on_deal_id | 7132 MB
> index_user_event_on_poi_id | 7099 MB
> 
> So, the index is a bit bigger, plus there is also the PKEY index which
> increases disk usage by another whole index. Keep in mind in the
> above, MySQL has about 40% more data.
> 
> With some indexes, it looks like MySQL might not be adding all data to
> the index (e.g. ignoring NULL values). Does MySQL ignore null values
> in an index? Can we get the same behaviour in Postgres to minimise
> usage? What would be the recommendation here?

It's unlikely anyone will be able to usefully answer the questions you
should be asking without seeing the schema and index definitions,
and maybe some clues about how you're querying the data.

Cheers,
  Steve



Re: [GENERAL] Importing SQLite database

2016-12-10 Thread Steve Atkins

> On Dec 10, 2016, at 11:32 AM, Igor Korot  wrote:
> 
> Hi, guys,
> I'm working thru my script and I hit a following issue:
> 
> In the script I have a following command:
> 
> CREATE TABLE playersinleague(id integer, playerid integer, ishitter
> char, age integer, value decimal, currvalue decimal, draft boolean,
> isnew char(1), current_rank integer, original_rank integer, deleted
> integer, teamid integer, notes varchar(125), PRIMARY KEY(id,playerid),
> foreign key(id) references leagues(id), foreign key(playerid)
> references players(playerid),foreign key(teamid) references
> teams(teamid));
> 
> Now this command finished successfully, however trying to insert a
> record with following command:
> 
> INSERT INTO  playersinleague  
> VALUES(1,1,'1',27,42.0,42.0,0,'0',1,1,0,23,NULL);
> 
> gives following error:
> 
> psql:/Users/igorkorot/draft.schema:10578: ERROR:  column "draft" is of
> type boolean but expression is of type integer
> 
> Looking at https://www.postgresql.org/docs/9.5/static/datatype-numeric.html,
> I don't see a 'boolean' as supported data type.

Booleans aren't numeric.

https://www.postgresql.org/docs/9.5/static/datatype-boolean.html

Boolean will take a range of formats, including '0' - an untyped literal
"0". But it won't take an integer, which is what an unquoted 0 is.

You'll need to modify your insert statement slightly to use a valid boolean
value for that field ("true" or "false" are idiomatic).
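
For illustration, the insert from the script with only the draft value changed:

```sql
-- false (or '0', a quoted literal) satisfies the boolean column;
-- the bare integer 0 is what triggered the error.
INSERT INTO playersinleague
VALUES (1, 1, '1', 27, 42.0, 42.0, false, '0', 1, 1, 0, 23, NULL);
```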

Cheers,
  Steve





Re: [GENERAL] LYDB: What advice about stored procedures and other server side code?

2016-12-27 Thread Steve Atkins

> On Dec 27, 2016, at 2:03 PM, Guyren Howe  wrote:
> 
> I am putting together some advice for developers about getting the most out 
> of SQL servers in general and Postgres in particular. I have in mind the 
> likes of most web developers, who through ignorance or a strange cultural 
> preference that has emerged, tend to treat their database server as a dumb 
> data bucket.
> 
> I call the project Love Your Database (LYDB). It is starting as a series of 
> blog posts:
> 
> https://medium.com/@gisborne/love-your-database-lydb-23c69f480a1d#.4jngp2rcb
> https://medium.com/@gisborne/love-your-database-simple-validations-68d5d6d0bbf3#.az4o2s152
> 
> I would next like to cover server-side code such as stored procedures and 
> triggers.
> 
> I am inclined to advise folks to use PL/V8 on Postgres, because it is a 
> reasonable language, everyone knows it, it has good string functions, decent 
> performance and it tends to be installed everywhere (in particular, Amazon 
> RDS offers it).
> 

Think hard about the "impedance mismatch" between parts of the system.

pl/pgsql uses sql data types and operators, and so interfaces very cleanly with 
the rest of postgresql. pl/v8 uses javascript data types and *for database 
related things* is likely to be a less perfect match to the rest of the system 
- as it's translating (or, in some cases, failing to translate) between sql 
data types and javascript data types that may not be entirely compatible, or 
which may not exist at all.

So if your functions are mostly doing databasey things, pl/pgsql may well be a 
better choice. If they're mostly doing appy things, that just happen to be in 
the database, then pl/v8 may be a better choice (but so might just doing the 
work in the app, perhaps with some listen/notify assistance).

Most of the functions I write are short trigger functions, or data 
wrapper/modification functions for migration or making business logic available 
for SQL. For the majority of those I find pl/pgsql the best match (if I can't 
get away with sql functions).
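
As a sketch of the sort of short pl/pgsql trigger function described (table
and column names are illustrative):

```sql
-- Keep an updated_at column current without involving the application.
CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER widgets_touch_updated_at
    BEFORE UPDATE ON widgets
    FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();
```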

If you're trying to convince people to get the most out of their database, 
pushing them towards pl/v8 as their first choice of embedded language might not 
be the best path. (That it might encourage them to write code to iterate 
through tables rather than taking advantage of SQL where they can might be a 
thing too).

Cheers,
  Steve


> Broadly, what advice should I offer that isn’t obvious? Not just about PL/V8 
> but server side code in general.
> 
> TIA





Re: [GENERAL] Means to emulate global temporary table

2017-01-11 Thread Steve Atkins

> On Jan 11, 2017, at 7:02 PM, David G. Johnston  
> wrote:
> 
> ​"throughout" mustn't mean "by other sessions" or this becomes unwieldy.
> 
> Here's a mock-up:
> 
> CREATE TABLE template_table ();
> CREATE VIEW view_over_my_template_instance AS SELECT * FROM 
> my_instance_of_template_table; --fails if done here without the desired 
> feature
> 
> In a given session:
> 
> CREATE TEMP TABLE my_instance_of_template_table (LIKE template_table);
> SELECT * FROM view_over_my_template_instance; -- returns only this session's 
> temp table data
> 
> Other sessions can simultaneously execute the same SELECT * FROM view_over_* 
> and get their own results.
> 
> The goal is to avoid having to CREATE TEMP TABLE within the session but 
> instead be able to do:
> 
> CREATE GLOBAL TEMP TABLE my_instance_of_template_table LIKE template_table;
> 
> And have the CREATE VIEW not fail and the session behavior as described.

Would this differ in any user-visible way from what you'd have if you executed 
at the start of each session:

CREATE TEMPORARY TABLE my_instance_of_template_table (LIKE template_table);
CREATE TEMPORARY VIEW view_over_my_template_instance AS SELECT * FROM 
my_instance_of_template_table;

There'd be a small amount of session startup overhead, but that could be 
handled at the pooler level and amortized down to zero.

Cheers,
  Steve





Re: [GENERAL] COPY to question

2017-01-17 Thread Steve Atkins

> On Jan 17, 2017, at 10:23 AM, Rich Shepard  wrote:
> 
>  Running 9.6.1. I have a database created and owned by me, but cannot copy
> a table to my home directory. Postgres tells me it cannot write to that
> directory. The only way to copy tables to files is by doing so as the
> superuser (postgres).
> 
>  Why is this, and can I change something so I, as a user, can copy tables
> directly to ~/?

You can use "\copy" from psql to do the same thing as the SQL copy command,
but writing files as the user running psql, rather than the postgresql superuser
role. That's probably what you need.
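
For example, from psql (table and file names illustrative; the file is written
relative to the directory psql was started from):

```sql
-- Server-side COPY runs as the server process, hence the permission error:
--   COPY my_table TO '/home/rich/my_table.csv';
-- Client-side \copy writes the file as the user running psql:
\copy my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER)
```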

Cheers,
  Steve





Re: [GENERAL] Why does this hot standy archive_command work

2017-01-20 Thread Steve Atkins

> On Jan 20, 2017, at 7:03 PM, bto...@computer.org  
> wrote:
> 
> While learning a bit about basic hot standby configuration, I was reviewing 
> an article that used these parameters
> 
> wal_level = 'hot_standby'
> archive_mode = on
> archive_command = 'cd .'
> max_wal_senders = 1
> hot_standby = on
> 
> 
> How or why that particular archive_command actually works (... and it does 
> ... I tried it ...) is not clear to me based on reading of the Postgresql 
> documentation on this topic. I would have expected to see an actual copy or 
> rsync command, as described in the fine manual at section 25.3.1. "Setting Up 
> WAL Archiving"
> 
> The entire example appears at 
> 
> https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postgresql-on-an-ubuntu-12-04-vps
> 
> Can anyone enlighten on this topic, or provide a link to an existing 
> explanation?

It's not archiving logs at all, instead relying on streaming them directly to 
the slave.

Changing archive_mode requires a server restart, while changing archive_command 
from a command that does nothing, successfully, to a command that actually 
archives logs just requires a reload. So this lets you enable archiving without 
halting the server by changing the command.
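
In postgresql.conf terms, the switch described might look like this (the rsync
destination is a placeholder):

```
archive_mode = on          # changing this requires a server restart
archive_command = 'cd .'   # a no-op that always succeeds;
                           # changing it only requires a reload

# Later, to start really archiving, edit and reload -- no restart:
# archive_command = 'rsync -a %p archive-host:/wal_archive/%f'
```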

Or that's how I vaguely recall it working some years ago. Things may have 
changed now - you're following a very old tutorial.

Cheers,
  Steve





Re: [GENERAL] Testing an extension exhaustively?

2017-02-01 Thread Steve Atkins

> On Feb 1, 2017, at 4:03 PM, John R Pierce  wrote:
> 
> On 2/1/2017 3:39 PM, postgres user wrote:
>> If I have the Postgresql server installed on my machine i.e I have all the 
>> bins, libs and share directories of the Postgresql and I have the libs and 
>> sql's installed for one of the contrib extensions lets say "chkpass", how 
>> does one go about testing this extension exhaustively on the server? I ask 
>> this because I would want to do this manually first and then go about 
>> automating the testing of this extension. So rather than just execute CREATE 
>> EXTENSION and DROP EXTENSION, I want some solid evidence that the extension 
>> is working fine under all circumstances and is not crashing the server at 
>> any point. I'm looking for some new strategies and ideas to come my way 
>> through this.
> 
> you would write test cases for all the functionality provided by this 
> extension, same as you'd test any other sort of API.

And you might find http://pgtap.org convenient for doing that.
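
A minimal pgTap sketch of what such a test file might look like for chkpass
(run with pg_prove; the exact checks are illustrative):

```sql
BEGIN;
SELECT plan(3);

SELECT has_extension('chkpass');
SELECT has_type('chkpass');
-- Round-trip behaviour: comparing a stored password against the
-- original cleartext should succeed.
SELECT ok('test'::chkpass = 'test', 'chkpass comparison matches cleartext');

SELECT * FROM finish();
ROLLBACK;
```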

Cheers,
  Steve





[GENERAL] CREATE EXTENSION without superuser access

2016-04-27 Thread Steve Atkins
I have an app that would benefit from being able to use pg_partman rather than 
doing its own ad-hoc partition management.

Unfortunately, some of the places where the app needs to run don't have root 
access to the database server filesystem, so I can't install the extension in 
the postgresql extensions directory.

I could get all the pg_partman functionality by modifying the extension SQL 
script to remove the extension functionality and installing the functions in it 
manually, but then I can't take advantage of the extension features for 
backups, config tables, upgrades and so on - and it's probably not going to 
interact well if someone does a "create extension pg_partman" in the database.

Is there any way to install an extension either from a SQL connection or from a 
user-defined directory instead of .../extensions?

(And if not, is there a TODO here?)

Cheers,
  Steve





Re: [GENERAL] CREATE EXTENSION without superuser access

2016-04-27 Thread Steve Atkins

> On Apr 27, 2016, at 2:47 PM, Adrian Klaver  wrote:
> 
> On 04/27/2016 01:22 PM, Steve Atkins wrote:
>> I have an app that would benefit from being able to use pg_partman rather 
>> than doing its own ad-hoc partition management.
>> 
>> Unfortunately, some of the places where the app needs to run don't have root 
>> access to the database server filesystem, so I can't install the extension 
>> in the postgresql extensions directory.
>> 
>> I could get all the pg_partman functionality by modifying the extension SQL 
>> script to remove the extension functionality and installing the functions in 
>> it manually, but then I can't take advantage of the extension features for 
>> backups, config tables, upgrades and so on - and it's probably not going to 
>> interact well if someone does a "create extension pg_partman" in the 
>> database.
>> 
>> Is there any way to install an extension either from a SQL connection or 
>> from a user-defined directory instead of .../extensions?
> 
> Have not tried it, but you might want to take a look at:
> 
> http://www.postgresql.org/docs/9.5/interactive/extend-extensions.html
> 
> A control file can set the following parameters:
> 
> directory (string)
> 
>The directory containing the extension's SQL script file(s). Unless an 
> absolute path is given, the name is relative to the installation's SHAREDIR 
> directory. The default behavior is equivalent to specifying directory = 
> 'extension'.

That's a useful feature, for sure, but I think the control file itself still 
needs to be in the .../extensions directory.

Cheers,
  Steve





Re: [GENERAL] CREATE EXTENSION without superuser access

2016-04-27 Thread Steve Atkins

> On Apr 27, 2016, at 3:47 PM, Adrian Klaver  wrote:
> 
> On 04/27/2016 03:30 PM, Steve Atkins wrote:
>> 
>>>> 
>>>> Is there any way to install an extension either from a SQL connection or 
>>>> from a user-defined directory instead of .../extensions?
>>> 
>>> Have not tried it, but you might want to take a look at:
>>> 
>>> http://www.postgresql.org/docs/9.5/interactive/extend-extensions.html
>>> 
>>> A control file can set the following parameters:
>>> 
>>> directory (string)
>>> 
>>>The directory containing the extension's SQL script file(s). Unless an 
>>> absolute path is given, the name is relative to the installation's SHAREDIR 
>>> directory. The default behavior is equivalent to specifying directory = 
>>> 'extension'.
>> 
>> That's a useful feature, for sure, but I think the control file itself still 
>> needs to be in the .../extensions directory.
> 
> Yeah, that would be a problem, I needed to read a couple of paragraphs up 
> from the above:(
> 
> Got to thinking, when you say root do you really mean root or the user the 
> database cluster is initdb'ed as?

Neither - I don't have write access to the postgresql extensions directory.

The use case for me is distributing an enterprise app that talks to a database 
which the people who use the app don't have administrative access to. The 
admins who install and run the database don't install extensions (they're not 
responsive, they just don't trust third party extensions, ...). For most of my 
users the database would be running on a machine they have filesystem access 
to, so being able to point to SQL scripts in another directory would be enough, 
but in a few cases it's running on a separate system and they only have access 
via port 5432.

Any solution that didn't require filesystem access at all would probably be 
really convenient for people using managed PostgreSQL services too.

Cheers,
  Steve





Re: [GENERAL] CREATE EXTENSION without superuser access

2016-04-28 Thread Steve Atkins

> On Apr 28, 2016, at 6:44 AM, Vik Fearing  wrote:
> 
> On 04/27/2016 10:22 PM, Steve Atkins wrote:
>> Is there any way to install an extension either from a SQL connection or 
>> from a user-defined directory instead of .../extensions?
>> 
>> (And if not, is there a TODO here?)
> 
> There is actually a TODON'T here. Search the pgsql-hackers archives for
> some very, very long reading about "extension templates".
> 
> This was (I think) the last thread about it:
> www.postgresql.org/message-id/flat/m2bo5hfiqb@2ndquadrant.fr

That's a very long thread. OK, I'll give up on using the extension
infrastructure.

Time to either fork pg_partman or write a little perl script that converts
extensions to not-extensions, I guess.

Cheers,
  Steve





Re: [GENERAL] Beta testers for database development tool wanted

2016-05-11 Thread Steve Atkins

> On May 11, 2016, at 11:24 PM, Martijn Tonies (Upscene Productions) 
>  wrote:
> 
> Hello everyone,
>  
> I’ll just get at it right away --
>  
> We’re developing a database development tool called Database Workbench, it 
> currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, NexusDB 
> and SQL Anywhere (see http://www.upscene.com/database_workbench/ )

Windows only, judging from the screenshots?

Cheers,
  Steve

>  
> We’re adding PostgreSQL support and the first beta is ready for testing
>  
> We would like to have people who:
> - would use this product on a daily basis, like they're using any other 
> PostgreSQL tool (eg PgAdmin) now
> - work with larger databases, both data volume and meta data object count
> - are able to report bugs in a (reasonable) detailed manner
> - are able to discuss new features or enhancements
> - are able to regularly download updates and use them
> - don’t mind being put on a private e-mail list to report issues
> 
> Limitations:
> - stored function overloading currently not supported
> - version 9.1 and up supported
>  
>  
> If anyone of you is interested is testing this tool, with a free license for 
> the PostgreSQL module and a second module of your choice, drop me an e-mail 
> at m.tonies @ upscene.com
>  
>  
> With regards,
> 
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com





Re: [GENERAL] Thoughts on "Love Your Database"

2016-05-20 Thread Steve Atkins

> On May 20, 2016, at 1:43 PM, Guyren Howe  wrote:
> 
> On May 20, 2016, at 13:38 , Pierre Chevalier Géologue 
>  wrote:
>> 
>> Le 04/05/2016 18:29, Szymon Lipiński a écrit :
>>> On the other hand, when I was trying to store all my logic in a
>>> database, there was just one thing that made me hate it. Testing.
>>> Testing the procedures inside the database was not easy, not funny, and
>>> too much time consuming.
>> 
>> Yes, very good point.
> 
> Are there any best practices or tricks to make this easier?

In-database unit tests help. pgTap is a decent framework
for building that sort of test-suite in a way that'll play nice with
reporting and CI tools.

http://pgtap.org

Cheers,
  Steve





Re: [GENERAL] Moving pg_xlog to another drive

2007-06-04 Thread Steve Atkins


On Jun 4, 2007, at 11:15 AM, Ben wrote:


On Mon, 4 Jun 2007, Joshua D. Drake wrote:


Ben wrote:
I'm trying to move my WAL to another drive, but am having  
difficulties with this seemingly simple process. Every time I  
start up with pg_xlog symlinked to my other drive, I get this:
FATAL:  could not open file "pg_xlog/0001.history":  
Permission denied
If I move pg_xlog back into its normal place then things work  
fine. The postgres user can certainly create files when using the  
symlink, so I really don't think it's a permission issue... but  
I'm at a loss as to what else it might be.


Is the parent directory of the place you are moving pg_xlog to a  
place that the postgres user can enter?


Sorry, yes, I neglected to mention that the postgres user can enter  
every directory along the path to the new pg_xlog directory. In  
addition, pg_xlog remains owned by postgres.postgres, as does its  
parent directory, and the new pg_xlog directory has permissions of  
0700.


So I really can't see why it would be a permission issue.


Are you running SELinux? Its main goal in life is to break disk  
access by denying permission to files anywhere other than where it  
thinks an application should be allowed to access.


Cheers,
  Steve


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Encrypted column

2007-06-05 Thread Steve Atkins


On Jun 5, 2007, at 7:28 AM, Ron Johnson wrote:


On 06/05/07 08:59, Alvaro Herrera wrote:

Ron Johnson wrote:

On 06/04/07 17:54, Guy Rouillier wrote:
Many people consider two-way encryption to be insecure; two-way  
encryption means you can decrypt a value if you know the key,  
and it is insecure because you usually have to put the key into  
the source code. That means at least one person in your company,  
the programmer maintaining the source code, can learn all of  
your users' passwords.
Two-way encryption is needed for companies that store customer  
credit cards.
I thought that the advice for companies storing customer CCs was:  
don't.


Sometimes you "must".

An example from my industry: transponder "toll tags" and toll  
roads.  The customer pre-pays a certain amount based on expected  
usage, and every time he drives thru a plaza, his balance  
decreases.  Once it drops to a certain threshold, more money needs  
to be added to the account.


If he is a CASH/CHEK customer, a light at the lane flashes yellow  
and (depending on the Agency) a message pops up saying, "Balance  
low", so he drives over to the customer service center, stands in  
line and pays his cash.


If he is a CC customer, the system (which I am DBA of) bills his  
card directly, saving the customer much time and effort.


Public key encryption can help here. Encrypt with the public key when  
it goes into the database, keep the private key on a separate, well  
protected system that's just used for recurring CC billing.
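
One way to do that inside the database is the pgcrypto contrib module.
This is only a sketch under the assumption that pgcrypto is installed;
the table, column, and key material are invented for illustration:

```sql
-- Encrypt with the public key at insert time; the matching private key
-- lives only on the isolated billing host, never on this server.
INSERT INTO customer_cards (customer_id, card_number_enc)
VALUES (
    42,
    pgp_pub_encrypt(
        '4111111111111111',
        dearmor('-----BEGIN PGP PUBLIC KEY BLOCK----- ... -----END PGP PUBLIC KEY BLOCK-----')
    )
);

-- On the billing host only:
--   SELECT pgp_pub_decrypt(card_number_enc, dearmor(<private key>))
--   FROM customer_cards WHERE customer_id = 42;
```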


Cheers,
  Steve




Re: [GENERAL] PostGreSQL for a small Desktop Application

2007-06-12 Thread Steve Atkins


On Jun 11, 2007, at 10:44 AM, Gabriele wrote:


I'm going to develop a medium sized business desktop client server
application which will be deployed mostly on small sized networks and
later eventually, hopefully, on medium sized networks.
It will probably be developed using C#.

I do need a solid DBMS which can work with .Net framework. I do know
PostGreSQL is a good DBMS in general (it sports most of the advanced
DBMS features, transactions and stored procedure included) but i
wonder if it is suited for my application.

Knowledge base of my users is very low and "servers" will be standard
class desktop computers most probably ran on Windows XP (and Vista
later on, i suspect).
The service should be enough lightweight to be ran on such "server"
and I need silent installation and configuration because i can't
expect my user to be able to configure a DBMS.
Additionally i need a passable to good data provider to interface
PostGreSQL with .Net which possibly provide better performance than
ODBC (don't know if it exists and i hope it is free).

Anyway performance shouldn't be a big issue, i expect low concurrency
level (less than 10 users) and low to medium volume of rows and
queries. If more users and more data are needed for especially big
customer i can simply suggest bigger and dedicated server. (different
problems will arise for the aggregated data which will feed the web
application, but for these we will have a real server).

Is PostGreSQL suited for such use? If not which alternatives are there
to be used? When using PostGreSQL in such a way is there any
suggestion to be followed? Links to sources which i may find
interesting (how to make a silent install, basic hardware
requirements, so on).


Postgresql is reasonably well suited to such use, as long as
you're prepared to spend a little effort to preconfigure it. We
ship a client-server CRM system that bundles postgresql
as part of the installation, and is mostly run by people with
little database expertise. We have users who are handling
a quarter million tickets a day some days (I'm not sure what
that translates to as transactions, but it's a lot worse than 1:1 :) ).

That's on linux/solaris rather than windows, so the details
will be different, but PG is certainly capable of running, and
running well, in that sort of situation. I've seen discussion
of bundling the postgresql windows installer to do a "silent
install", so it's probably worth searching the list archives
for that. I think there's a decent .net provider for postgresql,
but I'm a C++/Qt guy so I know nothing beyond that.

(There can certainly be business reasons to support other
databases as well as PG, but PG can handle the range
from small single-user to medium sized multi-user quite
nicely).

Cheers,
  Steve





Re: [GENERAL] VACUUM ANALYZE extremely slow

2007-06-18 Thread Steve Atkins


On Jun 17, 2007, at 2:15 PM, Sergei Shelukhin wrote:


This is my first (and, by the love of the God, last) project w/pgsql
and everything but the simplest selects is so slow I want to cry.
This is especially bad with vacuum analyze - it takes several hours
for a database of mere 15 Gb on a fast double-core server w/2Gb of RAM
and virtually no workload at the moment. Maintenance work mem is set
to 512 Mb.

Is there any way to speed up ANALYZE? Without it all the queries run
so slow that I want to cry after a couple of hours of operation and
with it system has to go down for hours per day and that is
unacceptable.


There's no need to stop the database to run vacuum or analyze (heck,
with autovacuum in modern versions of postgresql you can get away
without running them at all, sometimes).

I suspect you're doing the wrong thing (perhaps running vacuum full,
rather than plain vacuum). Can you tell us what version of postgresql
you're running and how you're vacuuming it (what commands you're
running)?

Cheers,
  Steve





Re: [GENERAL] Running OS-level programs from Postgres?

2007-06-20 Thread Steve Atkins


On Jun 20, 2007, at 12:07 PM, Sean Murphy wrote:


Maybe this is a well duh kind of question, or maybe there's no
straightforward way to do it, but is there any way that I could have a
pg function initiate a process on the host system?


You can use pl/perlu or any of the other untrusted languages for
that.



Specifically I'd like to script an email to send off on an insert
trigger, but the ability to initiate system processes in general seems
like it would come in handy.

My present need is for email notification; if there's a pg function or
module that would handle this (I haven't turned up anything in my
searches, but maybe I'm using the wrong search terms in the wrong
places) I'd be OK for now, but I'd rather have the flexibility of
starting whatever process a given circumstance calls for.


Another approach, and one that can be more robust in the case
of external failures, is to have the trigger put the message it wants
to send into a queue table and have an external process that
monitors the table (via polling or listen/notify) and sends the
queued mail.
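
A minimal sketch of that pattern (the table, trigger, and column names
are invented for illustration):

```sql
CREATE TABLE mail_queue (
    id        serial PRIMARY KEY,
    recipient text NOT NULL,
    subject   text NOT NULL,
    queued_at timestamptz NOT NULL DEFAULT now(),
    sent_at   timestamptz          -- NULL until the mailer processes it
);

CREATE OR REPLACE FUNCTION queue_notification() RETURNS trigger AS $$
BEGIN
    INSERT INTO mail_queue (recipient, subject)
    VALUES ('ops@example.com', 'New row in ' || TG_TABLE_NAME);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER notify_on_insert
    AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE queue_notification();
```

The external mailer then polls for rows WHERE sent_at IS NULL and marks
them sent, so a mail-server outage never blocks the insert itself.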

Cheers,
  Steve




Re: [GENERAL] Proposed Feature

2007-06-23 Thread Steve Atkins


On Jun 23, 2007, at 11:06 AM, Magnus Hagander wrote:



I fully agree with the not part of the default installation. And  
make it

easy to turn the thing off.

In that respect it sounds like a good feature for developer systems
(not servers per se). Just ensure for a small memory footprint, preferably
within a few 100 KB of memory. If you are going over 1 MB you are
seriously doing something wrong. (Obviously that might be acceptable
when I'm actively using it, but otherwise it should keep memory  
usage as low as reasonably possible).


Good point. I could hack it up in C# in 15 minutes, and it'd be a 32Kb
EXE file. But it'd use 15-20Mb RAM at least, making it completely
unacceptable of course. So that's absolutely an important figure.


For something with such trivial functionality, using your application
framework of choice (C#, Java, MFC, Qt, Delphi) is, however tempting,
not going to be a good fit, for dependency reasons as much as
resource usage.

But it's probably under a couple of hundred lines of c/win32, and
wouldn't be too hard to cobble together.

Cheers,
  Steve




Re: [GENERAL] dynamic table/col names in plpgsql

2007-06-23 Thread Steve Atkins


On Jun 23, 2007, at 8:47 PM, gary jefferson wrote:

Is there a way to use a variable as the name of a table or column  
in plpgsql?


This might be a simple question, but I can't find the answer in the
docs.  http://www.postgresql.org/docs/8.2/interactive/plpgsql-declarations.html

hints that there are data types that correspond to table/column, but I
don't see them listed here
http://www.postgresql.org/docs/8.2/interactive/datatype.html ...

Also, I assume that once I figure out the right datatypes, I can
assign to them from functions like substring()?


You're probably looking for "execute".

http://www.postgresql.org/docs/8.2/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
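
As a small sketch (the function and table names are invented), a
dynamic table name looks like this, with quote_ident() protecting
against odd identifiers:

```sql
CREATE OR REPLACE FUNCTION count_rows(tablename text) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- The identifier cannot be a plain parameter, so build the SQL as text.
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(tablename)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- SELECT count_rows('users');
```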


Cheers,
  Steve




Re: [GENERAL] General search problem - challenge

2007-07-02 Thread Steve Atkins


On Jul 2, 2007, at 3:36 PM, Postgres User wrote:


I have a table of around 6,000 places in the world.  Every time my
server receives a ping, I'm grabbing the content of an article from an
RSS feed.  Then I search the article for the presence of any of the 6000
terms.
A typical article is around 1200 words.

I don't need to save the article in a table and the search is
performed only once, so it's not about FTS.

Any thoughts on the best way to execute these searches using a
traditional language like C++ ?


That'll depend heavily on the performance you need and the
language you use. C++ is very different to C++/STL is very
different to C++/Qt.

Naive approach: On receiving an article, read all 6000 terms
from the search table. See if any of them are in the article, with
strstr(3).

If that's fast enough for you, you're done. If not, you'll need to
do some work to cache / precompile search patterns in core,
or preprocess the articles for fast multi-term search. It's very
unlikely you'd need to do that, though.

(Also, this is an application that screams "I could be written
faster in perl than c++").

Cheers,
  Steve




Re: [GENERAL] Updates/Changes to a database

2007-07-12 Thread Steve Atkins


On Jul 12, 2007, at 10:07 AM, imageguy wrote:


I am building an application with Postgres as the backend foundation.
This is my first application and it has struck me that as we add
features/functionality to the application and database with each new
version, we will need some method of obtaining the current structure
of the customer's database and then modifying/updating the structure so
that it matches the application revision standard.

Are there pre-existing tools out there that do this sort of thing?

My present direction is to create a small SQLite db that has the
expected structure, compare each table against the SQL
"information_schema.columns" and then create a series of SQL commands
to be executed that would add columns and/or tables as needed.

-- any thoughts or comments ?


I tend to keep a schema version field (typically in a one-row, one-column
table) in the database.

Then I have a set of SQL scripts that'll upgrade from version n to
version n+1, and they can be applied manually or automatically
in sequence to bring the schema version up to the version
required by the application.
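
As a sketch of that arrangement (table and column names are invented):

```sql
-- Created once, at initial install:
CREATE TABLE schema_version (version integer NOT NULL);
INSERT INTO schema_version VALUES (1);

-- upgrade_1_to_2.sql: the application (or a wrapper script) checks that
-- schema_version.version = 1 before running this.
BEGIN;
ALTER TABLE customers ADD COLUMN phone text;
UPDATE schema_version SET version = 2;
COMMIT;
```

Wrapping each patch in a transaction means a failed upgrade leaves both
the version number and the schema untouched.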

That's pretty common amongst big database backed apps, and
if you're nice you also provide downgrade scripts to back out
revisions.

Maintaining the SQL patch scripts by hand isn't too hard to do,
but I've found these tools useful too:
http://dbmstools.sourceforge.net/

Cheers,
  Steve





Re: [GENERAL] PostgreSQL equivelant of this MySQL query

2007-07-13 Thread Steve Atkins


On Jul 13, 2007, at 6:39 PM, Madison Kelly wrote:


Hi all,

  I am reading through some docs on switching to Postfix with a SQL  
backend. The docs use MySQL but I want to use PgSQL so I am trying  
to adapt as I go. I am stuck though; can anyone help give me the  
PgSQL equiv. of:


SELECT CONCAT(SUBSTRING_INDEX(usr_email,'@',-1),'/',SUBSTRING_INDEX(usr_email,'@',1),'/') FROM users WHERE usr_id=1;


  If the 'usr_email' value is '[EMAIL PROTECTED]' this should  
return 'domain.com/person'.


A direct conversion would be something like:

select split_part(usr_email, '@', 2) || '/' || split_part(usr_email,  
'@', 1) from users where usr_id=1;


You could also do this:

select regexp_replace(usr_email, '(.*)@(.*)', '\2/\1') from users  
where usr_id=1;


http://www.postgresql.org/docs/8.2/static/functions-string.html and  
http://www.postgresql.org/docs/8.2/static/functions-matching.html are  
the bits of the docs that cover these functions.


Cheers,
  Steve




Re: [GENERAL] What's the logical counterpart of the to_hex function?

2007-07-16 Thread Steve Atkins


On Jul 16, 2007, at 11:36 AM, Michael Nolan wrote:

I have data (from an external source) which is in text format as a  
hex number (it's actually an IP address, but that's probably not  
relevant.)


It likely is relevant, as it means it's a 32 bit unsigned integer,  
which isn't something postgresql supports. Depending on what you need  
you might want to look at using 32 bit signed, with a 2^31 offset, 64  
bit signed, inet or ip4 (which is on pgfoundry, not built-in).




to_hex gets from integer to hex, I can cast a hex constant to  
integer (ie x'12a7'::int), but how do I get a database column  
from string/hex to integer?




Last time I checked the suggested way to do this was with a plpgsql  
function that dynamically creates the cast and does it with exec.  
Google for postgresql and hex and you can likely find sample code.


Cheers,
  Steve





Re: [GENERAL] yet another image: db or filesystem ? question

2007-07-17 Thread Steve Atkins


On Jul 17, 2007, at 8:09 AM, Rick Schumeyer wrote:

I've read the earlier threads on whether to store images in the  
database or filesystem.  I think I understand the pros and cons of  
each method, but I have a question on a specific use case.


Let's say I have a web app, and I want to display the images in a  
web page.


a) if the images are in the filesystem (and also under the web  
root), no problem.  Just use an <img> tag.


That's likely to be by far the lowest system overhead way of doing  
things, and unless you have a really good reason not to do it this  
way, do it this way. (There are really good reasons not to, in some  
cases.)


b) if the images are in the database...do I need to create a  
temporary file first in order to use the <img> tag?  Or is there  
some other HTML way of doing this?


Not HTML, no. You have your webapp read the file into memory and send  
it directly to the client. The HTML is likely to look identical, but  
rather than the webserver receiving the request for "/foo/filename.jpg"
and reading it from disk it'll pass the request to the  
webapp to handle. Unless the webapp is much, much better written than  
most of them are then you'll be tying up an entire fat apache  
instance and an entire webapp instance to feed the image to the client.


If you use a large object to store it, you can stream it from the  
database through the webapp to the client.


If you use bytea you'll need to read the entire thing off disk into  
postgresqls memory, decompress it, copy it into your webapps memory,  
then trickle it out to the client - which might be fine for 100x100  
user icons, but which will start getting very memory hungry when you  
have 2 or 3 copies of every multi-megabyte image currently being  
viewed on a photo site.


There are various options in between too. One approach is to receive  
the request for the image in the webapp, get the metadata from the  
database, copy the image to an on-disk cache from the database if it  
has changed or is not in the cache already, then tell the webserver  
to return that file directly from the cache.


Cheers,
  Steve




Re: [GENERAL] Import to excel to postgres based website?

2007-07-23 Thread Steve Atkins


On Jul 22, 2007, at 11:26 PM, novnov wrote:



The answer may or may not be very postgres specific but... what are some
possible routes that I could take to allows users to upload data  
from excel
into a postgres via a website? I've never tried anything like this  
before.

Are there standard modules that might help with this or will I have to
create from scratch? I am obviously hoping to find something I can  
just plug
in. Users would be expected to have the xls ordered properly for the routine
to work.


It's something that would be pretty easy, given the right web  
platform, quite
hard using the wrong one. Using perl, say, it would be less than a  
hundred
lines of code. (Take uploaded file, crack xls format to extract data  
using one
of the several CPAN excel reader modules, use DBI to insert it into  
the database).


Uploading CSV (comma separated values) is likely to be easier in  
languages
that don't happen to have support for xls, but CSV can only represent a small
subset of xls.

If the webserver itself is running on windows then there's all sorts  
of games you
can play by remote controlling an instance of Excel, but doing that  
tends to

be pretty fragile.

Cheers,
  Steve




Re: [GENERAL] Move database from Solaris to Windows

2007-08-01 Thread Steve Atkins


On Aug 1, 2007, at 11:44 AM, Barry C Dowell wrote:

Please forgive me if this question is being asked in the wrong area  
(and
please suggest the proper one so I can ask there :-) ), but I'm in  
search of
assistance in moving a database from a Solaris system over to a Windows
system.

Solaris 5.8, postgreSQL 8.01, to Windows Server 2003, postgreSQL  
8.0.13.


Installing on the Windows Server has gone simple enough, and I've  
actually
experimented a little with pg_dump to dump the contents of the  
database that
I get by default when installing the application that I'm working with which
requires the db.

I tried using pg_dump --format=t to dump the original database, and  
then
tried to restore into a database with the same name on the Windows side but
that restore has not been successful.


I find it a lot easier to diagnose what's going on with plain format,  
generally.




I've been trying to follow information found here:
http://www.postgresql.org/docs/8.0/static/app-pgrestore.html but have not
been successful yet.

Any assistance that can be lent my way is greatly appreciated.


You'll need to post a few more details (what commands you ran,
what errors you got) before people will be able to help you much.

Cheers,
  Steve




Re: [GENERAL] finding out vacuum completion %, and vacuum VS vacuum full

2007-08-07 Thread Steve Atkins


On Aug 7, 2007, at 1:17 AM, Sergei Shelukhin wrote:


Ok here's the update after ~30 hours we have killed vacuum full and
did vacuum on the tables we freed.
However, VACUUM hasn't freed any space at all 0_o
We want to launch vacuum full on per-table basis but we can't have any
more downtime right now so we will launch it at night today.

The original question still stands, is there any way to diagnose
vacuum full time-to-run?


It could easily take many days. VACUUM FULL is painfully slow.
Dropping indexes and suchlike can make it faster, but it's still
painfully slow.


Or any way to optimize it besides the obvious (maintenace_work_mem &
max_fsm_pages increases and no workload)?
Can someone please help with this one?


VACUUM FULL is about the worst thing you can do in this case.

If you have adequate disk space free (enough to hold another
copy of the new table) and the table has an index on it, then
CLUSTER the table.

If not, dump and restore the table.

Cheers,
  Steve



Re: [GENERAL] Running a query from the OS CLI

2007-08-08 Thread Steve Atkins


On Aug 8, 2007, at 11:13 AM, Gauthier, Dave wrote:


If I have a DB called “foo”

...and...

I want to run “select name from table_a where name like ‘john%’”

...and...

I want no table header “NAME” in the output

...and...

I want to do this as a one-liner from the linux command line

...and...

I don’t want to have to deal with intermediate files or home-grown  
programs...

Something like this:

psql -A -q -t -d foo -c "select name from table_a where name like 'john%'"


You may need to use -U to set a user, and there are a bunch of other
useful flags to set the output format. There are also flags and
environment variables you can set to set the host and port to connect to.

Depending on how your access control permissions are setup you may
need to get a password to psql, typically by using a ~/.pgpass file.  
Check the psql man page and the main postgresql docs for the gory details.

Cheers,
  Steve



Re: [GENERAL] Help creating a function

2007-08-17 Thread Steve Atkins


On Aug 16, 2007, at 9:35 AM, Madison Kelly wrote:

Note: This is being sent again (in case it shows up later). It  
never seemed to have made it to the list.


Hi all,

  I'm using ulogd with PostgreSQL which stores IP addresses as 32bit
unsigned integers. So when I select some data I get something like:

ulogd=> SELECT id, ip_saddr, ip_daddr, raw_pktlen, ip_totlen,  
tcp_window

FROM ulog LIMIT 20;
 id |  ip_saddr  |  ip_daddr  | raw_pktlen | ip_totlen | tcp_window
++++---+
  1 | 3232235874 | 1074534522 | 46 |46 |  25825

  Where 'ip_saddr' and 'ip_daddr' are 'bigint'. I know I can convert
these numbers to dotted-decimal in perl with a small script like:

-=-=-
#!/usr/bin/perl

# This would be the number read from the DB
my $num=3232235874;

# Now do the math
my $temp=$num/256;
my $D=256*($temp-int($temp));
$temp=(int($temp))/256;
my $C=256*($temp-int($temp));
$temp=(int($temp))/256;
my $B=256*($temp-int($temp));
my $A=int($temp);
my $ip="$A.$B.$C.$D";

# Print the results
print "'num': [$num] -> 'IP': [$ip]\n";
-=-=-

  What I would like to do is create a function that would do the same
thing so I could read out the IP addresses as standard dotted-decimal
format. Could anyone help me with this? I am quite the n00b when it
comes to functions. :)


These functions convert between signed 32 bit integers (with a -2^31
offset) and dotted quads. You should be able to tweak them pretty
easily:

create or replace function ip2int(text) returns int as '
DECLARE
  a int;
  b int;
  c int;
  d int;
BEGIN
  a := split_part($1, ''.'', 1);
  b := split_part($1, ''.'', 2);
  c := split_part($1, ''.'', 3);
  d := split_part($1, ''.'', 4);
  RETURN (a-128) * 16777216 + b * 65536 + c * 256 + d;
END;
' LANGUAGE plpgsql IMMUTABLE;

create or replace function int2ip(int) returns text as '
DECLARE
  a int;
  b int;
  c int;
  d int;
BEGIN
   a := (($1 >> 24) & 255) # 128;
   b := ($1 >> 16) & 255;
   c := ($1 >> 8) & 255;
   d := $1 & 255;
  RETURN to_char(a, ''FM999'') || ''.'' || to_char(b, ''FM999'') ||  
''.'' || to_char(c, ''FM999'') || ''.'' || to_char(d, ''FM999'');
END;
' LANGUAGE plpgsql IMMUTABLE;

There's probably a neater way to do it via the inet (or ip4) data
types, but these functions should be easier to tweak to use bigint.
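
For instance, assuming the bigint really holds an unsigned 32-bit
address the way ulogd stores it, releases newer than the ones discussed
here let inet arithmetic do the conversion directly (treat this as a
sketch, since the operator is not available everywhere):

```sql
SELECT '0.0.0.0'::inet + 3232235874;  -- 192.168.1.98
```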

Cheers,
  Steve





Re: [GENERAL] Add Column BEFORE/AFTER another column

2007-08-24 Thread Steve Atkins


On Aug 24, 2007, at 2:18 PM, Matthew wrote:


Hey Bill,


It does not.


Bummer.


To get your columns in a specific order, specify the column names in
that order in your SELECT statement.  The SQL standard doesn't  
provide
for any other way to guarantee column order, and neither does  
Postgres.


Yes, I realize this and we do identify our columns during select
statements, but when you look at a table using a tool like  
phpPGAdmin or pgAdmin3, the columns are displayed in some defined order.  It's much
easier to see your data/columns in some logical order (ie: all the cost
columns next to each other).


Using a view might give you what you're looking for:

  abacus=# select * from access_role;
   id | name
  +---
1 | user
2 | admin
  (2 rows)

  abacus=# create view bass_ackwards as select name, id from  
access_role;

  CREATE VIEW
  abacus=# select * from bass_ackwards;
   name  | id
  ---+
   user  |  1
   admin |  2
  (2 rows)

Cheers,
  Steve




Re: [GENERAL] psql connect_timeout feature

2007-09-05 Thread Steve Atkins


On Sep 5, 2007, at 9:59 AM, Steve Crawford wrote:


I've dug through the docs and peeked at the source and found no way to
specify a connect_timeout so:

1. Did I, in fact, just overlook something?


PGCONNECT_TIMEOUT=5 psql ?

There are a lot of useful environment variables that libpq and
anything using libpq supports. They're not documented much
in the client app docs, but you can find them in the libpq docs.

Cheers,
  Steve




Re: [GENERAL] Time Zone design issues

2007-09-11 Thread Steve Atkins


On Sep 11, 2007, at 2:48 AM, Gregory Stark wrote:


"Ron Johnson" <[EMAIL PROTECTED]> writes:


On 09/10/07 19:50, Tom Lane wrote:


This whole sub-thread actually is predicated on an assumption not
in evidence, which is that there is any browser anywhere that will
tell the http server timezone information.  I'm quite sure no such
thing is required by the http standard.


I'm really surprised.


I think all you get is the localized language. If it's localized to a
particular country then that might be good enough for a guess from  
some

countries but there's not much you can do with en_US or ru_RU.

I think most big commercial sites that decide they need this just  
buy access
to one of the ip to geographic location services which are far from  
perfect
but in my experience are plenty good enough to get a reasonable  
time zone.


Or, more likely, use one of several approaches to either get the
timezone from the browser or get the browser's view of localtime
and do a little math on the server. Javascript, mostly.

(Though, AIUI, if you're using Javascript the elegant trick is to send
UTC epoch time to the browser and have it do the rendering to the
local timezone anyway).

Cheers,
  Steve






Re: [GENERAL] importing pgsql schema into visio (for diagramming)

2007-09-12 Thread Steve Atkins


On Sep 12, 2007, at 4:32 PM, Andrew Hammond wrote:

Does anyone know where I could find a tool which allows importing  
schema information from a postgres database into visio? The boss  
guys want some pretty pictures...


Visio has had that built-in for a long time - point it at the database
via ODBC and let it go.


Cheers,
  Steve






Re: [GENERAL] ON INSERT => execute AWK/SH/EXE?

2007-09-18 Thread Steve Atkins


On Sep 17, 2007, at 9:50 AM, Bima Djaloeis wrote:


Hi there,

I am new to PostgreSQL, is it possible to create something so that

1) If I insert / update / delete an item from my DB...
2) ... an awk / shell / external program is executed in my UNIX  
System?


If yes, how do I do this and if no, thanks for telling.



Yes it's possible, but it's probably a really bad idea, so I'm not  
going to tell you how.


Instead, use a trigger to store a message in a queue table, then have  
an external persistent process poll the queue table (or use
listen/notify to sleep until new messages are added to the queue).
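
A sketch of that queue-plus-notify arrangement (all names invented for
illustration; plain NOTIFY is used, since notification payloads only
arrived in later releases):

```sql
CREATE TABLE job_queue (
    id      serial PRIMARY KEY,
    payload text NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

CREATE OR REPLACE FUNCTION enqueue_job() RETURNS trigger AS $$
BEGIN
    INSERT INTO job_queue (payload) VALUES (NEW.item_name);
    NOTIFY job_queue_changed;   -- wakes any LISTENing worker
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_item_insert
    AFTER INSERT ON items
    FOR EACH ROW EXECUTE PROCEDURE enqueue_job();
```

The external worker issues LISTEN job_queue_changed; and, when woken,
works through the rows WHERE NOT done, so a crash of the external
program loses nothing.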


Cheers,
  Steve





Re: [GENERAL] slow request

2007-10-09 Thread Steve Atkins


On Oct 9, 2007, at 6:34 AM, [EMAIL PROTECTED] wrote:


Hi !
I am using postgreSQL v8.1.5 with latest odbc driver v8.02.0500.
I have very slow requests between my server and my client. They are
both on the same switch 100Mb/s. I have no particular network
problems.
I use the pgadmin tool to do my request.

My database is composed of one table. This table has some simple fields
(varchar & int, less than 500 bytes) and its primary key is a
varchar(32) (CLSID). This table has 140 000 records.
I know the primary key as a clsid is not the best choice, but it is
mine :)

The request "select * from mytable" on the server takes
approximately 30 seconds.
The same request on the client takes approximately 400 seconds!
What I do not understand is that my network traffic during this
request on the client side is very low. It is less than 100KB/s !

Why is it so slow? I suppose that my 140 000 records are retrieved one
by one... is it true? If yes, why?

I try to do the same thing with another table with a primary key as
'int4'. The result is the same: 540 secs to retrieve 150 000 records
at 45 KB/s (average speed) (132 octets/record * 150 000 = 19 MB;
19 MB / 45 KB/s = 430 seconds).

How can I improve these performances ?


This has been reported before, IIRC, and one issue was that pgadmin
spent a lot longer rendering the data than it did retrieving it.

So before you look any further, run the same query from psql and
see if that changes anything.

Cheers,
  Steve






Re: [GENERAL] uniquely indexing Celko's nested set model

2007-10-19 Thread Steve Atkins


On Oct 19, 2007, at 7:37 PM, Scott Marlowe wrote:


On 10/19/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
Is it possible to constraint both the LEFT and RIGHT fields of a  
record to use the same index?  I am looking for a way to ensure  
for all LEFTs and RIGHTs in a table, that is it is impossible for  
any LEFT or RIGHT to have to same value.


a check constraint ought to do it

check (field1<>field2)


That won't catch {1,2} {3,1}.

I don't think there's any way to have an index cover two fields in  
that way. The only way I can see to do it with an index would be to  
have each row of the OPs mental model to map onto two rows of the  
table, along with a boolean saying whether the value was for a "left"  
or a "right".


There's probably a much, much more elegant way to do it, but this  
might work in an existence proof sort of way:


create table moststuff (
  id integer primary key,
  whatever text
);

create table leftright (
  a integer primary key,
  b integer references moststuff(id),
  lr text unique,
  constraint foo check (b = abs(a))
);

Cheers,
  Steve




Re: [GENERAL] data statistic functions

2007-10-23 Thread Steve Atkins


On Oct 23, 2007, at 1:43 PM, Kevin Hunter wrote:


Hullo List,

What does Postgres offer in terms of statistics support (not the
statistics about the database, but functions to operate on the data).

I know there are simple things like AVG, COUNT, MAX, and MIN, but  
what else?


I'm not sure where to begin looking, other than a cursory  
inspection of

http://www.postgresql.org/docs/8.3/static/functions.html . . .


Take a look under "Aggregate Functions"
( http://www.postgresql.org/docs/8.3/static/functions-aggregate.html ).
There's probably some interesting stuff if you look at PL/R too
( http://pgfoundry.org/projects/plr/ ).


Cheers,
  Steve






Re: [GENERAL] pg_dumpall and authentication

2007-11-09 Thread Steve Atkins


On Nov 9, 2007, at 8:52 AM, Tom Hart wrote:

I'm sure you guys have heard this about 100 times, and I've done  
some research on Google and found out some things, but I still have  
a couple questions.


As I'm sure you may have guessed from the subject, I'm trying to  
schedule (under windows) pg_dumpall to run each night/morning/full  
moon/whatever. The hitch in this is that it asks for a password for  
each database as it dumps it. I know I can use the PGPASS  
environment variable, or a ~/.pgpass file. What I'm wondering is  
what's considered 'best practice' in practical applications. What  
solutions do you guys use? Is it worth changing PGPASSFILE to point  
to a different .pgpass?


Any of those approaches should be fine. I'd probably stick with the  
default pgpass file, just for the sake of whoever may have to  
maintain it next.


I tend to create a unix user just for doing backups and other  
scheduled maintenance, then give that user access to the database via  
ident authentication from the local system only. If PG-on-Windows has  
equivalent functionality that's another approach to consider.


Cheers,
  Steve




Re: [GENERAL] PLpgsql debugger question

2007-11-15 Thread Steve Atkins


On Nov 15, 2007, at 8:57 AM, Andrew Sullivan wrote:


On Thu, Nov 15, 2007 at 10:41:23AM -0600, Tony Caduto wrote:


So if you are using Oracle do you have to go through the hassle of
finding it, then compiling and installing it?


This canard comes up every time we discuss keeping the codebase  
lean, and I
say the same thing every time: Oracle (and DB2, and MySQL, and SQL  
Server,
and everyone else) _do so_ have all these add ons.  They just  
package it in
one thing sometimes, and you think it's "one system".  It is one  
system.

It's not one program.

This is what packagers are for.


Yup. Core vs not-core is for the convenience of the developers,
not the users.

But one difference between PG and others is that there are a lot of
different packagers, who bundle a variety of different subsets of the
addons, effectively creating a number of quite different products,
even though they're all called postgresql.

At one point you could expect that stuff in contrib would likely
be available via package (at least as a -contrib package) and
that stuff on gborg wouldn't be.

Now contrib is mostly going away, the windows installer bundles
all sorts of things that aren't obviously available via other
packages, so there are an awful lot of versions of the postgresql
application, some very different from a users point of view.

I tend to think of that as feature as much as bug, but it does
cause some confusion while trying to provide ad-hoc support
to new users.

Cheers,
  Steve




Re: [GENERAL] IP addresses

2007-11-20 Thread Steve Atkins


On Nov 20, 2007, at 3:41 PM, Tom Lane wrote:


"Sander Steffann" <[EMAIL PROTECTED]> writes:

From: "Harald Fuchs" <[EMAIL PROTECTED]>
Forget inet.  Check out http://pgfoundry.org/projects/ip4r/ and  
be happy.


I would be happy if it would support IPv6 :-)  Are there plans to  
make ip6r

or something like that?


What's the point?  You might as well use the regular inet type if you
need to handle ipv6.


ip4r's main advantage over inet is that it allows you to answer
the question "is this IP address in any of these large number of
address ranges" efficiently. It's useful for customer address
allocation, email filtering blacklists, things like that.
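(For illustration: outside the database, the "is this address in any of these ranges" question looks like the sketch below, using Python's stdlib ipaddress module with a made-up list of CIDR ranges. What ip4r adds is making this lookup indexable inside PostgreSQL instead of a per-row linear scan.)

```python
import ipaddress

# A small, invented blacklist of CIDR ranges.
ranges = [ipaddress.ip_network(n) for n in
          ("10.0.0.0/8", "192.168.1.0/24", "203.0.113.0/24")]

def in_any_range(addr):
    """True if addr falls inside any listed range (linear scan here)."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ranges)

print(in_any_range("192.168.1.77"))   # True
print(in_any_range("8.8.8.8"))        # False
```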

A range-indexable ipv6 type would be useful in theory, but I've
not seen a need for it in production yet. When there is, extending
ip4r to become ip6r would be possible.

Cheers,
  Steve




Re: [GENERAL] Disk arrangement in a cheap server

2007-11-24 Thread Steve Atkins


On Nov 24, 2007, at 8:17 AM, Ron Johnson wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 11/24/07 09:12, Scott Marlowe wrote:
On Nov 24, 2007 5:09 AM, Clodoaldo  
<[EMAIL PROTECTED]> wrote:
I will build a cheap server and I'm in doubt about what would be
the best for performance:

1 - everything in one lonely fast 10,000 rpm Raptor HD;

2 - two cheap 7,200 rpm 16MB cache HDs like this:

disk 1 - system and pg_xlog


This doesn't really buy you much. The supposed advantage of having
pg_xlog on its own drive is so that the head doesn't need to seek. If
it's on the system drive it'll be competing with, at least, syslog.


disk 2 - pg_data without pg_xlog
or a better arrange suggested by you;

3 - The two cheap HDs above in Raid 0.


From a DBA perspective, none of those seem like a good choice, as
there's no redundancy.

I'd make the two 7200 RPM drives a RAID-1 and have some redundancy so
a single disk failure wouldn't lose all my data.  then I'd start
buying more drives and a good RAID controller if I needed more
performance.


It depends on what the box is used for, but for most cases where the
data is valuable, that sounds like a much better idea.

For batch data crunching, where the data is loaded from elsewhere then
processed and reported on, the cost of losing the data is very low, and
the value of the machine is increased by RAID0-ing the drives to make
the crunching faster... RAID0 could be good. That's probably not the
case here.



Remember: disks are *cheap*.  Spend an extra US$250 and buy a couple
of 500GB drives for RAID 1.  You don't mention what OS you'll use,
but if you really need cheap then XP & Linux do sw RAID, and FreeBSD
probably does too.



Disks aren't necessarily cheap. Disks are fairly expensive, especially
when you need more spindles than will fit into the server's chassis
and you need to move to external storage. Disk n+1 is very expensive,
likely more expensive than the cheap 1U server you started with.

Two, though, does seem to be false economy for a server that'll be
running a database, when you can get a 1U chassis that'll take 4 drives
pretty cheaply.

Cheers,
  Steve




Re: [GENERAL] select from an index

2007-11-27 Thread Steve Atkins


On Nov 27, 2007, at 3:15 AM, Pau Marc Munoz Torres wrote:


Hi

 Recently I created an index on a table using a function (not a
column), as follows:

create index H2IAb on precalc (idr(p1,p4,p6,p7,p9,'H-2*IAb'));

now I would like to perform a query using this index, something like

Select * from precalc where h2iab>2

but obviously h2iab is not a column...

Does any of you know what I should do?



select * from precalc where idr(p1,p4,p6,p7,p9,'H-2*IAb') > 2
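(For illustration: the planner only considers a functional index when the query repeats the indexed expression verbatim. The sketch below shows the same principle with SQLite, which also supports expression indexes; the table, data, and index name are invented.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE precalc (p1 TEXT)")
# Index on an expression, not a plain column.
conn.execute("CREATE INDEX precalc_lower ON precalc (lower(p1))")
conn.executemany("INSERT INTO precalc VALUES (?)", [("Abc",), ("xyz",)])

# Querying on the exact indexed expression lets the planner use the index;
# querying a made-up column name like "h2iab" would simply fail.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM precalc WHERE lower(p1) = 'abc'"
).fetchall()
print(plan)  # on a reasonably recent SQLite the plan mentions precalc_lower
```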

Cheers,
  Steve




Re: [GENERAL] SQL design pattern for a delta trigger?

2007-12-07 Thread Steve Atkins


On Dec 7, 2007, at 6:29 AM, Ted Byers wrote:



--- Erik Jones <[EMAIL PROTECTED]> wrote:



On Dec 6, 2007, at 2:36 PM, Ted Byers wrote:

[snip]
What you want to do here for handling the update v.
insert is called
an "UPSERT".  Basically, what you do is run the
update as if the row
exists and catch the exception that is thrown if it
doesn't at which
point you insert the record with the end date =
now().  After that
you can proceed normally with creating the new
record with start date
= now() and end date = NULL.


Thanks Eric.  Do you know of an URL where this is
discussed or where I can find an example.  None of my
books discuss this, and my search using google has so
far produced only noise.


http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
might be a good place to start.


Cheers,
  Steve




Re: [GENERAL] SQL design pattern for a delta trigger?

2007-12-08 Thread Steve Atkins


On Dec 8, 2007, at 7:54 AM, John D. Burger wrote:

So two design patterns for a makeshift UPSERT have been presented -  
one is to check beforehand, and only insert if the item isn't  
present already


... which will give the wrong results if there's any concurrent  
updates...


, the other is to do the insert blindly and let PG check for you,  
and catch any exceptions.


I'm also wondering what people's ideas are for a sort of BULK  
UPSERT.  I often find myself inserting the results of a SELECT and  
wanting a similar check for already existing rows.  The idiom I've  
stumbled upon looks like this:


insert into foo (x, y, z)
select a, b, c from bar join bax ...
EXCEPT
select x, y, z from foo;

Namely, I subtract from the results to be inserted any rows that  
are already present in the target table.



This can actually even be used for UPSERTing a single row, and has  
the virtue of being pure SQL, but I've wondered about its efficiency.


Worry more about its correctness. Doing entirely the wrong thing,
quickly, isn't always what you want. If there's any concurrency
involved at all, this is likely to do the wrong thing.


One alternative would be to iterate over the SELECT result with a  
procedural language, and do a series of UPSERTS, but that seems  
unlikely to be as efficient for a large result set.


Just take the idiom that's been pointed out in the documentation and  
wrap a loop around it.
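(For illustration: the loop that the documentation's example describes boils down to something like the sketch below, rendered in Python against SQLite rather than PL/pgSQL, with an invented table. Update first; if nothing matched, insert; if a concurrent insert beats you to it, loop and update instead.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db (a INTEGER PRIMARY KEY, b TEXT)")

def upsert(conn, key, data):
    """Update-then-insert loop, retrying if a concurrent insert sneaks in."""
    while True:
        cur = conn.execute("UPDATE db SET b = ? WHERE a = ?", (data, key))
        if cur.rowcount > 0:
            return  # row existed and was updated
        try:
            conn.execute("INSERT INTO db (a, b) VALUES (?, ?)", (key, data))
            return  # row didn't exist; inserted
        except sqlite3.IntegrityError:
            # Someone inserted the same key first; loop and update instead.
            continue

upsert(conn, 1, "old")
upsert(conn, 1, "new")
print(conn.execute("SELECT b FROM db WHERE a = 1").fetchone()[0])  # new
```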


Cheers,
  Steve




Re: [GENERAL] Hijack!

2007-12-11 Thread Steve Atkins


On Dec 11, 2007, at 8:43 AM, Joshua D. Drake wrote:


On Tue, 11 Dec 2007 16:31:40 +
"Raymond O'Donnell" <[EMAIL PROTECTED]> wrote:


// 
Please note in particular the following points of netiquette:

* Don't top-post, as it makes for confusing reading.

* Don't start a new thread by replying to an old one, because [insert
suitable technical explanation here].

Failure to observe the above may result in your question going
unanswered. // <



O.k. this might be a bit snooty but frankly it is almost 2008. If you
are still a top poster, you obviously don't care about the people's
content that you are replying to, to have enough wits to not top post.

However, I would also note that in "windows" world, it is very common
to top post. I am constantly retraining very smart, just very ignorant
customers.


In the business world it's common to top-post and not cut previous
content - and often appropriate, as it tends to be a communication
between a smaller number of people, and the uncut content provides
context for future reference.

Those who rant about anyone who top posts, or state that you should
never top-post are mostly clueless or arrogant, or are making over-broad
generalizations.

That's an entirely different thing to observing that while on the  
broader internet
you should follow local etiquette, and that here (as on most technical  
lists that have
a fair number of posters with an, uh, old-school background) part of  
that is the points listed above.

(And I'd probably add "and trim your response appropriately - removing
stuff not needed for context, but leaving enough to have enough  
context").


Cheers,
  Steve




Re: [GENERAL] Hijack!

2007-12-11 Thread Steve Atkins


On Dec 11, 2007, at 9:40 AM, Tom Lane wrote:


Steve Atkins <[EMAIL PROTECTED]> writes:

In the business world it's common to top-post and not cut previous
content - and often appropriate, as it tends to be a communication
between a smaller number of people, and the uncut content provides
context for future reference.



Those who rant about anyone who top posts, or state that you should
never top-post are mostly clueless or arrogant, or are making
over-broad generalizations.


Sure, there are contexts where that makes sense.  On the PostgreSQL
lists, however, you are writing for the archives as much as for the
immediate readers (and if you don't understand that, *that* is the
first thing you need to learn).  The in-line, trimmed-quotations style
is a lot easier to read when looking through a thread in the archives.
Another advantage is that trimming quoted text reduces the number of
useless matches when searching the archives.


Which is pretty much what I said in the relevant context you removed.

The risk of removing the wrong context is that it makes it look like
we're disagreeing. :)


In short: this is the community consensus on how to post, there are
good reasons for it, and we need to try to educate newbies in it.
Not just say "it's okay to ignore the conventions".


Cheers,
  Steve




Re: [GENERAL] running Apple Mail on Postgres?

2007-12-17 Thread Steve Atkins


On Dec 17, 2007, at 12:10 PM, Matthew Hixson wrote:

This may be a tad off topic, but thought a PG enthusiast might have  
some insight on this.  Apple Mail sits on top of Sqlite.  I was  
wondering if it would be possible to substitute Postgres as the  
underlying database.  I do know how to vacuum Sqlite to speed up  
Mail, but with the massive amount of email I have I was wondering if  
Postgres could more easily handle the workload.

 Thanks,


sqlite is (usually) an embedded database. That means a couple of  
things - one is that it's not something that you can simply swap out  
easily, it's linked into the Mail.app binary. The other is that, for  
this particular application (single reader/writer, simple workload)  
it's probably quite a lot faster than postgresql would be in theory,  
and both would be dominated by disk i/o in practice.


(And if you haven't upgraded to Leopard yet, you should. Mail.app  
sucks less on large IMAP boxes than with previous versions.)


If you want to do complex data-mining on email, there are several ways  
to pull email into a postgresql database, and then make it available  
via IMAP to a standard client. dbmail.org is one that springs to mind,  
archiveopteryx another.


Cheers,
  Steve




Re: [GENERAL] Postgres from PHP in Leopard

2007-12-24 Thread Steve Atkins


On Dec 20, 2007, at 7:35 AM, Gordon wrote:


On Dec 19, 11:32 am, Gordon <[EMAIL PROTECTED]> wrote:

I'm a web developer who does a lot of work in PHP.  The back end
database we use is bases on Postgres.  I am trying to set my new
MacBook Pro up as a development system.

I have managed to install Postgres from
http://sourceforge.net/projects/pgsqlformac/
and can access the database on my mac with tools such as pgAdmin.  I
have also enabled the bundled Apache and PHP extensions.

However, the problem is that I can't get PHP to talk to Postgres.
Running phpinfo () on the built in PHP shows that database support
extends as MySql, Mysqli and SQLite extensions, and SQLite drivers
for PDO.  What I really need is a Postgres extension for maintaining
older code and a Postgres PDO driver for new code in development.

When setting up a similar dev system on Windows it was relatively
simple to set this all up.  The extensions for PHP were provided
in .dll files that you just had to include in the php.ini file.  This
doesn't seem to be the case with the MacOS bundled PHP.

Is it possible to download .so files (I believe that's what the MacOS
equivalent to .dll files are) for the drivers and include them from
the php.ini file like on the Windows box?  Or is there something else
I'm meant to be doing?


Anyone?


It sounds like a PHP problem, not a Postgresql problem. You're
more likely to get help with it if you ask on a PHP mailing list, I
suspect.

Cheers,
  Steve




Re: [GENERAL] weird date/times in mailing list

2007-12-27 Thread Steve Atkins


On Dec 27, 2007, at 8:58 AM, Thomas Hart wrote:

Am I the only one seeing very strange send times for the last dozen  
messages or so, or is something else going on with my end (maybe my  
mail server is catching up...)


It's not just you. [EMAIL PROTECTED] is reinjecting  
old mail into the list. Until he gets kicked from the list you can  
probably safely discard any email with [TLM] in the subject line.


Cheers,
  Steve




Re: [GENERAL] server process (PID 27884) was terminated by signal 4 (SIGILL)

2008-01-04 Thread Steve Atkins


On Jan 4, 2008, at 12:44 AM, [EMAIL PROTECTED] wrote:


Hi,

I had a rather strange crash of my server (log file at the end of my
mailing) and I was googling for Signal 4 and read
http://en.wikipedia.org/wiki/SIGILL

I am running on linux 2.6.18-5-686 and postgresql-8.1.9-0etch2.

Most (all?) other processes on this machine (postfix, munin, etc.)
got signal 4 at this time also.

As I understood from my reading about signal 4, there must be some
kind of hardware failure, as the postgresql log says too. I rebooted
the server and since then everything works fine. But I am going to
drop this server and replace it with a new one, of course. I just
want to ask if there is something else besides hardware failure
which could force a signal 4 (ILL)?


Software bugs can on rare occasions (by overwriting return stack data
and heading off into the weeds, say), but there's no way they'd do
that in multiple processes simultaneously.

It's conceivable it was just a transient problem ("a cosmic ray" -
they really do happen occasionally) but it's much more likely you
have bad hardware. Probably either RAM, disk or disk controller.

Cheers,
  Steve




Re: [GENERAL] Disk configurations....

2008-01-24 Thread Steve Atkins


On Jan 24, 2008, at 11:21 AM, Nathan Wilhelmi wrote:

Hello - Typical case of a software guy needing to spec hardware  
for a new DB server. Further typified case of not knowing exact  
amount of data and I/O patterns. So if you were to spec a disk  
system for a new general purpose PostgreSQL server any suggestions  
of what to start with?


Details I have:

1) We think 500GB is enough storage for the DB.
2) I would guess 66% read 33% write patterns, maybe closer to 50/50.

I would assume that we would want to go SCSI raid. Would you do the
whole disk array as Raid 5 or would you partition it up differently?
Would you go with 3x300gb disks or would you use more smaller disks
to get there?


Any other gotchas or suggestions to look for?


You really need to have some idea of how much performance you need. A  
sensible spec for 10tps is not the same as one for 1000tps.


If you need decent midrange performance, and don't want to spend a  
fortune on tweaking and hardware...


I'd avoid RAID5. It's efficient use of disks for redundancy, but not  
really the best for random write performance. RAID10 is more expensive  
in number of spindles, but a good trade.


A good disk controller, with a battery-backed writeback cache is a  
must, IMO, if you want to get decent performance without having to do  
a lot of tuning, segregating WAL logs to separate drive mirrors, and  
so on. Some people swear by software RAID, but the battery-backed  
cache buys you quite a lot by hiding fsync latency.


There are some pretty good SATA based systems out there, and  
performance is quite acceptable given a decent controller. High end  
3ware and Areca get good reviews from people here. Don't assume you  
have to go with SCSI. Also, more spindles are often better for  
performance, and you can get a lot more SATA spindles for a given  
budget than you can SCSI.


You might want to look at previous threads, mostly over in the
-performance mailing list. It's a fairly common question.


Cheers,
  Steve






Re: [GENERAL] NOTIFY/LISTEN why is not a callback as notice processing.

2010-11-10 Thread Steve Atkins

On Nov 10, 2010, at 4:30 PM, Daniel Verite wrote:

>   Filonenko Michael wrote:
> 
>> I create simple mechanism to inform user about something in database  
>> triggers. In my front-end I use PQsetNoticeReceiver, and display messages  
>> in QTextEdit.
>> I think about multi-user environment. I read about NOTIFY/LISTEN, but find  
>> no callback mechanism. Is it planning?
> 
> Not sure what the multi-user environment implies, but since you're using Qt,
> you're probably interested in getting a Qt signal when a db notification
> arrives. It turns out it's not hard to do: before calling LISTEN, instantiate
> a QSocketNotifier object bound to the socket of the libpq connection, and
> connect its activated() signal to a slot. In this slot, your code can deal
> with the notification at the libpq level with PQconsumeInput() and
> PQnotifies(), do your own processing, and just return.
> With this method, you don't have any need for a callback or managing your own
> loop: it is Qt's event loop that calls the slot, just as it does for
> GUI-related events.

It's even easier than that - Qt supports listen/notify directly.

QSqlDriver::subscribeToNotification(), and connect to the notification()
signal.

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] equivalent of mysql's SET type?

2011-03-08 Thread Steve Atkins

On Mar 8, 2011, at 5:06 PM, Reece Hart wrote:

> I'm considering porting a MySQL database to PostgreSQL. That database uses 
> MySQL's SET type. Does anyone have advice about representing this type in 
> PostgreSQL?
> 
> MySQL DDL excerpt:
> CREATE TABLE `transcript_variation` (
>   `transcript_variation_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
>   `transcript_stable_id` varchar(128) NOT NULL,
>...
>   `consequence_type` 
> set('ESSENTIAL_SPLICE_SITE','STOP_GAINED','STOP_LOST','COMPLEX_INDEL','SPLICE_SITE')
> ) ENGINE=MyISAM AUTO_INCREMENT=174923212 DEFAULT CHARSET=latin1;
> 
> 
> I'm considering implementing this as a new type based on a bit vector, but I 
> wonder if anyone has a more flexible general solution.


tsvector or intarray might be other options, depending on how you're going to 
query it and whether those are really strings or more enum-ish.

The more SQLy way of doing it would be to have that stored in a separate table, 
with a foreign key to this one.
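(For illustration: a minimal sketch of that separate-table approach, shown here with SQLite and names taken loosely from the MySQL DDL above; the sample data is invented. Each SET member becomes one row in a child table with a foreign key back to the variation.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transcript_variation (
        transcript_variation_id INTEGER PRIMARY KEY,
        transcript_stable_id    TEXT NOT NULL
    );
    -- One row per (variation, consequence) pair replaces MySQL's SET column.
    CREATE TABLE consequence_type (
        transcript_variation_id INTEGER NOT NULL
            REFERENCES transcript_variation(transcript_variation_id),
        consequence             TEXT NOT NULL,
        PRIMARY KEY (transcript_variation_id, consequence)
    );
""")

conn.execute("INSERT INTO transcript_variation VALUES (1, 'ENST0001')")
conn.executemany("INSERT INTO consequence_type VALUES (?, ?)",
                 [(1, 'STOP_GAINED'), (1, 'SPLICE_SITE')])

# "Which variations have consequence STOP_GAINED?" is now a plain join.
rows = conn.execute("""
    SELECT tv.transcript_stable_id
    FROM transcript_variation tv
    JOIN consequence_type ct USING (transcript_variation_id)
    WHERE ct.consequence = 'STOP_GAINED'
""").fetchall()
print(rows)  # [('ENST0001',)]
```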

Cheers,
  Steve





Re: [GENERAL] How is sorting work?

2014-05-30 Thread Steve Atkins

On May 30, 2014, at 5:13 PM, Quang Thoi  wrote:

> Any one knows how sorting works?
> 
> I am using postgresql 9.3 and runs on Linux machines.
> I see different sorting order for the same set of return data.
> 
> On linux machines, databases are configured the same.
> Database have encoding set to 'utf8' and locale='C'
>  
> query:
> Select host_id, host_name from host_view order by host_id
> 
> host_id (character varying 128)
> host_name (character varying 255)
> 
> - On one linux machine (locate in U.S) , the query returned following:
> 
> host_id   host_name
>   --
> "00017486";"lnx2.xx.yy.com"
> "00017486";"lnx1.xx.yy.com"
> 
> - On a different linux machine (locate in India), the query returned 
> following:
> 
> host_id   host_name
>   --
> "00017486";"lnx1.xx.yy.com"
> "00017486";"lnx2.xx.yy.com"

Both results are correct. If you don't specify a sort order, postgresql
will return results in whatever order is convenient - which won't be
consistent from query to query or machine to machine.

You're only sorting by host_id. If you want to sort consistently by
host_id and host_name, so that the sort order is well defined for
identical host_ids, you'll want to do something like

select host_id, host_name from host_view order by host_id, host_name.
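(For illustration: the effect of the extra sort key can be seen with any SQL engine. This sketch uses SQLite in place of PostgreSQL, with the two rows from the question.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE host_view (host_id TEXT, host_name TEXT)")
conn.executemany("INSERT INTO host_view VALUES (?, ?)",
                 [("00017486", "lnx2.xx.yy.com"),
                  ("00017486", "lnx1.xx.yy.com")])

# ORDER BY host_id alone leaves the tie unresolved: both rows share the
# same host_id, so the database may return them in either order.
# Adding host_name as a second key makes the ordering fully determined.
rows = conn.execute(
    "SELECT host_id, host_name FROM host_view ORDER BY host_id, host_name"
).fetchall()
print(rows)
# [('00017486', 'lnx1.xx.yy.com'), ('00017486', 'lnx2.xx.yy.com')]
```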

Cheers,
  Steve



Re: [GENERAL] Windows Installation User account - Correct database for us

2014-07-10 Thread Steve Atkins

On Jul 10, 2014, at 5:01 PM, Don Brown  wrote:

> Hello
> 
> We are writing a small application and we are trying to determine if 
> PostgreSQL is the right database for us. 
> 
> The application at this stage is only for a single user and commonly for 
> persons with little computer expertise. 
> 
> When the database is installed a postgreSQL user account is created which in 
> most cases will be the second user account on the PC.  The result of this is 
> the user now has to select the user account whenever the computer is 
> restarted.

I'd be surprised if that was required behaviour, but I'm not really familiar 
with current PostgreSQL packaging for Windows.

>  The programmer working on this application has suggested looking at an 
> embedded database, something like H2, as an alternative. 

Installation and management of PostgreSQL on Windows hits occasional minor 
roadbumps - not a problem for someone deploying and using PostgreSQL, but 
potentially a cause of support overhead if you're "invisibly" installing the 
database along with your app and not expecting your user to be aware of it.

If you need the power and flexibility of PostgreSQL, or want to allow your 
users direct database access and want to give them a good experience there, 
then the advantages probably outweigh the possible issues. If you don't need 
that then an embedded database might be a better match.

I'd look at SQLite as an embedded alternative, myself. It's a solid embedded 
SQL database. If you're entirely a Java shop then H2 might well be a good 
choice too.

> 
> I was hoping the members of this group may have some comments or suggestions 
> as to the direction we should look at. 
> 
> Thank you and appreciate any comments/suggestions 

Cheers,
  Steve





Re: [GENERAL] Copying a database.

2014-07-20 Thread Steve Atkins

On Jul 20, 2014, at 11:09 AM, maillis...@gmail.com wrote:

> I send a nightly dump of my production database to a development server. A 
> script drops the existing development database and replaces it with the 
> current production copy. 
> 
> Each dev uses her own copy of the database. Is there a way to copy the 
> current development database to a differently named db on the same machine, 
> including the data,  without using the sql dump? 

Look at CREATE DATABASE developer_db WITH TEMPLATE production_copy;

createdb with the --template option is a convenient way to do that from the 
commandline.

Cheers,
  Steve





Re: [GENERAL] change the value of "unix_socket_directories" , must used "-h /xx/xx" to use the Unix domain socket

2014-08-16 Thread Steve Atkins

On Aug 16, 2014, at 8:49 AM, John R Pierce  wrote:

> On 8/16/2014 8:41 AM, lin wrote:
>> I change the value of "unix_socket_directories" in postgresql.conf , then 
>> restart the database, but it cannot connect the database used like this
>> "psql -d postgres -p 5432" , it must given the parameter " -h /xx/xx" to use 
>> the Unix domain socket。
>> how to fix this issue ?
> 
> the client has no access to postgresql.conf, it has no idea you changed it. 
> the default value is baked into libpq.so at compile time.

You might find the environment variable PGHOST useful.

Cheers,
  Steve





Re: [GENERAL] change the value of "unix_socket_directories" , must used "-h /xx/xx" to use the Unix domain socket

2014-08-16 Thread Steve Atkins

On Aug 16, 2014, at 9:01 AM, Nick Guenther  wrote:

> 
> 
> On August 16, 2014 11:41:02 AM EDT, lin  wrote:
>> Hi all,
>> I changed the value of "unix_socket_directories" in postgresql.conf, 
>> then restarted the database, but I cannot connect to the database
>> using "psql -d postgres -p 5432"; I must give the parameter
>> "-h /xx/xx" to use the Unix domain socket.
>> How can I fix this issue?
> 
> I'll start by saying that your test case is very clear, and thank you for it. 
> I am unsure what your goal is, however. I assume you are trying to set up 
> parallel postgres processes, for debugging. I've done this recently, for that 
> reason.
> 
> First thing to point out is that you need only one of -h and -p.  They are 
> redundant options, because you only connect to postgres either over TCP (-p) 
> or with a unix domain socket (-h).

Not really. In the case of a TCP connection you need -h for the hostname and -p 
for the port. For a unix socket connection you use -h to specify the directory 
the unix socket is in, and -p is used to generate the name of the socket within 
that directory. If you omit one or both then the compiled-in defaults will be 
used, but it still uses both values to connect.

> 
> Second, what you're seeing is necessary. If you change the default, then psql 
> doesn't know where to look. However, you can recover the old behaviour with 
> shell tricks:
> $ alias psql='psql -h /xx/xx'
> $ psql -d postgres

Setting environment variables to point to your preferred instance will also 
work - and it'll work with any client that uses libpq (which is probably almost 
everything that's not java).

Cheers,
  Steve

> 
> (Personally, I wrote a short wrapper script called "client.sh" which 
> depth-first searches for a postgres db directory and then runs 'psql -h' with 
> the first one it finds; equally well you could have this script install an 
> alias)
> 
> Are you perhaps confused about what a unix domain socket is? Why are you 
> changing it? This is a decent description of it: 
> http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man4/unix.4
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general





Re: [GENERAL] Postgres 9.1 issues running data directory from VMware shared folder

2014-08-26 Thread Steve Atkins

On Aug 26, 2014, at 3:08 PM, Arze, Cesar  wrote:
> 
> I probably should be posting to the VMware mailing list with this question 
> but I wanted to see if anyone had any insight or suggestions here. I’ve seen 
> many similar issues but none of the solutions proposed there worked for me.

This might not be what you're seeing, but there was a hideous bug in the shared 
folder (hgfs) driver for linux guest OSes that'll silently corrupt your 
filesystem if it's accessed via more than one filehandle (e.g. multiple opens, 
multiple processes, ...).

If you're using vmware tools bundled with workstation 10.0.1 or fusion 6.0.2, 
you have that bug and cannot safely use hgfs mounts for any files, let alone 
postgresql. (There was a different bug, with similar results, for earlier 
versions too, including at least fusion 5.0.1). VMWare claim it's fixed in the 
tools bundled with 10.0.2 / 6.0.3 (I've not tested it). If you're not running 
the very latest vmware, upgrade to it and install the latest tools (or avoid 
using hgfs).

Cheers,
  Steve





Re: [GENERAL] Async IO HTTP server frontend for PostgreSQL

2014-09-10 Thread Steve Atkins

On Sep 10, 2014, at 12:16 AM, Dmitriy Igrishin  wrote:

> Hello, David
> 
> 2014-09-10 4:31 GMT+04:00 David Boreham :
> Hi Dmitriy, are you able to say a little about what's driving your quest for 
> async http-to-pg ?
> I'm curious as to the motivations, and whether they match up with some of my 
> own reasons for wanting to use low-thread-count solutions.
> For many web projects I consider Postgres as a development platform. Thus,
> I prefer to keep the business logic (data integrity trigger functions and
> API functions) in the database. Because of the nature of the Web, many concurrent
> clients can request a site and I want to serve the maximum possible number of them with
> minimal overhead. Also I want to avoid complex solutions. So, I believe that
> with asynchronous solution it's possible to *stream* the data from the 
> database
> to the maximum number of clients (which possible can request my site over a
> slow connection).

That's going to require you to have one database connection open for each
client. If the client is over a slow connection it'll keep the database 
connection
open far longer than is needed, (compared to the usual "pull data from the
database as fast as the disks will go, then spoonfeed it out to the slow client"
approach). Requiring a live database backend for every open client connection
doesn't seem like a good idea if you're supporting many slow concurrent clients.

Cheers,
  Steve





Re: [GENERAL] PostgreSQL Portable

2014-09-10 Thread Steve Atkins

On Sep 10, 2014, at 2:00 PM, Daniel Begin  wrote:

> First, I am a Newbie regarding PostgreSQL …
>  
> I just started to look at PostgreSQL to implement a large GIS DB (1Tb).  The 
> data must reside in an external disk with eSATA connection and may be moved 
> to different locations (and Windows desktops/laptops). I was looking to 
> install PostgreSQL and PostGIS extensions on each PC (setting-up the proper 
> PGDATA directory to the external disk) until I read about PostgreSQL and 
> PgAdmin Portable …
>  
> http://sourceforge.net/projects/pgadminportable/
> http://sourceforge.net/projects/postgresqlportable/


>  Is that a viable alternative considering the expected size of the DB? Any 
> comments or proposal would be appreciated .

Adding postgis to that, if it's not already included, might take some work. Not 
impossible but you'd be making some unneeded work for yourself.

The external disk isn't going to be blindingly fast, however you use it. For 
Windows in particular, it's probably going to be more conservative in caching 
the external drive than it would an internal one. Any large or unindexed 
queries are likely to be a bit painful.

I do use an external drive for some work, though, and it's usable. I have all 
of postgresql and the tools I use installed on the drive, with nothing for that 
instance installed on my laptop. I just have the external drives bin directory 
early in my PATH, so I can plug in the drive and do pg_ctl start, and it all 
works. That's on a mac, I'm sure you could do the same with Windows.
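A sketch of that setup (volume and directory names are made up; the Windows equivalent would adjust PATH in a batch file):

```shell
# Binaries and the data directory both live on the external drive.
export PATH=/Volumes/pgdrive/pgsql/bin:"$PATH"

pg_ctl -D /Volumes/pgdrive/pgdata -l /Volumes/pgdrive/pgctl.log start
psql -d gisdb                              # client from the drive, too
pg_ctl -D /Volumes/pgdrive/pgdata stop     # always stop before unplugging
```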

Cheers,
  Steve



Re: [GENERAL] installing on mac air development machine

2014-10-02 Thread Steve Atkins

On Oct 2, 2014, at 8:04 PM, John R Pierce  wrote:

> On 10/2/2014 4:37 PM, Adrian Klaver wrote:
>> On 10/02/2014 03:50 PM, john.tiger wrote:
>>> we've always installed on linux so need help with a new mac air running
>>> latest osx
>>> 
>>> in the instructions it shows several methods:
>>> 1) enterprisedb (but this does not look open source ?)
>> 
>> It is just the community version of Postgres behind a graphical installer, 
>> so yes it is open source. 
> 
> postgres is of course open source.  the enterprisedb installer I'm less sure 
> of, but its free to use.
> 
> if you just need postgres running while you're doing software development, 
> the postgresql.app version may be the simplest to use.   you run it on the 
> desktop and postgres is running.  close it and it's not.  your user id owns 
> the pgdata and the process, so you don't have to jump through sudo hoops to 
> edit the config files.
> 
> http://postgresapp.com/
> 
> (Caveat:  I don't own a mac)

I do, and use postgres.app to develop against - and you're right. postgres.app 
is a trivial install, and it works beautifully for development using 
postgresql. It isn't really a desktop app, it's a tiny GUI controller that 
lives in your menu bar and controls a fairly standard postgresql installation 
under the covers. It can start up and shut down as you log in and log out, or 
you can start and stop it manually.

Cheers,
  Steve





Re: [GENERAL] Comparing results of regexp_matches

2014-11-16 Thread Steve Atkins

On Nov 16, 2014, at 3:52 PM, Seamus Abshere  wrote:

> hi,
> 
> I want to check if two similar-looking addresses have the same numbered
> street, like 20th versus 21st.
> 
>2033 21st Ave S
>2033 20th Ave S (they're different)
> 
> I get an error:
> 
># select regexp_matches('2033 21st Ave S', '\m(\d+(?:st|th))\M') =
>regexp_matches('2033 20th Ave S', '\m(\d+(?:st|th))\M');
>ERROR:  functions and operators can take at most one set argument
> 
> I've tried `()[1] == ()[1]`, etc. but the only thing that works is
> making it into 2 subqueries:
> 
># select (select * from regexp_matches('2033 21st Ave S',
>'\m(\d+(?:st|th))\M')) = (select * from regexp_matches('2033 20th
>Ave S', '\m(\d+(?:st|th))\M'));
> ?column?
>--
> f
>(1 row)
> 
> Is there a more elegant way to compare the results of
> `regexp_matches()`?

Probably not - that's the documented way to force regexp_matches() to return a 
single row, whether it matches or not.

But I think you want to use substring(), rather than regexp_matches(), eg:

select substring('2033 21st Ave S' from '\m(\d+(?:st|th))\M') = 
substring('2033 20th Ave S' from '\m(\d+(?:st|th))\M');

substring() will return the first capturing group, if there is one, or the 
whole match otherwise.

Given that the whole pattern you're using here, other than some zero-width 
assertions, is a capturing group the result is the same either way. You could 
rewrite it without capturing and get the same result:

select substring('2033 21st Ave S' from '\m\d+(?:st|th)\M') = 
substring('2033 20th Ave S' from '\m\d+(?:st|th)\M');
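One caveat to either form: if an address has no numbered street, substring() returns NULL, and = then yields NULL rather than false. If you want a real boolean out (and two non-matches treated as equal), IS NOT DISTINCT FROM does that:

```sql
SELECT substring('2033 21st Ave S' from '\m(\d+(?:st|th))\M')
       IS NOT DISTINCT FROM
       substring('2033 Main St' from '\m(\d+(?:st|th))\M');
-- false: the left side is '21st', the right side is NULL
```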

Cheers,
  Steve






Re: [GENERAL] to_char(timestamp, format) is changing the year!

2014-11-30 Thread Steve Atkins

On Nov 30, 2014, at 1:05 PM, Stephen Woodbridge  wrote:

> Hi,
> 
> I am having a problem when I format a timestamp in that it is changing the 
> year. This can't be right, so either I don't understand

You're using "IYYY" which is the "ISO week-numbering year", which is based on 
Mondays or Thursdays or something equally useless. You probably want "YYYY" instead.
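The two formats only diverge for the few days around the year boundary, which is why it looked like a corner-case bug:

```sql
SELECT to_char(d, 'YYYY-MM-DD') AS calendar_year,
       to_char(d, 'IYYY-MM-DD') AS iso_week_year
FROM (VALUES ('2014-12-28'::timestamp),
             ('2014-12-31'::timestamp)) AS v(d);
-- 2014-12-28 | 2014-12-28   (Sunday, end of ISO week 52 of 2014)
-- 2014-12-31 | 2015-12-31   (already in ISO week 1 of 2015)
```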

Cheers,
  Steve

> or I have found a nasty corner case bug.
> 
> This does not happen on all dates
> 
> select '2014-12-31 00:00:00'::timestamp without time zone,
>   to_char('2014-12-31 00:00:00'::timestamp without time zone, 'IYYY-MM-DD 
> HH24:MI:SS');
> 
> "2014-12-31 00:00:00";"2015-12-31 00:00:00"
> 
> It appears that this also happens for all timestamps after "2014-12-28 
> 23:59:59" to the end of the year and then "2015-01-01 00:00:00" is ok again.
> 
> I have found this on 9.2 and 9.3.
> 
> "PostgreSQL 9.2.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 
> 4.6.3-1ubuntu5) 4.6.3, 64-bit"
> 
> "PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 
> 4.6.3-1ubuntu5) 4.6.3, 64-bit"
> 
> Any thoughts on how to work around this?
> 
> Thanks,
>  -Steve
> 
> 





Re: [GENERAL] Blocking access by remote users for a specific time period

2014-12-14 Thread Steve Atkins

On Dec 13, 2014, at 10:38 PM, Michael Nolan  wrote:

> Yeah, a cron job to swap pg_hba.conf files is the best solution I've come up 
> with so far.  It's not one web app, it's closer to two dozen of them, on 
> multiple sites.   

If they use persistent connections you'll also have to kill existing 
connections after you've prevented new connections via pg_hba.conf

This ...

psql -q -c 'select application_name as "Client", procpid as "PID", 
pg_terminate_backend(procpid) as "Disconnected" from pg_stat_activity where 
procpid <> pg_backend_pid()' database_name

... will kill all connections to the server. You can use application_name, 
client_addr, datname or usename to be more selective about who lives and who 
dies.
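For instance, to disconnect only one application's web users (names are hypothetical; note that procpid was renamed to pid in 9.2):

```sql
SELECT application_name, procpid,
       pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE datname = 'webapp_db'            -- only this database
  AND usename <> 'maintenance'         -- spare the maintenance role
  AND procpid <> pg_backend_pid();
```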

Cheers,
  Steve

> --
> Mike Nolan
> 
> On Sat, Dec 13, 2014 at 11:10 PM, Adrian Klaver  
> wrote:
> On 12/13/2014 08:13 PM, Michael Nolan wrote:
> I have several web apps that access our Postgresql database that I'd
> like to lock out of the database for about an hour during a weekly
> maintenance interval. (There are some internal users that do not get
> locked out, because they're running the maintenance tasks.)
> 
> There are no time-of-day access limitation parameters in the pg_hba.conf
> file, are there any simple ways to do this?
> 
> Use a cron job that at beginning of period swaps out the pg_hba.conf with one 
> that denies access, reloads server and then at end of time period reverse 
> procedure ?
> 
> --
> Mike Nolan
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com





Re: [GENERAL] 4B row limit for CLOB tables

2015-01-29 Thread Steve Atkins

On Jan 29, 2015, at 9:53 AM, Roger Pack  wrote:

> On 1/29/15, Roger Pack  wrote:
>> Hello.  I see on this page a mention of basically a 4B row limit for
>> tables that have BLOB's
> 
> Oops I meant for BYTEA or TEXT columns, but it's possible the
> reasoning is the same...

It only applies to large objects, not bytea or text.

>> https://wiki.postgresql.org/wiki/BinaryFilesInDB

Some of that looks incorrect or out of date. (e.g. large objects can be a lot
bigger than 2GB in 9.3+).


>> 
>> Is this fact mentioned in the documentation anywhere? Is there an
>> official source for this? (If not, maybe consider this a feature
>> request to mention it in the documentation on BLOB).
>> Cheers and thanks.
>> -roger

I'm not sure whether it's mentioned explicitly, but large objects are
referenced by an OID, which is a 32 bit value (and a global resource).

If you had 4B BLOBs, though, running out of OIDs would probably be
the least of your worries.

Cheers,
  Steve




Re: [GENERAL] Longest prefix matching CTE

2015-02-24 Thread Steve Atkins

On Feb 24, 2015, at 3:50 PM, Tim Smith  wrote:

> 
> 
> The goal being to match the longest prefix given a full phone number, e.g.
> 
> 
> 61234567890  would match "australia proper 61"
> whilst
> 61134567890 would match "Australia premium 6113"
> and
> 61894321010 would match "Australia - Sydney 61893"
> 
> I know the answer involves Postgres CTE, but I haven't used CTEs much
> yet... let alone in complex queries such as this.
> 
> Thanking you all in advance for your kind help.

There's probably a CTE approach for it, but you might want to look
at https://github.com/dimitri/prefix too - it's an extension that's designed
specifically for longest prefix matching, and that uses gist indexes to
do it efficiently.
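A minimal sketch of how that extension is used (table and data invented for illustration; check the project's README for the real details):

```sql
CREATE EXTENSION prefix;

CREATE TABLE phone_prefixes (
    prefix  prefix_range,
    name    text
);
CREATE INDEX phone_prefixes_idx ON phone_prefixes USING gist (prefix);

INSERT INTO phone_prefixes VALUES
    ('61',   'Australia proper'),
    ('6113', 'Australia premium');

-- @> means "prefix matches this value"; keep the longest match
SELECT name
FROM phone_prefixes
WHERE prefix @> '61134567890'
ORDER BY length(prefix::text) DESC
LIMIT 1;
```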

Cheers,
  Steve





[GENERAL] VALUES() evaluation order

2013-01-13 Thread Steve Atkins
Is the order in which the expressions in a VALUES() clause are evaluated defined?

I'm doing this: INSERT INTO foo (a, b) VALUES (nextval('bar'), currval('bar'))

It works fine, but I'm wondering whether it's guaranteed to work or whether I'm 
relying on an artifact of the implementation.

Cheers,
  Steve





Re: [GENERAL] VALUES() evaluation order

2013-01-13 Thread Steve Atkins

On Jan 13, 2013, at 2:36 PM, Tom Lane  wrote:

> Steve Atkins  writes:
>> Is the order in which the expressions in a VALUES() clause are evaluated defined?
>> I'm doing this: INSERT INTO foo (a, b) VALUES (nextval('bar'), 
>> currval('bar'))
> 
>> It works fine, but I'm wondering whether it's guaranteed to work or whether 
>> I'm relying on an artifact of the implementation.
> 
> I'd say it's an artifact.  It probably does work reliably at the moment,
> but if we had a reason to change it we'd not feel much compunction about
> doing so.  (The most obvious potential reason to change it is parallel
> evaluation of expressions, which is a long way off, so you probably
> don't have any near-term reason to worry.  But ...)
> 
> Consider sticking the nextval() into a WITH.

Thanks. WITH it is, then.

Cheers,
  Steve



Re: [GENERAL] Linux Distribution Preferences?

2013-01-13 Thread Steve Atkins

On Jan 13, 2013, at 10:27 AM, Shaun Thomas  wrote:

> Hey guys,
> 
> I'm not sure the last time I saw this discussion, but I was somewhat curious: 
> what would be your ideal Linux distribution for a nice solid PostgreSQL 
> installation? We've kinda bounced back and forth between RHEL, CentOS, and 
> Ubuntu LTS, so I was wondering what everyone else thought.


Any of them would be fine. RHEL is a bit more Enterprisey - which is either good or 
bad, depending on your use case. They're more conservative with updates than 
Ubuntu - which is good for service stability, but can be painful when you're 
stuck between using ancient versions of some app or stepping into the minefield 
of third party repos. (CentOS is pretty much just RHEL without support and 
without some of the management tools).

Ubuntu LTS is solid, and has good support for running multiple Postgresql 
clusters simultaneously, which is very handy if you're supporting multiple apps 
against the same database server, and they require different releases. I've 
been told that they occasionally make incompatible changes across minor 
releases, which is Bad, but it's never happened anywhere I've noticed - I've no 
idea if it's an actual issue or "Well, back in the 2004 release, they…" 
folklore.

I run both in production, both on VMs and real metal. I tend to use Ubuntu LTS 
for new installations just because I'm marginally more comfortable in the 
Ubuntu CLI environment, but there's really not much to choose between them.

Cheers,
  Steve





Re: [GENERAL] Best method to compare subdomains

2013-01-16 Thread Steve Atkins

On Jan 16, 2013, at 12:23 PM, Robert James  wrote:

> Is there a recommended, high performance method to check for subdomains?
> 
> Something like:
> - www.google.com is subdomain of google.com
> - ilikegoogle.com is not subdomain of google.com
> 
> There are many ways to do this (lowercase and reverse the string,
> append a '.' if not there, append a '%', and do a LIKE).  But I'm
> looking for one that will perform well when the master domain list is
> an indexed field in a table, and when the possible subdomain is either
> an individual value, or a field in a table for a join (potentially
> indexed).

If you've already dealt with any punycode encoding then the lowercased,
reversed string works pretty well, either as a (probably trigger-managed)
field or as a functional index.
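A sketch of the functional-index version (table name invented; reverse() is built in from 9.1 on):

```sql
-- Index the lowercased, reversed name; the subdomain test then becomes
-- a left-anchored LIKE, which this index can serve.
CREATE INDEX domains_rev_idx
    ON domains (reverse(lower(name)) text_pattern_ops);

-- "is name a subdomain of google.com, or google.com itself?"
SELECT *
FROM domains
WHERE reverse(lower(name)) LIKE reverse('google.com') || '.%'
   OR lower(name) = 'google.com';
-- matches www.google.com but not ilikegoogle.com
```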

If you need to get fancier, you might want to take a look at the approach
https://github.com/dimitri/prefix takes.

Cheers,
  Steve




Re: [GENERAL] Where Can I Find...

2013-01-25 Thread Steve Atkins

On Jan 25, 2013, at 8:21 AM, Gene Poole  wrote:

> I'm looking for a tutorial on how to move an Oracle 11gR2 database 
> that consists of 4 instances with tablespaces spread across multiple file 
> systems to PostgreSQL.  I need to do this because most of the open source CMS 
> packages do not play with Oracle.  The amount of data stored in the four 
> instances (or schemas) would fit on 6 single layer DVDs, so it's not a large 
> amount of data.

I doubt you'll find one. http://wiki.postgresql.org/wiki/Oracle probably has 
something useful, and http://ora2pg.darold.net

Given it's not a desperately huge database your best bet might just be to build 
a scratch system or VM (if a dump is less than 30gigs it's the sort of size you 
can handle on your laptop) and just do it. Convert the schema manually and 
import the exported data, or use ora2pg to automate some of it, then ask 
questions when you hit specific problems.

If you're using stored procedures things get more interesting, and they'll need 
to be rewritten. Once you've got the data ported you'll likely need to rewrite 
some of the queries your apps use, as oracle has some differences.

If you have budget, you might want to take a look at 
http://www.enterprisedb.com.

> Also I don't want to use RPM because I like to control where software is 
> installed. Another piece on information is that I am running on CentOS 5.8. 

Avoiding RPMs is probably not a good operational choice, BTW.

Cheers,
  Steve



Re: [GENERAL] Installing PostgreSQL on OSX Server

2013-01-28 Thread Steve Atkins

On Jan 28, 2013, at 6:45 AM, Stephen Cook  wrote:

> Hello!
> 
> I have convinced a client to use PostgreSQL instead of MySQL (hooray), which 
> means it falls on me to install and configure it. I'm planning on doing this 
> from the command line (I have SSH access).
> 
> I have installed and configured PostgreSQL on Windows, FreeBSD, and a few 
> Linux flavors, but never OSX. I have the basic directions 
> (http://www.enterprisedb.com/resources-community/pginst-guide) and found a 
> couple of articles / blogs, but really I'm not an Apple guy so I don't want 
> to miss anything or screw anything up.
> 
> Is there anything I should watch out for? I have some wiggle room (this is a 
> development server at first), but I'd rather not break anything.
> 
> uname -a returns: Darwin xxx.local 10.8.0 Darwin Kernel Version 10.8.0: Tue 
> Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
> PostgreSQL version: Latest and greatest - 9.2.2
> 
> Thanks!

You're not planning on using this in production, I hope? OS X is a very solid 
desktop OS, but its server variant is packed full of weird and plain broken 
behaviour.

OS X includes the postgresql clients and libraries, and OS X Server includes 
the postgresql server. You don't want to use the included postgresql server, or 
the included libpq and binaries, but you'll want to make sure that they don't 
clash with the version you're installing - for the server that's not too 
painful, but for the clients you'll want to make sure that the PATH of all 
users is set up to find your installed versions of psql etc. before the ones in 
/usr/bin, and that they're linking with the right libpq.dylib, not the one in 
/usr/lib. "otool -L" is the OS X equivalent to ldd.
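A quick way to check you got that right, once your install's bin directory is first in PATH:

```shell
which psql                  # should be your install, not /usr/bin/psql
otool -L "$(which psql)"    # the libpq line should point at your install's
                            # libpq.dylib, not the one in /usr/lib
```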

OS X doesn't have readline installed, it has libedit. Libedit is poor compared 
to readline, and the OS X installed version of libedit was, for years, 
hideously broken such that tab completion would cause SEGVs. It might have been 
fixed in the latest releases, or it might not. It's worth avoiding anyway.

If you'll be installing using the point-and-click enterprisedb installer it 
should take care of some of the issues for you. If you end up installing from 
source you need to be aware that OS X is a dual-architecture system with fat 
binaries and libraries (for i386 and x86_64 on recent releases). Depending on 
what your developers are doing that may be an issue. The usual way of building 
fat binaries doesn't work for postgresql, or didn't the last time I tried it; 
you may need to build twice, once for each architecture, then glue the results 
together to make fat libraries.

Also, on a non-postgreql note, you'll find that the OS X userspace, 
particularly when it comes to system administration tools, is strange and scary 
compared to the unixalikes you've used. You create users with multiple dscl 
commands, not adduser. Daemons are managed by launchd, not started from 
/etc/init.d.

http://labs.wordtothewise.com/postgresql-osx/ has a few notes on building and 
installing from source that might be useful.

Recent versions of OS X server (10.6 and later, I think) can be installed in 
VMWare, as long as the host is running on Apple hardware (so either VMWare 
Fusion or ESXi running on a mini) if you want to build a play / staging 
environment where you can roll back snapshots.

Cheers,
  Steve



Re: [GENERAL] Installing PostgreSQL on OSX Server

2013-01-28 Thread Steve Atkins

On Jan 28, 2013, at 8:47 AM, Stephen Cook  wrote:

> On 1/28/2013 11:15 AM, Steve Atkins wrote:
>> You're not planning on using this in production, I hope? OS X is a very 
>> solid desktop OS, but its server variant is packed full of weird and plain 
>> broken behaviour.
>> 
> 
> Ouch. These are the servers they have and use, I don't really get a say in 
> that.
> 
> Are these problems PostgreSQL-specific? They are already running Apache with 
> PHP and MySQL on these, so if it is a general "broken-ness" I guess they are 
> already used to it?

Not postgresql specific at all, just a lot of very flaky user-space code (and 
it's really just regular desktop OS X with more applications added, not really 
a differently tuned system). You're also going to find that it's really poorly 
suited for remote management via ssh, and the remote management apps have 
ridiculous OS version requirements for the management console. Plan on setting 
up VNC or using screen sharing (which is available on OS X client, 
/System/Library/CoreServices/Screen Sharing.app, or something like that).

If they're already using them in production with apache/php, nothing they're 
doing should break when they switch to postgresql.

Cheers,
  Steve



Re: [GENERAL] Testing Technique when using a DB

2013-03-12 Thread Steve Atkins

On Mar 12, 2013, at 8:41 AM, Perry Smith  wrote:
> 
> 
> One choice would be to create the database, use it, and then drop it for each 
> test.  I would create the database from a template that already has data 
> taken from the production database (and probably trimmed down to a small 
> subset of it).  This requires some crafty dancing in the Rails set up since 
> it likes to just attach to a database and run but it could be done.  At 
> first blush, this sounds like it would be really slow but maybe not. 

I do this. It's not blindingly fast, but plenty fast enough for automated 
testing as long as your tests aren't too fine-grained and your test database 
isn't too big. It takes no more than two or three seconds on my (slow, 
IO-starved) QA VMs.

By parameterizing the database name you can parallelize tests - each test 
creates it's own copy of the template database, runs whatever it needs to run, 
then drops the database. That lets you hide a lot of the setup/teardown latency.

> 
> The other choice would be to somehow copy the data to temporary tables before 
> the test run and then copy it back.  The advantage to this is it is not very 
> PostgreSQL specific.  Indeed, if the template database is already set up, 
> then only one copy would be needed at the start of the test. 

You'd also need to undo any other state that was changed. Sequences, for 
instance, if they can affect the meaning of your test or expected results in 
any way.

> 
> The other thought I had is if there is some type of "leaky" transaction.  A 
> transaction where another process can see the data but the roll back would 
> still work and be effective.  Essentially I'm asking if all the protections a 
> database offers could be dropped... but I thought I'd see if that was 
> possible. 

That - or anything else involving rolling back transactions - would only work 
if you were testing an app that didn't use transactions.

> The other thought is perhaps there is a "snap shot" type concept.  I don't 
> see it in the list of SQL commands.  A "snap shot" would do exactly what it 
> sounds like.  It would take a snap shot and save it somehow.  Then a "restore 
> to snap shot" would restore the DB back to that state. 

That's pretty much what creating a database from a template does, other than 
the need to have everybody disconnect from the database before doing the 
drop+create.

Dump / restore will do that too - somewhat slower, but doesn't require 
disconnecting from the DB.

File-system level snapshots are another option, but I'm pretty sure you'd need 
to shut down and restart the database server, which'd cost far more than you'd 
save.

Cheers,
  Steve



Re: [GENERAL] When did this behavior change (and where else might it bite me)?

2013-03-18 Thread Steve Atkins

On Mar 18, 2013, at 9:49 AM, Jeff Amiel  wrote:

> In prepping for an upgrade to 9.2.3, I stumbled across this:
> 
> CREATE TABLE foo
> (
>   myint integer,
>   string1 text,
>   string2 text
> )
> WITH (
>   OIDS=FALSE
> );
> 
> insert into foo values (12345,'Y','N');
> 
>  select * from foo f where f.myint = 12345 or f.name='Y'
> 
> In 9.2.3, this returns:
> ERROR:  column f.name does not exist
> LINE 1:  select * from foo f where myint = 12345 or f.name='Y'
> 
> in 8.4.6 ,this returns no error (and gives me the row from the table)

That's (unintentionally) an attribute style data type cast - bar.name is the 
same as name(bar), and tries to cast bar to type "name" (an internal-use string 
type) 

Try "select foo from foo", "select name(foo::text) from foo" and "select 
name(foo) from foo" to see what's going on.

That was tightened up in 9.1, I think:

Casting

Disallow function-style and attribute-style data type casts for composite 
types (Tom Lane)

For example, disallow composite_value.text and text(composite_value). 
Unintentional uses of this syntax have frequently resulted in bug reports; 
although it was not a bug, it seems better to go back to rejecting such 
expressions. The CASTand :: syntaxes are still available for use when a cast of 
an entire composite value is actually intended.

Cheers,
  Steve





Re: [GENERAL] crash proof for semi-embedded system

2013-04-11 Thread Steve Atkins

On Apr 11, 2013, at 5:11 AM, David Welton  wrote:

> Hi,
> 
> I'm going to be deploying Postgres in a semi-embedded system where end
> users might simply power the thing off from one moment to the next.
> Or the disk might start to go wonky, or any number of other problems.
> Because it's a standalone device, it may well run in an environment
> where we can't ship backups off of the machine.
> 
> I've been reading this:
> 
> http://www.postgresql.org/docs/9.2/static/continuous-archiving.html

You should take a look at http://www.postgresql.org/docs/current/static/wal.html
too.

> 
> And it looks pretty good.  It appears that, since I can't really back
> things up to a separate disk unit, it will "just work" unless the disk
> gets corrupted?  In other words, in the case of someone pulling the
> power plug, it ought to be able to get things up and running more or
> less automatically, correct?  Besides utilizing that, and keeping the
> fsync option set to true, what other steps can I take to make sure
> that data is not lost even in extraordinary circumstances?  Having to
> manually fix things up is acceptable in the use case we're planning
> for, even if it's clearly preferable to not have to intervene.

If you don't do anything extra, postgresql should survive power
being pulled, the disk being pulled and anything else you do, 
as long as the underlying filesystem isn't damaged in the process.
It writes, and commits, all changes to the WAL as they're made,
and uses that to replay changes at startup if needed. That's
all assuming that your storage layer doesn't lie about fsync -
which is something to check, especially on embedded hardware
(there's a tool to do that at the link above).
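The tool referred to is contrib's pg_test_fsync (shipped since 9.1); point it at a file on the device you plan to use:

```shell
# Implausibly high syncs/sec - faster than the hardware could physically
# manage - usually means a layer in the stack is lying about fsync.
pg_test_fsync -f /path/on/device/testfile
```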

It does mean that the database might not be immediately
available at system startup, while it's recovering, so your app
should be able to deal with that.

If you only have a single storage device, that's about the best
you can do (though taking a periodic backup for disaster recovery
wouldn't be the worst idea in the world).

Cheers,
  Steve







Re: [GENERAL] dataset lock

2013-04-16 Thread Steve Atkins

On Apr 16, 2013, at 7:50 AM, Philipp Kraus  wrote:

> Hello,
> 
> I use a PG database on a HPC system (cluster). My processes get a dataset 
> from the database and change the row, each process is independend.
> My table shows something like: id, status, data
> 
> id = PK a unqiue number
> status a enum value which "open", "waiting", "working", "done"
> 
> So each process calls a SQL statement select * from where status = "waiting", 
> so the process should get the next waiting task, after the process
> gets the task, the status should be changed to "working", so no other process 
> shouldn't get the task. My processes are independed, so it can
> be, that 2 (or more) processes call the select statement at the same time and 
> get in this case equal tasks, so I need some locking. How can
> I do this with Postgres, that each row / task in my table is read / write by 
> one process. On threads I would create a mutex eg:
> 
> lock()
> row = select * from table where status = waiting
> update status = working from table where id = row.id
> unlock()
> 
> do something with row
> 
> Which is the best solution with postgres? should I create a procedure which 
> takes the next job, change it and returns the id, so each process
> calls "select getNextJob()" ?

"select for update" might be the answer to what you're asking for - it'll lock 
the rows matched until the end of the transaction, blocking any other select 
for update on the same rows.
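A minimal sketch of that pattern, assuming a table like the one described
(table, column and id values here are illustrative):

```sql
BEGIN;
-- Lock the next waiting task; concurrent workers running the same
-- statement will block on the locked row until this transaction ends.
SELECT id, data FROM tasks
 WHERE status = 'waiting'
 ORDER BY id
 LIMIT 1
 FOR UPDATE;

-- Mark the row returned above as taken, using its id.
UPDATE tasks SET status = 'working' WHERE id = 42;
COMMIT;
```

(On PostgreSQL 9.5 and later, SELECT ... FOR UPDATE SKIP LOCKED lets workers
skip over already-claimed rows instead of queuing behind each other, which is
the more idiomatic queue pattern there.)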

If performance is important then you might want to look at some of the off the 
shelf queuing systems instead - PgQ or queue_classic, for instance.

Cheers,
  Steve





Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-07-25 Thread Steve Atkins

On Jul 25, 2013, at 1:44 AM, Some Developer  wrote:
>> 
> 
> When I was talking about improving speed I was talking about reducing load on 
> the app servers by putting more of the work load on the database server. I 
> know that it won't actually save CPU cycles (one of the machines has to do 
> it) but it will save load on the app servers. As I said above using the 
> asynchronous abilities of libpq helps keep the app servers serving requests 
> whilst the database gets on with its tasks.
> 

App servers don't tend to maintain much global state, so are almost perfectly 
parallelizable. If you run out of CPU there, drop another cheap box in the rack.

Database servers aren't. Once you top out a database server your main options 
are to replace it with a bigger box (increasingly expensive) or rearchitect the 
application (even more expensive).

I'll always put more work on the cheaply scalable app servers if I can reduce 
the load on the database. Moving code to the database server for reasons of CPU 
cost (as opposed to, say, data or business rule consistency) seems an odd 
approach.

Cheers,
  Steve





Re: [GENERAL] Sharing data directories between machines

2013-08-06 Thread Steve Atkins

On Aug 6, 2013, at 10:45 AM, JD Wong  wrote:

> Hi all!
> 
> I have two servers, which share a large mounted drive.  I would like to share 
> postgres databases between them dynamically so that when one makes changes, 
> they are immediately available in the other.  
> 
> I tried moving the data directory over to the mounted drive, and pointing 
> both postgresql.confs to that one.  I was able to have both access the same 
> databases, but they can't share changes.  It's like they're running on two 
> separate data directories, even though show data_directory reports the same 
> path for each.

That'll likely damage your database, probably irrecoverably.

> How can I make them play nicely?

You can't do it by sharing the disk files, at all. The two instances will trash 
each other's data.

If you want multiple database servers for redundancy, or you want to be able to 
offload read access to a second server, take a look at hot standby servers. 
http://www.postgresql.org/docs/9.2/static/high-availability.html

If you really want to be able to write to either database and have it 
replicated to the other one immediately, you should probably rethink what you 
need. It can be done (with multimaster replication) but it's almost never the 
right approach.

Cheers,
  Steve







Re: [GENERAL] PostrgreSQL Commercial restrictions?

2013-08-07 Thread Steve Atkins

On Aug 7, 2013, at 8:23 AM, Eliseo Viola  wrote:

> Hello.
> I have been reading http://opensource.org/licenses/postgresql to know 
> if I can use PostgreSQL in proprietary closed commercial software (the worst 
> of the worst :P) in the company where I work.
> Is there any restriction or limit on doing it?

There isn't, really. It's very liberally licensed, and you're free to 
distribute it.

However, as someone who writes "private closed commercial software" that uses 
postgresql as a backend I'd fairly strongly suggest that you at least provide 
end users the opportunity to use their own installation of postgresql if they 
want to. "Hiding" the existence of postgresql in your package doesn't magically 
make it as low maintenance as sqlite, so users will still know it's there and 
might want to point your app at their supported, tuned installation instead. 
(And may want to fire up psql to see what's in there - you'll get happier 
customers if you put connection info and a database schema diagram in your docs 
than if you try to hide your use of a database).

I no longer distribute postgresql bundled with the app at all. On platforms 
with decent package managers I just ship a package that relies on the 
OS-installed postgresql. For other environments I provide a package for the 
app, and a separate package with postgresql.

If you do decide to distribute postgresql with your app (by bundling the 
enterprisedb installer as part of a windows app installer, say) be very careful 
that it's configured to not clash with a user-installed copy of postgresql.

Cheers,
  Steve



Re: [GENERAL] batch insertion

2013-08-24 Thread Steve Atkins

On Aug 24, 2013, at 5:15 PM, Korisk  wrote:

> Hi!
> I want to quickly insert a lot of data (in the form of triplets) into the db. Data is 
> formed dynamically so "COPY" is not suitable.

COPY works just fine for dynamically generated data, and it's probably the 
right thing to use if you're bulk loading data (it's about as fast as you can 
get for a single threaded load).

Take a look at the PQputCopyData() and PQputCopyEnd() functions.
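The stream you feed through PQputCopyData() is the same format COPY itself
reads, so the SQL side is just this (assuming the triplets table from the
question; data columns are tab-separated, and the stream ends with a
backslash-period line):

```sql
COPY triplets FROM STDIN;
1	1	1
3	2	5
4	5	5
\.
```

In libpq you'd PQexec() the "COPY triplets FROM STDIN;" statement, then call
PQputCopyData() once per buffer of rows as you generate them, and
PQputCopyEnd() when done — no prepared statement of any fixed arity needed.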

Cheers,
  Steve

> I tried batch insert like this:
> 
> insert into triplets values (1,1,1);
> insert into triplets values (1,1,1), (3,2,5), (4,5,5);
> ...
> insert into triplets values (1,1,1), (3,2,5), (4,5,5)  ;
> 
> The more triplets I use the quicker operation is.
> With preparation it looks like this:
> 
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, 
> $2::bigint, $3::float);",3, NULL);
> ...
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, 
> $2::bigint, $3::float), ($4::bigint, $5::bigint, $6::float), ($7::bigint, 
> $8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);",12, NULL);
> ...
> 
> The question:
> Is there any way to prepare query with any number of triplets without casting 
> such a long string?
> 
> Thank you.
> 
> 





Re: [GENERAL] Logging broken messages

2017-02-07 Thread Steve Atkins

> On Feb 6, 2017, at 9:21 AM, Rui Pacheco  wrote:
> 
> Hello,
> 
> I’m trying to implement a version of the wire protocol but I’ve hit a 
> problem: whenever I send a Close Statement message to the remote, it just 
> hangs indefinitely. I suspect the problem could be on my side but I can’t 
> find anything on my code that doesn’t match the manual.
> 
> Is there a way to configure Postgres to debug this? I’ve tried setting a 
> number of log parameters but nothing shows up on syslog:

Wireshark has a v3 postgresql protocol dissector. It might be worth comparing 
your code with libpq and see if anything looks different.

Cheers,
  Steve





Re: [GENERAL] 64 and 32 bit libpq

2017-02-12 Thread Steve Atkins

> On Feb 12, 2017, at 5:03 PM, Jerry LeVan  wrote:
> 
> Hello, I am trying to upgrade my postgresql ‘stuff’ to 64 bits from 32 bits.
> 
> I am running macOS Sierra.
> 
> I have built the 64 bit version of the server and have loaded
> my database into the new server ( 9.6.2 ). Everything seems to be working.
> 
> I have some legacy apps that are 32 bit and talk to the server 
> via libpq.dylib. They no longer work with a complaint about
> not finding a libpq with the right architecture.
> 
> Can I go back and do a 32 bit rebuild of everything and then
> take the 32 bit libpq and the 64 bit libpq and use lipo to
> glue them together and create a ‘fat’ libpq and replace the
> installed libpq? 
> 
> Is this a safe thing to do?

I've done it in the past (http://labs.wordtothewise.com/postgresql-osx/) and it
seemed to work fine.
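For the record, the lipo invocation is straightforward (the library paths
below are illustrative — build the 32-bit and 64-bit copies separately
first):

```
# Combine a 32-bit and a 64-bit build into one universal library
lipo -create libpq.32.dylib libpq.64.dylib -output libpq.dylib

# Confirm both architectures made it in
lipo -info libpq.dylib
```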

Cheers,
  Steve



Re: [GENERAL] get inserted id from transaction - PG 9.2

2017-02-14 Thread Steve Atkins

> On Feb 14, 2017, at 2:55 PM, Patrick B  wrote:
> 
> Hi all,
> 
> I'm simply doing an insert and I want to get the inserted id with a select. 
> I'm doing this all in the same transactions.
> 
> Example:
> 
> BEGIN;
> 
> INSERT INTO test (id,name,description) VALUES (default,'test 1','testing 
> insert');
> SELECT FROM test ORDER BY id DESC; -- I don't see the inserted row here

You want "select * from test ..." or "select id from test ..." here. Should 
work fine then.

> 
> COMMIT;
> 
> I only can see that inserted row if I do the select outside of this 
> transaction.
> 
> How could I get that ? 

This'd be the idiomatic way of doing it:

INSERT INTO test (name,description) VALUES ('test 1','testing insert') 
RETURNING id;

Cheers,
  Steve





Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Steve Atkins

> On Feb 14, 2017, at 8:47 PM, Shawn Thomas  wrote:
> 
> No it doesn’t matter if run with sudo, postgres or even root.  Debian 
> actually wraps the command and executes some initial scripts with 
> different privileges but ends up making sure that Postgres ends up running 
> under the postgres user.  I get the same output if run with sudo:
> 
> sudo systemctl status postgresql@9.4-main.service -l
>Error: could not exec   start -D /var/lib/postgresql/9.4/main -l 
> /var/log/postgresql/postgresql-9.4-main.log -s -o  -c 
> config_file="/etc/postgresql/9.4/main/postgresql.conf”

There's a suspicious hole between "exec" and "start" where I'd expect to see 
the full path to the pg_ctl binary. As though a variable were unset in a script 
or config file.

Cheers,
  Steve





Re: [GENERAL] How to evaluate "explain analyze" correctly after "explain" for the same statement ?

2017-02-15 Thread Steve Atkins

> On Feb 15, 2017, at 3:58 PM, Patrick B  wrote:
> 
> Hi all,
> 
> I just got a quick question about warm-cache. I'm using PG 9.2.
> 
> When I execute this statement soon after I start/restart the database:
> 
> explain select id from test where id = 124;
> 
> The runtime is 40ms.
> 
> Then, If I execute this statement just after the above one;
> 
> explain analyze select id from test where id = 124;
> 
> The runtime is 0.8ms.

This doesn't seem to make sense.

"explain select ..." doesn't run the query. All it shows is the plan the 
planner chose and some estimates of the "cost" of different steps, with no 
time. Where are you getting 40ms from in this case?

"explain analyze select ..." does run the query, along with some - potentially 
non-trivial - instrumentation to measure each step of the plan, so you can see 
whether the planner estimates are reasonable or wildly off.
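To make the distinction concrete:

```sql
-- Prints the chosen plan with cost *estimates* only; the query itself
-- is never executed.
EXPLAIN SELECT id FROM test WHERE id = 124;

-- Executes the query and reports actual per-node times and row counts
-- alongside the estimates, so you can spot where they diverge.
EXPLAIN ANALYZE SELECT id FROM test WHERE id = 124;
```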

Cheers,
  Steve





Re: [GENERAL] could not translate host name

2017-02-24 Thread Steve Atkins

> On Feb 24, 2017, at 1:37 PM, Tom Ekberg  wrote:
> 
> I'm running postgres 9.6.2 (also happened on 9.3.14) and have a cron job that 
> runs hourly that runs a program that does mostly postgres SELECTs on a 
> different host. Occasionally I get email (not hourly) from the cron daemon 
> that contains a stack trace that ends with this:
> 
> sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not 
> translate host name "db3.labmed.uw.edu" to address: Name or service not known
> 
> It has happened about 9 times so far this month. I have one of our network 
> people look into this but there is no real answer. I could use the IP address 
> but I'd rather not. This problem only happens on one host. I moved the data 
> from db2 to db3. I was getting similar emails regarding db2 which runs the 
> older postgres.
> 
> Any ideas on how to proceed?

It looks like a DNS issue. That hostname authoritatively doesn't exist, 
according to any of UW's nameservers.

If it works sometimes then you have some sort of internal name resolution hack, 
and it's not reliable.

Cheers,
  Steve

> 
> Tom Ekberg
> Senior Computer Specialist, Lab Medicine
> University of Washington Medical Center
> 1959 NE Pacific St, MS 357110
> Seattle WA 98195
> work: (206) 598-8544
> email: tekb...@uw.edu
> 
> 
> 
> 





Re: [GENERAL] appropriate column for storing ipv4 address

2017-03-01 Thread Steve Atkins

> On Mar 1, 2017, at 8:39 AM, jonathan vanasco  wrote:
> 
> 
> I have to store/search some IP data in Postgres 9.6 and am second-guessing my 
> storage options.  
> 
> 
> The types of searching I'm doing:

[...]

> 
>   2. on tracked_ip_block, i search/join against the tracked_ip_address to 
> show known ips in a block, or a known block for an ip.
> 
> i used cidr instead of inet for the ip_address because it saved me a cast on 
> joins and appears to work the same.  was that the right move?  is there a 
> better option?

If you're looking to do fast searches for "is this IP address in any of these 
CIDR blocks" you might want to look at https://github.com/RhodiumToad/ip4r as a 
possible alternative.
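With the built-in inet/cidr types, the block-membership join from point 2 can
use the containment operator directly (table and column names here follow the
question, so treat them as illustrative):

```sql
-- Known IPs inside a given block: << is "is contained within"
SELECT a.ip_address
  FROM tracked_ip_address a
  JOIN tracked_ip_block b ON a.ip_address << b.ip_block
 WHERE b.id = 1;
```

Note that a plain btree index won't accelerate `<<`; on 9.4+ a GiST index
using the inet_ops opclass can, which is part of why ip4r is often the faster
choice for large numbers of blocks.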

Cheers,
  Steve



Re: [GENERAL] mysql_config_editor feature suggestion

2017-03-21 Thread Steve Atkins

> On Mar 21, 2017, at 3:03 PM, Tom Ekberg  wrote:
> 
> I have been working with MySQL a bit (yes, I know, heresy) and encountered a 
> program called mysql_config_editor. In my opinion it does a better job of 
> local password management than using a ~/.pgpass file. Instead of assuming 
> that a mode of 600 will keep people from peeking at your password, it 
> encrypts the password, but keeps the other parameters like host, port and 
> user available for viewing as plaintext. You can read more about it here:
> 
>  https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html
> 
> The host, user, password values are grouped into what are called login paths 
> which are of the form:
> 
>  [some_login_path]
>  host = localhost
>  user = localuser

Looks rather like a postgresql service file. :)
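For comparison, an entry in a connection service file (~/.pg_service.conf, or
the file named by PGSERVICEFILE) looks like this — the service name and
settings below are made up:

```
[some_login_path]
host=localhost
port=5432
user=localuser
dbname=somedb
```

You'd then connect with something like `psql "service=some_login_path"`, or
by setting the PGSERVICE environment variable. Passwords can go in the
service file too, though .pgpass is the more usual place for them.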

> 
> Just like the config files you have no doubt seen before. The only way to set 
> a password is to use the command:
> 
>  mysql_config_editor set --login-path=some_login_path --password
> 
> which will prompt the user to enter the password for the specified login 
> path. The password is never seen as plain text. There are other commands to 
> set, remove, print and reset values for a login path. The print command that 
> shows a password will display this instead:
> 
>  password = *

This seems like it'd give people a false sense of security. If someone can read 
that file, they can log in to that account. Obfuscating the password just makes 
naive users think they're secure when they're anything but, and means they're 
less likely to be careful about making that file unreadable and avoiding 
checking it into revision control and so on. It'd protect against 
shoulder-surfing, but it's not like you're going to have .pg_pass open in an 
editor too often.
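For reference, the .pgpass file being discussed is one colon-separated entry
per line, and libpq refuses to use it unless it's mode 0600 or stricter (the
values below are examples; `*` wildcards are allowed in any field):

```
# hostname:port:database:username:password
db.example.com:5432:somedb:someuser:s3cret
```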

A commandline tool for managing pgpass might be interesting, I guess. Though 
for local databases using peer authentication is likely better than saving 
passwords in a file.

> Adding a similar feature for PostgreSQL will also require a change to the 
> psql program to specify and handle --login-path used for authentication. This 
> may also be the case for some of the other pg_* utilities.
> 
> I think adding a feature like mysql_config_editor to PostgreSQL is an easy 
> way to set up multiple "personalities" for connecting to different PostgreSQL 
> servers. The password protection will deter the curious user from gaining 
> access to your data. It will not stop a determined hacker, but the idea is to 
> make it more difficult.
> 
> Other than this mailing list, is there a way to make a feature request for 
> PostgreSQL?

Cheers,
  Steve



Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-04-30 Thread Steve Atkins

> On Apr 30, 2017, at 4:37 AM, Thomas Güttler  wrote:
> 
> Is is possible that PostgreSQL will replace these building blocks in the 
> future?
> 
> - redis (Caching)
> - rabbitmq (amqp)
> - s3 (Blob storage)

No.

You can use postgresql for caching, but caches don't require the data
durability that a database offers, and can be implemented much more
efficiently.

You can use postgresql to provide message queue services and it
does so reasonably well, particularly when the messages are generated within
the database. But it's not going to do so as efficiently, or be as easy to
monitor, to make highly redundant or to scale across a whole datacenter
as a dedicated message queue service.

You could use postgresql to store binary blobs, but it'd be a horrifically
inefficient way to do it. (Using postgresql to store the metadata, while
the content is stored elsewhere, sure).

Use the right tool for the job.

Cheers,
  Steve

> 
> One question is "is it possible?", then next "is it feasible?"
> 
> I think it would be great if I could use PG only and if I could
> avoid the other types of servers.
> 
> The benefit is not very obvious at first sight. I think it will save you
> time, money and energy only in the long run.
> 
> What do you think?
> 
> Regards,
>  Thomas Güttler
> 
> 
> -- 
> I am looking for feedback for my personal programming guidelines:
> https://github.com/guettli/programming-guidelines
> 
> 





  1   2   3   4   5   >