Re: [GENERAL] Missing uuid_generate_v1()

2010-10-07 Thread Mike Christensen
On Wed, Oct 6, 2010 at 10:47 PM, Magnus Hagander  wrote:
> On Thu, Oct 7, 2010 at 05:09, Mike Christensen  wrote:
>> On Wed, Oct 6, 2010 at 7:56 PM, Tom Lane  wrote:
>>> Mike Christensen  writes:
>>>> On Wed, Oct 6, 2010 at 7:38 PM, Tom Lane  wrote:
>>>>> If you have a libossp-uuid.so.16, you might try symlinking libuuid.so.16
>>>>> to that instead of carrying a separate file.
>>>
>>>> So now what you're saying is if it's not broke, fix it till it is :)
>>>
>>> Well, it's hard to argue with that position ;-).  But I'll try anyway:
>>> the platform-provided version of the library will be updated for bug
>>> fixes, compatibility rebuilds, etc.  Your private copy won't be, unless
>>> you remember to do it.  Eventually that's gonna bite ya.
>>>
>>> Of course the best fix would be for EDB to ship a build of Postgres
>>> that actually follows the platform-standard naming convention for this
>>> library.  I'm still wondering why they're linking to libuuid.so.
>>> Dave?
>>>
>>>                        regards, tom lane
>>>
>>
>> Agreed.  However, if you go to
>> http://www.postgresql.org/download/linux then it points you right to
>> EnterpriseDB to download a bin installer.  So your choices are:
>
> The very first paragraph says we recommend you use apt packages.
>
> I wanted to move the packages higher up in the actual list, but was
> voted down ;)
>
>>
>> 1) Use apt-get: This won't give you 9.0 yet (I think 10.04 only has 8.4)
>
> It is. You need to enable the PPA at
> https://launchpad.net/~pitti/+archive/postgresql, per
> http://www.piware.de/2010/09/postgresql-9-0-final-released/.
>
> We should probably add a link to that PPA from the download page.
> Martin - any reason *not* to do that?
>
>
>
>> 2) Build it yourself (I'm not enough of a hacker to do this, probably
>> the case with most "novice" users)
>
> Yeah, that's definitely not recommended.
>
>> 3) One click installer maintained by EDB.
>> 4) Some other repository out there?  Dunno if anyone maintains one.
>>
>> I agree, EDB needs to make this "just work" on the latest Ubuntu.
>
> Yes, in this case it looks like a bug in the EDB installer that should be 
> fixed.
>
> But even when that is fixed, the recommendation is still to use the
> APT packages since you get all the package management integration etc.
>
> BTW - Dave, I notice the edb page says only ubuntu 8.04 and up, fedora
> 10 and up, etc  are supported by the installers from 9.0 and newer -
> the download page on pg.org should probably be updated with that
> information.
>
> --
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/
>

Thanks!  In the future, I'll use the apt package..  I prefer this
option when available..

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres won't start after setting ssl=on

2010-10-09 Thread Mike Christensen
Hi, I'm trying to require SSL for Postgres connections from certain
IPs..  This is on Postgres 9.0.

First, I've followed the directions at:

http://www.postgresql.org/docs/9.0/static/ssl-tcp.html

I've created the files server.crt and server.key.  I've also removed
the passphrase from the key so Postgres can start automatically.
Finally, I ran:

chmod 0600 server.key

The permissions on server.key are now:

-rw------- 1 root root 887 Oct 10 03:42 server.key

However, when I set ssl = on in postgresql.conf and start the server,
I get the logged error:

2010-10-10 03:47:07 UTC FATAL:  could not load private key file
"server.key": Permission denied

I'm logged on as root.  Any ideas?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres won't start after setting ssl=on

2010-10-09 Thread Mike Christensen
Sweet!  That fixed it..  Rock on..

Of course now let's see if I can connect from my Mac client :)
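
For anyone else hitting this, the whole sequence boils down to something
like the following (a sketch; the data directory path, the postgres user
name and the CN are assumptions that depend on your install):

cd /var/lib/pgsql/data
# self-signed certificate and a key with no passphrase
openssl req -new -x509 -days 365 -nodes -text \
  -out server.crt -keyout server.key -subj "/CN=db.example.com"
# the key must be owned and readable by the user the server runs as
chown postgres:postgres server.key server.crt
chmod 0600 server.key
# then set ssl = on in postgresql.conf and restart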

On Sat, Oct 9, 2010 at 9:00 PM, Ben Carbery  wrote:
> The private keys needs to be readable by the same user the server runs
> under. This is distribution-dependent and may not be 'root'.
> In my case I run Red Hat which uses the 'postgres' user, so:
>
> chown postgres.postgres /var/lib/pgsql/data/server.*
>
>
> On Sun, Oct 10, 2010 at 2:52 PM, Mike Christensen 
> wrote:
>>
>> Hi, I'm trying to require SSL for Postgres connections from certain
>> IPs..  This is on Postgres 9.0.
>>
>> First, I've followed the directions at:
>>
>> http://www.postgresql.org/docs/9.0/static/ssl-tcp.html
>>
>> I've created the files server.crt and server.key.  I've also removed
>> the passphrase from the key so Postgres can start automatically.
>> Finally, I ran:
>>
>> chmod 0600 server.key
>>
>> The permissions on server.key are now:
>>
>> -rw------- 1 root root 887 Oct 10 03:42 server.key
>>
>> However, when I set ssl = on in postgresql.conf and start the server,
>> I get the logged error:
>>
>> 2010-10-10 03:47:07 UTC FATAL:  could not load private key file
>> "server.key": Permission denied
>>
>> I'm logged on as root.  Any ideas?  Thanks!
>>
>> Mike
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>
>
>
> --
>  ---
> "Because it that the times revive as time is fresh somehow, and it to feel
> wins why, and, as for it, all forget an old thing" - Japanese saying
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres won't start after setting ssl=on

2010-10-09 Thread Mike Christensen
Yup, my bad..  I should have noticed all the other files were owned by
postgres (and I assume that's what the process is running under)..
I'm still a Unix newbie, but learning quickly..

Everything's working, and to my surprise pgAdmin connected using SSL
on the first try..  No need to mess with anything on the client side
of things..

So now I have my database, which only accepts TCP/IP "host"
connections from the IP addresses of my web servers, and then requires
"hostssl" from my home cable modem IP.  Everything else is blocked.
Should be decently secure :)
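
In pg_hba.conf terms the policy looks roughly like this (the addresses
below are placeholders, not my real ones):

# TYPE    DATABASE  USER  CIDR-ADDRESS        METHOD
host      all       all   203.0.113.10/32     md5    # web servers
hostssl   all       all   198.51.100.20/32    md5    # home IP, SSL required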

Mike

On Sat, Oct 9, 2010 at 9:04 PM, Darren Duncan  wrote:
> The owner of these new files needs to be the same as that of your Pg data
> dir in general or postgresql.conf specifically, and that owner should be the
> same user that the Pg server process runs as.  Are you running Pg as root?  (In
> any event, you should have another user; running programs or servers as root
> when they don't need root powers is generally a bad idea.) -- Darren Duncan
>
> Mike Christensen wrote:
>>
>> Hi, I'm trying to require SSL for Postgres connections from certain
>> IPs..  This is on Postgres 9.0.
>>
>> First, I've followed the directions at:
>>
>> http://www.postgresql.org/docs/9.0/static/ssl-tcp.html
>>
>> I've created the files server.crt and server.key.  I've also removed
>> the passphrase from the key so Postgres can start automatically.
>> Finally, I ran:
>>
>> chmod 0600 server.key
>>
>> The permissions on server.key are now:
>>
>> -rw------- 1 root root 887 Oct 10 03:42 server.key
>>
>> However, when I set ssl = on in postgresql.conf and start the server,
>> I get the logged error:
>>
>> 2010-10-10 03:47:07 UTC FATAL:  could not load private key file
>> "server.key": Permission denied
>>
>> I'm logged on as root.  Any ideas?  Thanks!
>>
>> Mike
>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres won't start after setting ssl=on

2010-10-10 Thread Mike Christensen
While I do appreciate the vote of confidence, rest assured you will
never see a post from me that starts with "So I've been hacking the pg
code and..."

On Sat, Oct 9, 2010 at 11:54 PM, Scott Marlowe  wrote:
> On Sat, Oct 9, 2010 at 10:04 PM, Darren Duncan  
> wrote:
>> The owner of these new files needs to be the same as that of your Pg data
>> dir in general or postgresql.conf specifically, and that owner should be the
>> same user that the Pg server process runs as.  Are you running Pg as root?  (In
>> any event, you should have another user; running programs or servers as root
>> when they don't need root powers is generally a bad idea.) -- Darren Duncan
>
> Actually, unless you've hacked the pg code, it won't start as root.
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need some help setting up pgAgent

2010-10-17 Thread Mike Christensen
Okay my required n00b question of the week, hopefully this'll be an easy one..

I decided to give pgAgent a shot, because there's this stored sproc
(sorry, function) I need to run nightly and I think spending hours
figuring out pgAgent would somehow be better than the 3 minutes it
would take to add a cron job that calls psql..
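
For what it's worth, that cron route would just be a single crontab line,
something like this (names made up):

0 3 * * * psql -U someuser -d KitchenPC -c "SELECT run_nightly_update();"

But anyway, pgAgent it is.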

I'm following the directions at:
http://www.pgadmin.org/docs/1.6/pgagent-install.html

However, the heading "Daemon installation on *nix" leads me to believe
that running pgagent from the command line should install the daemon,
and then things should be happily running in the background.  However,
when I run the command line as instructed, it just sits there and does
nothing.  I never get returned to the bash prompt, it's as if the
daemon is just running in interactive mode until I stop the process..
Maybe I'm missing something, or I have to add that command line to
some config file.  I'm not a unix guru but from my past experiences, I
can usually run daemons with "/etc/init.d/blah start" - pgAgent
doesn't seem to be installed in that manner.

Second, assuming the daemon is running in interactive mode, like..
now what?  The docs have no next steps.  How do I create a new job to
run my SQL function every night?  From some documentation from
Postgres Plus, I was led to believe that there should be a "Jobs" tree
in pgAdmin created..  But I see nothing.  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:07 AM, Dave Page  wrote:
> On Sun, Oct 17, 2010 at 2:32 PM, Mike Christensen  wrote:
>> Okay my required n00b question of the week, hopefully this'll be an easy 
>> one..
>>
>> I decided to give pgAgent a shot, because there's this stored sproc
>> (sorry, function) I need to run nightly and I think spending hours
>> figuring out pgAgent would somehow be better than the 3 minutes it
>> would take to add a cron job that calls psql..
>>
>> I'm following the directions at:
>> http://www.pgadmin.org/docs/1.6/pgagent-install.html
>
> Why such an old version? Try
> http://www.pgadmin.org/docs/1.12/pgagent-install.html
>
>> However, the heading "Daemon installation on *nix" leads me to believe
>> that running pgagent from the command line should install the daemon,
>> and then things should be happily running in the background.  However,
>> when I run the command line as instructed, it just sits there and does
>> nothing.  I never get returned to the bash prompt, it's as if the
>> daemon is just running in interactive mode until I stop the process..
>
> Did you use the -f option?
>
>> Maybe I'm missing something, or I have to add that command line to
>> some config file.  I'm not a unix guru but from my past experiences, I
>> can usually run daemons with "/etc/init.d/blah start" - pgAgent
>> doesn't seem to be installed in that manner.
>
> If you built from source, it's up to you to create the startup script.
> The pgAgent source tree doesn't contain OS-specific startup scripts.
>
>> Second, assuming the daemon is running in interactive mode, like..
>> now what?  The docs have no next steps.  How do I create a new job to
>> run my SQL function every night?  From some documentation from
>> Postgres Plus, I was led to believe that there should be a "Jobs" tree
>> in pgAdmin created..  But I see nothing.  Thanks!
>
> Did you setup the database per the part of the docs prior to that section?
>
> The "next steps" are most certainly there - see the index page at
> http://www.pgadmin.org/docs/1.12/pgagent.html
>
> --
> Dave Page
> Blog: http://pgsnake.blogspot.com
> Twitter: @pgsnake
>
> EnterpriseDB UK: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Thanks for the info!  First, I have no idea what version of pgAgent it
is.  Where does it say?  I installed it using apt-get..

apt-cache search pgagent
pgagent - job scheduler for PostgreSQL

It doesn't say the version, but that's the only match..  Maybe there's
a bin file or something better?

I ran it using:

/usr/bin/pgagent hostaddr=127.0.0.1 dbname=KitchenPC user=root

and it immediately returned to the bash prompt, and it appears to be running:

ps | grep pgagent
 4134 pts/0    00:00:00 pgagent

I ran the .sql file that it came with, and it created a new catalog
called pgAgent.

However, I'm still not sure what to do next.  According to the
instructions, pgAdmin should show a node called "Jobs".  I don't see
that.  I'm using pgAdmin for OS/X and connecting to the server from
remote over SSL.  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:21 AM, Mike Christensen  wrote:
> On Mon, Oct 18, 2010 at 2:07 AM, Dave Page  wrote:
>> On Sun, Oct 17, 2010 at 2:32 PM, Mike Christensen  wrote:
>>> Okay my required n00b question of the week, hopefully this'll be an easy 
>>> one..
>>>
>>> I decided to give pgAgent a shot, because there's this stored sproc
>>> (sorry, function) I need to run nightly and I think spending hours
>>> figuring out pgAgent would somehow be better than the 3 minutes it
>>> would take to add a cron job that calls psql..
>>>
>>> I'm following the directions at:
>>> http://www.pgadmin.org/docs/1.6/pgagent-install.html
>>
>> Why such an old version? Try
>> http://www.pgadmin.org/docs/1.12/pgagent-install.html
>>
>>> However, the heading "Daemon installation on *nix" leads me to believe
>>> that running pgagent from the command line should install the daemon,
>>> and then things should be happily running in the background.  However,
>>> when I run the command line as instructed, it just sits there and does
>>> nothing.  I never get returned to the bash prompt, it's as if the
>>> daemon is just running in interactive mode until I stop the process..
>>
>> Did you use the -f option?
>>
>>> Maybe I'm missing something, or I have to add that command line to
>>> some config file.  I'm not a unix guru but from my past experiences, I
>>> can usually run daemons with "/etc/init.d/blah start" - pgAgent
>>> doesn't seem to be installed in that manner.
>>
>> If you built from source, it's up to you to create the startup script.
>> The pgAgent source tree doesn't contain OS-specific startup scripts.
>>
>>> Second, assuming the daemon is running in interactive mode, like..
>>> now what?  The docs have no next steps.  How do I create a new job to
>>> run my SQL function every night?  From some documentation from
>>> Postgres Plus, I was led to believe that there should be a "Jobs" tree
>>> in pgAdmin created..  But I see nothing.  Thanks!
>>
>> Did you setup the database per the part of the docs prior to that section?
>>
>> The "next steps" are most certainly there - see the index page at
>> http://www.pgadmin.org/docs/1.12/pgagent.html
>>
>> --
>> Dave Page
>> Blog: http://pgsnake.blogspot.com
>> Twitter: @pgsnake
>>
>> EnterpriseDB UK: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>
> Thanks for the info!  First, I have no idea what version of pgAgent it
> is.  Where does it say?  I installed it using apt-get..
>
> apt-cache search pgagent
> pgagent - job scheduler for PostgreSQL
>
> It doesn't say the version, but that's the only match..  Maybe there's
> a bin file or something better?
>
> I ran it using:
>
> /usr/bin/pgagent hostaddr=127.0.0.1 dbname=KitchenPC user=root
>
> and it immediately returned to the bash prompt, and it appears to be running:
>
> ps | grep pgagent
>  4134 pts/0    00:00:00 pgagent
>
> I ran the .sql file that it came with, and it created a new catalog
> called pgAgent.
>
> However, I'm still not sure what to do next.  According to the
> instructions, pgAdmin should show a node called "Jobs".  I don't see
> that.  I'm using pgAdmin for OS/X and connecting to the server from
> remote over SSL.  Thanks!
>
> Mike
>

Sorry, it looks like it defaulted to the wrong DB.  I created the
schema in the "postgres" database and now I see a Jobs node..

One last question - How do I configure pgAgent to start automatically?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:34 AM, Dave Page  wrote:
> On Mon, Oct 18, 2010 at 10:31 AM, Mike Christensen  wrote:
>> Sorry, it looks like it defaulted to the wrong DB.  I created the
>> schema in the "postgres" database and now I see a Jobs node..
>
> :-)
>
>> One last question - How do I configure pgAgent to start automatically?
>
> You'll need to put together a startup script for your OS, if the
> debian/ubuntu packager hasn't done so already. I'd suggest copying one
> from /etc/init.d and tweaking it as required.

This is a bit outside my comfort zone, do you have a startup script
you can share with me?  I can copy it into /etc/init.d - Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 2:43 AM, Dave Page  wrote:
> On Mon, Oct 18, 2010 at 10:41 AM, Mike Christensen  wrote:
>> On Mon, Oct 18, 2010 at 2:34 AM, Dave Page  wrote:
>>> On Mon, Oct 18, 2010 at 10:31 AM, Mike Christensen  
>>> wrote:
>>>> Sorry, it looks like it defaulted to the wrong DB.  I created the
>>>> schema in the "postgres" database and now I see a Jobs node..
>>>
>>> :-)
>>>
>>>> One last question - How do I configure pgAgent to start automatically?
>>>
>>> You'll need to put together a startup script for your OS, if the
>>> debian/ubuntu packager hasn't done so already. I'd suggest copying one
>>> from /etc/init.d and tweaking it as required.
>>
>> This is a bit outside my comfort zone, do you have a startup script
>> you can share with me?  I can copy it into /etc/init.d - Thanks!
>
> No I don't (I use a Mac - and never start pgagent automatically
> anyway). Can you copy the PostgreSQL script and make appropriate
> changes?

Okay I found one that I can use..

One question..  Should the connection string in the script have the
password for "root" hard coded in it?  Or will it use a password from
~/.pgpass automatically?  If so, what user account will it find the
.pgpass file under?  Thanks!
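
For reference, from what I've read ~/.pgpass is one entry per line in the
form hostname:port:database:username:password, and the file must be mode
0600.  A made-up example entry:

127.0.0.1:5432:KitchenPC:postgres:supersecret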

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 3:07 AM, Dave Page  wrote:
> On Mon, Oct 18, 2010 at 10:46 AM, Mike Christensen  wrote:
>> Okay I found one that I can use..
>>
>> One question..  Should the connection string in the script have the
>> password for "root" hard coded in it?  Or will it use a password from
>> ~/.pgpass automatically?  If so, what user account will it find the
>> .pgpass file under?  Thanks!
>
> Have the script start pgagent under the postgres account, e.g.:
>
> su - postgres -c '/path/to/pgagent'
>
> Then it should be able to use postgres' pgpass file. Don't put the
> password in the connection string!

Ok, that worked..  I can at least start and stop it now, and it
remains running when I'm logged off..
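
In case it helps anyone else, the script I ended up with is roughly this
shape (a sketch only; the paths, connection string and stop logic are
assumptions you'd adjust for your own setup):

#!/bin/sh
# /etc/init.d/pgagent -- minimal start/stop wrapper for pgAgent
PGAGENT=/usr/bin/pgagent
CONNSTR="hostaddr=127.0.0.1 dbname=postgres user=postgres"

case "$1" in
  start)
    # run as postgres so ~postgres/.pgpass supplies the password
    su - postgres -c "$PGAGENT $CONNSTR"
    ;;
  stop)
    killall pgagent
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0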

So does anything in /etc/init.d get automatically run when the server boots?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need some help setting up pgAgent

2010-10-18 Thread Mike Christensen
On Mon, Oct 18, 2010 at 3:37 AM, Dave Page  wrote:
> On Mon, Oct 18, 2010 at 11:34 AM, Mike Christensen  wrote:
>> On Mon, Oct 18, 2010 at 3:07 AM, Dave Page  wrote:
>>> On Mon, Oct 18, 2010 at 10:46 AM, Mike Christensen  
>>> wrote:
>>>> Okay I found one that I can use..
>>>>
>>>> One question..  Should the connection string in the script have the
>>>> password for "root" hard coded in it?  Or will it use a password from
>>>> ~/.pgpass automatically?  If so, what user account will it find the
>>>> .pgpass file under?  Thanks!
>>>
>>> Have the script start pgagent under the postgres account, e.g.:
>>>
>>> su - postgres -c '/path/to/pgagent'
>>>
>>> Then it should be able to use postgres' pgpass file. Don't put the
>>> password in the connection string!
>>
>> Ok, that worked..  I can at least start and stop it now, and it
>> remains running when I'm logged off..
>>
>> So does anything in /etc/init.d get automatically run when the server boots?
>
> No, you have to enable it. On redhat based distros, you'd do something
> like "chkconfig  on". On Debian based distros, I believe
> you use the update-rc.d command.

Well, I guess that worked:

etc/init.d# update-rc.d pgagent defaults
 Adding system startup for /etc/init.d/pgagent ...
   /etc/rc0.d/K20pgagent -> ../init.d/pgagent
   /etc/rc1.d/K20pgagent -> ../init.d/pgagent
   /etc/rc6.d/K20pgagent -> ../init.d/pgagent
   /etc/rc2.d/S20pgagent -> ../init.d/pgagent
   /etc/rc3.d/S20pgagent -> ../init.d/pgagent
   /etc/rc4.d/S20pgagent -> ../init.d/pgagent
   /etc/rc5.d/S20pgagent -> ../init.d/pgagent

Thanks!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Missing uuid_generate_v1()

2010-10-22 Thread Mike Christensen
Oh, BTW, Tom - You were right about symlinking..  What I did totally
hosed Apache (though it didn't crash until 2 days later, then wouldn't
restart)..  Apache then griped about libuuid not loading.  I had to
get my friend Brian (Linux guru) to SSH in and clean up the whole mess
I made..  I think it's legit now..

On Fri, Oct 22, 2010 at 7:00 PM, Alexia Lau  wrote:
> Does anyone know where I can see what’s already fixed in 9.0.2?
>
> Thanks,
>
> Alexia
>
> On 2010-10-07 09:54, Dave Page wrote:
>> On Thu, Oct 7, 2010 at 3:56 AM, Tom Lane  wrote:
>> > Mike Christensen  writes:
>> >> On Wed, Oct 6, 2010 at 7:38 PM, Tom Lane  wrote:
>> >>> If you have a libossp-uuid.so.16, you might try symlinking
>> >>> libuuid.so.16
>> >>> to that instead of carrying a separate file.
>> >
>> >> So now what you're saying is if it's not broke, fix it till it is :)
>> >
>> > Well, it's hard to argue with that position ;-).  But I'll try anyway:
>> > the platform-provided version of the library will be updated for bug
>> > fixes, compatibility rebuilds, etc.  Your private copy won't be, unless
>> > you remember to do it.  Eventually that's gonna bite ya.
>> >
>> > Of course the best fix would be for EDB to ship a build of Postgres
>> > that actually follows the platform-standard naming convention for this
>> > library.  I'm still wondering why they're linking to libuuid.so.
>> > Dave?
>>
>> Because that's what comes with ossp-uuid 1.6.2, and I assume is what
>> configure chooses when we use --with-ossp-uuid:
>>
>> [buildf...@bf2-linux ~]$ uuid-config --libs
>> -luuid
>>
>> FYI, there was also a bug in the installer which didn't copy the
>> library properly, which has been fixed for 9.0.2.
>>
>>
>>
>>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pgAdmin on Mac connecting to Postgres 9 on Linux - SSL/timeout issue

2010-10-29 Thread Mike Christensen
I have a Postgres 9 server running on a server out on the Internet and
I connect to it with pgAdmin on OS/X over an SSL connection.

I notice if I keep the connection open and idle for maybe an hour or
so, when I try to run a query it either times out or pgAdmin just
kinda freezes up and I have to force quit..  Is there some sort of
idle timeout setting on SSL connections, or maybe it has to
re-negotiate the connection after a certain amount of idle time and
it's not doing that right?

Anyone run into this before?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgAdmin on Mac connecting to Postgres 9 on Linux - SSL/timeout issue

2010-10-30 Thread Mike Christensen
Maybe that's it..  It's definitely some sort of SSL thing since it
didn't start happening until I enabled SSL.  I guess I'll just have to
close pgAdmin when I'm not using it..

On Sat, Oct 30, 2010 at 12:57 AM, Basil Bourque  wrote:
>
> On Oct 29, 2010, at 15:36, Mike Christensen wrote:
>
>> I have a Postgres 9 server running on a server out on the Internet and
>> I connect to it with pgAdmin on OS/X over an SSL connection.
>>
>> I notice if I keep the connection open and idle for maybe an hour or
>> so, when I try to run a query it either times out or pgAdmin just
>> kinda freezes up and I have to force quit..  Is there some sort of
>> idle timeout setting on SSL connections, or maybe it has to
>> re-negotiate the connection after a certain amount of idle time and
>> it's not doing that right?
>>
>> Anyone run into this before?  Thanks!
>
> I don't know about pgAdmin or SSL, but I have heard that routers and switches
> can have a timeout limit: "the idle time after which an established
> connection of any protocol closes".
>
> http://search.gmane.org/?query=router+timeout&group=gmane.comp.lang.inug-4d.tech
>
> --Basil Bourque
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] large xml database

2010-10-30 Thread Mike Christensen
Gz.

Maybe you can lease a bunch of Amazon EC2 high computing slices and
parallelize it?  I think throwing ridiculous amounts of hardware at
things is always the best approach.

On Sat, Oct 30, 2010 at 2:48 PM, Viktor Bojović
 wrote:
> Hi,
> I have a very big XML document, larger than 50GB, and want to import
> it into the database and transform it to a relational schema.
> When splitting this document into smaller independent XML documents I get
> ~11.1 million XML documents.
> I have spent lots of time trying to find the fastest way to transform all
> this data, but every time I give up because it takes too much time.
> Sometimes it would take more than a month if not stopped.
> I have tried to insert each line as varchar into the database and parse it
> using plperl regexes..
> I have also tried to store every document as XML and parse it, but that is
> also too slow.
> I have tried to store every document as varchar, but that is also slow when
> using regexes to get the data.
> Many attempts have failed because 8GB of RAM and 10GB of swap were not enough.
> Also, sometimes I get that more than 2^32 operations were performed, and the
> functions stopped working.
> I just wanted to ask if someone knows how to speed this up.
>
> thanx in advance
> --
> ---
> Viktor Bojović
> ---
> Wherever I go, Murphy goes with me
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is Dialect

2010-11-22 Thread Mike Christensen
If I understand your question correctly, a Dialect is an abstraction
layer that allows Hibernate to talk with different database backends
(MySQL, PG, Oracle, SQL Server, etc.), since different databases have
different syntax, features, and so on.  This seems more of a
Hibernate question though, unless there's a "Dialect" feature in PG
that I'm unaware of.
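
For context, the dialect is just something you point Hibernate at in its
configuration; the stock Postgres one, for example, is selected with a
property line like:

hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect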

On Sun, Nov 21, 2010 at 11:32 PM, Adarsh Sharma
 wrote:
> Dear all,
>
> I am reading about the Dialects of different databases. Yet I can't understand
> why a dialect is needed for Postgres or any other database, e.g. why Hibernate
> uses a Dialect for every database in its ORM.
> What is it, and
> how can we create our own Dialect?
>
> Thanks in Advance
> Adarsh Sharma
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What is Dialect

2010-11-22 Thread Mike Christensen
On Mon, Nov 22, 2010 at 12:26 AM, Andreas  wrote:
> On 22.11.2010 08:32, Adarsh Sharma wrote:
>>
>> I am reading about the Dialects of different databases. Yet I can't understand
>> why a dialect is needed for Postgres or any other database, e.g. why Hibernate
>> uses a Dialect for every database in its ORM.
>> What is it, and
>
>
> As far as I know, the term dialect is used to express the fact that each
> DBMS has its own implementation of SQL, though they have a lot that works
> the same.
> For example, you might find that most of PG's commands work similarly to,
> say, Oracle's, but there are still differences.
> Or PostgreSQL has "serials" while MySQL has "autoincrement" (not sure) and
> MS Access has "AutoValue" or however it might be in English. That's three
> different names for basically the same thing.
>
> That's what I understood as dialect in the realm of IT systems.
> Maybe Hibernate just uses a certain subset of commands that is proven to
> work the same in every "dialect". This has the benefit that there is no
> special Hibernate for every DBMS, but on the other hand it forgoes the
> advantages of an individual DBMS that are not found in other DBMSs'
> "dialects" of SQL.
>
>
>> How can we create our own Dialect ?
>
> You don't.
> They happen.

Well, in Hibernate you /could/ create your own Dialect if you wanted
to (it's an abstract base class), but dialects are already implemented
for almost any DBMS you can think of these days.  It's more common to
override something in an existing dialect class that you want to
behave differently.
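
A sketch of what such an override can look like (class name and the
mapping here are made up; registerColumnType() is the usual hook in
Hibernate's Dialect base class):

import java.sql.Types;
import org.hibernate.dialect.PostgreSQLDialect;

public class MyPostgreSQLDialect extends PostgreSQLDialect {
    public MyPostgreSQLDialect() {
        super();
        // have schema generation map OTHER-typed columns to Postgres uuid
        registerColumnType(Types.OTHER, "uuid");
    }
}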

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] json data type

2010-11-23 Thread Mike Christensen
> A project in which I'm involved makes use of the json data type for storing
> some data sets. I have read that there is some work in progress to implement
> json datatype support in Postgres. So my question is: when, and in which
> version of Postgres, can we expect this implementation?

I actually store a pretty good amount of JSON serialized data for my
website, kitchenpc.com which uses Postgres 9.0.  I don't have any
problems with this, I just use a text data type.  I'd be interested in
knowing what functionality would come with a JSON data type, like
whether you could query for all rows with an X property > 5 or something,
e.g. {Y: 1, X: 6}?
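
For what it's worth, later Postgres releases did eventually grow a native
json type with operators, and that kind of property filter ends up looking
roughly like this (table and column names made up):

SELECT * FROM docs WHERE (payload->>'X')::int > 5;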

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Type cast removal - proposed exceptions for xml,enum

2010-12-06 Thread Mike Christensen
On Mon, Dec 6, 2010 at 7:49 PM, Craig Ringer
 wrote:
> Hi all
>
> I'm finding a few areas where PostgreSQL's refusal to implicitly cast
> from 'text' to another type is causing real problems, particularly when
> using the PgJDBC driver. I'd like to propose a couple of relaxations of
> the implicit cast rules for certain text-like types:
>
> - user-defined enums; and
> - xml
>
> ('json' should also be castable from text when introduced).
>
> Why? Because client interfaces don't always know about the Pg-specific
> types. They can use text-like types just fine if they can use text-typed
> parameters when inserting/updating them. I feel that PostgreSQL is
> overzealous to the point of being counterproductive by refusing to
> implicitly cast these types, as they are subject to validation by the
> input function in any case.
>
> Can anybody show me a case where permitting implicit casts from text for
> enums, xml or json types might introduce an error or cause SQL with a
> mistake in it to execute instead of failing when it should?
>
> This is driving me nuts when working with PgJDBC via various ORM layers
> (I know, I know, but they're life at this point) that would work happily
> with these types if they were implicitly castable to/from strings, but
> don't understand how to explicitly specify these postgresql-specific
> types when talking to the JDBC layer.

I totally must +1 this idea, especially for ENUMs.  It was a
complete nightmare getting nHibernate to map a Postgres ENUM to a C#
Enum automatically.  I had to hack around all sorts of things, mostly
because the SQL syntax would assume the data was a string.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID column as pimrary key?

2011-01-05 Thread Mike Christensen
2011/1/5 Grzegorz Jaśkiewicz :
> On Wed, Jan 5, 2011 at 2:37 PM, Scott Ribe  
> wrote:
>> On Jan 5, 2011, at 1:31 AM, Radosław Smogura wrote:
>>
>>> * simple to generate, and 128bit random is almost globally unique,
>>
>> Almost? Should be totally unique, as long as your random source is decent 
>> quality.
>
> But I would never rely on that alone. You always have a strategy in
> place, in case there's a duplicate.

As long as all your UUIDs are generated with the same algorithm, they
are, for all practical purposes, guaranteed to be unique.
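
For example, a uuid primary key with a generated default looks like this
(a sketch; it assumes the uuid-ossp contrib module is installed, which is
what provides uuid_generate_v4()):

CREATE TABLE accounts (
    account_id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    alias      text NOT NULL
);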

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing Postgres with MS SQL Server 2005

2011-01-20 Thread Mike Christensen
> I have a machine on which MS SQL Server 2005 is already installed. Now I
> want to install PostgreSQL 8.3 along with MS SQL Server.
>
>
>
> Can  this combination cause  any problems to any of the database servers?

You should not run into any problems; the two servers listen on different
ports by default (5432 for Postgres, 1433 for SQL Server), so both should
run happily side by side.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
I'm trying to install the Postgres gem on OS/X but getting errors no
matter what I try..  In theory, it should be as simple as "gem install
postgres", correct?  Here's what I get:

>sudo gem install postgres
Building native extensions.  This could take a while...
ERROR:  Error installing postgres:
ERROR: Failed to build gem native extension.

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
extconf.rb:73: command not found: pg_config --bindir

===   WARNING   ===

You are building this extension on OS X without setting the
ARCHFLAGS environment variable, and PostgreSQL does not appear
to have been built as a universal binary. If you are seeing this
message, that means that the build will probably fail.

Try setting the environment variable ARCHFLAGS
to '-arch i386' before building.

For example:
(in bash) $ export ARCHFLAGS='-arch i386'
(in tcsh) $ setenv ARCHFLAGS '-arch i386'

Then try building again.

===
extconf.rb:46: command not found: pg_config --includedir
extconf.rb:53: command not found: pg_config --libdir
checking for main() in -lpq... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir

--ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
--with-pgsql-dir
--without-pgsql-dir
--with-pgsql-include
--without-pgsql-include=${pgsql-dir}/include
--with-pgsql-lib
--without-pgsql-lib=${pgsql-dir}/lib
--with-pqlib
--without-pqlib
Could not find PostgreSQL build environment (libraries & headers):
Makefile not created


Gem files will remain installed in
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
Results logged to
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out

I have Postgres 9.0 installed at /Library/PostgreSQL/9.0.  Any ideas?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
Now I get:

/Library/PostgreSQL/9.0>export ARCHFLAGS='-arch i386'
/Library/PostgreSQL/9.0>sudo -E gem install postgres
Password:
Building native extensions.  This could take a while...
ERROR:  Error installing postgres:
ERROR: Failed to build gem native extension.

/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
extconf.rb:73: command not found: pg_config --bindir
extconf.rb:46: command not found: pg_config --includedir
extconf.rb:53: command not found: pg_config --libdir
checking for main() in -lpq... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir

--ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
--with-pgsql-dir
--without-pgsql-dir
--with-pgsql-include
--without-pgsql-include=${pgsql-dir}/include
--with-pgsql-lib
--without-pgsql-lib=${pgsql-dir}/lib
--with-pqlib
--without-pqlib
Could not find PostgreSQL build environment (libraries & headers):
Makefile not created


Gem files will remain installed in
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
Results logged to
/Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out
/Library/PostgreSQL/9.0>


On Fri, Jan 28, 2011 at 5:46 PM, Thom Brown  wrote:
> On 29 January 2011 01:37, Mike Christensen  wrote:
>> I'm trying to install the Postgres gem on OS/X but getting errors no
>> matter what I try..  In theory, it should be as simple as "gem install
>> postgres", correct?  Here's what I get:
>>
>>>sudo gem install postgres
>> Building native extensions.  This could take a while...
>> ERROR:  Error installing postgres:
>>        ERROR: Failed to build gem native extension.
>>
>> /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby 
>> extconf.rb
>> extconf.rb:73: command not found: pg_config --bindir
>>
>>                ===   WARNING   ===
>>
>>                You are building this extension on OS X without setting the
>>                ARCHFLAGS environment variable, and PostgreSQL does not appear
>>                to have been built as a universal binary. If you are seeing 
>> this
>>                message, that means that the build will probably fail.
>>
>>                Try setting the environment variable ARCHFLAGS
>>                to '-arch i386' before building.
>>
>>                For example:
>>                (in bash) $ export ARCHFLAGS='-arch i386'
>>                (in tcsh) $ setenv ARCHFLAGS '-arch i386'
>>
>>                Then try building again.
>>
>>                ===
>>                extconf.rb:46: command not found: pg_config --includedir
>> extconf.rb:53: command not found: pg_config --libdir
>> checking for main() in -lpq... no
>> *** extconf.rb failed ***
>> Could not create Makefile due to some reason, probably lack of
>> necessary libraries and/or headers.  Check the mkmf.log file for more
>> details.  You may need configuration options.
>>
>> Provided configuration options:
>>        --with-opt-dir
>>        --without-opt-dir
>>        --with-opt-include
>>        --without-opt-include=${opt-dir}/include
>>        --with-opt-lib
>>        --without-opt-lib=${opt-dir}/lib
>>        --with-make-prog
>>        --without-make-prog
>>        --srcdir=.
>>        --curdir
>>        
>> --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
>>        --with-pgsql-dir
>>        --without-pgsql-dir
>>        --with-pgsql-include
>>        --without-pgsql-include=${pgsql-dir}/include
>>        --with-pgsql-lib
>>        --without-pgsql-lib=${pgsql-dir}/lib
>>        --with-pqlib
>>        --without-pqlib
>> Could not find PostgreSQL build environment (libraries & headers):
>> Makefile not created
>>
>>
>> Gem files will remain installed in
>> /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
>> Results logged to
>> /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out
>>
>> I have Postg

Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
So I installed the postgres-pg library, which if I understand
correctly is a Ruby implementation of the adapter (as opposed to
native code that has to be built) and that's working fine..

From what I've read, this adapter is much slower but probably fine for
non-production use (I'm just learning for now)..

However, if anyone has any ideas how to install the native adapter, lemme know!

On Fri, Jan 28, 2011 at 5:50 PM, Mike Christensen  wrote:
> Now I get:
>
> /Library/PostgreSQL/9.0>export ARCHFLAGS='-arch i386'
> /Library/PostgreSQL/9.0>sudo -E gem install postgres
> Password:
> Building native extensions.  This could take a while...
> ERROR:  Error installing postgres:
>        ERROR: Failed to build gem native extension.
>
> /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
> extconf.rb:73: command not found: pg_config --bindir
> extconf.rb:46: command not found: pg_config --includedir
> extconf.rb:53: command not found: pg_config --libdir
> checking for main() in -lpq... no
> *** extconf.rb failed ***
> Could not create Makefile due to some reason, probably lack of
> necessary libraries and/or headers.  Check the mkmf.log file for more
> details.  You may need configuration options.
>
> Provided configuration options:
>        --with-opt-dir
>        --without-opt-dir
>        --with-opt-include
>        --without-opt-include=${opt-dir}/include
>        --with-opt-lib
>        --without-opt-lib=${opt-dir}/lib
>        --with-make-prog
>        --without-make-prog
>        --srcdir=.
>        --curdir
>        
> --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
>        --with-pgsql-dir
>        --without-pgsql-dir
>        --with-pgsql-include
>        --without-pgsql-include=${pgsql-dir}/include
>        --with-pgsql-lib
>        --without-pgsql-lib=${pgsql-dir}/lib
>        --with-pqlib
>        --without-pqlib
> Could not find PostgreSQL build environment (libraries & headers):
> Makefile not created
>
>
> Gem files will remain installed in
> /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28 for inspection.
> Results logged to
> /Library/Ruby/Gems/1.8/gems/postgres-0.7.9.2008.01.28/ext/gem_make.out
> /Library/PostgreSQL/9.0>
>
>
> On Fri, Jan 28, 2011 at 5:46 PM, Thom Brown  wrote:
>> On 29 January 2011 01:37, Mike Christensen  wrote:
>>> I'm trying to install the Postgres gem on OS/X but getting errors no
>>> matter what I try..  In theory, it should be as simple as "gem install
>>> postgres", correct?  Here's what I get:
>>>
>>>>sudo gem install postgres
>>> Building native extensions.  This could take a while...
>>> ERROR:  Error installing postgres:
>>>        ERROR: Failed to build gem native extension.
>>>
>>> /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby 
>>> extconf.rb
>>> extconf.rb:73: command not found: pg_config --bindir
>>>
>>>                ===   WARNING   ===
>>>
>>>                You are building this extension on OS X without setting the
>>>                ARCHFLAGS environment variable, and PostgreSQL does not 
>>> appear
>>>                to have been built as a universal binary. If you are seeing 
>>> this
>>>                message, that means that the build will probably fail.
>>>
>>>                Try setting the environment variable ARCHFLAGS
>>>                to '-arch i386' before building.
>>>
>>>                For example:
>>>                (in bash) $ export ARCHFLAGS='-arch i386'
>>>                (in tcsh) $ setenv ARCHFLAGS '-arch i386'
>>>
>>>                Then try building again.
>>>
>>>                ===
>>>                extconf.rb:46: command not found: pg_config --includedir
>>> extconf.rb:53: command not found: pg_config --libdir
>>> checking for main() in -lpq... no
>>> *** extconf.rb failed ***
>>> Could not create Makefile due to some reason, probably lack of
>>> necessary libraries and/or headers.  Check the mkmf.log file for more
>>> details.  You may need configuration options.
>>>
>>> Provided configuration options:
>>>        --with-opt-dir
>>>        --without-opt-dir
>>>        --with-opt-include
>>>        --without-opt-include=${opt-dir}/include
>>>        --with-opt-lib
>>>        --without-opt-lib=${opt-dir}/lib
>>>        --with-m

Re: [GENERAL] Error trying to install Ruby postgres gems on OS/X

2011-01-28 Thread Mike Christensen
> You might have to tell it where the PostgreSQL binaries live first then:
>
> export PATH=$PATH:/Library/PostgreSQL/9.0/bin

Hey that seems to have fixed it!  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] One last Ruby question for tonight - Regarding UUID type

2011-01-28 Thread Mike Christensen
My goal is to learn Ruby by porting one of my existing PG web
applications over to Rails..  However, my existing data heavily relies
on the UUID data type.  I've noticed when I create a new model with
something like:

guidtest name:string value:uuid

And then do a rake db:migrate, the CREATE TABLE that gets generated looks like:

CREATE TABLE guidtests
(
  id serial NOT NULL,
  "name" character varying(255),
  created_at timestamp without time zone,
  updated_at timestamp without time zone,
  CONSTRAINT guidtests_pkey PRIMARY KEY (id)
)
...

In other words, it just ignores my "uuid" type.  However, the views
and stuff do include this column so the page will crash when I load it
since the column doesn't exist in the DB.

Is there some special thing I have to do to use the uuid type in
ActiveRecord?  Thanks!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] One last Ruby question for tonight - Regarding UUID type

2011-01-28 Thread Mike Christensen
> My goal is to learn Ruby by porting one of my existing PG web
> applications over to Rails..  However, my existing data heavily relies
> on the UUID data type.  I've noticed when I create a new model with
> something like:
>
> guidtest name:string value:uuid
>
> And then do a rake db:migrate, the CREATE TABLE that gets generated looks like:
>
> CREATE TABLE guidtests
> (
>  id serial NOT NULL,
>  "name" character varying(255),
>  created_at timestamp without time zone,
>  updated_at timestamp without time zone,
>  CONSTRAINT guidtests_pkey PRIMARY KEY (id)
> )
> ...
>
> In other words, it just ignores my "uuid" type.  However, the views
> and stuff do include this column so the page will crash when I load it
> since the column doesn't exist in the DB.
> Is there some special thing I have to do to use the uuid type in
> ActiveRecord?  Thanks!

Update: If I manually add the column in using pgAdmin (as a uuid type
of course), the program actually runs (I can create new rows and
display data)..  So RoR does support this type (probably gets
marshalled as a string??) but I guess the ActiveRecord schema
generation stuff just doesn't support uuid.  Hmmm.
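
One workaround is to issue the DDL yourself in the migration instead of
relying on the schema DSL; a sketch, in the old-style migration syntax and
matching the example table above:

class AddValueToGuidtests < ActiveRecord::Migration
  def self.up
    # ActiveRecord doesn't know "uuid", so add the column with raw SQL
    execute "ALTER TABLE guidtests ADD COLUMN value uuid"
  end

  def self.down
    remove_column :guidtests, :value
  end
end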

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Permission denied error - best way to fix?

2011-02-08 Thread Mike Christensen
Here's the error:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  permission denied for
relation pantryitems
pg_dump: The command was: LOCK TABLE public.pantryitems IN ACCESS SHARE MODE

Does the user need to be a superuser, or is there some way to GRANT
this permission (if possible, I don't want this user to be able to
modify the schema)..

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Permission denied error - best way to fix?

2011-02-08 Thread Mike Christensen
>> Here's the error:
>>
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR:  permission denied for
>> relation pantryitems
>> pg_dump: The command was: LOCK TABLE public.pantryitems IN ACCESS SHARE
>> MODE
>>
>> Does the user need to be a superuser, or is there some way to GRANT
>> this permission (if possible, I don't want this user to be able to
>> modify the schema)..
>>
>> Mike
>>
>
> pg_dump is an admin function, so it will need to run as a role that can touch
> all requested objects and take the requested actions. For this particular case
> you could GRANT SELECT on that table to the restricted user. The trouble is
> that this turns into a game of Whack-a-Mole fairly quickly. This is one of
> those situations where taking a long view of your permissions scheme will pay
> dividends.

Hmm interesting, I wonder if this error is happening because pg_dump
is being run from another program (RubyMine) and the process doesn't
have whatever access it needs.
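
If the goal is a non-superuser that can run pg_dump but not change the
schema, the schema-wide form of that GRANT is one way to avoid the
Whack-a-Mole (a sketch; the role name is made up, and both statements
are 9.0+):

GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup_role;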

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Any feedback on this query?

2011-02-17 Thread Mike Christensen
Here's my query:

SELECT R.RecipeId, R.Title, R.Description, R.ImageUrl, R.Rating,
R.PrepTime, R.CookTime, R.OwnerId, U.Alias
FROM Recipes R
INNER JOIN Users U ON U.UserId = R.OwnerId
WHERE (R.PrepTime <= :maxprep)
ORDER BY R.Rating DESC LIMIT 100;
SELECT COUNT(*) FROM Recipes R
WHERE (R.PrepTime <= :maxprep);

The idea is I can show the top 100 matches, and then in the UI say:

"Displaying top 100 results out of 150 recipes."

I'm guessing doing two queries (one to get the top 100 rows and the
other to get the total DB count) is faster than getting all the rows
and trimming the data in code (there could be tens of thousands).
What I'm guessing is since Postgres just ran the query, the second
query will be near instant since any relevant data is still in memory.

BTW, the query can potentially be way more complicated depending on
the user-entered search criteria.

Feedback on this approach?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any feedback on this query?

2011-02-18 Thread Mike Christensen
On Fri, Feb 18, 2011 at 1:05 AM, Dean Rasheed  wrote:
> On 18 February 2011 07:19, Mike Christensen  wrote:
>> Here's my query:
>>
>> SELECT R.RecipeId, R.Title, R.Description, R.ImageUrl, R.Rating,
>> R.PrepTime, R.CookTime, R.OwnerId, U.Alias
>> FROM Recipes R
>> INNER JOIN Users U ON U.UserId = R.OwnerId
>> WHERE (R.PrepTime <= :maxprep)
>> ORDER BY R.Rating DESC LIMIT 100;
>> SELECT COUNT(*) FROM Recipes R
>> WHERE (R.PrepTime <= :maxprep);
>>
>> The idea is I can show the top 100 matches, and then in the UI say:
>>
>> "Displaying top 100 results out of 150 recipes."
>>
>> I'm guessing doing two queries (one to get the top 100 rows and the
>> other to get the total DB count) is faster than getting all the rows
>> and trimming the data in code (there could be tens of thousands).
>> What I'm guessing is since Postgres just ran the query, the second
>> query will be near instant since any relevant data is still in memory.
>>
>> BTW, the query can potentially be way more complicated depending on
>> the user-entered search criteria.
>>
>> Feedback on this approach?
>>
>
> The second query by itself isn't guaranteed to return the same count
> that the first query would without the limit, unless you have FK and
> NOT NULL constraints on OwnerId.
>
> If you're on 8.4 or later, you could use a window function to return
> the count in the first query. I'm not sure that there will be much
> difference in performance, but it will be less prone to errors having
> only one WHERE clause to maintain. So something like:
>
> SELECT R.RecipeId, R.Title, R.Description, R.ImageUrl, R.Rating,
> R.PrepTime, R.CookTime, R.OwnerId, U.Alias,
> count(*) OVER ()
> FROM Recipes R
> INNER JOIN Users U ON U.UserId = R.OwnerId
> WHERE (R.PrepTime <= :maxprep)
> ORDER BY R.Rating DESC LIMIT 100;

Oh very interesting!  I will look into this method, it looks a lot cleaner..

FYI, yes OwnerId is NOT NULL and has a FK constraint.

Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Explicit NULL for no INTEGER data? -- Update

2011-02-18 Thread Mike Christensen
Judging from:

http://www.postgresql.org/docs/8.1/static/sql-copy.html

It looks like you have to specify your own NULL string with the NULL
AS parameter of the COPY command.

On Fri, Feb 18, 2011 at 3:12 PM, Rich Shepard  wrote:
> On Fri, 18 Feb 2011, Rich Shepard wrote:
>
>> Do I need an explicit NULL in that column?
>
>  Update: Placing NULL or a blank produces the same error.
>
>  How should I represent no value for an integer column in a .csv file?
>
> Rich
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Explicit NULL for no INTEGER data? -- Update

2011-02-18 Thread Mike Christensen
>> It looks like you have to specify your own NULL string with the NULL
>> AS parameter of the COPY command.
>
> Mike,
>
>  I completely missed that option when I've read the copy page. My
> apologies!
>
> Rich

Awesome, I'm the one usually asking easy questions on this mailing
list so I'm just glad I could answer one for once!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Web Hosting

2011-03-05 Thread Mike Christensen
On Sat, Mar 5, 2011 at 1:08 PM, matty jones  wrote:
> I already have a domain name but I am looking for a hosting company that I
> can use PG with.  The few I have contacted have said that they support MySQL
> only and won't give me access to install what I need or they want way too
> much.  I don't need a dedicated host which so far seems the only way this
> will work, all the companies I have researched so far that offer shared
> hosting or virtual hosting only use MySQL.  I will take care of the setup
> and everything myself but I have already written my code using PG/PHP and I
> have no intention of switching.
> Thanks.

Well there's this list:

http://www.postgresql.org/support/professional_hosting

Also, maybe something like Amazon EC2 if you want your own "box"?  I
think the small instances are even free..

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WARNING: database must be vacuumed within 8439472 transactions

2014-07-07 Thread Mike Christensen
Sounds like you just have to wait until it finishes..
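
If you want to keep an eye on how far away wraparound is while the vacuum
runs, something like this should show it per database (from memory, so
double-check the column names on 8.3):

SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;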


On Mon, Jul 7, 2014 at 12:56 PM, Prabhjot Sheena <
prabhjot.she...@rivalwatch.com> wrote:

> Hello
>We are using postgresql 8.3 database for last 5 yrs for this
> production database and its running fine. This is our critical database
> which runs 24*7. This weekend we started getting these messages
>
> HINT:  To avoid a database shutdown, execute a full-database VACUUM.
> WARNING:  database  must be vacuumed within 8439472 transactions
>
> i am currently running this command
>
> vacuumdb --analyze db
>
> while this command is running i m still getting these messages
>
> WARNING:  database  must be vacuumed within 2645303 transactions.
>
> The value of number of transactions is going down every minute
>
> Can anyone tell me what is the best way to sort up this issue.
>
> Thanks
> Avi
>
>
>
>


[GENERAL] Regular expression question with Postgres

2014-07-24 Thread Mike Christensen
I'm curious why this query returns 0:

SELECT 'AAA' ~ '^A{,4}$'

Yet, this query returns 1:

SELECT 'AAA' ~ '^A{0,4}$'

Is this a bug with the regular expression engine?


Re: [GENERAL] Regular expression question with Postgres

2014-07-24 Thread Mike Christensen
Yea seems right.  I was testing the expression on Rubular (Which uses the
Ruby parser) and it worked.  I guess Ruby allows this non-standard
expression with the missing lower bounds.  Every reference I could find,
though, agrees only the upper bound is optional.


On Thu, Jul 24, 2014 at 1:42 PM, David G Johnston <
david.g.johns...@gmail.com> wrote:

> Mike Christensen-2 wrote
> > I'm curious why this query returns 0:
> >
> > SELECT 'AAA' ~ '^A{,4}$'
> >
> > Yet, this query returns 1:
> >
> > SELECT 'AAA' ~ '^A{0,4}$'
> >
> > Is this a bug with the regular expression engine?
>
> Apparently since "{,#}" is not a valid regexp expression the engine simply
> interprets it as a literal and says 'AAA' != 'A{,4}'
>
>
> http://www.postgresql.org/docs/9.3/interactive/functions-matching.html#FUNCTIONS-POSIX-REGEXP
>
> Table 9-13. Regular Expression Quantifiers
>
> Note the all of the { } expressions have a lower bound (whether explicit or
> implied).
>
> David J.
>
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Regular-expression-question-with-Postgres-tp5812777p5812778.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Regular expression question with Postgres

2014-07-24 Thread Mike Christensen
Yea looks like Postgres has it right, well.. per POSIX standard anyway.
JavaScript also has it right, as do Python and .NET.  Ruby is just weird.


On Thu, Jul 24, 2014 at 1:57 PM, Tom Lane  wrote:

> Mike Christensen  writes:
> > I'm curious why this query returns 0:
> > SELECT 'AAA' ~ '^A{,4}$'
>
> > Yet, this query returns 1:
>
> > SELECT 'AAA' ~ '^A{0,4}$'
>
> > Is this a bug with the regular expression engine?
>
> Our regex documentation lists the following variants of bounds syntax:
> {m}
> {m,}
> {m,n}
> Nothing about {,n}.  I rather imagine that the engine is deciding that
> that's just literal text and not a bounds constraint ...
>
> regression=# SELECT 'A{,4}' ~ '^A{,4}$';
>  ?column?
> --
>  t
> (1 row)
>
> ... yup, apparently so.
>
> A look at the POSIX standard says that it has the same idea of what
> is a valid bounds constraint:
>
> When an ERE matching a single character or an ERE enclosed in
> parentheses is followed by an interval expression of the format
> "{m}", "{m,}", or "{m,n}", together with that interval expression
> it shall match what repeated consecutive occurrences of the ERE
> would match. The values of m and n are decimal integers in the
> range 0 <= m<= n<= {RE_DUP_MAX}, where m specifies the exact or
> minimum number of occurrences and n specifies the maximum number
> of occurrences. The expression "{m}" matches exactly m occurrences
> of the preceding ERE, "{m,}" matches at least m occurrences, and
> "{m,n}" matches any number of occurrences between m and n,
> inclusive.
>
> regards, tom lane
>


[GENERAL] Any Postgres experts not afraid of the camera?

2014-09-03 Thread Mike Christensen
http://meta.stackoverflow.com/questions/270574/an-experiment-stack-overflow-tv?cb=1


Re: [GENERAL] how to create materialized view in postgresql 8.3

2013-04-08 Thread Mike Christensen
This is the number one requested feature on Uservoice:

http://postgresql.uservoice.com/forums/21853-general/suggestions/247548-materialized-views
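
In the meantime, the workaround John describes below boils down to something
like this (sketch with made-up table names):

-- build the "materialized view" once
CREATE TABLE sales_summary AS
  SELECT product_id, sum(amount) AS total_amount
  FROM sales
  GROUP BY product_id;

-- refresh it periodically
BEGIN;
TRUNCATE sales_summary;
INSERT INTO sales_summary
  SELECT product_id, sum(amount) AS total_amount
  FROM sales
  GROUP BY product_id;
COMMIT;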


On Mon, Apr 8, 2013 at 9:27 AM, John R Pierce  wrote:

>  On 4/7/2013 11:58 PM, Zahid Quadri wrote:
>
>
> is it possible to created materialized view in postgresql 8.3 if yes
> please provide some sample.
>
>
> in older versions, the best you could do was to create a table and
> populate it with your 'view', then drop it when you're done (or truncate
> and repopulate it to update the 'view').
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>


[GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Mike Christensen
If I have this:

CREATE OR REPLACE VIEW Link.Foo AS
  select * from dblink(
'hostaddr=123.123.123.123 dbname=KitchenPC user=Website
password=secret',
'select * from Foo') as ...

Then it works.  However, if I do:

CREATE OR REPLACE VIEW Link.Foo AS
  select * from dblink(
'hostaddr=db.domain.com dbname=KitchenPC user=Website password=secret',
'select * from Foo') as ...

Then I get:

ERROR:  could not establish connection
DETAIL:  could not translate host name "db.domain.com" to address: Unknown
host

However, from a command prompt I can ping db.domain.com and get
123.123.123.123.

Does dblink just not support DNS resolution?  I really don't want to hard
code IP addresses in my scripts.  Thanks!


Re: [GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Mike Christensen
Excellent!  Thanks so much.


On Tue, May 14, 2013 at 9:25 PM, Adrian Klaver wrote:

> On 05/14/2013 09:17 PM, Mike Christensen wrote:
>
>> If I have this:
>>
>> CREATE OR REPLACE VIEW Link.Foo AS
>>select * from dblink(
>>  'hostaddr=123.123.123.123 dbname=KitchenPC user=Website
>> password=secret',
>>  'select * from Foo') as ...
>>
>> Then it works.  However, if I do:
>>
>> CREATE OR REPLACE VIEW Link.Foo AS
>>select * from dblink(
>> 'hostaddr=db.domain.com dbname=KitchenPC
>>
>> user=Website password=secret',
>>  'select * from Foo') as ...
>>
>> Then I get:
>>
>> ERROR:  could not establish connection
>> DETAIL:  could not translate host name "db.domain.com" to address: Unknown host
>>
>>
>> However, from a command prompt I can ping db.domain.com and get 123.123.123.123.
>>
>>
>> Does dblink just not support DNS resolution?  I really don't want to
>> hard code IP addresses in my scripts.  Thanks!
>>
>
> See below for explanation of hostaddr and host. Short version, you are
> looking for host:
>
> http://www.postgresql.org/docs/9.2/interactive/libpq-connect.html#LIBPQ-PARAMKEYWORDS
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] dblink does not resolve DNS, but works with IP

2013-05-14 Thread Mike Christensen
Though I'm a bit curious why there's a host and hostaddr.  Why can't it
just resolve whatever you give it?


On Tue, May 14, 2013 at 9:31 PM, Mike Christensen wrote:

> Excellent!  Thanks so much.
>
>
> On Tue, May 14, 2013 at 9:25 PM, Adrian Klaver wrote:
>
>> On 05/14/2013 09:17 PM, Mike Christensen wrote:
>>
>>> If I have this:
>>>
>>> CREATE OR REPLACE VIEW Link.Foo AS
>>>select * from dblink(
>>>  'hostaddr=123.123.123.123 dbname=KitchenPC user=Website
>>> password=secret',
>>>  'select * from Foo') as ...
>>>
>>> Then it works.  However, if I do:
>>>
>>> CREATE OR REPLACE VIEW Link.Foo AS
>>>select * from dblink(
>>>  'hostaddr=db.domain.com dbname=KitchenPC
>>>
>>> user=Website password=secret',
>>>  'select * from Foo') as ...
>>>
>>> Then I get:
>>>
>>> ERROR:  could not establish connection
>>> DETAIL:  could not translate host name "db.domain.com" to address: Unknown host
>>>
>>>
>>> However, from a command prompt I can ping db.domain.com and get 123.123.123.123.
>>>
>>>
>>> Does dblink just not support DNS resolution?  I really don't want to
>>> hard code IP addresses in my scripts.  Thanks!
>>>
>>
>> See below for explanation of hostaddr and host. Short version, you are
>> looking for host:
>>
>> http://www.postgresql.org/docs/9.2/interactive/libpq-connect.html#LIBPQ-PARAMKEYWORDS
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@gmail.com
>>
>
>


Re: [GENERAL] dblink does not resolve DNS, but works with IP

2013-05-15 Thread Mike Christensen
Ah, gotcha!  I guess whatever sample I was originally copying from used
hostaddr for some reason..  Thanks for the clarification, Tom!
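
For anyone who lands on this thread later, the fix was simply switching
hostaddr= to host= so the name gets resolved (the column list here is made up
for illustration):

CREATE OR REPLACE VIEW Link.Foo AS
  SELECT * FROM dblink(
    'host=db.domain.com dbname=KitchenPC user=Website password=secret',
    'select * from Foo')
  AS t(FooId integer, Name text);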


On Wed, May 15, 2013 at 6:08 AM, Tom Lane  wrote:

> Mike Christensen  writes:
> > Though I'm a bit curious why there's a host and hostaddr.  Why can't it
> > just resolve whatever you give it?
>
> Well, it will ... if you use the "host" parameter.  The whole point of
> "hostaddr" is that for that parameter, it will not try a DNS lookup.
> You'd only use that if you had issues with the speed or reliability
> of your DNS service.
>
> regards, tom lane
>


Re: [GENERAL] Success stories of PostgreSQL implementations in different companies

2013-05-23 Thread Mike Christensen
On Thu, May 23, 2013 at 2:51 PM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:

> On 05/23/2013 02:36 PM, Oscar Calderon wrote:
>
>> Hi, this question isn't technical, but is very important for me to know.
>> Currently, here in El Salvador our company brings PostgreSQL support, but
>> Oracle and SQL Server are more popular here.
>>
>> Even with that, some clients are being encouraged to change to PostgreSQL
>> to lower their companies costs in technologies, but very often they ask if
>> there are success stories of PostgreSQL implementations in companies in our
>> region or around the world, success stories (if is possible) with some
>> information like number of concurrent users, some hardware specs or storage
>> size.
>>
>> I think that in my country is more common to hear success stories like
>> that about other databases like Oracle because is more expanded here, but i
>> would like if there's a place or if you can share with me some real
>> experiences or success stories that you ever heard of successful
>> implementations of PostgreSQL in companies to talk with people when they
>> ask that kind of things.
>>
>>  Start with the web-site - especially:
> http://www.postgresql.org/about/users/
> http://www.postgresql.org/about/quotesarchive/
>
> I don't know about name-recognition in El Salvador but Etsy, Wisconsin
> Courts, Skype, Affilias, FlightAware, NTT are quite recognizable here.
>
>
And don't forget about everyone's favorite recipe search engine,
www.kitchenpc.com - powered by Postgres 9.1..




[GENERAL] PERFORM statement

2013-07-08 Thread Mike Christensen
I was reading about Postgres stored procs in the FAQ:

https://wiki.postgresql.org/wiki/FAQ#Does_PostgreSQL_have_stored_procedures.3F

It claims that an alternative syntax to:

SELECT theNameOfTheFunction(arg1, arg2);

Is:

PERFORM theNameOfTheFunction(arg1, arg2);

However, when I try the following:

CREATE TABLE app_for_leave
(
  sno integer NOT NULL,
  eid integer,
  ename varchar(20),
  sd date,
  ed date,
  sid integer,
  status boolean DEFAULT false,
  CONSTRAINT pk_snoa PRIMARY KEY (sno)
);

CREATE FUNCTION MyInsert(_sno integer, _eid integer, _sd date, _ed date,
_sid integer, _status boolean)
  RETURNS void AS
  $BODY$
  BEGIN
INSERT INTO app_for_leave(sno, eid, sd, ed, sid, status)
VALUES(_sno, _eid, _sd, _ed, _sid, _status);
  END;
  $BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;

PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' );

I get the error:

ERROR: syntax error at or near "PERFORM"
SQL state: 42601
Character: 1

Is the FAQ out of date or was this feature removed?  I'm using 9.2.1.
 Thanks!

Mike


Re: [GENERAL] PERFORM statement

2013-07-08 Thread Mike Christensen
Ah ok that makes sense.  The FAQ wasn't exactly clear about that.
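
In case anyone else trips over this: from plain SQL you just SELECT the
function, and if you really want PERFORM you can wrap it in a DO block
(available in 9.0 and later), e.g.:

DO $$
BEGIN
  PERFORM MyInsert(1, 101, '2013-04-04', '2013-04-04', 2, 'f');
END;
$$;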


On Mon, Jul 8, 2013 at 9:38 PM, Tony Theodore wrote:

>
> On 09/07/2013, at 2:20 PM, Mike Christensen  wrote:
>
>
> PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' );
>
> I get the error:
>
> ERROR: syntax error at or near "PERFORM"
> SQL state: 42601
> Character: 1
>
> Is the FAQ out of date or was this feature removed?  I'm using 9.2.1.
>  Thanks!
>
>
> I believe PERFORM is a PL/pgSQL construct, not an SQL one. You'd need to
> execute it inside a function.
>
> Cheers,
>
> Tony
>
>
>


Re: [GENERAL] function with unknown params

2013-07-09 Thread Mike Christensen
You passed in:

22/1/2013

Which is 22 divided by 1, divided by 2013 - which is an integer..


On Tue, Jul 9, 2013 at 10:17 AM, giozh  wrote:

> ok, it works. But why on error message i had that two unknown data type? if
> was an error on date type, why it don't signal that?
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/function-with-unknown-params-tp5763215p5763224.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Perf differences between timestamp and timestamp with timezone

2009-06-15 Thread Mike Christensen
Hi all -

I'm considering changing all my "timestamp" columns to "timestamp with
timezone" columns instead.  The reason is I want to use UTC time for
everything in the DB and on the web server, and only ever convert to local
time on the client itself.  I could use a timestamp and just "know" that the
timezone is UTC, but npgsql seems to handle this a lot better (like set the
"Kind" property to UTC, etc) if you have a timestamptz column type.

I'm curious if there's any perf drawbacks to doing this, or would the only
perf hit be if I were comparing timestamp with timestamptz and doing all
sorts of casts and stuff.  Thanks!

Mike

PS - If there's any way to make npgsql just manufacture UTC DateTime objects
to begin with, that would be nice too..  Digging through the code I couldn't
find an easy way.


Re: [GENERAL] Perf differences between timestamp and timestamp with timezone

2009-06-15 Thread Mike Christensen
Awesome!  One more followup question..

If I modify an existing table from timestamp to timestamptz, will it use the
current system timezone?  If so, how can I modify all the rows to convert to
UTC time (basically add 8 hrs to everything)..
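
I'm guessing something like this would do the conversion in one shot --
assuming the existing values are Pacific local time and that AT TIME ZONE
interprets a plain timestamp as being in the given zone (table and column
names made up):

ALTER TABLE my_table
  ALTER COLUMN created_at TYPE timestamptz
  USING created_at AT TIME ZONE 'America/Los_Angeles';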

On Mon, Jun 15, 2009 at 6:52 PM, Bruce Momjian  wrote:

> Mike Christensen wrote:
> > Hi all -
> >
> > I'm considering changing all my "timestamp" columns to "timestamp with
> > timezone" columns instead.  The reason is I want to use UTC time for
> > everything in the DB and on the web server, and only ever convert to
> local
> > time on the client itself.  I could use a timestamp and just "know" that
> the
> > timezone is UTC, but npgsql seems to handle this a lot better (like set
> the
> > "Kind" property to UTC, etc) if you have a timestamptz column type.
> >
> > I'm curious if there's any perf drawbacks to doing this, or would the
> only
> > perf hit be if I were comparing timestamp with timestamptz and doing all
> > sorts of casts and stuff.  Thanks!
>
> I don't think there is any measurable difference in the two data types,
> and timestamptz is usually superior, as you have found.
>
> --
>  Bruce Momjian  http://momjian.us
>  EnterpriseDB http://enterprisedb.com
>
>  + If your life is a hard drive, Christ can be your backup. +
>


Re: [GENERAL] SET TIMEZONE doesn't affect to SELECT statement

2009-06-18 Thread Mike Christensen
Strange, maybe there's some server setting because I get different results
on mine..

set timezone to 'Europe/London';
select '2008-01-01 12:00:00 GMT+2'::timestamptz;
Result: '2008-01-01 14:00:00+00'


set timezone to 'Europe/Moscow';
select '2008-01-01 12:00:00 GMT+2'::timestamptz;
Result: '2008-01-01 17:00:00+03'

2009/6/18 POST 

>  Hello; by tutorial I have tried the example:
>
>
> set timezone to 'Europe/London';
>
> select '2008-01-01 12:00:00 GMT+2'::timestamptz;
>
>
> Result: 2008-01-01 15:00:00
>
>
>
> set timezone to 'Europe/Moscow';
>
> select '2008-01-01 12:00:00 GMT+2'::timestamptz;
>
>
> Result: 2008-01-01 15:00:00
>
>
>
> Why is the result identical ? Is something wrong ?
>
>
> Thanks for suggestion.
>


[GENERAL] What's wrong with this query?

2009-06-21 Thread Mike Christensen
I just tracked down a bug in my software due to an "unexpected" behavior in
Postgres..  Can someone clarify why this doesn't work (I haven't tried it on
MSSQL or anything else, so I'm not sure if this is the official SQL standard
or anything)..

CREATE TABLE test
(
  value uuid
);

INSERT INTO test VALUES ('----');
INSERT INTO test VALUES ('----');
INSERT INTO test VALUES (null);

select * from test where value != '----';

What I expect to get is two rows: the '----'
row and the null row, as both those values are in fact not
'----'.  However, I only get the first one.

I can change my query to:

select * from test where value is null or value !=
'----';

and that will give me the null rows, or rows that don't match that UUID.  Is
there a better way of writing this query?  Thanks!

Mike


Re: [GENERAL] What's wrong with this query?

2009-06-21 Thread Mike Christensen
Thanks all, that's pretty much what I figured - just wanted to make sure..
I'm still trying to master this SQL thing you speak of.
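
One thing I did run across afterwards: IS DISTINCT FROM treats NULL as a
comparable value, so (with a placeholder uuid) this should also return both
the non-matching row and the null row:

SELECT * FROM test WHERE value IS DISTINCT FROM '00000000-0000-0000-0000-000000000000';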

On Sun, Jun 21, 2009 at 5:20 PM, Martin Gainty  wrote:

>  testcase for a null uuid?
>
> thanks,
> Martin
>
>
>
>
>
>
> > To: pgsql-general@postgresql.org
> > From: spam_ea...@gmx.net
> > Subject: Re: [GENERAL] What's wrong with this query?
> > Date: Mon, 22 Jun 2009 00:37:41 +0200
>
> >
> > Mike Christensen wrote on 22.06.2009 00:10:
> > > I just tracked down a bug in my software due to an "unexpected"
> behavior
> > > in Postgres.. Can someone clarify why this doesn't work (I haven't
> > > tried it on MSSQL or anything else, so I'm not sure if this is the
> > > official SQL standard or anything)..
> > >
> > > CREATE TABLE test
> > > (
> > > value uuid
> > > );
> > >
> > > INSERT INTO test VALUES ('----');
> > > INSERT INTO test VALUES ('----');
> > > INSERT INTO test VALUES (null);
> > >
> > > select * from test where value !=
> '----';
> > >
> > > What I expect to get is two rows: the
> > > '----' row and the null row, as both
> > > those values are in fact not '----'.
> > > However, I only get the first one.
> > >
> > That is standard behaviour.
> > A comparison with a NULL value yields NULL (unknown), which the WHERE
> > clause treats as false (and that is not a Postgres speciality).
> >
> > You need to use
> >
> > select *
> > from test
> > where value != '----'
> > or value is null;
> >
> > Thomas
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
>
>


Re: [GENERAL] What's wrong with this query?

2009-06-21 Thread Mike Christensen
The table in question is a list of events associated with user actions.
Users who are subscribed to another user's "channel" will see those events
on the home page.  The column in question is a "RecipientId" which only
contains a link to a User Id if the action was performed on another user,
otherwise the value is null.  So to answer your question, yes the column has
a FK but it can also be null depending on the type of action the row
represents.

Mike

On Sun, Jun 21, 2009 at 7:03 PM, Martin Gainty  wrote:

>  hi mike-
>
> is null uuid valid ?
>
> my experience would suggest a uuid is a foreign key to some other entity
> so any table containing object types (classes/structs) or table created
> from an object type
> (class/struct) must have a not null uuid (other systems call it guid)
> would require the uuid/guid to be not null otherwise the entity you would
> be pointing to
> has been 'orphaned'
>
> thanks,
> Martin
>
>
>
>
>
>
> --
> Date: Sun, 21 Jun 2009 18:43:42 -0700
> Subject: Re: [GENERAL] What's wrong with this query?
> From: m...@kitchenpc.com
> To: mgai...@hotmail.com
> CC: spam_ea...@gmx.net; pgsql-general@postgresql.org
>
>
> Thanks all, that's pretty much what I figured - just wanted to make sure..
> I'm still trying to master this SQL thing you speak of.
>
> On Sun, Jun 21, 2009 at 5:20 PM, Martin Gainty wrote:
>
>  testcase for a null uuid?
>
> thanks,
> Martin
>
>
>
>
>
>
> > To: pgsql-general@postgresql.org
> > From: spam_ea...@gmx.net
> > Subject: Re: [GENERAL] What's wrong with this query?
> > Date: Mon, 22 Jun 2009 00:37:41 +0200
>
> >
> > Mike Christensen wrote on 22.06.2009 00:10:
> > > I just tracked down a bug in my software due to an "unexpected"
> behavior
> > > in Postgres.. Can someone clarify why this doesn't work (I haven't
> > > tried it on MSSQL or anything else, so I'm not sure if this is the
> > > official SQL standard or anything)..
> > >
> > > CREATE TABLE test
> > > (
> > > value uuid
> > > );
> > >
> > > INSERT INTO test VALUES ('----');
> > > INSERT INTO test VALUES ('----');
> > > INSERT INTO test VALUES (null);
> > >
> > > select * from test where value !=
> '----';
> 

Re: [GENERAL] Replication

2009-06-23 Thread Mike Christensen
Rubyrep looks very interesting, I just watched their 5min video and looks
very easy to setup.

Few questions..  The left/right database looks very limiting (you can only
replicate two databases at a time)..  Their documentation says that the
solution is to setup a chain.  To keep A, B and C in sync, A should
replicate with C and B should replicate with C:

http://www.rubyrep.org/replication_between_more_than_two_databases.html

There will be a set of triggers for each replication.  Since MySql doesn't
support more than one trigger on a table, this approach won't work which I
guess is their way of saying "We're database independent, as long as you use
either Postgres or MySql oh and btw we have no replication story above 2
nodes on MySQL"

Also, if database C goes down, then everything goes kaboom, right?  Even if
you did A replicates with B, B replicates with C, if one database goes down
your chain is broken.  I'm worried about this scenario, and any perf
implications with having a whole bunch of triggers on a table.  Maybe
someone can comment.

Mike

On Mon, Jun 22, 2009 at 10:23 PM, Arndt Lehmann wrote:

> Hi Craig,
>
> just wanted to mention that there is a new open-source solution
> available that now also enables asynchronous, row-based, master-master
> replication of PostgreSQL databases.
>
> Name: rubyrep
>
> Project website with full feature list, step-by-step tutorial and
> screencast (from zero to running replication in under 5 minutes)
> available here:
>http://www.rubyrep.org
>
> Best Regards,
>   Arndt Lehmann
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Replication

2009-06-23 Thread Mike Christensen
>
> Hi Mike
>
> thanks for your interest in rubyrep. I developed rubyrep. Let me
> answer your questions.
>
> On Jun 23, 4:16 pm, m...@kitchenpc.com (Mike Christensen) wrote:
> > There will be a set of triggers for each replication.  Since MySql
> doesn't
> > support more than one trigger on a table, this approach won't work which
> I
> > guess is their way of saying "We're database independent, as long as you
> use
> > either Postgres or MySql oh and btw we have no replication story above 2
> > nodes on MySQL"
> The first statement on the rubyrep project website says that rubyrep
> provides database independent - currently supporting PostgreSQL and
> MySQL - master-master replication [i. e. 2 databases].
> That statement is 100% correct.
> Regarding multi-master replication (i. e. more than 2 databases) the
> according sub page says that it only works for PostgreSQL.
> I intended both statements to be very clear and *not* misleading. If
> you think they are not, I am interested in hearing your improvement
> suggestions.
>
>
Hi Arndt - Sorry, my email was just being a little on the sarcastic side.  I
don't think your statements are misleading, just a bit marketed.  To me,
something "database independent" should be able to be configured to work on
any DB, not just two.  And the fact that there's a pretty big limitation in
one of those databases really limits your story for MySQL.  I can't say I'm
too concerned, as like many others on this thread I'm not the biggest fan of
MySQL (I tried it for about a day and kept saying "Wait, it can't do
/that/?" too many times so I gave up)

I think unless you want to either develop a full replication story for MySQL
or add support for other software such as IBM and Oracle, you might just
want to focus all your energy on being the best Postgres replication story
around and market as being just that.  It would definitely keep the support
costs down having to deal with random bugs on mySQL and trying to please
everybody.

>
> >
> > Also, if database C goes down, then everything goes kaboom, right?  Even
> if
> > you did A replicates with B, B replicates with C, if one database goes
> down
> > your chain is broken.  I'm worried about this scenario,
> Assuming that A replicates with C and C with B then once C goes down,
> indeed replication will stop.
> However nothing goes "kaboom". All changes in either A or B are still
> tracked in the according queue tables. As soon as C comes up again,
> all pending changes will be replicated.
> If let's say C goes down, then replication between A and B will not be
> affected.
> Same if A goes down: replication between B and C will continue.
> >and any perf
> > implications with having a whole bunch of triggers on a table.  Maybe
> > someone can comment.
> The triggers are designed to be as "lean" as possible. Actually all
> they do is to add the primary key of a created / modified / deleted
> row into the queue table. I don't think that this will cause any
> performance impact.
> (The actual work is done by the rubyrep process which applies all
> changes. That process can be run on a totally different server to
> avoid impacting the database server performance.)
>
>
Thanks for clarifying.  After sending the email I put some thought into what
a true fault tolerant replication story would entail and it seems like it'd
be a complete nightmare to design.  Does anyone actually have that (any node
can go down and the others still replicate amongst themselves?)

On my system, if the master DB goes down and some data is out of date by a
few minutes or an hour or so shouldn't cause too many problems so long as
"half a transaction" isn't replicated or anything silly like that.  The only
scenario I could come up with that would be annoying is if a user created a
new account on the system, then switched over to another webserver/DB node
and all of a sudden it said their user account didn't exist.

The "left/right" database idea I can see is a very straight forward
architecture and easy to setup and maintain, I'm just worried if you had a
/huge/ cluster (Like Facebook size) then it really puts a massive strain on
your master DB, since with every piece of data that changes, keys have to be
inserted in dozens of tables.  If seems if you could optimize this
redundancy and have all the triggers using the same table, it might scale
better.  However, I haven't done any conclusive studies to back this
statement up :)

Keep up the good work!  One of these days I have to start looking into
replication stories before my site goes live, and rubyrep will for sure be
on the list.

Mike


Re: [GENERAL] Replication

2009-06-24 Thread Mike Christensen
We need to stop this thread, you guys are making me want to ditch Postgres
and get Oracle (after taking out a second mortgage on my house that is)..

Mike

On Wed, Jun 24, 2009 at 9:40 AM, Thomas Kellerer  wrote:

> Craig Ringer wrote on 24.06.2009 04:07:
>
>> Thomas Kellerer wrote:
>>
>>> Mike Christensen wrote on 23.06.2009 19:37:
>>>
>>>> Does anyone actually have that (any node can go down and the others
>>>> still
>>>> replicate amongst themselves?)
>>>>
>>> I think this is what Oracle promises with their RAC technology.
>>>
>>
>> Isn't RAC a shared-storage cluster?
>>
>
> As far as I know it is also a cluster of servers which can even hand over
> the processing of a single statement "in-flight" if the node goes down that
> is current processing the statement.
>
> Thomas
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] haversine formula with postgreSQL

2009-09-17 Thread Mike Christensen
This behavior kinda gets me sometimes too, especially in WHERE clauses..

I'm a bit curious as to why this is so bad.  I could see why it would
be expensive to do, since your clause wouldn't be indexed - but why is
the syntax itself not allowed?  Repeating the clause isn't gonna gain
you any speed, and might make it even slower since the expression
would have to be evaluated twice right?  Perhaps I'm missing
something..
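
For what it's worth, the subquery form Tom mentions would look something like
this (example values plugged into the placeholders, and I'm guessing at the
column names):

SELECT *
FROM (
  SELECT ID, FACILITY, ADDRESS, latitude, longitude,
         ( 3959 * acos( cos(radians(45.5)) * cos(radians(latitude))
             * cos(radians(longitude) - radians(-122.6))
             + sin(radians(45.5)) * sin(radians(latitude)) ) ) AS distance
  FROM aaafacilities
) AS sub
WHERE distance < 25
ORDER BY distance
LIMIT 20;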

Mike

On Thu, Sep 17, 2009 at 2:47 PM, Tom Lane  wrote:
> Jonathan  writes:
>> Here is my PHP with SQL:
>> $query = sprintf("SELECT 'ID', 'FACILITY', 'ADDRESS', latitude,
>> longitude, ( 3959 * acos( cos( radians('%s') ) * cos( radians
>> ( latitude ) ) * cos( radians( longitude ) - radians('%s') ) + sin
>> ( radians('%s') ) * sin( radians( latitude ) ) ) ) AS distance FROM
>> aaafacilities HAVING distance < '%s' ORDER BY dist LIMIT 0 OFFSET 20",
>
> Sigh, you've been misled by MySQL's nonstandard behavior.  You cannot
> refer to output columns of a query in its HAVING clause; it's disallowed
> per spec and not logically sensible either.  The simplest way to deal
> with it is just to repeat the expression in HAVING.  If you really
> really don't want to write it twice, you can use a subquery.
>
>                        regards, tom lane
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Functions returning multiple rowsets

2009-09-28 Thread Mike Christensen
One thing I like about Microsoft SQL is you can write a sproc that does:

SELECT * FROM TableA
SELECT * FROM TableB

And in .NET, you'll have a DataSet object with two DataTables, one for
each table.  Do either of the techniques outlined below provide this
functionality, though I suppose in .NET you'd be using the NpgSql
adapter instead..
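
For reference, the refcursor route Merlin lists below looks roughly like this
on the SQL side (sketch with made-up table names, following the pattern from
the docs):

CREATE FUNCTION two_result_sets(a refcursor, b refcursor)
RETURNS SETOF refcursor AS $$
BEGIN
  OPEN a FOR SELECT * FROM table_a;
  RETURN NEXT a;
  OPEN b FOR SELECT * FROM table_b;
  RETURN NEXT b;
END;
$$ LANGUAGE plpgsql;

-- cursors only live for the duration of the transaction
BEGIN;
SELECT * FROM two_result_sets('cur_a', 'cur_b');
FETCH ALL FROM cur_a;
FETCH ALL FROM cur_b;
COMMIT;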

Mike

On Mon, Sep 28, 2009 at 11:17 AM, Merlin Moncure  wrote:
> On Mon, Sep 28, 2009 at 2:05 PM, Merlin Moncure  wrote:
>> On Mon, Sep 28, 2009 at 12:49 PM, Thom Brown  wrote:
>>> Hi,
>>>
>>> Is it possible to create a function using 'SQL' as language which could
>>> return multiple rowsets, such as "SELECT * FROM TABLE1; SELECT * FROM
>>> TABLE2;" where both results are returned in the output?  I know this can be
>>> done in stored procedures in other RBDMS but can this be done in a function?
>>
>> you have a couple of approaches:
>> *) declare refcursors inside the function and references them later in
>> the transaction
>> *) make temp tables
>> *) arrays:
>> create function two_sets(_foos out foo[], _bars out bar[]) returns record as
>> $$
>>  select array(select foo from foo), array(select bar from bar);
>> $$ language sql;
>>
>> with s as (select * from two_sets()),
>> foo as (select unnest(_foos) from s),
>> bar as (select unnest(_bars) from s)
>> select
>>  (select count(*) from foo) as no_foos,
>>  (select count(*) from bar) as no_bars;
>
> I should mention the query above only works in 8.4+.  the array
> approach generally only works as of 8.3 and has limits (don't return
> billion records).  Also, it's not good style (IMO) to name 'with'
> expressions same as actual tables:
>
> with s as (select * from two_sets()),
> f as (select unnest(_foos) from s),
> b as (select unnest(_bars) from s)
> select
>  (select count(*) from f) as no_foos,
>  (select count(*) from b) as no_bars;
>
> is cleaner.
>
> merlin
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Best data type to use for sales tax percent

2009-10-08 Thread Mike Christensen
(Sorry for the super-easy question)

I want to store sales tax (as a percent) in the DB, such as 9.5%.
What's the best data type for this?  I'm guessing numeric(2,3) should
be fine, yes?  I'm not too familiar with the numeric type (I was using
"real" before), but as I understand the data will be stored using the
necessary number of bits on the disk?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best data type to use for sales tax percent

2009-10-08 Thread Mike Christensen
Oops sorry I was thinking 2,3 meant 2 significant digits to the left
of the decimal point and 3 to the right.  I just re-read the docs and
now see what you mean.  6,3 would work fine though it's maybe a bit
overkill since a 100% sales tax rate would cause a violent revolution
and lead to beheadings, at which point Postgres data integrity would
be the least of our worries..

I'll probably just use 3,3 and store this value between 0 and 1, since
all I'll be doing with this number is using it to multiply against a
subtotal.  3,3 gives me 0.000 through 0.999, correct?

Mike

On Thu, Oct 8, 2009 at 4:38 PM, Merlin Moncure  wrote:
> On Thu, Oct 8, 2009 at 6:04 PM, Mike Christensen  wrote:
>> (Sorry for the super-easy question)
>>
>> I want to store sales tax (as a percent) in the DB, such as 9.5%.
>> What's the best data type for this?  I'm guessing numeric(2,3) should
>> be fine, yes?  I'm not too familiar with the numeric type (I was using
>> "real" before), but as I understand the data will be stored using the
>> necessary number of bits on the disk?  Thanks!
>
> numeric(2,3) is not possible :-). IMO, a percentage should be stored
> numeric(6,3) or so.  That gives up to 100% down to thousandth of a
> percent.
>
> merlin
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best data type to use for sales tax percent

2009-10-09 Thread Mike Christensen
Wouldn't (4,3) let me store 0.000 through 9.999?  Maybe I'm still not
following what both numbers mean.

I understand the point about states/counties with 3 decimal digits of
sales tax, so I'd probably want to do (5,5) which should give me
0.00000 - 0.99999, and store 9.825% sales tax as .09825.  I'm
suggesting storing sales tax as a number between 0 and 1 so I can
easily multiply it against a subtotal to get the tax amount, storing
anything over 1.0 is unnecessary.

Also, if you just say "numeric" (without any numbers) then Postgres
lets you store any number you wish and will never do any rounding of
any sort, correct?  Is there a price you pay for this in terms of
perf, bytes on disk, etc?

Another idea is if I'm tying myself down to a certain level of decimal
accuracy in the first place, why not just store everything as an Int2?
 9.825% would be stored as 9825 and I'll divide everything by 100000
when I calc sales tax.  If I'm not mistaken, integral data types are
faster for Postgres and less bytes on disk, right?  BTW, I will never
be doing any math using Postgres, it's just for pure storage..

Mike

On Thu, Oct 8, 2009 at 5:23 PM, Rich Shepard  wrote:
> On Thu, 8 Oct 2009, Mike Christensen wrote:
>
>> I'll probably just use 3,3 and store this value between 0 and 1, since all
>> I'll be doing with this number is using it to multiply against a subtotal.
>> 3,3 gives me 0.000 through 0.999, correct?
>
> Mike,
>
>  No. The two digits represent the width of the column and the number of
> significant digits. Try (4,3).
>
> Rich
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best data type to use for sales tax percent

2009-10-09 Thread Mike Christensen
So back to my question about representing sales tax, it looks like I
have two choices:

1) Use a numeric(5,5) column.  This has the advantage of storing the
sales tax in the exact representation of a percent (I can directly
multiply it against any subtotal to get the sales tax).  It also
"looks" nicer in the DB and probably has some style points.  It
appears to me there is a massive amount of overhead with these columns
though, something like 8 bytes plus whatever storage is required for
the precision.  However, even if I had a million rows that's only like
8 megs on the disk.  I'm more worried about perf than disk size.  I
would assume there's only a perf hit doing math with these types, not
so much just loading them into a dataset.

2) Use an Int2.  I'd have to use a multiplier in my source code after
loading the data.  In the DB, the data would look kinda funky since
9.825% would be represented as 9825.   I think this is a more
efficient number for storage since it only requires 2 bytes.  I'd be
able to store up to 65% or so which is fine for any sales tax I've
ever heard of.

I'm kinda leaning towards using the numeric column type, simply
because it seems "cleaner" to me.  At work, we use multipliers all
over the place in our DB and it has turned into a complete nightmare.
I'm somewhat of a believer in just storing data exactly how you need
to use it.

Thanks!

Mike

On Fri, Oct 9, 2009 at 4:13 AM, Sam Mason  wrote:
> On Fri, Oct 09, 2009 at 12:10:41AM -0700, Mike Christensen wrote:
>> Wouldn't (4,3) let me store 0.000 through 9.999?  Maybe I'm still not
>> following what both numbers mean.
>
> I think Rich was getting confused about how you wanted to represent your
> percentages.
>
>> I understand the point about states/counties with 3 decimal digits of
>> sales tax, so I'd probably want to do (5,5) which should give me
>> 0.0 - 0.9, and store 9.825% sales tax as .09825.  I'm
>> suggesting storing sales tax as a number between 0 and 1 so I can
>> easily multiply it against a subtotal to get the tax amount, storing
>> anything over 1.0 is unnecessary.
>
> This is how I'd normally do it.  Ratios for inside the code, just
> "format" them as percentages when you want the user to see them.
>
>> Also, if you just say "numeric" (without any numbers) then Postgres
>> lets you store any number you wish and will never do any rounding of
>> any sort, correct?  If there a price you pay for this in terms of
>> perf, bytes on disk, etc?
>
> It's not possible to do division accurately (not sure about the caveats
> in other operators).  For example, 1/3 is represented as "0.33333..." and
> multiplying this by three again will give "0.99999...".  When people say
> that numeric types are "exact" they're not giving you whole truth.
>
>> Another idea is if I'm tying myself down to a certain level of decimal
>> accuracy in the first place, why not just store everything as an Int2?
>>  9.825% would be stored as 9825 and I'll divide everything by 10
>> when I calc sales tax.  If I'm not mistaken, integral data types are
>> faster for Postgres and less bytes on disk, right?  BTW, I will never
>> be doing any math using Postgres, it's just for pure storage..
>
> Not sure what range of values you have to cover; you wouldn't be able to
> do this with fixed width integer types:
>
>  select numeric '100' ^ 300;
>
> Numeric types allow you to do the above, the flexibility of allowing the
> representation of a number to get this wide that causes things to be
> slower.  It's not much slower though, I'd benchmark a test case that's
> meaningful to you and then can you make a sensible decision.
>
> --
>  Sam  http://samason.me.uk/
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best data type to use for sales tax percent

2009-10-09 Thread Mike Christensen
Can you explain what you mean by "put it in a domain" - I'd love extra
style points, but this sounds like a feature I haven't learned about
yet.

On Fri, Oct 9, 2009 at 3:38 AM, Peter Eisentraut  wrote:
> On Fri, 2009-10-09 at 00:10 -0700, Mike Christensen wrote:
>> Wouldn't (4,3) let me store 0.000 through 9.999?  Maybe I'm still not
>> following what both numbers mean.
>
> Yes.  If you want 0.000 through 0.999, use numeric(3,3).  Adding a check
> constraint might increase clarity.  And put it in a domain for extra
> style points. :-)
>
>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best data type to use for sales tax percent

2009-10-09 Thread Mike Christensen
Thanks for the info!  I was thinking this would be a short thread but
I definitely appreciate all the information.

I will definitely create a domain for this (and probably for some
other types in my app since I now know about this).  However, is the
CHECK really necessary?  A numeric(5,5) already has a maximum value of
10^0, so it would already create an overflow error if you set it
higher.  Is there an advantage of using the CHECK constraint?  Perhaps
this is faster or doesn't lock the row on update or something?

Just to point out, NUMERIC and DECIMAL are one and the same yes?
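
For reference, here's roughly what I'm planning to end up with (untested, and
the table is just made up to show usage):

CREATE DOMAIN sales_tax_rate AS numeric(5,5) CHECK (VALUE >= 0);

CREATE TABLE order_totals (
  order_id  integer PRIMARY KEY,
  subtotal  numeric(10,2) NOT NULL,
  tax_rate  sales_tax_rate NOT NULL DEFAULT 0
);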

Mike

On Fri, Oct 9, 2009 at 11:46 AM, Christophe Pettus  wrote:
>
> On Oct 9, 2009, at 11:36 AM, Mike Christensen wrote:
>
>> Can you explain what you mean by "put it in a domain" - I'd love extra
>> style points, but this sounds like a feature I haven't learned about
>> yet.
>
>
>        http://www.postgresql.org/docs/8.4/interactive/sql-createdomain.html
>
> Domains are basically type aliases with an optional CHECK clause, so you
> could do something like:
>
>        CREATE DOMAIN sales_tax_rate AS DECIMAL(5,5) CHECK (VALUE >= 0);
>
> Then, you can use the type "sales_tax_rate" in your tables, etc. just as a
> normal first-class type.  (The only limitation, right now, is that you can't
> create an array of them.)
>
> In response to the other email, DECIMAL is definitely the better solution
> for what you are looking for.
>
> --
> -- Christophe Pettus
>   x...@thebuild.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread Mike Christensen
Hmm would this be a bad time to ask for PostGres 1.0 support?

On Fri, Oct 16, 2009 at 1:55 PM, Jeff Davis  wrote:
> On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote:
>> We are running with postgres sql 7.3.2. We were trying to create an
>> index on a big table. The create index command ran for nearly 5 hours
>> at which point we decided to interrupt it. Since this was interrupted,
>> any operations attempted on the table on which the index was being
>> created gives following error in pgsql log:
>
>> LOG: all server processes terminated; reinitializing shared memory and
>> semaphor
>
> 1. Shut down postgresql and make a full filesystem copy of the PGDATA
> directory. This will ensure that anything else you do won't leave you in
> a worse position.
>
> 2. Upgrade to the latest version of postgresql 7.3, which is 7.3.21
>
> 3. Start up again
>
> 4. Try to fix the problem:
>  a. see if there are any indexes on the table
>  b. if so, drop them
>  c. try to get a good logical backup using pg_dump. You may want to
> disable index scans by using "SET enable_indexscan = f;".
>  d. If that doesn't work, you may have catalog corruption. Examine the
> catalogs (documented here:
> http://www.postgresql.org/docs/7.3/static/catalogs.html ), and look
> entries related to your table in pg_class and pg_index, and see if
> anything looks wrong.
>  e. start from a fresh install and restore using the logical backup
>
> 5. Upgrade all data to a recent version of postgresql. You're on a
> really ancient version that has fallen out of official support. This may
> require several upgrade steps, but you should get upgraded to 8.3.8 or
> 8.4.1. A lot of bugs have been fixed, and as long as you are on 7.3, you
> will still be at serious risk.
>
> Regards,
>        Jeff Davis
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-21 Thread Mike Christensen
I tried Power Architect for about 5 minutes, just enough time to
notice it had no support for UUIDs which makes it all but useless..  I
mean, seriously who doesn't use UUIDs :)

Maybe they'll fix that, it does look promising..

>>
>> Search the archives this came up within the last couple of months.  I
>> currently use Power Architect, it's a beta product and still fairly
>> buggy but works.
>>
>> --
>> Peter Hunsberger
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Design question about partitioning order information across tables

2009-10-22 Thread Mike Christensen
Hi -

I have a fairly simple design question that I'd like some input on.  I
have a table called "Orders" which, along with various order
information, contains an "OrderState" column.  OrderState can contain
one of the following values:

1 - Order is in preview mode and has not been committed yet.  The user
may still make changes.
2 - Order is submitted and will be picked up by a queue service to
process the order and send it to the vendor.
3 - Order has been submitted to the vendor successfully.
4 - Order was processed, but there was some sort of error and we have
canceled the order.

As you can imagine, my application for the most part will just deal
with order states 1 and 2.  If an order is in state 3 or 4, it's
pretty much just "archived" information and the only time it would be
needed is if the user wanted to view their previous orders or
something.

It has occurred to me that there might be some advantages of creating
a separate table called "OrderArchive" which would be used to store
order states 3 or 4.  This would allow me to get rid of an index on
order state as well as probably use different caching techniques on
each table.  It would also keep the Orders table super light and fast
with only very volatile information.

Assuming this theory is correct, what's the best way to design this?
Should I create an OrderArchive table that "derives" from "Orders"
(since the schemas are exactly the same) - I've never done this, only
read about it.  Also, I was thinking about how to "move" a row between
tables.  The obvious way would be to write a function that inserts the
row into OrderArchive with the same values and the final state, then
deletes the row from the original Orders table (under a single
transaction of course).  However, I was thinking it might be nice to
have the DB just "manage" this for me.  I could create a trigger on
Orders and the second the OrderState changed to 3 or 4, move the row.
This would have the advantage of delegating this logic entirely to the
DB layer and keeping the app out of the order archiving business.
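
For the trigger route, I'm imagining something roughly like this (untested
sketch; it assumes Orders has an OrderId primary key and that OrderArchive has
an identical column list):

CREATE OR REPLACE FUNCTION archive_finished_order() RETURNS trigger AS $$
BEGIN
  IF NEW.OrderState IN (3, 4) THEN
    INSERT INTO OrderArchive VALUES (NEW.*);
    DELETE FROM Orders WHERE OrderId = NEW.OrderId;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_archive_finished_order
  AFTER UPDATE ON Orders
  FOR EACH ROW EXECUTE PROCEDURE archive_finished_order();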

Any opinions on this subject?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Call for design: PostgreSQL mugs

2013-09-09 Thread Mike Christensen
How about something incredibly cheesy like

SELECT * FROM Mug;


On Mon, Sep 9, 2013 at 8:22 AM, Karsten Hilbert wrote:

> Inside of the mug:
>
> - runs of 0's and 1's = data
> - neatly aligned or in compartments/boxes/shelved ?
>
> Outside of mug:
>
> 10 elephants
>
> - each with a headband, carrying one of
> P-O-S-T-G-R-E-S-Q-L on the forehead
> - cordoning off towards the outsider/onlooker/mug user
> (mugger ? :-)
>   maybe all the way around the mug ?
> - another few doing various "housekeeping" tasks:
> - brooming up bits
> - packaging bits into a box
> - freshly painting bitbuckets
> - indexing/filing (paper) charts
> - ...
>
> Karsten Hilbert
> --
> GPG key ID E4071346 @ gpg-keyserver.de
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Call for design: PostgreSQL mugs

2013-09-12 Thread Mike Christensen
Oooh can we make the handle an elephant trunk? (Ok, now I'm sure I'm adding
all sorts of expense - but hey you'll save so much money using Postgres you
can afford an expensive coffee mug!)


On Thu, Sep 12, 2013 at 5:30 AM, Andreas 'ads' Scherbaum <
adsm...@wars-nicht.de> wrote:

> On 09/10/2013 10:15 PM, Kevin Grittner wrote:
>
>> patrick keshishian  wrote:
>>
>>> One more "cute" idea that came to me last night. Here is a very
>>>
>>
>>  poor attempt at it by yours truly; keep in mind I'm not a graphics
>>> artist. This image is for illustration purposes only!
>>>
>>>  
>>> http://sidster.com/gallery/2013/09/10/elephant_paw.sml.jpg
>>>
>>> Can you picture a bunch of these on a meeting table? If that image
>>> does not scream "Stampede!", I don't know what does. =)
>>>
>>> Again, a great conversation starter.
>>>
>>
>> I think I like this idea best so far!
>>
>> Some supporting documentation for a final version:
>>
>> http://www.asknature.org/strategy/29c12a353dab52ad8d4eb5d4337cefb9
>>
>
> I agree, that's a good idea. Instead of the original PG logo near the top,
> the mug can show a real elephant face along with the feet on both sides.
> And then have "PostgreSQL" somewhere written - maybe turned 90° near the
> handle, on both sides?
>
> How does this sound? And next question: who can design this? ;-)
>
>
> --
> Andreas 'ads' Scherbaum
> German PostgreSQL User Group
> European PostgreSQL User Group - Board of Directors
> Volunteer Regional Contact, Germany - PostgreSQL Project
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Npgsql - Where can I find Npgsql.NpgsqlServices

2013-12-11 Thread Mike Christensen
It seems I need NpgsqlServices to use Npgsql with EF6, however I can't
figure out where you get this thing!

I've tried installing it through NuGet:

PM> Install-Package Npgsql -pre
Installing 'Npgsql 2.0.14.1'.
Successfully installed 'Npgsql 2.0.14.1'.
Adding 'Npgsql 2.0.14.1' to EFTest.
Successfully added 'Npgsql 2.0.14.1' to EFTest.


However, this doesn't have it.  I've also tried installing the beta from:

http://pgfoundry.org/frs/download.php/3494/Npgsql2.0.13.91-bin-ms.net4.5Ef6.zip

No luck there.  Any ideas?

Mike


Re: [GENERAL] Npgsql - Where can I find Npgsql.NpgsqlServices

2013-12-12 Thread Mike Christensen
Thanks!  Got it working after messing around for a while..

I decided to check out EF for a new project I'm working on.  So far, I
think I like NHibernate better.  A lot more control, and works flawlessly
with Postgres and Npgsql.

Mike




On Thu, Dec 12, 2013 at 8:34 AM, Francisco Figueiredo Jr. <
franci...@npgsql.org> wrote:

>
> Hi, Mike.
>
> We are working to get EF6 properly supported and our master branch already
> has support for it.
> Unfortunately pgfoundry is down at the moment.
>
> Please, if you can compile Npgsql from source code, please get a current
> snapshot from github and give it a try: https://github.com/npgsql/Npgsql and
> directly https://github.com/npgsql/Npgsql/archive/master.zip
> Just open the project on visual studio and select the correct
> configuration (net-45) and you will get Npgsql.dll and
> Npgsql.EntityFramework.dll in the EntityFramework/bin folder.
>
> I hope it helps.
>
> Let me know if you have any question.
>
>
>
> On Thu, Dec 12, 2013 at 1:24 AM, Mike Christensen wrote:
>
>> It seems I need NpgsqlServices to use Npgsql with EF6, however I can't
>> figure out where you get this thing!
>>
>> I've tried installing it through NuGet:
>>
>> PM> Install-Package Npgsql -pre
>> Installing 'Npgsql 2.0.14.1'.
>> Successfully installed 'Npgsql 2.0.14.1'.
>> Adding 'Npgsql 2.0.14.1' to EFTest.
>> Successfully added 'Npgsql 2.0.14.1' to EFTest.
>>
>>
>> However, this doesn't have it.  I've also tried installing the beta from:
>>
>>
>> http://pgfoundry.org/frs/download.php/3494/Npgsql2.0.13.91-bin-ms.net4.5Ef6.zip
>>
>> No luck there.  Any ideas?
>>
>> Mike
>>
>
>
>
> --
> Regards,
>
> Francisco Figueiredo Jr.
> Npgsql Lead Developer
> http://www.npgsql.org
> http://gplus.to/franciscojunior
> http://fxjr.blogspot.com
> http://twitter.com/franciscojunior
>


Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-28 Thread Mike Christensen
I've had the same problem as well with NHibernate (On .NET) with Postgres
ENUM types.  Luckily, NHibernate is incredibly powerful and you *can* get
everything working flawlessly, however it takes some serious digging into
the source code and reading the docs to figure it out.  The main issue is
that NHibernate, out of the box, wants to map an ENUM as a number.  For
example:

INSERT INTO FOO (SomeEnumColumn) VALUES (1);

This will cause an error, because PG is looking for a string value (Even
though ENUMs are stored as numeric values under the covers).  It's pretty
easy to configure NHibernate to convert ENUMs to strings (there's tons of
blog posts on that)..  However, this causes NHibernate to write:

INSERT INTO FOO (SomeEnumColumn) VALUES ('EnumValue'::text);

Which will also cause an error.  I've found the only way around it is to
configure NHibernate to treat ENUMs as "Objects" which will simply generate:

INSERT INTO FOO (SomeEnumColumn) VALUES ('EnumValue'); -- No casting here, yay!

This works.  However, to agree with the original poster's point, if
Postgres could be a little more forgiving about values that could be
interpreted as correct (like an implicit cast between numeric and enum and
string and enum) then we wouldn't have these issues..
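
For concreteness, here's a minimal sketch of the three cases above; the
type and table names are made up for illustration:

CREATE TYPE meal_type AS ENUM ('Breakfast', 'Lunch', 'Dinner');
CREATE TABLE foo (some_enum_column meal_type);

INSERT INTO foo (some_enum_column) VALUES (1);              -- fails: integer can't be assigned to the enum
INSERT INTO foo (some_enum_column) VALUES ('Lunch'::text);  -- fails: no assignment cast from text to the enum
INSERT INTO foo (some_enum_column) VALUES ('Lunch');        -- works: untyped literal is resolved as the enum type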

Mike


On Tue, Jan 28, 2014 at 1:37 PM, John R Pierce  wrote:

> On 1/28/2014 1:20 PM, Tom Lane wrote:
>
>> I think you can fix it by explicitly casting your placeholders, eg
>> "?::macaddr".
>>
>
> that might work for a wrapper that lets you roll your own SQL, but I
> thought he said one of these autogenerated SQL, taking it out of his
> control.
>
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-28 Thread Mike Christensen
Interesting!

I'd be curious as to what types of bugs were caused by these implicit
casts..

Note that 8.3 was back in the days before ORMs became popular, so "just
write better SQL" was a perfectly decent solution to the problem back
then.  Nowadays, this requirement might make Postgres incompatible with
certain ORMs out there, which is a bummer.  I'm wondering if these
ambiguities you speak of could be solved in other ways, such as
implicitly casting iff the intention is not ambiguous, and otherwise
raising some sort of "ambiguous" error or defaulting to some behavior.

Mike


On Tue, Jan 28, 2014 at 2:46 PM, John R Pierce  wrote:

> On 1/28/2014 2:35 PM, Mike Christensen wrote:
>
>> This works.  However, to agree with the original poster's point, if
>> Postgres could be a little more forgiving about values that could be
>> interpreted as correct (like an implicit cast between numeric and enum and
>> string and enum) then we wouldn't have these issues..
>>
>
> it had more implicit casts prior to (I think) 8.3, but there were many
> ambiguities where things could be interpreted to mean radically different
> sorts of operations, so they tightened things up in 8.3+ (or was it 8.4+ ?)
>
>
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-28 Thread Mike Christensen
How do you create casts in Postgres?


On Tue, Jan 28, 2014 at 3:24 PM, Andrew Sullivan wrote:

> On Tue, Jan 28, 2014 at 02:55:03PM -0800, Mike Christensen wrote:
>
> > I'd be curious as to what types of bugs were caused by these implicit
> > casts..
>
> Typically, they were cases when there was an ambiguity that the
> programmer didn't understand, causing applications to blow up in
> surprising and wonderful ways.
>
> There are things you can do if you're really prepared for the gun
> aimed at your feet.  Since you can create casts in Postgres, you can
> actually add back many of the implicit casts yourself.
>
> > Such as implicitly cast iff the intention is not ambiguous
>
> I think if the developers could write code that read minds, they'd be
> working on more profitable enterprises ;-)
>
> Best,
>
> A
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] PostgreSQL specific datatypes very confusing for beginners who use wrappers around JDBC

2014-01-28 Thread Mike Christensen
Oh.  The CREATE CAST command.  Wow, I was totally unaware of this entire
feature!
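
A minimal sketch of what that looks like for the enum case from earlier
in this thread (the type and table names are made up, and CREATE CAST
... WITH INOUT needs PostgreSQL 8.4 or later):

CREATE TYPE meal_type AS ENUM ('Breakfast', 'Lunch', 'Dinner');
CREATE TABLE foo (some_enum_column meal_type);

-- re-add an implicit text -> enum conversion using the types' I/O functions
CREATE CAST (text AS meal_type) WITH INOUT AS IMPLICIT;

INSERT INTO foo (some_enum_column) VALUES ('Lunch'::text);  -- now accepted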


On Tue, Jan 28, 2014 at 3:36 PM, Mike Christensen wrote:

> How do you create casts in Postgres?
>
>
> On Tue, Jan 28, 2014 at 3:24 PM, Andrew Sullivan wrote:
>
>> On Tue, Jan 28, 2014 at 02:55:03PM -0800, Mike Christensen wrote:
>>
>> > I'd be curious as to what types of bugs were caused by these implicit
>> > casts..
>>
>> Typically, they were cases when there was an ambiguity that the
>> programmer didn't understand, causing applications to blow up in
>> surprising and wonderful ways.
>>
>> There are things you can do if you're really prepared for the gun
>> aimed at your feet.  Since you can create casts in Postgres, you can
>> actually add back many of the implicit casts yourself.
>>
>> > Such as implicitly cast iff the intention is not ambiguous
>>
>> I think if the developers could write code that read minds, they'd be
>> working on more profitable enterprises ;-)
>>
>> Best,
>>
>> A
>>
>> --
>> Andrew Sullivan
>> a...@crankycanuck.ca
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


[GENERAL] Converting uuid primary key column to serial int

2011-06-08 Thread Mike Christensen
Hi all -

For most of my database I use UUIDs as primary keys because, well, I
just like it better and like being able to generate a key in the
middle tier when I create new data.  However, I have one table that
has a very fixed and immutable set of data with a few thousand
ingredients in it.  This set very rarely changes (in fact when I
change it, I have to restart the whole website)..  The UUIDs for each
ingredient are kinda a hassle to manage though, and I'm thinking in
this case I might just want to use a 32bit integer for the primary
key..

Almost every other table in the database has FK relationships with
these ingredients, and there's all sorts of JOINs against it.  So,
making this change is gonna be fairly costly and require all sorts of
schema changes to everything.  I have a few related questions on this
topic..

1) What's the best way to convert an existing table from UUIDs to a
serial INT column?  I want to have Postgres just assign sequential
numbers starting at 1 to all the rows.

2) Once I do this, I obviously need to fix up all the foreign keys and
convert them from the old UUIDs to the new generated numeric value.
Is there an easy way to manage this sort of thing?  I'm thinking about
keeping the old UUID as a non-PK column to use as a "mapping" as I
migrate the rest of the schema over.  Is that a good approach?

3) I'm also worried about backup/restores/replication.  I often add
new ingredients to my production database, and every so often backup
this database and restore it to my dev box so I can work with
production data.  I need to make sure all these ingredient IDs stay
constant, just as the UUIDs did.  Does pg_dump generate SQL that will
"re-create" the existing keys and set the identity watermark
correctly?  If someday I have replication enabled, do the same IDs get
replicated consistently?  Any other "gotchas" here?

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting uuid primary key column to serial int

2011-06-08 Thread Mike Christensen
On Wed, Jun 8, 2011 at 1:06 PM, David Johnston  wrote:
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
>> ow...@postgresql.org] On Behalf Of Mike Christensen
>> Sent: Wednesday, June 08, 2011 2:57 PM
>> To: pgsql-general@postgresql.org
>> Subject: [GENERAL] Converting uuid primary key column to serial int
>
>> for each ingredient are kinda a hassle to manage though, and I'm thinking
> in
>> this case I might just want to use a 32bit integer for the primary key..
>
>> 2) Once I do this, I obviously need to fix up all the foreign keys and
> convert
>> them from the old UUIDs to the new generated numeric value.
>> Is there an easy way to manage this sort of thing?  I'm thinking about
> keeping
>> the old UUID as a non-PK column to use as a "mapping" as I migrate the
> rest
>> of the schema over.  Is that a good approach?
>>
>
> What specific hassles are you encountering?  Seems like a lot of effort to
> go through to make the data less-unique.  Since you already have the field
> setup for PK/FK use try to leave that alone and just add a secondary
> identifier that you use for queries.  Like you said in #2 leave the UUID in
> place as a non-PK (but still unique/not-null) field and leave all the
> foreign keys in place as well.  Add your "integer identity" column as the
> new Primary Key and use that field when you want to specify a record.  Then,
> for those cases where you have or want to use the integer PK but the field
> being compared is the UUID you can write a simple function to return the
> UUID associated with the given integer.
>
> Otherwise you probably should just add the integer field to EVERY TABLE and
> establish relational links.  Add the field as null-able, perform the update
> using the UUID, change to not null, add FK constraint.  As you then attempt
> to remove the UUID field from the tables the system will tell you where
> different function and views are relying upon the UUID field and you can
> change the object to use the new integer field instead.
>
> The main risk really is in the application since the database will not be
> able to enforce consistency.
>
> Serial is implemented via "DEFAULT" and sequences; and if you dump/restore a
> sequence it is done consistently.

All very good points.  A few reasons why I'm considering this:

1) On some pages, the user can pass in a search query (keywords to
find, ingredients to exclude, etc)..  I have a Base64 representation
of this search query, and it gets pretty long if a bunch of UUIDs are
included.  Making them ints would make for much shorter URLs.  There's
a few other pages that have an ingredient ID URL parameter directly.
I think URLs with UUIDs are kinda ugly.

2) I deal with a lot of this data through internal web based admin
tools as well as Excel.  There's a lot of places where I need to link
some metadata to an existing ingredient.  I'd like to just be able to
refer to "eggs" as, say, 53, rather than some UUID for eggs.  Sure,
the tools could have auto-complete and auto-lookup stuff but I haven't
had much time to work on these tools so they're super hacky right now.

Both of these requirements, as you said, could be satisfied by using a
non-primary key though.  In fact, there's then no reason to change any
other tables - internally the database would link to the ingredient
UUIDs.

I'm assuming I can still have a "Serial" column that is NOT a primary
key, and it'll increment just the same as I add rows?  If that's the
case, I think that's a superior approach..

BTW, this table is too small to worry about disk space of UUIDs and/or
perhaps any sort of performance benefits to using int over uuid (if
there are any)..

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting uuid primary key column to serial int

2011-06-08 Thread Mike Christensen
>> I'm assuming I can still have a "Serial" column that is NOT a primary key,
> and
>> it'll increment just the same as I add rows?  If that's the case, I
> think that's
>> a superior approach..
>>
>> BTW, this table is too small to worry about disk space of UUIDs and/or
>> perhaps any sort of performance benefits to using int over uuid (if there
> are
>> any)..
>>
>> Mike
>
> " CREATE TABLE t ( field serial ); " is simply short-hand for " CREATE TABLE
> t (field integer DEFAULT nextval('t_seq') ); " where the sequence "t_seq" is
> automatically created and linked to the table.
>
> Nothing more and nothing less.  Whether you add NOT NULL or, by extension,
> PRIMARY KEY, to the field as well as the "serial" datatype depends on
> whether you want to have those other properties.
>
> Have you considered giving the row for "eggs" the PK of "eggs"?  You did say
> you have multiple thousands of records but neither the UUID or the integer
> is going to stop you from then having 2+ records with "eggs" as the name.
> If you want to see how many recipes use "eggs" what would you do to make
> sure you are not missing any?  Even if you decide to keep the UUID and/or
> Integer as UNIQUE indices you should try and have something in the data
> itself that can be defined as UNIQUE.  Since you are dealing with discreet
> items, without any kind of time property, it should be possible to do so.
>
> From an implementation perspective you will want to create the sequence and
> all using "serial" but allow "NULL" for the field.  Once you've assigned all
> the existing records an ID (probably via the row() window function) you can
> setup the sequence to begin with the next available number.  See docs for
> syntax.

Yea, there's already a unique index on the ingredient name, so there
can only be 1 "eggs" row..  I'm not too sure on using the name as the
primary key..

The serial implementation makes sense..  If that's all that it does, I
should be able to just update all rows and set the numeric value to
nextval('t_seq') directly to order all my existing rows, then set the
column to NOT NULL when I'm done, and set the sequence to the next
available number..  Worst case I'll just write a little function that
loops through my rows and numbers them all.  I'll checkout the docs on
this, thanks!
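
In case it's useful to anyone else, here's a rough sketch of that plan;
the table, column and sequence names are made up and the real schema
will differ:

CREATE SEQUENCE ingredients_num_seq;

-- add the new integer column and number the existing rows
ALTER TABLE ingredients ADD COLUMN ingredient_num integer;
UPDATE ingredients SET ingredient_num = nextval('ingredients_num_seq');

-- lock it down and make it behave like a serial column going forward
ALTER TABLE ingredients
    ALTER COLUMN ingredient_num SET NOT NULL,
    ALTER COLUMN ingredient_num SET DEFAULT nextval('ingredients_num_seq');
ALTER TABLE ingredients ADD CONSTRAINT ingredients_num_key UNIQUE (ingredient_num);

-- tie the sequence to the column, as a real serial column would be
ALTER SEQUENCE ingredients_num_seq OWNED BY ingredients.ingredient_num;

Since the UPDATE already consumed values from the sequence, the DEFAULT
simply continues from the next available number.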

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Constraint to ensure value does NOT exist in another table?

2011-06-15 Thread Mike Christensen
I know I can setup a FK constraint to make sure Table1.ColA exists in
Table2.Key, however what if I want to do the reverse?

I want to ensure Table1.ColA does NOT exist in Table2.Key..  Can I do
this with any sort of CHECK constraint, trigger, custom function, etc?
 Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] You could be a PostgreSQL Patch Reviewer!

2011-06-15 Thread Mike Christensen
Did anyone ever fix the annoying thing where uuid_generate_v4()
doesn't work on Windows 64bit?

On Wed, Jun 15, 2011 at 9:46 PM, Tom Lane  wrote:
> Craig Ringer  writes:
>> Any chance of flagging patches on the commitfest when they're platform
>> specific? I'm hurting for time but will check out Windows-specific stuff
>> if it's an area the project particularly needs help with.
>
> Well, a quick look through
> https://commitfest.postgresql.org/action/commitfest_view/inprogress
> suggests that these could use attention:
>
> Latch implementation: wake on postmaster death, reduce archiver wakeups
>        Has a Windows-specific implementation, which needs testing
>
> POSIX shared memory
>        Does this work on Windows?
>
> Allow multiple Postgres clusters running on the same machine to distinguish 
> themselves in the event log
>        Windows-only feature, I assume
>
> Add Support for building with Visual Studio 2010
>        Needs testing not only with VS2010, but older versions
>
> libpq SSL with non-blocking sockets (WIP)
>        Might need testing on Windows, not sure
>
>
>                        regards, tom lane
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] You could be a PostgreSQL Patch Reviewer!

2011-06-15 Thread Mike Christensen
Wouldn't it be faster/better/easier if Postgres just had its own built
in UUID generator?  Last I tested generating a bunch of UUIDs, it was
quite slow (well compared to MS SQL anyway)..

On Wed, Jun 15, 2011 at 10:23 PM, Tom Lane  wrote:
> Mike Christensen  writes:
>> Did anyone ever fix the annoying thing where uuid_generate_v4()
>> doesn't work on Windows 64bit?
>
> AFAIR that was an issue with the uuid-ossp library, so it would be a
> matter for that upstream group to deal with.
>
>                        regards, tom lane
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] You could be a PostgreSQL Patch Reviewer!

2011-06-15 Thread Mike Christensen
On Wed, Jun 15, 2011 at 10:34 PM, Tom Lane  wrote:
> Mike Christensen  writes:
>> Wouldn't it be faster/better/easier if Postgres just had its own built
>> in UUID generator?
>
> Why would it be?  If you think you can easily improve on uuid-ossp,
> you should go help them.

I have no knowledge on the subject one way or the other, that's why
I'm asking..  If the answer is "no, there would be no benefit" then
that's the answer..  However, it seems like generating UUIDs in Win64
has been broken, perhaps, forever which is why I question if there's a
better library to use on the Windows platform (or having a single UUID
generator built in for all platforms)..

I'd love to help out with Postgres (and several other open source
projects), perhaps some day though I'm more of a web/.NET guy..

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint to ensure value does NOT exist in another table?

2011-06-15 Thread Mike Christensen
>> I know I can setup a FK constraint to make sure Table1.ColA exists in
>> Table2.Key, however what if I want to do the reverse?
>>
>> I want to ensure Table1.ColA does NOT exist in Table2.Key..  Can I do
>> this with any sort of CHECK constraint, trigger, custom function, etc?
>
>
> The most common constraints are provided for, but then after that you
> have to use triggers.
>
> PostgreSQL deliberately doesn't support queries in CHECK constraints
> for this reason.

Thanks!  I wrote a Trigger for this and it seems to work fairly well..
 I kinda figured that was the obvious way, but thought there might be
some new 9.x feature that made this sort of thing possible.
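
Roughly, such a trigger could look like this (the table and column names
here are placeholders):

CREATE OR REPLACE FUNCTION check_not_in_table2() RETURNS trigger AS $$
BEGIN
    -- reject the row if the value already exists in Table2
    IF EXISTS (SELECT 1 FROM table2 WHERE table2.key = NEW.cola) THEN
        RAISE EXCEPTION 'value % already exists in table2', NEW.cola;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_check_cola
    BEFORE INSERT OR UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE check_not_in_table2();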

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint to ensure value does NOT exist in another table?

2011-06-16 Thread Mike Christensen
>> I know I can setup a FK constraint to make sure Table1.ColA exists in
>> Table2.Key, however what if I want to do the reverse?
>>
>> I want to ensure Table1.ColA does NOT exist in Table2.Key..  Can I do
>> this with any sort of CHECK constraint, trigger, custom function, etc?
>> Thanks!
>
>
> Perhaps it's possible to use a unique constraint in a third table to 
> guarantee those foreign keys can never have the same value. That would 
> probably be more efficient than executing stored procedure code.

You'd still have to use a TRIGGER to insert any new or updated values
into the third table.  Otherwise, you'd have to modify a bunch of code
to insert/update the keys into the third table and that somewhat goes
against the whole idea of making the database responsible for its own
integrity in the first place.

What I'm ideally looking for here is a way to ensure the DB cannot
possibly exist in this state.  Foreign keys let me do that; a trigger
(if written correctly) kinda does too so long as the data started out
in a valid state and the trigger is always run..

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint to ensure value does NOT exist in another table?

2011-06-17 Thread Mike Christensen
 I know I can setup a FK constraint to make sure Table1.ColA exists in
 Table2.Key, however what if I want to do the reverse?

 I want to ensure Table1.ColA does NOT exist in Table2.Key..  Can I do
 this with any sort of CHECK constraint, trigger, custom function, etc?
 Thanks!
>>>
>>>
>>> Perhaps it's possible to use a unique constraint in a third table to 
>>> guarantee those foreign keys can never have the same value. That would 
>>> probably be more efficient than executing stored procedure code.
>>
>> You'd still have to use a TRIGGER to insert any new or updated values
>> into the third table.  Otherwise, you'd have to modify a bunch of code
>> to insert/update the keys into the third table and that somewhat goes
>> against the whole idea of making the database responsible for its own
>> integrity in the first place.
>
>
> No you don't.
>
> If Table1.ColA is an FK to Table3.ColA and Table2.ColA is also an FK to 
> Table3.ColA, you can put a unique constraint on Table3.ColA to make sure the 
> values are unique:
>        Table1          Table3          Table2
>        --          --          --
>         ColA >---|- ColA -|---< ColA
>
> If you insert a value in either Table1 or Table2, it first HAS to exist in 
> Table3, due to the FK constraints. However, that still allows for values that 
> are in both tables 1 and 3, just pointing to the same value in Table3.
>
> To solve that you add an extra column to all tables, for example:
>        ALTER TABLE Table1 ADD src CHAR(1) DEFAULT 'A';
>        ALTER TABLE Table2 ADD src CHAR(1) DEFAULT 'B';
> And you change the FK constraints in A and B to include "src":
>
>        Table1          Table3          Table2
>        --          --          --
>         ColA >---|- ColA -|---< ColA
>         src  >-/   \-|- src  -|-/   \-< src
>
> You also add back a UNIQUE constraint over Table3.ColA (without the "src" 
> column).
>
> Now, if you add a value to Table1, it requires a value of (ColA, 'A') in 
> Table3. If you add one to Table2, it requires a value of (ColA, 'B'). If 
> either of those already exist though, you violate the UNIQUE constraint on 
> Table3.ColA.
>
> It's probably convenient to write some triggers to auto-generate the records 
> in Table3, but those triggers are NOT needed for relational integrity - they 
> just make the task easier.

Yup yup, I see where you're going..  It's like the third table is a
"name broker" that grants the unique priveledge of using a name in the
database.  The other tables will have a FK on it so you'd have to add
that name to the table before it can be inserted elsewhere.  The third
table will be unique which ensures a name is only used once.

This would work great; however, I'd have to modify a bunch of code to
insert a name into the third table before it could be used..  Since an
admin tool is the only thing that would be doing this (this data
hardly ever changes), this isn't out of the question.  I actually
don't need any TRIGGERS if I do this, I just need to modify some code.
 This design will ensure my data is always in a valid state.
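
In DDL terms, the scheme described above would look roughly like this
(column types and names are guesses):

CREATE TABLE table3 (
    cola text PRIMARY KEY,            -- a name can only be registered once
    src  char(1) NOT NULL,
    UNIQUE (cola, src)                -- target for the composite foreign keys
);

CREATE TABLE table1 (
    cola text NOT NULL,
    src  char(1) NOT NULL DEFAULT 'A',
    FOREIGN KEY (cola, src) REFERENCES table3 (cola, src)
);

CREATE TABLE table2 (
    cola text NOT NULL,
    src  char(1) NOT NULL DEFAULT 'B',
    FOREIGN KEY (cola, src) REFERENCES table3 (cola, src)
);

A value inserted into table1 must exist in table3 with src = 'A', and a
value inserted into table2 must exist there with src = 'B'; since cola
is unique in table3, the same value can never be registered for both.
Adding CHECK (src = 'A') / CHECK (src = 'B') would also keep the src
columns from being overridden.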

A fine approach.  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgAdmin on Mac connecting to Postgres 9 on Linux - SSL/timeout issue

2011-07-14 Thread Mike Christensen
Sorry this is an extremely old thread, but I finally got around to
tracking down the problem..  Added these lines to postgresql.conf and
poof, problem gone..

tcp_keepalives_idle = 60        # TCP_KEEPIDLE, in seconds;
tcp_keepalives_interval = 10    # TCP_KEEPINTVL, in seconds;
tcp_keepalives_count = 5        # TCP_KEEPCNT;

Hope this helps someone else!

On Sat, Oct 30, 2010 at 4:21 AM, Mike Christensen  wrote:
> Maybe that's it..  It's definitely some sort of SSL thing since it
> didn't start happening until I enabled SSL.  I guess I'll just have to
> close pgAdmin when I'm not using it..
>
> On Sat, Oct 30, 2010 at 12:57 AM, Basil Bourque  wrote:
>>
>> On Oct 29, 2010, at 15:36, Mike Christensen wrote:
>>
>>> I have a Postgres 9 server running on a server out on the Internet and
>>> I connect to it with pgAdmin on OS/X over an SSL connection.
>>>
>>> I notice if I keep the connection open and idle for maybe an hour or
>>> so, when I try to run a query it either times out or pgAdmin just
>>> kinda freezes up and I have to force quit..  Is there some sort of
>>> idle timeout setting on SSL connections, or maybe it has to
>>> re-negotiate the connection after a certain amount of idle time and
>>> it's not doing that right?
>>>
>>> Anyone run into this before?  Thanks!
>>
>> I don't know about pgAdmin or SSL, but I have heard that routers and switches
>> can have a timeout limit: "the idle time after which an established
>> connection of any protocol closes".
>>
>> http://search.gmane.org/?query=router+timeout&group=gmane.comp.lang.inug-4d.tech
>>
>> --Basil Bourque
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgAdmin3 not working with Gnome3

2011-09-01 Thread Mike Christensen
> Hi all -
>
> I'm on openSuse running the latest stable release of Gnome3 (Just
> trying it out, so far the fact I can't minimize windows is perhaps
> more than my old school brain can handle)..
>
> I've noticed in pgAdmin, basically no popup works.  If I right click
> on the "Databases" branch and select "New Database", nothing happens.
> If I right click on the "Tables" brand and select "New Table", nothing
> happens..  I know for a fact this works when I logon with KDE..
>
> If anyone wants more info from my system, just lemme know what I can
> do to get that for you..  Thanks!!

Okay good news is that these menus do come up, bad news is the bug is
even more obscure.  Here's my setup:

I have two monitors.  Gnome3 allows you to create multiple desktops to
organize your windows; however, this only affects your primary monitor.
 I actually kinda like this, since I have my web browser on my second
monitor and I can "scroll through" apps on my primary monitor.

In this case, I had pgAdmin on my second monitor and some random app,
like Eclipse on the primary monitor.  I selected "Create Database" on
pgAdmin and apparently nothing happened.  What it actually did was
spawn the new dialog on my primary monitor BUT on another virtual
desktop.  I finally noticed it when I flipped over to that one..

I guess the bug can be fixed as:

1) Always spawn new child windows on the same monitor.
or
2) In Gnome 3, create the window on the virtual desktop that's
currently active..

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pgAdmin3 not working with Gnome3

2011-09-01 Thread Mike Christensen
Hi all -

I'm on openSuse running the latest stable release of Gnome3 (Just
trying it out, so far the fact I can't minimize windows is perhaps
more than my old school brain can handle)..

I've noticed in pgAdmin, basically no popup works.  If I right click
on the "Databases" branch and select "New Database", nothing happens.
If I right click on the "Tables" brand and select "New Table", nothing
happens..  I know for a fact this works when I logon with KDE..

If anyone wants more info from my system, just lemme know what I can
do to get that for you..  Thanks!!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Am I best off keeping large chunks of text in a separate table?

2011-09-17 Thread Mike Christensen
I have a table that looks something like this:

url - character varying(1024)
date - timestamptz
body - text

Url is a unique primary key.  Body can potentially be a couple hundred
k of text.

There will at first be perhaps 100,000 rows in this table, but at some
point it might get into the millions.

I need to be able to quickly insert into this table (I might be
inserting several rows per second at times).  I also need to be able
to very quickly see if a URL already exists in the table, and what the
date value is.  Or, query for all "urls" that have a "date" older than
x days.

Am I better off with two tables such as:

Table1:
id - uuid or integer (primary key)
url - unique index
date

Table2:

id - FK to Table2.id
body - text

It makes the program flow a bit more complicated, and I'd have to use
transactions and stuff when inserting new rows.  However, for years
I've been told that having rows with large chunks of text is bad for
perf and forces that data to be paged into memory and causes other
various issues.  Any advice on this one?  Thanks!

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Am I best off keeping large chunks of text in a separate table?

2011-09-17 Thread Mike Christensen
> On Sat, Sep 17, 2011 at 6:46 PM, Mike Christensen  wrote:
>> I have a table that looks something like this:
>>
>> url - character varying(1024)
>> date - timestamptz
>> body - text
>>
>> Url is a unique primary key.  Body can potentially be a couple hundred
>> k of text.
>>
>> There will at first be perhaps 100,000 rows in this table, but at some
>> point it might get into the millions.
>>
>> I need to be able to quickly insert into this table (I might be
>> inserting several rows per second at times).  I also need to be able
>> to very quickly see if a URL already exists in the table, and what the
>> date value is.  Or, query for all "urls" that have a "date" older than
>> x days.
>>
>> Am I better off with two tables such as:
>>
>> Table1:
>> id - uuid or integer (primary key)
>> url - unique index
>> date
>>
>> Table2:
>>
>> id - FK to Table2.id
>> body - text
>>
>> It makes the program flow a bit more complicated, and I'd have to use
>> transactions and stuff when inserting new rows.  However, for years
>> I've been told that having rows with large chunks of text is bad for
>> perf and forces that data to be paged into memory and causes other
>> various issues.  Any advice on this one?  Thanks!
>
> What would be really cool is if postgresql took values for body that
> were over a few k and compressed them and stored them out of line in
> another table.  Luckily for you, that's EXACTLY what it already does.
> http://www.postgresql.org/docs/9.1/static/storage-toast.html  Cool eh?
>

Man I've been reading this list for years now, and I kept on seeing
this "TOAST" thing and just figured you people liked it for sandwiches
or something.

I feel like the programmer who thinks he's smart using a left bitshift
operator to double an integer value just to find out the compiler
already makes that optimization anyway.  Are you saying I don't
actually need to de-frag my hard drive these days either?

Thanks for the quick reply!  I will design my table in a way that
logically makes sense to me.
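
For reference, the single-table design from the original question, with
an index to cover the "older than x days" lookups (names as sketched
above):

CREATE TABLE pages (
    url   varchar(1024) PRIMARY KEY,   -- also gives the fast "does this URL exist" lookup
    date  timestamptz NOT NULL,
    body  text                         -- large values are compressed/stored out of line by TOAST
);

CREATE INDEX pages_date_idx ON pages (date);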

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Clean way to insert records if they don't exist, update if they do

2011-09-17 Thread Mike Christensen
Here's my situation.  I have a table with a bunch of URLs and crawl
dates associated with them.  When my program processes a URL, I want
to INSERT a new row with a crawl date.  If the URL already exists, I
want to update the crawl date to the current datetime.  With MS SQL or
Oracle I'd probably use a MERGE command for this.  With mySQL I'd
probably use the ON DUPLICATE KEY UPDATE syntax.

I could do multiple queries in my program, which may or may not be
thread safe.  I could write a SQL function which has various IF...ELSE
logic.  However, for the sake of trying out Postgres features I've
never used before, I'm thinking about creating an INSERT rule -
something like this:

CREATE RULE Pages_Upsert AS ON INSERT TO Pages
   WHERE EXISTS (SELECT 1 from Pages P where NEW.Url = P.Url)
   DO INSTEAD
  UPDATE Pages SET LastCrawled = NOW(), Html = NEW.Html WHERE Url = NEW.Url;

This seems to actually work great.  It probably loses some points from
a "code readability" standpoint, as someone looking at my code for
the first time would have to magically know about this rule, but I
guess that could be solved with good code commenting and
documentation.

Are there any other drawbacks to this idea, or maybe a "your idea
sucks, you should do it /this/ way instead" comment?  I'm on PG 9.0 if
that matters.  BTW, add my name to the long list of people who would
love to see UPSERT and/or MERGE commands in the next version of PG.
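
For comparison, the "SQL function with IF...ELSE logic" alternative
would look roughly like the merge_db example in the PL/pgSQL docs.
Here's a hedged sketch adapted to the table above (the column names
follow the rule; the function and parameter names are made up):

CREATE OR REPLACE FUNCTION upsert_page(p_url text, p_html text) RETURNS void AS $$
BEGIN
    LOOP
        -- first try to update an existing row
        UPDATE Pages SET LastCrawled = NOW(), Html = p_html WHERE Url = p_url;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert; if another session beat us to it, retry
        BEGIN
            INSERT INTO Pages (Url, LastCrawled, Html) VALUES (p_url, NOW(), p_html);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;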

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Materialized views in Oracle

2011-09-21 Thread Mike Christensen
So I used to think materialized views in Postgres would be an awesome
feature.  That is until I had to endure the hell hole which is Oracle's
implementation..  what a complete joke..  did MS SQL's indexed views do any
better?  Hopefully if PG 10 implements this, they'll make it actually useful
to people.


Re: [GENERAL] Materialized views in Oracle

2011-09-21 Thread Mike Christensen
Hmm I think a materialized view you have to update yourself is called a
"table"..  but after dealing with the nightmare that is Oracle 11g, I think
it'd be much more fun going that route with triggers and everything.  Yes
this thread is a complete vent, and also a plea to the pg guys to do
materialized views right!
On Sep 21, 2011 1:54 PM, "Ben Chobot"  wrote:
> On Sep 21, 2011, at 1:17 PM, Mike Christensen wrote:
>
>> So I used to think materialized views in Postgres would be an awesome
feature. That is until I had to endure the hell hole which is Oracle's
implementation.. what a complete joke.. did MS SQL's indexed views do any
better? Hopefully if PG 10 implements this, they'll make it actually useful
to people.
>>
>
> Not sure if this was a question or just blowing off frustration, but you
can do materialized views in PG just fine. It's just that you have to do the
work of updating them yourself. You can make a good case that's a good
thing, because it lets you optimize the process around your needs. but
it does add for dba complexity.
>
> (Hey, just like replication! :) )


Re: [GENERAL] Materialized views in Oracle

2011-09-21 Thread Mike Christensen
>> Hmm I think a materialized view you have to update yourself is called a
>> "table"..  but after dealing with the nightmare that is Oracle 11g, I
>> think it'd be much more fun going that route with triggers and
>> everything.  Yes this thread is a complete vent, and also a plea to the
>> pg guys to do materialized views right!
>
> In terms of defining "right," it'd be rather handy to know what's wrong with
> the implementation you're currently struggling with. The Oracle guys won't
> have set out to do it wrong; whatever's wrong with it won't be obvious until
> you try to use it a different way to how they envisioned it or will be an
> issue that only comes up in real-world use.
>
> It's entirely possible any future Pg support could fall into the same issue
> ... unless opportunities like this are used to learn from the experience of
> those who went before.
>
> Exactly what do you find frustrating with the mat views you're using?
>
> What's wrong with them? What's right with them? How are you trying to use
> them? What problem are you attempting to solve with them? How do you *want*
> and *expect* them to work?

Oh you've asked for it..

Lemme give you a bit of a brain dump about my day.  I have to first be
up front: I'm not a database expert by any means and I'm nowhere close
to an Oracle expert, or even an Oracle novice.  If Oracle was a
swimming pool, I would have those little floaty duck things on my
arms.  I took a contract gig for "a major cell phone company" that
happens to use Oracle, so I've been trying to figure out stuff as I
go.  I'm about 3 weeks in, so that's exactly how much Oracle
experience I have now.

One of the major components I'm working on is this reporting engine
that runs these major huge expensive queries (seriously, some of them
take 2-3 minutes to run) to generate data that gets dumped to an Excel
file for VPs to read (or make pretty charts out of and look smart in
board rooms).  I've been trying to speed some of these queries up, but
unfortunately the SQL code that generates these reports is about 4,000
lines long and is wrapped in a massive Oracle "package".  The SQL code
is not indented nor does it have any whitespace or anything; you'd
have more fun trying to debug into minified JavaScript files on
Facebook.  I've tried to run some SQL formatters on the code, and so
far any formatter will just crash.  Today, I decided to re-write one
of these reports from scratch since they wanted a bunch of changes
anyway.  The data, as you've probably imagined, is highly
un-normalized and the queries are pretty crazy, so I got the idea to
make a few strategic DB changes as well as stick this report in a
materialized view.  That way, we could just query from that, throw in
whatever filters we need and poof, instant report in like half a
second.  If done right.

Problem 1:

I spent quite a long time writing the query that I was planning on
turning into a view.  This involved doing things like a sub-query to
get the minimum date from another set of rows, and building a comma
delimited list out of another nested select with the wm_concat function.
After finally getting this query to work, turning it into a MV ended
up being impossible.

Oracle has a few really cool features around materialized views.
There's a REFRESH ON COMMIT option that automatically updates the
materialized view any time the underlying data is changed (otherwise
you have to manually update it, or setup an update schedule for it).
There's also a FAST REFRESH option that will only update the data
that's changed, rather than run the entire query and re-build every
row (Doing a full refresh on every commit would be crazy expensive, so
no)..  In my opinion, doing a FAST REFRESH ON COMMIT seems to be the
only interesting feature - otherwise, there's not really a point -
just make a normal table and re-build it once a day.  Plus, they
demand these reports to be real-time data anyway.  However, if you
want both of these options (fast refresh and update on commit), Oracle
turns into a complete nightmare.

First, apparently there can be no JOINS in your query.  If you create
a view with:

SELECT A.Foo, B.Foo FROM A INNER JOIN B ON A.ID=B.ID

You'll get the cryptic error:

ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view

Now, you can do the SAME exact query without the ANSI joins:

SELECT A.Foo, B.Foo FROM A, B WHERE A.ID=B.ID

Now all of a sudden it works.  Yes, that makes a lot of sense, and
thanks for the helpful error message too.

Second: You have to create these materialized view logs for all the
underlying tables if you want FAST REFRESH.  This allows Oracle to log
the exact data that changed so it can figure out which parts of your
view table to update.  Why can't Oracle just log its own data when
there are fast-refreshing views dependent on it?  You not only have to
create these logs, but you have to create them with special ROWID
parameters, otherwise you get more cryptic, unhelpful Oracle errors.

Re: [GENERAL] Materialized views in Oracle

2011-09-23 Thread Mike Christensen
On Wed, Sep 21, 2011 at 11:34 PM, Craig Ringer  wrote:
[snip]
> This can get complicated when you have triggers acting recursively on a
> table and it isn't always that easy to understand exactly what a trigger
> will see.

I do agree with most all your points.  The value I got out of this
experience was the perspective I gained dealing with materialized
views and being forced to think about the logic behind their
implementation.  As you said, this feature is friggen tough to do
right.  I read up a bit about Microsoft SQL’s “indexed views” and they
too have a long list of things you can’t do, and from what I can tell,
they only support the idea of keeping them up to date on every commit.
So, since we’re not on this list to discuss how we can improve Oracle,
I’d like to not make that the focus of my response.  I’d rather have a
discussion on what materialized views could mean in the Postgres
world.  I think my expectations, now slightly more validated through
my realization of what’s possible with Oracle, would go something like
this:

1) Though I might have given the impression that a “manual complete
refresh” is not useful, I definitely see value in this especially for
data warehousing scenarios.  However, I’d almost want to call this a
“snapshot” and not a “materialized view” – the two things are so
different, I think it warrants different syntax.  I think the ability
to create a snapshot in time would be quite useful:

CREATE SNAPSHOT Foo AS
   SELECT * FROM Bar;

Now, I have “Foo” as a record in time, and can refer to it as I could
any other table. I’d also like the ability to refresh it (via a
schedule or a trigger):

REFRESH SNAPSHOT Foo;

Snapshots would remember their underlying query and could thus easily
update at any time.

2) The huge feature here (which MS SQL implements as indexed views)
are views that automatically update as the data underneath them
changes.  I’ve come to the conclusion that, while it’s impressive that
Oracle can do anything close to this and have it actually work pretty
well in a lot of scenarios, Oracle simply can’t have an
all-encompassing knowledge of exactly how my database works and where
all the data comes from.  You know who does have that knowledge?  Me.
I believe I should have the power to instruct Postgres exactly when
and how to update my view in situations where it cannot be
automatically ascertained through the engine, rather than the DB
saying “Sorry I can’t be perfect thus you can’t do that.”  For me, I
see this “chore” as a lot more appealing than trying to figure out why
I can’t make the view that I want.

I expect to be able to create two kinds of materialized views: Ones
that COMPLETE refresh any time any referred column changes, and ones
that do a FAST refresh.  If I specify FAST but the engine can’t infer
what it needs, I should get a NOTICE and it should fall back to a
COMPLETE.  If I specify nothing, it should create FAST if it can, and
if not fall back to COMPLETE without notice.

When creating a materialized view, I believe warnings should be issued
when a column’s source cannot be inferred by the parser, but at the
risk of being controversial, I think I should still be able to create
the view anyway.  I’ve always been partial to systems that allow you
to shoot yourself in the foot.  I could see doing something like:

CREATE MATERIALIZED VIEW Foo AS
   SELECT ID, Name FROM Users;

Postgres knows that ID is a primary key, and can thus update the view
when Users changes.  Had I not put in a primary key, I think one
should be generated for me automatically based on every primary key in
the referred tables.  If tables do not have primary keys, you’d get a
warning that the view has to be re-created on any change to the
referred keyless tables.

CREATE MATERIALIZED VIEW Foo AS
   SELECT State, AVG(Age) FROM Users GROUP BY State;

Ok this is a tough one to figure out.  Since the Age column is
referred to, we could force a refresh every time Age in any row
changes.  In theory, the engine could be smart enough to realize the
aggregate age came from a group, and thus update the appropriate
“State” row when any Age within that state changed.  Wow, this is
getting tough; I see why Oracle just said no way on aggregate
functions.

CREATE MATERIALIZED VIEW Foo AS
   SELECT ID, GetStateFromZip(ZipCode) as State FROM Users;

Ouch, a function.  Well, the engine could look at the volatility of my
GetStateFromZip function and make an assumption that passing in value
X would always result in Y for nonvolatile functions.  Then, update
row ID when ZipCode changes.  However, if the function is volatile or
the data that the function itself uses changes, then we’d definitely
run into issues.  Two options: One, you’d issue a NOTICE and say
something like “The source of column ‘State’ cannot be inferred.” and
create the view anyway.  Maybe there could be some STRICT option or
pragma to simply not allow creating the dangerous views that could get
stale.  Another option, the use

Re: [GENERAL] (another ;-)) PostgreSQL-derived project ...

2011-09-24 Thread Mike Christensen
> ~
>  I have been searching for a PostgreSQL-derived project with a
> "less-is-best" Philosophy. Even though I have read about quite a bit
> of PG forks out there, what I have in mind is more like a baseline
> than a fork.
> ~
>  My intention is not wrapping the same thing in a different package or
> code add-ons/value-added features on top of PG, but ridding PG of
> quite a bit of its internal capabilities and just use its very
> baseline.
> ~
>  All I would need PG for is raw data warehousing, memory,
> I/O-subsystem management, MVCC/transaction management ... No fanciness
> whatsoever. What do you need to, say, format dates in the database if
> formatting/pretty-printing and internalization can be taken care more
> appropriately in the calling environment say Python or Java? All is
> needed is to store a long representing the date. Why are arrays needed
> in a the DB proper when serialization and marshaling/casting can be
> taken care of in the calling environment. If you are using say, java,
> all you need PG to do is to faithfully store a sequence of bytes and
> you would do the (de)serialization very naturally indeed.
> ~
>  There used to be a postgresql-base- package with the bare
> minimum of source code to build and run PostgreSQL which I think would
> be a good starting point, but I don't find it in the mirror sites
> anymore
> ~
>  http://wwwmaster.postgresql.org/download/mirrors-ftp
> ~
>  Where can I find it?
> ~
>  I know the result will not be a SQL-compliant DBMS anymore, yet I
> wonder how much faster SQL+client code doing such things as
> formatting "on-the-fly" would work.
> ~
>  Do you know of such tests even in a regular PG installation?
> ~
>  Do you see any usefulness in such a project?
> ~
>  Do you know of such a project? Anyone interested? Any suggestions to
> someone embarking in it?
> ~
>  It would be great if PG developers see any good in it and do it themselves 
> ;-)
> ~
>  lbrtchx

Doesn't Yahoo! have some super modified mega-high-performant version
of Postgres?  Last I heard (which was like 2008) they were planning on
putting it online.  I think it involved a columnar oriented table
format or something.  Did this ever happen?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why PGSQL has no developments in the .NET area?

2011-10-02 Thread Mike Christensen
>> PgSQL has just one old NPGSQL driver for .NET, which is itself sluggish.
>> The ODBC driver works better as compared to NPGSQL, but I suspect the ODBC
>> driver is not the right choice for ORM framework of .NET.
>>
>> I want to know whether there is any efficient .NET provider and is PGSQL
>> compatible with the .NET entity framework.
>>
>> Regards,
>> Rohit.
>
> Won't bite on the "why" of it all, but is this
>  what you're after?

We use the DevArt drivers (their Oracle drivers anyway) at work and
they're pretty good.  They have solid tech support (helpful and quick
to respond) and they seem to write quality stuff.  I'd assume their PG
drivers are just as good.  However, this stuff isn't cheap.  You pay
per dev license and it's like a few hundred bucks per dev.

I use Npgsql for my own website (www.kitchenpc.com) and I've had zero
problems with these drivers.  They're 100% managed code, and they seem
to be quick even though I haven't done any in-depth speed comparisons.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Searching for "bare" letters

2011-10-02 Thread Mike Christensen
>> I don't see the problem - you can have a dictionary, which does all work
>> on recognizing bare letters and output several versions. Have you seen
>> unaccent
>> dictionary ?
>
> This seems to be the direction that everyone is suggesting, and I'm quite
> grateful for that.  (I really hadn't ever needed to deal with such issues in
> the past, having worked mostly with English and Hebrew, which don't have
> such accent marks.)
>
> As for the unaccent dictionary, I hadn't heard of it before, but just saw it
> now in contrib, and it looks like it might fit perfectly.  I'll take a look;
> thanks for the suggestion.

I wrote this code for something similar I was doing; feel free to rip
it off or copy the regular expressions:


input = Regex.Replace(input, @"[\xC0-\xC5\xE0-\xE5]", "a");  //Replace with "a"
input = Regex.Replace(input, @"[\xC8-\xCB\xE8-\xEB]", "e");  //Replace with "e"
input = Regex.Replace(input, @"[\xCC-\xCF\xEC-\xEF]", "i");  //Replace with "i"
input = Regex.Replace(input, @"[\xD1\xF1]", "n");//Replace with "n"
input = Regex.Replace(input, @"[\xD2-\xD6\xF2-\xF6]", "o");  //Replace with "o"
input = Regex.Replace(input, @"[\xD9-\xDC\xF9-\xFC]", "u");  //Replace with "u"
input = Regex.Replace(input, @"[\xDD\xDF\xFF]", "y");//Replace with "y"

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

