On Mon, Jul 27, 2009 at 10:51:00PM -0700, Nick wrote:
> Is it possible to join two tables by the nearest date? For example how
> could I find out where the user was on their birthday?
DISTINCT ON[1] is normally the easiest way:
SELECT DISTINCT ON (u.id) u.id, l.id, l.created
FROM users u, use
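The query above is cut off by the archive; a sketch of how a DISTINCT ON nearest-date join typically continues, assuming the users and users_locations tables quoted later in the thread (and that created is a date column, so the subtraction yields a day count):

```sql
-- For each user, keep the single users_locations row whose created date
-- is nearest to that user's birthday. DISTINCT ON (u.id) keeps the first
-- row per user in ORDER BY order, so the nearest date sorts first.
SELECT DISTINCT ON (u.id) u.id, l.id AS location_id, l.created
FROM users u
JOIN users_locations l ON l.user_id = u.id
ORDER BY u.id, abs(l.created - u.birthday);
```

If created were a timestamp instead of a date, the subtraction would yield an interval and the ordering expression would need adjusting.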
In response to Nick :
> Is it possible to join two tables by the nearest date? For example how
> could I find out where the user was on their birthday?
>
> users (id, name, birthday)
> 1 | one | 2009-07-27
> 2 | two | 2009-07-28
> 3 | three | 2009-07-29
> 4 | four | 2009-07-30
> 5 | five | 2009-07-31

Is it possible to join two tables by the nearest date? For example how
could I find out where the user was on their birthday?
users (id, name, birthday)
1 | one | 2009-07-27
2 | two | 2009-07-28
3 | three | 2009-07-29
4 | four | 2009-07-30
5 | five | 2009-07-31
users_locations (id, user_id, creat
On Mon, 2009-07-27 at 16:53 -0700, Christophe Pettus wrote:
> On Linux, is there an advantage either way to using the RPMs as
> opposed to building from source?
In terms of performance and functionality, there is no difference. On
the other hand, many people, including me, find precompiled packa
Novice here :). I have PostgreSQL 8.2 installed on a single board computer
running Windows XP Embedded on a Compact Flash drive - 2 databases with no
more than 2000 rows. After 10 power cycles spaced 6 mins apart, I noticed the
postgres.exe processes were no longer running. I located the log file (see below)
Tim Uckun writes:
> What is the rationale for not giving the group any permissions at all?
On lots of systems, giving group permissions is nearly as bad as giving
world permissions (eg, all the users might be in a "users" group).
So we don't do it by default. If you want to poke holes in the sec
Robert James writes:
> Hi. I noticed that when clients (both psql and pgAdmin) disconnect or
> cancel, queries are often still running on the server. A few questions:
> 1) Is there a way to reconnect and get the results?
No.
> 2) Is there a way to tell postgres to automatically stop all querie
On Mon, 2009-07-27 at 21:05 -0400, Robert James wrote:
> 1) Introduction to Database Systems
> http://www.amazon.com/Introduction-Database-Systems-Kannan-Swamynathan/dp/B001BVYKY4/ref=sr_1_5?ie=UTF8&s=books&qid=1248742811&sr=1-5
>
> and
> 2) Database in Depth: Relational Theory for Practitioners
>
Thanks for all the good replies (both on and off list). It seems the
consensus is for me to read Christopher Date. I found two relevant Date
books:
1) Introduction to Database Systems
http://www.amazon.com/Introduction-Database-Systems-Kannan-Swamynathan/dp/B001BVYKY4/ref=sr_1_5?ie=UTF8&s=books&q
* Janet Jacobsen (jsjacob...@lbl.gov) wrote:
> If they are going to spend 95% of their time querying the
> records that meet the 'good' criteria, what are the good
> strategies for ensuring good performance for those queries?
> (1) Should I partition the table into two partitions based on
> the val
Hi. I noticed that when clients (both psql and pgAdmin) disconnect or
cancel, queries are often still running on the server. A few questions:
1) Is there a way to reconnect and get the results?
2) Is there a way to tell postgres to automatically stop all queries when
the client who queried them d
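There is no built-in "cancel on disconnect" in servers of that era; the usual manual remedy is to cancel the orphaned backends from another session. A sketch against an 8.x-era pg_stat_activity (the duration filter is a hypothetical example):

```sql
-- Cancel still-running queries left behind by disconnected clients.
-- On 8.x the columns are procpid and current_query; on 9.2+ they were
-- renamed to pid and query. Idle backends show '<IDLE>' on 8.x.
SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
  AND query_start < now() - interval '1 hour';
```

pg_cancel_backend() only cancels the running statement; pg_terminate_backend() (added in 8.4) kills the whole session.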
On Mon, Jul 27, 2009 at 8:24 PM, Greg Stark wrote:
> I think it would be even more interesting to have partial indexes --
> ie specified with "WHERE rbscore < cutoff".
Yes- that's what I actually meant. Word got scrambled between brain
and fingers...
--
- David T. Wilson
david.t.wil...@gmail.co
On Tue, Jul 28, 2009 at 1:08 AM, David Wilson wrote:
> On Mon, Jul 27, 2009 at 7:52 PM, Janet Jacobsen wrote:
>
>> Can you suggest other strategies?
>
> Something that might be easier to play with is to create a (or
> several, to speed up other queries) functional index on the comparison
> between
On Mon, Jul 27, 2009 at 7:52 PM, Janet Jacobsen wrote:
> Can you suggest other strategies?
Something that might be easier to play with is to create a (or
several, to speed up other queries) functional index on the comparison
between rbscore and the cutoff. It won't buy you anything on seq
scans,
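The partial-index idea from Greg Stark's message can be sketched like this (the table name and cutoff value are hypothetical stand-ins for the poster's schema):

```sql
-- Index only the "good" rows the experimentalists query 95% of the time.
-- The planner can use this index whenever a query's WHERE clause implies
-- rbscore > 0.5. Table name and cutoff are made up for illustration.
CREATE INDEX candidates_good_rbscore_idx
    ON candidates (rbscore)
    WHERE rbscore > 0.5;
```

Because the index covers only the matching subset, it stays much smaller than a full-column index on a 30 M row table.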
On Mon, 2009-07-27 at 19:44 -0400, Tom Lane wrote:
> > because RH really can't be used as a production PostgreSQL server (if
> > date based data is important)
>
> I have open bugs about the lack of in-place upgrade. I have never once
> heard a customer complain about FP timestamps. So your posi
I am trying to monitor replication lag using zabbix. I have written a
simple script in ruby to get the lag it goes like this.
require 'date'
require 'yaml'
y = YAML.load `/usr/lib/postgresql/8.3/bin/pg_controldata
/var/lib/postgresql/8.3/main`
last_checkpoint = DateTime.parse( y['Time of latest checkpoint'] )
I'm moving from a long time in BSD-land to using Linux. I've always
been in the habit of building PostgreSQL from the source tarballs. On
Linux, is there an advantage either way to using the RPMs as opposed
to building from source? Thanks!
--
-- Christophe Pettus
x...@thebuild.com
Hi. We have a table with 30 M records that is growing by
about 100 K records per day.
The experimentalists, whose data are in the table, have
decided that they will focus on the records for which the
value of one field, rbscore, is greater than a cut-off.
However, they want to continue to store a
"Joshua D. Drake" writes:
> On Mon, 2009-07-27 at 19:16 -0400, Tom Lane wrote:
>> Oh? You think RH/Cent is going to change that default now? Think again.
> I thought they would get around to changing it now.
"They" is me, and it's not changing. I'm not blowing a chance at
in-place upgrade to
On Mon, 2009-07-27 at 19:16 -0400, Tom Lane wrote:
> "Joshua D. Drake" writes:
> > It depends, 8.3 and 8.4 are not compatible by default (because of
> > --integer-datetimes). So, yeah if you are running Debian/Ubuntu but if
> > you are running Cent/RH with the defaults, pg_migrator isn't going to
"Joshua D. Drake" writes:
> It depends, 8.3 and 8.4 are not compatible by default (because of
> --integer-datetimes). So, yeah if you are running Debian/Ubuntu but if
> you are running Cent/RH with the defaults, pg_migrator isn't going to
> work unless you compile Pg from source.
Oh? You think R
Greetings:
Does anyone know if a function written in C and linked into the backend in a
shared library with a statically declared structure, maintain that data for
the life of the backend process such that, when the function is called again,
the structure data is intact?
Thanks for any insight
> This is good. Since 8.2 VACUUM age is done per table instead of per
> database. This should solve most of your problems.
> On older versions you need to do a database-wide vacuum (note this is
> not vacuumdb -a) once every billion transactions.
> You won't lose data, but you need to do a DB wide
On Mon, Jul 27, 2009 at 02:21:02PM -0600, Keaton Adams wrote:
> We are upgrading to 8.3.7 in September, if that helps the situation at all.
This is good. Since 8.2 VACUUM age is done per table instead of per
database. This should solve most of your problems.
> So my questions are:
>
> 1. Will
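Since 8.2 tracks freeze age per table rather than per database, the remaining wraparound headroom can be checked directly from the catalogs; a sketch:

```sql
-- How far each database is from the transaction-ID wraparound limit.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Per table, to find the relations most in need of a vacuum:
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```

Vacuuming the handful of tables with the highest xid_age is far cheaper than a 50-hour database-wide vacuumdb run.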
On Mon, 2009-07-27 at 22:48 +0200, Thomas Kellerer wrote:
> Tory M Blue wrote on 27.07.2009 22:45:
> > And those that have multiple TB's of data, weee another dump and
> > restore upgrade (pt!)
>
> Isn't that what pg_migrator is for?
It depends, 8.3 and 8.4 are not compatible by default (
On Mon, Jul 27, 2009 at 2:48 PM, Thomas Kellerer wrote:
> Tory M Blue wrote on 27.07.2009 22:45:
>>
>> And those that have multiple TB's of data, weee another dump and
>> restore upgrade (pt!)
>
> Isn't that what pg_migrator is for?
I use slony for such things, downtime = zero (ok a few se
On Mon, Jul 27, 2009 at 4:45 PM, Tory M Blue wrote:
>
>
> And those that have multiple TB's of data, weee another dump and
> restore upgrade (pt!)
pg_migrator doesn't need to dump -> restore, it can do an in-place upgrade
of the datafiles for you.
http://archives.postgresql.org/pgsql
Tory M Blue wrote on 27.07.2009 22:45:
And those that have multiple TB's of data, weee another dump and
restore upgrade (pt!)
Isn't that what pg_migrator is for?
On Mon, Jul 27, 2009 at 12:51 PM, Phoenix Kiula wrote:
> Just looking for experiences of people. Are people already using 8.4
> in serious live hosting environments? Thanks.
>
Wait.
8.3 is running fine and dandy. There are lots of decent-sized changes in
8.4, with fixes still awaiting release. So wait.
And those that
We are upgrading to 8.3.7 in September, if that helps the situation at all.
I just want to make sure I thoroughly understand how these two statements work
together:
"The price is this maintenance requirement: every table in the database must be
vacuumed at least once every billion transactions.
On Tue, 2009-07-28 at 03:51 +0800, Phoenix Kiula wrote:
> Are people already using 8.4 in serious live hosting environments?
Not yet. There are lots of (important) fixes in CVS which are waiting
for 8.4.1. For production, I'd wait for a while.
--
Devrim GÜNDÜZ, RHCE
Command Prompt - http://www.Co
Greetings,
The video recordings of the sessions for PG Day SJC '09 are now
available:
Version 8.4: Easier to Administer than Ever / Josh Berkus / PostgreSQL
Experts
http://media.postgresql.org/pgday-sjc-09/pgday-sjc-09-easier.mov
Rapid Upgrades with pg_Migrator / Bruce Momjian / Enterpri
Just looking for experiences of people. Are people already using 8.4
in serious live hosting environments? Thanks.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
In response to Keaton Adams :
> We are to a point in size and utilization of a set of our Postgres 8.1.17
> databases that a vacuumdb -a -z -U postgres is still running after 50 hours
> and we have to kill it off because the additional i/o is causing queries to
> stack up. We have archived off
On Sun, 2009-07-26 at 15:36 -0400, Robert James wrote:
> I'm working on improving my background database theory, to aid in
> practice. I've found learning relational algebra to be very helpful.
> One thing which relational algebra doesn't cover is aggregate
> functions. Can anyone recommend any
Alexey Klyukin writes:
> On Jul 27, 2009, at 6:52 PM, Michael Glaesemann wrote:
>> I don't have a solution, but am curious what your use case is for
>> timetz (as opposed to timestamptz).
> I'm writing a custom trigger function that has to compare values of
> time* types and make some actions
On Mon, 2009-07-27 at 11:50 -0400, Tom Lane wrote:
> Scott Marlowe writes:
> > On Mon, Jul 27, 2009 at 8:48 AM, Jamie
> > Lawrence-Jenner wrote:
> >> Our update statements are as follows
> >>
> >> Update table set col1=x,col2=y where pkid=1;
> >> Update table set col1=x,col2=y where pkid=2;
> >>
I would like to combine multiple databases (same schema) into one
master db. Does anyone know how I can reconcile all my primary and
foreign keys to maintain referential integrity.
Many thanks,
Bob
On Jul 27, 2009, at 6:52 PM, Michael Glaesemann wrote:
On Jul 27, 2009, at 10:54 , Alexey Klyukin wrote:
Hello,
I was looking for a way to get the difference (interval) between 2
timetz values, i.e.:
I don't have a solution, but am curious what your use case is for
timetz (as opposed
Hello
2009/7/27 Saleem EDAH-TALLY :
> Hello,
>
> Is there a way to run a pl/pgsql automatically at server startup ?
No - you can only do it by modifying the startup scripts.
>
> Is there a way to run a pl/pgsql function with an infinite loop as a daemon
> ?
An infinite loop is possible, but you probably need orafc
Saleem EDAH-TALLY wrote:
Is there a way to run a pl/pgsql automatically at server startup ?
in your postgres startup script launch a session with `psql ... -c "some
sql commands"` or `psql ... -f somescript.sql` ...
Is there a way to run a pl/pgsql function with an infinite loop as a
daemo
Hello,
Is there a way to run a pl/pgsql automatically at server startup ?
Is there a way to run a pl/pgsql function with an infinite loop as a daemon ?
Is there a way to start a pl/pgsql function that would persist after the user
session has closed ?
Is there a way for an unprivileged user to
On Jul 27, 2009, at 10:54 , Alexey Klyukin wrote:
Hello,
I was looking for a way to get the difference (interval) between 2
timetz values, i.e.:
I don't have a solution, but am curious what your use case is for
timetz (as opposed to timestamptz).
Michael Glaesemann
grzm seespotcode net
We are to a point in size and utilization of a set of our Postgres 8.1.17
databases that a vacuumdb -a -z -U postgres is still running after 50 hours and
we have to kill it off because the additional i/o is causing queries to stack
up. We have archived off as much data as possible (100+ GB) and
Scott Marlowe writes:
> On Mon, Jul 27, 2009 at 8:48 AM, Jamie
> Lawrence-Jenner wrote:
>> Our update statements are as follows
>>
>> Update table set col1=x,col2=y where pkid=1;
>> Update table set col1=x,col2=y where pkid=2;
>> Update table set col1=x,col2=y where pkid=3;
>>
>> Very simple an
On Mon, Jul 27, 2009 at 8:48 AM, Jamie
Lawrence-Jenner wrote:
> Hi There
>
> Our update statements are as follows
>
> Update table set col1=x,col2=y where pkid=1;
> Update table set col1=x,col2=y where pkid=2;
> Update table set col1=x,col2=y where pkid=3;
>
> Very simple and straight forward. Som
Hello again,
Le 27/07/09 16:48, Jamie Lawrence-Jenner a écrit :
> Hi There
>
> Our update statements are as follows
>
> Update table set col1=x,col2=y where pkid=1;
> Update table set col1=x,col2=y where pkid=2;
> Update table set col1=x,col2=y where pkid=3;
>
> Very simple and straight forwar
Hi There
Our update statements are as follows
Update table set col1=x,col2=y where pkid=1;
Update table set col1=x,col2=y where pkid=2;
Update table set col1=x,col2=y where pkid=3;
Very simple and straightforward. Sometimes there could be as many as 50
update statements to process.
Many thank
Hello,
I was looking for a way to get the difference (interval) between 2
timetz values, i.e.:
postgres=# select '2:45+7'::timetz - '2:44+2'::timetz;
ERROR: operator does not exist: time with time zone - time with time
zone
LINE 1: select '2:45+7'::timetz - '2:44+2'::timetz;
I'd expect t
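One workaround for the missing timetz subtraction operator is to anchor both values to an arbitrary common date, turning them into timestamptz values, which can be subtracted (a sketch; any fixed date works):

```sql
-- date + timetz yields timestamptz, and timestamptz subtraction is
-- defined, so the time-zone offsets are correctly accounted for.
SELECT ('2000-01-01'::date + '2:45+7'::timetz)
     - ('2000-01-01'::date + '2:44+2'::timetz) AS diff;
```

The result is an interval reflecting the true difference between the two zone-qualified times.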
On Mon, Jul 27, 2009 at 07:31:37AM -0600, Scott Marlowe wrote:
> On Mon, Jul 27, 2009 at 2:10 AM, Jamie
> Lawrence-Jenner wrote:
> > Do 1 pass to retrieve the 5 primary keys, then update all rows in parallel
>
> I would do 5 passes. Better to have one update statement to reduce bloat.
You could
Hello,
Le 27/07/09 15:07, Raymond O'Donnell a écrit :
> On 27/07/2009 09:10, Jamie Lawrence-Jenner wrote:
>
>> Apart from saving on the overhead of having to open up 5 separate
>> connections, what are the benefits to passing in multiple updates in one
>> statement?
>
> If you do them all within
On Mon, Jul 27, 2009 at 2:10 AM, Jamie
Lawrence-Jenner wrote:
> Hi there
>
> We have a function which runs a set of update clauses and we are considering
> putting all the update clauses into one statement.
>
> I would like to understand how postgres handles multiple updates. If we were
> to send 5
On 27/07/2009 09:10, Jamie Lawrence-Jenner wrote:
> Apart from saving on the overhead of having to open up 5 separate
> connections, what are the benefits to passing in multiple updates in one
> statement?
If you do them all within one transaction -
begin;
update
update...
...
commit;
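Beyond wrapping them in one transaction, the 50 single-row updates can also be collapsed into a single statement with a VALUES list (supported since 8.2). A sketch; the table and column names follow the thread's example, and the x/y values are placeholders:

```sql
-- One round trip and one statement instead of 50:
-- join the target table against an inline VALUES list keyed by pkid.
UPDATE mytable AS t
SET col1 = v.col1,
    col2 = v.col2
FROM (VALUES (1, 'x1', 'y1'),
             (2, 'x2', 'y2'),
             (3, 'x3', 'y3')) AS v(pkid, col1, col2)
WHERE t.pkid = v.pkid;
```

This touches each row once, keeps the whole batch atomic, and avoids per-statement parse/plan overhead.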
Hi there
We have a function which runs a set of update clauses and we are considering
putting all the update clauses into one statement.
I would like to understand how postgres handles multiple updates. If we were
to send 5 update statements in one sql statement to the db would it:
Do 5
> What first post? The only thing I can find is a reference in a message
> by you from yesterday, to a two-year old post that you claim is about
> the same problem. Though it's possible that it is the same problem,
> you don't provide any data to back that up.
Strange - you can see the full
> postgres collect all necessary stats. Maybe an implicit analyze is
> necessary?
Should be: "explicit analyze".
> > > BUT I found the real cause of my problem - the "fk2" field from my
> > > example had not only an index, but it was also a foreign key to
> > > another table.
> > That seems unlik
Alvaro Herrera-7 wrote:
>
> This is why Tom was suggesting you to increase wal_buffers. Did
> you try that?
>
Thanks for the explanation. I will try increasing the wal_buffers.
Unfortunately this is on a system I can't restart for the next few days.
Tomer
--
View this message in context:
APseudoUtopia wrote:
> I'm having some problems when inserting special characters into a
> column. Here's the table:
>
> --
> Table "public.users_history_ip"
>Column |Type |