Hi all,
My company is having trouble managing schema upgrades across the many
versions of our software. I imagine this is a common problem, and there are
probably some neat solutions that we don't know about.
For the last 10 years we have been writing bash shell scripts essentially
On 11/02/2011, at 9:59 AM, Thomas Kellerer wrote:
> Royce Ausburn wrote on 10.02.2011 22:38:
>> I'm really interested to hear how you guys manage schema upgrades in
>> the face of branches and upgrading from many different versions of
>> the database.
>
> We are
>
> So, 10.0 at 10057.
> 11.0 at 11023.
>
> Then 10.1 needs some fixes, so the db is bumped to 10058.
>
> Then, later, you can upgrade 10057 to 11023, but you can't get 10058 to 11023.
>
> Hmm... maybe you need smarter upgrade scripts? Would having logic in the
> script help? Something like:
>
>
>
> Don't the bash scripts get checked in to .../perforce/cvs/svn/git/...?
> Aren't they part of the resources of the project(s)?
Yep - they absolutely are. The issue is that there are multiple branches
*potentially* having new scripts committed. Fortunately it's rare, as the
release branches
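For what it's worth, the "smarter script logic" Thomas hints at usually boils
down to a version table plus guarded, single-step upgrade files. A minimal
sketch with invented names (an illustration, not our actual scripts):

CREATE TABLE schema_version (version integer NOT NULL);
INSERT INTO schema_version VALUES (10057);

-- upgrade_10057_to_10058.sql (hypothetical file): runs in one transaction
-- and refuses to apply against the wrong starting version.
BEGIN;
-- This UPDATE matches exactly one row only when the db is at 10057; the
-- driver script can check the reported row count and abort on zero.
UPDATE schema_version SET version = 10058 WHERE version = 10057;
-- ... the actual DDL for this step goes here ...
COMMIT;

The branch problem then becomes a graph of such steps: getting 10058 to 11023
needs either a dedicated step for that hop, or a chain of steps that both
branches share.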
Hi all,
Got an odd one.
test=# select version();
version
---
On 02/03/2011, at 2:16 PM, Andy Colson wrote:
> It's getting id1 from the parent table. (test2) You can use fields from the
> parent table in subselects.
>
> try this and it'll complain:
>
> select * from test2 where id1 in (select junk from test1) and charge=70;
Oh! Of course! What a fool I am.
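For anyone else who hits this: a column name used in a subselect that doesn't
exist in the inner table silently resolves against the outer query, so the IN
becomes a tautology instead of an error. A minimal reproduction (tables
invented for illustration):

CREATE TABLE test1 (val integer);
CREATE TABLE test2 (id1 integer, charge integer);
INSERT INTO test1 VALUES (999);
INSERT INTO test2 VALUES (1, 70);

-- test1 has no id1, so the inner id1 resolves to test2.id1 and the
-- condition is effectively "id1 IN (id1)": every row with a non-null id1
-- comes back, even though 999 matches nothing.
SELECT * FROM test2 WHERE id1 IN (SELECT id1 FROM test1);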
Hi all,
A few days ago one of our postgres (8.3.12) servers was a bit unhappy, and
someone decided to try a kill -9 on a backend process after a kill (TERM) was
ineffective. I've read many times in the past that a kill -9 can be pretty
hazardous to a postgres' health, and now it seems I get to
This might help you:
http://www.postgresql.org/docs/8.4/static/queries-with.html
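(That page covers both plain WITH and WITH RECURSIVE. As a taste, here is a
toy recursive walk over a tree-shaped table; the parent_id column is invented
for the example:)

WITH RECURSIVE subtree AS (
    SELECT id, parent_id FROM items WHERE id = 1   -- the root row
  UNION ALL
    SELECT i.id, i.parent_id
    FROM items i
    JOIN subtree s ON i.parent_id = s.id           -- children of rows found so far
)
SELECT * FROM subtree;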
On 24/08/2011, at 9:54 AM, Chris Hanks wrote:
> I have two tables:
>
> CREATE TABLE items
> (
> root_id integer NOT NULL,
> id serial NOT NULL,
> -- Other fields...
>
> CONSTRAINT items_pkey PRIMARY KEY (root_
> ...to avoid any query parsing overhead, and averaging the time over several
> thousand queries). I'm not sure why...?
I'm not sure, Chris - perhaps others on the mailing list can answer this?
>
>
>
> On Tue, Aug 23, 2011 at 8:14 PM, Royce Ausburn wrote:
>> This might
> I created a new view called "clients2" with the same member tables as clients
> and it works; I can query it.
>
> But due to the many hardcoded places that use the "clients" view, I have to
> keep a view named "clients". So I tried to drop the clients view, but cannot:
>
> #DROP view clients;
> ERROR: missin
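Without the rest of that error it's hard to be sure, but there are two usual
ways out, sketched here on the assumption that the replacement keeps the same
output columns:

-- Swap the definition in place without dropping (works only if the column
-- list is unchanged); the SELECT here is a hypothetical new definition:
CREATE OR REPLACE VIEW clients AS
    SELECT * FROM clients2;

-- Or drop dependent objects along with it. Destructive: check what
-- CASCADE would take down before running it.
-- DROP VIEW clients CASCADE;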
I'm in the process of testing out Postgres 9.0 for production use. I've been
using it for development on my mac, a build from EnterpriseDB. We've just
installed a 9.0.5 on an Ubuntu (Ubuntu 10.04.3 LTS) machine from a backport
from lucid. There's an existing 8.4.8 postgres install also on tha
On 30/09/2011, at 8:57 AM, Jason Long wrote:
> I thought I had read somewhere that Postgres could ignore a join if it
> was not necessary because there were no columns from the table or view
> selected in the query. Is this possible?
You might be thinking of this enhancement introduced in 9.0:
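That enhancement ("join removal", new in 9.0) only fires in a narrow case: a
LEFT JOIN whose inner side is joined on a provably unique column and whose
columns aren't referenced anywhere in the query. Roughly, with invented
tables:

CREATE TABLE customers (id integer PRIMARY KEY, name text);
CREATE TABLE orders (id serial PRIMARY KEY, customer_id integer);

-- customers is left-joined on its primary key and contributes no output
-- columns, so 9.0 and later can plan this as a scan of orders alone:
EXPLAIN SELECT o.id
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id;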
Hi all,
A customer's database has started whining about a busted block:
postgresql-8.4-main.log:2012-10-02 18:51:33 EST ERROR: invalid page header in
block 8429809 of relation base/807305056/950827614
postgresql-8.4-main.log:2012-10-02 18:56:52 EST ERROR: invalid page header in
block 8429809
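If it comes to repairing around it, the usual last-resort escape hatch (after
taking a file-level backup, and accepting that the rows on that page are
gone) is zero_damaged_pages; the table name below is hypothetical:

-- Superuser only: reads that hit a broken page header zero the page out
-- and warn, instead of raising an error.
SET zero_damaged_pages = on;
-- Re-read the damaged block so the zeroed page is written back, e.g. by
-- vacuuming the affected table:
VACUUM VERBOSE bigtable;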
G'day all,
We recently had a bit of a catastrophe when one of our postgres databases
opened too many files. It was a reasonably easy fix, but it did get me
thinking. Is there a rule of thumb for determining how many file descriptors
should be available to a postgres database/cluster?
I'd be h
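For the arithmetic, the knob on the PostgreSQL side is max_files_per_process,
a rough per-backend ceiling, so a starting point is max_connections times
that, plus headroom for everything else on the machine:

SHOW max_files_per_process;  -- 1000 by default, per server process
SHOW max_connections;
-- The OS limits (ulimit -n for the postgres user, fs.file-max overall)
-- should comfortably exceed the product of those two settings.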
On 26/10/2011, at 1:17 PM, Lee Hachadoorian wrote:
> I need some advice on storing/retrieving data in large rows. Invariably
> someone points out that very long rows are probably poorly normalized, but I
> have to deal with how to store a dataset which cannot be changed,
> specifically the ~23
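One pattern that comes up for very wide, read-mostly datasets like this is
collapsing the measure columns into an array and keeping only the keys
relational. A sketch with invented names (hstore is the other frequent
suggestion):

CREATE TABLE wide_row (
    geo_id text PRIMARY KEY,
    vals   double precision[]  -- one element per original wide column
);

-- Individual measures are then fetched by position:
SELECT vals[17] FROM wide_row WHERE geo_id = 'g001';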
I'm having trouble understanding why these two queries produce different
results:
test=# select (select random()) from generate_series(1,10); -- rows are the same
?column?
---
0.770797704812139
0.770797704812139
0.770797704812139
0.770797704812139
0.770797704812139
On 02/11/2011, at 2:16 PM, Tom Lane wrote:
> Royce Ausburn writes:
>> [ random() is evaluated only once in ]
>> test=# select (select random()) from generate_series(1,10); -- rows are the
>> same
>
>> I understand that it's likely an optimisation thing -- p
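The contrast, for anyone finding this in the archives: a volatile function
called directly in the select list is evaluated once per row, while an
uncorrelated scalar subselect is run once as an initplan and its value reused
for every row:

test=# select random() from generate_series(1,10);           -- ten different values
test=# select (select random()) from generate_series(1,10);  -- one value repeated ten times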