Here is my probably uncommon situation.
I have installed 4 different instances of postgresql-9.0.2 on the same
machine, on ports 5433, 5434, 5435, 5436.
On these instances I have split a huge database, dividing it by date
(from 1995 to 1998 on 5433, from 1999 to 2002 on 5434, and so on).
Hello list,
I'm currently thinking about a possibility to create a dynamic insert
statement inside a trigger function to put the data in the correct partition.
What am I talking about ?
I want to put data dependent on a timestamp column ("datetime") in separate
partitions. Therefore I created th
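The message is truncated here, but a dynamic routing trigger of that kind might look like the sketch below. The "data_YYYY" partition naming and the parent table name are assumptions for illustration, not from the original post:

```sql
-- Route each row to a yearly child table based on NEW.datetime.
CREATE OR REPLACE FUNCTION route_by_datetime() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    part text := 'data_' || to_char(NEW.datetime, 'YYYY');  -- hypothetical naming scheme
BEGIN
    -- EXECUTE ... USING passes the whole NEW row as a composite parameter.
    EXECUTE 'INSERT INTO ' || quote_ident(part) || ' SELECT ($1).*'
        USING NEW;
    RETURN NULL;  -- suppress the insert into the parent table
END
$$;

CREATE TRIGGER route_by_datetime BEFORE INSERT ON data
    FOR EACH ROW EXECUTE PROCEDURE route_by_datetime();
```

quote_ident() guards the constructed table name; the USING clause avoids having to quote every column value into the SQL string.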
Hi,
I am an MSc student in the department of Informatics and
Telecommunications of the University of Athens and as part of my
thesis I am examining some new cost models for DB optimizers. I have
successfully changed the optimizer of PostgreSQL in order to implement
these models, but I hav
Tom,
From your commit notes:
"This wasn't a problem before 9.0 because we didn't support FOR UPDATE
below the top query level..."
FWIW I had been using a sub-query FOR UPDATE in one of my key queries (one
that was called multiple times per second) and relied upon the FOR UPDATE to
avoid having
On Tuesday, February 08, 2011 10:41:15 am Lee Hughes wrote:
> From section 15.4 of the manual:
>
> "If you are upgrading from PostgreSQL "9.0.x", the new version can use
> your current data files so you should skip the backup and restore steps"
>
> Is 9.0beta4 considered a 9.0.x version, or do I
Hello.
I'm probably making some very basic error here, but I get
ERROR: record "new" is not assigned yet
The tuple structure of a not-yet-assigned record is indeterminate.
when I try this small example
create table foo(x int);
create or replace function trigger_foo() returns trigger language
On Thu, Feb 10, 2011 at 9:29 AM, A B wrote:
> Can someone help me spot the error? :-)
>
use a BEFORE INSERT trigger?
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Thanks for the suggestion, but
CREATE TRIGGER trigger_foo BEFORE INSERT ON foo for each row EXECUTE
PROCEDURE trigger_foo();
gives me the same error.
2011/2/10 Vick Khera :
> On Thu, Feb 10, 2011 at 9:29 AM, A B wrote:
>> Can someone help me spot the error? :-)
>>
>
> use a BEFORE INSERT trigg
On Thursday, February 10, 2011 6:29:58 am A B wrote:
> Hello.
>
> I'm probably doing some very basic error here, but I get
>
> ERROR: record "new" is not assigned yet
> The tuple structure of a not-yet-assigned record is indeterminate.
>
> when I try this small example
>
> create table foo(x
On Thu, Feb 10, 2011 at 9:38 AM, A B wrote:
> CREATE TRIGGER trigger_foo BEFORE INSERT ON foo for each row EXECUTE
> PROCEDURE trigger_foo();
>
> gives me the same error.
>
Maybe "NEW" needs to be all caps? Also, with the BEFORE trigger,
you'll need to RETURN NEW.
This trigger of mine works jus
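The quoted reply is cut off, but a minimal complete version of the thread's example might look like this (the function body is arbitrary, for illustration only; note that NEW is only assigned in FOR EACH ROW triggers, which is what the "record "new" is not assigned yet" error usually points at):

```sql
CREATE TABLE foo (x int);

CREATE OR REPLACE FUNCTION trigger_foo() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    NEW.x := coalesce(NEW.x, 0);  -- arbitrary body for illustration
    RETURN NEW;                   -- a BEFORE ... FOR EACH ROW trigger must return NEW
END
$$;

CREATE TRIGGER trigger_foo BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE trigger_foo();
```

If the trigger were created FOR EACH STATEMENT instead, any reference to NEW inside the function would raise exactly the error from the original post.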
I'm very embarrassed now.
There was another trigger that caused the problem. Now it works.
Thank you all for helping! :-)
2011/2/10 Adrian Klaver :
> On Thursday, February 10, 2011 6:29:58 am A B wrote:
>> Hello.
>>
>> I'm probably doing some very basic error here, but I get
>>
>> ERROR: record "
Thanks for this tip,
the german compound directory from
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ works fine.
I think the problem was the rudimentary support of hunspell dictionaries.
Thanks for your help and your great software!
Am 08.02.2011 11:34, schrieb Oleg Bartunov:
> Jens,
>
"David Johnston" writes:
>> From your commit notes:
> "This wasn't a problem before 9.0 because we didn't support FOR UPDATE
> below the top query level..."
> FWIW I had been using a sub-query FOR UPDATE in one of my key queries (one
> that was called multiple times per second) and relied upon
Hello all,
I'm using a COPY statement to load data into a PostGIS database. The issue I am
facing is that if I define fields with the REAL type, the COPY will only
preserve 4 decimals, which is not sufficient for my application.
Here are the commands I'm running and a sample of the results:
CREATE TA
--
Robert...
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of T ?apap?t???
> Sent: Thursday, February 10, 2011 2:18 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] GUC c
How did we get the duckbill platypus? Cross-pollination, of course, and
today is the last day to get your content in for PostgreSQL Conference
East, a conference that is cross-pollinating with MongoDB, one of those
"NoSQL" technologies. Now, we have experience, lots of experience on our
sid
I know that I have at least one instance of a varchar that is not valid UTF-8,
imported from a source with errors (AMA CPT files, actually) before PG's
checking was as stringent as it is today. Can anybody suggest a query to find
such values?
--
Scott Ribe
scott_r...@elevated-dev.com
http://w
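One hedged approach (a sketch assuming an SQL_ASCII database, with placeholder table and column names): wrap the encoding check in a plpgsql function that catches the conversion error, then select the rows where it fails:

```sql
-- Returns false for values whose bytes are not valid UTF-8.
-- convert_to(..., 'SQL_ASCII') gets the raw bytes without recoding;
-- convert_from(..., 'UTF8') raises "invalid byte sequence" on bad input.
CREATE OR REPLACE FUNCTION is_valid_utf8(s text) RETURNS boolean
LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    PERFORM convert_from(convert_to(s, 'SQL_ASCII'), 'UTF8');
    RETURN true;
EXCEPTION WHEN others THEN
    RETURN false;
END
$$;

SELECT ctid, mycol          -- mytable/mycol are placeholders
  FROM mytable
 WHERE NOT is_valid_utf8(mycol);
```

Catching WHEN others is deliberately broad here; in production you could narrow it to the specific SQLSTATE the conversion raises.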
On 10/02/2011 17:13, Samuel Gilbert wrote:
Hello all,
I'm using a COPY statement to load data into a PostGIS. The issue I am
facing is that if I define fields with the REAL type, the COPY will only
preserve 4 decimals which is not sufficient for my application.
If you use NUMERIC, you can
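To illustrate the precision difference (a generic sketch, not the poster's PostGIS schema): REAL is a 4-byte float with roughly 6 significant digits, while double precision and numeric keep more:

```sql
CREATE TABLE obs (
    v_real real,             -- 4-byte float, ~6 significant digits
    v_dp   double precision, -- 8-byte float, ~15 significant digits
    v_num  numeric           -- exact, arbitrary precision
);

INSERT INTO obs VALUES (45.123456789, 45.123456789, 45.123456789);

SELECT * FROM obs;
-- v_real comes back rounded; v_dp and v_num retain the extra digits
```

The same applies to COPY: the truncation happens because of the column type, not the COPY command itself.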
Hi all,
My company is having trouble managing how we upgrade schema changes across many
versions of our software. I imagine this is a common problem and there're
probably some neat solutions that we don't know about.
For the last 10 years we have been writing bash shell scripts essentially
On 2/10/2011 3:38 PM, Royce Ausburn wrote:
Hi all,
My company is having trouble managing how we upgrade schema changes across many
versions of our software. I imagine this is a common problem and there're
probably some neat solutions that we don't know about.
For the last 10 years we have be
I'm working on a project to convert a large database from SQL_ASCII to
UTF-8. I am using this procedure:
1) pg_dump the SQL_ASCII database to an SQL text file.
2) Run it through a small (efficient) C program that logs each line that
contains ANY "unclean" ASCII text.
3) Parse that log with a small p
On Thu, Feb 10, 2011 at 1:02 PM, Scott Ribe wrote:
> I know that I have at least one instance of a varchar that is not valid
> UTF-8, imported from a source with errors (AMA CPT files, actually) before
> PG's checking was as stringent as it is today. Can anybody suggest a query to
> find such v
On 2/10/2011 4:14 PM, Andy Colson wrote:
On 2/10/2011 3:38 PM, Royce Ausburn wrote:
Hi all,
My company is having trouble managing how we upgrade schema changes
across many versions of our software. I imagine this is a common
problem and there're probably some neat solutions that we don't know
a
We have this kickass solution we built at work called dbsteward that
just takes care of all of this for us, automatically. You just give
it the new version and the old version and it generates update statements
to feed into PG.
The reason I'm bringing this up is that we're working to release
dbs
On 2/10/2011 4:18 PM, Bill Moran wrote:
We have this kickass solution we built at work called dbsteward that
just takes care of all of this for us, automatically. You just give
it the new version and the old version and it generates update statements
to feed into PG.
The reason I'm bringing th
Top-posting is frowned upon by some (not me), but since Bill started it...
I for one will be waiting to see your dbsteward. How does it compare
functionally or stylistically with Ruby's migration tools (which I found
to be pretty cool and frustrating all in one go).
On 02/10/2011 03:18 PM, Bill
>
> If you are interested, I can email to you the C and Perl source.
>
> It runs like this:
>
> # time pg_restore /db-dumps/some_ascii_pgdump.bin | ./ascii-tester |
> ./bad-ascii-report.pl > unclean-ascii.rpt
http://www.ecoligames.com/~djenkins/pgsql/
Disclaimer: I offer NO warranty. Use at your
In response to Andy Colson :
> On 2/10/2011 4:18 PM, Bill Moran wrote:
> >
> > We have this kickass solution we built at work called dbsteward that
> > just takes care of all of this for us, automatically. You just give
> > it the new version and the old version and it generates update statements
Royce Ausburn wrote on 10.02.2011 22:38:
I'm really interested to hear how you guys manage schema upgrades in
the face of branches and upgrading from many different versions of
the database.
We are quite happy with Liquibase. You can simply run it against a database and tell it
to migrate it t
In response to Rob Sargent :
> Top-posting is frowned upon by some (not me), but since Bill started it...
Oops ... the weird thing is that I'm usually really anal about not top-
posting ...
> I for one will be waiting to see your dbsteward. How does it compare
> functionally or stylistically wi
Bill Moran wrote on 10.02.2011 23:59:
The overview:
You store your schema and data as XML (this is easy to migrate to, because
it includes a tool that makes the XML from a live database)
Keep your XML schema files in some RCS.
When it's time for a new deployment, you run the dbsteward tool agains
On 02/10/2011 03:59 PM, Bill Moran wrote:
> In response to Rob Sargent :
>
>> Top-posting is frowned upon by some (not me), but since Bill started it...
>
> Oops ... the weird thing is that I'm usually really anal about not top-
> posting ...
>
>> I for one will be waiting to see your dbstewar
On Thu, 10 Feb 2011 17:59:30 -0500, Bill Moran wrote:
The overview:
You store your schema and data as XML (this is easy to migrate to,
because it includes a tool that makes the XML from a live database)
We're doing a similar thing here except we're a Lisp shop so our
schema is defined as
In response to Thomas Kellerer :
> Bill Moran wrote on 10.02.2011 23:59:
> > The overview:
> > You store your schema and data as XML (this is easy to migrate to, because
> > it includes a tool that makes the XML from a live database)
> > Keep your XML schema files in some RCS.
> > When it's time f
In response to Rob Sargent :
>
> On 02/10/2011 03:59 PM, Bill Moran wrote:
> > In response to Rob Sargent :
> >> I for one will be waiting to see your dbsteward. How does it compare
> >> functionally or stylistically with Ruby's migration tools (which I found
> >> to be pretty cool and frustratin
Bill Moran wrote on 11.02.2011 00:37:
Anyway ... based on nothing more than a quick scan of their quickstart
page, here are the differences I see:
* Liquibase is dependent on you creating "changesets". I'm sure this
works, but we took a different approach with dbsteward. dbsteward
expects
On 02/10/2011 02:38 PM, Royce Ausburn wrote:
> Hi all,
>
> My company is having trouble managing how we upgrade schema changes across
> many versions of our software. I imagine this is a common problem and
> there're probably some neat solutions that we don't know about.
>
> For the last 1
On 02/10/2011 04:44 PM, Bill Moran wrote:
> In response to Rob Sargent :
>>
>> On 02/10/2011 03:59 PM, Bill Moran wrote:
>>> In response to Rob Sargent :
I for one will be waiting to see your dbsteward. How does it compare
functionally or stylistically with Ruby's migration tools (whic
On Thu, Feb 10, 2011 at 2:02 PM, Scott Ribe wrote:
> I know that I have at least one instance of a varchar that is not valid
> UTF-8, imported from a source with errors (AMA CPT files, actually) before
> PG's checking was as stringent as it is today. Can anybody suggest a query
> to find such valu
I am trying to write a plpgsql routine that will delete a range of
characters based on their octal or hexadecimal values. Something like
the 'tr' shell command will do:
cat file | tr -d '\177-\377'
Can't seem to figure this one out.
Pointers would be appreciated.
--
Until later, Geoffrey
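One way to get tr-like range deletion in SQL (a sketch; the table and column names are placeholders) is regexp_replace, since PostgreSQL's regular expressions accept \xHH escapes inside bracket expressions:

```sql
-- Delete every character in the range \x7F-\xFF, analogous to
-- tr -d '\177-\377' on the command line.
UPDATE mytable
   SET mycol = regexp_replace(mycol, '[\x7f-\xff]', '', 'g');
```

The 'g' flag removes every match in the string, not just the first one.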
On 11/02/2011, at 9:59 AM, Thomas Kellerer wrote:
> Royce Ausburn wrote on 10.02.2011 22:38:
>> I'm really interested to hear how you guys manage schema upgrades in
>> the face of branches and upgrading from many different versions of
>> the database.
>
> We are quite happy with Liquibase. You c
>
> So, 10.0 at 10057.
> 11.0 at 11023.
>
> then 10.1 needs some fixes so db is bumped to 10058.
>
> Then, later, you can upgrade 10057 to 11023, but you can't get 10058 to 11023.
>
> Humm... maybe you need smarter upgrade scripts? Would having logic in the
> script help? Something like:
>
>
>
> Don't the bash scripts get checked in to .../perforce/cvs/svn/git/...?
> Aren't they part of the resources of the project(s)?
Yep - they absolutely are. The issue is that there're multiple branches
*potentially* having new scripts committed. Fortunately it's rare as the
release branches
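The "smarter upgrade scripts" idea quoted above might be sketched like this: a wrapper that refuses to apply a script unless the database is at the exact version the script expects. The version numbers, table name, and psql invocation are all hypothetical, not from the original (elided) example:

```shell
#!/bin/sh
# Sketch of a version-checking upgrade wrapper.

# In real use this would query the database, e.g.:
#   psql -At -c "SELECT version FROM schema_version" mydb
# Stubbed here for illustration:
get_current_version() { echo 10057; }

EXPECTED=10057   # version this upgrade script starts from
TARGET=11023     # version it produces
CURRENT=$(get_current_version)

if [ "$CURRENT" = "$EXPECTED" ]; then
    echo "applying upgrade_${EXPECTED}_to_${TARGET}.sql"
    # psql -f "upgrade_${EXPECTED}_to_${TARGET}.sql" mydb
    # psql -c "UPDATE schema_version SET version = $TARGET" mydb
else
    echo "expected schema $EXPECTED, found $CURRENT; refusing to run" >&2
    exit 1
fi
```

This makes the 10058-vs-10057 branch problem explicit: a script built against 10057 simply refuses to run against 10058 instead of silently half-applying.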
On Thu, Feb 10, 2011 at 6:44 PM, Bill Moran wrote:
> dbsteward can do downgrades ... you just feed it the old schema and
> the new schema in reverse of how you'd do an upgrade ;)
>
> Oh, also, it allows us to do installation-specific overrides. We use
> this ONLY for DML for lookup lists where so
In response to Glenn Maynard :
> On Thu, Feb 10, 2011 at 6:44 PM, Bill Moran wrote:
>
> > dbsteward can do downgrades ... you just feed it the old schema and
> > the new schema in reverse of how you'd do an upgrade ;)
> >
> > Oh, also, it allows us to do installation-specific overrides. We use
>
On Fri, Feb 11, 2011 at 12:16 AM, Bill Moran wrote:
> The big caveat is that 99.9% of the database changes don't fall into those
> "nontrivial" categories, and dbsteward makes those 99.9% of the changes
> easy to do, reliable to reproduce, and easy to track.
>
My experience is maybe more like 95%
On 10 Feb 2011, at 23:59, Bill Moran wrote:
> The overview:
> You store your schema and data as XML (this is easy to migrate to, because
> it includes a tool that makes the XML from a live database)
> Keep your XML schema files in some RCS.
That reminds me of something I've been wondering about -