In response to tomrevam :
>
>
>
> Scott Marlowe-2 wrote:
> >
> > Just wondering, what version of pgsql are you running? I noticed a
> > lot less degradation from heavily updated tables when I went to 8.3
> > and set the fill % for tables / indexes to 90% or so. If you're
> > running an older
Scott Marlowe-2 wrote:
>
> Just wondering, what version of pgsql are you running? I noticed a
> lot less degradation from heavily updated tables when I went to 8.3
> and set the fill % for tables / indexes to 90% or so. If you're
> running an older version, the upgrade to 8.3 may well be wort
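(For reference, setting the fill factor Scott mentions looks roughly like this on 8.2+; table and index names here are hypothetical:)
ALTER TABLE heavily_updated SET (fillfactor = 90);
ALTER INDEX heavily_updated_pkey SET (fillfactor = 90);
-- new pages are then filled only to 90%, leaving room for updates
-- (8.3's HOT can keep updated rows on the same page); existing data is
-- only repacked by a rewrite such as REINDEX or CLUSTER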
Bill Moran wrote:
>
> The OP did mention that he's using autovac, which will take care of
> both vacuum and analyze for him. However, he didn't provide his
> autovac config, and it happens at times that the defaults are not
> aggressive enough to keep a table well-maintained.
>
Here are my a
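(For anyone following along, a quick way to check whether autovacuum is actually keeping up with a given table on 8.2 or later; the table name is hypothetical:)
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'busy_table';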
On Tue, Sep 29, 2009 at 5:23 AM, tomrevam wrote:
>
> I have a table with 5 million rows. 10 inserts and deletes are performed on
> this table every second.
> The table has indexes on the columns I use to query it, and the query is
> returning about a 1000 rows. Initially the query takes a very sho
On Tue, Sep 29, 2009 at 3:56 PM, Andy Colson wrote:
> Unless I'm mistaken, if AutoCommit is enabled, then each statement will be
> committed for you. The commit() and the rollback() are both useless.
>
To clarify, the DBI driver turns off AutoCommit in postgres when
begin_work() is called. It does
On Tue, Sep 29, 2009 at 9:48 AM, Bill Moran wrote:
> There has (over the last few years) been a lot of speculation from people
> who think that indexes may suffer performance degradation under some
> workloads. I've yet to see any actual evidence.
>
Just last week I reindexed a 70+ million row t
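(The command in question is simply the following; the table name is hypothetical:)
REINDEX TABLE big_table;  -- rebuilds every index on the table, blocking writes while it runs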
On Sep 29, 2009, at 4:22 PM, Matt Friedman wrote:
Anyone have thoughts on decent hosting companies for this sort of
thing?
I've had good experiences with Slicehost (http://slicehost.com/).
They're a VPS provider, so you'll need to do some sysadmin-y stuff,
but it does have the advantage t
On 30/09/2009 00:22, Matt Friedman wrote:
> I'm trying to migrate a site to a new hosting company. The backend
> uses postgresql 8 and php.
>
> Anyone have thoughts on decent hosting companies for this sort of
> thing? I'm just looking at shared hosting as this isn't a resource
> intensive site.
Hi,
I'm trying to migrate a site to a new hosting company. The backend
uses postgresql 8 and php.
Anyone have thoughts on decent hosting companies for this sort of
thing? I'm just looking at shared hosting as this isn't a resource
intensive site.
Thanks,
Matt
On Tue, Sep 29, 2009 at 12:17:51PM -0400, Tom Lane wrote:
> Sam Mason writes:
> > On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
> >> I think a big reason is also that the client can stream the data without
> >> waiting for a network round trip ack on every statement.
>
> > I don'
On Tue, Sep 29, 2009 at 4:47 PM, Greg Smith wrote:
> P.S. On your write-heavy tests, increasing checkpoint_segments a lot should
> improve overall performance, if you re-test at some point.
Just wanted to add that in order to really test a db, you need a
benchmark that runs a lot longer than a fe
On Tue, 29 Sep 2009, Louis Fridkis wrote:
When I check the WAL file directories I find that all is in order. The file
0004.history does not exist, but it is not supposed to exist. The file
000300EF00A6 is not in the backup archive (archive_log) because
it had not been moved there
On Tue, Sep 29, 2009 at 09:11:19AM -0700, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason wrote:
> > I'm pretty sure you can send multiple statements in a
> > single round trip. libpq is defined to work in such cases anyway:
> >
> > http://www.postgresql.org/docs/current/static/li
On Tue, Sep 29, 2009 at 06:30:42PM +0200, Pavel Stehule wrote:
> 2009/9/29 Sam Mason :
> > I may have got that wrong somewhere else.
>
> I'm afraid this technique is very buggy. You need an unpacked serialised
> record.
Hum, I'm not sure what an "unpacked serialised record" is or why I'd
need one.
On Tue, 29 Sep 2009, György Vilmos wrote:
I've done a benchmark of recent versions of PostgreSQL's last five major
releases to see, how performance has changed during the past years from
version to version.
Your comments suggest V8.4 moves backwards as far as performance goes,
which is a bit
Kevin Kempter writes:
> Here's my scenario:
> 1) we setup a db on a local server and we created an external tablespace on a
> san. We loaded about 400GB in the san tablespace.
> 2) we had lots of HW issues, subsequently the server was re-purposed in an
> emergency scenario
> 3) now we want th
Hi all;
Here's my scenario:
1) we setup a db on a local server and we created an external tablespace on a
san. We loaded about 400GB in the san tablespace.
2) we had lots of HW issues, subsequently the server was re-purposed in an
emergency scenario
3) now we want the data back from the san h
I am testing PITR, following the instructions in "23.3.3. Recovering with
an On-line Backup." In step 9 I inspect the database and find that it is
working perfectly. All the data from the original is present and I am able
to create a new table and insert rows. The problem is that there are errors
CG writes:
> I use a custom datatype that's a good candidate for being included in the
> family of data types that fit in the "anynonarray" and "anyelement"
> categories. How can I get PostgreSQL to include that data type when it is
> considering selecting functions and operators that take the
Vick Khera wrote:
I'm running Pg 8.3.7 on FreeBSD 7.2.
I have some code in Perl that does a bunch of inserts and updates with
all constraints deferred. On occasion, one of the FK's gets violated
and the transaction commit fails.
I trap this with code like this:
unless ($dbh->commit()) {
I use a custom datatype that's a good candidate for being included in the
family of data types that fit in the "anynonarray" and "anyelement" categories.
How can I get PostgreSQL to include that data type when it is considering
selecting functions and operators that take the polymorphic type
On Tue, Sep 29, 2009 at 1:26 PM, Vick Khera wrote:
> I'm running Pg 8.3.7 on FreeBSD 7.2.
>
> I have some code in Perl that does a bunch of inserts and updates with
> all constraints deferred. On occasion, one of the FK's gets violated
> and the transaction commit fails.
>
> I trap this with code
On Tue, Sep 29, 2009 at 1:06 PM, György Vilmos wrote:
> Hello,
>
> I've done a benchmark of recent versions of PostgreSQL's last five major
> releases to see, how performance has changed during the past years from
> version to version.
> You can find the article here:
> http://suckit.blog.hu/2009/
Devrim GÜNDÜZ wrote
but yeah, trying to build it on a modern Linux system looks
a bit shaky. If you do go this route, I'd recommend making sure it
passes its regression tests before you risk letting it touch your real
data.
geometry and horology tests are failing here. regression.diffs i
Devrim GÜNDÜZ writes:
> On Tue, 2009-09-29 at 13:36 -0400, Tom Lane wrote:
>> You might be able to make it go by adding #define _GNU_SOURCE 1
>> in src/include/port/linux.h, and adding -fno-strict-aliasing -fwrapv
>> to CFLAGS,
> Yeah, these worked. I could build 7.2.8 on my
InterRob writes:
> Possibly this has been posted on this list already before, but I got the
> following error when opening the properties dialog in pgAdmin III:
> ERROR: relation "pg_autovacuum" does not exist
You need a newer pgAdmin version.
regards, tom lane
Devrim GÜNDÜZ writes:
> On Tue, 2009-09-29 at 10:03 -0700, John R Pierce wrote:
>> what version gcc is that?
> 4.4.1 . It also fails at the same place with GCC 3.4.6 (compat-gcc-34)
It's not the compiler version that's at issue. It's that the system
headers in F-11 don't de
Hi,
On Tue, 2009-09-29 at 13:36 -0400, Tom Lane wrote:
> You might be able to make it go by adding #define _GNU_SOURCE 1
> in src/include/port/linux.h, and adding -fno-strict-aliasing -fwrapv
> to CFLAGS,
Yeah, these worked. I could build 7.2.8 on my F-11 box.
> but yeah, trying to build it on
Dear List,
Possibly this has been posted on this list already before, but I got the
following error when opening the properties dialog in pgAdmin III:
ERROR: relation "pg_autovacuum" does not exist
LINE 1: SELECT * FROM pg_autovacuum WHERE vacrelid=17241::oid
This is a freshly installed PostgreSQ
Devrim GÜNDÜZ writes:
> On Tue, 2009-09-29 at 12:50 -0400, Tom Lane wrote:
>> I think the problems you had were probably with contrib stuff.
> Could be -- but this is what I got with 7.2.8:
[ pokes around ... ] Wow, I had forgotten exactly how frickin' ancient
7.2 is :-(.
I'm running Pg 8.3.7 on FreeBSD 7.2.
I have some code in Perl that does a bunch of inserts and updates with
all constraints deferred. On occasion, one of the FK's gets violated
and the transaction commit fails.
I trap this with code like this:
unless ($dbh->commit()) {
warn "commit fa
On Tue, 2009-09-29 at 10:03 -0700, John R Pierce wrote:
> what version gcc is that?
4.4.1. It also fails at the same place with GCC 3.4.6 (compat-gcc-34).
Regards,
--
Devrim GÜNDÜZ, RHCE
Command Prompt - http://www.CommandPrompt.com
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.
On Tue, Sep 29, 2009 at 10:49 AM, Reid Thompson wrote:
> On Tue, 2009-09-29 at 07:54 -0400, Merlin Moncure wrote:
>
>> you are missing some quotes in there. also, don't use 'values', use
>> select. see my example above:
>> execute 'insert into foo_something select (''' || new::text || '''::foo).
Devrim GÜNDÜZ wrote:
Could be -- but this is what I got with 7.2.8:
make[3]: Entering directory
`/home/devrim/rpm/BUILD/postgresql-7.2.8/src/backend/libpq'
gcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations
-I../../../src/include -c -o hba.o hba.c
hba.c: In function ‘ident_unix’:
hba.
Devrim GÜNDÜZ wrote:
Also, however, I bet 7.2 cannot be compiled on Fedora 11. The oldest PG
version that I could build on Fedora 11 was 8.0, IIRC. Apart from these,
dump/restore from 7.2 to 8.3 will probably not work.
Oh, that's going to be fun. FYI, the 7.2 sources are here...
ftp://ftp-
Devrim GÜNDÜZ writes:
> On Tue, 2009-09-29 at 10:18 -0600, Kelly Jones wrote:
>> Can I upgrade the data format w/o installing Postgres 7.2 (my
>> "data/base/1/PG_VERSION" file says "7.2")?
> You need to install 7.2 from sources, and start the server, and take the
> dump using
On Tue, 2009-09-29 at 12:50 -0400, Tom Lane wrote:
>
> > Also, however, I bet 7.2 cannot be compiled on Fedora 11. The oldest
> > PG version that I could build on Fedora 11 was 8.0, IIRC.
>
> I think the problems you had were probably with contrib stuff.
> Building the core server without any ext
Kelly Jones wrote:
I recently upgraded PostgreSQL, and now this happens:
# /etc/init.d/postgresql start
An old version of the database format was found.
You need to upgrade the data format before using PostgreSQL.
See /usr/share/doc/postgresql-8.3.8/README.rpm-dist for more information.
Unfort
2009/9/29 Sam Mason :
> On Tue, Sep 29, 2009 at 05:42:37PM +0200, Pavel Stehule wrote:
>> you cannot use double quotes. It's not php.
>
> Normally yes, but *inside* literals you do indeed want double quotes.
>
>
> I think the OP wants to be using quote_literal here. I.e. instead of:
>
> execute '
On Tue, 2009-09-29 at 10:18 -0600, Kelly Jones wrote:
> An old version of the database format was found.
> You need to upgrade the data format before using PostgreSQL.
> See /usr/share/doc/postgresql-8.3.8/README.rpm-dist for more
> information.
>
> Unfortunately, I upgraded due to a hardware fail
2009/9/29 Kelly Jones
> I recently upgraded PostgreSQL, and now this happens:
>
> # /etc/init.d/postgresql start
>
> An old version of the database format was found.
> You need to upgrade the data format before using PostgreSQL.
> See /usr/share/doc/postgresql-8.3.8/README.rpm-dist for more infor
I recently upgraded PostgreSQL, and now this happens:
# /etc/init.d/postgresql start
An old version of the database format was found.
You need to upgrade the data format before using PostgreSQL.
See /usr/share/doc/postgresql-8.3.8/README.rpm-dist for more information.
Unfortunately, I upgraded d
Sam Mason writes:
> On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
>> I think a big reason is also that the client can stream the data without
>> waiting for a network round trip ack on every statement.
> I don't think so. I'm pretty sure you can send multiple statements in a
> s
On Tuesday 29 September 2009, Sam Mason wrote:
> > I think a big reason is also that the client can stream the data
> > without waiting for a network round trip ack on every statement.
>
> I don't think so. I'm pretty sure you can send multiple statements in a
> single round trip. libpq is defin
On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason wrote:
> > it's faster is because
> > parsing CSV data is easier than parsing SQL.
> >
> > At least I think that's the only difference; anybody know better?
>
> I think a big reason is also th
On Tue, Sep 29, 2009 at 05:42:37PM +0200, Pavel Stehule wrote:
> you cannot use double quotes. It's not php.
Normally yes, but *inside* literals you do indeed want double quotes.
I think the OP wants to be using quote_literal here. I.e. instead of:
execute 'insert into foo_something select (
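(A sketch of the quote_literal variant being suggested here, using the names from Merlin's example:)
execute 'insert into foo_something select (' || quote_literal(new::text) || '::foo).*';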
On Tuesday 29 September 2009, Sam Mason wrote:
> ?? I'm not sure what you're implying about the semantics here, but it
> doesn't seem right. COPY doesn't somehow break out of ACID semantics,
> it's only an *optimization* that allows you to get large quantities of
> data into the database faster.
2009/9/29 Reid Thompson :
> On Tue, 2009-09-29 at 07:54 -0400, Merlin Moncure wrote:
>
>> you are missing some quotes in there. also, don't use 'values', use
>> select. see my example above:
>> execute 'insert into foo_something select (''' || new::text || '''::foo).*';
>>
>> the actual query sho
On Tue, 2009-09-29 at 09:41 -0500, Andy Colson wrote:
> FATAL: could not reattach to shared memory (Key=224, addr=025F):
> 487
>
> The OS is windows vista
> PostGIS version - 1.3.6
> Postgresql - 8.3
IIRC, this was fixed in 8.3.8. An update should be the first step.
Regards,
--
Devrim GÜNDÜ
On Tue, Sep 29, 2009 at 7:41 AM, Andy Colson wrote:
> Hi all,
>
> A user on the mapserver news group is getting this error. I have not run on
> windows so dont know what it means. Any hints would be appreciated.
I believe the latest releases of postgresql have fixed this bug. Here
is a write u
On Tue, 2009-09-29 at 07:54 -0400, Merlin Moncure wrote:
> you are missing some quotes in there. also, don't use 'values', use
> select. see my example above:
> execute 'insert into foo_something select (''' || new::text || '''::foo).*';
>
> the actual query should look like:
> insert into paym
Hi all,
A user on the mapserver news group is getting this error. I have not
run on Windows so don't know what it means. Any hints would be appreciated.
FATAL: could not reattach to shared memory (Key=224, addr=025F): 487
The OS is windows vista
PostGIS version - 1.3.6
Postgresql - 8.3
On Tue, Sep 29, 2009 at 3:31 PM, Dave Huber <dhu...@letourneautechnologies.com> wrote:
> All I have to say is wow! COPY works sooo much faster than the iterative
> method I was using. Even after having to read the entire binary file and
> reformat the data into the binary format that postgres ne
Bill Moran writes:
>> My interpretation of the OPs problem was that the inserts and deletes
>> were happening at similar rates. Thus this won't be a problem.
> There has (over the last few years) been a lot of speculation from people
> who think that indexes may suffer performance degradation un
All I have to say is wow! COPY works sooo much faster than the iterative method
I was using. Even after having to read the entire binary file and reformat the
data into the binary format that postgres needs it is an order of magnitude
faster than using a prepared INSERT. At least that's what my
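(For reference, the 8.3-era COPY syntax being described; table, column, and file names are hypothetical:)
COPY sensor_data (ts, channel, reading) FROM STDIN BINARY;
-- or, reading a server-side file directly:
COPY sensor_data FROM '/path/to/data.bin' BINARY;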
Don't know if this will get attached to the conversation, but here is how
I have dealt with this.
insert into names (name, created_date) values ($1, default);
update names n set created_date = coalesce($2, n.created_date) where name
= $1;
So basically insert all of your non-defaulted columns wi
tomrevam wrote:
I have a table with 5 million rows. 10 inserts and deletes are performed on
this table every second.
The table has indexes on the columns I use to query it, and the query is
returning about a 1000 rows. Initially the query takes a very short time
(order of miliseconds), after a fe
In response to Sam Mason :
> On Tue, Sep 29, 2009 at 03:13:49PM +0200, Brian Modra wrote:
> > 2009/9/29 Sam Mason :
> > > Plain vacuum should allow things to reach a steady state after
> > > a while,
> >
> > If there are a lot of deletes, then likely the index parameters are
> > not the best.
>
On Tue, Sep 29, 2009 at 03:13:49PM +0200, Brian Modra wrote:
> 2009/9/29 Sam Mason :
> > Plain vacuum should allow things to reach a steady state after
> > a while,
>
> If there are a lot of deletes, then likely the index parameters are
> not the best.
My interpretation of the OPs problem was tha
Sorry, in oversimplifying for clarity I was making it more unclear!
Whenever I write a function, I like to write a quick ad hoc test to
go with it in the comments. It means I can also test combinations of
functions together quickly, without having to build other functions
first, and gives a
> > > What is recommended now for building PG 8.4 for WinXP? Is it better to
> > > use MSVC compiler?
> Regards, Marina.
>
> > MinGW is perfectly supported, and we know people use it.
> > MSVC is the recommended one I'd say, mainly because that's how the
> > official binaries are built, so the
2009/9/29 Sam Mason :
> On Tue, Sep 29, 2009 at 02:25:52PM +0200, Brian Modra wrote:
>> 2009/9/29 tomrevam :
>> > My DB is auto-vacuuming all the time. The specific table I'm talking about
>> > gets vacuumed at least every 2 hours (usually a little more frequently than
>> > that).
>> > Deletes are
Hello,
Thanks for the comments. But what is the conclusion? Is manual vacuum
considered dangerous (with respect to locking), so that it is best to
avoid it? And what does "kick off autovacuum" mean? Will only the
"cleaning part" that needs a lock be omitted while dead tuples are still
freed, or the whole table
On Tue, Sep 29, 2009 at 02:25:52PM +0200, Brian Modra wrote:
> 2009/9/29 tomrevam :
> > My DB is auto-vacuuming all the time. The specific table I'm talking about
> > gets vacuumed at least every 2 hours (usually a little more frequently than
> > that).
> > Deletes are happening on the table at abo
2009/9/28 Merlin Moncure
>
>
> with s as (select * from two_sets()),
> foo as (select unnest(_foos) from s),
> bar as (select unnest(_bars) from s)
> select
> (select count(*) from foo) as no_foos,
> (select count(*) from bar) as no_bars;
>
> merlin
>
I can see this working as we will be using
On 29/09/2009 13:39, Merlin Moncure wrote:
> The BDE was terrible, as were the native drivers that Borland provided
> with Delphi (Paradox, etc). Third-party drivers (Zeos) eventually
> arrived to fix the problem but by that point Delphi had lost the war
> with Visual Basic. That's really tragic, be
On 29.09.2009 at 11:21, Scott Marlowe <scott.marl...@gmail.com> wrote:
> On Tue, Sep 29, 2009 at 2:52 AM, Paul Gaspar wrote:
>> Hi!
>>
>> We have big problems with collation in ORDER BY, which happens in binary
>> order, not alphabetic (lexicographical), like:.
>>
>> A
>> B
>> Z
>> a
>> z
On Tue, Sep 29, 2009 at 8:27 AM, Mark Morgan Lloyd wrote:
> Andy Colson wrote:
>>
>> Nico Callewaert wrote:
>>>
>>> Hi List,
>>> I tried already in the ODBC list, but without success...
>>> I have to develop a client/server application, heavily transaction
>>> oriented, that will serve around 10
2009/9/29 Iain Barnett :
> If I have a function that returns an integer, how can I use the following
> code outside of a function block?
>
> declare
> _myint int;
>
> begin
> _myint := myfunction( );
> end;
>
select myfunction from myfunction();
or, you can use the function in SQLs wherever you'd
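(In other words, a trivial sketch assuming myfunction() returns an integer:)
SELECT myfunction();            -- call it directly
SELECT * FROM myfunction();     -- or put it in the FROM clause
SELECT myfunction() + 1 AS n;   -- or use the result inside a larger expression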
2009/9/29 tomrevam :
>
>
>
> Brian Modra-2 wrote:
>>
>> When did you last do an analyse and re-create indexes?
>> Is the table UPDATEd to or DELETEd from, or just INSERTed ?
>> Is your DB auto vacuuming?
>>
>
> My DB is auto-vacuuming all the time. The specific table I'm talking about
> gets vacuum
On 29/09/2009 13:06, Iain Barnett wrote:
> If I have a function that returns an integer, how can I use the
> following code outside of a function block?
>
> declare
> _myint int;
>
> begin
> _myint := myfunction( );
> end;
You can't use this code outside of a function - this just isn't valid
SQL
On Tue, Sep 29, 2009 at 8:06 AM, Iain Barnett wrote:
> If I have a function that returns an integer, how can I use the following
> code outside of a function block?
>
> declare
> _myint int;
>
> begin
> _myint := myfunction( );
> end;
you can...all postgresql functions can be used in queries:
se
Any chance you can test the recent PostgreSQL CVS HEAD build (to be 8.5)?
Andy Colson wrote:
Nico Callewaert wrote:
Hi List,
I tried already in the ODBC list, but without success...
I have to develop a client/server application, heavily transaction
oriented, that will serve around 100 users and database traffic will
be intense (lots of reads, updates, inserts..
If I have a function that returns an integer, how can I use the
following code outside of a function block?
declare
_myint int;
begin
_myint := myfunction( );
end;
or is it just not possible?
Iain
Hello,
I've done a benchmark of PostgreSQL's last five major releases to see how
performance has changed from version to version over the past years.
You can find the article here:
http://suckit.blog.hu/2009/09/26/postgresql_history
Thanks for working on this great piece of
Brian Modra-2 wrote:
>
> When did you last do an analyse and re-create indexes?
> Is the table UPDATEd to or DELETEd from, or just INSERTed ?
> Is your DB auto vacuuming?
>
My DB is auto-vacuuming all the time. The specific table I'm talking about
gets vacuumed at least every 2 hours (usually
On Mon, Sep 28, 2009 at 4:29 PM, Reid Thompson wrote:
> On Mon, 2009-09-28 at 11:05 -0400, Reid Thompson wrote:
>> We have a set of tables that we're partitioning by year and month -
>
>>
>
> We can't seem to quite get it right...
> This is our quick stub test.
>
> --
> -- Tables:
> --
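(A minimal sketch of the usual 8.3-style inheritance partitioning being attempted here; all names are hypothetical and the insert-routing trigger or rule is omitted:)
CREATE TABLE measurements (
    logdate  date NOT NULL,
    reading  integer
);
CREATE TABLE measurements_2009_09 (
    CHECK (logdate >= DATE '2009-09-01' AND logdate < DATE '2009-10-01')
) INHERITS (measurements);
-- constraint_exclusion lets the planner skip children whose CHECK cannot match
SET constraint_exclusion = on;
SELECT * FROM measurements WHERE logdate = DATE '2009-09-15';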
On Tue, Sep 29, 2009 at 04:04:46AM -0700, Postgres User wrote:
> >> > I'm trying to write an INSERT INTO statement that will use a DEFAULT
> >> > value when an input parameter is null.
[ workaround given ]
> Again, this approach works for a simple example. But for a larger
> function with any n
2009/9/29 tomrevam :
>
> I have a table with 5 million rows. 10 inserts and deletes are performed on
> this table every second.
> The table has indexes on the columns I use to query it, and the query is
> returning about a 1000 rows. Initially the query takes a very short time
> (order of milisecon
I have a table with 5 million rows. 10 inserts and deletes are performed on
this table every second.
The table has indexes on the columns I use to query it, and the query is
returning about 1000 rows. Initially the query takes a very short time
(on the order of milliseconds); after a few hours it takes
>> >
>> > I'm trying to write an INSERT INTO statement that will use a DEFAULT
>> > value when an input parameter is null.
>> >
>> Neither of my 2 methods are pretty.
>> 1) Use a trigger.
>> 2) Grab and cast the default value from the information_schema.columns
>> view and plug it in.
>>
>> Another
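(Method 2 above would look roughly like this; the table and column names are the hypothetical ones used elsewhere in the thread:)
SELECT column_default
FROM information_schema.columns
WHERE table_name = 'names' AND column_name = 'created_date';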
On Mon, Sep 28, 2009 at 08:33:45PM -0400, Martin Gainty wrote:
> INSERTs/UPDATEs are historically slow, especially when autocommit is
> on (implied autocommit): the database writer actually stops any
> processing and applies that one record to the database
That seems to be overstating the issue s
Dear list,
Say, I wish to have a function returning a record, taking a row as
parameter. However, from a performance perspective, I wish to be able to
perform this query only on a subset by using a WHERE-clause.
I can only think of the following approach:
WITH ss AS ( SELECT * FROM my_table) SELE
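(One hedged sketch of where that seems to be heading, assuming my_func takes a my_table row and returns a named composite type; note that on these releases the (f(x)).* expansion re-evaluates the function once per output column, which is one more reason to restrict the rows first:)
SELECT (my_func(t)).*
FROM my_table t
WHERE t.some_col = 42;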
On Tue, Sep 29, 2009 at 2:52 AM, Paul Gaspar wrote:
> Hi!
>
> We have big problems with collation in ORDER BY, which happens in binary
> order, not alphabetic (lexicographical), like:.
>
> A
> B
> Z
> a
> z
> Ä
> Ö
> ä
> ö
>
> PG is running on Mac OS X 10.5 and 10.6 Intel.
I seem to recall there
Hi!
We have big problems with collation in ORDER BY, which happens in
binary order, not alphabetic (lexicographical), like:
A
B
Z
a
z
Ä
Ö
ä
ö
This is what we have done:
1. initdb -U myuser -E UTF-8 --locale=de_DE.UTF-8 -D /mydata
2. pg_ctl -U myuser -D /mydata -l logfile start
3. created
Postgres User wrote:
>>> I'm trying to write an INSERT INTO statement that will use a DEFAULT
>>> value when an input parameter is null.
>>>
>> Neither of my 2 methods are pretty.
>> 1) Use a trigger.
>> 2) Grab and cast the default value from the information_schema.columns
>> view and plug it in.
On Mon, 2009-09-28 at 10:08 -0700, John wrote:
> On Monday 28 September 2009 09:56:33 am Filip Rembiałkowski wrote:
> > BTW, you did not specify what exactly did not work when you tried apgdiff.
> > This would help others to help you.
>
> To be honest I could not determine how to start the app.