I guess you can create a unique functional index with upper as the
function. Try to look up the docs for CREATE INDEX.
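For example (an untested sketch; I'm assuming the column is called "name"):

CREATE UNIQUE INDEX customer_name_upper_idx ON customer (upper(name));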
HTH,
Csaba.
On Wed, 2005-09-21 at 18:15, Andrus wrote:
> I want to disable duplicate customer names in a database regardless of
> case.
>
> I tried
>
> CREATE TABLE customer
This is a known issue with the Java 1.5 networking code: it tries to
reverse-look-up IP addresses in certain circumstances (which evidently occur
in the JDBC driver) if you use IPs in your connection addresses.
You're best off making sure DNS is set up locally on your client
machines for all the
Hi all,
I've recently asked a similar question, which has received no useful answer
yet, so I'll drop in too.
In my case, the table I was inserting into was already quite big to
start with (and analyzed as such), so I was expecting that it would not slow
down due to indexes, as they were quite big to s
On Mon, 2005-10-10 at 13:34, Lincoln Yeoh wrote:
[snip]
> It's actually quite surprising how many people get this wrong and don't
> realize it (I wonder how many problems are because of this). The SQL spec
> should have had a PUT/MERGE decades ago. The insert vs update format being
> different i
Venki,
Postgres uses a process per _realized_ connection, and reserves some
memory per _configured_ connection. So setting the configured connection
count too high is not free, even if normally you won't have that many
connections actually open. And of course at peak load the number of
connectio
Hi all,
I wonder what caused the error we just got on a postgres 8.0.3 database
(using JDBC to connect, but I guess that's not really relevant for this
error):
[snip stack trace]
Caused by: org.postgresql.util.PSQLException: ERROR: could not access
status of transaction 0
[snip]
Could it be som
y".
Any advice on what I should do/check next?
Thanks,
Csaba.
On Mon, 2005-10-17 at 17:21, Csaba Nagy wrote:
> Hi all,
>
> I wonder what caused the error we just got on a postgres 8.0.3 database
> (using JDBC to connect, but I guess that's not really relevant for
Hi all,
Postgres version: 8.0.3
I wonder if I understood correctly what log_min_duration_statement
does... I set it to 2000, and the result is that all queries running
more than 2 seconds on _local_ connections are logged, but long running
queries on remote connections are not logged. Is this som
ssing with postgres
settings here...
Thanks,
Csaba.
On Wed, 2005-10-19 at 17:46, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > I wonder if I understood correctly what log_min_duration_statement
> > does... I set it to 2000, and the result is that all queries run
Hi all,
I wonder how fair the lock allocation of SELECT FOR UPDATE is.
Is it fair, i.e. will the first transaction that requested the lock get
it, or is it possible that newer requests are served sooner?
TIA,
Csaba.
blems.
Thanks,
Csaba.
On Wed, 2005-10-19 at 20:22, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > Now the remote connections are coming from Java (the JDBC driver),
>
> Oh, there's your problem. 8.0 doesn't have very good support for
> logging the
Romain,
If you're using JDBC and the V3 protocol is hurting you (which was my
case at one point), you can force the 8.0 driver to use the V2 protocol,
which does a lot of things differently. That could be a quick fix until
you get your code to work properly with V3.
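IIRC the driver exposes this as a connection parameter, something like
this (check the pgjdbc docs for your exact driver version):

jdbc:postgresql://host:5432/mydb?protocolVersion=2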
Just shooting in the
OK, it's been quite some time since the original question was posted, but
now I have more data... see below.
On Thu, 2005-09-29 at 19:24, Jim C. Nasby wrote:
> On Tue, Sep 27, 2005 at 10:24:02AM +0200, Csaba Nagy wrote:
> > Hi all,
> >
> > I've recently asked a simi
The variant you're showing here has absolutely no quoting for the
function body. You need to quote the body and escape any quotes you
have inside it (in this example you don't have any).
Wrap the body in BEGIN ... END.
Also put a semicolon after each statement.
Corrected code:
CREATE O
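That is, something along these lines (sketching a trivial one-statement
body just for illustration; the body here contains no quotes, so nothing
needs escaping):

CREATE OR REPLACE FUNCTION add_one(i integer) RETURNS integer AS '
BEGIN
    RETURN i + 1;
END;
' LANGUAGE plpgsql;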
On Tue, 2005-10-25 at 11:18, WireSpot wrote:
> On 10/25/05, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > Are there indexes on the foreign key columns? That is, given
> > the following example,
> >
> > CREATE TABLE foo (id integer PRIMARY KEY);
> > CREATE TABLE bar (fooid integer NOT NULL REFERENCES
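Where that example is heading: the fix is to create the index on the
referencing column yourself, since postgres does not create it
automatically. A sketch, reusing the quoted table names:

CREATE INDEX bar_fooid_idx ON bar (fooid);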
I guess your master table has a considerable tree of child records for
each deleted record; that's why the deletion takes so long. We have this
situation too on top-level object deletion. Our solution is to make it
asynchronous, i.e. the user requests it and then a background job does
it. There's n
You can select it from the sequence's associated relation as from any
table... try:
select * from sequence_name;
However, be aware that what you see there is only the situation as of
your own transaction, and it is very possible that other transactions
will use higher values concurrently. So it really d
might be doing something wrong when restoring the database.
>
> Regards
> venki
>
> ---Original Message---
>
> From: Csaba Nagy
> Date: 10/26/05 18:24:48
> To: Venki
> Cc: Postgres general mailing list
> Subject: Re: [GENERAL] How to get the value in t
[snip]
On Thu, 2005-11-03 at 18:02, [EMAIL PROTECTED] wrote:
> Strange - I had never realized that PostgreSQL would allow you to UPDATE a
> primary key value. I thought that other db's I had used (e.g. Sybase,
> Oracle, SQL Server, etc.) in the past would not allow that, and you had to
> DELETE, t
See my comments below.
On Fri, 2005-11-04 at 15:24, vishal saberwal wrote:
> thanks for your response,
>
> We are kind of jailing (maybe hiding would be a better term)
> resources behind the database/stored procedures, and the GUI needs to
> have a feel as if the data is coming from the database.
> Its
Use \df+ instead of \df
The additional "+" gives more details in many of the backslash commands.
Cheers,
Csaba.
On Wed, 2005-11-09 at 14:40, ctobini wrote:
> Hello,
>
> I would like to know how to list the contents of a function.
>
> Writing \df in psql, I have a list of functions and triggers, but
I guess the best solution is one which allows you to do it in batches,
as inserting is more efficient if you don't commit after each insert.
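A minimal sketch of the idea (hypothetical table; the same applies to
JDBC batches):

BEGIN;
INSERT INTO big_table (k1, k2, k3, k4, k5, val) VALUES (1, 1, 1, 1, 1, 'a');
INSERT INTO big_table (k1, k2, k3, k4, k5, val) VALUES (1, 1, 1, 1, 2, 'b');
-- ... many more rows ...
COMMIT;  -- one commit for the whole batch instead of one per row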
On Wed, 2005-11-09 at 15:45, Alex wrote:
> Hi,
> have just a general question...
>
> I have a table of 10M records, unique key on 5 fields.
> I need to upda
Quote from the link below:
"Tip: A block containing an EXCEPTION clause is significantly more
expensive to enter and exit than a block without one. Therefore, don't
use EXCEPTION without need."
I would think this places an automatic savepoint at the beginning of the
block. I doubt this would give t
This is also true in my situation, where I have some medium-sized
tables which always have just a handful of heavily updated rows. The
number of updates is not large relative to the size of the table, but
the repeated update of the same rows will cause problems before
autovacuum kicks in, as
[snip]
> With the release of 8.1 and its integrated version of autovacuum, you
> can now set per-table settings for vacuum and analyze thresholds,
> vacuum cost delay, and table enable/disable. This addresses what was
> probably the largest deficiency with the old contrib version.
Cool !
OK, I've found it:
http://www.postgresql.org/docs/8.1/interactive/catalog-pg-autovacuum.html
One more incentive to upgrade as quickly as possible.
Cheers,
Csaba.
On Fri, 2005-11-11 at 17:40, Csaba Nagy wrote:
> [snip]
> > With the release of 8.1 and its integrated version
The OP was complaining about the results of the above script, which I
could readily reproduce on a 8.1.0 installation on debian (see below).
The same select which returned 3 rows returns nothing after creating
the partial indexes, which looks like a bug to me...
I can't tell anything about why it
Hi all,
I've had a strange lockup today on our postgres database.
Postgres 8.0.3
Debian GNU/Linux 3.1
autovacuum set up
The situation (diagnosed via pg_stat_activity): one table was locked by
an update, a VACUUM ANALYZE was running for the same table (triggered by
autovacuum), and a handful of
On Wed, 2005-11-16 at 19:41, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > The situation (diagnosed via pg_stat_activity): one table was locked by
> > an update, a VACUUM ANALYZE was running for the same table (triggered by
> > autovacuum), and a handful
Hi all,
I have observed strange behavior on 8.1 of an insert statement into a
temporary table done from a delete trigger.
I've attached a test case.
Observe that the NOTICE saying the rows were inserted occurs all 5 times
on the 8.0.3 server but only the first 2 times on the 8.1
installati
OK, I found a fix which works for me: don't use ON COMMIT DELETE ROWS on
the temporary tables, but explicitly delete the rows once processed.
However, I think it should work with ON COMMIT DELETE ROWS too, and
indeed it works fine in 8.0.3.
Cheers,
Csaba.
On Thu, 2005-11-17 at 16:22, Csaba
Hi all,
I have right now a locked backend, similar to what I had a few days ago.
It won't respond to pg_cancel_backend.
Is there a way to diagnose what it is actually doing?
If you answer, please do it quickly, as in max 15 minutes I'll kill -9
...
Thanks,
Csaba.
I'll try to make a test case out of it, maybe I'll catch
the error on a development machine...
Cheers,
Csaba.
On Fri, 2005-11-18 at 14:23, Richard Huxton wrote:
> Csaba Nagy wrote:
> > Hi all,
> >
> > I have right now a locked backend, similar to what I had a few days
> Well, I've had time to read your previous message too.
>
> The first time you seem to imply the machine slowed down across all
> processes - ssh etc. Was that the case this time?
OK, the slowdown/crash was a different problem, which might have been
caused by a "too many open files" problem co
[snip]
> So what is the UPDATE doing? What is the query (see pg_stat_activity)
> doing? Is it updating a lot of rows? If the query does run for a long
> time holding any kind of lock, you're going to get strange effects like
> this.
The UPDATE is processing ~ 100 rows in a transaction (I'm not sur
> > ... which still doesn't cover "RE" translated in most of the world's
> > languages, which I'm sure occasionally pop up from people who use
> > localized mail clients.
>
> * ! ^Subject:.*(Re|AW|Antwort|Reply|Reponse|Sv):
>
> Better?
The point was that you will never ever be able to cover all
> I wouldn't mind terribly getting an occasional "clippy" message like:
>
> It looks like you're trying to unsubscribe from this mailing list.
>
> Would you like some help with doing that?
>
> As opposed to a message "You've been unsubscribed" After sending
> something about slony to the list.
Note: this gets way off topic for this list. And the problem was solved
too in a satisfying manner...
And the new subject line will trigger quite a few adult SPAM filters too
;-)
> > The point was that you will never ever be able to cover all the variants
> > existing in the weirdest email client
> > Cons:
> - No reliability. On slow days, WAL logs could take a long time to
> rotate, so small but important transactions might not be replicated
> for a long time.
That's gone with 8.2: it will be possible to stream the last
modifications, or force a WAL recycle periodically, whatever fit
> I would think that the data pages are written and consistent while in
> recovery mode, so maybe it's reasonable to do. However, I'm only
> speculating and anything like this would probably not be coming soon.
I was thinking at one point about what problems could prevent the
standby from allowing read
> [snip] Having an expensive process run once in a while and setting this value
> also
> sounds interesting. [snip]
What about externalizing the statistics calculation? I mean, would it
make sense to have e.g. a WAL-fed standby with an additional
process which keeps the statistics in sy
> /var resides on /dev/sda, and /data in /dev/sdb
I bet you're running a default installation of postgres which has its
data in /var.
Check your real data directory by running 'ps auxww|grep post', and see
what's after the '-D' parameter... and then when you figure out that the
startup script is
Hi all,
I know of 2 causes:
- hit CTRL-C in the psql client;
- have a non-zero statement timeout and have the statement actually
time out;
But I am seeing this via JDBC which can't cancel a statement AFAIK, and
the statement_timeout is set to 0 (in the config file globally, and
there's no over
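For reference, the second cause is easy to reproduce by hand
(hypothetical table name):

SET statement_timeout = 1000;         -- milliseconds, 0 means no limit
SELECT count(*) FROM some_big_table;  -- cancelled if it runs over 1 second
SET statement_timeout = 0;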
> JDBC *can* cancel a running statement. You just need to call cancel()
> from another thread.
>
> http://java.sun.com/j2se/1.5.0/docs/api/java/sql/Statement.html#cancel()
>
> Works great in my SQL front end.
Are you sure? Have you ever tried to cancel a long-running statement and
checked if it is still ru
[snip]
> I think the idea is to edit the postgresql.conf file on the fly and send
> a SIGHUP to the postmaster. I haven't ever heard of anyone doing that,
> but I don't see any reason why it wouldn't work.
It works; I did it for a while with statement_timeout to change it
globally overnight
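The reload itself is just (assuming a standard installation; pg_ctl sends
the SIGHUP for you):

pg_ctl reload -D /path/to/data
# or, equivalently, signal the postmaster directly:
kill -HUP `head -1 /path/to/data/postmaster.pid`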
> I'm confused, what does statement timeout have to do with this? I was
> assuming you would edit "autovacuum = off" to "autovacuum = on" wouldn't
> that work?
Sorry for the confusion, I was thinking about the procedure of changing
the config file programmatically and reloading it, all this from a
Hi all,
While postgres 8.x greatly improved the locking issues related to foreign
keys, the locking used is still stronger than needed.
The following test case deadlocks on postgres but goes through on
oracle:
preparation of tables:
create table test1(col_fk integer primary key, col_1 integer);
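The original test case is cut off here, but a reconstruction along the
same lines (the second table and the session interleaving are my sketch)
deadlocks on 8.1, where the FK check takes a shared lock on the
referenced row:

create table test2(col_fk integer references test1(col_fk), col_2 integer);
insert into test1 values (1, 0);

-- session 1:
begin;
insert into test2 values (1, 1);  -- shared lock on the test1 row

-- session 2:
begin;
insert into test2 values (1, 2);  -- shared lock on the same row, succeeds

-- session 1:
update test1 set col_1 = 1 where col_fk = 1;  -- waits for session 2

-- session 2:
update test1 set col_1 = 2 where col_fk = 1;  -- deadlock detected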
> FOR daycnt IN 1..31 LOOP
How about months with fewer than 31 days? What do you get for those if
the day is 31?
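One way to avoid hard-coding 31 (my suggestion, not from the quoted
code): compute the number of days in the month, e.g.

SELECT extract(day FROM date_trunc('month', current_date)
                        + interval '1 month' - interval '1 day');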
Cheers,
Csaba.
On Fri, 2006-12-08 at 17:39, Bernhard Weisshuhn wrote:
> On Fri, Dec 08, 2006 at 05:26:22PM +0100, Harald Armin Massa <[EMAIL
> PROTECTED]> wrote:
>
> > Now we just need fast, stable and native replication for " The Girl
> > That Every Man Secretly Wishes He Was Married To!"
>
> I want replicati
> Yeah, we could add defenses one by one for the cases we could think of,
> but I'd never feel very secure that we'd covered them all.
What you all forget in this discussion is that reindexing concurrently
would have to be a highly administrative task, controlled by the DB
admin... so whoever has
On Wed, 2006-12-13 at 17:12, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > So an implementation which optimistically builds the new index
> > concurrently while holding no lock, and then hopes for the 3rd
> > transaction to be able to get the exclusive l
On Tue, 2006-12-19 at 07:28, Tom Lane wrote:
> Glen Parker <[EMAIL PROTECTED]> writes:
> > I am still trying to roll my own auto vacuum thingy.
>
> Um, is this purely for hack value? What is it that you find inadequate
> about regular autovacuum? It is configurable through the pg_autovacuum
> ca
From my POV, autovacuum is doing a very good job, with the exception of:
> - There might be time-based exclusions to the effect that large tables
> oughtn't be processed during certain periods (backup time?)
Either (per table!) exception- or permission-based control of when a
table can be vacuu
> Alternatively, perhaps a threshold so that a table is only considered
> for vacuum if:
>(table-size * overall-activity-in-last-hour) < threshold
> Ideally you'd define your units appropriately so that you could just
> define threshold in postgresql.conf as 30% (of peak activity in last 100
On Thu, 2006-12-21 at 18:03, Matthew T. O'Connor wrote:
> The maintenance window design as I understand it (Alvaro chime in if I
> get this wrong) is that we will be able to specify blocks of time that
> are assigned specific autovacuum settings. For example we might define
> a maintenance wind
Ben,
On Thu, 2006-12-21 at 20:10, Benjamin Smith wrote:
> I'm breaking up a database into several sets of data with similar layout. (we
> currently have multiple customers using a single database and tableset, we're
> splitting it out to give us more "wiggle room")
We have here a very similar
On Thu, 2006-12-21 at 18:41, Alvaro Herrera wrote:
> > From all the discussion here I think the most benefit would result from
> > a means to assign tables to different categories, and set up separate
> > autovacuum rules per category (be it time window when vacuuming is
> > allowed, autovacuum pro
On Sun, 2006-12-24 at 03:03, Christopher Browne wrote:
[snip]
> Seems to me that you could get ~80% of the way by having the simplest
> "2 queue" implementation, where tables with size < some threshold get
> thrown at the "little table" queue, and tables above that size go to
> the "big table" queu
On Mon, 2007-01-08 at 22:29, Chris Browne wrote:
[snip]
> Based on the three policies I've seen, it could make sense to assign
> worker policies:
>
> 1. You have a worker that moves its way through the queue in some sort of
>sequential order, based on when the table is added to the queue, to
>
On Mon, 2007-01-08 at 20:11, Ron Mayer wrote:
[snip]
> That's unlikely to work anyway. Organizations protecting valuable data
> using technical approaches (DVDs, etc) find it gets out anyway.
> Since you'll ship a client that can decrypt the data anyway, anyone with
> a debugger could decrypt it (u
On Tue, 2007-01-09 at 17:31, Matthew T. O'Connor wrote:
> Without getting into all the details, the autovacuum naptime is a GUC
> variable right now, so it can be much more frequent than the current
> default which is 60 seconds.
Hmm, for some reason I thought the granularity was minutes, but it
On Tue, 2007-01-09 at 17:36, Csaba Nagy wrote:
> On Tue, 2007-01-09 at 17:31, Matthew T. O'Connor wrote:
> > Without getting into all the details, the autovacuum naptime is a GUC
> > variable right now, so it can be much more frequent than the current
> > default whi
[snip]
> Come to think of it, an auto-reindex option might be nice in core someday.
> TODO item?
Marry it with autovacuum + online index build, and it will be cool ;-)
BTW, having a privileged background thread doing the reindex could be a
solution to most of the objections regarding online reind
[snip]
> I'm afraid I don't see how any of the answers I saw discussed fit a
> 24x7 operation. Reindex, drop index, vacuum full, ... they all
> block production queries of one sort or another for significant
> periods of time (minutes) on large (multi/tens of GB) tables,
> and thus are infeasibl
On Fri, 2007-01-19 at 20:03, Jeremy Haile wrote:
> Is it feasible to add a "reindex concurrently" that doesn't lock the
> table for the rebuild, then locks the table when doing a second pass to
> pickup rows that were changed after the first pass? Or something like
> that
IIRC, the objection
On Mon, 2007-01-22 at 20:56, Lenorovitz, Joel wrote:
[snip]
> ERROR: too many trigger records found for relation "item"
I've got this error on a development database where we were
continuously creating new child tables referencing the same parent
table. The responsible code is in src/backend/com
On Tue, 2007-01-23 at 14:49, Brandon Aiken wrote:
> Nevertheless, the database should be able to handle any combination of
> syntactically correct SQL statements without throwing errors, while
> maintaining the database in a consistent state. If what you're saying
> is right, the error thrown here is
On Tue, 2007-01-23 at 15:43, Tom Lane wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
> > The responsible code is in src/backend/commands/trigger.c, and I
> > think it only happens if you manage to create/drop a new trigger (which
> > also could be a FK trigger c
[Update: the post didn't make it to the list, probably due to the attachment, so
I resend it inlined... and I was not able to trigger the same behavior on 8.2,
so it might already have been fixed.]
[snip]
> Well, if you can show a reproducible test case, I'd like to look at it.
OK, I have a test
> [snip] I believe this is fixed as of 8.2 --- can you duplicate it
> there? (No, backpatching the fix is not practical.)
No, I was not able to duplicate it on 8.2, so I think it's fixed (given
that on 8.1 the errors are triggered in almost 100% of the runs).
> How sure are you about that uninterlo
> Why irrecoverably? If the command fails, you just wait and try it
> again.
> You could use the pg_stat tables to work out who is connected and use
> pg_cancel_backend() to kill them. You could "kill -INT" them yourself.
> You could change the pg_hba.conf to forbid logging in and then bouncing
>
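In SQL terms, the quoted suggestions look roughly like this (the pid is
of course hypothetical; the column names are the 8.x ones):

SELECT procpid, usename, current_query FROM pg_stat_activity;
SELECT pg_cancel_backend(12345);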
On Tue, 2007-02-20 at 03:43, Tom Lane wrote:
> Nonsense. pg_stat_activity + "kill -TERM" should solve this problem
> reasonably well. Some of us don't trust kill -TERM 100%, which is why
> it's not currently exposed as a standard function, but if you're using
> a reasonably recent PG release it's
> I would like to convert a mysql database with 5 million records and
> growing, to a pgsql database.
I wouldn't qualify 5 million as "huge". We have several tables here in
the 100-million-row range, and I still think it's a medium-sized DB...
> All the stuff I have come across on the net has things like
>
> And due at least in part to government (and other institutions operated by
> damned fools) opting for the least expensive provider rather than paying for
> someone who actually knows what they're doing. Just as buying cheap junk
> always comes back to get you, hiring incompetent fools that do
(see below the original mail I forwarded to the list)
Zsombor,
As far as I know there's no way to disable the MVCC features in
postgres; that's a fundamental design choice in its workings.
However, your problem can be solved by properly running vacuum on the
tables you update frequently. A firs
> I agree with everything you said except the point about the GWT.
> Using a framework like this you can have your query in the javascript,
> and pass it through directly the database and pass the data back using
> extremely simple (think 10 line) php or perl rpc that renders query
> result back in
Jiří,
I guess you should be able to do it via slony. Usually one wants to
upgrade using slony, but it should work the other way around too :-)
Cheers,
Csaba.
On Fri, 2007-04-13 at 15:09, Jiří Němec wrote:
> Hello,
>
> I need to import a PostgreSQL DB from a 8.2.3 server to a 8.0.8 server. Is
> there som
Sorry Jiří, that was a hasty answer on my part...
> I haven't used Slony by now. Do you have some advices or articles they
> may help? I have no ssh access at FreeBSD server box when PostgreSQL
> 8.0 server runs - so I hope this is no problem...
Slony is quite a heavy beast to install, I'm pret
> Other than hard disk space, are there any disadvantages on
> having a large number of page slots?
It's not using hard disk space, it's using shared memory, so you might
want to adjust it to make that memory available for other purposes...
> DETAIL: A total of 2233472 page slots are in use (in
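The knobs in question live in postgresql.conf; example values only, size
them from the hint that VACUUM VERBOSE prints (each page slot costs about
six bytes of shared memory):

max_fsm_pages = 2500000
max_fsm_relations = 1000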
> If you are updating a large portion of your tree, you will probably want to
> throw in a vacuum in
> between the two updates. This should reduce the bloat caused by dead tuples
> in both your index
> and table.
... but that will only work if you can commit the first set of changes
before you
> [snip] Take the example of a query "UPDATE tablename SET x = x + 1".
> When this query is erroneously issued twice, data corruption will occur.
Huh? I thought Slony replicates data, not queries... what is "UPDATE
tablename SET x = x + 1" on the master will translate to "UPDATE
tablename SET
IIRC, this was discussed a few times on this list; searching the
archives might get you some results. AFAIR, the only way to do it
efficiently is to have a column specially assigned for this purpose,
populated with random numbers over a big range. The column should be
indexed to ensure fast acc
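A sketch of that approach (table and column names are hypothetical):

ALTER TABLE events ADD COLUMN rnd double precision;
UPDATE events SET rnd = random();
CREATE INDEX events_rnd_idx ON events (rnd);

-- pick one roughly random row via the index:
SELECT * FROM events WHERE rnd >= random() ORDER BY rnd LIMIT 1;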
[snip]
> SELECT TRIM(TRIM(TRIM(COALESCE(s_house,'') || ' ' ||
> COALESCE(s_post_dir,'')) || ' ' || COALESCE(s_street ,'')) || ' ' ||
> COALESCE(s_suffix,'')) FROM parcels WHERE s_pin = '1201703303520';
>
> The TRIMs are to remove surplus spaces from inside the result string.
Avoiding the inner tri
Check out:
src/include/utils/errcodes.h
in the postgres source tree.
In Java you will get these error codes using the getSQLState method on
the SQLException. You should also check the JDBC source for any further
JDBC-specific codes, as it is possible to get JDBC-side errors (i.e.
not coming from th
Cool, this goes to my "util" mail folder :-)
[snip]
> But it's still too cumbersome. How about creating a new operator? With
> the example below the query would simply be
>
> SELECT s_host ||~ s_post_dir ||~ s_street ||~ s_suffix
> FROM parcels
> WHERE s_pin = '1201703303520';
>
> alvherre=
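The quoted definition is cut off above; a sketch of how such an operator
could be built (my reconstruction, not the original from the thread):

CREATE FUNCTION concat_nonempty(text, text) RETURNS text AS '
    SELECT CASE
        WHEN $1 IS NULL OR $1 = '''' THEN COALESCE($2, '''')
        WHEN $2 IS NULL OR $2 = '''' THEN $1
        ELSE $1 || '' '' || $2
    END;
' LANGUAGE sql IMMUTABLE;

CREATE OPERATOR ||~ (PROCEDURE = concat_nonempty,
                     LEFTARG = text, RIGHTARG = text);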
I'm pretty sure the real poster was not Mike. It's very easy to forge
the mail headers, including the From, and comparing this mail with other
posts from Mike, it's pretty obvious they were written from different
places.
Cheers,
Csaba.
On Wed, 2004-11-10 at 10:23, Joel wrote:
> > Hello. My name is
I would say it all depends on what you want to do with the data.
If you want to look up all the possible occurrences of an event, it
might be useful to have the simple solution you described. If you will
only look up the next n occurrences starting from a given date, you might
be better off storing
do and what to avoid.
> Thanks
>
> Matt
>
>
> --- Original Message ---
> Date: 16.11.2004 16:32
> From: Csaba Nagy <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] How to deal with almost recurring data?
>
> > I would say it
Why don't you check in your update trigger whether the new record has
the flag "false"? In that case you replace new with old, except you set
the flag to false. This way you can reset the flag by simply updating
the flag field to false. All other queries should not touch the field.
In other words, u
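A sketch of such a trigger (the table and flag column names are
placeholders):

CREATE FUNCTION keep_old_row() RETURNS trigger AS '
BEGIN
    IF NEW.flag = false THEN
        NEW := OLD;        -- discard the attempted changes...
        NEW.flag := false; -- ...except for clearing the flag
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER keep_old_row_trg BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE keep_old_row();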
What we are doing here is likely not applicable to you, but I'll
describe it anyway:
- keep the data definition in an XML document, which includes all the
comments about all the tables/fields in the schema;
- generate both the database schema and the (HTML) documentation out of
this XML using style s
The only advantage would be that an in-database solution would be OS
independent, and it could be managed using the same tools which manage
the database itself, including its backup and management. I'm not
sure how the Oracle thing works, but I suppose you can manage it
using plain SQL. T
No doubt about this, there are a lot of features which are way more
important, but that was not the point at all.
The question was whether there is any advantage to having it in the DB,
and the answer is: yes, there is some advantage. How important it is, is
a totally different question - there are many
See my comments below.
On Thu, 2004-12-16 at 21:01, Greg Stark wrote:
> Csaba Nagy <[EMAIL PROTECTED]> writes:
>
> > The only advantage would be that an in-database solution would be OS
> > independent
>
> That kind of argument leads to monstrosities like Oracle
at 10:47:46 +0100,
> Csaba Nagy <[EMAIL PROTECTED]> wrote:
> > The only advantage would be that an in-database solution would be OS
> > independent and it could be managed using the same tools which manage
> > the database itself, including the backup and management of
On Wed, 2005-01-05 at 21:34, Jeff Davis wrote:
> Benefits of multiple instances:
> (1) Let's say you're using the one-instance method and one of your web
> users is a less-than-talented developer, and makes an infinite loop that
> fills the database with garbage. Not only will that hurt performance
[snip]
> The database could be clever and implement the same kind of sampling vacuum
> does. That picks a random sampling of pages from the table without using an
> index. But there's no way to implement the same kind of behaviour from the
> user-visible features.
... meaning perhaps a new keyword
[snip]
> See:
>
> http://www.jlcomp.demon.co.uk/faq/random.html
>
> I think the Oracle syntax looks like
>
> SELECT * FROM foo SAMPLE (0.1)
>
> I don't think I would have picked this syntax but it seems like a better idea
> to copy the existing practice rather than invent a new one.
>
> Th
Hi all,
I wonder if it makes sense to implement units as separate data types?
Because that's what they really are.
So "ampere" would be a data type which aliases one of the numeric data
types (depending on what precision range you need), but cannot
be added to anything other than "am
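A rough sketch with domains (note: domains silently coerce back to
numeric, so this only documents intent; actually rejecting ampere + volt
would need distinct types with their own operators):

CREATE DOMAIN ampere AS numeric;
CREATE DOMAIN volt   AS numeric;

CREATE TABLE measurement (amps ampere, volts volt);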
Sid,
Postgres has excellent documentation. You can read about what you
asked here:
http://www.postgresql.org/docs/8.0/static/functions-sequence.html
It always pays off to read the docs first.
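In short, assuming a sequence named my_seq:

SELECT nextval('my_seq');  -- advance and return the next value
SELECT currval('my_seq');  -- last value nextval returned in this session
                           -- (errors out before the first nextval call)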
Cheers,
Csaba.
On Fri, 2005-01-28 at 12:36, sid tow wrote:
> hi
>
> How to get the current val
[snip]
> Or if there are also blanks among those e-mail addresses:
>
> SELECT first_name,work_email FROM tb_contacts WHERE
> tb_contacts.work_email IS NOT NULL AND tb_contacts.work_email != '';
The "tb_contacts.work_email IS NOT NULL" clause is superfluous, the
other condition will evaluate to f
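So the shorter equivalent is (for a NULL work_email the != '' test
yields NULL, not true, and the row is filtered out anyway):

SELECT first_name, work_email FROM tb_contacts
WHERE tb_contacts.work_email != '';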