On May 23, 2007, at 1:12 PM, Donald Laurine wrote:
Now my question. The performance of each of these databases is
decreasing. I measure the average insert time to the database. This
metric has degraded by about 300 percent over the last year. I run
vacuum analyze and vacuum analyze full o
On May 25, 2007, at 5:28 PM, Tom Lane wrote:
That's true at the level of DDL operations, but AFAIK we could
parallelize table-loading and index-creation steps pretty effectively
--- and that's where all the time goes.
I would be happy with parallel builds of the indexes of a given
table. T
On Jun 8, 2007, at 3:33 PM, Guy Rouillier wrote:
Well, I'm not one of the developers, and one of them may have this
particular scratch, but in my opinion just about any available fish
has to be bigger than this one. Until someone comes out with a
standardized approach for utilizing whatev
On Jun 15, 2007, at 8:24 AM, Francisco Reyes wrote:
Understood. But at least it shows that the program was already
above the default of 512MB limit of the operating system.
But that is a false assertion that the limit is 512MB. On a random
system of mine running FreeBSD/i386 it shows the
On Jun 18, 2007, at 2:10 PM, Francisco Reyes wrote:
Also the error is about running out of memory when trying to
allocate 84MB.
The default FreeBSD limit is 512MB so 84MB is well below that.
Try being less stingy than 128MB for your stack. The default stack
is 512MB.
--
On Jun 25, 2007, at 9:33 PM, Francisco Reyes wrote:
Therefore, the problem is only with the i386 version.
Should I report this as a bug or is this "normal" and expected?
i wouldn't call it a bug to need more resources than you've got
available :-) obviously the limits on the i386 version
On Jun 25, 2007, at 10:32 PM, Francisco Reyes wrote:
Hm... now I am really confused.
The same settings on AMD64 work. So how are "more resources
available" when I have the same amount of memory and the same
settings?
you set your maxdsize to the same as on i386? on even my smallest
am
On Jun 26, 2007, at 3:31 PM, Bill Moran wrote:
VACUUM FULL and REINDEX are not required to maintain disk usage.
Good old-fashioned VACUUM will do this as long as your FSM settings are high
enough.
I find this true for the data but not necessarily for indexes. The
other week I reindex
On Jul 16, 2007, at 9:26 AM, Francisco Reyes wrote:
I guess the next question is 'what does postgresql consider a blob'?
bytea fields? How about a large text with megabytes worth of data?
bytea and text fields are NOT blobs; blobs are what you access via
the 'large object' functions.
--
On Jul 27, 2007, at 8:29 PM, Jim Nasby wrote:
Double-check with the Slony guys, but ISTR that there's an issue
going all the way from 7.4 to 8.2 in a single shot.
I don't think that's a slony-specific issue. Moving from 7.4 to 8.0
introduces a fair number of incompatibilities one must add
On Aug 1, 2007, at 10:56 AM, Richard Huxton wrote:
You could write a small cron-script that dumped the schema once
every 5 minutes so it could be picked up by svn.
I think most people have a separate collection of schema-creation/
update scripts that they keep under version control. All cha
On Aug 9, 2007, at 9:38 AM, Brad Nicholson wrote:
I have the times that it takes to do a regular
vacuum on the clusters, will vacuum full take longer?
almost certainly it will, since it has to move data to compact pages
rather than just tagging the rows as reusable.
you can speed thing
On Aug 15, 2007, at 7:41 AM, Ivan Zolotukhin wrote:
What is the best practice to process such a broken strings before
passing them to PostgreSQL? Iconv from utf-8 to utf-8 dropping bad
characters?
This rings of GIGO... if your user enters garbage, how do you know
what they wanted? You don'
On Aug 24, 2007, at 4:09 AM, Alban Hertroys wrote:
I'm not entirely sure what makes multi-threading be advantageous on a
specific operating system, but I think FreeBSD should be added to that
list as well... They've been benchmarking their threading support
using
multi-threading in MySQL (n
On Aug 25, 2007, at 1:34 AM, Benjamin Arai wrote:
There has to be another way to do incremental indexing without
losing that much performance.
This is the killer feature that prevents us from using the tsearch2
full text indexer on postgres. we're investigating making a foreign
table f
On Aug 25, 2007, at 8:12 AM, Phoenix Kiula wrote:
The sentence that caught my attention is "Nokia, Alcatel and Nortel
are all building real-time network nodes on top of MySQL Cluster."
My experiences with MySQL so far have been less than exhilarating
(only tried it for our web stuff, which is
On Aug 27, 2007, at 11:04 AM, Andrew Sullivan wrote:
It was a way to scale many small systems for certain kinds of
workloads. My impression is that in most cases, it's a SQL-ish
solution to a problem where someone decided to use the SQL nail
because that's the hammer they had. I can think of
On Aug 30, 2007, at 4:03 AM, Ow Mun Heng wrote:
2. how do I perform a list of SQL using transactions. eg: like above,
but wrap it into a transaction.
assuming $dbh is your open handle to the database via DBI, then you
do something like this:
$dbh->begin_work() or die $dbh->errstr;
my $sth = $dbh->prepare('...');   # your statement here
$sth->execute() or die $dbh->errstr;
$dbh->commit() or die $dbh->errstr;
On Sep 12, 2007, at 7:32 PM, Andrew Hammond wrote:
Does anyone know where I could find a tool which allows importing
schema information from a postgres database into visio? The boss
guys want some pretty pictures...
See SQLFairy. it can generate pretty pictures directly from the
schemas
On Sep 18, 2007, at 1:14 AM, Joshua D. Drake wrote:
Q: To get rid of index bloat, is a VACUUM ANALYZE enough? Or must I
reindex/cluster indexes?
If you overrun your max_fsm_pages, no:
else yes;
my algorithm is: if (true) then yes;
my FSM is way bigger than I ever use (vacuum never report
On Sep 24, 2007, at 12:00 PM, Phoenix Kiula wrote:
I feel your pain. But I seem to have (mostly) solved my problem in
three ways:
My particular usage pattern (add data continuously, purge *some* of
the data once per week or every other week. The purge is what seems
to kill it. Last tim
On Sep 28, 2007, at 5:09 AM, Tom Allison wrote:
I know reiserfs does better performance wise, but there's no point
in going fast if you can't steer.
I recently had to replace 16 Western Digital 10kRPM SATA drives with
Hitachi 7.2kRPM drives because the WD drives kept randomly (and
falsel
On Sep 28, 2007, at 9:07 AM, Ottavio Campana wrote:
But why does pg_dump not already export data such that previous
tables do not depend on successive ones?
Because you can't always sort your tables that way. The restore
procedure is responsible for either sorting or disabling the FK
On Oct 5, 2007, at 9:10 AM, Kenneth Downs wrote:
I also found it very hard to pin down the penalty of the trigger,
but came up with rough figures of 30-50% overhead. The complexity
of the trigger did not matter.
in which language did you write your triggers?
On Nov 1, 2007, at 8:51 PM, Ow Mun Heng wrote:
Another question is, based on what I've read in the archives (in my
laptop.. No-Inet conn @ work) Since I've overran my max_FSM, I'm
basically screwed and will have to do a vacuum verbose FULL on the
entire DB. Crap..
I've seen this repeated many
On Nov 12, 2007, at 12:29 PM, Sam Mason wrote:
You only need a 64bit address space when each process wants to see
more
than ~3GB of RAM.
And how exactly do you get that on a 32-bit CPU? Even with PAE
(shudders from memories of expanded/extended RAM in the DOS days), you
still have a 32
On Nov 12, 2007, at 12:01 PM, Greg Smith wrote:
Not the Mac OS BSD. Last time I looked into this OS X was still
dramatically slower than Linux on things like process creation.
On MacOS X, that's the Mach kernel doing process creation, not
anything BSD-ish at all. The BSD flavor of MacOS
On Nov 12, 2007, at 8:55 PM, Steve Manes wrote:
Steve Manes wrote:
What's the portupgrade process in FreeBSD??
(Fixed. The answer is to use pkg_delete -f on the old package to
force the delete)
more elegantly,
portupgrade -Rrv -f -o databases/postgresql82-client postgresql-client
but y
On Nov 13, 2007, at 1:18 PM, [EMAIL PROTECTED] wrote:
Yep, this is a fork without exec. And the child processes often aren't
even doing any database access -- the database connection's opened and
held, then a child is forked off, and the child 'helpfully' closes the
handle during the child's gl
On Nov 20, 2007, at 1:04 PM, Josh Harrison wrote:
I ran vacuum full on this table already. I haven't re-indexed it. But
this will not affect the table size...right...since indexes are stored
separately?
Yes, but your indexes are probably bloated at this point, so to reduce
the space they
On Nov 24, 2007, at 6:18 PM, Laurent CARON wrote:
Question:
I'd like to know if it is possible (and wise) to just keep the
/var/lib/postgres.. directories from the old 32Bit server to use
on
the 64Bit version.
This is just as a personal interest since I can also just dump and
restore th
On Nov 26, 2007, at 10:14 AM, Jeff Larsen wrote:
Yes, but I'd like something better than "near real time" as the above
page describes. Or maybe someone could clarify that. Besides,
EnterpriseDB does not save me enough money. In my current commercial
DB, if a transaction is committed on the m
tions.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.MailerMailer, LLC Rockville, MD
http://www.MailerMailer.com/ +1-301-869-4449 x806
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
On Nov 28, 2007, at 8:18 AM, Richard Huxton wrote:
I can read that I can create a PostgreSQL DB on the RAMDisk
partition, but I'm wondering if it is possible to create
one DB with two schemas in two different memory location (RAM and
flash)?
See the manuals for "tablespaces".
but postgres
On Nov 28, 2007, at 8:50 AM, Glyn Astill wrote:
Hi people,
When I try to compile Slony 1.2 I get the following error:
parser.y:1090:18: error: scan.c: No such file or directory
make[2]: *** [parser.o] Error 1
make[2]: Leaving directory `/tmp/slony1-1.2.12/src/slony_logshipper'
make[1]: *** [
On Nov 28, 2007, at 11:06 AM, Matt Doughty wrote:
Is there a way of selecting all fields except for one in particular?
I'd like to create a query that says something like:
select * except fieldx
For best practices, you should never use select * in your queries.
You will inevitably end
On Nov 28, 2007, at 11:17 AM, Glyn Astill wrote:
I've already tried removing and re-installing bison, but I shall try
again as you suggest.
I recommended uninstalling bison, not re-installing it.
---(end of broadcast)---
TIP 6: explain analyz
On Dec 8, 2007, at 9:21 AM, Geoffrey wrote:
I am quite new to Slony as well, but one of the first requirements
the docs state is:
Thus, examples of cases where Slony-I probably won't work out well
would include:
* Sites where connectivity is really "flakey"
* Replication to nodes t
On Dec 7, 2007, at 11:42 AM, Colin Wetherbee wrote:
You can do this with a conditional. Something like the following
should work.
IF
NOT (a query matching your data returns rows)
THEN
INSERT (your new data)
There exists a race condition here unless you've locked your tables.
--
On Dec 10, 2007, at 1:21 PM, Erik Jones wrote:
You beat me to the punch on this one. I was wanting to use modulo
operations for bin style partitioning as well, but this makes things
pretty awkward as well as unintuitive. So, to the postgres gurus:
What are the limitations of check cons
please don't hijack old threads ("partitioned table query question" in
this case) and change the subject line to start your new question. it
messes up threaded mail readers.
thanks.
On Dec 10, 2007, at 3:00 PM, Nathan Wilhelmi wrote:
Hello - Does anyone happen to have a SQL script or funct
On Dec 10, 2007, at 5:04 PM, Colin Wetherbee wrote:
For what it's worth, the real algorithm would be as follows. I
hadn't had enough coffee yet, and I forgot the UPDATE bit.
IF
(a query matching your old data returns rows)
THEN
UPDATE with your new data
ELSE
INSERT your new data
Still
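The race Colin describes can be illustrated outside the database; here is a minimal Python sketch (all names hypothetical) where a lock stands in for the table lock that makes the test-and-set atomic:

```python
import threading

table = {}                     # stand-in for the target table, keyed by PK
table_lock = threading.Lock()  # plays the role of an explicit table lock

def upsert(pk, data):
    """UPDATE if the row exists, else INSERT.

    Without the lock, two concurrent callers could both see 'no row'
    and both take the INSERT path -- the race noted above."""
    with table_lock:
        existed = pk in table
        table[pk] = data
        return "UPDATE" if existed else "INSERT"
```

The first call for a key takes the INSERT path; any later call for the same key takes the UPDATE path.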
On Dec 11, 2007, at 10:08 AM, Erik Jones wrote:
b.) precomputing the bin and directly accessing the child table will
be the only options we have for now.
This is where I'm headed. I have only one or two queries that don't
specify the partitioned ID, and those need a full table scan anyh
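Precomputing the bin amounts to deriving the child-table name from the key in the application rather than letting the planner prove the CHECK constraint. A sketch, with the partition count and naming scheme purely hypothetical:

```python
N_PARTITIONS = 4  # assumed number of child tables

def child_table(part_id: int) -> str:
    """Route a row to its child table by precomputing the modulo bin,
    then query/insert against that table directly."""
    return f"parent_part_{part_id % N_PARTITIONS}"
```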
On Dec 11, 2007, at 10:44 AM, Gregory Stark wrote:
The problem Tom's tried to explain is that the function may or may not
preserve the bin. So for example if you wanted to bin based on the
final digit of a number, so you had a constraint like
I, along with at least Erik, was thinki
On Dec 21, 2007, at 11:09 AM, Martijn van Oosterhout wrote:
The usual answer is use slony. You can use it to replicate the 8.0
server onto an 8.1 server. This may take weeks/months/years/whatever
to
synchronise. When the slave is up to date, you pull the plug on the
8.0
server and get ever
On Jan 23, 2008, at 10:26 AM, Tom Lane wrote:
Reading the release notes is good, but you really really should test
the
application(s) against a test 8.1 installation before you go live ...
be sure to run *every* query your system uses through 8.1. the most
common problems you will run in
On Jan 29, 2008, at 7:24 AM, Glyn Astill wrote:
I'm trying to run a perl script that uses DBI (Slony's
psql_replication_check.pl to be precise) and I'm getting the error:
Can't locate Pg.pm in @INC
It doesn't use DBI, it uses Pg. At some point I posted patches to
convert it to DBI and DBD
On Jan 31, 2008, at 4:28 AM, Aaron Glenn wrote:
CARP *and* pfsync.
this late at night off the top of my head I can't see any blatantly
obvious reason this wouldn't work (with at least pgpool that is, dunno
about your data)
we use CARP to balance and failover some webserver pairs. We also use
On Jan 31, 2008, at 10:21 AM, Alvaro Herrera wrote:
Glyn Astill wrote:
I'm not piping it to a file, postgres is managing the logs. Is there
any way to manage the permissions, or do I just need to create a
script to change the permissions?
I think you should be able to chmod the files after
On Jan 31, 2008, at 10:14 AM, Erik Jones wrote:
That's an interesting idea. Is there a general audience/
participation wiki for Postgres? I know the developers have one,
but a user-oriented sister wiki would probably be a good way to get
lots of different people involved.
I'm of the opi
On Feb 4, 2008, at 10:00 AM, Wes wrote:
Just a follow-up on this... The REINDEX took about 2 1/2 days. I
didn't
gain much disk space back - a full backup takes just as long as
before, but
the vacuum time dropped from 30 hours to 3 hours.
what you need to do is compare the relpages from
On Feb 5, 2008, at 12:29 PM, Tony Caduto wrote:
So this Stonebraker guy is the Postgres Architect?
That doesn't imply Postgres == PostgreSQL :-)
The original Postgres wasn't even SQL, was it?
On Feb 4, 2008, at 11:31 AM, Joshua D. Drake wrote:
I don't agree in the least, I was actually going to suggest we add a
new one for relational design questions. I like many lists that are
contextually specific. IMO, general should be removed for example.
I think this makes sense for a web-b
On Feb 9, 2008, at 12:20 PM, Ken Johanson wrote:
But given the recent and dramatic example of 8.3's on-by-default
stricter typing in functions (now not-autocasting), I worry that
kind of change could happen in every minor version (8.4 etc).
You need to *know* your software if you're using
On Feb 11, 2008, at 10:37 AM, Mario Lopez wrote:
The problem arises with the second type of queries, where there are
no possible partitions and the search keywords are not known. I
have tried making indexes on the letter it ends with, or indexes
that specify that it contains the lette
On Feb 20, 2008, at 2:12 PM, Douglas McNaught wrote:
Alternatively, is there a better way to streamline the duplication
of a
database?
How about:
CREATE DATABASE newdb TEMPLATE olddb;
The template DB has to have no open connections during the entire copy
process, so it is not always
On Mar 11, 2008, at 2:50 PM, A. Kretschmer wrote:
i.e ... WHERE pt.created_date >= '2008-01-21'
You can't compare a date or timestamp to a varchar or text. For your
example, cast the date-string to a real date like:
Since which version of Pg?
Queries like the above have worked for me from
On Mar 12, 2008, at 3:19 PM, Tom Lane wrote:
- restore dump, ignoring "object already exists" errors
Couldn't one use the dump listing feature of pg_restore and comment
out the extensions when restoring? Not likely to be a big improvement
over "ignore" errors :-)
On Mar 13, 2008, at 7:50 AM, Glyn Astill wrote:
I'm looking at switching out the perc5i (lsi megaraid) cards from our
Dell 2950s for something else as they're crap at raid 10.
Use an actual LSI branded card instead of the Dell "improved" version.
On Mar 17, 2008, at 10:58 PM, Tyler, Mark wrote:
I suggest rethinking your dislike of NOTIFY.
I have thought very hard about using NOTIFY for this but it has two
large problems (from my point of view). The first is that it forces me
Wait a while and you will learn to detest Spread, too.
-
On Mar 25, 2008, at 4:28 PM, Jeff Davis wrote:
This obviously does not work in real time, but it may be useful. It
does
not require a lot of additional space to do this because of the ZFS
copy-on-write implementation.
But what benefit does it give you if you're pounding on the same set
of
On Apr 16, 2008, at 8:47 AM, Julio Cesar Sánchez González wrote:
From what I've read, Slony-I does only master-slave replication and
Slony-II is not being actively developed. Is this right? Are there
any
viable master-master replication tools for PostgreSQL. (They could be
commercial/paid for
On Apr 28, 2008, at 6:50 PM, [EMAIL PROTECTED] wrote:
I want to ask if there is something like nomount state or close
database state
in which I can acces postgresql to drop database or to do some other
stuff.
change the permissions on the DB so nobody can log in. you really
should find
Here's how you do it on restore step from a pg_dump in -Fc format.
pg_restore -l dumpfile > list
edit the file "list" to remove references to slony objects
pg_restore -L list dumpfile
On Jul 11, 2008, at 4:24 AM, Richard Huxton wrote:
If you just want to see if a lock has been taken (e.g. SELECT FOR
UPDATE) then that shows in pg_locks. If you want details on the
actual rows involved, then you probably want "pgrowlocks" mentioned
in Appendix F. Additional Supplied Module
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: khera@kciLink.com Rockville, MD +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/
eably slower than the 7.4 with identically (translated to 8.0
style) configs.
Any ideas on where to poke around for the pipe problems?
: watchdog timeout -- resetting
Mar 22 10:28:24 d01 kernel: bge0: watchdog timeout -- resetting
now, the question is: is it OS or is it hardware :-(
at happens. I know the clients are not having
problems since they've been stable for a long time.
I'm guessing nobody else is seeing arbitrary connection drops in 8.0.1,
particularly on FreeBSD 5.4-PRERELEASE :-(
Funny thing is that the pg_dump worked yesterday...
the definition.
there are some tricky bits. check the archives for either this list or
the performance list for what I did to mark my reference checks
deferrable. it was within the last few months (no more than 6).
see a bunch of socket not open errors for some reporting
clients with no corresponding ethernet timeouts, so the log
information is either conflicting or incomplete.
the entire OS and postgres *without* the gcc
Opteron optmizations, and it seems much more stable so far (24 hours)
and somewhat faster. My daily reports went faster too (as fast as
with the Pg 7.4 box).
So I'm chalking this one up to bad compiler.
u in advance!
I use Postgres 7.4.7 on both FreeBSD 4.x and 5.3+ on i386 and amd64
platforms with good results under heavy use. I'm evaluating Postgres
8.0 still.
On Mar 30, 2005, at 4:47 PM, Alvaro Herrera wrote:
Now this can't be applied right away because it's easy to run "out of
memory" (shared memory for the lock table). Say, a delete or update
that touches 1 tuples does not work. I'm currently working on a
proposal to allow the lock table to spil
On Apr 1, 2005, at 3:00 PM, Peter Childs wrote:
Either this message is a joke, Or you have all gone mad?
I see a postpostgres on the horizon.
no, dave broke tradition and posted an entirely factual newsletter on
april 1. I think he should lose his job for that. :-)
---
ger, and any changes needed for that will be applicable to gcc4 on
any other platform too. But that's just a SWAG.
On May 26, 2005, at 11:55 AM, Manuel García wrote:
Hello, somebody knows if it is possible to catch all the sentences applied to one table, using triggers and a function in C maybe; that's because I need to create one log table with all the sentences. Once I have that I am going to use all the senten
On May 26, 2005, at 2:41 PM, David Parker wrote:
But I'm wondering - shouldn't that be part of normal server startup,
cleaning out the pg_listener table? Or has this been addressed in
8.X.?
Or is there a reason this isn't a good idea?
Try slony 1.0.5, which fixed *many* issues and bugs.
our .profile executes the stty command.
Don't do that. Why do you need stuff from your .profile anyhow? And why would you drop the output to /dev/null -- you'll never know if your dump fails! Why bother making one then?
rs in source code? h
second query does an update or
just a select. it should be clarified.
e multiple files and burn those to the DVD. To restore, you
"cat my.dump.split.?? | pg_restore" with appropriate options to
pg_restore.
My ultimate fix was to start burning and reading the DVD's on my
MacOS desktop instead, which can read/write these large files just
fine :-)
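The split-and-reassemble step looks roughly like this (file names hypothetical; a throwaway file stands in for the `pg_dump -Fc` output so the pipeline itself can be checked):

```shell
# Stand-in for a large custom-format dump (really: pg_dump -Fc mydb > my.dump)
head -c 100000 /dev/zero > my.dump

# Split into chunks small enough for the DVD filesystem's file-size limit
# (for real 2GB-limited media you'd use something like -b 1000m):
split -b 16384 my.dump my.dump.split.

# Later, reassemble the stream on the fly and feed it to pg_restore;
# here we just verify the reassembled stream matches the original:
cat my.dump.split.?? | cmp - my.dump && echo "reassembled OK"
```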
n any case, I urge you to derive the formulas yourself from basic
research so you *know* you're getting what you think you're getting.
bounding box inside which you run your
distance computation.
Putting it together is left as an exercise for the reader (hint: just
AND your pieces together...)
check
record integrity somehow). Same advice as you get for verifying your
system tape backups...
On Jun 27, 2005, at 8:42 PM, Bruno Wolff III wrote:
Google is your friend. There are places that sell very well kept
zipcode databases for under $50.
The US government gives it away for free. Look for "tiger".
That is stale data.
On Jun 29, 2005, at 9:01 AM, Sven Willenberger wrote:
Unix user root (and any psql superuser) the vacuum runs fine. It is
when
the unix user is non-root (e.g. su -l pgsql -c "vacuumdb -a -z") that
this memory error occurs. All users use the "default" class for
login.conf purposes which has not
On Jun 30, 2005, at 10:39 AM, Ajay Dalvi wrote:
But while installing, during the configuration step I am getting an error
error: Your version of libpq doesn't have PQunescapeBytea
this means that your version of PostgreSQL is lower than 7.3
and thus not supported by Slony-I.
Though i a
ferably the postgres performance list).
correspond with your software release numbers... or not. They can
be separate, especially once you get more stable and have more
software updates than schema updates.
Or you could do something like Pg and require changes only for major
version number releases. :-)
ustomed to Oracle, you are probably expecting an ACID
database, which rules out MySQL too).
asy way to go about this? Am I about to write my first
server side function for postgresql?
create your unique index using the lower() function.
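In SQL that is a functional (expression) index; the table and column names here are made up for illustration:

```sql
-- Enforce case-insensitive uniqueness on a hypothetical users.email:
CREATE UNIQUE INDEX users_email_lower_idx ON users (lower(email));

-- Queries must then compare with lower() so the index can be used:
-- SELECT * FROM users WHERE lower(email) = lower('Foo@Example.com');
```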
and don't steal threads to start a new one (ie, replying then just
changing the subject)
es...
It does work. I just installed Pg 8.0.3 on my workstation using
darwin ports (which really only just sets some funky installation
location paths and does a standard build).
d better not be windows...)
ake a purchase, please post your investigations to
the list - I for one would be interested.
But don't put important data on it since it doesn't do ECC RAM
On Jul 27, 2005, at 12:09 PM, Scott Marlowe wrote:
On Wed, 2005-07-27 at 10:56, Vivek Khera wrote:
But don't put important data on it since it doesn't do ECC RAM
Considering the small incremental cost of ECC ram, it's hard to
believe
someone would build one of t
hared pool
connections over mod_perl and Apache::DBI. I haven't satisfied
myself yet that the timeout will be unset when the next connection
uses the DB...
while. If an individual row results in > 10k related rows being
deleted, I pause for a
smaller amount of time.
This keeps everything moving along, and *nobody* notices. So what if
it takes 3 days to finish...
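The pacing logic reads roughly like this (a sketch; the batch-delete callback and the thresholds are invented for illustration):

```python
import time

def purge(delete_batch, big=10_000, long_pause=1.0, short_pause=0.1):
    """Delete in batches, sleeping between them so concurrent queries
    keep moving; slow overall throughput is the point -- nobody notices.

    delete_batch() is assumed to remove one chunk and return the number
    of rows deleted, or 0 when nothing is left."""
    total = 0
    while (n := delete_batch()) > 0:
        total += n
        # Pause length is chosen per batch, as the post describes.
        time.sleep(short_pause if n > big else long_pause)
    return total
```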
still the one
I want, I check that. If the row is not updated (ie, count 0
returned) then I do a standard update based just on the user_id which
is the PK.
When you add this up over millions of rows, it makes a difference to
bypass the PK index lookup every time.