we had a somewhat different idea here in the past: what if we had a procedure /
method to allow people to save the list of current buffers / cached blocks
to disk (sorted). we could then reload this "cache profile" on
startup in the background, or people could load a certain cache content
On Mar 9, 2012, at 2:34 PM, Robert Haas wrote:
> On Fri, Mar 9, 2012 at 5:42 AM, Hans-Jürgen Schönig
> wrote:
>> we had a somewhat different idea here in the past: what if we had a procedure /
>> method to allow people to save the list of current buffers / cached blocks
>>
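A minimal sketch of that idea as it can be done with today's contrib modules
pg_buffercache and pg_prewarm (the cache_profile table name is made up for
illustration):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    -- save the current cache contents, sorted, as a "cache profile"
    CREATE TABLE cache_profile AS
    SELECT pg_filenode_relation(b.reltablespace, b.relfilenode) AS rel,
           b.relblocknumber
    FROM pg_buffercache b
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    ORDER BY 1, 2;

    -- after a restart, reload the profile block by block
    SELECT pg_prewarm(rel, 'buffer', 'main', relblocknumber, relblocknumber)
    FROM cache_profile
    WHERE rel IS NOT NULL;

The autoprewarm worker that later shipped with pg_prewarm (PostgreSQL 11)
automates exactly this dump-and-reload cycle.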
hello,
does the problem show up on 2% of all boxes after 2 weeks or so?
we had a similar problem on UNIX as well. it even materialized on 100 identical
boxes (on 2% of them). it pops up randomly and never stops …
i checked some code paths. some of those messages are direct output via stderr
(
hello ...
i have come across an interesting corner case this morning and i am not sure
if it is worth treating this as a bug or as just "bad luck".
imagine creating a directory along with a tablespace ...
hans-jurgen-schonigs-macbook:html hs$ mkdir /tmp/x
hans-jurgen-schonigs-macbook:html hs$ psql te
On Jul 1, 2012, at 5:44 PM, Magnus Hagander wrote:
> On Sun, Jul 1, 2012 at 1:02 PM, Boszormenyi Zoltan wrote:
>> Hi,
>>
>> attached is a patch that does $SUBJECT.
>>
>> It's a usability enhancement, to take a backup, write
>> a minimalistic recovery.conf and start the streaming
>> standby in o
hello,
some time ago oleg and teodor have posted a PostgreSQL version of bloom filters.
as this appears to be a useful thing for many people i have ported this
prototype to PostgreSQL 9.2.
it seems to work as expected on OS X and Linux.
as it is a contrib module it lacks xlog support.
maybe some
On Sep 2, 2010, at 1:20 AM, Robert Haas wrote:
> On Sep 1, 2010, at 10:21 AM, Greg Stark wrote:
>> For what it's worth I disagree with Tom. I think this is a situation
>> where we need *both* types of solution. Ideally we will be able to use
>> a plain Append node for cases where we know the rel
On Sep 8, 2010, at 3:54 PM, Robert Haas wrote:
> On Tue, Sep 7, 2010 at 2:14 PM, Boszormenyi Zoltan wrote:
>> Hi,
>>
>> Robert Haas írta:
>>> 2010/9/3 PostgreSQL - Hans-Jürgen Schönig :
>>>
>>>> i tried this one with 5000 unindexed tables (just one
function call goes down to
basically zero.
i hope this is something which is useful to some folks out there.
many thanks,
hans
canon-pathkeys-as-rbtree-3-ctxdiff.patch
Description: Binary data
On Sep 8, 2010, at 4:18 PM, Stephen Frost wrote:
> * Hans-Jürgen Schö
On Sep 8, 2010, at 4:57 PM, Stephen Frost wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
>> Neat. Have you checked what effect this has on memory consumption?
>>
>> Also, don't forget to add it to
>> https://commitfest.postgresql.org/action/commitfest_view/open
>
> Would be good to have
On Sep 23, 2010, at 3:29 PM, Robert Haas wrote:
> On Tue, Sep 21, 2010 at 12:29 AM, David Fetter wrote:
>> On Mon, Sep 20, 2010 at 10:57:00PM -0400, Robert Haas wrote:
>>> 2010/9/3 Hans-Jürgen Schönig :
>>>> On Sep 2, 2010, at 1:20 AM, Robert Haas wrote:
>>>
On Sep 28, 2010, at 10:15 AM, Markus Wanner wrote:
> Hi,
>
> On 09/28/2010 07:24 AM, Li Jie wrote:
>> I'm interested in this parallel project,
>> http://wiki.postgresql.org/wiki/Parallel_Query_Execution
>>
>> But I can't find any discussion and current progress in the website, it
>> seems to sto
hello all ...
it seems we have found a fairly nasty problem.
imagine a long transaction which piles up XX.XXX locks (count on pg_locks)
inside the same transaction by doing some tasty savepoints, with hold cursors
and so on.
in this case we see that a normal count issued in a second database
On Oct 4, 2010, at 1:23 PM, Heikki Linnakangas wrote:
> On 04.10.2010 14:02, Hans-Jürgen Schönig wrote:
>> it seems we have found a fairly nasty problem.
>> imagine a long transaction which piles up XX.XXX locks (count on
>> pg_locks) inside the same transaction by doing
On Jan 8, 2014, at 9:27 PM, Bruce Momjian wrote:
> On Wed, Jan 8, 2014 at 05:39:23PM +, Simon Riggs wrote:
>> On 8 January 2014 09:07, Heikki Linnakangas wrote:
>>
>>> I'm going to say right off the bat that I think the whole notion to
>>> automatically disable synchronous replication when
hello ...
a view is already nice but i think it is still too narrow.
the problem is: you don't want a view for every potential join.
in addition to that - ideally there is not much left of a view when it comes to
checking for costs.
so, i think, this is not the kind of approach leading to total
On Jul 15, 2010, at 5:20 PM, Simon Riggs wrote:
> On Thu, 2010-07-15 at 11:05 -0400, Tom Lane wrote:
>> Simon Riggs writes:
>>> The biggest turn off that most people experience when using PostgreSQL
>>> is that psql does not support memorable commands.
>>
>>> I would like to implement the follow
On Jul 15, 2010, at 6:20 PM, Thom Brown wrote:
> On 15 July 2010 17:16, Marc G. Fournier wrote:
>> On Thu, 15 Jul 2010, Thom Brown wrote:
>>
>>> On 15 July 2010 17:07, Marc G. Fournier wrote:
On Thu, 15 Jul 2010, Thom Brown wrote:
> If it's only a psql problem, why impleme
On Jul 15, 2010, at 6:43 PM, Magnus Hagander wrote:
> On Thu, Jul 15, 2010 at 18:35, Simon Riggs wrote:
>> On Thu, 2010-07-15 at 17:38 +0200, Magnus Hagander wrote:
>>
>>> Is there an actual common use-case for having these commands available
>>> for *non-psql* interfaces?
>>
>> There are many
hello everybody,
i have just come across some issue which has been bugging me for a while.
consider:
SELECT * FROM foo ORDER BY bar;
if we have an index on bar, we can nicely optimize away the sort step by
consulting the index - a btree will return sorted output.
under normal circumstan
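For illustration, with an index on bar the explicit sort step disappears from
the plan (foo and bar are just the example names from above):

    CREATE INDEX idx_bar ON foo (bar);
    EXPLAIN SELECT * FROM foo ORDER BY bar;
    -- expected plan shape:
    --   Index Scan using idx_bar on foo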
to solve it no matter what it takes.
many thanks,
hans
On Jul 26, 2010, at 1:14 AM, Robert Haas wrote:
> On Sun, Jul 25, 2010 at 6:40 PM, Greg Stark wrote:
>> 2010/7/25 Robert Haas :
>>> 2010/7/25 PostgreSQL - Hans-Jürgen Schönig :
>>>>
hello all ...
i am bugged with a small issue which is basically like this ...
test=# create table t_test as select x, x % 5 as y from generate_series(1,
100) AS x;
SELECT
test=# create index idx_a on t_test (x) ;
CREATE INDEX
test=# ANALYZE ;
ANALYZE
test=# explain analyze select * from
On Nov 30, 2009, at 10:32 AM, Stefan Kaltenbrunner wrote:
Andres Freund wrote:
On Monday 30 November 2009 03:57:11 Itagaki Takahiro wrote:
Boszormenyi Zoltan wrote:
we tried to discuss on a lower level what should be needed
for a partial replication based on streaming replication.
We need t
Just a side note: in addition to its use for partial replication, this
might have potential for performance-prioritizing databases or
tablespaces.
hello ...
this is an absolutely non-starter. the WAL is designed to be "hyper
ordered" and hyper critical. once you fuck up order you will
hello ...
i picked up csaba nagy's idea and implemented a very simple yet very
useful extension.
i introduced a GUC called statement_cost_limit which can be used to
error out if a statement is expected to be too expensive.
the advantage over statement_timeout is that we are actually able to
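A sketch of how the proposed GUC would behave; statement_cost_limit comes from
the patch and was never committed to core, and the numbers are invented:

    SET statement_cost_limit = 100000;
    SELECT * FROM t_big a, t_big b;   -- accidental cross join
    -- would error out before execution as soon as the planner's
    -- cost estimate exceeds the limit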
On Aug 2, 2008, at 8:38 PM, Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
Hans-Jürgen Schönig wrote:
i introduced a GUC called statement_cost_limit which can be used to
error out if a statement is expected to be too expensive.
You clearly have far more faith in th
hello ...
I still support it. Regrettably, many SQL developers introduce product
joins and other unintentional errors. Why let problem queries through?
i think the killer is that we don't have to wait until the query dies
with a statement_timeout.
it is way more elegant to kill things be
this entire thing is not about cartesian products at all.
it is about kicking out "expensive" queries before they even start to
eat up tons of CPU.
imagine a user asking for "give me all phone calls in the US within the
past 10 years". you could kill the guy instantly because you know that
*snip*
Judging from that, the toasted table
cleanup may be part of ALTER TABLE DROP COLUMN.
That would only help if you were dropping the last potentially-toastable
column of a table. And implementing it would require introducing weird
corner cases into the tuple toaster, because it m
On Sep 22, 2008, at 9:46 AM, Simon Riggs wrote:
On Sun, 2008-09-21 at 12:05 -0400, Tom Lane wrote:
... and it goes on to point out how to force immediate space reclamation
if you need that. These statements apply independently of whether any
particular value is toasted or not.
The reas
On Sep 22, 2008, at 12:02 PM, Simon Riggs wrote:
On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
-As Greg Stark suggested, the larger the spindle count the larger the
speedup, and the larger the prefetch size that might make sense. His
suggestion to model the user GUC as "effective_spi
Bruce Momjian wrote:
Greg Stark wrote:
I couldn't get async I/O to work on Linux. That is, it "worked" but
performed the same as reading one block at a time. On Solaris the
situation is reversed.
In what way is fadvise a kludge?
I think he is saying AIO gives us more flexibility, b
On Nov 13, 2009, at 8:06 AM, Michael Meskes wrote:
On Thu, Nov 12, 2009 at 03:07:27PM -0500, Robert Haas wrote:
If you want to submit patches in a series like this one, they need to be
considered standalone, I think. The Linux kernel devs work differently
than us here.
Zoltan broke them
hi ...
i think the easiest way to do this is to simply add a mechanism to
functions which allows a function to "stream" data through.
it would basically mean losing join support as you cannot "read data
again" in a way which is good enough for joining with the
function providing
why not just stream it in via set-returning functions and make sure
that we can mark a set returning function as "STREAMABLE" or so (to
prevent joins, whatever).
it is the easiest way to get it right and it helps in many other cases.
i think that the storage manager is definitely the wrong pla
Joshua Tolley wrote:
Way back in this thread[1] one of the arguments against allowing
some version of CREATE SYNONYM was that we couldn't create a synonym for
an object in a remote database. Will the SQL/MED work make this sort of
thing a possibility? I realize since it's not standard anyway, the
hello tom ...
the reason for SELECT FOR UPDATE is very simple:
this is the typical lock obtained by basically every business
application if written properly (updating a product, whatever).
the problem with NOWAIT basically is that if a small transaction holds
a lock for a subsecond, you wil
I guess the website is really good. The only thing I'd do is to add a
section listing the core features of PostgreSQL - I think this could be
an important point.
In my opinion MySQL is not a competitor and we should not benchmark
PostgreSQL and compare it with MySQL. Those features which are r
I have tried to compile PostgreSQL with the Intel C Compiler 6.0 for
Linux. During this process some errors occurred which I have attached to
this email. I have compiled the sources using:
[hs@duron postgresql-7.2.1]$ cat compile.sh
#!/bin/sh
CC=/usr/local/intel_compiler/compiler60/ia32/bin/ic
but people just don't look for information.
All in all I think that there are ways to find people contributing
financially to the project.
Regards,
Hans-Jürgen Schönig
Bruce Momjian wrote:
>I think we are going to see more company-funded developers working on
>PostgreSQL.
Linux RedHat 7.3 (ext3, kernel 2.4.18-3)
512MB Ram
AMD Athlon 500
IBM 120GB IDE
[hs@backup hs]$ ./randcost.sh /data/db/
Collecting sizing information ...
Running random access timing test ...
Running sequential access timing test ...
random_page_cost = 0.901961
[hs@backup hs]$ ./randcost.sh
Christopher Kings-Lynne wrote:
>Assuming it's giving out correct information, there seems to be a lot of
>evidence for dropping the default random_page_cost to 1...
>
>Chris
>
>
Some time ago Joe Conway suggested a tool based on a genetic algorithm
which tries to find the best parameter settings
AMD Athlon 500
512MB Ram
IBM 120GB IDE
Tested with:
BLCKSZ=8192
TESTCYCLES=50
Result:
Collecting sizing information ...
Running random access timing test ...
Running sequential access timing test ...
Running null loop timing test ...
random test: 2541
sequential test: 2455
null timin
I have seen various benchmarks where XFS seems to perform best when it
comes to huge amounts of data and many files (due to balanced internal
b+ trees).
also, XFS seems to be VERY mature and very stable.
ext2/3 don't seem to be that fast in most of the benchmarks.
i did some testing with reiser
CREATE INDEX could use many CPUs.
Maybe this is worth thinking about because it will speed up huge
databases and enterprise level computing.
Best regards,
Hans-Jürgen Schönig
--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43
Bingo = great :).
The I/O problem seems to be solved :).
A table space concept would be at the top of the hitlist :).
The symlink version is not very comfortable and I think it would be a
real hack.
Also: If we had a clean table space concept it would be a real advantage.
In the first place it would be
Greg Copeland wrote:
>I wouldn't hold your breath for any form of threading. Since PostgreSQL
>is process based, you might consider having a pool of sort processes
>which address this but I doubt you'll get anywhere talking about threads
>here.
>
>Greg
>
>
>
I came across the problem yesterda
Threads are not the best solution when it comes to portability. I
prefer a process model as well.
My concern was that a process model might be a bit too slow for that but
if we had processes in memory this would be a wonderful thing.
Using it for small amounts of data is pretty useless - I totall
Threads are bad - I know ...
I like the idea of a pool of processes instead of threads - from my
point of view this would be useful.
I am planning to run some tests (GEQO, AIX, sorts) as soon as I have
time to do so (still too much work ahead before :( ...).
If I had time I'd love to do somethi
>
>
>Can anybody please tell me in detail. (Not just a pointer towards TODO items)
>
>1) What a table space supposed to offer?
>
They allow you to define a maximum amount of storage for a certain set
of data.
They help you to define the location of data.
They help you to define how much data can
>>>2) What a directory structure does not offer that table space does?
>>>
>>>
>>You need to use the command line in order to manage quotas - you might not
>>want that.
>>
>>
>
>Mount a directory on a partition. If the data exceeds that partition, there
>would be a disk error. Like table
>
>
Quotas are handled differently on every platform (if available).
>>>Yeah. But that's the sysadmin's responsibility, not the DBA's.
>>>
>>>
>>Maybe many people ARE the sysadmins of their PostgreSQL box ...
>>When developing a database with an open mind people should try to see
Jim Buttafuoco wrote:
>Is this NOT what I have been after for many months now. I dropped the
>tablespace/location idea before 7.2 because there
>didn't seem to be any interest. Please see my past emails for the SQL commands and
>on disk directory layout I have
>proposed. I have a working 7.2
I guess we had this discussion before but I have just gone through the
general list and I have encountered a problem I have had VERY often
before.
Sometimes the planner does not find the best way through a query.
Looking at the problem of query optimization it is pretty obvious that
things
First of all PREPARE/EXECUTE is a wonderful thing to speed up things
significantly.
I wonder if there is a way to store a parsed/rewritten/planned query in
a table so that it can be loaded again.
This might be useful when it comes to VERY complex queries (> 10 tables).
In many applications the si
thinking about prepared plans somewhere on disk.
Is there a way to transform ASCII -> plan?
Hans
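For reference, the session-level form that exists today (the orders table is
hypothetical):

    PREPARE get_order (int) AS
        SELECT * FROM orders WHERE order_id = $1;
    EXECUTE get_order(42);   -- reuses the prepared statement
    EXECUTE get_order(43);   -- no re-parse, no re-rewrite

What is asked about here is persisting such a plan across sessions, which
PREPARE does not do.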
Bruno Wolff III wrote:
On Wed, Oct 23, 2002 at 18:04:01 +0200,
Hans-Jürgen Schönig <[EMAIL PROTECTED]> wrote:
An example:
I have a join across 10 tables + 2 subselects across 4 tabl
The idea is not to have it across multiple backends and having it in
sync with the tables in the database. This is not the point.
My problem is that I have seen many performance critical applications
sending just a few complex queries to the server. The problem is: If you
have many queries wher
Greg Copeland wrote:
Could you use some form of connection proxy where the proxy is actually
keeping persistent connections but your application is making transient
connections to the proxy? I believe this would result in the desired
performance boost and behavior.
Now, the next obvious questio
I came across a quite interesting issue I don't really understand but
maybe Tom will know.
This happened rather accidentally.
I have a rather complex query which executes efficiently.
There is one interesting thing - let's have a look at the query:
SELECT t_struktur.id, t_text.code, COUNT(t_we
The standard approach to such a scenario would imho be to write stored procedures
for the complex queries (e.g. plpgsql) and use that from the client.
Maybe even eliminate a few ping pongs between client and server.
Andreas
Does it reduce the time taken by the planner?
Are server side SQL f
Is there going to be a way to use transactions inside transactions of
transactions?
In other words:
BEGIN;
BEGIN;
BEGIN;
BEGIN;
COMMIT;
COMMIT;
COMMIT;
COMMIT;
Is there a way to have some sort of recursive solution with every
transaction but the first one being a child
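PostgreSQL eventually answered this with savepoints (8.0), which provide
exactly that child-transaction behaviour inside one top-level transaction:

    BEGIN;
        SAVEPOINT s1;
            -- work that may need to be undone
        ROLLBACK TO SAVEPOINT s1;   -- undoes only the inner work
    COMMIT;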
What I'd like to have in future versions of PostgreSQL:
- replication, replication, ... (you have seen that before). i guess
most people would like to see that.
- a dblink like system for connecting to remote database systems
(not just PostgreSQL???)
something like CREATE REMOTE VI
Oops, there is something I have forgotten:
- "nicing" backends: this would be nice for administration tasks
- CREATE DATABASE ... WITH MAXSIZE (many providers would like to see
that; quotas are painful in this case - especially when porting the
database to a different or a second server)
Why are the features provided by PostGIS not added to the core of
PostgreSQL?
Hans
Hi ...
I just wanted to mention that an important college in Vienna already uses
PostgreSQL instead of Oracle which makes me really proud :).
We have done a training course this year and they use PostgreSQL instead
of free Oracle
I am happy that Austrian students are tortured with the things I ha
Personally I think that configuring things like that is definitely
beyond the scope of an average administrator.
However, there is one thing which would be useful for many applications:
It would be nice if there was a way to renice a connection. When it
comes to reporting it would be nice to ha
+ people measure postgresql by the speed of bulk imports
This is a good point. I completely agree. What we might need is
something called "SQL Loader" or so. This may sound funny and it doesn't
make technical sense but it is an OBVIOUS way of importing data. People
often forget to use tr
Brian Bruns wrote:
Problem is, nobody builds packages on windows anyway. They just all
download the binary a guy (usually literally "one guy") built. So, let's
just make sure that one guy has cygwin loaded on his machine and we'll be
all set.
Correct.
I wonder why we need a Windows port
This patch adds a note to the documentation describing why the
performance of min() and max() is slow when applied to the entire table,
and suggesting the simple workaround most experienced Pg users
eventually learn about (SELECT xyz ... ORDER BY xyz LIMIT 1).
Any suggestions on improving the word
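The workaround spelled out concretely (t and xyz are placeholder names):

    -- instead of: SELECT max(xyz) FROM t;
    SELECT xyz FROM t ORDER BY xyz DESC LIMIT 1;   -- can use an index on xyz
    -- for min(), order ascending instead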
Neil Conway wrote:
On Sun, 2003-02-02 at 03:55, Hans-Jürgen Schönig wrote:
If people want to count ALL rows of a table, the contrib stuff is pretty
useful. It seems to be transaction safe.
Interesting -- I didn't know about the contrib stuff. I'll update the
docs patch.
Che
But pgstattuple does do a sequential scan of the table. You avoid a lot
of the executor's tuple-pushing and plan-node-traversing machinery that
way, but the I/O requirement is going to be exactly the same.
I have run it several times so that I can be sure that everything is in
the cache.
I t
Gavin Sherry wrote:
Hi Chris,
On Wed, 12 Feb 2003, Christopher Kings-Lynne wrote:
Machine:
256MB RAM, FreeBSD 4.7, EIDE HDD, > 1 Ghz
Seems like a small amount of memory to be running memory-based tests with.
What about testing sort_mem as well. It would seem to me that there
would be no neg
Hi Stef
I had the same problem some time ago. I wanted to define a trigger
firing on CREATE TABLE (pg_class). This won't work because in most cases
system tables are not accessed using the "standard" way for processing
queries (parse -> rewrite -> plan -> execute). Therefore triggers are
not a
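What eventually covered this use case are event triggers (PostgreSQL 9.3 and
later); a minimal sketch in today's syntax, with made-up function and trigger
names:

    CREATE FUNCTION log_ddl() RETURNS event_trigger LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'DDL command executed: %', tg_tag;
    END $$;

    CREATE EVENT TRIGGER on_ddl ON ddl_command_end
        EXECUTE FUNCTION log_ddl();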
We have an application which syncs Access databases with PostgreSQL (I
know that this is VERY ugly). It is a simple script based Access
application. People wanted that application because they are familiar
with Microsoft stuff. When Access establishes a connection to PostgreSQL
everything is ju
Hardly.
SAP failed on the attempt to open source ADABAS even more miserably than
Borland with Interbase. Now it looks like they found someone who said
"we know open source, we can do that, oh pick me, me, me, pick mee!"
that's what i think as well.
by the way: did you see that MySQL AB has g
Srikanth M wrote:
Hi!
I am interested in adding a DATA-CUBE operator (Related to Data
Warehousing) to postgresql. I just want to know whether it is already
added to postgres or not. please reply to this mail if you have already
worked on it or are still working on it.
Bye
Srikanth
There is a problem which occurs from time to time and which is a bit
nasty in business environments.
When the shared memory is eaten up by some application such as Apache,
PostgreSQL will refuse to do what it should do because there is no
memory around. To many people this looks like a problem re
Bruce Momjian wrote:
We already pre-allocate all shared memory and resources on postmaster
start.
I guess we allocate memory when a backend starts, don't we?
Or do we allocate when the instance starts?
I have two explanations for the following behaviour:
a. a bug
b. not enough shared memory
WARN
Tom Lane wrote:
Peter Eisentraut <[EMAIL PROTECTED]> writes:
... Therefore I ask whether everyone agrees
that groups and roles are basically equivalent concepts (and perhaps that
we might in the future strive to make groups more compatible with the
roles as defined in the SQL standard). Or does a
> Yeah, I see it in the Mandrake kernel. But it's not in stock 2.4.19, so
> you can't assume everybody has it.
>
We had this problem on a recent version of good old Slackware.
I think we also had it on RedHat 8 or so.
Doing this kind of killing is definitely a bad habit. I thought it
had t
Hi there ...
I have spent some time working on an Oracle version of dblink. It works
quite nicely for me and I hope it does for others.
It already supports some basic features such as persistent connections
and fetching data. This is not a perfect piece of software and there is
a lot of room for
Currently I am able to do ...
SELECT dblink_oraconnect('scott/[EMAIL PROTECTED]');
SELECT * FROM dblink_ora('SELECT ename, sal FROM emp')
AS (ename text, sal text);
The problem is: If I wanted to do a
SELECT * FROM dblink_ora('SELECT * FROM emp');
It won't work because dblink_ora returns
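The usual workaround for a set-returning function declared to return the
generic record type is a column definition list spelling out the remote
result (column names below assume Oracle's demo emp table):

    SELECT * FROM dblink_ora('SELECT empno, ename, sal FROM emp')
        AS (empno text, ename text, sal text);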
A few days ago I have posted a pre-beta version of dblink_ora which is
supposed to solve some problems we had here at Cybertec (getting data
from an Oracle DB and merging it with PostgreSQL). I have implemented a
simple piece of code (more proof of concept than production).
Since I have not got t
Joe Conway wrote:
Bruce Momjian wrote:
OK, can you take ownership of it?
You mean a TODO entry? Sure, as long as Hans is OK with it.
Joe
I am ok with it.
The only problem I have at the moment is that I don't know how to build
properly and to check for the libs needed by Oracle.
The entire co
This week I have done some performance tuning at a customer's office. We
have beaten (demoralized) MS SQL and DB2 in serializable mode and DB2 in
any transaction isolation level :).
By the way: In case of very simple statements SERIALIZABLE is about 3
times faster than READ COMMITTED. I expecte
Hello Mika
Currently you can use dblink which queries a remote database and returns
the data as a table. This works nicely for PostgreSQL. There is also a
JDBC version in progress.
We have written an Oracle version of dblink.
SELECT * FROM dblink('connectstr', 'SELECT ...') AS some_type;
This w
I had a brief look at DB2's row_number function which seems to be pretty
useful.
What it does is:
test=# SELECT row_number(), relname FROM pg_class LIMIT 3;
 row_number |    relname
------------+----------------
          1 | pg_description
          2 | pg_group
          3 | pg_proc
(3 rows)
Th
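PostgreSQL later (8.4) got this as a standard SQL window function, so the
equivalent today is:

    SELECT row_number() OVER (ORDER BY relname), relname
    FROM pg_class
    LIMIT 3;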
I have just seen a nice feature provided by DB2 which seems very useful
to me.
When importing huge amounts of data (dozens of gigs) with the help of
COPY, errors might occur from time to time (especially when migrating).
The problem with COPY is that it stops after the first error. So if the
fir
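Much later, PostgreSQL 17 added an option for exactly this; a sketch with a
made-up table and file path:

    COPY big_table FROM '/tmp/data.csv'
        WITH (FORMAT csv, ON_ERROR ignore);   -- skip rows that fail input conversion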
Bruce Momjian wrote:
Joe, I can do the configure detection of the Oracle library needed for
/contrib. I don't think we follow the beta freeze as much for /contrib
stuff, but this particular /contrib is more integrated into the main
system than most. If you want to merge it in during the next mont
Bruce Momjian wrote:
Hans-Jürgen Schönig wrote:
This week I have done some performance tuning at a customer's office. We
have beaten (demoralized) MS SQL and DB2 in serializable mode and DB2 in
any transaction isolation level :).
By the way: In case of very simple statements SERIALIZABLE is abo
I think for that very reason (SQL-MED) we need to come to terms with
this issue. If/when connections to external data sources is in the
backend, you'll have those exact same dependencies. And in fact, we do
today: consider '--with-openssl' or '--with-tcl'.
I had always assumed we would need '--
Bruce Momjian wrote:
Hans, I am a little confused about what you are suggesting. Are you
suggesting a flag to the make of the contrib module rather than configure
tests?
I agree this is a killer feature for many people and would like to have
it in 7.4.
Under these circumstances I was thinking of i
They speak the backend protocol using a custom implementation. Why would
they do that?
It seems as if their implementation provides 20% more throughput. I
haven't benchmarked with libpq personally so I cannot tell you more.
Hans
--
Cybertec Geschwinde u Schoenig
Ludo-Hartmannplatz 1/14, A-1160
But the snapshots only are grabbing the xids from each proc, right?
Doesn't seem that would take very long.
If this is the bottleneck, maybe we need a shared proc lock.
I had a hard day testing and verifying this kind of stuff. We have run
several hundred benchmarks at the customer using many
There seems to be some disagreement on whether the Oracle lib checks
should be in configure for a /contrib module, and I don't know how far
Hans is. I will say we are probably looking at 7/28 for beta.
I am afraid I won't make it until 7.4beta1.
The problem is that I have not managed to have mor
I have done some beta testing with PostgreSQL 7.4beta2.
I have run a simple set of SQL statements 1 million times:
-- START TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO t_data (data) VALUES ('2500');
UPDATE t_data SET data = '2500' WHERE data = '2500';
DELETE FROM t_data WHERE data = '25
This is the Pg backend line from top after about 90 minutes runtime:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16083 postgres 17 0 9172 9172 8524 R 94.7 2.4 84:59.68 postmaster
No sign of the shared growth stopping at this stage...
Pg built with --disable-debug --w
I am seeing a slow but steady growth of the backend process on a Linux
box (RHL 8.0) --- top shows it growing a few K every few seconds.
But I see *zero* growth with the same test on HPUX 10.20.
A possible wild card is that the Postgres build I'm using on the Linux
box is compiled for profiling (-
I have tried to perform a regression test on AIX 5.1 (PostgreSQL 7.4beta3).
I have encountered an error.
gmake[3]: Leaving directory
`/usr/src/shopnet/postgresql-7.4beta3/src/interfaces/libpq'
gmake[3]: Entering directory
`/usr/src/shopnet/postgresql-7.4beta3/src/interfaces/ecpg'
gmake -C include
Tom Lane wrote:
we have fixed the first problem.
here is the next one ...
libm seems to be missing although it is installed (I have installed it
for running 7.3.4).
> It looks like -lm needs to be added to SHLIB_LINK in ecpglib/Makefile.
> I had already proposed this patch for SSL-enabled builds: