On Tuesday 01 Feb 2005 6:11 pm, Andrew Mayo wrote:
> PG, on the other hand, appears to do a full table scan
> to answer this question, taking nearly 4 seconds to
> process the query.
>
> Doing an ANALYZE on the table and also VACUUM did not
> seem to affect this.
>
> Can PG find a table's row count
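The workaround that recurs throughout these threads is to read the planner's
estimate from pg_class instead of scanning the table; a minimal sketch,
assuming a hypothetical table name, and only as accurate as the last
VACUUM/ANALYZE:

  SELECT relpages, reltuples::bigint
  FROM pg_class
  WHERE relname = 'mytable';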
On Thursday 02 Dec 2004 9:37 pm, Dmitry Karasik wrote:
> Hi Thomas!
>
> Thomas> Look at the ACTUAL TIME. It dropped from 0.029ms (using the index
> Thomas> scan) to 0.009ms (using a sequential scan.)
>
> Thomas> Index scans are not always faster, and the planner/optimizer knows
> Thomas>
On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:
> I need to find a solution for this because I am convincing customers
> that are using SQL Server, DB2 and Oracle to change to PostgreSQL, but
> these customers have databases of 5GB!!! I am thinking that even with a
> better server, the re
On Thursday 09 Sep 2004 6:26 pm, Vic Cekvenich wrote:
> What would be performance of pgSQL text search vs MySQL vs Lucene (flat
> file) for a 2 terabyte db?
Well, it depends upon a lot of factors. There are a few questions to be asked
here:
- What is your hardware and OS configuration?
- What type o
On Wednesday 01 Sep 2004 3:36 pm, G u i d o B a r o s i o wrote:
> Dear all,
>
> I am currently experiencing trouble with the performance of my
> critical database.
>
> The problem is the time that Postgres takes to perform/return a
> query. For example, trying the \d command takes betw
On Wednesday 11 Aug 2004 7:59 pm, Jesper Krogh wrote:
> The "common" solution, I guess would be to store them in the filesystem
> instead, but I like to have them just in the database it is nice clean
> database and application design and if I can get PostgreSQL to "not
> cache" them then it should
On Monday 09 Aug 2004 7:58 pm, Paul Serby wrote:
> I've not maxed out the connections since making the changes, but I'm
> still not convinced everything is running as well as it could be. I've
> got some big result sets that need sorting and I'm sure I could spare a
> bit more sort memory.
You cou
Hervé Piedvache wrote:
Josh,
On Wednesday 14 July 2004 18:28, Josh Berkus wrote:
checkpoint_segments = 3
You should probably increase this if you have the disk space. For massive
insert operations, I've found it useful to have as much as 128 segments
(although this means about 1.5GB disk spac
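As a rough postgresql.conf sketch of that suggestion (the arithmetic assumes
the usual 16MB per WAL segment; exact pg_xlog usage depends on segment
recycling):

  # each WAL segment is 16MB, so 128 segments keep on the order of 2GB
  # of pg_xlog around, in exchange for fewer checkpoints during bulk loads
  checkpoint_segments = 128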
Hervé Piedvache wrote:
In my case it's a PostgreSQL dedicated server ...
effective_cache_size = 5000000
For me I give the planner the information that the kernel is able to cache
5000000 disk pages in RAM
That is what? 38GB of RAM?
free
             total       used       free     shared    buffers     cached
gnari wrote:
is there a recommended procedure to estimate
the best value for effective_cache_size on a
dedicated DB server ?
Rule of thumb (on Linux): on a typically loaded machine, observe the cache
memory of the machine and allocate a good chunk of it as effective cache.
To define a good chunk of it, y
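A rough shell sketch of that rule of thumb, assuming the classic Linux
free(1) layout where the cached figure (in KB) is the last field of the
Mem: line; PostgreSQL counts effective_cache_size in 8KB pages:

  CACHED_KB=$(free | awk '/^Mem:/ {print $NF}')
  echo "effective_cache_size = $((CACHED_KB / 8))"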
Bill Chandler wrote:
Hi,
Using PostgreSQL 7.4.2 on Solaris. I'm trying to
improve performance on some queries to my databases so
I wanted to try out various index structures.
Since I'm going to be running my performance tests
repeatedly, I created some SQL scripts to delete and
recreate vario
Missner, T. R. wrote:
Hello,
I have been a happy postgresql developer for a few years now. Recently
I have discovered a very strange phenomenon in regards to inserting
rows.
My app inserts millions of records a day, averaging about 30 rows a
second. I use autovac to make sure my stats and indexes
Gary Cowell wrote:
The explain output on postgres shows the same
execution with a scan on vers and a sort but the query
time is 78.6 seconds.
The explain output from PostgreSQL is:
QUERY PLAN
On Wednesday 19 May 2004 13:02, [EMAIL PROTECTED] wrote:
> > - If you can put WAL on separate disk(s), all the better.
>
> Does that mean only the xlog, or also the clog? As far as I understand, the
> clog contains some meta-information on the xlog, so presumably it is
> flushed to disc synchrono
Fabio Panizzutti wrote:
storico=# explain select tag_id,valore_tag,data_tag from storico_misure
where (data_tag>'2004-05-03' and data_tag <'2004-05-12') and
tag_id=37423 ;
Can you please post explain analyze? That includes actual timings.
Looking at the schema, can you try "and tag_id=37423::integ
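Spelled out, the suggested variant casts the literal explicitly so the 7.x
planner can match it against the column's type (schema and values are from
the post above):

  explain analyze
  select tag_id, valore_tag, data_tag
  from storico_misure
  where data_tag > '2004-05-03' and data_tag < '2004-05-12'
    and tag_id = 37423::integer;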
Doug Y wrote:
Hello,
I've been having some performance issues with a DB I use. I'm trying
to come up with some performance recommendations to send to the
"administrator".
Hardware:
CPU0: Pentium III (Coppermine) 1000MHz (256k cache)
CPU1: Pentium III (Coppermine) 1000MHz (256k cache)
Memory: 3
Anderson Boechat Lopes wrote:
Hi.
I'm new here and I'm not sure if this is the right list to solve my
problem.
Well, I have a very large database, with many tables and many
records. Every day, many operations are performed in that DB, with
queries that insert, delete and up
Richard Huxton wrote:
Christopher Kings-Lynne wrote:
What's the biggest PostgreSQL database (in size and number of
records) that they know of???
Didn't someone say that RedSheriff had a 10TB postgres database or
something?
From http://www.redsheriff.com/us/news/news_4_201.html
"According
On Wednesday 07 April 2004 16:59, Andrew McMillan wrote:
> One thing I recommend is to use ext2 (or almost anything but ext3).
> There is no real need (or benefit) from having the database on a
> journalled filesystem - the journalling is only trying to give similar
> sorts of guarantees to what th
Sending again because of MUA error.. Chose a wrong address in From.. :-(
Shridhar
On Wednesday 07 April 2004 17:21, Shridhar Daithankar wrote:
> On Wednesday 07 April 2004 16:59, Andrew McMillan wrote:
> > One thing I recommend is to use ext2 (or almost anything but ext3).
> > T
Heiko Kehlenbrink wrote:
Hmm... I would suggest if you are testing, you should try 7.4.2. 7.4 has
some good optimisations for hash aggregates, though I am not sure if they
apply to averaging.
would be the last option till we are running other applications on that 7.2
system
I can understand..
Also try f
Heiko Kehlenbrink wrote:
[EMAIL PROTECTED]:~> psql -d test -c 'explain analyse select avg(dist)
from massive2 where dist > (100*sqrt(3.0))::float8 and dist <
(150*sqrt(3.0))::float8;'
NOTICE: QUERY PLAN:
Aggregate (cost=14884.61..14884.61 rows=1 width=8) (actual
time=3133.24..3133.24 row
Heiko Kehlenbrink wrote:
hi list,
I want to convince people to use PostgreSQL instead of MS-SQL Server, so I
set up a kind of comparison: insert data / select data from PostgreSQL /
MS-SQL Server.
The table I use was pretty basic:
id bigserial
dist float8
x float8
y float8
z float
http://www.databasejournal.com/features/postgresql/article.php/3323561
Shridhar
Rosser Schwarz wrote:
shared_buffers = 4096
sort_mem = 32768
vacuum_mem = 32768
wal_buffers = 16384
checkpoint_segments = 64
checkpoint_timeout = 1800
checkpoint_warning = 30
commit_delay = 5
effective_cache_size = 131072
You didn't mention the OS, so I would take it as either Linux or FreeBSD.
On Friday 27 February 2004 21:03, scott.marlowe wrote:
> Linux doesn't work with a pre-assigned size for kernel cache.
> It just grabs whatever's free, minus a few megs for easily launching new
> programs or allocating more memory for programs, and uses that for the
> cache. then, when a request c
Dror Matalon wrote:
Let me try and say it again. I know that setting effective_cache_size
doesn't affect the OS' cache. I know it just gives Postgres the *idea*
of how much cache the OS is using. I know that. I also know that a
correct hint helps performance.
I've read Matt Dillon's discussion abo
On Thursday 19 February 2004 14:31, Saleem Burhani Baloch wrote:
> Hi,
>
> Thanks everyone for helping me. I have upgraded to 7.4.1 on redhat 8 (rh
> 9 requires a lot of libs) and set the configuration sent by Chris. Now the
> query results in 6.3 sec, waooo. I'm thinking that why the 7.1 process
Saleem Burhani Baloch wrote:
I changed the conf as you wrote. But now the time has changed from 50 sec to 65 sec. :(
I have no more than 256 MB RAM now.
When I execute the query, the
postmaster takes about 1.8 MB,
the Postgres session takes 18 MB RAM only,
& psql takes 1.3 MB.
After the query finishes the
David Teran wrote:
we are trying to speed up a database which has about 3 GB of data. The
server has 8 GB RAM and we wonder how we can ensure that the whole DB is
read into RAM. We hope that this will speed up some queries.
Neither the DBA nor PostgreSQL has to do anything about it. Usually the OS cac
Arnau wrote:
explain analyze select * from statistics2 where timestamp_in <
to_timestamp( '20031201', 'YYYYMMDD' );
NOTICE: QUERY PLAN:
Seq Scan on statistics2 (cost=0.00..638.00 rows=9289 width=35) (actual
time=0.41..688.34 rows=27867 loops=1)
Total runtime: 730.82 msec
That query is not using
Bill Moran wrote:
Basically, all I do is call each query in turn until I've collected all the
results, then marshall the results in to a SOAP XML response (using gsoap,
if anyone's curious) and give them back to the client application. It's
the client app's job to figure out what to do with them,
Josh Berkus wrote:
Bill,
Some functions they prototyped in MSSQL even return different types, based
on certain parameters; I'm not sure how I'll do this in Postgres, but I'll
have to figure something out.
We support that as of 7.4.1 to an extent; check out "Polymorphic Functions".
To my unders
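A minimal sketch of a 7.4 polymorphic function, with hypothetical names; one
definition serves any type that has a > operator:

  CREATE FUNCTION largest(anyelement, anyelement) RETURNS anyelement AS '
      SELECT CASE WHEN $1 > $2 THEN $1 ELSE $2 END;
  ' LANGUAGE sql;

  SELECT largest(1, 2), largest('abc'::text, 'abd');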
Bill Moran wrote:
I have an application that I'm porting from MSSQL to PostgreSQL. Part
of this
application consists of hundreds of stored procedures that I need to
convert
to Postgres functions ... or views?
At first I was going to just convert all MSSQL procedures to Postgres
functions.
But
On Wednesday 14 January 2004 18:18, Jón Ragnarsson wrote:
> I am writing a website that will probably have some traffic.
> Right now I wrap every .php page in pg_connect() and pg_close().
> Then I read somewhere that Postgres only supports 100 simultaneous
> connections (default). Is that a limitat
Robert Treat wrote:
On Tue, 2004-01-06 at 07:20, Shridhar Daithankar wrote:
The numbers from pg_class are estimates updated by vacuum/analyze. Of course
you need to run vacuum frequently enough for those statistics to be updated all
the time, or run the autovacuum daemon..
Ran into same problem on my
On Tuesday 06 January 2004 17:48, D'Arcy J.M. Cain wrote:
> On January 6, 2004 01:42 am, Shridhar Daithankar wrote:
> cert=# select relpages,reltuples::bigint from pg_class where relname=
> 'certificate';
> relpages | reltuples
> ----------+-----------
>       399
On Tuesday 06 January 2004 01:22, Rod Taylor wrote:
> Anyway, with Rules you can force this:
>
> ON INSERT UPDATE counter SET tablecount = tablecount + 1;
>
> ON DELETE UPDATE counter SET tablecount = tablecount - 1;
That would generate a lot of dead tuples in the counter table. How about
select relpag
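For completeness, a sketch of the rule-based counter under discussion, with
hypothetical table and counter names; note that every insert or delete then
adds a dead tuple to counter, which is exactly the objection above:

  CREATE RULE count_ins AS ON INSERT TO mytable
      DO UPDATE counter SET tablecount = tablecount + 1;
  CREATE RULE count_del AS ON DELETE TO mytable
      DO UPDATE counter SET tablecount = tablecount - 1;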
On Tuesday 06 January 2004 07:16, Christopher Browne wrote:
> Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Paul
> Tuckfield) wrote:
> > Not that I'm offering to do the porgramming mind you, :) but . .
> >
> > In the case of select count(*), one optimization is to do a scan of the
>
On Monday 05 January 2004 17:48, David Teran wrote:
> Hi,
>
> > The performance will likely be the same. It's just that integer
> > happens to be the default integer type and hence it does not need an
> > explicit typecast. (I don't remember exactly which integer is the
> > default but it is either
On Monday 05 January 2004 17:35, David Teran wrote:
> explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE
> t0.ID_FOREIGN_TABLE = 21110;
>
> i see that no index is being used whereas when i use
>
> explain analyze SELECT --columns-- FROM KEY_VALUE_META_DATA t0 WHERE
> t0.ID_FOREIGN
On Monday 05 January 2004 16:58, David Teran wrote:
> We have some tests to check the performance and FrontBase is about 10
> times faster than Postgres. We already played around with explain
> analyse select. It seems that for large tables Postgres does not use an
> index. We often see the scan me
On Thursday 18 December 2003 09:24, David Shadovitz wrote:
> Old server:
> # VACUUM FULL abc;
> VACUUM
> # VACUUM VERBOSE abc;
> NOTICE: --Relation abc--
> NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed
> 32. Total CPU 0.07s/0.52u sec elapsed 0.60 sec.
> VACUUM
Neil Conway wrote:
How can I get the original server to perform as well as the new one?
Well, you have the answer. Dump the database, stop the postmaster and restore it.
That should be faster than the original one.
(BTW, "SELECT count(*) FROM table" isn't a particularly good DBMS
performance indication.
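The dump-and-restore sequence suggested above, sketched with hypothetical
names (pg_dump's custom format and pg_restore are both present in the 7.x
series):

  pg_dump -Fc mydb > mydb.dump      # dump the bloated database
  dropdb mydb && createdb mydb      # recreate it empty
  pg_restore -d mydb mydb.dump      # reload compact tables and fresh indexes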
David Shadovitz wrote:
Well, now that I have the plan for my slow-running query, what do I do? Where
should I focus my attention?
Briefly looking over the plan and seeing the estimated vs. actual row mismatch, I
can suggest the following.
1. Vacuum (full) the database. Probably you have already
marting OS.
In fact running 32-bit apps on a 64-bit OS has plenty of advantages, like
effectively using the cache. Unless you need 64-bit, going for 64-bit software
is not advised.
Shridhar
Alfranio Correia Junior wrote:
Postgresql configuration:
effective_cache_size = 35000
shared_buffers = 5000
random_page_cost = 2
cpu_index_tuple_cost = 0.0005
sort_mem = 10240
Lower sort_mem to say 2000-3000, up shared_buffers to 10K and up
effective_cache_size to around 65K. That should make it
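Those numbers as postgresql.conf lines, for reference (sort_mem is in KB per
sort per backend; the other two are counted in 8KB disk pages):

  sort_mem = 2048                  # ~2MB per sort
  shared_buffers = 10000           # ~80MB
  effective_cache_size = 65536     # ~512MB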
Ivar Zarans wrote:
On Fri, Dec 05, 2003 at 06:19:46PM +0530, Shridhar Daithankar wrote:
is correct SQL, but not correct considering PostgreSQL bugs.
Personally I don't consider it a bug, but anyways.. you are the one facing the
problem, so I understand..
Well, if this is not a bug, then wh
Ivar Zarans wrote:
It seems that PyPgSQL query quoting is not aware of this performance
problem (to which Christopher referred), and the final query sent to the server
is correct SQL, but not correct considering PostgreSQL bugs.
Personally I don't consider it a bug, but anyways.. you are the one facing the proble
Andrei Bintintan wrote:
There are around 700 rows in this table.
If I set enable_seqscan=off then the index is used and I also used Vacuum
Analyze recently.
For 700 rows I think a seq. scan would work best.
I find it strange because the number of values of id_user and id_modull are
somehow in the same di
On Monday 01 December 2003 18:37, Evil Azrael wrote:
> 1) I have a transaction during which no data was modified; does it
> make a difference whether I send COMMIT or ROLLBACK? The effect is the
> same, but what about the speed?
It should not matter. Both commit and rollback should take the same amo
Torsten Schulz wrote:
Chester Kustarz wrote:
On Mon, 24 Nov 2003, Torsten Schulz wrote:
shared_buffers = 5000    # 2*max_connections, min 16
That looks pretty small. That would only be 40MBytes (8k/page *
5000 pages).
http://www.varlena.com/GeneralBits/Tidbits/perf.html
Ok, that's it. I've set
William Yu wrote:
This is an intriguing thought which leads me to think about a similar
solution for even a production server and that's a solid state drive for
just the WAL. What's the max disk space the WAL would ever take up?
There's quite a few 512MB/1GB/2GB solid state drives available now
William Yu wrote:
My situation is this. We have a semi-production server where we
pre-process data and then upload the finished data to our production
servers. We need the fastest possible write performance. Having the DB
go corrupt due to power loss/OS crash is acceptable because we can
alway
Matthew T. O'Connor wrote:
But we track tuples because we can compare against the count given by
the stats system. I don't know of a way (other than looking at the FSM,
or contrib/pgstattuple ) to see how many dead pages exist.
I think making pg_autovacuum dependent on pgstattuple is very good
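For reference, the contrib call being discussed; in the 7.4-era module it can
be invoked as below, though it scans the whole relation, so it is not free
(the table name is hypothetical):

  SELECT * FROM pgstattuple('mytable');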
On Thursday 20 November 2003 20:29, Shridhar Daithankar wrote:
> On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:
> > Shridhar Daithankar wrote:
> > > I will submit a patch that would account deletes in analyze threshold.
> > > Since you want to dela
On Thursday 20 November 2003 20:00, Matthew T. O'Connor wrote:
> Shridhar Daithankar wrote:
> > I will submit a patch that would account deletes in analyze threshold.
> > Since you want to delay the analyze, I would calculate analyze count as
>
> deletes are already ac
Josh Berkus wrote:
Shridhar,
>>However I do not agree with this logic entirely. It pegs the next vacuum
w.r.t current table size which is not always a good thing.
No, I think the logic's fine, it's the numbers which are wrong. We want to
vacuum when updates reach between 5% and 15% of total
Benjamin Bostow wrote:
I haven't modified any of the setting. I did try changing shmmax from
32MB to 256MB but didn't see much change in the processor usage. The
init script that runs to start the server uses the following:
su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -p
/usr/bin/post
Josh Berkus wrote:
Shridhar,
I was looking at the -V/-v and -A/-a settings in pgavd, and really don't
understand how the calculation works. According to the readme, if I set -v
to 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would
only vacuum after 21,000 rows had been
Laurent Martelli wrote:
"Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
[...]
Shridhar> 2. Try following query EXPLAIN ANALYZE SELECT * from lists
Shridhar> join classes on classes.id=lists.value where
Shridhar> lists.id='16'::integer;
Laurent Martelli wrote:
"Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
Shridhar> I am stripping the analyze outputs and directly jumping to
Shridhar> the end.
Shridhar> Can you try following?
Shridhar> 1. Make all fields integer in all the table
Laurent Martelli wrote:
"Shridhar" == Shridhar Daithankar <[EMAIL PROTECTED]> writes:
Shridhar> Laurent Martelli wrote:
[...]
>> Should I understand that a join on incompatible types (such as
>> integer and varchar) may lead to bad performances ?
S
Benjamin Bostow wrote:
I am running RH 7.3 running Apache 1.3.27-2 and PostgreSQL 7.2.3-5.73.
When having 100+ users connected to my server I notice that postmaster
consumes upwards of 90% of the processor and I am hardly higher than
10% idle. I did notice that when I kill apache and postmaster t
Josh Berkus wrote:
Shridhar,
I was looking at the -V/-v and -A/-a settings in pgavd, and really don't
understand how the calculation works. According to the readme, if I set -v
to 1000 and -V to 2 (the defaults) for a table with 10,000 rows, pgavd would
only vacuum after 21,000 rows had been
Laurent Martelli wrote:
"scott" == scott marlowe <[EMAIL PROTECTED]> writes:
[...]
scott> Note here:
scott> Merge Join (cost=1788.68..4735.71 rows=1 width=85) (actual
scott> time=597.540..1340.526 rows=20153 loops=1) Merge Cond:
scott> ("outer".id = "inner".id)
scott> This estimate i
On Friday 14 November 2003 12:51, Rajesh Kumar Mallah wrote:
> Hi ,
>
> my database seems to be taking too long for a select count(*).
> I think there are a lot of dead rows. I do a vacuum full and it improves,
> but again the performance drops in a short while.
> Can anyone please tell me if anything wrong
Fred Moyer wrote:
One thing I learned after spending about a week comparing the Athlon (2
ghz, 333 mhz frontside bus) and Xeon (2.4 ghz, 266 mhz frontside bus)
platforms was that on average the select queries I was benchmarking ran
30% faster on the Athlon (this was with data cached in memory so ma
Paul Ganainm wrote:
Does Interbase/Firebird have (as far as people here are concerned) any
show-stoppers in terms of functionality which they do have on
PostgreSQL? Or, indeed, the other way round?
Personally I think the native Windows port is a plus that Interbase/Firebird
has over PostgreSQL. It als
Jeff wrote:
On Fri, 31 Oct 2003 09:31:19 -0600
"Rob Sell" <[EMAIL PROTECTED]> wrote:
Not being one to hijack threads, but I haven't heard of this
performance hit when using HT. I have what should by all rights be a
pretty fast server: dual 2.4 Xeons with HT, a 205gb raid 5 array, 1 gig
of memory. And i
Jeff wrote:
On Thu, 30 Oct 2003 17:49:08 -0200 (BRST)
"alexandre :: aldeia digital" <[EMAIL PROTECTED]> wrote:
Both use: Only postgresql on server. Buffers = 8192, effective cache =
10
Well, I'm assuming you meant 1GB of ram, not 1MB :)
Check a ps auxw to see what is running. Perhaps X is ru
Kamalraj Singh Madhan wrote:
Hi,
I'm having major performance issues with a PostgreSQL 7.3.1 db. Kindly suggest all the possible means by which I can optimize the performance of this database. If not all, some ideas (even if they are common) are also welcome. There is no optimisation done to th
Dror Matalon wrote:
On Mon, Oct 27, 2003 at 01:04:49AM -0500, Christopher Browne wrote:
Most of the time involves:
a) Reading each page of the table, and
b) Figuring out which records on those pages are still "live."
The table has been VACUUM ANALYZED so that there are no "dead" records.
It's s
Vivek Khera wrote:
"JB" == Josh Berkus <[EMAIL PROTECTED]> writes:
JB> Actually, what OS's can't use all idle ram for kernel cache? I
JB> should note that in my performance docs
FreeBSD. Limited by the value of "sysctl vfs.hibufspace" from what I
understand. This value is set at boot based
Hilary Forbes wrote:
If I have a fixed amount of money to spend, as a general rule
is it better to buy one processor and lots of memory, or two
processors and less memory, for a system which is transactional
based (in this case it's handling reservations)? I realise the
answer will be a general
Alexander Priem wrote:
Dell PowerEdge 1750 machine with Intel Xeon CPU at 3 GHz and 4 GB of RAM.
This machine will contain a PERC4/Di RAID controller with 128MB of battery
backed cache memory. The O/S and logfiles will be placed on a RAID-1 setup
of two 36Gb SCSI-U320 drives (15.000rpm). Database d
Harry Broomhall wrote:
> #effective_cache_size = 1000    # typically 8KB each
> #random_page_cost = 4           # units are one sequential page fetch cost
You must tune the first one at least. Try
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html to tune these
parameters.
>>2) The EXPLAIN
Alexander Priem wrote:
About clustering: I know this can't be done by hooking multiple postmasters
to one and the same NAS. This would result in data corruption, I've read...
Only if they are reading the same data directory. You can run 4 different
data installations of postgresql, each one in its own
Rob Nagler wrote:
It seems a simple "vacuum" (not full or analyze) slows down the
database dramatically. I am running vacuum every 15 minutes, but it
takes about 5 minutes to run even after a fresh import. Even with
vacuuming every 15 minutes, I'm not sure vacuuming is working
properly.
There ar
On Monday 13 October 2003 19:34, Vivek Khera wrote:
> > "SC" == Sean Chittenden <[EMAIL PROTECTED]> writes:
> >>
> >> echo "effective_cache_size = $((`sysctl -n vfs.hibufspace` / 8192))"
> >>
> >> I've used it for my dedicated servers. Is this calculation correct?
>
> SC> Yes, or it's real clo
On Monday 13 October 2003 19:22, Seum-Lim Gan wrote:
> I am not sure I can do the full vacuum.
> If my system is doing updates in realtime and needs to be
> ok 24 hours and 7 days a week non-stop, once I do
> vacuum full, even on that table, that table will
> get locked out and any query or update
David Griffiths wrote:
It's a slight improvement, but that could be other things as well.
I'd read that how you tune Postgres will determine how the optimizer works
on a query (sequential scan vs index scan). I am going to post all I've done
with tuning tomorrow, and see if I've done anything dum
Seum-Lim Gan wrote:
I have a table that keeps being updated and noticed
that after a few days, the disk usage has grown
from just over 150 MB to like 2 GB!
Hmm... You have quite a lot of wasted space there..
I followed the recommendations from various searches
of the archives, changed the m
Seth Ladd wrote:
[EMAIL PROTECTED]:express=# explain select distinct region from region;
QUERY PLAN
------------------------------------------------------
 Unique (cost=0.00..4326.95 rows=9518 width=14)
   -> Ind
David Griffiths wrote:
Have you checked these pages? They've been posted on this list numerous
times:
http://techdocs.postgresql.org
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html
Those are much more instructive
Kaarel wrote:
http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html
Shridhar
I feel incompetent when it comes to file systems. Yet everybody would like to
have the best file system if given the choice...so do I :) Here I am looking at
those tables seeing JFS having more green cells
http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html
Shridhar
Greg Spiegelberg wrote:
The data represents metrics at a point in time on a system for
network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
speed, and whatever else can be gathered.
We arrived at this one 642 column table after testing the whole
process from data gathering, methods
Jeff wrote:
Let me know if there are blatant errors, etc in there.
Maybe even slightly more subtle blatant errors :)
Some minor nitpicks,
* Slide 5: postgresql already features a 64-bit port. The sentence is slightly
confusing.
* Same slide: IIRC postgresql always compresses bytea/varchar. Not too m
Adrian Demaestri wrote:
We've a table with about 8 million rows, and we need to get rows by the value
of two of its fields (the types of the fields are int2 and int4;
the where condition is e.g. partido=99 and partida=123). We created a
multicolumn index on those fields but the planner doesn't us
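The classic 7.x workaround was to cast the literal to the column's exact type
so the multicolumn index qualifies; a sketch using the names from the post
(the table name is hypothetical, and it assumes partido is the int2 column):

  EXPLAIN ANALYZE
  SELECT * FROM mytable
  WHERE partido = 99::int2 AND partida = 123;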
Jeff wrote:
I'd be interested in tinkering with this, but I'm more interested at the
moment in why (with proof, not anecdotal) Solaris is so much slower than
Linux and what we can do about this. We're looking to move a rather large
Informix db to PG and ops has reservations about ditching Sun har
Stef wrote:
On Fri, 03 Oct 2003 12:32:00 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
=> What exactly is failing? And what's the platform, anyway?
Nothing is really failing atm, except the funds for better
hardware. JBOSS and some other servers need to be
run on these machines, along with linux,
Bruce Momjian wrote:
OK, I beefed up the TODO:
* Use a fixed row count and a +/- count with MVCC visibility rules
to allow fast COUNT(*) queries with no WHERE clause(?)
I can always give the details if someone asks. It doesn't seem complex
enough for a separate TODO.detail item.
Dror Matalon wrote:
I smell a religious war in the air. :-)
Can you go several days in a row without doing select count(*) on any
of your tables?
I suspect that this is somewhat a domain specific issue. In some areas
you don't need to know the total number of rows in your tables, in
others you d
Mindaugas Riauba wrote:
Hello,
While writing a web application I found that it would
be very nice for me to have a "null" WHERE clause, like
WHERE 1=1. Then it is easy to concat additional
conditions just using the $query . " AND col=false" syntax.
But which of the possible "null" clauses is the fa
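A sketch of the pattern with hypothetical names; WHERE true is the most
direct "null" clause, and a constant condition like this should be folded
away by the planner before execution, so the choice is unlikely to be
measurable:

  SELECT * FROM t WHERE true
      AND col = false;   -- each extra filter is simply concatenated on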
David Griffiths wrote:
And finally,
Here's the contents of the postgresql.conf file (I've been playing with
these settings the last couple of days, and using the guide @
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html to
make sure I didn't have it mis-tuned):
tcpip_s
Oleg Lebedev wrote:
effective_cache_size = 32000 # typically 8KB each
That is 256MB. You can raise it to 350+MB if nothing else is running on the box.
Also if you have fast disk drives, you can reduce random page cost to 2 or 1.5.
I don't know how much this will make any difference to benchmark r
On Sunday 28 September 2003 09:19, David Griffiths wrote:
> No difference. Note that all the keys that are used in the joins are
> numeric(10)'s, so there shouldn't be any cast-issues.
Can you make them bigint and see? It might make some difference perhaps.
Checking the plan in the meantime.. BTW
[EMAIL PROTECTED] wrote:
Hi guys,
I'm running a data warehouse benchmark (APB-1) on PostgreSQL. The objective is
to choose which of the two main dbs (PostgreSQL, MySQL) is fastest. I've run
into a small problem which I hope can be resolved here.
I'm trying to speed up this query:
select count(*) fr
Garrett Bladow wrote:
Recently we upgraded the RAM in our server. After the install a LIKE query that used to take 5 seconds now takes 5 minutes. We have tried the usual suspects, VACUUM, ANALYZE and Re-indexing.
Any thoughts on what might have happened?
What tuning have you done? Have you se