Tom Lane wrote:
In case I was mistaken, this explanation makes perfect sense to me.
But then again it would indicate a 'bug' in libpq, in the sense that
it (apparently) sets TCP_NODELAY on Linux but not on Windows.
No, it would mean a bug in Windows in that it fails to honor TCP_NODELAY.
ternals.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
--
David LEVY aka Selenium
Zlio.com & Col.fr
Hello!
I'm new to performance tuning on Postgres. I've read the docs on the
Postgres site, as well as:
http://www.revsys.com/writings/postgresql-performance.html
http://www.powerpostgresql.com/PerfList
However, my query is really slow, and I'm not sure what the main cause
could be, as there
> > And this is the actual query:
>
> I think you need to look into full-text indexing (see tsearch2).
Thanks, Tom.
Yes, we know this.
This is just a temporary fix that we needed to get up today for biz
reasons. Implementing full-text searching within a few short hours was
out of the question.
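For reference, a minimal sketch of the direction Tom is pointing at, using the
built-in full-text search that later replaced the tsearch2 contrib module (the
table and column names here are made up for illustration):
-- Build a GIN index over the tsvector form of the text column once...
CREATE INDEX docs_body_fts_idx
  ON docs USING gin (to_tsvector('english', body));
-- ...then searches of this shape can use the index instead of ILIKE scans.
SELECT id
FROM docs
WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'slow query');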
Barry Moore wrote:
I have a very slow query that I'm trying to tune. I think my
performance tuning is being complicated by the system's page cache.
If I run the query after the system has been busy with other tasks
for quite a long time, then the query can take up to 8-10 minutes to
compl
from here?
Thanks,
David.
--
David Brain - bandwidth.com
[EMAIL PROTECTED]
Hi,
Thanks for the response.
Bill Moran wrote:
In response to David Brain <[EMAIL PROTECTED]>:
I recently migrated one of our large (multi-hundred GB) DBs from an
Intel 32-bit platform (Dell 1650 - running 8.1.3) to a 64-bit platform
(Dell 1950 - running 8.1.5). However, I am not seei
you'd use as a diagnostic on this - I just
installed the module - but I'm not entirely clear as to what the output
is actually showing me and/or what would be considered good or bad.
Thanks,
David.
--
David Brain - bandwidth.com
[EMAIL PROTECTED]
he -B 1024 allowed the settings to revert
to the file specified values.
So now I'm back to using ~50k buffers again and things are running a
little more swiftly, and according to pg_buffercache I'm using 49151 of
them (-:
Thanks again to those who helped me track this down.
Dav
tings
using the "ALTER TABLE ALTER SET STORAGE" syntax, but is there a way I
can see what this value is currently set to?
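For reference, the current per-column setting can be read from the system
catalog; a minimal sketch (the table name is a placeholder):
-- attstorage: p = plain, e = external, m = main, x = extended
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass
  AND attnum > 0
  AND NOT attisdropped;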
David
Hi,
I am about to order a new server for my Postgres cluster. I will
probably get a Dual Xeon Quad Core instead of my current Dual Xeon.
Which OS would you recommend to optimize Postgres behaviour (I/O
access, multithreading, etc.)?
I am hesitating between Fedora Core 6, CentOS and Debian. Can an
ge of cores, these
should be on the same CPU board. Again, through monitoring, you'll have
to determine how many CPU cores each backend really needs and then
you'll have to determine how best to spread the backends out over each
of the CPU boards.
Good luck.
David
Tom Lane wrote:
Having malloc/free use
an internal mutex is necessary in multi-threaded programs, but the
backend isn't multi-threaded.
Hmm... I'm confused. I'm not following why, then, there is contention for the
mutex.
Surely this has to be some other mutex that is in contention, not a heap
loc
what your database will and won't do, and design your schema
around that. I know it takes a little extra helping of courage, but
it's worth it in the long run.
Cheers,
David.
--
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
On Sun, Aug 19, 2007 at 08:26:58PM +0200, Steinar H. Gunderson wrote:
> On Sun, Aug 19, 2007 at 11:12:16AM -0700, David Fetter wrote:
> > There's your mistake. EAV is not performant, and won't become so.
>
> It sort of depends. I put all the EXIF information for my im
; However, it still doesn't answer my question about the EAV model
> (where each attribute is given its own table).
The answer to EAV modeling is: "DON'T!"
Cheers,
David (who, if he were greedy, would be encouraging EAV modeling
because it would cause guaranteed large consulting
, 7200rpm disks with hardware RAID-1 (IBM ServeRAID)
My question is, what's going on?
Thanks,
David
and then analyzing)?
The estimates for that index scan are off by a factor of 100, which may
indicate why the planner is trying so hard to avoid a nestloop there.
--
- David T. Wilson
david.t.wil...@gmail.com
Thanks for all of the suggestions, guys, which gave me some pointers on
new directions to look, and I learned some interesting things.
The first interesting thing was that piping (uncompressed) pg_dump into
gzip, instead of using pg_dump's internal compressor, does bring a lot
of extra paralle
Tom Lane wrote:
I would bet that the reason for the slow throughput is that gzip
is fruitlessly searching for compressible sequences. It won't find many.
Indeed, I didn't expect much reduction in size, but I also didn't expect
a four-order-of-magnitude increase in run-time (i.e., output at
etting, you'll need to prepare it ("prepare foo as
") and then explain *that* via "explain execute foo".
The prepared version likely has a much more generic plan, whereas the
regular query gets optimized for the actual values provided.
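A minimal illustration of that procedure (the statement and parameter value
are hypothetical):
-- Prepare the parameterized statement, then explain the prepared version;
-- the plan shown is the generic one, not one specialized to a literal value.
PREPARE foo (int) AS
  SELECT * FROM measurement WHERE station_id = $1;
EXPLAIN EXECUTE foo (2043);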
--
- David T. Wilson
david.t.wil...@gmail.com
stics accuracy with an explain analyze might still
be informative, however.
--
- David T. Wilson
david.t.wil...@gmail.com
LinkedIn
David Crooke requested to add you as a connection on LinkedIn:
--
Dimi,
I'd like to add you to my professional network on LinkedIn.
- David Crooke
Accept invitation from David Crooke
http://www.linkedin.
On Wed, Apr 7, 2010 at 2:37 PM, Craig James wrote:
> Most of the time Postgres runs nicely, but two or three times a day we get a
> huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
> CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike
> events.
On Wed, Apr 7, 2010 at 3:57 PM, Craig James wrote:
> On 4/7/10 3:36 PM, Joshua D. Drake wrote:
>> My guess is that it is not CPU, it is IO and your CPU usage is all WAIT
>> on IO.
>>
>> To have your CPUs so flooded that they are the cause of an inability to
>> log in is pretty suspect.
>
> I thoug
On Wed, Apr 7, 2010 at 7:06 PM, Craig James wrote:
> On 4/7/10 5:47 PM, Robert Haas wrote:
>> On Wed, Apr 7, 2010 at 6:56 PM, David Rees wrote:
>>>> synchronous_commit = off
>>>
>>> You are playing with fire here. You should never turn this off unless
&
Howdy all,
I've got a huge server running just Postgres. It's got 48 cores and 256GB of
RAM. Red Hat 5.4, Postgres 8.3.9.
64-bit OS. No users currently.
I've got a J2EE app that loads data into the DB; it's got logic behind it, so
it's not a simple bulk load, so
I don't think we can use COPY.
Bas
On Tue, Apr 20, 2010 at 01:44:18PM -0400, Robert Haas wrote:
- On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote:
- > My initial thought was hardware issues so we got sar, vmstat, etc all
- > running on the box and they didn't give
- > any indication that we had resource issues.
- >
-
On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
-
- > that thought occurred to me while I was testing this. I ran a vacuumdb -z
- > on my database during the load and it didn't impact performance at all.
- >
-
On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:
- On Tue, Apr 20, 2010 at 12:20 PM, David Kerr wrote:
- > On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
- > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
- > -
- > - You can absolutely use copy
On Tue, Apr 20, 2010 at 12:30:14PM -0600, Scott Marlowe wrote:
- On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote:
- >
- > I'm logging via syslog; I've had trouble with that before. When I moved to
- > syslog-ng on my dev environments, that mostly resolved the problem for me.
On Tue, Apr 20, 2010 at 01:17:02PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Incidentally the code is written to work like this :
- >
- > while (read X lines in file){
- > Process those lines.
- > write lines to DB.
- > }
-
- Unless you're selecting from m
On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:
- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
- > that thought occurred to me while I was testing this. I ran a vacuumdb -z
- > on my database during the load and it didn't impact performance at all.
-
- The window to
On Tue, Apr 20, 2010 at 04:26:52PM -0400, Greg Smith wrote:
- David Kerr wrote:
- >the db, xlog and logs are all on separate areas of the SAN.
- >separate I/O controllers, etc on the SAN. it's setup well, I wouldn't
- >expect
- >contention there.
- >
-
- Just be
Hi,
I recently switched to PostgreSQL from MySQL so that I can use PL/R for data
analysis. The query in MySQL form (against a more complex table structure)
takes ~5 seconds to run. I have yet to let the PostgreSQL query finish,
as it takes over a minute. I think I have the correct table structu
Hi,
I have posted an image of the user inputs here:
http://i.imgur.com/MUkuZ.png
The problem is that I am given a range of days (Dec 22 - Mar 22) over a
range of years (1900 - 2009) and the range of days can span from one year to
the next. This is not the same as saying Dec 22, 1900 to Mar 22, 2
When using MySQL, the performance was okay (~5 seconds per query) using:
date( concat_ws( '-', y.year, m.month, d.day ) ) between
-- Start date.
date( concat_ws( '-', y.year, $P{Month1}, $P{Day1} ) ) AND
-- End date. Calculated by checking to see if the end date wraps
-- into the
What if I were to have the application pass in two sets of date ranges?
For the condition of Dec 22 to Mar 22:
Dec 22 would become:
- Dec 22 - Dec 31
Mar 22 would become:
- Jan 1 - Mar 22
The first range would always be for the current year; the second range would
always be for the year
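A rough sketch of what that WHERE clause could look like for the Dec 22 - Mar 22
example, with the wrapping range split into two same-year tests (column names
follow the measurement table used elsewhere in this thread; this only
illustrates the split, not the final query):
SELECT m.*
FROM climate.measurement m
WHERE -- Dec 22 through Dec 31
      (extract(MONTH FROM m.taken) = 12 AND extract(DAY FROM m.taken) >= 22)
      -- Jan 1 through Mar 22
   OR (extract(MONTH FROM m.taken) < 3)
   OR (extract(MONTH FROM m.taken) = 3 AND extract(DAY FROM m.taken) <= 22);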
I was hoping to eliminate this part of the query:
(cast(extract( YEAR FROM m.taken ) + greatest( -1 *
sign(
(extract( YEAR FROM m.taken )||'-12-31')::date -
(extract( YEAR FROM m.taken )||'-01-01')::date ), 0
) AS text)||'-12-31')::date
That uses
Hi,
I was still referring to the measurement table. You have an index on
> stationid, but still seem to be getting a sequential scan. Maybe the planner
> does not realise that you are selecting a small number of stations. Posting
> an EXPLAIN ANALYSE would really help here.
>
Here is the result f
The greatest() expression reduces to either the current year (year + 0) or
the next year (year + 1) by taking the sign of the difference in start/end
days. This allows me to derive an end date, such as:
Dec 22, 1900 to Mar 22, 1901
Then I check if the measured date falls between those two dates.
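A tiny worked example of that reduction for a Dec 22 start and Mar 22 end: the
end month/day falls before the start month/day, so the sign of the difference
adds one year to the end date.
-- '1900-03-22' - '1900-12-22' is negative, so -1 * sign(...) is 1
SELECT 1900 + greatest(-1 * sign(DATE '1900-03-22' - DATE '1900-12-22'), 0)
       AS end_year;   -- yields 1901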
Hi,
~300 million measurements
~12000 stations (not 7 as I mentioned before)
~5500 cities
some serious data tho, at least. Basically, PG is sequentially scanning
> through all of the tables in your partitioning setup. What is
> constraint_exclusion set to? What version of PG is this? Do th
Hi,
check (taken >= '1913-12-01' and taken <= '1913-12-31')
>
I don't think I want to constrain by year, for a few reasons:
1. There are a lot of years -- over 110.
2. There will be more years added (both in the future for 2010 and in the
past as I get data from other sources).
Currently
Hi,
Something in here really smells fishy to me. Those extract's above are
> working on values which are from the table.. Why aren't you using these
> functions to figure out how to construct the actual dates based on the
> values provided by the *user*..?
>
Because I've only been using Postgre
I took out the date conditions:
SELECT
m.*
FROM
climate.measurement m
WHERE
m.category_id = 1 and
m.station_id = 2043
This uses the station indexes:
"Result (cost=0.00..21781.18 rows=8090 width=28)"
" -> Append (cost=0.00..21781.18 rows=8090 width=28)"
"-> Seq Scan on measur
Hi,
(An EXPLAIN ANALYSE would be better here). Look at the expected number of
> stations
"Nested Loop (cost=0.00..994.94 rows=4046 width=4) (actual
time=0.053..41.173 rows=78 loops=1)"
" Join Filter: ((6371.009::double precision *
sqrt((pow(radians(((c.latitude_decimal - s.latitude_decimal))::
Hi, Yeb.
This is starting to go back to the design I used with MySQL:
- YEAR_REF - Has year and station
- MONTH_REF - Has month, category, and year reference
- MEASUREMENT - Has month reference, amount, and day
Normalizing by date parts was fast. Partitioning the tables by year won't do
Hi,
CREATE INDEX measurement_01_001_y_idx
>> ON climate.measurement_01_001
>> USING btree
>> (date_part('year'::text, taken));
>>
>> Is that equivalent to what you suggest?
>>
>
> No. It is not the same function, so Postgres has no way to know it produces
> the same results (if it does).
>
Thi
Hi,
certainly understand that you wouldn't want to partition by year. It
>
Definitely not.
> does strike me that perhaps you could partition by day ranges, but you'd
>
I don't think that will work; users can choose any day range, with the most
common as Jan 1 - Dec 31, followed by seasonal ra
Hi,
The problem is now solved (in theory).
Well, it's not the functions per se that's the problem, it's the lack of
> a useful index on the expression.
>
The measurement table indexes (on date and weather station) were not being
used because the only given date ranges (e.g., 1900 - 2009) were ca
Hi,
I wrote a query (see below) that extracts climate data from weather stations
within a given radius of a city using the dates for which those weather
stations actually have data. The query uses the measurement table's only
index:
CREATE UNIQUE INDEX measurement_001_stc_idx
ON climate.measure
Hi,
I changed the date comparison to be based on year alone:
extract(YEAR FROM sc.taken_start) >= 1900 AND
extract(YEAR FROM sc.taken_end) <= 2009 AND
The indexes are now always used; if someone wants to explain why using the
numbers works (a constant) but using a date (another constant?
Hi, Kevin.
Thanks for the response.
It sounds as though the active portion of your database is pretty
> much cached in RAM. True?
>
I would not have thought so; there are seven tables, each with 39 to 43
million rows as:
CREATE TABLE climate.measurement (
id bigserial NOT NULL,
taken date
Hi, Tom.
Yes, that is what happened, making the tests rather meaningless, and giving
me the false impression that the indexes were being used. They were, but only
because of cached results. When multiple users make different queries, the
performance will return to ~80s per query.
I also tried Ke
Hi, Rob.
I tried bumping the effective_cache_size. It made no difference.
My latest attempt at forcing PostgreSQL to use the indexes involved two
loops: one to loop over the stations, the other to extract the station data
from the measurement table. The outer loop executes in 1.5 seconds. The
inn
Hi, Alexey.
Is it necessary to get the data as far as 1900 all the time ? Maybe there is
> a possibility to aggregate results from the past years if they are constant.
>
This I have done. I created another table (station_category) that associates
stations with when they started to take measuremen
Hi, Kevin.
below something in the range of 1.5 to 2 is probably not going to be
> a good choice for the mix as a whole.
>
Good to know; thanks.
> This should probably be set to something on the order of 3GB. This
> will help the optimizer make more intelligent choices about when use
> of the i
Hi,
sc.taken_end <= '1996-12-31'::date AND
> m.taken BETWEEN sc.taken_start AND sc.taken_end AND
>
> category of data at a certain time. But I'm afraid this makes the planning
> much more difficult, as the select from measurements depend on the data
> returned by other parts of the query (rows fro
I was told to try OVERLAPS instead of checking years. The query is now:
SELECT
extract(YEAR FROM m.taken) AS year,
avg(m.amount) as amount
FROM
climate.city c,
climate.station s,
climate.station_category sc,
climate.measurement m
WHERE
c.id = 5148 AND
earth_di
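Since the query above is cut off, here is a standalone reminder of what
OVERLAPS evaluates (the endpoints are illustrative; note that the end point of
each period is excluded):
SELECT (DATE '1950-12-22', DATE '1951-03-22')
       OVERLAPS
       (DATE '1951-01-01', DATE '1951-12-31');   -- true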
Hi, Bryan.
I was just about to reply to the thread, thanks for asking. Clustering was
key. After rebooting the machine (just to make sure absolutely nothing was
cached), I immediately ran a report on Toronto: 5.25 seconds!
Here's what I did:
1. Created a new set of tables that matched the old
Hi, Cédric.
I wonder what the plan will be if you replace sc.taken_* in :
> m.taken BETWEEN sc.taken_start AND sc.taken_end
> by values. It might help the planner...
>
That is a fairly important restriction. I will try making it
(year1||'-01-01')::date, but I have no constant value for it -
Hi, Bryan.
Thanks for the notes. I thought about using a prepared statement, but I
cannot find any examples of using a PREPARE statement from within a
function, and don't really feel like tinkering around to figure it out.
Performance is at the point where the Java/PHP bridge and JasperReports ar
Sorry, Alvaro.
I was contemplating using a GIN or GiST index as a way of optimizing the
query.
Instead, I found that re-inserting the data in order of station ID (the
primary look-up column) and then CLUSTER'ing on the station ID, taken date,
and category index increased the speed by an order of
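A minimal sketch of that reorder step (the index is the station/taken/category
one described above; the names are assumptions, and CLUSTER ... USING is 8.3+
syntax):
-- Rewrite the table in index order so per-station look-ups hit adjacent pages,
-- then refresh the statistics.
CLUSTER climate.measurement USING measurement_stc_idx;
ANALYZE climate.measurement;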
Hi,
Hmm, that's nice, though I cannot but wonder whether the exclusive lock
> required by CLUSTER is going to be a problem in the long run.
>
Not an issue; the inserts are one-time (or very rare; at most: once a year).
> Hm, keep in mind that if the station clause alone is not selective
> enou
Hi,
I found a slow part of the query:
SELECT
date(extract(YEAR FROM m.taken)||'-1-1') d1,
date(extract(YEAR FROM m.taken)||'-1-31') d2
FROM
climate.city c,
climate.station s,
climate.station_category sc,
climate.measurement m
WHERE
c.id = 5148 AND ...
Date extraction is 3.2 se
Hi, Andy.
I assume you are doing this in a loop? Many Many Many times? cuz:
>
Yes. Here are the variations I have benchmarked (times are best of three):
Variation #0
-no date field-
Explain: http://explain.depesz.com/s/Y9R
Time: 2.2s
Variation #1
date('1960-1-1')
Explain: http://explain.depes
Hi,
To avoid string concatenation using dates, I figured I could write a C
function:
#include "postgres.h"
#include "fmgr.h"
#include "utils/date.h"
#include "utils/nabstime.h"
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
Datum dateserial (PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1 (dateserial);
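The snippet above is cut off before the function body and its SQL declaration.
A minimal sketch of how such a C function would be registered on the SQL side,
assuming it is compiled into a dateserial shared library (the path and the
three-integer signature follow the later discussion in this thread):
CREATE OR REPLACE FUNCTION dateserial(year int, month int, day int)
  RETURNS date
  AS '$libdir/dateserial', 'dateserial'
  LANGUAGE C IMMUTABLE STRICT;
-- e.g. dateserial(1900, 12, 22) would yield the date 1900-12-22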
Hi, Tim.
Have you tested DATE_TRUNC()?
>
Not really; it returns a full timestamp and I would still have to
concatenate strings. My goal is to speed up the following code (where
*p_*parameters are user inputs):
date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1,
Hi,
The C function returns a DateADT, which is a typedef for int32, but the
> CREATE FUNCTION statement claims that it returns 'text'.
>
That'll do it. Thank you!
but whether that is faster or slower I don't know. But I
> don't see why this query needs to be fast in the first
> place. It seems t
Hi,
Here is code to convert dates from integers without string concatenation:
Edit dateserial.c:
#include "postgres.h"
#include "utils/date.h"
#include "utils/nabstime.h"
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
Datum dateserial(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1 (dateserial);
Datu
Hi, Tom.
extract(YEAR FROM m.taken)
I thought that returned a double precision?
Dave
Hi,
I added an explicit cast in the SQL:
dateserial(extract(YEAR FROM
m.taken)::int,'||p_month1||','||p_day1||') d1,
dateserial(extract(YEAR FROM
m.taken)::int,'||p_month2||','||p_day2||') d2
The function now takes three integer parameters; there was no performance
loss.
Thank y
Hi,
We had a little chat about this with Magnus. It's pretty surprising that
> there's no built-in function to do this, we should consider adding one.
>
I agree; you should be able to create a timestamp or a date from integer
values. Others, apparently, have written code. The implementation I did
Hi,
It's not immediately obvious what the default value of "timezone"
> will be?
>
The system's locale, like now(); documentation can clarify.
By named parameter, I meant default value. You could construct a timestamp
variable using:
construct_timestamp( year := 1900, hour := 1 )
When I read
>
> Does it make sense to allow minutes when hours isn't specified? Or
>
For time, 00 seems a reasonable default for all values; clearly document the
defaults. Also, having a default makes the code simpler than plus
. (Not to mention explaining it.) ;-)
SELECT to_timestamp( minutes := 19 ) -- er
> Fair enough. How about something like make_timestamp? It's at least
> shorter and easier than construct :-)
>
Agreed.
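A hedged sketch of how defaults plus named-parameter notation could behave for
such a function, written here as a plain SQL function purely for illustration
(this is not the eventual built-in):
CREATE FUNCTION construct_timestamp_sketch(
    year int, month int DEFAULT 1, day int DEFAULT 1,
    hour int DEFAULT 0, min int DEFAULT 0, sec int DEFAULT 0)
RETURNS timestamp LANGUAGE sql IMMUTABLE AS $$
    SELECT (year || '-' || month || '-' || day)::date
           + hour * interval '1 hour'
           + min  * interval '1 minute'
           + sec  * interval '1 second';
$$;
-- Named notation then fills in the documented defaults:
SELECT construct_timestamp_sketch(year := 1900, hour := 1);
-- => 1900-01-01 01:00:00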
Dave
On 7/8/2010 1:47 PM, Ryan Wexler wrote:
Thanks for the explanations; that makes things clearer. It still
amazes me that it would account for a 5x change in IO.
The buffering allows decoupling of the write rate from the disk rotation
speed.
Disks don't spin that fast, at least not relative to t
On 7/8/2010 3:18 PM, timothy.noo...@emc.com wrote:
How does the Linux machine know that there is a BBU installed and to
change its behavior or change the behavior of Postgres? I am
experiencing performance issues, not with searching but more with IO.
It doesn't change its behavior at all. It'
Do you guys have any more ideas to properly put this disk to the test?
While an 'end-to-end' test using PG is fine, I think it would be easier
to determine if the drive is behaving correctly by using a simple test
program that emulates the storage semantics the WAL expects. Have it
wri
Howdy all,
We're doing some performance testing, and when we scaled our app up to about
250 concurrent users,
we started seeing a bunch of processes sitting in "PARSE WAITING" state.
Can anyone give me insight on what this means? What's the parse waiting for?
Thanks
Dave
probably waiting on the xlog directory that's filled up...
->->blamo
move along, nothing to see here =)
Dave
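One quick way to see whether those backends are stuck waiting on locks (as
opposed to disk) is to watch the activity view while it happens; a minimal
sketch using the pre-9.2 column names that apply to the versions discussed
here:
SELECT procpid, waiting, current_query
FROM pg_stat_activity
WHERE waiting;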
On Mon, Aug 23, 2010 at 03:15:56PM -0700, David Kerr wrote:
- Howdy all,
-
- We're doing some performance testing, and when we scaled our app up to
- about 250
On Mon, Aug 23, 2010 at 06:23:25PM -0400, Alvaro Herrera wrote:
- Excerpts from David Kerr's message of lun ago 23 18:15:56 -0400 2010:
- > Howdy all,
- >
- > We're doing some performance testing, and when we scaled our app up to
- > about 250 concurrent users
- > we sta
Feels like I fell through a wormhole in space/time, back to Inmos in
1987, and a guy from marketing has just
walked into the office going on about how there's a customer who wants to use
our massively parallel hardware to speed up databases...
On 8/30/2010 3:18 PM, Chris Browne wrote:
... As long as you're willing to rewrite PostgreSQL in Occam 2...
Just re-write it in Google's new language 'Go': it's close enough to
Occam and they'd probably fund the project...
;)
create the new partitions.
>
> Anyone have any thoughts on how to debug this? We're running Postgres 8.4.4
> on CentOS 5.5.
If you're doing the partitions on demand, you could be getting
deadlocks. Any reason not to pre-create a big bunch of them in
advance?
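A minimal sketch of pre-creating a child partition with the 8.4-style
inheritance scheme (all names and the check constraint here are hypothetical):
-- Creating children ahead of time keeps the DDL (and its locks) out of the
-- concurrent loading paths, which is where on-demand creation can deadlock.
CREATE TABLE events_2010_10 (
    CHECK (created >= DATE '2010-10-01' AND created < DATE '2010-11-01')
) INHERITS (events);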
Cheer
Howdy,
I'm running pgbench with a fairly large # of clients and getting this error in
my PG log file.
Here's the command:
./pgbench -c 1100 testdb -l
I get:
LOG: could not send data to client: Broken pipe
(I had to modify the pgbench.c file to make it go that high; I changed:
MAXCLIENTS = 204
ocess rather than global.
- >So I think you could get above the FD_SETSIZE limit with a bit of
- >hacking if you were using 9.0's pgbench. No chance with 8.3 though.
- >
-
- I believe David can do this easily enough by compiling a 9.0 source code
- tree with the "--disable-
On Wed, Sep 08, 2010 at 03:44:36PM -0400, Tom Lane wrote:
- Greg Smith writes:
- > Tom Lane wrote:
- >> So I think you could get above the FD_SETSIZE limit with a bit of
- >> hacking if you were using 9.0's pgbench. No chance with 8.3 though.
-
- > I believe David can d
On Wed, Sep 08, 2010 at 04:35:28PM -0400, Tom Lane wrote:
- David Kerr writes:
- > should I be running pgbench differently? I tried increasing the # of threads
- > but that didn't increase the number of backends, and I'm trying to simulate
- > 2000 physical backend proc
On Wed, Sep 08, 2010 at 03:56:24PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Actually, this is real.. that's 2000 connections - connection
- > pooled out to 20k or so. (although i'm pushing for closer to 1000
- > connections).
- >
- > I know that'
On Wed, Sep 08, 2010 at 04:51:17PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Hmm, I'm not following you. I've got 48 cores. That means my
- > sweet-spot active connections would be 96.
-
- Plus your effective spindle count. That can be hard to calculate,
- but yo
On Wed, Sep 08, 2010 at 05:27:24PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > My assertian/hope is that the saturation point
- > on this machine should be higher than most.
-
- Here's another way to think about it -- how long do you expect your
- average database req
On Thu, Sep 09, 2010 at 10:38:16AM -0400, Alvaro Herrera wrote:
- Excerpts from David Kerr's message of mié sep 08 18:29:59 -0400 2010:
-
- > Thanks for the insight. we're currently in performance testing of the
- > app. Currently, the JVM is the bottleneck, once we get past that
On Mon, Sep 20, 2010 at 2:54 PM, George Sexton wrote:
> I'll throw in my 2 cents worth:
>
> 1) Performance using RAID 1 for reads sucks. You would expect throughput to
> double in this configuration, but it doesn't. That said, performance for
> RAID 1 is not noticeably worse than Linux MD. My test
New to Postgres and am prototyping a migration from Oracle to Postgres 9.0.1 on
Linux. Starting with the data warehouse. Current process is to load the data
from
our OLTP (Oracle), dump it into another instance of Oracle for staging and
manipulation, then extract it and load it into Infobright.
in Oracle
numeric(11,0) fields in Postgres.
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Wednesday, January 26, 2011 5:12 PM
To: David Greco
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Real vs Int performance
David Greco writes:
> Came acros
ly faster
unless you go out of your way to absolutely force it to do so. If the table
is going to be "fairly large", that's the size you need to be testing and
tuning with.
--
- David T. Wilson
david.t.wil...@gmail.com
wn option:
http://en.wikipedia.org/wiki/Reservoir_sampling
--
- David T. Wilson
david.t.wil...@gmail.com
On Mon, Mar 07, 2011 at 10:49:48PM +0100, Andreas For Tollefsen wrote:
- The synchronous_commit off increased the TPS, but not the speed of the below
- query.
-
- Oleg:
- This is a query i am working on now. It creates an intersection of two
- geometries. One is a grid of 0.5 x 0.5 decimal degree
ssentially eliminate disks for databases of this size.
David Lang