Is there a way to force a WAL flush so that async commits (from other
connections) are flushed, short of actually updating a sacrificial row?
Would be nice to do it without generating anything extra, even if it is
something that causes IO in the checkpoint.
Am I right to think that an empty t
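For what it's worth, one trick I've seen suggested (an assumption on my part, not something confirmed in this thread) is to issue a no-op transaction that commits synchronously; a synchronous commit flushes WAL up to its own commit record, which includes any earlier async commit records:

```sql
-- Hedged sketch: force earlier async commits to disk by making one
-- synchronous commit. txid_current() forces an xid assignment, so the
-- COMMIT writes (and flushes) a real WAL commit record.
BEGIN;
SET LOCAL synchronous_commit = on;
SELECT txid_current();
COMMIT;
```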
On 2013-01-07, at 16:49, james wrote:
Is there a way to force a WAL flush so that async commits (from other
connections) are flushed, short of actually updating a sacrificial row?
Would be nice to do it without generating anything extra, even if it is
something that causes IO in the
On 23/05/2013 22:57, Jonathan Morra wrote:
I'm not sure I understand your proposed solution. There is also the
case to consider where the same patient can be assigned the same
device multiple times. In this case, the value may be reset at each
assignment (hence the line value - issued_value A
Title: Postgres recovery time
Does anyone know what factors affect the recovery time of postgres if it does not shutdown cleanly? With the same size database I've seen times from a few seconds to a few minutes. The longest time was 33 minutes. The 33 minutes was after a complete system crash
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
non-trivial, and diffing databases is difficult at best. Unless there
was some form of automated way to ensure consistency, going 8 ways into
separate boxes is
I have the following table:
CREATE TABLE timeblock
(
timeblockid int8 NOT NULL,
starttime timestamp,
endtime timestamp,
duration int4,
blocktypeid int8,
domain_id int8,
create_date timestamp,
revision_date timestamp,
scheduleid int8,
CONSTRAINT timeblock_pkey PRIMARY KEY (timeblockid)
);
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Michael Fuhr) wrote:
> On Sat, Dec 17, 2005 at 09:10:40PM -0800, James Klo wrote:
> > I'd like some suggestions on how to get the deletes to happen faster, as
> > while deleting individually appears to extremely fast
Mitch Skinner wrote:
Have you considered partitioning?
http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html
If you can partition your timeblock table so that you archive an entire
partition at a time, then you can delete the archived rows by just
dropping (or truncating) that p
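A minimal sketch of what that 8.1-style inheritance setup might look like, assuming monthly partitions on starttime (all names are illustrative, not from the thread):

```sql
-- Child partition holding one month of timeblock rows (8.1 inheritance style).
CREATE TABLE timeblock_2006_01 (
    CHECK (starttime >= DATE '2006-01-01' AND starttime < DATE '2006-02-01')
) INHERITS (timeblock);

-- Archiving that month is then a metadata operation, not a row-by-row DELETE:
DROP TABLE timeblock_2006_01;
-- or: TRUNCATE timeblock_2006_01;  -- keep the partition, discard its rows
```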
e.)
Should I convert the columns to text? Or create an additional index
that expects ::text args? (If so, how?)
Or is there some other way to ensure the indices get used w/o having
to tag data in the queries?
Thanks,
-JimC
--
James Cloos <[EMAIL PROTECTED]> OpenPGP: 1024D/ED7DAEA6
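If the columns are bpchar and the queries pass text-typed arguments, an expression index is one way to make them match (a sketch; the table name is hypothetical, the npa/nxx columns are guessed from this thread):

```sql
-- Index the text form of the bpchar columns so text-typed comparisons
-- can use the index without casting tags in every query.
CREATE INDEX prefixes_npa_nxx_text_idx
    ON prefixes ((npa::text), (nxx::text));
```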
---
width=105) (actual time=64.831..64.831 rows=0 loops=1)
  Index Cond: ((npa = '7'::bpchar) AND (nxx = '4'::bpchar))
Total runtime: 64.927 ms
(3 rows)
BTW, I forgot to mention I'm at 8.1.4 on that box.
-JimC
--
James Cloos <[EMAIL PROTECT
r the WAL on
a fast SATA or SAS drive pair. I'm thinking that this would tend to have
good performance because the seek time for the data is very low, even if the
actual write speed can be slower than state of the art. 2GB CF isn't so
pricey any more.
Just wondering.
James
enough
capacity
for quite a lot of raw data, and can swap a card out every weekend and let
the
RAID rebuild it in rotation to keep them within conservative wear limits.
So long as the wear levelling works moderately well (and without needing FAT
on the disk or whatever) then I should be fine.
I
tement (and per row)? Seems to me it should be, at least for
ones that raise 'something changed' events. And/or allow specification
that events can fold and should be very cheap. (I don't know if this is the
case now? It's not as well documented how this works as I'd like.)
James
. Obviously update would be a
problem for my purposes, and I suppose a lot of event processing, it
isn't an issue.
Either way, details are at:
http://unsyntax.net/james/blog/tools+and+programming/2007/03/08/Dispatch-Merge-Database-Pattern
Cheers,
James
e well.
If the WAL write is committed to the disk platter, is it OK for
arbitrary data blocks to have failed to commit to disk so we can
recover the updates for committed transactions?
Is there any documentation on write barrier usage?
James
>sure but for any serious usage one either wants to disable that
>cache(and rely on tagged command queuing or how that is called in SATAII
>world) or rely on the OS/raidcontroller implementing some sort of
>FUA/write barrier feature(which linux for example only does in pretty
>recent kernels)
Does
e rate will result in returns processing
costs that destroy a very thin margin.
Granted, there was a move to very short warranties a while back,
but the trend has been for more realistic warranties again recently.
You can bet they don't do this unless the drives are generally pretty
good.
J
ng at a much
>higher rate, whether they're working hard or not.
Can you cite any statistical evidence for this?
James
design targets *that the manufacturer admits to* may be more
stringent, but I'm interested to know what the actual measured difference
is.
>From the sound of it, you DON'T have such evidence. Which is not a
surprise, because I don't have it either, and I do try to keep my eyes
I've got a table with ~121 million records in it. Select count on it
currently takes ~45 minutes, and an update to the table to set a value
on one of the columns I finally killed after it ran 17 hours and had
still not completed. Queries into the table are butt slow, and
The update query
Auto-vacuum has made Postgres a much more "friendly" system. Is there some
reason the planner can't also auto-ANALYZE in some situations?
Here's an example I ran into:
create table my_tmp_table (...);
insert into my_tmp_table (select some stuff from here and there);
select ... from my_tm
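Until something like auto-ANALYZE-on-demand exists, the usual workaround is an explicit ANALYZE between the bulk load and the first big query (a sketch mirroring the example above):

```sql
CREATE TABLE my_tmp_table (...);        -- as above
INSERT INTO my_tmp_table SELECT ...;    -- bulk load
ANALYZE my_tmp_table;                   -- give the planner real statistics
SELECT ... FROM my_tmp_table ...;       -- now planned with fresh stats
```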
ages instead of the table itself - they
should be much
more compact and also likely to be hot in cache.
Why *wouldn't* the planner do this?
James
If Sybase is still like SQL Server (or the other way around), it *may*
end up scanning the index *IFF* the index is a clustered index. If it's
a normal index, it will do a sequential scan on the table.
Are you sure its not covered? Have to check at work - but I'm off next
week so it'll hav
Mark Lewis wrote:
PG could scan the index looking for matches first and only load the
actual rows if it found a match, but that could only be a possible win
if there were very few matches, because the difference in cost between a
full index scan and a sequential scan would need to be greater tha
Alvaro Herrera wrote:
>> Just out of curiosity: Does Postgres store a duplicate of the data in the
index, even for long strings? I thought indexes only had to store the
string up to the point where there was no ambiguity, for example, if I have
"missing", "mississippi" and "misty", the index
We're thinking of building some new servers. We bought some a while back that
have ECC (error correcting) RAM, which is absurdly expensive compared to the
same amount of non-ECC RAM. Does anyone have any real-life data about the
error rate of non-ECC RAM, and whether it matters or not? In my
On Fri, 2007-05-25 at 20:16 +0200, Arnau wrote:
The point I'm worried is performance. Do you think the performance
would be better executing exactly the same queries only adding an extra
column to all the tables e.g. customer_id, than open a connection to the
only one customers DB and execut
Apologies for a somewhat off-topic question, but...
The Linux kernel doesn't properly detect my software RAID1+0 when I boot up.
It detects the two RAID1 arrays, the partitions of which are marked properly.
But it can't find the RAID0 on top of that, because there's no corresponding
device t
[EMAIL PROTECTED] wrote:
various people (not database experts) are pushing to install Oracle
cluster so that they can move all of these to one table with a
customerID column.
They're blowing smoke if they think Oracle can do this. One of my applications
had this exact same problem -- table-p
Scott Marlowe wrote:
OTOH, there are some things, like importing data, which are MUCH faster
in pgsql than in the big database.
An excellent point, I forgot about this. The COPY command is the best thing
since the invention of a shirt pocket. We have a database-per-customer design,
and one o
Jonah H. Harris wrote:
On 6/6/07, Craig James <[EMAIL PROTECTED]> wrote:
They're blowing smoke if they think Oracle can do this.
Oracle could handle this fine.
Oracle fell over dead, even with the best indexing possible,
tuned by the experts, and using partitions keyed to the
lable
facility. I realise in this
case that matching against the index does not allow the match count
unless we check
MVCC as we go, but I don't see why another thread can't be doing that.
James
Tyrrill, Ed wrote:
I have a table, let's call it A, whose primary key, a_id, is referenced
in a second table, let's call it B. For each unique A.a_id there are
generally many rows in B with the same a_id. My problem is that I want
to delete a row in A when the last row in B that references it
Tyrrill, Ed wrote:
QUERY PLAN
------------------------------------------------------------------------
Merge Left Join (cost=38725295.93..42505394.70 rows=13799645 width=8) (actual time=6503583.342..82
On 2007-06-11 Christo Du Preez wrote:
I really hope someone can shed some light on my problem. I'm not sure
if this is a postgres or postgis issue.
Anyway, we have 2 development laptops and one live server; somehow I
managed to get the same query to perform very well on my laptop, but
on both the
Looking for replication solutions, I find:
Slony-I
Seems good, single master only, master is a single point of failure,
no good failover system for electing a new master or having a failed
master rejoin the cluster. Slave databases are mostly for safety or
for parallelizing queries for performan
Thanks to all who replied and filled in the blanks. The problem with the web
is you never know if you've missed something.
Joshua D. Drake wrote:
Looking for replication solutions, I find...
Slony-II
Dead
Wow, I'm surprised. Is it dead for lack of need, lack of resources, too
complex, or
Andreas Kostyrka wrote:
Slony provides near instantaneous failovers (in the single digit seconds
range). You can script an automatic failover if the master server
becomes unreachable.
But Slony slaves are read-only, correct? So the system isn't fully functional
once the master goes down.
T
Markus Schiltknecht wrote:
Not quite... there's still Postgres-R, see www.postgres-r.org. And I'm
continuously working on it, despite not having updated the website for
almost a year now...
I planned on releasing the next development snapshot together with 8.3,
as that seems to be delayed, th
nks for taking the time to put this together and for offering the
services of your team.
Kind regards,
James
Dolafi, Tom wrote:
min(fmin) | max(fmin) | avg(fmin)
        1 |  55296469 |  11423945
min(fmax) | max(fmax) | avg(fmax)
       18 |      3288 |  11424491
There are 5,704,211 rows in the table.
When you're looking for weird index problems, it's more in
I have the same schema in two different databases. In "smalldb", the two tables of
interest have about 430,000 rows, in "bigdb", the two tables each contain about 5.5
million rows. I'm processing the data, and for various reasons it works out well to process it in
100,000 row chunks. However
The two queries below produce different plans.
select r.version_id, r.row_num, m.molkeys from my_rownum r
join my_molkeys m on (r.version_id = m.version_id)
where r.version_id >= 320
and r.version_id < 330
order by r.version_id;
select r.version_id, r.row_num, m.molkeys from my_rownu
Sorry, I forgot to mention: This is 8.1.4, with a fairly ordinary configuration
on a 4 GB system.
Craig
Craig James wrote:
The two queries below produce different plans.
select r.version_id, r.row_num, m.molkeys from my_rownum r
join my_molkeys m on (r.version_id = m.version_id)
where
Here's an oddity. I have 10 databases, each with about a dozen connections to Postgres
(about 120 connections total), and at midnight they're all idle. These are mod_perl
programs (like a FastCGI -- they stay connected so they're ready for instant service).
So using "ps -ef" and grep, we fin
Bruno Rodrigues Siqueira wrote:
Who can help me? My SELECT on a database with 1 million records,
using an expression index, takes 6 seconds…
Run your query using
EXPLAIN ANALYZE SELECT ... your query ...
and then post the results to this newsgroup. Nobody can help until they see
the res
Tilmann Singer wrote:
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [20070728 21:05]:
Let's try putting the sort/limit in each piece of the UNION to speed them up
separately.
SELECT * FROM (
(SELECT * FROM large_table lt
WHERE lt.user_id = 12345
ORDER BY created_at DESC LIMIT 10) AS q1
UNION
(S
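A hedged completion of that shape (the second branch and all table and column names are my guesses, since the original message is cut off):

```sql
-- Push the sort/limit into each UNION branch so each can use its own
-- index, then sort/limit the small combined result.
SELECT * FROM (
    (SELECT * FROM large_table lt
     WHERE lt.user_id = 12345
     ORDER BY created_at DESC LIMIT 10)
    UNION
    (SELECT * FROM large_table lt
     WHERE lt.user_id IN (SELECT contact_id FROM relationships
                          WHERE user_id = 12345)
     ORDER BY created_at DESC LIMIT 10)
) AS q
ORDER BY created_at DESC
LIMIT 10;
```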
this?
James
Scott Marlowe wrote:
Where unixes generally outperform windows is in starting up new
backends, better file systems, and handling very large shared_buffer
settings.
Why do you think that UNIX systems are better at handling large shared
buffers than Windows?
32 bit Windows systems can suffer f
Carlo Stonebanks wrote:
Isn't it just easier to assume that Windows Server can't do anything right?
;-)
Well, avoiding the ;-) - people do, and it's remarkably foolish of them. It's
a long-standing whinge that many people with a UNIX background seem to
just assume that Windows sucks, but you
If I delete a whole bunch of tables (like 10,000 tables), should I vacuum
system tables, and if so, which ones? (This system is still on 8.1.4 and isn't
running autovacuum).
Thanks,
Craig
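In my experience the catalogs that bloat when you drop many tables are the ones below; this is a sketch of what I'd vacuum, not advice from the thread:

```sql
VACUUM pg_class;      -- one row per dropped table and index
VACUUM pg_attribute;  -- one row per dropped column
VACUUM pg_type;       -- row-type entries for the dropped tables
VACUUM pg_depend;     -- dependency entries
VACUUM pg_index;      -- entries for the tables' indexes
```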
Carlos Moreno wrote:
Anyone has tried a setup combining tablespaces with NFS-mounted partitions?
There has been some discussion of this recently, you can find it in the
archives (http://archives.postgresql.org/). The word seems to be that NFS can
lead to data corruption.
Craig
-
Luiz K. Matsumura wrote:
It is connected at full 100Mb and transfers many things quickly to clients.
It is running Apache and JBoss; the transfer rate is good. I did scp to copy
many archives and it is as quick as the old server.
I have no idea how to continue researching this problem. Now I'm going
to do s
On 3/17/10 2:52 AM, Greg Stark wrote:
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C wrote:
I was thinking in something like that, except that the factor I'd use
would be something like 50% or 100% of current size, capped at (say) 1 GB.
This turns out to be a bad idea. One of the first thing Oracl
On 3/22/10 11:47 AM, Scott Carey wrote:
On Mar 17, 2010, at 9:41 AM, Craig James wrote:
On 3/17/10 2:52 AM, Greg Stark wrote:
On Wed, Mar 17, 2010 at 7:32 AM, Pierre C wrote:
I was thinking in something like that, except that the factor I'd use
would be something like 50% or 10
Hannu Krosing wrote:
Pulling the plug should not corrupt a postgreSQL database, unless it was
using disks which lie about write caching.
Didn't we recently put the old wives' 'the disks lied' tale to bed in
favour of actually admitting that some well known filesystems and
software raid system
On 3/26/10 4:57 PM, Richard Yen wrote:
Hi everyone,
We've recently encountered some swapping issues on our CentOS 64GB Nehalem
machine, running postgres 8.4.2. Unfortunately, I was foolish enough to set
shared_buffers to 40GB. I was wondering if anyone would have any insight into
why the sw
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike
events. During these spikes, the system is completely unresponsi
On 4/7/10 2:40 PM, Joshua D. Drake wrote:
On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
CPU loads. Today it hit 100 CPU loads
On 4/7/10 3:36 PM, Joshua D. Drake wrote:
On Wed, 2010-04-07 at 14:45 -0700, Craig James wrote:
On 4/7/10 2:40 PM, Joshua D. Drake wrote:
On Wed, 2010-04-07 at 14:37 -0700, Craig James wrote:
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU
On 4/7/10 2:59 PM, Tom Lane wrote:
Craig James writes:
Most of the time Postgres runs nicely, but two or three times a day we get a
huge spike in the CPU load that lasts just a short time -- it jumps to 10-20
CPU loads. Today it hit 100 CPU loads. Sometimes days go by with no spike
events
On 4/7/10 5:47 PM, Robert Haas wrote:
On Wed, Apr 7, 2010 at 6:56 PM, David Rees wrote:
max_fsm_pages = 1600
max_fsm_relations = 625000
synchronous_commit = off
You are playing with fire here. You should never turn this off unless
you do not care if your data becomes irrecoverably corrup
Now that it's time to buy a new computer, Dell has changed their RAID models
from the Perc6 to Perc H200 and such. Does anyone know what's inside these? I
would hope they've stuck with the Megaraid controller...
Also, I can't find any info on Dell's site about how these devices can be
config
On 5/12/10 4:55 AM, Kevin Grittner wrote:
venu madhav wrote:
we display in sets of 20/30 etc. The user also has the option to
browse through any of those records hence the limit and offset.
Have you considered alternative techniques for paging? You might
use values at the edges of the page t
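The "values at the edges of the page" idea is keyset pagination; a sketch under assumed names, using psql-style :variables for the remembered edge values:

```sql
-- Remember the last (sort_key, id) shown on the previous page, then
-- start the next page strictly past it. The row-value comparison lets
-- a btree index on (sort_key, id) serve the whole page cheaply,
-- unlike OFFSET, which must skip over all earlier rows.
SELECT *
FROM records
WHERE (sort_key, id) > (:last_sort_key, :last_id)
ORDER BY sort_key, id
LIMIT 20;
```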
On 5/26/10 9:47 AM, Stephen Frost wrote:
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
Since PostgreSQL is written in C, I assume there is no
such additional overhead. I assume that the PL/PGSQL implementation at its
heart also uses SPI to perform those executions. Is that a fair sta
On 5/18/10 3:28 PM, Carlo Stonebanks wrote:
Sample code:
SELECT *
FROM MyTable
WHERE foo = 'bar' AND MySlowFunc('foo') = 'bar'
Let's say this required a SEQSCAN because there were no indexes to
support column foo. For every row where foo <> 'bar' would the filter on
the SEQSCAN short-circuit th
On 5/27/10 2:28 PM, Kevin Grittner wrote:
Craig James wrote:
It would be nice if Postgres had a way to assign a cost to every
function.
The COST clause of CREATE FUNCTION doesn't do what you want?
http://www.postgresql.org/docs/8.4/interactive/sql-createfunction.html
Cool ... I must
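For reference, the clause in question looks like this (the function body is a placeholder for the real, expensive work):

```sql
CREATE FUNCTION myslowfunc(text) RETURNS text AS $$
    SELECT $1;  -- stand-in for the real work
$$ LANGUAGE sql
COST 10000;  -- default is 100 for non-C-language functions
```

With a high COST, the planner evaluates the function after cheaper clauses in a multi-condition WHERE.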
I'm testing/tuning a new midsize server and ran into an inexplicable problem.
With an RAID10 drive, when I move the WAL to a separate RAID1 drive, TPS drops
from over 1200 to less than 90! I've checked everything and can't find a
reason.
Here are the details.
8 cores (2x4 Intel Nehalem 2 G
On 6/2/10 4:40 PM, Mark Kirkwood wrote:
On 03/06/10 11:30, Craig James wrote:
I'm testing/tuning a new midsize server and ran into an inexplicable
problem. With an RAID10 drive, when I move the WAL to a separate RAID1
drive, TPS drops from over 1200 to less than 90! I've checked
ever
On 6/10/10 12:34 PM, Anne Rosset wrote:
Jochen Erwied wrote:
Thursday, June 10, 2010, 8:36:08 PM you wrote:
psrdb=# (SELECT
psrdb(# MAX(item_rank.rank) AS maxRank
psrdb(# FROM
psrdb(# item_rank item_rank
psrdb(# WHERE
psrdb(# item_rank.project_id='proj2783'
psrdb(# AND item_rank.pf_id IS NULL
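One hedged idea for a query shaped like the one above: a partial index matching the IS NULL clause, so MAX(rank) for a project can be read straight off the index instead of scanning:

```sql
CREATE INDEX item_rank_proj_rank_idx
    ON item_rank (project_id, rank)
    WHERE pf_id IS NULL;
```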
[oops, didn't hit "reply to list" first time, resending...]
On 6/15/10 9:02 AM, Steve Wampler wrote:
Chris Browne wrote:
"jgard...@jonathangardner.net" writes:
My question is how can I configure the database to run as quickly as
possible if I don't care about data consistency or durability? T
On 6/16/10 12:00 PM, Josh Berkus wrote:
* fsync=off => 5,100
* fsync=off and synchronous_commit=off => 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
I found that pgbench has "noise" of about 20% (I posted about this
Can anyone tell me what's going on here? I hope this doesn't mean my system
tables are corrupt...
Thanks,
Craig
select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_relation
On 6/24/10 4:19 PM, Alvaro Herrera wrote:
Excerpts from Craig James's message of Thu Jun 24 19:03:00 -0400 2010:
select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_rel
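The name-based lookup can fail or be ambiguous across schemas; passing the OID instead avoids that (a sketch of the usual fix, applied to the query above):

```sql
-- pg_relation_size accepts an OID, which is unambiguous even when
-- the same relname exists in several schemas.
SELECT relname, pg_relation_size(pg_class.oid)
FROM pg_class
WHERE pg_get_userbyid(relowner) = 'emol_warehouse_1'
  AND relname NOT LIKE 'pg_%'
ORDER BY pg_relation_size(pg_class.oid) DESC;
```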
I'm reviving this question because I never figured it out. To summarize: At random
intervals anywhere from a few times per hour to once or twice a day, we see a huge spike
in CPU load that essentially brings the system to a halt for up to a minute or two.
Previous answers focused on "what is
On 6/24/10 9:04 PM, Tom Lane wrote:
Craig James writes:
So what is it that will cause every single Postgres backend to come to life at
the same moment, when there's no real load on the server? Maybe if a backend
crashes? Some other problem?
sinval queue overflow comes to
On 6/25/10 7:47 AM, Tom Lane wrote:
Craig James writes:
On 6/24/10 9:04 PM, Tom Lane wrote:
sinval queue overflow comes to mind ... although that really shouldn't
happen if there's "no real load" on the server. What PG version is
this?
8.3.10. Upgraded based on your
On 6/25/10 9:41 AM, Kevin Grittner wrote:
Craig James wrote:
I always just assumed that lots of backends that would be harmless
if each one was doing very little.
Even if each is doing very little, if a large number of them happen
to make a request at the same time, you can have problems
I've got a new server and want to make sure it's running well. Are these
pretty decent numbers?
8 cores (2x4 Intel Nehalem 2 GHz)
12 GB memory
12 x 7200 SATA 500 GB disks
3WARE 9650SE-12ML RAID controller with BBU
WAL on ext2, 2 disks: RAID1 500GB, blocksize=4096
Database on ext4, 8 disks:
On 6/25/10 3:28 PM, Kevin Grittner wrote:
wrote:
With the PostgreSQL type tables I am not so certain how the data
is arranged within the one file. Does having the data all in one
database allow PostgreSQL to better utilize indexes and caches or
does having a number of smaller databases provide
On 6/25/10 12:03 PM, Greg Smith wrote:
Craig James wrote:
I've got a new server and want to make sure it's running well.
Any changes to the postgresql.conf file? Generally you need at least a
moderate shared_buffers (1GB or so at a minimum) and checkpoint_segments
(32 or higher) in
On 6/30/10 9:42 AM, Dave Crooke wrote:
I haven't jumped in yet on this thread, but here goes
If you're really looking for query performance, then any database which
is designed with reliability and ACID consistency in mind is going to
inherently have some mis-fit features.
Some other ideas
On 7/2/10 6:59 AM, Eliot Gable wrote:
Yes, I have two pl/pgsql functions. They take a prepared set of data
(just the row id of the original results, plus the particular priority
and weight fields) and they return the same set of data with an extra
field called "order" which contains a numerical o
On 7/8/10 9:31 AM, Ryan Wexler wrote:
Thanks a lot for all the comments. The fact that both my windows box
and the old linux box both show a massive performance improvement over
the new linux box seems to point to hardware to me. I am not sure how
to test the fsync issue, but i don't see how th
On 7/8/10 12:47 PM, Ryan Wexler wrote:
On Thu, Jul 8, 2010 at 12:46 PM, Kevin Grittner
<kevin.gritt...@wicourts.gov> wrote:
Ryan Wexler <r...@iridiumsuite.com> wrote:
> One thing I don't understand is why BBU will result in a huge
> performance gain. I thought
a disk that's exceptionally fast at flushing
its buffers.
Craig
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Craig James
Sent: Thursday, July 08, 2010 4:02 PM
To: pgsql-performance@postgresql.org
Subj
On 7/21/10 5:47 PM, Craig Ringer wrote:
On 21/07/10 22:59, Greg Smith wrote:
A useful trick to know is that if you replace the version number
with "current", you'll get to the latest version most of the time
(sometimes the name of the page is changed between versions, too, but
this isn't that
On 7/21/10 6:47 PM, Greg Smith wrote:
Craig James wrote:
By using "current" and encouraging people to link to that, we could
quickly change the Google pagerank so that a search for Postgres would
turn up the most-recent version of documentation.
How do you propose to encourage pe
On 7/23/10 2:22 AM, Torsten Zühlsdorff wrote:
Craig James wrote:
A useful trick to know is that if you replace the version number
with "current", you'll get to the latest version most of the time
(sometimes the name of the page is changed between versions, too, but
this isn't
On 7/24/10 5:57 AM, Torsten Zühlsdorff wrote:
Craig James wrote:
The problem is that Google ranks pages based on inbound links, so
older versions of Postgres *always* come up before the latest version
in page ranking.
Since 2009 you can deal with this by defining the canonical-version
I can query either my PARENT table joined to PRICES, or my VERSION table joined
to PRICES, and get an answer in 30-40 msec. But put the two together, it jumps
to 4 seconds. What am I missing here? I figured this query would be nearly
instantaneous. The VERSION.ISOSMILES and PARENT.ISOSMILES
On 8/5/10 11:28 AM, Kenneth Cox wrote:
I am using PostgreSQL 8.3.7 on a dedicated IBM 3660 with 24GB RAM
running CentOS 5.4 x86_64. I have a ServeRAID 8k controller with 6 SATA
7500RPM disks in RAID 6, and for the OLAP workload it feels* slow
My current performance is 85MB/s write, 151 MB/s
On 8/18/10 12:24 PM, Samuel Gendler wrote:
With barriers off, I saw a transaction rate of about 1200. With
barriers on, it was closer to 1050. The test had a concurrency of 40
in both cases.
I discovered there is roughly 10-20% "noise" in pgbench results after running
the exact same test ove
On 8/27/10 5:21 PM, Ozer, Pam wrote:
I have a query that
Select Distinct VehicleId
From Vehicle
Where VehicleMileage between 0 and 15000.
I have an index on VehicleMileage. Is there another way to put an index on a
between? The index is not being picked up. It does get picked up when I run
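If that 0-15000 band is what's queried most often, one option (a guess on my part, not from the thread) is a partial index, which gives the planner a small, cheap path for exactly that range:

```sql
CREATE INDEX vehicle_low_mileage_idx
    ON Vehicle (VehicleId)
    WHERE VehicleMileage BETWEEN 0 AND 15000;
```

Note that if the range matches a large fraction of the table, a sequential scan may still win, and that choice is correct.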
On 9/14/10 9:10 AM, mark wrote:
Hello,
I am relatively new to postgres (just a few months) so apologies if
any of you are bearing with me.
I am trying to get a rough idea of the amount of bang for the buck I
might see if I put in a connection pooling service into the enviroment
vs our current m
On 10/9/10 6:47 PM, Scott Marlowe wrote:
On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel wrote:
I know that there haven been many discussions on the slowness of count(*) even
when an index is involved because the visibility of the rows has to be
checked. In the past I have seen many suggestions a
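When an exact count isn't required, the planner's own estimate skips the visibility checks entirely (a standard trick, not a suggestion from this thread; accuracy depends on how recently the table was analyzed):

```sql
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';  -- table name is a placeholder
```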
On 10/15/10 6:58 PM, Navkirat Singh wrote:
I am interested in finding out the pros/cons of using UUID as a
primary key field. My requirement states that UUID would be perfect
in my case as I will be having many small databases which will link
up to a global database using the UUID. Hence, the n
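A sketch of what that might look like with the uuid-ossp contrib module installed (names are illustrative; note the storage and index-locality costs of random UUID keys versus a sequential bigint):

```sql
-- Assumes the uuid-ossp contrib module is installed in this database.
CREATE TABLE account (
    account_id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    name       text NOT NULL
);
```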
Kevin Grittner wrote:
On what do you base that assumption? I assume that we send a full
8K to the OS cache, and the file system writes disk sectors
according to its own algorithm. With either platters or BBU cache,
the data is persisted on fsync; why do you see a risk with one but
not the other
and comparing.
It can help to use recent versions of gcc with -march=native. And
recent versions of glibc offer improved string ops on recent hardware.
-JimC
--
James Cloos OpenPGP: 1024D/ED7DAEA6
ecided to bin
it. Especially handy that SQLite3 has WAL now. (And one last dig - TC
didn't even
have a checksum that would let you tell when it had been broken: but it
might all be fixed now
of course, I don't have time to check.)
James