WHERE c1 = p1;
> END;
Perhaps I'm confused, but I didn't think PostgreSQL had stored procedures. If
the code you're actually running looks like this then I don't think you're
using PostgreSQL.
Cheers,
Steve
to disk is a win - think SSD or RAID with BBU cache and with a
relatively modest 13GB database you should be able to spec enough RAM to
keep everything in memory.
Cheers,
Steve
But CPU is often not the limiting factor. With a better understanding of
your needs, people here can offer suggestions for memory, storage, pooling,
network, etc.
Cheers,
Steve
On Tue, May 23, 2017 at 11:29 AM, Jarek wrote:
> Hello!
>
> I have a heavily loaded PostgreSQL server, which
greatest and will get critical
updates without worrying about any distribution packager delays.
Cheers,
Steve
worth reading:
https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/
Cheers,
Steve
On 06/05/2015 12:28 PM, Steve Crawford wrote:
On 06/05/2015 11:38 AM, Sheena, Prabhjot wrote:
When I run vacuum analyze it fixes the problem but after 1 or 2 days
the problem comes back
Is autovacuum running and using what settings?
(select name, setting from pg_settings where name ~ 'statistics';)
Cheers,
Steve
P.S. The convention on the PostgreSQL mailing lists is to bottom-post,
not top-post replies.
Rows Removed by Filter: 22
Total runtime: 19.769 ms
A couple initial questions:
1. Does the result change if you analyze the table and rerun the query?
2. Are there any non-default settings for statistics collection on your
database?
-Steve
is about as fast as a file copy, much
faster than pg_restore tends to be.
Cheers,
Steve
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Do I leave
indexes as is and evaluate which ones to drop later? Any recommendations
on distribution and/or kernels (and kernel tuning)? PostgreSQL tuning
starting points? Whatever comes to mind.
Thanks,
Steve
e 14.04 is 3.13.
Cheers,
Steve
a large resource-intensive query, a query holding locks that
prevent the other connections' queries from completing or a variety of
other things.
If you are looking to solve/prevent the undescribed "issue", please
provide more detail.
-Steve
't) the approach I've
moved to using is to tag the records with ASN on import, using an in-core
patricia trie kept in sync with the address ranges listed in the database to do
the work. It's faster still, at the cost of a fair bit more work during import.
(It's also a little more accurate in some cases, as the mapping of IP address
to list of ASNs is dynamic, and you usually want the ASN at the time of an
incident, not the one now.)
Cheers,
Steve
g and even they most likely couldn't answer this.
The closest you are likely to come is to read and reread "PostgreSQL
High Performance" which is an invaluable resource.
Cheers,
Steve
needs a "nudge" to use
an index on, say, a varchar column being compared to, perhaps, a text
value or column in which case casting to the exact data-type being
indexed can be a big win.
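As a sketch of that pattern (the table, column, and literal here are made up for illustration): if the indexed column is varchar, casting the comparison value to the exact indexed type keeps the comparison index-friendly.

```sql
-- Hypothetical schema: tags(id int, label varchar(64)), with an
-- index on label. Cast the text comparand to the indexed type:
SELECT id, label
FROM tags
WHERE label = 'widget'::varchar(64);
```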
Cheers,
Steve
nt or patients are issued devices at
overlapping times (i.e. using two devices at one time) then the query
gets more complicated but "with..." is still a likely usable construct.
Cheers,
Steve
p information on a specific patient or device?
Cheers,
Steve
has a nasty security issue. Upgrade. Now.
Cheers,
Steve
g on my 10.8.3 box. (I'm setting it higher in
/etc/sysctl.conf
and have no problems, but it stopped postgres.app starting when I removed
that).
Cheers,
Steve
On 13-04-14 08:06 PM, Steve Singer wrote:
On 13-04-13 04:54 PM, Jeff Janes wrote:
If you are trying to make your own private copy of 9.2, then removing
the fudge factor altogether is probably the way to go. But if you want
to help improve future versions, you probably need to test with the
On 13-04-13 04:54 PM, Jeff Janes wrote:
On Sat, Apr 13, 2013 at 9:14 AM, Steve Singer <ssin...@ca.afilias.info> wrote:
indexTotalCost += index->pages * spc_random_page_cost / 10.0;
Is driving my high costs on the inner loop. The index has 2-5
million pages dep
On 13-04-12 09:20 PM, Jeff Janes wrote:
On Thursday, April 11, 2013, Steve Singer wrote:
I think the reason why it is picking the hash join based plans is
because of
Index Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 b
(cost=0.00..503.86 rows=1 width=10) (actual time
On 13-04-10 07:54 PM, Steve Singer wrote:
On 13-04-10 02:06 PM, Jeff Janes wrote:
On Wed, Apr 10, 2013 at 6:49 AM, Steve Singer <ssin...@ca.afilias.info> wrote:
I think the index recheck means your bitmap is overflowing (i.e. needing
more space than work_mem) and so keeping on
On 13-04-10 02:06 PM, Jeff Janes wrote:
On Wed, Apr 10, 2013 at 6:49 AM, Steve Singer <ssin...@ca.afilias.info> wrote:
I think the index recheck means your bitmap is overflowing (i.e. needing
more space than work_mem) and so keeping only the pages which have at
least one match,
On 13-04-10 09:56 AM, k...@rice.edu wrote:
On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:
Hi Steve,
The one thing that stands out to me is that you are working with 200GB of
data on a machine with 4-8GB of ram and you have the random_page_cost set
to 2.0. That is almost
han the planner expects?
Steve
HashAggregate (cost=11972282.27..11972448.32 rows=11070 width=51) (actual
time=3523526.572..3523526.646 rows=30 loops=1)
-> Hash Join (cost=1287232.78..11220656.06 rows=724459 width=51) (actual
time=1702760.590..1898522.706 rows=662583 loops=1)
nce/fast13/understanding-robustness-ssds-under-power-fault
Kind of messes with the "D" in ACID.
Cheers,
Steve
On 03/12/2013 08:06 AM, Misa Simic wrote:
Thanks Steve
Well, the full story is too complex - but point was - whatever
blackbox does - it last 0.5 to 2secs per 1 processed record (maybe I
was wrong but I thought the reason why it takes the time how much it
needs to actually do the task -CPU
verview of your schemas and the type of processing you are attempting
on them.
Cheers,
Steve
pg_putcopyend();
The details are in DBD::Pg. I use this a lot for doing big-ish (tens of
millions of rows) bulk inserts. It's not as fast as you can get, but it's
probably as fast as you can get with perl.
Cheers,
Steve
y that the
statistics will not have been automatically updated before the
subsequent query is planned so an explicit ANALYZE between the update
and the query can be of value.
Cheers,
Steve
same RAID stripe-size, etc.?
Cheers,
Steve
r queries of this type.
What is the structure of your table? You can use "\d+
dealer_vehicle_details" in psql.
Have you tuned PostgreSQL in any way? If so, what?
Cheers,
Steve
rt hostname
only. If you want to know if two points in time differ, just compare them.
Spending a couple hours reading
http://www.postgresql.org/docs/current/static/datatype-datetime.html
will be time well spent.
Cheers,
Steve
a client could be
updating some table or tables whenever the reset script isn't actively
working on that same table leading to unexplained weird test results.
Cheers,
Steve
s intended. If you were to add -C so each query required a
new client connection a different picture would emerge. Same thing if
you had 2000 client connections of which only a handful were running
queries at any moment.
Cheers,
Steve
to the question: what, other than the db, runs on this machine?
Cheers,
Steve
ects/reorg/)
Cheers,
Steve
On 04/13/2012 09:43 AM, Claudio Freire wrote:
On Fri, Apr 13, 2012 at 1:36 PM, Steve Crawford
wrote:
If they are permanent tables used for temporary storage then making them
unlogged may be beneficial. But actual temporary tables *are* unlogged and
attempting to create an unlogged temporary table will raise an error.
Cheers,
Steve
On 03/30/2012 05:51 PM, Josh Berkus wrote:
So this turned out to be a Linux kernel issue. Will document it on
www.databasesoup.com.
Anytime soon? About to build two PostgreSQL servers and wondering if you
have uncovered a kernel version or similar issue to avoid.
Cheers,
Steve
threaded postgresql.
Its licensing is pretty much incompatible with PostgreSQL too.
Cheers,
Steve
er for your question and a bit offtopic.
Why do you take SAS disks for the OS and not much cheaper SATA ones?
Here's Intel's (very general) take. Your OS disks may not justify SAS on
performance alone but other aspects may sway you.
http://www.intel.com/support/motherboards/server/s
option
if you are short on disk-space. You may be able to start by clustering
your smaller tables and move toward the larger ones as you free
disk-space. Be sure to run ANALYZE on any table that you have CLUSTERed.
You might find it useful to make CLUSTER part of your regular maintenance.
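A minimal sketch of that maintenance step, with hypothetical table and index names:

```sql
-- CLUSTER rewrites and reindexes the table, physically ordering it
-- by the chosen index; it takes an exclusive lock while it runs.
CLUSTER orders USING orders_created_idx;

-- CLUSTER does not refresh planner statistics, so analyze afterwards.
ANALYZE orders;
```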
Che
e incoming data
into a temp table then move the data with a variant of:
INSERT INTO main_table (SELECT ... FROM incoming_table WHERE NOT EXISTS
(SELECT 1 FROM main_table WHERE ...));
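Fleshed out slightly (the table names and key column are invented for illustration), the staging pattern looks like:

```sql
-- Load incoming rows into a temp table, then insert only those
-- whose key is not already present in the main table.
CREATE TEMP TABLE incoming_events (LIKE events);
COPY incoming_events FROM '/tmp/events.csv' WITH CSV;  -- assumed load step

INSERT INTO events
SELECT i.*
FROM incoming_events i
WHERE NOT EXISTS (
    SELECT 1 FROM events e WHERE e.event_id = i.event_id
);
```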
Cheers,
Steve
On 02/23/2012 10:38 AM, Alessandro Gagliardi wrote:
On Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford
<scrawf...@pinpointresearch.com> wrote:
The documentation has information like "This parameter can only be
set in the postgresql.conf file or on the serve
em tables have a lot of information on table
vacuuming and analyzing:
select
relname,
last_vacuum,
last_autovacuum,
last_analyze,
last_autoanalyze,
vacuum_count,
autovacuum_count,
analyze_count,
autoanalyze_count
from
pg_stat_user_tables;
Cheers,
Steve
n a Windows org so your options are limited,
but word-on-the-street is that for high-performance production use,
install PostgreSQL on *nix.
Cheers,
Steve
don't modify indexed columns.
>
> -Kevin
--
Steve Horn
http://www.stevehorn.cc
st...@stevehorn.cc
http://twitter.com/stevehorn
740-503-2300
Hello all!
I have a very simple query that I am trying to wrap into a function:
SELECT gs.geo_shape_id AS gid,
gs.geocode
FROM geo_shapes gs
WHERE gs.geocode = 'xyz'
AND geo_type = 1
GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;
This query runs in about 10 milliseconds.
Now my goal is to
SQL (EXECUTE 'your query string' inside the function)
>
> Steve *is* using EXECUTE, so that doesn't seem to be the answer. I'm
> wondering about datatype mismatches myself --- the function form is
> forcing the parameter to be char(9), which is not a constrai
Note: I am guessing that your seen_its table just grows and grows but is
rarely, if ever, modified. If it is basically a log-type table it will
be a prime candidate for partitioning on date and queries like this will
only need to access a couple relatively small child tables instead of
one massive
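A sketch of what that partitioning could look like for a log-style table, using the inheritance-based scheme of that era (the `seen_its` name comes from the post; the `seen_at` column and monthly ranges are assumptions):

```sql
-- One child table per month, constrained to its date range.
CREATE TABLE seen_its_2011_11 (
    CHECK (seen_at >= DATE '2011-11-01' AND seen_at < DATE '2011-12-01')
) INHERITS (seen_its);

-- With constraint exclusion on, a date-bounded query scans only
-- the child tables whose CHECK constraints can match.
SET constraint_exclusion = partition;
SELECT count(*)
FROM seen_its
WHERE seen_at >= DATE '2011-11-01' AND seen_at < DATE '2011-11-08';
```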
On Nov 17, 2011, at 5:12 PM, Ben Chobot wrote:
> I have two queries in PG 9.1. One uses an index like I would like, the other
> does not. Is this expected behavior? If so, is there any way around it?
I don't think you want the group by in that first query.
Cheers,
Steve
>
ay need to intervene. The CLUSTER statement will completely rewrite
and reindex your table (and will physically reorder the table based on
the selected index). Note: CLUSTER requires an exclusive lock on the table.
Cheers,
Steve
e bytea is not so the ultimate solution will depend on whether
the input string is the octal representation of an un-encoded sequence
of bytes or represents a string of ASCII/UTF-8/whatever... encoded text.
Cheers,
Steve
idation is left as
an exercise for the reader...:)):
create or replace function octal_string_to_text(someoctal text) returns
text as $$
declare
binstring text;
begin
execute 'select E''' || regexp_replace($1, E'(\\d{3})', E'\\1',
'g')
On 04/11/2011 02:32 PM, Scott Marlowe wrote:
On Mon, Apr 11, 2011 at 12:12 PM, Joshua D. Drake
wrote:
On Mon, 11 Apr 2011 13:09:15 -0500, "Kevin Grittner"
wrote:
Glyn Astill wrote:
The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz
Which has hyperthreading.
our current servers are
s your entire database active set fits in RAM I'd expect your
cores to sit idle waiting on disk IO much of the time.
Don't forget that you need a BBU for whichever RAID controller
you need, or it won't be able to safely do writeback caching, and
you'll lose a lot of the
I'm not sure what you're intending by comparing ownername to both firstname and
lastname. I don't think that'll do anything useful, and doubt it'll ever match.
Are you expecting firstname and lastname to be substrings of ownername? If so,
you might need to use wildcar
mplified example but you see the problem.
Cheers,
Steve
rse_str;
END;'
LANGUAGE 'plpgsql' IMMUTABLE;
(Normalizing the email address so that you store local part and domain part
separately is even better, but an index on the reverse of the domain is still
useful for working with subdomains).
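For example (assuming the `reverse_str()` function above and a made-up `users` table), an expression index on the reversed address supports lookups anchored on the domain end:

```sql
-- Expression index on the reversed address; reverse_str() must be
-- IMMUTABLE for this to work. (LIKE prefix matching uses the index
-- given a suitable operator class or the C locale.)
CREATE INDEX users_email_rev_idx ON users (reverse_str(email));

-- Exact-domain lookup: the reversed pattern becomes a left-anchored
-- prefix, which the expression index can satisfy.
SELECT email
FROM users
WHERE reverse_str(email) LIKE reverse_str('@example.com') || '%';
```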
Cheers,
Steve
ell as how different
query threads are seeing which version)? (2) Are there any available benchmarks
that can measure this delay? (3) What are relevant config parameters that will
reduce this delay?
Thanks for your patience with my ignorance of MVCC (still learning more about
it),
Steve
On 10-10-27 02:14 PM, Divakar Singh wrote:
yes this is a very clearly visible problem.
The difference b/w oracle and PG increases with more rows.
when oracle takes 3 GB, PG takes around 6 GB.
I only use varchar.
I will try to use your tips on "smart table layout, toast compression".
Assuming thes
On 10-10-25 02:31 PM, Divakar Singh wrote:
> My questions/scenarios are:
>
> 1. How does PostgreSQL perform when inserting data into an indexed
> (type: btree)
> table? Is it true that as you add the indexes on a table, the
> performance
> deteriorates significantly whereas Oracle does no
On 10/20/2010 09:45 PM, Scott Marlowe wrote:
On Wed, Oct 20, 2010 at 8:25 PM, Joshua D. Drake wrote:
On Wed, 2010-10-20 at 22:13 -0400, Bruce Momjian wrote:
Ben Chobot wrote:
On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:
I'm weighing options for a new s
oward adding a second RAID-set and splitting off the WAL
traffic? Or something else?
Cheers,
Steve
On Jul 21, 2010, at 7:27 PM, Greg Smith wrote:
> Steve Atkins wrote:
>> If http://postgresql.org/docs/9.0/* were to 302 redirect to
>> http://postgresql.org/docs/current/* while 9.0 is the current release (and
>> similarly for 9.1 and so on) I suspect we'd find
; beyond raising awareness of the issue periodically on
> these lists, like I did on this thread.
Cheers,
Steve
L and user tables on the memory-backed file
systems? I wouldn't think the performance impact of leaving
the rest of the stuff on disk would be that large.
Or does losing WAL files mandate a new initdb?
--
Steve Wampler -- swamp...@noao.edu
The gods that smiled on your birth are now laughing
PostgreSQL logs? Anything
interesting, there?
Cheers,
Steve
On 03/31/2010 11:11 PM, Craig Ringer wrote:
Jaime Casanova wrote:
On Wed, Mar 31, 2010 at 5:33 PM, raghavendra t
wrote:
Why are you doing that?
Our table face lot of updates and deletes in a day, so we prefer reindex to
update the indexes as well overcome with a corrupted index.
do you ha
.conf):
autovacuum = on
autovacuum_naptime = 300  # time between autovacuum runs, in secs
Then you can tune it if you need to but at least it will be looking for
things that are vacuumworthy every 5 minutes.
Cheers,
Steve
"vacuumdb --analyze --verbose iwt"
But this is duct-tape and bailing-wire. You REALLY need to make sure
that autovacuum is running - you are likely to have much better results
with less pain.
Cheers,
Steve
Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Yup, I even tried manually runni
ot, things
will bloat and you won't be getting appropriate "analyze" runs. Speaking
of which, what happens if you just run "analyze"?
And as long as you are dumping and reloading anyway, how about version
upgrading for bug reduction, performance improvement, and cool new
on-now. Actually got strung along for a couple
months before calling my supplier and telling him to swap it out for a
3ware. The 3ware "just works". I currently have a couple dozen Linux
servers, including some PostgreSQL machines, running the 3ware cards.
Cheers,
Steve
can ignore
prior-days' static child-tables (and you could keep
historical-data-dumps off-line for later use if desired).
Read up on it here:
http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html
Cheers,
Steve
along with numerous improvements not directly related to
your question.
Cheers,
Steve
Kevin Grittner wrote:
Steve Crawford wrote:
benchmarks I've seen suggest that with 8 cores you may even see an
almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
Was going on memory from a presentation I watched. Reports
options on pg_dump and the --jobs option in pg_restore for details.
Cheers,
Steve
e to random_page_cost seems to make an
enormous difference for this query. Removing the nested loop seems to be
what makes a difference. We'll continue to play with these and check there
are no adverse effects on other queries.
Thanks again, Steve
>
>
> *Scott Marlowe *
> 03/10/2009 05:19 PM
>
> >
> > Nested Loop (cost=466.34..192962.24 rows=15329 width=12) (actual
> > time=13653.238..31332.113 rows=131466 loops=1)
>
>
> Both your query plans end with this nested loop join which is taking
> up about half your time in your query. Notice th
>
>
>
> *Tom Lane *
> Sent by: pgsql-performance-ow...@postgresql.org
> 03/10/2009 08:16 PM AST
>
> "Steve McLellan" writes:
> > lc_messages = 'en_US.UTF-8'
> > lc_monetary = 'en_US.UTF-8'
> > lc_numeric = 'en_US.UTF-8'
Hi,
I'd be grateful for any advice we can get... we recently switched from MySQL
to PostgreSQL on the basis of some trials we carried out with datasets of
varying sizes and varying rates of contention. For the most part we've been
pleased with performance, but one particular application runs queri
source code and reading through the README files and
comments in the source code.
-Kevin
Hello List,
Can this be set in the postgresql.conf file?
default_statistics_target = 50
Thanks,
Steve
tackle the problem?
What OS, amount of RAM, DB size...?
If the queries get progressively faster as you use the system then slow
again after a reboot, my initial guess would be that you are getting
more and more disk-cache hits the longer you use the system.
Cheers,
Steve
Glyn Astill wrote:
--- On Sat, 22/11/08, Scott Marlowe <[EMAIL PROTECTED]> wrote:
You really have two choices. First is to try and use it as
a plain
SCSI card, maybe with caching turned on, and do the raid in
software.
Second is to cut it into pieces and make jewelry out of it.
Haha, I'm
ecreating tables - especially if they contain
much data.
Cheers,
Steve
Scott Carey wrote:
A fantastic review on this issue appeared in July:
http://www.alternativerecursion.info/?p=106
And then the same tests on a RiData SSD show that they are the same
drive with the same characteristics:
http://www.alternativerecursion.info/?p=276
Most blamed it on MLC being "sl
on
than will fit in RAM.
Neither swapping nor OOM killing are particularly good - it's just a
consequence of the amount of memory needed being unpredictable.
Probably the best solution is to just tell the kernel somehow to
never kill the postmaster.
Or configure adequate swap spa
ation,
rather than intentional storage of it. When a simple incrementing
integer
is used as an identifier in publicly visible places (webapps, ticketing
systems) then that may leak more information than intended.
Cheers,
Steve
ce is quite easy. Running heavy load benchmarks on a web app
(high-rate simple-queries) I saw about a 10-fold improvement just by
using pgbouncer.
Cheers,
Steve
until now. It would be really,
really unfortunate if this list fell to the spammers.
It's not been "hacked by spammers".
It's a valid From address, probably coincidentally. Nothing worth
discussing. *Definitely* not something worth discussing on the list.
Cheers,
Josh Berkus wrote:
Folks,
Subsequent to my presentation of the new annotated.conf at pgCon last week,...
Available online yet? At?...
Cheers,
Steve
a complex query.
As I get a better feel for the planner, some queries that used to be
complex become simple. :)
Cheers,
Steve
write one inefficient but
frequent query. Or add a useful index on the server.
Cheers,
Steve
we 8.0.15 version?
Steve
On Mon, Mar 24, 2008 at 3:23 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> > Steve Poe wrote:
> >> The owners of the animal hospital where I work at want to consider
cisco Bay area. Outside
of sharing your experiences/input with me, I would not mind if you/your company
do this type of consulting offline.
Thank you.
Steve
ve had issues in the past.
It seems the RAID card manufacturers have more to do with failures
than the drives themselves. Have you found a RAID card you did not
have to drop to U160?
Thanks again for sharing your feedback.
Steve
On Sun, Mar 2, 2008 at 7:11 PM, Vivek Khera <[EMAIL PROTECTED]>