"Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> I think the largest speedup will be to dump the multiphase merge and
> merge all tapes in one pass, no matter how large M. Currently M is
> capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over
> the tape. It could be done in a s
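A back-of-envelope check of the pass count (assuming a simple balanced M-way merge; PostgreSQL's actual polyphase tape algorithm counts passes differently, which is where the 13 above comes from):

```
initial runs R = ceil(60 GB / 1 GB)   = 60
merge passes   ~ ceil(log_M(R))
M = 6:   ceil(log_6(60))  = 3 full passes over the data
M = 60:  ceil(log_60(60)) = 1 pass
```

Raising the merge fan-in M until it covers all the initial runs collapses the merge into a single pass, which is the speedup being proposed.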
On Thu, Sep 29, 2005 at 09:28:38PM +0800, Christopher Kings-Lynne wrote:
>
> >CREATE SEQUENCE ai_id;
> >CREATE TABLE badusers (
> > id int DEFAULT nextval('ai_id') NOT NULL,
> > UserName varchar(30),
> > Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
> > Reason varchar(200),
> > Admin
25MBps should not be a CPU bound limit for IO, nor should it be
an OS limit. It should be something ~100x (Single channel RAM)
to ~200x (dual channel RAM) that.
For an IO rate of 25MBps to be pegging the CPU at 100%, the CPU
is suffering some combination of
A= lots of cache misses ("cache thrash
On Fri, Sep 30, 2005 at 01:41:22PM -0700, Josh Berkus wrote:
> Realistically, you can't do better than about 25MB/s on a single-threaded
> I/O on current Linux machines,
What on earth gives you that idea? Did you drop a zero?
Mike Stone
Bulk loading speed is irrelevant here - that is dominated by parsing, which
we have covered copiously (har har) previously and have sped up by 500%,
which still makes Postgres < 1/2 the loading speed of MySQL.
Let's ask MySQL 4.0
LOAD DATA INFILE blah
0 errors, 666 warnings
SHOW
Ron,
On 9/30/05 1:20 PM, "Ron Peacetree" <[EMAIL PROTECTED]> wrote:
> That 11MBps was your =bulk load= speed. If just loading a table
> is this slow, then there are issues with basic physical IO, not just
> IO during sort operations.
Bulk loading speed is irrelevant here - that is dominated by
I have seen similar performance as Josh and my reasoning is as follows:
* WAL is the biggest bottleneck with its default segment size of 16MB. Many
people hate to recompile the code to change its default, and
increasing checkpoint_segments helps, but still there is a lot of overhead
in the rotation of W
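On the 8.x series you can get much of that headroom without recompiling by raising the checkpoint and WAL buffer settings in postgresql.conf (the values below are illustrative only, not tuned recommendations):

```
# postgresql.conf - illustrative values, not recommendations
checkpoint_segments = 64    # default 3; more 16MB segments between checkpoints
checkpoint_timeout  = 900   # seconds between forced checkpoints
wal_buffers         = 64    # shared WAL buffers (counted in 8kB pages in 8.x)
```

Recompiling only becomes necessary if you want to change the 16MB segment size itself.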
Ron,
> That 11MBps was your =bulk load= speed. If just loading a table
> is this slow, then there are issues with basic physical IO, not just
> IO during sort operations.
Oh, yeah. Well, that's separate from sort. See multiple posts on this
list from the GreenPlum team, the COPY patch for 8.1
That 11MBps was your =bulk load= speed. If just loading a table
is this slow, then there are issues with basic physical IO, not just
IO during sort operations.
As I said, the obvious candidates are inefficient physical layout
and/or flawed IO code.
Until the basic IO issues are addressed, we cou
Looks like it's definitely an issue with index bloat. Note that it's
normal to have some amount of empty space depending on vacuum and update
frequency, so 15G -> 20G isn't terribly surprising. I would suggest
using pg_autovacuum instead of the continuous vacuum; it's very possible
that some of you
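A rough way to gauge the bloat is to compare on-disk pages with live-tuple estimates in pg_class, then rebuild (using the badusers table from earlier in the thread; whether REINDEX is appropriate depends on your lock tolerance):

```sql
-- Pages on disk vs. live tuples; estimates are refreshed by VACUUM/ANALYZE
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'badusers';

-- Rebuild all indexes on the table from scratch (takes a lock, so schedule it)
REINDEX TABLE badusers;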
Ron,
Hmmm.
60GB/5400secs= 11MBps. That's ssllooww. So the first
problem is evidently our physical layout and/or HD IO layer
sucks.
Actually, it's much worse than that, because the sort is only dealing
with one column. As I said, monitoring with iostat, our top speed was
2.2MB/s.
--Josh
xchris wrote:
Let's suppose I need to add info about addresses (which includes
country, city, cap, etc).
Addresses can vary from 1 to 20 entries.
Talking about performance is it better to include a list of addresses in
TABLE A or is it better to create an external TABLE B?
Don't optim
Hi list,
I'm a postgres noob and before designing a new database schema I would
like to ask you a simple question, because performance will be a critical
point.
My question regards performance comparison between a Table which
includes a list or a Table without list and with an external table.
1 -
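For the archives, the two layouts being compared look roughly like this (a sketch only; all table and column names here are invented, with "cap" kept from the original post as the postal code):

```sql
-- Option 1: addresses embedded in TABLE A (wide, mostly-NULL columns)
CREATE TABLE customer_flat (
    id       serial PRIMARY KEY,
    name     varchar(100),
    address1 varchar(200),
    address2 varchar(200)
    -- ... up to address20
);

-- Option 2: external TABLE B, one row per address
CREATE TABLE customer (
    id   serial PRIMARY KEY,
    name varchar(100)
);
CREATE TABLE address (
    customer_id int REFERENCES customer(id),
    country     varchar(50),
    city        varchar(50),
    cap         varchar(10)
);
CREATE INDEX address_customer_idx ON address(customer_id);
```

With 1-to-20 entries per row, Option 1 forces 20 columns on every row; Option 2 stores only the addresses that exist and joins through the indexed foreign key.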
Lane Van Ingen wrote:
(2) Set up user variables in memory that are persistent across all
sessions, for
as long as the database is up and running ?
You could probably write "C" functions (or possibly Perl) to store data
in shared memory. Of course you'd have to deal with concurrency iss
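If restart-survival is acceptable (or even desired), a plain table gets you cross-session persistence without any C code at all (a sketch; the table and names here are invented):

```sql
CREATE TABLE app_settings (
    name  text PRIMARY KEY,
    value text
);

-- set a value from any session
INSERT INTO app_settings VALUES ('maintenance_mode', 'off');
UPDATE app_settings SET value = 'on' WHERE name = 'maintenance_mode';

-- read it back from another session
SELECT value FROM app_settings WHERE name = 'maintenance_mode';
```

Shared memory would be faster and avoids MVCC churn on hot values, but the table approach handles concurrency for free via ordinary locking.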