On Sat, Jun 28, 2008 at 06:24:42PM +0200, Moritz Onken wrote:
> SELECT distinct url from item where url like 'http://www.micro%' limit
> 10;
Here, the planner knows the pattern beforehand, and can see that it's a
simple prefix.
> select *
> from result
> where exists
>(select * from item w
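For illustration, a minimal sketch of the difference (the item table and url column come from the quoted query; the index name is invented, and text_pattern_ops only matters in non-C locales):
  -- A btree index can serve LIKE 'prefix%' only when the prefix is a literal
  -- known at plan time; in non-C locales declare the index with text_pattern_ops.
  CREATE INDEX item_url_prefix_idx ON item (url text_pattern_ops);
  -- Literal prefix: the planner rewrites the LIKE into an index range scan.
  EXPLAIN SELECT DISTINCT url FROM item WHERE url LIKE 'http://www.micro%' LIMIT 10;
  -- A pattern coming from another table or a parameter is unknown at plan time,
  -- so the same rewrite is not possible.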
On Tue, Apr 01, 2008 at 12:42:03PM -0400, Tom Lane wrote:
>> That's CURSOR_OPT_FAST_PLAN, isn't it? Our application reads the full
>> results of most cursors.
> Just out of curiosity, why use a cursor at all then?
This isn't the same scenario as the OP, but I've used a cursor in cases where
I c
On Wed, Mar 12, 2008 at 12:13:01PM +0530, sathiya psql wrote:
> My question is how to migrate my database from 7.4 to 8.1
aptitude install postgresql-8.1
pg_dropcluster 8.1 main
pg_upgradecluster 7.4 main
/* Steinar */
--
Homepage: http://www.sesse.net/
On Thu, Mar 06, 2008 at 07:28:50AM -0800, Craig James wrote:
> Count() on Oracle and MySQL is almost instantaneous, even for very large
> tables. So why can't Postgres do what they do?
In MySQL's case: Handle transactions. (Try COUNT(*) on an InnoDB table.)
/* Steinar */
--
Homepage: http://www
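If an exact count is not required, a common workaround (not from the quoted mail) is to read the planner's own estimate instead of scanning the table; 'mytable' is a placeholder:
  -- Approximate row count maintained by VACUUM/ANALYZE; no table scan needed.
  SELECT reltuples::bigint AS approx_rows
    FROM pg_class
   WHERE relname = 'mytable';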
On Wed, Mar 05, 2008 at 02:27:08AM -0800, SPMLINGAM wrote:
> I have a table with 50 lakhs records, the table has more than 10
> fields, i have primary key, i have select query with count(*) without any
> condition, it takes 17 seconds.
Without knowing what a "lakhs" record is, it's pretty obv
On Wed, Mar 05, 2008 at 12:15:25AM +, dforums wrote:
> In regards of update, I have around 1 updates while a laps of 10 minutes
>
> Is there a settings to optimise updates ?
If you can, batch them into a single transaction.
If you can, upgrade to 8.3. HOT might help you here.
/* Steinar
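A minimal sketch of the batching idea (table and column names invented); committing once means one WAL flush for the whole batch instead of one per statement:
  BEGIN;
  UPDATE accounts SET balance = balance + 10 WHERE id = 1;
  UPDATE accounts SET balance = balance - 10 WHERE id = 2;
  -- ... many more updates ...
  COMMIT;  -- a single fsync for the whole batch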
On Wed, Jan 23, 2008 at 07:54:24PM -0500, Greg Smith wrote:
> (a standard mechanical drive under heavy write load also wears out faster
> than one doing less work).
Wasn't this one of the myths that was dispelled in the Google disk paper a
while ago?
/* Steinar */
--
Homepage: http://www.sesse.n
On Thu, Jan 17, 2008 at 03:07:02PM -0600, Scott Marlowe wrote:
> Sounds like they're sharing something they shouldn't be. I'm not real
> familiar with PCI-express. Aren't those the ones that use up to 16
> channels for I/O? Can you divide it to 8 and 8 for each PCI-express
> slot in the BIOS may
On Sun, Dec 16, 2007 at 07:34:45PM +0100, Adam PAPAI wrote:
> Why does it take cca 18-20 sec to get the results?
> Too many indexes?
You cannot possibly have VACUUMed in a long time. Try a VACUUM FULL, and then
schedule regular VACUUMs (or use autovacuum).
/* Steinar */
--
Homepage: http://www.s
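Roughly what that advice translates to (the table name is a placeholder; with the 8.x VACUUM FULL implementation a REINDEX afterwards is usually wise):
  VACUUM FULL VERBOSE mytable;   -- one-off: compacts a badly bloated table (exclusive lock)
  REINDEX TABLE mytable;         -- VACUUM FULL of that era tends to bloat indexes
  VACUUM ANALYZE mytable;        -- routine maintenance; or enable autovacuum in postgresql.conf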
On Wed, Dec 12, 2007 at 10:16:43AM +, Matthew Lunnon wrote:
> Does anyone have any ideas what my bottle neck might be and what I can do
> about it?
Your bottleneck is that you are using a very old version of PostgreSQL. Try
8.2 or (if you can) the 8.3 beta series -- it scales a _lot_ better i
On Thu, Nov 29, 2007 at 03:23:10PM -0500, Matthew T. O'Connor wrote:
> Sorry in advance if this is a stupid question, but how is this better than
> two index, one on "a" and one on "b"? I supposed there could be a space
> savings but beyond that?
You could index on both columns simultaneously w
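A sketch of the alternatives being compared (table and column names invented):
  -- One composite index, useful when queries constrain a, or a and b together.
  CREATE INDEX t_a_b_idx ON t (a, b);
  -- Versus two single-column indexes, which 8.1 and later can combine via a bitmap AND.
  CREATE INDEX t_a_idx ON t (a);
  CREATE INDEX t_b_idx ON t (b);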
On Wed, Nov 28, 2007 at 09:16:08PM +0100, cluster wrote:
> Hmm, actually I still don't understand why it takes 6400 ms to fetch the
> rows. As far as I can see the index used is "covering" so that real row
> lookups shouldn't be necessary.
The indexes don't contain visibility information, so Pos
On Tue, Nov 27, 2007 at 07:25:54PM -0500, Tom Lane wrote:
>> You could make an index on (question_id,status) (or a partial index on
>> question id, with status=1 as the filter), but I'm not sure how much it would
>> help you unless the questions table is extremely big. It doesn't appear to
>> be; i
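What the partial-index suggestion looks like, using the question_id/status names from the thread (the index name is invented):
  -- Index only the rows with status = 1; queries must repeat the predicate
  -- (WHERE status = 1) for the planner to consider this index.
  CREATE INDEX questions_status1_idx ON questions (question_id) WHERE status = 1;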
On Tue, Nov 27, 2007 at 11:51:40PM +0100, cluster wrote:
> A thing that strikes me is the following. As you can see I have the
> constraint: q.status = 1. Only a small subset of the data set has this
> status. I have an index on q.status but for some reason this is not used.
> Instead the constr
On Tue, Nov 27, 2007 at 06:28:23PM +, Matthew wrote:
> SELECT * FROM table WHERE a > 1 AND b < 4;
This sounds like something an R-tree can do.
/* Steinar */
--
Homepage: http://www.sesse.net/
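A hedged sketch of how such a two-dimensional range could be served by a GiST (R-tree-style) index; the table t(a, b), the index name, and the sentinel bounds are all invented, and since box containment is edge-inclusive the strict inequalities still need the plain conditions as a recheck:
  CREATE TABLE t (a int, b int);
  -- Expression index treating each row as a degenerate (zero-area) box.
  CREATE INDEX t_ab_gist ON t USING gist (box(point(a, b), point(a, b)));
  -- "a > 1 AND b < 4", approximated as a box query with arbitrary large bounds.
  SELECT *
    FROM t
   WHERE box(point(a, b), point(a, b)) <@ box(point(1, -1e9), point(1e9, 4))
     AND a > 1 AND b < 4;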
On Sun, Nov 11, 2007 at 08:27:02PM +0100, Dimitri wrote:
> As well to demonstrate OS parallelism it's not so impressive to see
> 4CPU server results rather 8CPU or 32threaded Niagara... Don't know
> why they did not present similar performance graphs for these
> platform, strange no?...
I guess it
On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:
> I am having an issue on PostgreSQL 8.0.12. In the past we had performance
> issues with the query planner for queries on some tables where we knew we
> had indexes and it was doing a sequential scan, and for this reason we
> issue "SE
On Sun, Nov 04, 2007 at 07:33:46PM -0500, Greg Smith wrote:
> On the topic of performance improvements in 8.3, I don't think this list
> has been getting information about the concurrent sequential scans
> improvements. Check out these documents for more about that:
>
> http://j-davis.com/postgr
On Wed, Oct 17, 2007 at 02:30:52PM +0200, Stefano Dal Pra wrote:
> The main goal would be to get multiple results while scanning the
> table[s] once only
> thus getting results in a faster way.
In 8.3, Postgres will do this for you itself -- if you already have a
sequential scan running against a
On Tue, Sep 25, 2007 at 12:53:55AM -0400, Carlo Stonebanks wrote:
> My problem is that I think that SRF's are causing my problems. The SRF's
> gets an automatic row estimate of 1000 rows. Add a condition to it, the
> planner guesses 333 rows. Even at 333, this is an overestimate of the number
> of
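For reference, 8.3 onwards lets the function definition override that default estimate (a sketch only; the function and the item table it reads are invented):
  CREATE OR REPLACE FUNCTION recent_item_ids()
  RETURNS SETOF integer
  LANGUAGE sql
  STABLE
  ROWS 50   -- planner row estimate, instead of the default 1000
  AS $$ SELECT id FROM item ORDER BY created DESC LIMIT 50 $$;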
On Tue, Sep 11, 2007 at 04:47:40PM +0200, Maila Fatticcioni wrote:
> The servers are cross connected with a common 100 Mbit/sec Ethernet so I
> think they have a bandwidth around 80 Mbit/sec (even if I haven't yet
> done any test on it). A rate of 70Mb seems reasonable to me.
Umm, seriously? Unles
On Wed, Aug 29, 2007 at 01:11:32PM -0700, Steve Atkins wrote:
> It think the general conclusion was "When they come out with an ECC
> version, we'll look at them."
FWIW, it shouldn't be impossible to implement ECC in software; they'd still
be orders of magnitude faster than normal disks.
/* Stei
On Wed, Aug 22, 2007 at 12:02:54PM -0400, Carlo Stonebanks wrote:
> Any suggestions on where to go with this project to improve performance
> would be greatly appreciated.
I'm a bit unsure from reading your mail -- have you tried pg_trgm with a GiST
index?
/* Steinar */
--
Homepage: http://www.
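For context, a minimal pg_trgm sketch (assumes contrib/pg_trgm is installed; table, column, and index names are invented):
  -- Trigram GiST index enabling fast similarity searches with the % operator.
  CREATE INDEX names_trgm_idx ON names USING gist (name gist_trgm_ops);
  SELECT name, similarity(name, 'smith') AS sml
    FROM names
   WHERE name % 'smith'          -- similarity above pg_trgm's threshold
   ORDER BY sml DESC;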
On Mon, Aug 20, 2007 at 10:17:14PM -0700, Ben Perrault wrote:
> -> Nested Loop (cost=1.00..106559138.00 rows=336 width=137)
This sounds very much like you're trying to force the planner. Did you set
enable_nestloop=false or something? Are there any other non-default settings
On Sun, Aug 19, 2007 at 11:41:15AM -0700, David Fetter wrote:
> Unless your records are huge, that's a tiny database, where tiny is
> defined to mean that the whole thing fits in main memory with plenty
> of room to spare. I guarantee that performance will crash right
> through the floor as soon a
On Sun, Aug 19, 2007 at 11:12:16AM -0700, David Fetter wrote:
> There's your mistake. EAV is not performant, and won't become so.
It sort of depends. I put all the EXIF information for my image gallery into
an EAV table -- it was the most logical format at the time, although I'm not
sure I need a
On Thu, Aug 16, 2007 at 10:53:00AM -0700, Luke Lonergan wrote:
> They have a setting that sounds like RAID10, but it actually
> implements spanning of mirrors.
That's interesting. I'm pretty sure it actually says "RAID10" in the BIOS,
but is this a lie?
/* Steinar */
--
Homepage: http://www.sess
On Thu, Aug 16, 2007 at 11:26:52AM +0300, Michael Ben-Nes wrote:
> Does any one have any experience with RAID 10 & perc 5/i ?
Without having done PostgreSQL benchmarking, we have a 2950 with four SATA
disks in RAID 10 (and two SAS disks in RAID 1), and have not seen any
performance issues.
/* Ste
On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:
> I am trying to enable capturing of the submitted code via an
> application...how do I do this in Postgres? Performance is SLOW on my
> server and I have autovacuum enabled as well as rebuilt indexes...whatelse
> should be looked at?
Tr
On Sat, Jul 14, 2007 at 10:29:05AM +0200, Hannes Dorbath wrote:
> From the DELL site it seems this `PERC 5/i' on board controller
> (assuming that's what you have) doesn't even have a BBU. If you don't
> plan to post here in a few weeks again about data corruption, go out and
> shop a serious contr
On Fri, Jul 13, 2007 at 06:17:18PM +0200, Adriaan van Os wrote:
> The hardware is an Intel dual-core 17-inch MacBook Pro running Mac
> OS X 10.4.
To isolate things, have you tried testing a different operating system?
/* Steinar */
--
Homepage: http://www.sesse.net/
On Mon, Jul 09, 2007 at 11:57:13AM -0400, Jignesh K. Shah wrote:
> I think this result will be useful for performance discussions of
> postgresql against other databases.
>
> http://www.spec.org/jAppServer2004/results/res2007q3/
Am I right that this is for a T2000 (Niagara) database server? It sure
On Tue, Jun 12, 2007 at 04:11:33PM +0200, Christo Du Preez wrote:
> The actual table I noticed the problem has a million rows and it still
> doesn't use indexing
Then please post an EXPLAIN ANALYZE of the query that is slow, along with the
table definition and indexes.
/* Steinar */
--
Homepage:
On Tue, Jun 12, 2007 at 03:32:40PM +0200, Christo Du Preez wrote:
> As an example I took a table with about 650 rows, having a parentid
> field with an index on parentid.
Try a bigger table. Using an index for only 650 rows is almost always
suboptimal, so it's no wonder the planner doesn't use the
On Thu, Jun 07, 2007 at 01:48:43PM -0400, Joe Lester wrote:
> - The server log shows frequent "archived transaction log file"
> entries. Usually once every 10 minutes or so, but sometimes 2 or 3
> per minute.
Sounds like you've got a lot of writes going. You might want more power in
your I/O?
On Thu, Jun 07, 2007 at 07:18:22AM -0400, Kurt Overberg wrote:
> - My production environment is running RedHat 2.6.9.ELsmp on a server
> with 16GB of memory
Seriously, this (the RAM amount) _is_ all the difference. (You don't say how
much RAM is in your Mac, but something tells me it's not 16GB.)
On Thu, Jun 07, 2007 at 11:35:27AM +0200, Steinar H. Gunderson wrote:
> How big did you say these tables were?
Sorry, you already said that -- 650k rows for one of them. If that table
doesn't fit in the cache on your Mac, you pretty much lose. From the EXPLAIN
output, it looks like it f
On Wed, Jun 06, 2007 at 07:27:27PM -0400, Kurt Overberg wrote:
> This query runs great on production under 8.0 (27ms), but under 8.2.4
> (on my mac) I'm seeing times in excess of 50,000ms. Note that on
> 8.2.4, if I run the query again, it gets successively faster
> (50,000ms->6000ms->27ms).
On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:
> I think the main argument for partitioning is when you are interested in
> being able to drop whole partitions cheaply.
Wasn't there also talk about adding the ability to mark individual partitions
as read-only, thus bypassing MVCC and al
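A sketch of why dropping partitions is cheap with the inheritance-style partitioning of that era (all names invented; constraint_exclusion must be on for the planner to skip irrelevant children):
  -- Parent plus one child table per month; CHECK constraints drive pruning.
  CREATE TABLE log (logtime timestamptz NOT NULL, msg text);
  CREATE TABLE log_2007_05 (CHECK (logtime >= '2007-05-01' AND logtime < '2007-06-01')) INHERITS (log);
  CREATE TABLE log_2007_06 (CHECK (logtime >= '2007-06-01' AND logtime < '2007-07-01')) INHERITS (log);
  -- Removing a month of data is a catalog operation, not a scan + DELETE + VACUUM.
  DROP TABLE log_2007_05;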
On Tue, Jun 05, 2007 at 05:30:14PM -0400, Steven Flatt wrote:
> (A) LEFT JOIN (B) ON col WHERE B.col IS NULL
>
> These queries are much slower on 8.2 than on 8.1 for what looks like the
> reason outlined above. I have rewritten a few key queries to be of the
> equivalent form:
>
> (A) WHERE col
On Tue, Jun 05, 2007 at 03:23:35PM +0530, Gauri Kanekar wrote:
> Before doing vacuum full on the database this query used to take less
> than 4min. But now after doing vacuuming and reindexing the tables it is
> taking 73mins.
Did you analyze the table recently? Some of the selectivity estimates seem
q
On Fri, Jun 01, 2007 at 10:57:56AM -0700, Craig James wrote:
> The Linux kernel doesn't properly detect my software RAID1+0 when I boot
> up. It detects the two RAID1 arrays, the partitions of which are marked
> properly. But it can't find the RAID0 on top of that, because there's no
> corresp
On Wed, May 30, 2007 at 12:41:46AM -0400, Jonah H. Harris wrote:
> Yeah, I've never seen a way to RAID-1 more than 2 drives either.
pannekake:~> grep -A 1 md0 /proc/mdstat
md0 : active raid1 dm-20[2] dm-19[1] dm-18[0]
64128 blocks [3/3] [UUU]
It's not a big device, but I can assure you it
On Fri, May 25, 2007 at 03:56:35PM -0400, Tom Lane wrote:
> I looked through the CVS logs and didn't find any planner changes
> between 8.2.3 and 8.2.4 that seem likely to affect your query, so
> I'm thinking it must be a statistical discrepancy.
It looks like the estimated cost is lower for 8.2.4
On Fri, May 25, 2007 at 10:50:58AM +0200, Peter T. Breuer wrote:
> I set up pg to replace a plain gdbm database for my application.
Postgres and gdbm are completely different. You want to rethink your queries
so each does more work, instead of running a zillion of them over the network.
/* Steina
On Tue, May 22, 2007 at 02:39:33PM +0200, Alexander Staubo wrote:
> PostgreSQL uses B-tree indexes for scalar values. For an expression
> such as "t between a and b", I believe it's going to match both sides
> of the table independently (ie., t >= a and t <= b) and intersect
> these subsets. This i
On Sat, May 19, 2007 at 12:32:33AM +0200, Steinar H. Gunderson wrote:
> Did you ANALYZE your tables recently? If the joins are really between
> millions of rows and the planner thinks it's a couple thousands, the stats
> sound rather off...
Sorry, I forgot your first e-mail wher
On Fri, May 18, 2007 at 02:37:27PM -0700, Abu Mushayeed wrote:
>>> set enable_nestloop = off;
>> What's the rationale for this?
> To eliminate the nested loop. It does a nested loop between two very large
> tables (millions of rows).
If the planner chooses a nested loop, it is because it believes it is
On Fri, May 18, 2007 at 02:22:52PM -0700, Tyrrill, Ed wrote:
> Total runtime: 4.951 ms
Going from 1197 seconds to 5 milliseconds. That's some sort of record in a
while, I think :-)
/* Steinar */
--
Homepage: http://www.sesse.net/
On Fri, May 18, 2007 at 09:02:52AM -0700, Abu Mushayeed wrote:
> I have an interesting problem. I have the following query that ran ok on
> Monday and Tuesday and it has been running ok since I have been at this
> job. I have seen it to be IO intensive, but since Wednesday it has become
> CPU inten
On Fri, May 18, 2007 at 02:51:42PM +0300, Liviu Ionescu wrote:
> it helped, the new version of the query takes 2303 ms on both 8.1.4 and 8.2.4.
And the old one?
> any idea why the 8.2.4 planner is not happy with the initial select? was it
> just a big chance that it worked in 8.1.4 or the 8.2.4 p
On Fri, May 18, 2007 at 02:05:36PM +0300, Liviu Ionescu wrote:
> "-> Hash Left Join
> (cost=2.44..63.29 rows=1 width=49) (actual time=0.361..14.426 rows=2206
> loops=1)"
> " Hash Cond: (n.nodeid =
> te
On Fri, May 18, 2007 at 01:14:56PM +0300, Liviu Ionescu wrote:
> yes, but to be sure I did it again before issuing the request; no
> improvements...
Is this with the join collapse limit set to 1, or with default? (Default is
generally more interesting.)
/* Steinar */
--
Homepage: http://www.ses
On Fri, May 18, 2007 at 12:02:44PM +0300, Liviu Ionescu wrote:
> the 8.2.4 plan with join_collapse_limit = 1 (with default it was worse, full
> of nested loops)
It will probably be useful with EXPLAIN ANALYZE of your queries, not just the
EXPLAIN.
> "Nested Loop Left Join (cost=32.01..2012.31 r
On Thu, May 10, 2007 at 09:23:03AM -0400, Susan Russo wrote:
> my $aq = $dbh->prepare(sprintf("SELECT * from dbxref dx, db where
> accession = '%s' and dx.db_id = db.db_id and db.name = 'GB_protein'",$rec));
This is not related to your performance issues, but it is usually considered bad
form
On Wed, May 09, 2007 at 01:57:51AM -0700, [EMAIL PROTECTED] wrote:
> given that RAID, snapshots, etc are already in the linux kernel, I suspect
> that what will need to happen is for the filesystem to be ported without
> those features and then the userspace tools (that manipulate the volumes )
On Tue, May 08, 2007 at 07:03:17PM -0400, Daniel Griscom wrote:
> I'm only running one query at a time; would that query be guaranteed to
> confine itself to a single processor/core?
Yes; at least it won't be using two at a time. (Postgres can't guarantee that
Windows won't move it to another core
On Tue, May 08, 2007 at 06:32:14PM -0400, Carlos Moreno wrote:
>> Or use a dual-core system. :-)
> Am I missing something?? There is just *one* instance of this idea in,
> what, four replies?? I find it so obvious, and so obviously the only
> solution that has any hope to work, that it makes me t
On Tue, May 08, 2007 at 05:52:13PM -0400, Alvaro Herrera wrote:
>> I am trying to follow a message thread. One guy says we should be running
>> vacuum analyze daily and the other says we should be running vacuum multiple
>> times a day. I have tried looking for what a vacuum analyze is to help me
>
On Tue, May 08, 2007 at 04:27:10PM -0400, Daniel Griscom wrote:
> 3: ... some other solution I haven't thought of.
On a wild guess, could you try setting the CPU costs higher, to make the
planner choose a less CPU-intensive plan?
Other (weird) suggestions would include calling a user-defined func
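What raising the CPU cost factors could look like (the raised values are arbitrary; the defaults in the comments are the stock 8.x settings):
  -- Per-session experiment: make CPU work look more expensive relative to I/O.
  SET cpu_tuple_cost = 0.05;        -- default 0.01
  SET cpu_operator_cost = 0.01;     -- default 0.0025
  SET cpu_index_tuple_cost = 0.02;  -- default 0.005
  -- then EXPLAIN ANALYZE the query again and compare plans; RESET ALL undoes this.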
On Tue, May 08, 2007 at 12:52:35PM -0700, Y Sidhu wrote:
> Does using DISTINCT in a query force PG to abandon any index search it might
> have embarked upon?
No.
If you need help with a specific query, please post it, along with your table
definitions and EXPLAIN ANALYZE output.
/* Steinar */
--
On Tue, May 08, 2007 at 10:18:34AM -0400, Susan Russo wrote:
> explain analyze output on Pg7.3.2:
>
>   -> Index Scan using dbxref_idx2 on dbxref dx (cost=0.00..5.83 rows=1 width=21) (actual time=25.58..25.58 rows=0 loops=1)
>        Index Cond: ((accession >= 'AY85
On Mon, May 07, 2007 at 03:14:08PM -0700, Joshua D. Drake wrote:
> It is my understanding (and I certainly could be wrong) that FreeBSD
> doesn't handle SMP nearly as well as Linux (and Linux not as well as
> Solaris).
I'm not actually sure about the last part. There are installations as big as
10
On Mon, May 07, 2007 at 11:56:14PM -0400, Greg Smith wrote:
> Debian packages PostgreSQL in a fashion unique to it; it's arguable
> whether it's better or not (I don't like it), but going with that will
> assure your installation is a bit non-standard compared with most Linux
> installs. The m
On Sat, May 05, 2007 at 09:52:56PM -0700, Yudhvir Singh Sidhu wrote:
> Here is what I think the story is:
> a. Large amounts of rows are added to and deleted from a table - daily.
> With this much activity, the statistics get out of whack easily. That's
> where ANALYZE or VACUUM ANALYZE would he
On Sat, May 05, 2007 at 03:57:25PM -0700, Yudhvir Singh Sidhu wrote:
> Situation: huge amounts of adds and deletes daily. Running daily vacuums
If you have huge amounts of adds and deletes, you might want to vacuum more
often; optionally, look into autovacuum.
> Problem: Vacuum times jump up fr
On Fri, May 04, 2007 at 09:07:53PM -0400, Greg Smith wrote:
> As if I don't know what the bogo stands for, ha! I brought that up
> because someone suggested testing CPU speed using some sort of idle loop.
> That's exactly what bogomips does.
Just for reference (I'm sure you know, but others mig
On Wed, May 02, 2007 at 02:55:26PM -0400, Ron wrote:
> The fastest test, and possible fix, is to go and buy more RAM. See
> if 16MB of RAM, heck even 10MB, makes the problem go away or delays
> its onset.
Something tells me 16MB of RAM is not going to help him much? :-)
/* Steinar */
--
Home
On Mon, Apr 30, 2007 at 03:29:30PM +0200, Andreas Haumer wrote:
> This already gives a row matching the given WHERE clause.
> It makes no sense to scan the other tables, as the query
> asks for one row only and all the other tables have timestamps
> larger than all the timestamps in table t_mv_2006
On Mon, Apr 23, 2007 at 07:20:29PM +0200, Arkadiusz Raj wrote:
> I have a table in my database that is updated every minute with new acquired
> data. Anyway there is a query to get latest values to be displayed on
> screen. I have postgresql 7.4.2 that work very fine.
You want _at least_ the lat
On Sat, Mar 03, 2007 at 12:30:16PM +0100, Arjen van der Meijden wrote:
> If you have a MegaCLI-version, I'd like to see it, if possible? That
> would definitely save us some reinventing the wheel :-)
A friend of mine just wrote
MegaCli -AdpAllInfo -a0|egrep ' (Degraded|Offline|Critical Disks
On Tue, Feb 27, 2007 at 01:33:47PM +, Dave Page wrote:
> When we outgrow PostgreSQL & Tsearch2, then, well, we'll need to stop
> pretending to be Google...
Just for the record: Google has been known to sponsor sites in need with
Google Minis and such earlier -- I don't know what their[1] polic
On Mon, Feb 26, 2007 at 05:19:05PM +0530, Gauri Kanekar wrote:
> I have a Query. So when i do explain analyse on it , it shows me many Hash
> Joins.
> So is it possible to indicate the Query Planner not to consider Hash Join.
set enable_hashjoin = false;
This is very often the wrong solution, tho
On Sat, Feb 24, 2007 at 02:04:36AM +0100, Guillaume Smet wrote:
> Could you post EXPLAIN ANALYZE for both queries (after 2 or 3 runs)?
GIST version, short:
amarok=# explain analyze select count(*) from tags where title % 'foo';
QUERY PLAN
On Sat, Feb 24, 2007 at 12:09:41AM +0100, Guillaume Smet wrote:
> Could you try to see if the GIN implementation of pg_trgm is faster in
> your cases?
I'm sorry, I can no longer remember where I needed pg_trgm. Simple testing of
your patch seems to indicate that the GiN version is about 65% _slowe
On Fri, Feb 23, 2007 at 04:53:18PM -0300, Alvaro Herrera wrote:
>> It's slightly unfortunate that AMD and Intel cling to the Opteron and Xeon
>> names even though they're making significant architecture changes, but that's
>> life, I guess.
> AFAIR Intel has been calling their server processors Xeo
On Fri, Feb 23, 2007 at 02:05:57PM -0500, Geoffrey wrote:
> In searching the archives, I can't find any specific info indentifying
> which Xeon processors don't have this problem.
AFAIK the cut-off point is at the Woodcrests. They are overall much better
suited to PostgreSQL than the older Xeons
On Thu, Feb 22, 2007 at 12:19:50PM -0800, Barbara Cosentino wrote:
> I have a table that gets many rows inserted, updated and then deleted,
> consistently throughout the day. At any point in time this table should
> have no more than 50 actual rows and many times a direct select against
> this tab
On Thu, Feb 22, 2007 at 07:11:42PM +0100, Gabriel Biberian wrote:
> Create a new DB and load a dump of the above database with 976009 rows,
> then i perform updates on the whole table. I recorded the time taken
> for each full update and the amount of extra disk space used. Each
> consecutive
On Fri, Feb 16, 2007 at 01:27:46PM -0500, Brad Nicholson wrote:
> If the indexed field is an integer, add 0 to it.
Won't that also invalidate the statistics?
/* Steinar */
--
Homepage: http://www.sesse.net/
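The trick under discussion, sketched with an invented orders table: adding 0 turns the indexed column into an expression, so the plain index cannot be used (and, as the question above suggests, the planner can then no longer apply the column's statistics directly):
  -- Normally eligible for an index on customer_id:
  SELECT * FROM orders WHERE customer_id = 42;
  -- Forces the planner away from that index; customer_id + 0 is an expression
  -- with no index and no per-column statistics behind it.
  SELECT * FROM orders WHERE customer_id + 0 = 42;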
On Fri, Feb 02, 2007 at 10:05:29AM -0600, Kirk Wythers wrote:
> Thanks Tom... Any suggestions as to how much to raise ulimit -d? And
> how to raise ulimit -d?
Try multiplying it by 100 for a start:
ulimit -d 614400
/* Steinar */
--
Homepage: http://www.sesse.net/
--
On Fri, Feb 02, 2007 at 07:52:48AM -0600, Kirk Wythers wrote:
> psql(15811) malloc: *** vm_allocate(size=8421376) failed (error code=3)
> psql(15811) malloc: *** error: can't allocate region
> psql(15811) malloc: *** set a breakpoint in szone_error to debug
It sounds like you are out of memory. Ha
On Tue, Jan 30, 2007 at 10:20:28AM -0800, Luke Lonergan wrote:
> You got me - I'll bite - what's PHB?
Usually the Pointy-Haired Boss, a figure from Dilbert.
http://en.wikipedia.org/wiki/Pointy_Haired_Boss
/* Steinar */
--
Homepage: http://www.sesse.net/
On Tue, Jan 23, 2007 at 11:55:41AM +0100, Steinar H. Gunderson wrote:
> you'll find them to be much master.
s/master/faster/
/* Steinar */
--
Homepage: http://www.sesse.net/
On Tue, Jan 23, 2007 at 11:34:52AM +0100, Laurent Manchon wrote:
> I have a slow response of my PostgreSQL database 7.4 using this query below
> on a table with 80 rows:
>
> select count(*) from tbl;
Contrary to your expectations, this is _not_ a query you'd expect to be fast
in Postgres. Try
On Mon, Jan 15, 2007 at 11:16:36AM +0100, Florian Weimer wrote:
> Am I missing something? Or are trigrams just a poor match for my data
> set? Are the individual strings too long, maybe?
FWIW, I've seen the same results with 8.1.x.
/* Steinar */
--
Homepage: http://www.sesse.net/
On Fri, Jan 12, 2007 at 07:40:25PM -0500, Dave Cramer wrote:
> 5000 is pretty low, you need at least 1/4 of memory for an 8.1.x or
> newer server.
Is this the new "common wisdom"? It looks like at some point, someone here
said "oh, and it looks like you're better off using large values here for
On Tue, Jan 09, 2007 at 05:07:03PM -0800, Brian Herlihy wrote:
> Actually, as I recently discovered, GROUP BY is faster than DISTINCT. It's
> just due to how they are implemented, so don't go looking for any deep
> reason :) The thread "GROUP BY vs DISTINCT" from 2006-12-20 discusses it.
> DISTIN
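The two forms being compared, on a toy orders(customer_id) table; on the releases of that time DISTINCT was always implemented with a sort, while GROUP BY could also use hash aggregation:
  SELECT DISTINCT customer_id FROM orders;               -- Unique over a Sort
  SELECT customer_id FROM orders GROUP BY customer_id;   -- may use HashAggregate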
On Fri, Dec 22, 2006 at 07:47:05PM +0100, ohp@pyrenet.fr wrote:
>> No, pg_xlog can be truncated as soon as a checkpoint occurs.
> Even for currently running transactions ?
Isn't that the entire point of having checkpoints in the first place? :-)
/* Steinar */
--
Homepage: http://www.sesse.net/
On Tue, Dec 19, 2006 at 11:19:39PM -0800, Brian Herlihy wrote:
> Actually, I think I answered my own question already. But I want to
> confirm - Is the GROUP BY faster because it doesn't have to sort results,
> whereas DISTINCT must produce sorted results? This wasn't clear to me from
> the docum
On Sat, Dec 16, 2006 at 11:26:02AM -0600, tsuraan wrote:
> Even an operation like "select count(*) from messages" can take minutes,
> with a totally idle system. Postgres seems to be the most scalable Free
> database out there, so I must be doing something wrong.
Unqualified SELECT COUNT(*) FROM
On Sat, Dec 16, 2006 at 10:53:21AM -0500, Ron wrote:
> AFAICT, no one has stated there would be a "blow-your-socks-off
> dramatic performance improvement" for pg due to compilation
> options. Just that there might be some, and there might be some that
> are arch specific.
FWIW, the original cl
On Fri, Dec 15, 2006 at 10:53:25AM +0100, Alexander Staubo wrote:
> The difference is very slight. I'm going to run without -funroll-
> loops and -pipe (which are not arch-related) to get better data.
-pipe does not matter for the generated code; it only affects compiler speed.
(It simply means t
On Tue, Dec 12, 2006 at 01:42:06PM +0100, Cosimo Streppone wrote:
> "-O0" ~ 957 tps
> "-O1 -mcpu=pentium4 -mtune=pentium4" ~ 1186 tps
> "-O2 -mcpu=pentium4 -mtune=pentium4" ~ 1229 tps
> "-O3 -mcpu=pentium4 -mtune=pentium4" ~ 1257 tps
> "-O6 -mcpu=pentium4 -mtune=pentium4" ~ 1254 tps
For the record
On Mon, Dec 11, 2006 at 11:31:48AM -0200, Daniel van Ham Colchete wrote:
> What PostgreSQL benchmark software should I use???
Look up the list archives; search for "TPC".
> I'll test PostgreSQL 8.1 on a Fedora Core 6 and on a Gentoo. I'll get
> the same version FC6 uses and install it at my Gento
On Mon, Dec 11, 2006 at 11:17:06AM -0200, Daniel van Ham Colchete wrote:
> I just remebered one case with MySQL. When I changed the distro from
> Conectiva 10 (rpm-based ended brazilian distro) to Gentoo, a MySQL
> operation that usually took 2 minutes to run, ended in 47 seconds.
How do you know
On Mon, Dec 11, 2006 at 11:09:13AM -0200, Daniel van Ham Colchete wrote:
>> You know what? I don't.
> So test it yourself.
You're making the claims, you're supposed to be proving them...
> As I said, it is an example. Take floatpoint divisions. You have
> plenty of ways of doing it: 387, MMX, SSE
On Mon, Dec 11, 2006 at 09:05:56AM -0200, Daniel van Ham Colchete wrote:
> But, trust me on this one. It's worth it.
You know what? I don't.
> Think of this: PostgreSQL and GNU LibC use a lot of complex algorithms:
> btree, hashes, checksums, strings functions, etc... And you have a lot of
> ways
On Wed, Dec 06, 2006 at 06:45:56PM +0100, Markus Schiltknecht wrote:
> Cool, thank you for the example :-) I thought the MTA or at least the
> mailing list would wrap mails at some limit. I've now set word-wrap to
> characters (it seems not possible to turn it off completely in
> thund
On Wed, Dec 06, 2006 at 05:31:01PM +0100, Markus Schiltknecht wrote:
>> Care to post these numbers *without* word wrapping? Thanks.
> How is one supposed to do that? Care giving an example?
This is a rather long sentence without any kind of word wrapping except what
would be imposed on your own s