we are using a cloud server
*this is the memory info*
free -h
                    total   used   free  shared  buffers  cached
Mem:                  15G    15G   197M    194M     121M     14G
-/+ buffers/cache:           926M    14G
Swap:                 15G    32M    15G
*this is
2 Dual, so a rather old and slow box) and I could sort
1E6 rows of 128 random bytes in 5.6 seconds. Even if I kept the first 96
bytes constant (so only the last 32 were random), it took only 21
seconds. Either this CPU is really slow or the data is heavily skewed -
is it possible that all dimension
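A quick way to sanity-check numbers like these on any box is a stand-alone sort benchmark. Here is a minimal Python sketch of the test described above (this is not the original poster's program, and timings are machine-dependent):

```python
import os
import time

# Sort 1E6 rows of 128 random bytes, then repeat with a constant 96-byte
# prefix so each comparison must scan past the shared prefix before it
# reaches any distinguishing bytes.
N = 1_000_000

rows = [os.urandom(128) for _ in range(N)]
t0 = time.time()
rows.sort()
print(f"fully random rows: sorted {N} in {time.time() - t0:.1f}s")

prefix = bytes(96)  # 96 constant bytes; only the last 32 are random
rows = [prefix + os.urandom(32) for _ in range(N)]
t0 = time.time()
rows.sort()
print(f"constant-prefix rows: sorted {N} in {time.time() - t0:.1f}s")
```

Comparing the two timings shows how much a long shared prefix inflates per-comparison cost.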
On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:
> wdsah=> explain analyze select facttablename, columnname, term, concept_id,
> t.hidden, language, register
> from term t where facttablename='facttable_stat_fta4' and
> columnname='einh
e_stat_fta4_warenstrom_idx on facttable_stat_fta4 f
        (cost=0.00..2124100.90 rows=21787688 width=2)
        (actual time=0.029..0.029 rows=1 loops=3)
        Index Cond: ((warenstrom)::text = (t.term)::text)
Total runtime: 0.180 ms
(6 rows)
The estimated number of rows in the outer scan is way more a
On 2015-05-29 10:55:44 +0200, Peter J. Holzer wrote:
> wdsah=> explain analyze select facttablename, columnname, term, concept_id,
> t.hidden, language, register
> from term t where facttablename='facttable_stat_fta4' and
> columnname='einh
:text))
-> Index Scan using facttable_stat_fta4_berechnungsart_idx on facttable_stat_fta4 f
        (cost=0.00..2545748.85 rows=43577940 width=2)
        (actual time=0.089..16263.582 rows=21336180 loops=1)
Total runtime: 30948.648 ms
(6 rows)
Over 30 seconds! That's almost 200'000 times slower.
T
On Wed, Jul 23, 2014 at 6:21 AM, Rural Hunter wrote:
> What's wrong and how can I improve the planning performance?
What is constraint exclusion set to?
--
Douglas J Hunley (doug.hun...@gmail.com)
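For archive readers: constraint_exclusion is an ordinary setting that can be checked and changed per session. A minimal sketch (table and column names invented):

```sql
SHOW constraint_exclusion;
-- 'partition' (the default from 8.4 on) applies CHECK-constraint pruning
-- only to inheritance children and UNION ALL arms; 'on' applies it everywhere.
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM parent_tbl WHERE part_key = 42;
```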
here any way
to vacuum a specific table instead of whole database ?
Thanks,
Ramesh
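For the archives, the answer: VACUUM accepts an optional table name, so a single table can be vacuumed on its own (my_table below is a placeholder):

```sql
-- Vacuum and re-analyze just one table instead of the whole database:
VACUUM ANALYZE my_table;
-- VERBOSE reports progress and dead-row counts:
VACUUM VERBOSE my_table;
```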
On Thu, Aug 16, 2012 at 10:09 AM, Scott Marlowe wrote:
> Please use plain text on the list, some folks don't have mail readers
> that can handle html easily.
>
> On Wed, Aug 15, 2012 at 10:30 PM,
ry quickly. Do you have anything deleting the rows afterwards? I have
> no experience with databases past 50M rows, so my questions are just so you
> can line up the right info for when the real experts get online :-)
>
> Regards, David
>
>
> On 16/08/12 11:23, J Ramesh Kumar
Hi,
My application performs data-intensive operations (a high number of inserts,
about 1500 per second). I switched it from MySQL to PostgreSQL. When I
compared performance between MySQL and PostgreSQL, I found a huge
difference in disk writes and disk space used. Below st
Hi,
My application performs 1600 inserts per second and 7 updates per
second. The updates occur only in a small table that has just 6 integer
columns. The inserts go into all the other daily tables. My application
creates around 75 tables per day. No updates/deletes occur in those 75
d
On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda wrote:
> performance guidelines, I recommend Greg Smith's "PostgreSQL 9.0 High
> Performance" [1] (disclaimer: I used to work with Greg and got a free
> copy)
>
> I'll second that. "PostgreSQL 9.0 High Performance" is an excellent
resource
(I recom
Sorry, meant to send this to the list.
For really big data-warehousing, this document really helped us:
http://pgexperts.com/document.html?id=49
On Sun, Sep 11, 2011 at 1:36 PM, Ogden wrote:
> As someone who migrated a RAID 5 installation to RAID 10, I am getting far
> better read and write performance on heavy calculation queries. Writing on
> the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is
> the best. It sho
On Wed, Aug 17, 2011 at 1:55 PM, Ogden wrote:
>
>
> What about the OS itself? I put the Debian linux sysem also on XFS but
> haven't played around with it too much. Is it better to put the OS itself on
> ext4 and the /var/lib/pgsql partition on XFS?
>
>
We've always put the OS on whatever default
On Mon, Apr 25, 2011 at 10:04 PM, Rob Wultsch wrote:
> Tip from someone that manages thousands of MySQL servers: Use InnoDB
> when using MySQL.
Granted, my knowledge of PostgreSQL (and even MSSQL) far surpasses my
knowledge of MySQL, but if InnoDB has such amazing benefits as being
crash safe, an
Not sure if this is the right list...but:
Disclaimer: I realize this is comparing apples to oranges. I'm not
trying to start a database flame-war. I just want to say thanks to
the PostgreSQL developers who make my life easier.
I manage thousands of databases (PostgreSQL, SQL Server, and MySQL)
On Thu, Apr 21, 2011 at 3:04 PM, Scott Marlowe wrote:
> Just because you've been walking around with a gun pointing at your
> head without it going off does not mean walking around with a gun
> pointing at your head is a good idea.
+1
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On Thu, Mar 17, 2011 at 10:13 AM, Jeff wrote:
> hey folks,
>
> Running into some odd performance issues between a few of our db boxes.
We've noticed similar results both in OLTP and data warehousing conditions here.
Opteron machines just seem to lag behind *especially* in data
warehousing. Smal
On Tue, Feb 08, 2011 at 03:52:31PM -0600, Kevin Grittner wrote:
> Scott Marlowe wrote:
> > Greg Smith wrote:
>
> >> Kevin and I both suggested a "fast plus timeout then immediate"
> >> behavior is what many users seem to want.
>
> > Are there any settings in postgresql.conf that would make it
On Thu, Feb 03, 2011 at 12:44:23PM -0500, Chris Browne wrote:
> mladen.gog...@vmsinfo.com (Mladen Gogala) writes:
> > Hints are not even that complicated to program. The SQL parser should
> > compile the list of hints into a table and optimizer should check
> > whether any of the applicable access
On Sun, Jan 30, 2011 at 05:18:15PM -0500, Tom Lane wrote:
> Andres Freund writes:
> > What happens if you change the
> > left join event.origin on event.id = origin.eventid
> > into
> > join event.origin on event.id = origin.eventid
> > ?
>
> > The EXISTS() requires that origin is not nul
Odds are that a table of 14 rows will more likely be cached in RAM
than a table of 14 million rows. PostgreSQL would certainly be more
"openminded" to using an index if chances are low that the table is
cached. If the table *is* cached, though, what point would there be
in reading an index?
Also
On Wednesday 17 November 2010 15:26:56 Eric Comeau wrote:
> This is not directly a PostgreSQL performance question but I'm hoping
> some of the chaps that build high IO PostgreSQL servers on here can help.
>
> We build file transfer acceleration s/w (and use PostgreSQL as our
> database) but we ne
> "Tom Lane" wrote in message
> news:25116.1277047...@sss.pgh.pa.us...
>> "Davor J." writes:
>>> Suppose 2 functions: factor(int,int) and offset(int, int).
>>> Suppose a third function: convert(float,int,int) which simply returns
>>>
a way to affect the functions. So, as far as I understand the
Postgres workings, this shouldn't pose a problem.
Regards,
Davor
"Tom Lane" wrote in message
news:25116.1277047...@sss.pgh.pa.us...
> "Davor J." writes:
>> Suppose 2 functions: factor(int,int) and
out time zone))"
"->  Bitmap Index Scan on tbl_sensor_channel_data_pkey
         (cost=0.00..2978.92 rows=105456 width=0)
         (actual time=27.433..27.433 rows=150678 loops=1)"
"        Index Cond: ((tree_id = 70) AND (sens_chan_data_timestamp >= '2008-06-11 00:00:00
I think I have read what is to be read about queries being prepared in
plpgsql functions, but I still can not explain the following, so I thought
to post it here:
Suppose 2 functions: factor(int,int) and offset(int, int).
Suppose a third function: convert(float,int,int) which simply returns
$1*
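A common culprit in cases like this is plan caching: plpgsql prepares each SQL statement on first use and reuses that plan. One workaround, sketched here with the thread's function names but an invented body, is EXECUTE, which plans the statement on every call (the USING clause needs 8.4 or later):

```sql
CREATE OR REPLACE FUNCTION convert(v float, a int, b int)
RETURNS float AS $$
DECLARE
    r float;
BEGIN
    -- EXECUTE bypasses the cached prepared plan:
    EXECUTE 'SELECT $1 * factor($2, $3)' INTO r USING v, a, b;
    RETURN r;
END;
$$ LANGUAGE plpgsql;
```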
On Friday 04 June 2010 14:17:35 Jon Schewe wrote:
> Some interesting data about different filesystems I tried with
> PostgreSQL and how it came out.
>
> I have an application that is backed in postgres using Java JDBC to
> access it. The tests were all done on an opensuse 11.2 64-bit machine,
> on
On Wednesday 02 June 2010 13:37:37 Mozzi wrote:
> Hi
>
> Thanx mate Create Index seems to be the culprit.
> Is it normal to just use 1 cpu tho?
If it is a single-threaded process, then yes.
And a "Create index" on a single table will probably be single-threaded.
If you now start a "create index"
On Tue, Mar 23, 2010 at 03:22:01PM -0400, Tom Lane wrote:
> "Ross J. Reedstrom" writes:
>
> > Andy, you are so me! I have the exact same one-and-only-one mission
> > critical mysql DB, but the gatekeeper is my wife. And experience with
> > that instance has made
On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
>
> I guess, for me, once I started using PG and learned enough about it (all
> db have their own quirks and dark corners) I was in love. It wasnt
> important which db was fastest at xyz, it was which tool do I know, and
> trust, tha
2010/2/1 :
> * joke 1: an insert operation takes an exclusive lock on the row referenced
> by the foreign key. A big, big, big performance killer; I think this is a
> stupid design.
>
> * joke 2: a concurrent update on the same row means that the other
> transaction must wait for the earlier transaction to comp
Let's say you have one partitioned table, "tbl_p", partitioned according to
the PK "p_pk". I have made something similar with triggers, basing myself on
the manual for making partitioned tables.
According to the manual, optimizer searches the CHECKs of the partitions to
determine which table(s)
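For reference, the trigger-based scheme from the manual looks roughly like this (all names and partition bounds invented for illustration):

```sql
CREATE TABLE tbl_p (p_pk integer PRIMARY KEY, payload text);
CREATE TABLE tbl_p_low  (CHECK (p_pk <  1000000)) INHERITS (tbl_p);
CREATE TABLE tbl_p_high (CHECK (p_pk >= 1000000)) INHERITS (tbl_p);

CREATE OR REPLACE FUNCTION tbl_p_route() RETURNS trigger AS $$
BEGIN
    IF NEW.p_pk < 1000000 THEN
        INSERT INTO tbl_p_low VALUES (NEW.*);
    ELSE
        INSERT INTO tbl_p_high VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- the row has already been routed to a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tbl_p_route_trg BEFORE INSERT ON tbl_p
    FOR EACH ROW EXECUTE PROCEDURE tbl_p_route();

-- With constraint_exclusion enabled, the planner uses the CHECKs to skip
-- children that cannot match:
-- EXPLAIN SELECT * FROM tbl_p WHERE p_pk = 42;
```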
On Mon, 26 Oct 2009 14:09:49 -0400, Tom Lane wrote:
> "Michal J. Kubski" writes:
>> [ function that creates a bunch of temporary tables and immediately
>> joins them ]
>
> It'd probably be a good idea to insert an ANALYZE on the temp tables
> after you
On Mon, 26 Oct 2009 11:52:22 -0400, Merlin Moncure
wrote:
>> Do you not have an index on last_snapshot.domain_id?
>>
> that, and also try rewriting a query as JOIN. There might be
> difference in performance/plan.
>
Thanks, it runs better (average 240s, not 700s) with t
On Mon, 26 Oct 2009 09:19:26 -0400, Merlin Moncure
wrote:
> On Mon, Oct 26, 2009 at 6:05 AM, Michal J. Kubski
> wrote:
>> On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz
>> wrote:
>>> On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead
>>> wrote:
>>&
Hi,
On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz
wrote:
> On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead
> wrote:
>
>>
>>
>> Do you not have an index on last_snapshot.domain_id?
>>
>
> that, and also try rewriting a query as JOIN. There might be difference
in
> performance/plan.
>
Hi,
Is there any way to get the query plan of the query run in the stored
procedure?
I am running the following one and it takes 10 minutes in the procedure
when it is pretty fast standalone.
Any ideas would be welcome!
# EXPLAIN ANALYZE SELECT m.domain_id, nsr_id FROM nsr_meta m, last_snapsho
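One answer for the archives: the auto_explain contrib module (8.4+) can log the plans of statements run inside functions:

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log a plan for every statement
SET auto_explain.log_analyze = on;            -- include actual times and rows
SET auto_explain.log_nested_statements = on;  -- cover statements inside functions
-- Now run the procedure; the plans appear in the server log:
-- SELECT my_procedure();
```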
Excellent. I'll take a look at this and report back here.
Ross
On Mon, Feb 23, 2009 at 04:17:00PM -0500, Tom Lane wrote:
> "Ross J. Reedstrom" writes:
> > Summary: C client and large-object API python both send bits in
> > reasonable time, but I suspect there
On Thu, Feb 19, 2009 at 02:09:04PM +0100, PFC wrote:
>
> >python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
> >either version.
>
> I've seen psycopg2 saturate a 100 Mbps ethernet connection (direct
> connection with crossover cable) between postgres server and client dur
[note: sending a message that's been sitting in 'drafts' since last week]
Summary: C client and large-object API python both send bits in
reasonable time, but I suspect there's still room for improvement in
libpq over TCP: I'm suspicious of the 6x difference. Detailed analysis
will probably find i
On Tue, Feb 17, 2009 at 03:14:55PM -0600, Ross J. Reedstrom wrote:
> On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:
> >
> > What is the client software you're using? libpq?
> >
>
> python w/ psycopg (or psycopg2), which wraps libpq. Same resul
On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:
>
> On Feb 17, 2009, at 1:04 PM, Ross J. Reedstrom wrote:
>
>
> What is the client software you're using? libpq?
>
python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
either version.
I
On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:
>
>
> Try running tests with ttcp to eliminate any PostgreSQL overhead and
> find out the real bandwidth between the two machines. If its results
> are also slow, you know the problem is TCP related and not PostgreSQL
> related
Recently I've been working on improving the performance of a system that
delivers files stored in postgresql as bytea data. I was surprised at
just how much a penalty I find moving from a domain socket connection to
a TCP connection, even localhost. For one particular 40MB file (nothing
outrageous)
There are a few things you didn't mention...
First off, what is the context this database is being used in? Is it the
backend for a web server? Data warehouse? Etc?
Second, you didn't mention the use of indexes. Do you have any indexes on
the table in question, and if so, does EXPLAIN ANALYZE
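Spelled out, the two checks suggested above look like this (table and column names invented):

```sql
-- Is there an index on the filter column?
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- Does the planner use it, and do row estimates match reality?
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```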
On May 21, 2008, at 12:33 AM, Shane Ambler wrote:
Size can affect performance as much as anything else.
For a brief moment, I thought the mailing list had been spammed. ;-)
J. Andrew Rogers
alf a minute? Stupid examples
probably, but you get my point I hope :)
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
On Wednesday 27 February 2008 13:35:16 Douglas J Hunley wrote:
> > > 2) is there any internal data in the db that would allow me to
> > > programmatically determine which tables would benefit from being
> > > clustered? 3) for that matter, is there info to allow me to
On Wednesday 27 February 2008 12:40:57 Bill Moran wrote:
> In response to Douglas J Hunley <[EMAIL PROTECTED]>:
> > After reviewing
> > http://www.postgresql.org/docs/8.3/static/sql-cluster.html a couple of
> > times, I have some questions:
> > 1) it says to run a
tables with >1 indexes, does clustering on one index negatively impact
queries that use the other indexes?
5) is it better to cluster on a compound index (index on lastnamefirstname) or
on the underlying index (index on lastname)?
tia
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux
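A partial answer to question 2, for the archives: pg_stats exposes per-column correlation between physical and logical order, which is a rough proxy for which tables might benefit; CLUSTER itself always targets one table and one index (8.3 syntax, names invented):

```sql
-- Columns with correlation near 0 are poorly ordered on disk:
SELECT schemaname, tablename, attname, correlation
FROM pg_stats
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY abs(correlation);

-- Rewrite one table in index order (takes an exclusive lock while running):
CLUSTER mytable USING mytable_lastname_idx;
```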
On Tuesday 19 February 2008 17:53:45 Greg Smith wrote:
> On Tue, 19 Feb 2008, Douglas J Hunley wrote:
> > The db resides on a HP Modular Storage Array 500 G2. 4x72.8Gb 15k rpm
> > disks. 1 raid 6 logical volume. Compaq Smart Array 6404 controller
>
> You might consider doing s
ow that's just plain cool
/me updates our wiki
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Drugs may lead to nowhere, but at least it's the scenic route.
---(end of broadcast)---
TIP 1: i
fwiw, I +1 this
now that I have a (minor) understanding of what's going on, I'd love to do
something like:
pg_restore -WM $large_value
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
There are no dead students here. This week.
-
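A follow-up for the archives: 8.4 added essentially this knob as parallel restore:

```shell
# Restore with 4 parallel jobs (pg_restore 8.4+; needs a custom- or
# directory-format archive, not a plain SQL dump):
pg_restore -j 4 -d mydb dump.Fc
```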
On Tuesday 19 February 2008 15:16:42 Dave Cramer wrote:
> On 19-Feb-08, at 2:35 PM, Douglas J Hunley wrote:
> > On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
> >> shared buffers is *way* too small as is effective cache
> >> set them to 2G/6G respectively.
>
On Tuesday 19 February 2008 14:28:54 Dave Cramer wrote:
> shared buffers is *way* too small as is effective cache
> set them to 2G/6G respectively.
>
> Dave
pardon my ignorance, but is this in the context of a restore only? or 'in
general'?
--
Douglas J Hunley (dou
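For context, the suggestion translates to two lines in postgresql.conf (sizes follow the advice quoted above for the 8 GB machine in this thread; 8.2 and later accept unit suffixes):

```
shared_buffers = 2GB          # memory for PostgreSQL's own buffer cache
effective_cache_size = 6GB    # planner hint: OS cache plus shared_buffers
```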
On Tuesday 19 February 2008 13:23:23 Jeff Davis wrote:
> On Tue, 2008-02-19 at 13:03 -0500, Douglas J Hunley wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for a
> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre
On Tuesday 19 February 2008 13:22:58 Tom Lane wrote:
> Richard Huxton <[EMAIL PROTECTED]> writes:
> > Douglas J Hunley wrote:
> >> I spent a whopping seven hours restoring a database late Fri nite for a
> >
> > Oh, and have you tweaked the configuration s
On Tuesday 19 February 2008 13:13:37 Richard Huxton wrote:
> Douglas J Hunley wrote:
> > I spent a whopping seven hours restoring a database late Fri nite for a
> > client. We stopped the application, ran pg_dump -v -Ft -b -o $db >
> > ~/pre_8.3.tar on the 8.2.x db, and the
le HD drive? What
> are your settings for postgresql?
It wasn't doing anything but the restore. Dedicated DB box
postgresql.conf attached
system specs:
Intel(R) Xeon(TM) CPU 3.40GHz (dual, so shows 4 in Linux)
MemTotal: 8245524 kB
The db resides on a HP Modular Storage Array 500 G2. 4x7
'll grant you that it's a 5.1G tar file, but 7 hours seems excessive.
Is that kind of timeframe 'abnormal' or am I just impatient? :) If the former,
I can provide whatever you need, just ask for it.
Thanks!
--
Douglas J Hunley (doug at hunley.homeip.net) -
On Tuesday 05 June 2007 10:34:04 Douglas J Hunley wrote:
> On Monday 04 June 2007 17:11:23 Gregory Stark wrote:
> > Those plans look like they have a lot of casts to text in them. How have
> > you defined your indexes? Are your id columns really text?
>
> pro
"f_val_fid_val_idx" UNIQUE, btree (field_id, value)
"field_class_idx" btree (value_class)
"field_value_idx" btree (value)
item table:
Indexes:
"item_pk" PRIMARY KEY, btree (id)
"item_created_by_id" btree (created_by_id)
"item_folder" btree
On Monday 04 June 2007 17:17:03 Heikki Linnakangas wrote:
> And did you use the same encoding and locale? Text operations on
> multibyte encodings are much more expensive.
The db was created as:
createdb -E UNICODE -O
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #17477
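Since -E UNICODE fixes only the encoding, it's worth checking the locale too; the cost of text operations depends heavily on it:

```sql
SHOW server_encoding;
SHOW lc_collate;   -- string comparison/sort locale
SHOW lc_ctype;     -- character classification locale
```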
LIKE 'tracker.peer_review_tracker.%' OR folder.path LIKE 'tracker.tars_0.%'
OR folder.path LIKE 'tracker.reviews.%' OR folder.path LIKE 'tracker.defects.
%' OR folder.path LIKE 'tracker.tars.%' OR folder.path
LIKE 'tracker.database_change_r
I'm on the list, so there's no need to reply directly; I can get the
replies from the list.
Thanks again for everyone's assistance thus far. Y'all rock!
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
I feel like I'm di
nterim, I did an 'initdb' to
another location on the same box and then copied those values into the config
file. That's cool to do, I assume?
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
Cowering in a closet is starting
'initdb' will make changes to
the file? The file I sent is the working copy from the machine in question.
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley.homeip.net
"Does it worry you that you don't talk any kind of sense?"
-
enable_mergejoin = off
> geqo = off
>
> I've occasionally had to tweak planner settings but I prefer to do
> so for specific queries instead of changing them server-wide.
I concur. Unfortunately, our Engr group don't actually write the SQL for the
app. It's generated, and
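When the SQL is generated and can't be edited, a per-transaction override is still possible without changing anything server-wide:

```sql
BEGIN;
SET LOCAL enable_mergejoin = off;  -- reverts automatically at COMMIT/ROLLBACK
-- run the problematic generated query here
COMMIT;
```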
.ELsmp. Hyperthreading is disabled in the BIOS and there are 2 Xeon
3.4Ghz cpus. There is 8Gb of RAM in the machine, and another 8Gb of swap.
Thank you in advance for any and all assistance you can provide.
--
Douglas J Hunley (doug at hunley.homeip.net) - Linux User #174778
http://doug.hunley
r our apps, about half the CPU time will be spent inside the
geometry ops. Fortunately, there is significant opportunity for
improvement in the performance of the underlying code if anyone found
the time to optimize (and uglify) it for raw speed.
Cheers,
J. Andrew Rogers
would suggest that XFS is a fine
and safe choice for your application.
J. Andrew Rogers
, since those differ significantly from the P4 in
capability.
J. Andrew Rogers
replaced with 64-bit Linux on
Opterons because the AMD64 systems tend to be both faster and
cheaper. Architectures like Sparc have never given us problems, but
they have not exactly thrilled us with their performance either.
Cheers,
J. Andrew Rogers
On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote:
I should have gprof numbers on a similarly set up test machine
soon ...
gprof output is available at http://geeklair.net/~dluke/
postgres_profiles/
(generated from CVS HEAD as of today).
Any ideas are welcome.
Thanks!
--
Daniel J. Luke
"\copy", having the
file sitting on the client?
COPY table FROM STDIN using psql on the server
I should have gprof numbers on a similarly set up test machine soon ...
--
Daniel J. Luke
++
| * [EMAIL PRO
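The two client-side variants mentioned above, spelled out (database, table, and file names invented):

```shell
# psql's \copy reads a client-side file and streams it as COPY FROM STDIN:
psql mydb -c "\copy mytable FROM 'data.txt'"

# an equivalent hand-rolled pipe:
psql mydb -c "COPY mytable FROM STDIN" < data.txt
```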
ite
capacity, so I don't think that's currently limiting performance).
--
Daniel J. Luke
8.1.x (I think we're upgrading from 8.1.3 to 8.1.4 today).
--
Daniel J. Luke
On May 24, 2006, at 4:13 PM, Steinar H. Gunderson wrote:
On Wed, May 24, 2006 at 04:09:54PM -0400, Daniel J. Luke wrote:
no warnings in the log (I did change the checkpoint settings when I
set up the database, but didn't notice an appreciable difference in
insert performance).
How
bably :) I'll keep searching the list archives and see if I find
anything else (I did some searching and didn't find anything that I
hadn't already tried).
Thanks!
--
Daniel J. Luke
utes and I potentially
have people querying it constantly, so I can't remove and re-create
the index.
--
Daniel J. Luke
or am I
just crazy)?
Thanks for any insight!
--
Daniel J. Luke
Yes, that helps a great deal. Thank you so much.
- Original Message -
From: "Richard Huxton"
To: <[EMAIL PROTECTED]>
Cc:
Sent: Thursday, January 26, 2006 11:47 AM
Subject: Re: [PERFORM] Query optimization with X Y JOIN
[EMAIL PROTECTED] wrote:
If I want my database to go faster, du
If I want my database to go faster, due to X, then I would think that the
issue is about performance. I wasn't aware of a particular constraint on X.
I have more than a rudimentary understanding of what's going on here; I was
just hoping that someone could shed some light on the basic principle o
Hey guys, how have you been? This is quite a newbie
question, but I need to ask it. I'm trying to wrap my mind around the syntax of
JOIN and why and when to use it. I understand the concept of making a query go
faster by creating indexes, but it seems that when I want data from multiple
tables that
Here's some C to use to create the operator classes; it seems to work OK.
---
#include "postgres.h"
#include "fmgr.h"
#include "utils/date.h"

/* Reverse comparator for date sorts */
PG_FUNCTION_INFO_V1(ddd_date_revcmp);

Datum
ddd_date_revcmp(PG_FUNCTION_ARGS)
{
    DateADT arg1 = PG_GETARG_DATEADT(0);
    DateADT arg2 = PG_GETARG_DATEADT(1);

    PG_RETURN_INT32(arg2 - arg1);  /* reversed: later dates sort first */
}
I have the answer I've been looking for and I'd like to share it with everyone.
With your help, it turned out the real issue was getting an index used for my
ORDER BY x DESC clauses. For some reason that doesn't make much sense to me,
postgres doesn't support this automatically, when it arguably should.
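A note for later readers: PostgreSQL 8.3 added per-column ordering options to CREATE INDEX, which covers most ORDER BY ... DESC cases without a custom operator class (names invented):

```sql
-- 8.3+: the index itself stores descending order:
CREATE INDEX items_by_date_desc ON items (created_date DESC);
-- a query with ORDER BY created_date DESC can then scan it directly.
```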
I've read all of this info, closely. I wish when I was searching for an
answer for my problem these pages came up. Oh well.
I am getting an idea of what I need to do to make this work well. I was
wondering if there is more information to read on how to implement this
solution in a more simple way
To: "Josh Berkus"
Cc: ; <[EMAIL PROTECTED]>
Sent: Tuesday, January 17, 2006 5:40 PM
Subject: Re: [PERFORM] Multiple Order By Criteria
On Tue, 17 Jan 2006, Josh Berkus wrote:
J,
> I have an index built for each of these columns in my order by clause.
> This query takes an
new index.
Am I doing something wrong ?
- Original Message -
From: "Josh Berkus"
To:
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, January 17, 2006 5:25 PM
Subject: Re: [PERFORM] Multiple Order By Criteria
J,
I have an index built for each of these columns in my order by clause.
I'm trying to query a table with 250,000+ rows. My
query requires I provide 5 columns in my "order by" clause:
select column
from table
where
column >= '2004-3-22 0:0:0'
order by
ds.receipt desc,
ds.carrier_id asc,
ds.batchnum asc,
encounternum asc,
ds.encounter_id AS
On Dec 12, 2005, at 2:19 PM, Vivek Khera wrote:
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap up
d in theory and the upgrade cost is below the noise floor for
most database servers.
J. Andrew Rogers
AMD added quad-core processors to their public roadmap for 2007.
Beyond 2007, the quad-cores will scale up to 32 sockets
(using Direct Connect Architecture 2.0)
Expect Intel to follow.
douglas
On Nov 16, 2005, at 9:38 AM, Steve Wampler wrote:
[...]
Got it - the cpu is only
A blast from the past is forwarded below.
douglas
Begin forwarded message:
From: Tom Lane <[EMAIL PROTECTED]>
Date: August 23, 2005 3:23:43 PM EDT
To: Donald Courtney <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org, Frank Wiles <[EMAIL PROTECTED]>, gokulnathbabu manoharan <[EMAIL PROTEC
Hey, you can say what you want about my style, but you
still haven't pointed to even one article from the vast literature
that you claim supports your argument. And I did include a
smiley. Your original email that PostgreSQL is wrong and
that you are right led me to believe that you, like other
Ron Peacetree sounds like someone talking out of his _AZZ_.
He can save his unreferenced flapdoodle for his SQL Server
clients. Maybe he will post references so that we may all
learn at the feet of Master Peacetree. :-)
douglas
On Oct 4, 2005, at 7:33 PM, Ron Peacetree wrote:
pg is _ver
s all its time waiting for disks, no
quantity of processors will help you unless you are doing a lot of math on
the results.
YMMV, as always. Recommendations more specific than "Opterons rule, Xeons
suck" depend greatly on what you plan on doing with the database.
design.
In short, what you are trying to do is easily doable on PostgreSQL in
theory. However, restrictions on design choices may pose significant
hurdles. We did not start out with an ideal system either; it took a fair
amount of re-engineering to solve all the bottlene