I'm a relative newcomer to PostgreSQL (but
not to relational databases), so I'm not sure if this belongs in the novice or
general lists.
Joe
ky
behavior on almost every page: the upper 1/2 or 2/3 of the page is displayed
first and you can see a blank bottom (or you can see a half-filled completion
bar). With MySQL each page is generally displayed in one swoop.
Joe
additional tables that perhaps could be merged into the entry table (and that
would reduce the number of queries) but I do not want to make major changes to
the schema (and the app) for the PostgreSQL conversion.
Joe
in, I'm using PostgreSQL 8.0.3, Apache 1.3.28, PHP 4.3.4, MySQL 4.0.16 and
I'm comparing both databases on XP (on a Pentium 4, 1.6 GHz, 256 MB RAM).
Thanks for any feedback.
Joe
ted
internally with PHP, at the heading level, before display.
Maybe there is some way to merge all the queries (some already fairly complex)
that fetch the data for the entries box but I believe it would be a monstrosity
with over 100 lines of SQL.
Thanks,
Joe
icture in a local
Windows client/server environment?
Joe
tests were repeated
five times and the quoted results are averages). Second, is that PostgreSQL's
performance appears to be much more consistent in certain queries. For example,
the query that retrieves the list of subtopics (the names and description of
economists), took 17 msec in PostgreSQL
performance of SELECTs).
Yes indeed. When I added the REFERENCES to the schema and reran the conversion
scripts, aside from having to reorder the table creation and loading (they used
to be in alphabetical order), I also found a few referential integrity errors in
t
Filter: ((topic_id2 = 1252) OR (topic_id1 = 1252))
-> Seq Scan on topic t (cost=0.00..30.94 rows=494 width=216) (never
executed)
Total runtime: 0.000 ms
(13 rows)
The overall execution time for the Economists page for PostgreSQL is within 4%
of the MySQL time, so for the time being
having the database do aggregation and
sorting as you mentioned in your other email).
FWIW, I usually use timestamptz for both created and updated fields.
IIRC 'created' ended up as a DATE because MySQL 4 has a restriction about a
single TIMESTAMP column per table.
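A minimal sketch of that convention (the table name is just for illustration):
CREATE TABLE example_entry (
    id      serial PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now(),
    updated timestamptz NOT NULL DEFAULT now()
);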
Is there a reason you are not using postgis. The R tree indexes are
designed for exactly this type of query and should be able to do it very
quickly.
Hope that helps,
Joe
> I have this table:
>
> CREATE TABLE test_zip_assoc (
> id serial NOT NULL,
> f_id integer DEF
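A rough sketch of the PostGIS route being suggested (the lat/lon columns,
SRID, and index name are assumptions, since the quoted table definition is
cut off):
SELECT AddGeometryColumn('test_zip_assoc', 'geom', 4326, 'POINT', 2);
UPDATE test_zip_assoc
   SET geom = ST_SetSRID(ST_MakePoint(lon, lat), 4326);
CREATE INDEX test_zip_assoc_geom_idx
    ON test_zip_assoc USING gist (geom);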
Hi all,
I was wondering if part or all of Postgres would be able to take
advantage of a beowulf cluster to increase performance? If not then why
not, and if so then how would/could it benefit from being on a cluster?
Thanks for the enlightenment in advance.
-Joe
ly opt for iSCSI over GigE with
a NetApp.
Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
IIRC because of performance.
Joe
Aditya wrote:
On Mon, Apr 11, 2005 at 10:59:51AM -0700, Joe Conway wrote:
Any particular reason? Our NetApp technical rep advised nfs over iSCSI,
IIRC because of performance.
I would mount the Netapp volume(s) as a block level device on my server
using
iSCSI (vs. a file-based device like NFS) so
input. As an additional question,
does anyone have any strong recommendations for vendors that offer both
consulting/training and support? We are currently speaking with Command
Prompt, EnterpriseDB, and Greenplum but I am certainly open to hearing
any other recommendations.
Thanks,
Joe
of this list to
help make decisions for the new machine(s), was just very interested in
hearing feedback on software vs. hardware raid.
We will likely be using the 2.6.18 kernel.
Thanks for everyone's input,
Joe
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
size is changing dramatically. I have attached an
example script.
Hope that helps,
Joe
#! /usr/bin/python
import psycopg
import sys
def fixName(name):
    return name[:19]
if len(sys.argv) > 1 and sys.argv[1] ==
About six months ago, our normally fast postgres server started
having performance issues. Queries that should have been instant were
taking up to 20 seconds to complete (like selects on the primary key
of a table). Running the same query 4 times in a row would yield
dramatically different
configured, but want to get some hardware ballparks in order to get
quotes and potentially request a trial unit.
Any thoughts or recommendations? We are running openSUSE 10.2 with
kernel 2.6.18.2-34.
Regards,
Joe Uhl
[EMAIL PROTECTED]
e from someone but I
knew this list would provide some excellent ideas and feedback to get us
started.
Joe Uhl
[EMAIL PROTECTED]
On Thu, 9 Aug 2007 16:02:49 -0500, "Scott Marlowe"
<[EMAIL PROTECTED]> said:
> On 8/9/07, Joe Uhl <[EMAIL PROTECTED]> wrote:
> > We have
total in the past 2 years.
Just my personal experience, i'd be happy to pass along the account
manager's information if anyone is interested.
any
> improvements: (seems that the index is not used)
> Filter: ((rank IS NOT NULL) AND (pf_id IS NULL) AND
> ((project_id)::text = 'proj2783'::text))
> Total runtime: 11.988 ms
> (6 rows)
>
> Time: 13.654 ms
try:
create index item_rank_null_idx on item_rank(pf_id)
where rank IS NOT NULL AND pf_id IS NULL;
On 06/10/2010 01:10 PM, Joe Conway wrote:
> try:
>
> create index item_rank_null_idx on item_rank(pf_id)
> where rank IS NOT NULL AND pf_id IS NULL;
oops -- that probably should be:
create index item_rank_null_idx on item_rank(project_id)
where rank IS NOT NULL AND pf_id IS NULL;
On 06/10/2010 01:21 PM, Anne Rosset wrote:
>>
> I tried that and it didn't make any difference. Same query plan.
A little experimentation suggests this might work:
create index item_rank_project on item_rank(project_id, rank) where
pf_id IS NULL;
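Assuming the same column meanings as in the thread above, verifying that the
planner actually picks it up would look like:
EXPLAIN ANALYZE
SELECT * FROM item_rank
 WHERE project_id = 'proj2783'
   AND rank IS NOT NULL AND pf_id IS NULL;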
Joe
there are also some good examples of array handling in PL/R, e.g.
pg_array_get_r() in pg_conversion.c
HTH,
Joe
--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & Support
where id in (42, 4242, 424242);
explain analyze select * from testbit where status & 512 = 512;
QUERY PLAN
------------------------------------------------------------------------
Index Scan using idx32 on testbit (cost=0.00..4712.62 rows=5000
width=22) (actual time=0.080..0.085 rows=3
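For anyone reproducing this: the definition of idx32 isn't shown above, but
one way to make a single-bit test like that indexable is a partial index
along these lines (names are a guess):
CREATE INDEX testbit_status512_idx ON testbit (id)
 WHERE status & 512 = 512;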
reflect reality. See
example below.
I can set up a cron job to run the ANALYZE manually, but it seems like
the autovacuum daemon should be smart enough to figure this out on its
own. Deletes can have as big an impact on the stats as inserts and
updates.
Joe Miller
---
testdb
compared to the total number of tuples inserted or updated since
the last ANALYZE.
I guess that should be updated to read "inserted, updated or deleted".
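For reference, the per-table knobs (8.3+ storage-parameter syntax; the table
name and values are only illustrative) can be tightened in the meantime,
though as discussed they only fire on the insert/update counts today:
ALTER TABLE mytable SET (
    autovacuum_analyze_scale_factor = 0.02,
    autovacuum_analyze_threshold = 100
);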
On Mon, Sep 20, 2010 at 10:12 PM, Tom Lane wrote:
> Joe Miller writes:
>> The autovacuum daemon currently uses the num
On Mon, Sep 20, 2010 at 6:28 PM, Kevin Grittner
wrote:
> Joe Miller wrote:
>
>> I can set up a cron job to run the ANALYZE manually, but it seems
>> like the autovacuum daemon should be smart enough to figure this
>> out on its own. Deletes can have as big an impact on
less clear as concurrency was increased.
Joe
--
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x7 Support
signature.asc
Description: OpenPGP digital signature
The biggest single problem with "select count(*)" is that it is
seriously overused. People use that idiom to establish existence, which
usually leads to a performance disaster in the application using it,
unless the table has no more than a few hundred records. The SQL language, of
which PostgreSQL offe
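A minimal sketch of the cheaper idiom (table and column names are
hypothetical): let EXISTS stop at the first matching row instead of counting
them all:
SELECT EXISTS (SELECT 1 FROM orders WHERE customer_id = 42) AS found;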
Thanks for fixing the docs, but if that's the case, I shouldn't be
seeing the behavior that I'm seeing.
Should I flesh out this test case a little better and file a bug?
Thanks,
Joe
On Tue, Sep 21, 2010 at 4:44 PM, Tom Lane wrote:
> Joe Miller writes:
>> I was l
on already in a nearby reply -- see
preload_libraries on this page:
http://www.postgresql.org/docs/8.1/interactive/runtime-config-resource.html
Joe
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
have, but we
use a dedicated gigabit interface to the NetApp, with jumbo (9K) frames,
and an 8K NFS blocksize. We use this for both Oracle and Postgres when
the database resides on NetApp.
Joe
---(end of broadcast)---
TIP 4: Have you searche
I have an index question. My table has 800K rows and I am doing a basic query
on an indexed integer field which takes over 2 seconds to complete because
it's ignoring the index for some reason. Any ideas as to why it's ignoring
the index? I'm using postgres 8.0.2.
SELECT count(*) FROM purchase_order_i
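(The standard first steps in a case like this are to refresh the statistics
and re-check the plan; the WHERE column below is a stand-in, since the query
above is cut off.)
ANALYZE purchase_order_items;
EXPLAIN ANALYZE
SELECT count(*) FROM purchase_order_items WHERE some_int_col = 42;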
great!
Thanks Markus and Tom!
On Jun 30, 2006, at 10:29 AM, Markus Schaber wrote:
Hi, Joe,
Joe Lester wrote:
Aggregate (cost=22695.28..22695.28 rows=1 width=0) (actual
time=2205.688..2205.724 rows=1 loops=1)
-> Seq Scan on purchase_order_items (cost=0.00..21978.08
rows=286882
wi
n "ANALYZE registrazioni;"
during the day. This will only update the statistics, and should be very
low impact.
HTH,
Joe
Gabriele Turchi wrote:
On Sat, 15/07/2006 at 13.04 -0700, Joe Conway wrote:
Why not just periodically (once an hour?) run "ANALYZE registrazioni;"
during the day. This will only update the statistics, and should be very
low impact.
This is my "solution" t
last night on PATCHES. Something like "insert
into abc (123); insert into abc (234); ..." actually seems to work
pretty well as long as you don't drive the machine into swapping. If
you're doing a very large number of INSERTs, break it up into bite-sized chunks.
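A sketch of the batching idea with the table from the example (proper INSERT
syntax; the chunk size is illustrative): wrap each chunk in one transaction
so per-statement commit overhead doesn't dominate:
BEGIN;
INSERT INTO abc VALUES (123);
INSERT INTO abc VALUES (234);
-- ...a few thousand rows per transaction...
COMMIT;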
d a long time ago.
Any chance you can look into this?
I can try. The server belongs to another department, and they are under
the gun to get back on track with their testing. Also, they compiled
without debug symbols, so I need to get permission to recompile.
Yes I would be very curious
Tom Lane wrote:
Joe Conway <[EMAIL PROTECTED]> writes:
Any idea where I can get my hands on the latest version. I found the
original post from Tom, but I thought there was a later version with
both number of pages and time to sleep as knobs.
That was as far as I got. I think Jan posted
ripts, and give it a try on one of my
own machines (all Linux, either RHAS3, RH9, or Fedora).
Joe
It's attached in case you are interested. I'll report back once
I have some results.
Joe
Index: src/backend/access/nbtree/nbtree.c
===================================================================
RCS file: /home/pgsql/CvsRoot/pgsql-server/src/backend/access/nbtree/nbtree.c,v
retrieving rev
The performance hit is negligible (based
on overall test time, and cpu % used by the vacuum process). I still
have a bit more analysis to do, but this is looking pretty good. More
later...
Joe
ly patch from Jan's all_performance
patch. It looks like the only difference is that it uses usleep()
instead of select(). So far the tests look promising.
Thanks,
Joe
ExecMakeTableFunctionResult():
8<---
tupstore = tuplestore_begin_heap(true, false, work_mem);
8<---
So up to work_mem (sort_mem in 7.4 and earlier) should be stored in memory.
Joe
Dirk Lutzebäck wrote:
postgresql 7.4.1
a new Dual Xeon MP
too many context switches (way more than 100,000) under higher load (meaning system
load > 2).
I believe this was fixed in 7.4.2, although I can't seem to find it in
the release notes.
Joe
Dirk Lutzebäck wrote:
Joe, do you know where I should look in the 7.4.2 code to find this out?
I think I was wrong. I just looked in CVS and found the commit I was
thinking about:
http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/storage/lmgr/s_lock.c.diff?r1=1.22&r2=1.23
Dell 2600 series machine and very fast. It has
the moderately fast 533MHz FSB so may not have as many problems as the MP
type CPUs seem to be having.
I've got a quad 2.8Ghz MP Xeon (IBM x445) that I could test on. Does
anyone have a test set that can reliably reproduce the pro
context switches to about 70,000 to 100,000. Two or more sessions bring
it up to 200K+.
Joe
Joe Conway wrote:
In isolation, test_run.sql should do essentially no syscalls at all once
it's past the initial ramp-up. On a machine that's functioning per
expectations, multiple copies of test_run show a relatively low rate of
semop() calls --- a few per second, at most --- a
WHERE t1.referral_raw_url = t2.url);
?
Joe
I haven't had a chance to play with this myself yet, but I hope to
relatively soon.
HTH,
Joe
couple of years. We've found it to
outperform local attached storage, and it has been extremely reliable
and flexible. Our DBAs wouldn't give it up without a fight.
Joe
't
being vacuumed.
On Sat, 2004-07-31 at 10:25, Joe Lester wrote:
> I've been running a postgres server on a Mac (10.3, 512MB RAM) with 200
> clients connecting for about 2 months without a crash. However just
> yesterday the database and all the clients hung. When I looked at th
ch is operational). As a result
we've compressed a > 1TB database down to ~0.4TB, and seen at least one
typical query reduced from ~9 minutes down to ~40 seconds.
Joe
[EMAIL PROTECTED] wrote:
Joe Conway <[EMAIL PROTECTED]> wrote on 15.09.2004, 06:30:24:
We're not completely done with our data conversion (from a commercial
RDBMSi), but so far the results have been excellent. Similar to what
others have said in this thread, the conversion involved re
find the time to work on it myself, but
for the moment I'm satisfied with the workarounds we've made.
Joe
Simon Riggs wrote:
Joe,
Your application is very interesting. I've just read your OSCON paper. I'd
like to talk more about that. Very similar to Kalido.
...but back to partitioning momentarily: Does the performance gain come from
partition elimination of the inherited tables under t
they have none it is very quick. In a
real life example I got the following results just this afternoon:
- aggregate row count = 471,849,665
- total number inherited tables = 216
(many are future dated and therefore contain no data)
- select one month's worth of data f
Christopher Browne wrote:
In the last exciting episode, [EMAIL PROTECTED] (Joe Conway) wrote:
That's exactly what we're doing, but using inherited tables instead of
a union view. With inheritance, there is no need to rebuild the view
each time a table is added or removed. Basical
width=16) (actual time=0.008..0.009 rows=1 loops=1)
Index Cond: (f2 = '2004-02-15'::date)
-> Index Scan using foo_2004_03_idx2 on foo_2004_03 foo
(cost=0.00..17.07 rows=5 width=16) (actual time=0.029..0.029 rows=0 loops=1)
Index Cond: (f2 = '2004-02-15'::date)
s with IBM hardware, and found their sales
and support to be responsive.
Joe
http://www.postgresql.org/docs/current/static/libpq-async.html
HTH,
Joe
chine too.
The $5K edition is just there to get you hooked ;-) By the time you add
up what you really want/need, figure you'll spend a couple of orders of
magnitude higher, and then > 20% per year for ongoing
maintenance/upgrades/support.
Joe
hyperthreaded Intel CPU
count as 1 or 2 CPUs from a licensing standpoint? We were eventually
told 1, but that the decision was "subject to change in the future".
Joe
mply don't
have, or work very poorly.
I never said I had a "bad experience" with Oracle. I pointed out the
gotchas. We have several large Oracle boxes running, several MSSQL, and
several Postgres -- they all have their strengths and weaknesses.
Nuff said -- this thread is way off topic.
union all select ''C'' union all select ''D'''
) as (product_id int, a int, c int, d int);
 product_id |  a   |  c   |  d
------------+------+------+------
        906 | 3000 | 3000 | 1935
        907 | 1500 | 1500 | 4575
        924 | 6000 | 1575 |
psql mydatabase < /path/to/contrib/scripts/tablefunc.sql
I have no idea where that would be on Mandrake, but you could probably do:
locate tablefunc.sql
On Fedora Core 1 I find it here:
/usr/share/pgsql/contrib/tablefunc.sql
Also find and read README.tablefunc.
HTH,
Joe
brutalized indexes to the SAS disks very soon.
If you do use Dell, get connected with a small business account manager
for better prices and more attention.
Joe
Ketema Harris wrote:
> I am trying to build a very Robust DB server that will support 1000+
> concurrent users (already have se
Magnus Hagander wrote:
> Ron St-Pierre wrote:
>
>> Joe Uhl wrote:
>>
>>> I realize there are people who discourage looking at Dell, but i've been
>>> very happy with a larger ball of equipment we ordered recently from
>>> them. Our databas
ion I learned was that
adding the oids in the table adds a significant amount of space to the data
AND the index.
As you may gather from this, we are relatively new to Postgres.
Any suggestions you can give me would be most helpful.
Cheers,
Joe
On Mon, Mar 10, 2008 at 11:16 AM, Gregory Stark <
on their own.
If the second one doesn't change often, can you pre-calculate it,
perhaps once a day?
Joe
greatly appreciated.
Joe Uhl
back to the
list on first try.
On Mar 16, 2009, at 3:52 PM, Alan Hodgson wrote:
On Monday 16 March 2009, Joe Uhl wrote:
Right now (not under peak load) this server is running at 68% CPU
utilization and its SATA raid 10 is doing about 2MB/s writes and
11MB/s reads. When I run dd I can hi
ly, Areca and Promise are good, Adaptec good, depending on
model, and the ones that Dell ship w/their servers haven't had good
reviews/reports.
On 03/16/2009 01:10 PM, Joe Uhl wrote:
Here is vmstat 1 30. We are under peak load right now so I can
gather
information from the real deal :)
On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:
On Tue, 17 Mar 2009, Gregory Stark wrote:
Hm, well the tests I ran for posix_fadvise were actually on a Perc5
-- though
who knows if it was the same under the hood -- and I saw better
performance
than this. I saw about 4MB/s for a single drive
On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:
On Fri, Mar 20, 2009 at 2:26 PM, Joe Uhl wrote:
On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:
On Tue, 17 Mar 2009, Gregory Stark wrote:
Hm, well the tests I ran for posix_fadvise were actually on a
Perc5 --
though
who knows if it was
On Mar 20, 2009, at 4:58 PM, Scott Marlowe wrote:
On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl wrote:
On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:
What does the cs entry on vmstat say at this time? If your cs is
skyrocketing then you're getting a context switch storm
http://cran.r-project.org/web/views/HighPerformanceComputing.html
Joe
rst). I've run dozens of
distributions and this works well for us (a startup with nontrivial
Linux experience). I imagine at a larger company it definitely would
not be an option.
Joe Uhl
partitions are set up correctly, this query also has
excellent performance:
select * from people where list_id = 'the_unique_list_id' and first_name
= 'JOE';
Here is the explain analyze for that:
Result (cost=0.00..963.76 rows=482 width=37739) (actual
time=6.031..25.394 rows=2319
This seems like a pretty major weakness in PostgreSQL partitioning. I
have essentially settled on not being able to do queries against the
parent table when I want to order the results. Going to have to use a
Hibernate interceptor or something similar to rewrite the statements so
they hit specific child tables.
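For example, the interceptor would take the ORDER BY query against the parent
and rewrite it against one child directly, something like this (the
child-table name and LIMIT are made up):
SELECT * FROM people_list_a
 WHERE list_id = 'the_unique_list_id'
 ORDER BY first_name
 LIMIT 20;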
On Mon, Oct 12, 2009 at 10:14 AM, Michal Szymanski wrote:
We have a performance problem with queries on a partitioned table when the
query uses ORDER BY and we want only the first/last rows of the result set.
More detailed description:
We have a big table where each row is one telephone call (CDR).
Definitni
On Mon, Oct 19, 2009 at 6:58 AM, Joe Uhl wrote:
I have a similar, recent thread titled Partitioned Tables and ORDER BY with
a decent break down. I think I am hitting the same issue Michal is.
Essentially doing a SELECT against the parent with appropriate constraint
columns in the WHERE clause
your partitioning scheme
4) create one index as (ts, a)
5) use dynamically generated SQL and table names in the application
code to create (conditionally) and load the tables
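A sketch of steps 4 and 5, assuming a parent table events(ts timestamptz,
a integer) and month-based ranges (all names here are hypothetical; the
application would generate the bounds and table name):
CREATE TABLE events_2009_10 (
    CHECK (ts >= '2009-10-01' AND ts < '2009-11-01')
) INHERITS (events);
CREATE INDEX events_2009_10_ts_a_idx ON events_2009_10 (ts, a);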
But of course test both this and your proposed method and compare ;-)
Also you might consider PL/R for some of your analysis.
lem, but time is, my advice is to hire a
consultant. There are probably several people on this list that can fill
that role for you. Otherwise read the archives and ask lots of specific
questions.
Joe
and ask lots of specific
questions.
Once we're ready to go with postgresql in a production environment we may
indeed need to hire a consultant. Any suggestions whom I should contact?
(We're in the San Diego area)
Um, actually, I live in the San Diego area ;-)
Joe
1) Are you vacuuming the tables (VACUUM ANALYZE) at appropriate intervals?
2) What are the table definitions and indexes for all tables involved?
3) What is the output of EXPLAIN ANALYZE?
HTH,
Joe
given the constraint of 5 drives? 1 drive for OS, and 4 for RAID 1+0 for
data-plus-WAL? I guess the ideal would be to find enough money for that
6th drive, use the mirrored pair for both OS and WAL.
Joe
Adam Witney wrote:
I think the issue from the original posters point of view is that the Dell
PE2650 can only hold a maximum of 5 internal drives
True enough, but maybe that's a reason to be looking at other
alternatives. I think he said the hardware hasn't been bought
Jean-Luc Lachance wrote:
I am curious. How can you have RAID 1+0 with only 2 drives?
If you are thinking about partitioning the drives, won't this defeat the
purpose?
Yeah -- Hannu already pointed out that my mind was fuzzy when I made
that statement :-(. See subsequent posts.
Joe
ives; maybe that fits the bill?
Joe
that if you set sort_mem too
high, and you have a lot of simultaneous sorts, you can drive the server
into swapping, which obviously is a very bad thing. You want it set as
high as possible, but not so high given your usage patterns that you
wind up swapping.
Joe
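A concrete way to keep that tradeoff in hand per session rather than globally
(the value is illustrative; sort_mem is specified in KB and was renamed
work_mem in 8.0):
SET sort_mem = 8192;  -- 8 MB per sort operation; budget for concurrent sorts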
"? If so, you
might want that index to be on (element, date). Then do:
SELECT num1, num2, num3 FROM mytable WHERE element = 'an_element' order
by date DESC LIMIT 20;
Replace num1, num2, num3 by whatever columns you want, and "LIMIT X" as
the number of rows you want.
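The matching index for that access path would be something along these lines
(table name as used above):
CREATE INDEX mytable_element_date_idx ON mytable (element, date);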
of elements?
Can I do anything with Indexing to help with performance? I suspect for the
majority of scans I will need to evaluate an outcome based on 4 or 5 of the
7 columns of data.
Again, this isn't clear to me -- but maybe I'm just being dense ;-)
Does this mean you expect 4 or 5 it
I/O time to read 20K
of uncompressed text versus the smaller compressed text is enough to
swamp the time saved from not needing to uncompress.
Any other ideas out there?
Joe
?
Joe