cpu_index_tuple_cost = 0.0001 # same scale as above
cpu_operator_cost = 0.00025 # same scale as above
effective_cache_size = 679006
I really don't remember how I came up with that effective_cache_size
number
Anyway... any advice would be appreciated :)
Steve
e to save
room. Things are certainly kinda tight in the config as is.
4) Try to set up some performance monitoring - for example 'dstat' is
a nice way to do that. This way you can find out where the bottleneck is
(memory, I/O etc.)
That's basically all I can think of right now.
everything a bit, but I'm not
really sure what the heck to do :) It's all guessing for me right now.
Steve
'average daily load'. It's okay if
on those deadlines it takes forever, because that's understandable.
However, I will look into this and see if I can figure out this
average value. This may be a valid idea, and I'll look some more at it.
Thanks!
Steve
SO ... our goal
been fixed, but we didn't undo the change).
What's your maintenance_work_mem setting? It can make a big difference in
sorting the data for indexes.
6 gigs currently. :)
If you could post the schema including the indexes, people might have more
ideas.
joined at the
hip for querying and sorting, and this method was a performance godsend
when we implemented it (with a C .so library, not using SQL in our
opclasses or anything like that).
Steve
work with and maybe help figure out what's going
on.
Thanks!
Steve
, but I'd be back to square one on learning performance
stuff :)
Anyway -- I'll listen to what people have to say, and keep this in mind.
It would be an interesting test to take parts of the process and compare
at least, if not converting the whole thing.
talk to you later,
Steve
be able to combine this
with Mr. Wagner's idea to make a much more efficient system overall. It's
going to be a pretty big programming task, but I've a feeling this
summarizer thing may just need to be re-written with a smarter system
like this to get something faster.
Thanks!
actually modifying the data, or
would I have to palloc a separate chunk of memory, copy the data, and
return the newly allocated memory because the memory allocated for the
args "goes away" or gets corrupted or something otherwise?
Thanks a lot for the info!
Steve
Steve wrote:
IF strlen(source.corrected_procedure_code)
THEN:
summary.procedure_code=source.corrected_procedure_code
summary.wrong_procedure_code=source.procedure_code
ELSE:
summary.procedure_code=source.procedure_code
summary.wrong_procedure_code=NULL
Um, so you test if
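For what it's worth, that branching collapses into a CASE expression in plain SQL;
a rough sketch, assuming "source" is the staging table and that an absent correction
comes through as an empty string rather than NULL:

SELECT CASE WHEN length(source.corrected_procedure_code) > 0
            THEN source.corrected_procedure_code
            ELSE source.procedure_code END AS procedure_code,
       CASE WHEN length(source.corrected_procedure_code) > 0
            THEN source.procedure_code
            ELSE NULL END AS wrong_procedure_code
FROM source;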
backed up daily, UPS'd, etc.
fsync = off
full_page_writes = off
wal_buffers = 512MB
# Leaving this low makes the DB complain, but I'm not sure what's
# reasonable.
checkpoint_segments = 128
random_page_cost = 1.5
cpu_tuple_cost = 0.001
cpu_index_tuple_cost = 0.0005
Steve <[EMAIL PROTECTED]> writes:
- What is temp_buffers used for exactly?
Temporary tables. Pages of temp tables belonging to your own backend
don't ever get loaded into the main shared-buffers arena, they are read
into backend-local memory. temp_buffers is the max amount (per backend)
of local memory to use this way.
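In other words, something like this (table name invented; note that temp_buffers
only takes effect if it is set before the session touches its first temp table):

SET temp_buffers = '64MB';   -- or a plain buffer count on older releases
CREATE TEMP TABLE scratch (id integer, payload text);
-- pages of "scratch" are kept in backend-local memory, capped by temp_buffers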
,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))
Any thoughts? Both encounter_id and receipt date are indexed columns.
I've vacuumed and analyzed the table. I tried making a combined index of
encounter_id and receipt and it hasn't worked out any better.
Thanks!
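(For reference, the combined index tried above would be roughly this, using the
table and column names that show up in the plans below; the index name is made up:)

CREATE INDEX detail_summary_enc_receipt_idx
    ON detail_summary (encounter_id, receipt);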
Could we see the exact definition of that table and its indexes?
It looks like the planner is missing the bitmap scan for some reason,
but I've not seen a case like that before.
Also, I assume the restriction on receipt date is very nonselective?
It doesn't seem to have changed the estimated rowcount.
Oy vey ... I hope this is a read-mostly table, because having that many
indexes has got to be killing your insert/update performance.
Hahaha yeah these are read-only tables. Nightly inserts/updates.
Takes a few hours, depending on how many records (between 4 and 10
usually). But during the
Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)
Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY
('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204
with your text_revop indexes, that's the reason...
The indexes have all worked, though I'll make the change anyway.
Documentation on how to code these things is pretty sketchy and I believe
I followed an example on the site if I remember right. :/
Thanks for the info though :)
Steve
o have no effect on anything though, either way. :)
Steve
doesn't make it any faster really :/
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Scott Marlowe <[EMAIL PROTECTED]> writes:
So there's a misjudgment of the number of rows returned by a factor of
about 88. That's pretty big. Since you had the same number without the
receipt date
I'm quite baffled as well,
Talk to you later,
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve <[EMAIL PROTECTED]> writes:
Here's my planner parameters:
I copied all these, and my 8.2.x still likes the bitmap scan a lot
better than the seqscan. Furthermore, I double-checked the
)))
The explain analyze is pending, running it now (however it doesn't really
appear to be any faster using this plan).
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve <[EMAIL PROTECTED]> writes:
... even if I force it to use the indexes
(enable_seqscan=off) it doesn't make it any faster.
3,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))
Total runtime: 121306.233 ms
Your other question is answered in the other mail along with the
non-analyze'd query plan :D
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve <[EMAIL PROTECTED]> writes:
e run on this table, much as I'd absolutely love to kill off some
indexes to improve our nightly load times I can't foul up the other
queries :)
Thank you very much for all your help on this issue, too!
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve <[EMAIL PROTECTED]> writes:
indexes. BUT I think it'll be easy to convince the customer to drop their
absurdly complicated sorts if I can come back with serious results like
what we've worked out here.
And thanks again -- have a good dinner! :)
Steve
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve <[EMAIL PROTECTED]> writes:
detail_summary_receipt_encounter_idx
On Thu, 12 Apr 2007, Tom Lane wrote:
Steve <[EMAIL PROTECTED]> writes:
Just dropping that index had no effect, but there's a LOT of indexes that
refer to receipt. So on a hunch I tried dropping all indexes that refer
to receipt date and that worked -- so it's
previous day's backup. However because the DB is read
only for -most- of the day and only read/write at night it's acceptable
risk for us anyway. But good to know that's a reasonable value.
Steve
g is done to the DB and if fsync is used or not if that
makes any sense :) Seems there's no "silver bullet" to the shared_memory
question. Or if there is, nobody can agree on it ;)
Anyway, talk to you later!
Steve
settings there are probably pretty good. It's a
Solaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and
here's some stats:
shared_buffers = 30
work_mem = 102400
maintenance_work_mem = 1024000
bgwriter_lru_maxpages=0
bgwriter_lru_percent=0
fsync = off
wal_buffers = 1
goal.
Since I am a DBA novice, I did not physically build this server, nor did
I write the application the hospital runs on, but I have the opportunity
to make it better, so I thought I should seek some advice from those who
have been down this road before. Suggestions/ideas anyone?
Thanks.
stamp of approval, then the vendor will tell us what they support.
Thanks.
Steve Poe
Tom Lane wrote:
Steve Poe <[EMAIL PROTECTED]> writes:
Situation: A 24/7 animal hospital (100 employees) runs their business
on Centos 3.3 (RHEL 3) Postgres 7.4.2 (because they have to)
[ itch... ]
for less than 7k.
I don't believe it is CPU bound. At our busiest hour, the CPU is idle
about 70% on average down to 30% idle at its heaviest. Context switching
averages about 4-5K per hour with momentary peaks to 25-30K for a
minute. Overall disk performance is poor (35 MB per sec).
Th
Steve, can we clarify that you are not currently having any performance
issues, you're just worried about failure? Recommendations should be based
on whether improving application speed is a requirement ...
Josh,
The priorities are: 1) improve safety/failure-prevention, 2) improve
performance.
The Chenbros are nice, but kinda pricey ($800) if Steve doesn't need the
machine to be rackable.
If your primary goal is redundancy, you may wish to consider the possibility
of building a brand-new machine for $7k (you can do a lot of machine for
$7000 if it doesn't have to be rackable).
understand the context of how the hardware is being used.
Steve Poe
s = 1
sort_mem = 8192
vacuum_mem = 65536
effective_cache_size = 65536
Steve Poe
Mohan, Ross wrote:
> VOIP over BitTorrent?
Now *that* I want to see. Ought to be at least as interesting
as the "TCP/IP over carrier pigeon" experiment - and more
challenging to boot!
--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
these people adequately to avoid overloading a trunk line...) is probably
sufficiently hard to make it interesting. Then there are the problems of
different accents, dialects, and languages ;)
--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
--
, and the company went out of
>>business. We were serving "cornish game hens" at our frequent dinner
>>parties for months.
>
>
> This method might have been safer (and it works great with Apaches):
> http://eagle.auc.ca/~dreid/
Aha - VOIPOBD as well as VOIPOBT!
well under Linux?
These two drive arrays' main purpose is for our database. For those who
have messed with drive arrays before, how would you slice up the drive array?
Will database performance be affected by how our RAID10 is configured? Any
suggestions?
Thanks.
Steve Poe
a "who's better" war, I am just trying to learn
here. It would seem that the more drives you could place in a RAID
configuration, the better the performance would be.
Steve Poe
underlying code to do this has
> to be not-too-complex.
It may not be that far off if you can use COPY instead of INSERT.
But comparing Bulkload to INSERT is a bit apples<->orangish.
--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
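Roughly the difference, with an invented table and file path; COPY streams many
rows through one command where INSERT pays per-statement overhead:

-- one row per statement: parse/plan/commit overhead for every row
INSERT INTO measurements (id, val) VALUES (1, 0.5);

-- one COPY, many rows streamed from a file (or from the client via STDIN)
COPY measurements (id, val) FROM '/tmp/measurements.csv' WITH CSV;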
an being chosen or
> generated.
That's the wrong index type for fast range queries. You really need
something like GiST or rtree for that. I do something similar in
production and queries are down at the millisecond level with the
right index.
Cheers,
Steve
> Here are table d
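For anyone looking for the concrete form of that advice: on a current PostgreSQL
the built-in range types plus a GiST index do it (names invented here; older
releases used rtree or a contrib GiST opclass instead):

CREATE TABLE reservations (
    id     serial PRIMARY KEY,
    during tsrange NOT NULL            -- built-in range type (9.2 and later)
);
CREATE INDEX reservations_during_gist ON reservations USING gist (during);

-- range/overlap queries like this can use the GiST index:
SELECT id FROM reservations
WHERE during && tsrange('2007-04-01', '2007-04-02');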
OSDB for more disk thrash testing.
I am new to this; maybe someone else may be able to speak from more
experience.
Regards.
Steve Poe
If SATA drives don't have the ability to replace SCSI for a multi-user
Postgres apps, but you needed to save on cost (ALWAYS an issue),
could/would you implement SATA for your logs (pg_xlog) and keep the rest
on SCSI?
Steve Poe
Mohan, Ross wrote:
I've been doing some reading up on th
ce of tuning lies with the hardware. The answer is
*both* hardware and application code. Finding the right balance is key. Your
mileage may vary.
Steve Poe
If, however, in the far-more-likely case that the application code
or system/business process is the throttle point, it'd be a great
us
OLTP-type transactions. I used OSDB
since it is simple to implement and use, although OSDL's OLTP testing
will be closer to reality.
Steve Poe
memory serves correctly, will occupy around 800-900M of disc space in
pg_xlog.
Steve Poe
Nurlan Mukhanov (AL/EKZ) wrote:
Hello.
I'm trying to restore my database from a dump in several parallel processes, but
the restore process is too slow.
Number of rows about 100 000 000,
RAM: 8192
ation, but some
clarity could help.
Steve Poe
overall performance above the 10-15%
baseline, and 3) find out what the mean and standard deviation are across
all your results.
If your results are within that range, this may be "normal". I'll follow up
with you later on what I do.
Steve Poe
Tom,
Just a quick thought: after each run/sample of pgbench, I drop the
database and recreate it. When I don't my results become more skewed.
Steve Poe
Thomas F.O'Connell wrote:
Interesting. I should've included standard deviation in my pgbench
iteration patch. Maybe I'll
comment?
Steve Poe
Thomas F.O'Connell wrote:
Considering the default vacuuming behavior, why would this be?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source: Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 372
Joshua,
This article was in July 2002, so is there an update to this information?
When will a new ODBC driver be available for testing? Is there a release
of the ODBC driver with better performance than 7.0.3.0200 for a 7.4.x
database?
Steve Poe
We have mentioned it on the list.
http
ODBC access to
PostgreSQL's COPY. (I have it installed here and *love* it - it
gives outstanding performance.) However, it hasn't made it into an
official release yet. I don't know why, perhaps there's
a problem yet to be solved with it ('works for me', though)?
I
If it is for business 24/7 operations, then the company should
be able to put some money behind what they want to put their business
on. Your mileage may vary.
Steve
Past recommendations for a good RAID card (for SCSI) have been the LSI
MegaRAID 2x. This unit comes with 128MB of RAM on-board. Has anyone
found that by increasing the on-board RAM, PostgreSQL performed better?
Thanks.
Steve Poe
yes?
>Update3 is currently throttling your I/O by about 50%.
>
>
Is that 50% just for the Dell PERC4 RAID on RH AS 3.0? Sounds like
severe context switching.
Steve Poe
Is there a better way? Combining
them all into a transaction or something?
Thanks,
Steve Bergman
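For what it's worth, batching them into a single transaction already helps a lot,
since you pay for one commit instead of one per row (and COPY, discussed elsewhere
in these threads, is faster still); a sketch with an invented table:

BEGIN;
INSERT INTO items (id, name) VALUES (1, 'first');
INSERT INTO items (id, name) VALUES (2, 'second');
-- ... thousands more ...
COMMIT;   -- one WAL flush for the whole batch instead of one per INSERT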
you actually need full-text search, but my
experience has been that it's too slow to be useful in production, and
it's not needed for the simple "leading wildcard" case.
Cheers,
Steve
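One common way the "leading wildcard" case gets handled without full-text search -
and quite possibly what the text_revop indexes mentioned earlier are about - is to
index the reversed string so the wildcard turns into a prefix match; a sketch with
invented names:

-- reverse() is built in from 9.1 on; older releases need a small user-defined function
CREATE INDEX items_name_rev_idx ON items (reverse(name) text_pattern_ops);

-- "name LIKE '%gmail.com'" becomes a prefix scan on the reversed column:
SELECT * FROM items
WHERE reverse(name) LIKE reverse('%gmail.com');   -- i.e. LIKE 'moc.liamg%'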
been your experience?
I don't foresee more than 10-15 concurrent sessions running for their OLTP
application.
Thanks.
Steve Poe
: 46052 kB
SwapTotal: 1534056 kB
SwapFree: 1526460 kB
This is a real doozy for me, please provide any advice possible.
Steve
As a follow-up to this, I've installed 7.3.4 on another test Red Hat 8
machine and slow inserts are present; however, on another machine with ES3
the same 15,000 inserts are about 20 times faster. Anyone know of a change
that would affect this, kernel or Red Hat release?
Steve
This has increased the speed on my Red Hat 8 servers by 20X!
Steve
-Original Message-
From: Steve Pollard
Sent: Thursday, 9 June 2005 1:27 PM
To: Steve Pollard; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Importing from pg_dump slow, low Disk IO
As a follow up to this I've install
umber LIMIT 15;
?
Cheers,
Steve
If I'm understanding correctly that'll mostly increase the estimated
cost of handling a row relative to a sequential page fetch, which
sure sounds like it'll push plans in the right direction, but it
doesn't sound like the right knob to twiddle.
What do you have random_page_cost set to?
Have you made changes to the postgresql.conf? kernel.vm settings? IO
scheduler?
If you're not doing so already, you may consider running sar (iostat) to
monitor when the hanging occurs if there is a memory / IO bottleneck
somewhere.
Good luck.
Steve Poe
On Tue, 2005-08-09 at 12:04 -0600
this list, RAID5 is not the best
choice for the database. RAID10 would be a better solution (using 8 of
your disks), then take the remaining disks and mirror them for your pg_xlog
if possible.
Best of luck,
Steve Poe
On Thu, 2005-08-11 at 13:23 +0100, Paul Johnson wrote:
> Hi all, we're run
spending the 50-100h to gain 2-4% over a course of a month for a
24x7 operation would seem worth the investment?
I would assume that dbt2 with STP helps minimize the amount of hours
someone has to invest to determine performance gains with configurable
options?
Steve Poe
> If someone spends 1
, I would look at checking your max_fsm_pages and
max_fsm_relations after a full vacuum analyze before doing too much with
sort mem.
Your mileage may vary.
Best of luck.
Steve Poe
>
T NULL,
> lastname varchar(25),
> firstname varchar(25),
> ...other columns...
> CONSTRAINT agent2_pkey PRIMARY KEY (agent_id),
> CONSTRAINT agent_office_fk FOREIGN KEY (office_id) REFERENCES office
> (office_id) ON UPDATE RESTRICT ON DELETE RESTRICT
> )
> WITHOUT OIDS
FROM some_data_table, session_table
WHERE some_data_table.server_id = session_table.server_id
AND session_table.session_id = 'foobar'
You'd need a reaper process to delete old data from that table to
prevent it from growing without limit, and probably a table associating
session start time with sessionid to make reaping easier.
Cheers,
Steve
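A minimal sketch of that reaper idea, with made-up column names and an arbitrary
one-day cutoff:

CREATE TABLE session_times (
    session_id text PRIMARY KEY,
    started    timestamptz NOT NULL DEFAULT now()
);

-- run periodically (cron etc.) so session_table can't grow without limit
DELETE FROM session_table
WHERE session_id IN (SELECT session_id FROM session_times
                     WHERE started < now() - interval '1 day');
DELETE FROM session_times WHERE started < now() - interval '1 day';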
h benchmarks showing the
potential return on that investment" would, at the very least, make
the threads far less grating to read.
Cheers,
Steve
pg_xlog
exists. Unfortunately, I don't have room on the RAID1 that the OS exists
on (CentOS Linux 4.1).
Anyone have any experience moving the XFS log to the pg_xlog? I'm
guessing the benefit / cost will cancel each other out.
Thanks.
Steve Poe
Would it not be faster to do a dump/reload of the table than reindex or
is it about the same?
Steve Poe
On Wed, 2005-10-12 at 13:21 -0400, Tom Lane wrote:
> Emil Briggs <[EMAIL PROTECTED]> writes:
> >> Not yet, the db is in production use and I have to plan for a down-time
&
me per operation - but that would lose you the
convenience of maintaining the counts correctly when you futz with
the data manually or using tools not aware of the count table.
Cheers,
Steve
el.configdb(#WHERE at.id=a.id AND a.name='obsid' AND
a.value='oid080505');
NOTICE: QUERY PLAN:
Hash Join (cost=42674.42..100600.52 rows=296330 width=100)
-> Seq Scan on tmp_table2 at (cost=0.00..34975.88 rows=1539688 width=50)
-> Hash (cost=42674.32..42674.
Scott Lamb wrote:
> On Nov 14, 2005, at 2:07 PM, Steve Wampler wrote:
>
>> # SELECT at.id FROM "tmp_table2" at, "tmp_tabl2e" a
>> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';
>
>
> Isn't this e
Tom Lane wrote:
> Steve Wampler <[EMAIL PROTECTED]> writes:
>
>>We've got an older system in production (PG 7.2.4). Recently
>>one of the users has wanted to implement a selective delete,
>>but is finding that the time it appears to take exceeds her
>
Joshua Marsh wrote:
>
>
> On 11/14/05, *Steve Wampler* <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
>
> However, even that seems to have a much higher cost than I'd expect:
>
>lab.devel.configdb=# explain delete from "tmp_tab
Scott Lamb wrote:
> On Nov 14, 2005, at 3:52 PM, Steve Wampler wrote:
>
>> Scott Lamb wrote:
>>
>>> On Nov 14, 2005, at 2:07 PM, Steve Wampler wrote:
>>>
>>>> # SELECT at.id FROM "tmp_table2" at, "tmp_tabl2e" a
'obsid' and t2.value='oid080505');
NOTICE: QUERY PLAN:
Seq Scan on tmp_table2 (cost=0.00..9297614.80 rows=769844 width=6)
SubPlan
-> Index Scan using inv_index_2 on tmp_table2 t2 (cost=0.00..6.02
rows=1 width=0)
EXPLAIN
(This is after putting an index on the (id,name,value) tuple.)
> Which looks suspicious: 26308 MB/sec???
Eh? That looks more like ~25.7 MB/sec, assuming 1MB = 1024*1024 bytes.
--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
Joshua D. Drake wrote:
> The reason you want the dual core cpus is that PostgreSQL can only
> execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to limit the number of queries?
--
Steve Wampler --
David Boreham wrote:
> Steve Wampler wrote:
>
>> Joshua D. Drake wrote:
>>
>>
>>> The reason you want the dual core cpus is that PostgreSQL can only
>>> execute 1 query per cpu at a time,...
>>>
>>
>>
>> Is that true? I
Arjen van der Meijden wrote:
> On 15-11-2005 15:18, Steve Wampler wrote:
>
>> Magnus Hagander wrote:
>> (This is after putting an index on the (id,name,value) tuple.) That
>> outer seq scan
>> is still annoying, but maybe this will be fast enough.
>>
&g
I *wholeheartedly* agree with Chris. An easy way to get the pg_dump
for the upgrade target to run with the upgradable source
would work wonders. (Instructions included, of course.)
--
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.
---
Database restore speed
Our application tries to insert data into the database as fast as it can.
Currently the work is being split into a number of 1MB copy operations.
When we restore, the postmaster process tries to use 100% of the CPU.
The questions we have are:
1) What is pos
discs, seq. output was about equal regardless of the file system being used
(EXT3, JFS, or XFS).
Steve
On Thu, 2005-12-08 at 12:12 +, Rory Campbell-Lange wrote:
> We are testing disk I/O on our new server (referred to in my recent
> questions about LVM and XFS on this list) and have run bonnie++
Is there a reason you can't rewrite your SELECT like:
SELECT UUID FROM MDM.KEYWORDS_INFO WHERE KEYWORDS_ID IN (a, b, c, d)
Even doing them 100 at a time will make a big difference; you should
put as many in the list as pgsql supports. I'm assuming that there's
an index over KEYWORDS_ID.
Re
I have questions about how to improve the write performance of PostgreSQL for
logging data from a real-time simulation. We found that MySQL 4.1.3 could log
about 1480 objects/second using MyISAM tables or about 1225 objects/second
using InnoDB tables, but PostgreSQL 8.0.3 could log only about 5
Tom Lane wrote:
Steve Eckmann <[EMAIL PROTECTED]> writes:
We also found that we could improve MySQL performance significantly
using MySQL's "INSERT" command extension allowing multiple value-list
tuples in a single command; the rate for MyISAM tables improved
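(For the record, PostgreSQL gained the same multi-row form in 8.2 - the 8.0.x
servers discussed here don't have it. A sketch with an invented table:)

INSERT INTO sim_log (object_id, logged_at, state)
VALUES (1, now(), 'state A'),
       (2, now(), 'state B'),
       (3, now(), 'state C');   -- one statement, one round trip, many rows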
Steinar H. Gunderson wrote:
On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:
Are there general guidelines for tuning the PostgreSQL server for this kind
of application? The suggestions I've found include disabling fsync (done),
Are you sure you r
dlang wrote:
On Tue, 3 Jan 2006, Tom Lane wrote:
Steve Eckmann <[EMAIL PROTECTED]> writes:
We also found that we could improve MySQL performance significantly
using MySQL's "INSERT" command extension allowing multiple value-list
tuples in
tions.
Regards, Steve
Tom Lane wrote:
Steve Eckmann <[EMAIL PROTECTED]> writes:
Thanks for the suggestion, Tom. Yes, I
think I could do that. But I
thought what I was doing now was effectively the same, because the
PostgreSQL 8.0.0 Documentation says (section 27.3.1): "It is allowe
Kelly Burkhart wrote:
On 1/4/06, Steve Eckmann <[EMAIL PROTECTED]> wrote:
Thanks, Steinar. I don't think we would really run with fsync off, but
I need to document the performance tradeoffs. You're right that my
explanation was confusing; probably because I'm confused.
... the ideal might be a RAID1 controller with BBU for the WAL and
something else, such as software RAID, for the main data array?
Cheers,
Steve
partitioned tables, and inheritance means you don't have to
maintain the union view yourself.
Cheers,
Steve
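For anyone who hasn't seen it, the inheritance form looks roughly like this
(names invented; constraint_exclusion lets the planner skip children whose
CHECK constraint can't match the query):

CREATE TABLE measurements (ts timestamptz NOT NULL, val numeric);

CREATE TABLE measurements_2007_04 (
    CHECK (ts >= '2007-04-01' AND ts < '2007-05-01')
) INHERITS (measurements);

SET constraint_exclusion = on;
-- queries against the parent automatically include (or prune) the children
SELECT count(*) FROM measurements WHERE ts >= '2007-04-15';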
approaches
to displaying the data. There's a lot of monkey-see, monkey-do in web
UI design too, which doesn't help.
Cheers,
Steve
On Mar 22, 2007, at 11:26 AM, Guido Neitzer wrote:
On 22.03.2007, at 11:53, Steve Atkins wrote:
As long as you're ordering by some row in the table then you can
do that in
straight SQL.
select a, b, ts from foo where (stuff) and foo > X order by foo
limit 10
Then, record
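That is, record the last value returned and use it as the starting point for the
next page; roughly (names invented, with "b = 42" standing in for the "(stuff)"
filter, and assuming ts orders the rows uniquely):

-- first page
SELECT a, b, ts FROM foo WHERE b = 42 ORDER BY ts LIMIT 10;

-- next page: resume just after the last ts seen on the previous page
SELECT a, b, ts FROM foo
WHERE b = 42 AND ts > '2007-03-22 11:26:00'
ORDER BY ts
LIMIT 10;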