What Scott said ... seconded, all of it.
I'm running one 500GB database on a 64-bit, 8GB VMware virtual machine with
2 vcores, running PG 8.3.9 with shared_buffers set to 2GB, and it works great.
However, it's a modest workload: most of the database is archival for data
mining, and the "working set" for day-to-day queries fits comfortably in RAM.
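For reference, a sketch of what that setup might look like in postgresql.conf;
only the shared_buffers value is stated in the post above, the rest are
illustrative assumptions for an 8GB machine, not tested recommendations:

# shared_buffers is from the post; the other values are assumptions
shared_buffers = 2GB          # ~25% of RAM, a commonly cited starting point
effective_cache_size = 6GB    # rough guess at what the OS cache holds on 8GB
work_mem = 32MB               # per-sort/per-hash memory for backends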
On Wed, Mar 24, 2010 at 8:59 PM, Eger, Patrick wrote:
> Ok, the wording in the documentation is a bit unclear as to whether this is
> the cost for an entire *page* of tuples or for individual tuples. So something
> like the following might give better results for a fully-cached DB?
>
> seq_page_cost = 1.0
> random_page_cost = 1.1  # even memory has random access costs, lower than disk
On Wed, Mar 24, 2010 at 6:49 PM, Campbell, Lance wrote:
> PostgreSQL 8.4.3
>
> Linux Redhat 5.0
>
> Question: How much memory do I really need?
The answer is "as much as needed to hold your entire database in
memory and a few gig left over for sorts and backends to play in."
> From my understanding there are two primary strategies for setting up
> PostgreSQL in relation to memory:
Ok, the wording in the documentation is a bit unclear as to whether this is
the cost for an entire *page* of tuples or for individual tuples. So something
like the following might give better results for a fully-cached DB?
seq_page_cost = 1.0
random_page_cost = 1.1  # even memory has random access costs, lower than disk
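Filling that proposal out, a minimal sketch of a fully-cached cost setup; the
first two lines echo the suggestion above, the rest are assumptions to
benchmark rather than settled advice:

# Planner costs for a database fully resident in cache
seq_page_cost = 1.0           # per-*page* fetch cost, the baseline
random_page_cost = 1.1        # nearly as cheap as sequential when cached
effective_cache_size = 6GB    # assumption: planner hint, sized to the machine
# cpu_tuple_cost (default 0.01) is the separate per-*tuple* charge, which
# answers the page-vs-tuple question: the *_page_cost settings are per page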
PostgreSQL 8.4.3
Linux Redhat 5.0
Question: How much memory do I really need?
From my understanding there are two primary strategies for setting up
PostgreSQL in relation to memory:
1) Rely on Linux to cache the files. In this approach you set shared_buffers
to a relatively low value and let the OS file cache hold the data.
2) Give the memory to PostgreSQL instead, setting shared_buffers high enough
to hold most or all of the database.
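One way to ground the choice between the two strategies is to measure what
"the entire database" actually amounts to; these size functions are standard
in 8.4:

-- Total size the "hold it all in memory" strategy would need
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Largest tables first (pg_total_relation_size includes indexes and TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;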
On Wed, Mar 17, 2010 at 9:01 PM, Eger, Patrick wrote:
> I'm running 8.4.2 and have noticed a similar heavy preference for
> sequential scans and hash joins over index scans and nested loops. Our
> database can basically fit in cache 100%, so this may not be
> applicable to your situation, but tuning the planner cost settings
> helped in our case.
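One quick check of whether that preference actually hurts is to force the
alternative plan for a single session and compare timings; enable_seqscan is
a real planner setting, while the table and query below are hypothetical:

-- The planner's preferred plan
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

SET enable_seqscan = off;     -- session-local: discourage sequential scans
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
RESET enable_seqscan;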
On 03/22/2010 03:21 PM, Tom Lane wrote:
The fundamental reason why you're getting a bad plan choice is the
factor-of-100 estimation error here. I'm not sure you can do a whole
lot about that without rethinking the query --- in particular I would
suggest trying to get rid of the non-constant range condition.
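A factor-of-100 miss shows up in EXPLAIN ANALYZE as a gap between the
planner's rows= estimate and the actual row count; raising the statistics
target on the column behind the misestimate sometimes helps, though it is no
guarantee. The table and column here are made up for illustration:

-- Compare e.g. "rows=120" (estimate) with "actual ... rows=12000"
EXPLAIN ANALYZE
SELECT * FROM events WHERE created_at > now() - interval '1 day';

-- Collect finer-grained statistics on the suspect column, then re-test
ALTER TABLE events ALTER COLUMN created_at SET STATISTICS 1000;
ANALYZE events;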
t...@sss.pgh.pa.us (Tom Lane) writes:
> "Ross J. Reedstrom" writes:
>> On Sat, Mar 20, 2010 at 10:47:30PM -0500, Andy Colson wrote:
>>> (I added the "and trust" as an afterthought, because I do have one very
>>> important, 100%-uptime-required mysql database running. It's my
>>> MythTV box.)
reeds...@rice.edu ("Ross J. Reedstrom") writes:
> http://www.mythtv.org/wiki/PostgreSQL_Support
That's a pretty hostile presentation...
The page has had two states:
a) In 2008, someone wrote up...
After some bad experiences with MySQL (data loss caused by a commercial power
failure, very bad performance, and so on)...
Yeb Havinga wrote:
Greg Smith wrote:
MySQL corruption is one of the top ten causes of a MythTV system
crashing.
It would be the same with PG, unless the PG cluster configuration shipped
with MythTV came with a properly configured WAL - I had corrupted tables
(and a personal wiki entry about fixing them)...
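For what it's worth, "a properly configured WAL" largely comes down to a few
durability settings; the parameter names are real, and the values shown are
simply the safe defaults rather than a tuned recommendation:

# Durability settings that decide whether a power failure corrupts data
fsync = on                 # never disable for data you care about
full_page_writes = on      # protects against torn pages after a crash
synchronous_commit = on    # 'off' risks losing recent commits, not corruption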
Greg Smith wrote:
Tom Lane wrote:
So has anyone looked at porting MythTV to PG?
Periodically someone hacks together something that works, last big
effort I'm aware of was in 2006, and then it bit rots away. I'm sure
we'd get some user uptake on the result--MySQL corruption is one of
the top ten causes of a MythTV system crashing.