On Tue, Aug 01, 2006 at 11:15:11PM -0400, Tom Lane wrote:
> Eugeny N Dzhurinsky <[EMAIL PROTECTED]> writes:
> > [slow query]
> The bulk of your time seems to be going into this indexscan:
> > -> Index Scan using task_scheduler_icustomer_id on task_scheduler ts
Initial testing was with data that essentially looks like a single collection with many items. I then changed this to have 60 collections of 50 items. The result: much better (but not optimal) use of indexes, although a seq scan was still used. Turning seq scan off, all indexes were used and the query was much faster.
On Wed, 2006-08-02 at 07:17, H Hale wrote:
> Initial testing was with data that essentially looks like a single
> collection with many items.
> I then changed this to have 60 collections of 50 items.
> The result: much better (but not optimal) use of indexes, but a seq scan
> was still used.
>
> Turning seq scan off, all indexes were used and the query was much faster.
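For anyone trying to reproduce this kind of comparison, the usual approach is to look at the plan with and without sequential scans allowed for the session. A minimal sketch with hypothetical table and column names (collection_items / collection_id -- the actual schema isn't shown in the excerpt):

    -- Baseline: see whether the planner chooses a seq scan.
    EXPLAIN ANALYZE
    SELECT * FROM collection_items WHERE collection_id = 42;

    -- Disable seq scans for this session only, then compare plans.
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT * FROM collection_items WHERE collection_id = 42;

    -- Put the planner back to normal; enable_seqscan = off is a
    -- diagnostic tool, not a production setting.
    RESET enable_seqscan;

If the forced index plan really is faster, the usual next step is to look at random_page_cost and effective_cache_size rather than leaving enable_seqscan off.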
Hi Luke, Mark, Alvaro and Andrew,
Thank you very much for sharing your experience with me.
I want to compare the DWH performance of PG/Bizgres on different filesystems
and different block sizes.
The hardware will be free for me in a week or two (at the moment another
project is running on it) and
Milen,
For the past year, I have been running odbc-bench on a
dual-opteron with 4GB of RAM using an 8GB sample data set. I found the
performance difference between EXT3, JFS, and XFS is +/- 5-8%.
This could be written off as "noise", just normal server performance
flux. If you plan on using the de
Hi Steve,
I hope that the performance difference between EXT3 and XFS is more than
just 5-8%. Such a small difference could be interpreted as "noise", as you
already mentioned.
I want to give many filesystems a try. Stability is also a concern, but I
don't want to favour any FS over another
[EMAIL PROTECTED] ("Milen Kulev") writes:
> I am pretty excited to see whether XFS will clearly outperform EXT3 (no
> default setups for both are planned!). I am not sure whether it is
> worth including JFS in the comparison too ...
I did some benchmarking about 2 years ago, and found that JFS was a
few p
On Wed, Aug 02, 2006 at 02:26:39PM -0700, Steve Poe wrote:
> For the past year, I have been running odbc-bench on a dual-opteron with
> 4GB of RAM using an 8GB sample data set. I found the performance difference
> between EXT3, JFS, and XFS is +/- 5-8%.
That's not surprising when your db is only 2x your RAM.
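Michael's point is about caching: if most of an 8GB data set ends up in 4GB of RAM plus the OS page cache, filesystem differences will largely wash out. A quick sketch for checking the on-disk footprint, assuming 8.1 or later where these size functions exist:

    -- Total on-disk size of the current database.
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- Largest tables, to see what actually competes for cache.
    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_relation_size(oid) DESC
    LIMIT 10;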
Again - the performance difference increases as the disk speed increases.
Our experience is that we went from 300MB/s to 475MB/s when moving from ext3
to xfs.
- Luke
On 8/2/06 4:33 PM, "Michael Stone" <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 02, 2006 at 02:26:39PM -0700, Steve Poe wrote:
>> F
On 7/18/06, Alex Turner <[EMAIL PROTECTED]> wrote:
Remember when it comes to OLTP, massive serial throughput is not gonna help
you; what matters is low seek times, which is why people still buy 15k RPM
drives, and why you don't necessarily need a honking SAS/SATA controller
which can harness the full 1066MB
Merlin,
> moving a gigabyte around/sec on the server, attached or no,
> is pretty heavy lifting on x86 hardware.
Maybe so, but we're doing 2GB/s plus on Sun/Thumper with software RAID
and 36 disks and 1GB/s on a HW RAID with 16 disks, all SATA.
WRT seek performance, we're doing 2500 seeks per second
My theory, based entirely on what I have read in this thread, is that a low-end server (really a small workstation) with an Intel Dual Core CPU is likely an excellent PG choice for the lowest end. I'll try to snag an Intel Dual Core workstation in the near future and report back DBT2 scores comparing
If your server is heavily I/O bound AND you care about your data AND you are throwing out your WAL files in the middle of the day... you are headed for a cliff. I'm sure this doesn't apply to anyone on this thread, just a general reminder to all you DBAs out there who sometimes are too busy to
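For reference, the safer alternative to simply discarding WAL is to let the archiver copy each completed segment somewhere durable before it is recycled. A minimal postgresql.conf sketch (8.1-era PITR settings; /mnt/wal_archive is just an example path):

    # Archive each completed WAL segment before PostgreSQL reuses it.
    # %p expands to the segment's path, %f to its file name; the command
    # must return zero only if the copy really succeeded.
    archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'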
I was kinda thinking that making the block size configurable at initdb time would be a nice & simple enhancement for PG 8.3. My own personal rule of thumb for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH.
I have no personal experience with XFS, but I've seen numerous internal edb-postg
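Until block size is settable at initdb time, it remains fixed when the server is compiled (BLCKSZ), so on an existing cluster the most you can do is verify what it was built with. A small sketch, assuming a server recent enough to expose block_size as a read-only setting:

    -- Compiled-in page size (8192 bytes by default).
    SHOW block_size;

    -- Same information from the run-time settings catalog.
    SELECT name, setting FROM pg_settings WHERE name = 'block_size';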