Eugeny N Dzhurinsky <[EMAIL PROTECTED]> writes:
> [slow query]
The bulk of your time seems to be going into this indexscan:
> -> Index Scan using task_scheduler_icustomer_id on task_scheduler ts
>    (cost=2.03..11.51 rows=1 width=51) (actual time=2.785..2.785
Milen,
On 8/1/06 3:19 PM, "Milen Kulev" <[EMAIL PROTECTED]> wrote:
> Sorry, forgot to ask:
> What is the recommended/best PG block size for a DWH database? 16k, 32k, 64k?
> What should be the relation between XFS/RAID stripe size and PG block size?
We have found that the page size in PG st
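PostgreSQL's page size is fixed at compile time (BLCKSZ, default 8 kB), and a running server will report the value it was built with. A minimal check, assuming a server reachable with psql:

```sql
-- Report the compile-time page size of the connected server.
-- A stock build reports 8192; a 32k build reports 32768.
SHOW block_size;
```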
Milen,
On 8/1/06 2:49 PM, "Milen Kulev" <[EMAIL PROTECTED]> wrote:
> Is anyone using XFS for storing/retrieving a relatively large amount of data (~200 GB)?
I concur with the previous poster's experiences with one additional
observation:
We have had instabilities with XFS with software RAID (m
Milen Kulev wrote:
Is anyone using XFS for storing/retrieving a relatively large amount of data (~200 GB)?
Yes, but not for that large - only about 40-50 GB of database data.
If yes, what about the performance and stability of XFS.
I'm pretty happy with the performance, particularly read
On 8/1/06, George Pavlov <[EMAIL PROTECTED]> wrote:
I am looking for some general guidelines on what is the performance
overhead of enabling point-in-time recovery (archive_command config) on
an 8.1 database. Obviously it will depend on a multitude of factors, but
some broad-brush statements and/
J. Andrew Rogers wrote:
>
> On Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:
> >Is anyone using XFS for storing/retrieving a relatively large amount
> >of data (~200 GB)?
>
>
> Yes, we've been using it on Linux since v2.4 (currently v2.6) and it
> has been rock solid on our database servers (Op
On Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:
Is anyone using XFS for storing/retrieving a relatively large amount
of data (~200 GB)?
Yes, we've been using it on Linux since v2.4 (currently v2.6) and it
has been rock solid on our database servers (Opterons, running in
both 32-bit and 64-
Not sure if this helps solve the problem, but... (see below.) As new records are added, indexes are used for a while, and then at some point Postgres switches to a seq scan. It is repeatable. Any suggestions/comments to try and solve this are welcome. Thanks. Data is as follows: capsa.flatommemberre
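A standard way to investigate this kind of plan flip is to compare the planner's choice against a forced index scan and to refresh its statistics. A sketch with hypothetical table and column names (the real names are truncated above):

```sql
-- Hypothetical names; the technique is what matters.
EXPLAIN ANALYZE SELECT * FROM members WHERE member_id = 42;

SET enable_seqscan = off;   -- heavily penalize seq scans for this session only
EXPLAIN ANALYZE SELECT * FROM members WHERE member_id = 42;

ANALYZE members;            -- refresh planner statistics; stale stats after a
                            -- burst of inserts are a common cause of a sudden
                            -- switch to seq scans
```

Comparing the estimated costs and actual times of the two plans shows whether the planner's cost model or its row estimates are off.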
Hi Andrew,
Thank you for your prompt reply.
Are you using any special XFS options?
I mean special values for log buffers, buffered I/O size, extent size
preallocation, etc.?
I will have only 6 big tables and about 20 other relatively small (fact
aggregation) tables (~ 10-20 GB each).
I believe it
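For reference, XFS tuning of this sort is usually applied as mount options rather than per-table settings; a hypothetical /etc/fstab line (device, mount point, and values are examples only, not recommendations):

```
# logbufs / logbsize are real XFS mount options; values here are placeholders
/dev/md0  /pgdata  xfs  noatime,logbufs=8,logbsize=256k  0 0
```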
Sorry, forgot to ask:
What is the recommended/best PG block size for a DWH database? 16k, 32k, 64k?
What should be the relation between XFS/RAID stripe size and PG block size?
Best Regards.
Milen Kulev
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
I intend to test Postgres/Bizgres for DWH use. I want to use the XFS filesystem
to get the best possible performance at the FS level (correct me if I am
wrong!).
Is anyone using XFS for storing/retrieving a relatively large amount of data (~200 GB)?
If yes, what about the performance and stability of
On 1 Aug 2006, at 20.09, tlm wrote:
SELECT q3.translation, q2.otherstuff
FROM
(
SELECT INPUT.word, q1.meaning_id, INPUT.otherstuff
FROM
INPUT
INNER JOIN
(
SELECT translation, meaning_id
FROM TRANS
WHERE translation IN (SELECT word FROM INPUT)
) AS q1
ON INPUT.word = q1.
I need some expert advice on how to optimize a "translation" query (this word choice will become clear shortly, I hope).
Say I have a HUMONGOUS table of foreign language "translations" (call it TRANS) with records like these:
meaning_id: 1
language_id: 5
translation: jidoosha
meaning_id: 1
la
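The lookup being described is a self-join on meaning_id; a minimal sketch of the schema and query, with an assumed target language_id:

```sql
-- Hypothetical DDL matching the records described above.
CREATE TABLE trans (
    meaning_id  integer,
    language_id integer,
    translation text
);
CREATE INDEX trans_translation_idx ON trans (translation);

-- All translations, in language 2, of the word 'jidoosha':
SELECT t2.translation
FROM trans t1
JOIN trans t2 ON t2.meaning_id = t1.meaning_id
WHERE t1.translation = 'jidoosha'
  AND t2.language_id = 2;   -- assumed target-language id
```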
On 1-8-2006 19:26, Jim C. Nasby wrote:
On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:
I'd love to get an English translation that we could use for PR.
Actually, we have an English version of the Socket F follow-up.
http://tweakers.net/reviews/638 which basically displays the
On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:
> Jochem van Dieten wrote:
> >Tweakers.net has done a database performance test between a Sun T2000 (8
> >core T1) and a Sun X4200 (2 dual core Opteron 280). The database
> >benchmark is developed inhouse and represents the average qu
In response to "George Pavlov" <[EMAIL PROTECTED]>:
> I am looking for some general guidelines on what is the performance
> overhead of enabling point-in-time recovery (archive_command config) on
> an 8.1 database. Obviously it will depend on a multitude of factors, but
> some broad-brush statemen
I am looking for some general guidelines on what is the performance
overhead of enabling point-in-time recovery (archive_command config) on
an 8.1 database. Obviously it will depend on a multitude of factors, but
some broad-brush statements and/or anecdotal evidence will suffice.
Should one worry a
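For what it's worth, on 8.1 archiving is switched on simply by setting archive_command; a minimal postgresql.conf fragment with a hypothetical archive path:

```
# Copy each completed 16 MB WAL segment to an archive location.
# %p = path of the segment file, %f = its file name.
archive_command = 'cp %p /mnt/server/archivedir/%f'
```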
Actually, what we did in the tests at EnterpriseDB was encapsulate each
SQL statement within its own BEGIN/EXCEPTION/END block.
Using this approach, if a SQL statement aborts, the rollback is confined
to the BEGIN/END block that encloses it. Other SQL statements would
not be affected since the b
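In PL/pgSQL, each BEGIN ... EXCEPTION ... END block runs in its own subtransaction, which is what confines the rollback. A minimal sketch of the pattern (table names hypothetical):

```sql
CREATE OR REPLACE FUNCTION load_rows() RETURNS void AS $$
BEGIN
    BEGIN
        INSERT INTO t1 VALUES (1);
    EXCEPTION WHEN OTHERS THEN
        -- only this statement's subtransaction rolls back
        RAISE NOTICE 'insert into t1 failed: %', SQLERRM;
    END;

    BEGIN
        INSERT INTO t2 VALUES (2);  -- still executes even if the block above failed
    EXCEPTION WHEN OTHERS THEN
        RAISE NOTICE 'insert into t2 failed: %', SQLERRM;
    END;
END;
$$ LANGUAGE plpgsql;
```

The trade-off is that each exception block costs a subtransaction, which adds overhead on the happy path.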
Hello, I have a query:
explain analyze select tu.url_id, tu.url, coalesce(sd.recurse, 100), case when
COALESCE(get_option('use_banner')::integer,0) = 0 then 0 else ts.use_banner
end as use_banner, ts.use_cookies, ts.use_robots, ts.includes, ts.excludes,
ts.track_domain, ts.task_id,get_available_p
"Guoping Zhang" <[EMAIL PROTECTED]> writes:
> In fact, it is a general question: is it good practice to avoid running
> the application server and database server on platforms of opposite
> endianness, or does it simply not matter?
Our network protocol uses big-endian consistently, so ther
* Arjen van der Meijden:
> For a database system, however, processors hardly ever are the main
> bottleneck, are they?
Not directly, but the choice of processor influences which
chipsets/mainboards are available, which in turn has some impact on
the number of RAM slots. (According to our hardwar