Re: [PERFORM] Contemplating SSD Hardware RAID

2011-06-20 Thread Greg Smith
On 06/20/2011 11:54 PM, Dan Harris wrote: I understand that the majority of consumer grade SSD drives lack the required capacitor to complete a write on a sudden power loss. But, what about pairing up with a hardware controller with BBU write cache? Can the write cache be disabled at the driv

Re: [PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Tom Lane
Sushant Sinha writes: >> I guess you could poke the planner towards the bitmap scan by lowering >> the random_page_cost (the default value is 4, I'd say lowering it to 2 >> should do the trick). > The numbers that I gave were after setting random_page_cost = 1.0. After > this I don't know what to d
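The knob under discussion can be tried as a session-level experiment along these lines (a sketch only — the table and column names are taken from the thread, not the poster's exact commands):

```sql
-- Lower the planner's estimate of random I/O cost for this session only,
-- then recheck which plan is chosen. 2.0 is an experimental value, not a
-- recommendation; the default is 4.0.
SET random_page_cost = 2.0;
EXPLAIN ANALYZE
SELECT * FROM docmeta1
WHERE docvector @@ plainto_tsquery('english', 'free');
RESET random_page_cost;
```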

[PERFORM] Contemplating SSD Hardware RAID

2011-06-20 Thread Dan Harris
I'm looking for advice from the I/O gurus who have been in the SSD game for a while now. I understand that the majority of consumer grade SSD drives lack the required capacitor to complete a write on a sudden power loss. But, what about pairing up with a hardware controller with BBU write cac

[PERFORM] Cross Table (Pivot)

2011-06-20 Thread Mario Guerrero
Good evening, colleagues: Is there an SQL statement "tightly coupled" to the internals of the PostgreSQL database management system that would let me "transpose" tables, something similar to the crosstab contrib module in tablefunc? I am trying a statement like: SELECT id, name, max(case()) as
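The two standard PostgreSQL pivot approaches the post is circling look roughly like this (the `sales` table and its columns are hypothetical, invented for illustration):

```sql
-- Pivot pattern with max(CASE ...), the approach the post is attempting:
SELECT id, name,
       max(CASE WHEN month = 'jan' THEN amount END) AS jan,
       max(CASE WHEN month = 'feb' THEN amount END) AS feb,
       max(CASE WHEN month = 'mar' THEN amount END) AS mar
FROM sales
GROUP BY id, name;

-- The contrib/tablefunc alternative mentioned in the post
-- (CREATE EXTENSION works on 9.1+; older releases install the
-- contrib SQL script instead):
CREATE EXTENSION IF NOT EXISTS tablefunc;
SELECT * FROM crosstab(
  $$SELECT id, month, amount FROM sales ORDER BY 1, 2$$,
  $$VALUES ('jan'), ('feb'), ('mar')$$
) AS ct(id integer, jan numeric, feb numeric, mar numeric);
```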

Re: [PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Sushant Sinha
> > I agree the estimates are damn precise in this case (actually the > estimates are exact). The problem is the planner thinks the seq scan is > about 30% cheaper than the bitmap index scan. > > I guess you could poke the planner towards the bitmap scan by lowering > the random_page_cost (the d

Re: [PERFORM] bad plan: 8.4.8, hashagg, work_mem=1MB.

2011-06-20 Thread Jon Nelson
On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane wrote: > Jon Nelson writes: >> I ran a query recently where the result was very large. The outer-most >> part of the query looked like this: > >>  HashAggregate  (cost=56886512.96..56886514.96 rows=200 width=30) >>    ->  Result  (cost=0.00..50842760.97

Re: [PERFORM] how to know slowly query in lock postgre

2011-06-20 Thread Tomas Vondra
On 20.6.2011 09:57, Didik Prasetyo wrote: > hai friend i have problem with performance database in postgre, how to > know slowly query in postgre, > i want kill or stop query to make postgre slowly, on the server status > on the admin pg, sometimes the query and how long the query runs do not

Re: [PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Tomas Vondra
On 20.6.2011 18:04, Sushant Sinha wrote: > > On Mon, 2011-06-20 at 10:58 -0500, Kevin Grittner wrote: >> Sushant Sinha wrote: >> >>> I have a tsvector column docvector and a gin index on it >>> docmeta1_docvector_idx >>> >>> I have a simple query "select * from docmeta1 where docvector @@

Re: [PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Jesper Krogh
On 2011-06-20 17:38, Sushant Sinha wrote: I have a tsvector column docvector and a gin index on it docmeta1_docvector_idx I have a simple query "select * from docmeta1 where docvector @@ plainto_tsquery('english', 'free');" I find that the planner chooses a sequential scan of the table even whe

Re: [PERFORM] bad plan: 8.4.8, hashagg, work_mem=1MB.

2011-06-20 Thread Tom Lane
Jon Nelson writes: > I ran a query recently where the result was very large. The outer-most > part of the query looked like this: > HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30) >-> Result (cost=0.00..50842760.97 rows=2417500797 width=30) > The row count for 'Result' is

Re: [PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Sushant Sinha
On Mon, 2011-06-20 at 10:58 -0500, Kevin Grittner wrote: > Sushant Sinha wrote: > > > I have a tsvector column docvector and a gin index on it > > docmeta1_docvector_idx > > > > I have a simple query "select * from docmeta1 where docvector @@ > > plainto_tsquery('english', 'free');" > > > >

[PERFORM] bad plan: 8.4.8, hashagg, work_mem=1MB.

2011-06-20 Thread Jon Nelson
PostgreSQL 8.4.8 on CentOS 5.6, x86_64. Default settings except work_mem = 1MB. NOTE: I am using partitioned tables here, and was querying the 'master' table. Perhaps this is a known issue. I ran a query recently where the result was very large. The outer-most part of the query looked like this:
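The work_mem interaction reported here can be probed with session-level settings. A hedged sketch (the query and table names are illustrative, not taken from the post):

```sql
-- The rows=200 HashAggregate estimate is the planner's default when it
-- has no distinct-value statistics for the grouping expression. On 8.4 a
-- HashAggregate cannot spill to disk, so with billions of input rows the
-- hash table can far exceed work_mem. Two session-level experiments:
SET work_mem = '256MB';          -- give the hash table more room, or...
SET enable_hashagg = off;        -- ...force a sort-based GroupAggregate
EXPLAIN
SELECT some_col, count(*) FROM master_table GROUP BY some_col;
RESET enable_hashagg;
RESET work_mem;
```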

Re: [PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Kevin Grittner
Sushant Sinha wrote: > I have a tsvector column docvector and a gin index on it > docmeta1_docvector_idx > > I have a simple query "select * from docmeta1 where docvector @@ > plainto_tsquery('english', 'free');" > > I find that the planner chooses a sequential scan of the table > even when t
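To see how the planner prices the two competing paths, the sequential scan can be temporarily discouraged for the session and the cost estimates compared (names are from the thread; do not leave this set in production):

```sql
-- Plan the planner prefers on its own:
EXPLAIN ANALYZE
SELECT * FROM docmeta1
WHERE docvector @@ plainto_tsquery('english', 'free');

-- Heavily penalize seq scans so the bitmap index scan path is shown,
-- then compare the two cost estimates and actual runtimes:
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM docmeta1
WHERE docvector @@ plainto_tsquery('english', 'free');
RESET enable_seqscan;
```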

[PERFORM] sequential scan unduly favored over text search gin index

2011-06-20 Thread Sushant Sinha
I have a tsvector column docvector and a gin index on it docmeta1_docvector_idx I have a simple query "select * from docmeta1 where docvector @@ plainto_tsquery('english', 'free');" I find that the planner chooses a sequential scan of the table even when the index performs orders of magnitude better. I

Re: [PERFORM] Inoptimal query plan for max() and multicolumn index

2011-06-20 Thread Kevin Grittner
Vladimir Kulev wrote: > # explain analyze select max(timestamp) from sms where number in > ('5502712','5802693','5801981'); > According to planner cost estimations - it has enough data to > understand that it is better to aggregate maximum from three > subqueries. I suppose it's not a bug but
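The rewrite the poster alludes to — taking the maximum per number via three index-assisted subqueries, then aggregating those three results — can be sketched as follows (assuming an index on (number, "timestamp"), which the planner of that era would not derive on its own):

```sql
-- Each inner max() can be satisfied by a backwards scan on the
-- (number, "timestamp") index, so the three probes are cheap:
SELECT max(ts) FROM (
  SELECT max("timestamp") AS ts FROM sms WHERE number = '5502712'
  UNION ALL
  SELECT max("timestamp") FROM sms WHERE number = '5802693'
  UNION ALL
  SELECT max("timestamp") FROM sms WHERE number = '5801981'
) AS per_number;
```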

Re: [PERFORM] how to know slowly query in lock postgre

2011-06-20 Thread Claudio Freire
Something like this[0] ? [0] http://archives.postgresql.org/pgsql-hackers/2007-04/msg01037.php On Mon, Jun 20, 2011 at 9:57 AM, Didik Prasetyo wrote: > hai friend i have problem with performance database in postgre, how to know > slowly query in postgre, > i want kill or stop query to make postg

Re: [PERFORM] generating a large XML document

2011-06-20 Thread Julius Tuskenis
On 2011.06.20 10:58, Pavel Stehule wrote: string_agg is more efficient now. The only other solution would be a radical refactoring of the xmlagg function. Thank you, Pavel for letting me know of string_agg. explain analyze SELECT string_agg( XMLELEMENT ( NAME "bar", XMLELEMENT ( NAME "code", tick_b
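A completed version of the truncated fragment might look like this (the `ticket` table and `tick_barcode` column are guesses extrapolated from the snippet, not the poster's actual schema):

```sql
-- string_agg takes text, so the XML value is cast; this sidesteps the
-- repeated-concatenation cost the thread attributes to xmlagg.
EXPLAIN ANALYZE
SELECT string_agg(
         XMLELEMENT(NAME "bar",
           XMLELEMENT(NAME "code", tick_barcode))::text,
         '')
FROM ticket;
```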

Re: [PERFORM] generating a large XML document

2011-06-20 Thread Pavel Stehule
2011/6/20 Pavel Stehule : > 2011/6/20 Julius Tuskenis : >> Thank you, Pavel, for your answer >> >> On 2011.06.20 09:51, Pavel Stehule wrote: >>> >>> can you send a profile? >> >> Excuse me, but what do you mean by "profile"? I've sent the content of >> pg_settings in the first post. Please be more sp

[PERFORM] how to know slowly query in lock postgre

2011-06-20 Thread Didik Prasetyo
Hi friends, I have a problem with database performance in Postgres: how do I find slow queries? I want to kill or stop the queries that make Postgres slow, but in the server status window of pgAdmin, the query text and how long the query has been running sometimes do not appear. Thanks for any solution
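For the monitoring question, the catalogs of that era (pre-9.2, where the columns are named `procpid` and `current_query`) support something like:

```sql
-- List the longest-running statements, most expensive first:
SELECT procpid, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;

-- Cancel just the running statement for a given backend
-- (12345 is a placeholder pid taken from the query above):
SELECT pg_cancel_backend(12345);
```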

Re: [PERFORM] generating a large XML document

2011-06-20 Thread Pavel Stehule
2011/6/20 Julius Tuskenis : > Thank you, Pavel, for your answer > > On 2011.06.20 09:51, Pavel Stehule wrote: >> >> can you send a profile? > > Excuse me, but what do you mean by "profile"? I've sent the content of > pg_settings in the first post. Please be more specific, as I am more of a > programme

Re: [PERFORM] generating a large XML document

2011-06-20 Thread Julius Tuskenis
Thank you, Samuel, for the time it took to investigate the issue. I'll try using a buffer to see what the results are, and I'll post the results to the list if I succeed. -- Julius Tuskenis Head of Programming Department UAB nSoft mob. +37068233050 -- Sent via pgsql-performance mailing list (pgsql-pe

Re: [PERFORM] generating a large XML document

2011-06-20 Thread Julius Tuskenis
Thank you, Pavel, for your answer. On 2011.06.20 09:51, Pavel Stehule wrote: can you send a profile? Excuse me, but what do you mean by "profile"? I've sent the content of pg_settings in the first post. Please be more specific, as I am more of a programmer than a server administrator. -- Julius

Re: [PERFORM] generating a large XML document

2011-06-20 Thread Samuel Gendler
On Sun, Jun 19, 2011 at 11:36 PM, Julius Tuskenis wrote: > Hello, > > I'm sorry to write again, but as I received no answer I wonder if there is > a better mailing list to address this question to? Or is there > nothing to be done about the speed of xmlagg? Please let me know, as no answer >