Hi,
My first suggestion would be to build the index either on
parcel_id_code alone or on (parcel_id_code, id).
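Roughly something like this (just a sketch; the table name "parcels" is
my own placeholder, not taken from your schema):

  -- index on the lookup column alone
  CREATE INDEX parcels_parcel_id_code_idx ON parcels (parcel_id_code);

  -- or a composite index, if queries filter on parcel_id_code and
  -- also need id (e.g. for ORDER BY id or a join on id)
  CREATE INDEX parcels_parcel_id_code_id_idx ON parcels (parcel_id_code, id);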
But I am not sure, because I am new to pg :)
cheers,
lefteris
On Sat, Jan 9, 2010 at 1:46 PM, Richard Neill wrote:
> Dear All,
>
> I'm trying to optimise the speed of som
On Thu, Jan 7, 2010 at 11:57 PM, Lefteris wrote:
> Hi Greg,
>
> thank you for your help. The changes I made to the dataset were just
> removing the last comma from the CSV files, as it was interpreted by pg
> as an extra column. The schema I used, the load script and queries ca
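For reference, a minimal sketch of the kind of CSV load meant above;
the table name and file path are placeholders, not the actual load
script:

  COPY mytable FROM '/path/to/data.csv' WITH CSV;
  -- in CSV mode a trailing comma on every line is read as an extra,
  -- empty column, which is why the files had to be fixed before loading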
times I got from postgres.
I really appreciate your help! This is a great opportunity for me to
get a feel for, and some insight into, postgres, since I never had the
chance to use it in a large-scale project.
lefteris
On Thu, Jan 7, 2010 at 11:21 PM, Greg Smith wrote:
> Lefteris wrote:
>>
On Thu, Jan 7, 2010 at 4:57 PM, Ivan Voras wrote:
> 2010/1/7 Lefteris :
>> On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras wrote:
>>> On 7.1.2010 15:23, Lefteris wrote:
>>>
>>>> I think what you all said was very helpful and clear! The only part
>>>
On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras wrote:
> On 7.1.2010 15:23, Lefteris wrote:
>
>> I think what you all said was very helpful and clear! The only part
>> that I still disagree with/don't understand is the shared_buffers option :))
>
> Did you ever try increasi
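A sketch of how shared_buffers is checked and raised; the value below
is only an example, not a recommendation:

  SHOW shared_buffers;            -- current value, from psql
  -- then in postgresql.conf, e.g.:
  --   shared_buffers = 512MB     -- example value only
  -- and restart the server for the change to take effect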
On Thu, Jan 7, 2010 at 3:14 PM, Alvaro Herrera wrote:
> Lefteris wrote:
>> Yes, I was reading the plan wrong! I thought that each row of the
>> plan reported the total time for that operation alone, but it actually
>> reports the starting and ending point (time to first row and cumulative total).
>>
>> So we
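To make the plan-reading point concrete, here is an illustrative,
made-up EXPLAIN ANALYZE line and how its two numbers are read:

  -- Seq Scan on t  (cost=0.00..1000.00 rows=50000 width=8)
  --                (actual time=0.020..950.123 rows=50000 loops=1)
  -- actual time=0.020..950.123 means 0.020 ms until the first row came
  -- out and 950.123 ms until the last one; the second figure is
  -- cumulative and includes the node's children, not this node alone.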
only have one session with one connection, do I have many reader
workers or something?
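(For what it's worth, a single client connection is served by a single
backend process; the backends can be listed from psql, sketch only:

  SELECT count(*) FROM pg_stat_activity;   -- one row per backend/connection)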
Thank you, and sorry for the plethora of questions, but I know little
about the inner parts of postgres :)
lefteris
On Thu, Jan 7, 2010 at 3:05 PM, Jochen Erwied wrote:
> Thursday, January 7, 2010, 2:47:36
o
0.1% consumption of main memory. Is there no way to force the sort to
use, say, blocks of 128MB? Wouldn't that make a difference?
lefteris
p.s. I already started the analyze verbose again, as Flavio suggested,
and reset the parameters, although I think some of Flavio's
suggestions have to d
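For what it's worth, the memory a single sort may use is governed by
work_mem and can be raised per session; a sketch with an example value
only:

  SET work_mem = '128MB';   -- example value, session-local
  -- re-running the query under EXPLAIN ANALYZE then shows whether the
  -- sort switched from an external (on-disk) sort to an in-memory one.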
rows=52484047 loops=1)
I don't see the seq scan as a problem; it is the correct choice here,
because Year spans 1999 to 2009 and the query asks for 2000 onwards,
so PG correctly decides to use a seq scan and not index access.
lefteris
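As a sketch of that reasoning (placeholder table name): a predicate
matching roughly 10 of the 11 years touches nearly every table page
anyway, so an index scan would not help:

  EXPLAIN SELECT count(*) FROM mytable WHERE "Year" >= 2000;
  -- expected: a Seq Scan with a Filter on "Year", not an index scan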
On Thu, Jan 7, 2010 at 2:32 PM, A. Kretschmer wrote:
ch means that of course work_mem == sort_mem; as such, shouldn't the
sort algorithm have used much more memory?
I also attach the output of 'show all;' so you can advise me on any
other configuration settings that I might need to change to perform
better.
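(The same information can also be pulled selectively from psql; a
small sketch:

  SHOW ALL;   -- everything, same as the attached output
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('work_mem', 'shared_buffers', 'maintenance_work_mem');)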
Than