Hi Andreas,
Maybe the content of this blog could be useful: http://raspberrypg.org/
Cheers,
Giuseppe.
2014-08-05 6:32 GMT+02:00 Michael Paquier:
> On Tue, Aug 5, 2014 at 12:32 PM, Andreas wrote:
> > I then even tried to remove the local repository and install PG 9.1.
> > That doesn't work either. …
Hi Serge,
A million apologies for the delayed acknowledgement of your email. Yahoo
webmail is doing weird things with conversations (your email was hiding in my
sent box instead of my inbox, tagged onto the end of my original email!).
But I digress. I will take a look at your suggestions …
On 05.08.2014 06:32, Michael Paquier wrote:
On Tue, Aug 5, 2014 at 12:32 PM, Andreas wrote:
Is there a way to get a working PG 9.3 on a Raspberry?
This seems like a problem inherent to your OS or your RPMs, as two
buildfarm machines are Raspberry Pis and are able to run Postgres:
hamster and …
On 05.08.2014 09:36, Giuseppe Broccolo wrote:
Maybe the content of this blog could be useful: http://raspberrypg.org/
Hi Giuseppe,
I found this (your) blog with Google. :)
In the post on how to install PG, you just include the postgresql.org
repository and fetch the binary from there.
…
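As a generic sanity check after installing from a repository, psql can
confirm which server version actually ended up running:

    -- Confirm the running server is the intended 9.3, not a leftover 9.1.
    SELECT version();
    SHOW server_version;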
On 08/03/2014 08:55 PM, Jeff Janes wrote:
Does RAID 1 mean you only have 2 disks in your RAID? If so, that is
woefully inadequate for your apparent workload. The amount of RAM
doesn't inspire confidence, either.
Phoenix, I agree that this is probably the core of the problem you're
having. …
On 07/30/2014 12:51 PM, Kevin Goess wrote:
A couple months ago we upgraded the RAM on our database servers from
48GB to 64GB. Immediately afterwards the new RAM was being used for
page cache, which is what we want, but that seems to have dropped off
over time, and there's currently actually …
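The OS page cache itself isn't visible from SQL, but Postgres's own buffer
cache can be inspected with the contrib pg_buffercache extension. A minimal
sketch (the 8192 assumes the default 8kB block size):

    -- Show which relations currently occupy shared_buffers, largest first.
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname,
           pg_size_pretty(count(*) * 8192) AS buffered
    FROM   pg_buffercache b
    JOIN   pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE  b.reldatabase = (SELECT oid FROM pg_database
                            WHERE datname = current_database())
    GROUP  BY c.relname
    ORDER  BY count(*) DESC
    LIMIT  10;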
I would be appreciative if somebody could help explain why we have two nearly
identical queries taking two different planner routes; one a nested index loop
that takes about 5s to complete, and the other a hash join & heap scan that
takes about 2hr. This is using Postgres 9.3.3 on OS X 10.9.4
On 08/05/2014 02:16 PM, john gale wrote:
Your EXPLAIN output basically answered this for you. Your fast query has
this:
Nested Loop (cost=0.85..2696.12 rows=88 width=1466)
While your slow one has this:
Hash Join (cost=292249.24..348608.93 rows=28273 width=1466)
If this data is at …
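When comparing plans like these, EXPLAIN (ANALYZE, BUFFERS) is more telling
than plain EXPLAIN, since it prints actual row counts and buffer I/O next to
the estimates. A sketch with made-up table and column names:

    -- Hypothetical names; substitute the real table and predicate.
    -- Large gaps between estimated and actual rows point at stale or
    -- insufficient statistics.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM   testruns
    WHERE  spawn_id = 1234;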
On Aug 5, 2014, at 12:45 PM, Shaun Thomas wrote:
> Your EXPLAIN output basically answered this for you. Your fast query has this:
>
>> Nested Loop (cost=0.85..2696.12 rows=88 width=1466)
>
> While your slow one has this:
>
>> Hash Join (cost=292249.24..348608.93 rows=28273 width=1466)
>
On 08/05/2014 03:06 PM, john gale wrote:
Even on a 114G table with a 16G index, you would consider this slow?
(physical disk space is closer to 800G, that was our high-water before
removing lots of rows and vacuuming, although it is running on SSD)
Yes, actually. Especially now that you've …
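The sizes under discussion can be measured directly from SQL; a sketch with
a hypothetical table name:

    -- pg_relation_size: heap only; pg_indexes_size: all indexes;
    -- pg_total_relation_size: heap + indexes + TOAST.
    SELECT pg_size_pretty(pg_relation_size('testruns'))       AS table_size,
           pg_size_pretty(pg_indexes_size('testruns'))        AS index_size,
           pg_size_pretty(pg_total_relation_size('testruns')) AS total_size;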
On Aug 5, 2014, at 1:26 PM, Shaun Thomas wrote:
> On 08/05/2014 03:06 PM, john gale wrote:
>
>> Even on a 114G table with a 16G index, you would consider this slow?
>> (physical disk space is closer to 800G, that was our high-water before
>> removing lots of rows and vacuuming, although it is …
On 08/05/2014 04:08 PM, john gale wrote:
Most of the planner options haven't diverged from the defaults, so
default_statistics_target is still set to 100. I only vaguely understand
the docs on that variable, but if we have the space it sounds like we
should bump this up significantly to accommodate …
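For reference, the target can also be raised per column instead of globally,
which keeps ANALYZE cheap for everything else; the table and column names
below are hypothetical:

    -- Finer-grained statistics only for the misestimated column.
    ALTER TABLE testruns ALTER COLUMN spawn_id SET STATISTICS 1000;
    ANALYZE testruns;

    -- Or raise the default for the session (or in postgresql.conf):
    SET default_statistics_target = 500;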
Shaun Thomas wrote:
>> Also, that doesn't make sense to me, since we don't have 2.5mil rows
>> that match this one SpawnID. Could this suggest that my partial
>> hstore index is somehow misconstructed? Or is that saying that
>> 2.5mil rows have a SpawnID, not all of which will be the one I'm
>> …
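For context, a partial GIN index over an hstore column along the lines being
discussed might look like the sketch below (table and column names are
guesses, not from the thread). Note that the index predicate admits every row
that has a SpawnID key at all, whatever its value, which would be consistent
with a row estimate far above the match count for any single SpawnID:

    -- Hypothetical reconstruction: index rows whose hstore column
    -- contains a 'SpawnID' key, regardless of the key's value.
    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE INDEX testruns_spawnid_idx
        ON testruns USING gin (custom_data)
        WHERE custom_data ? 'SpawnID';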