I have a situation where I cannot explicitly control the queries generated by
our BI tool, and I would like to use table partitioning. Unfortunately the
queries don't appear to be taking advantage of the table partitions, because
the key used to limit the query results is the joined foreign key rather than
the partition key.
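For anyone else in the same spot: constraint exclusion (the 9.x partitioning
mechanism) can only prune partitions when the partition key is compared
against a plan-time constant; a value that arrives through a join is invisible
to it. A minimal sketch, with hypothetical table names:

    -- Table names are illustrative, not from the post.
    CREATE TABLE measurements (logdate date NOT NULL, reading int);
    CREATE TABLE measurements_2012_09 (
        CHECK (logdate >= DATE '2012-09-01' AND logdate < DATE '2012-10-01')
    ) INHERITS (measurements);
    CREATE TABLE batches (id int PRIMARY KEY, logdate date);

    SET constraint_exclusion = partition;

    -- Pruned: the planner can test the literal against each child's CHECK.
    EXPLAIN SELECT * FROM measurements WHERE logdate = DATE '2012-09-29';

    -- Not pruned: the limiting value only arrives through the join at run
    -- time, so at plan time there is no constant to compare with the CHECKs.
    EXPLAIN SELECT m.*
    FROM measurements m
    JOIN batches b ON b.logdate = m.logdate
    WHERE b.id = 42;

The usual workarounds are to denormalize the partition key into the querying
table, or to rewrite the query so the key becomes a literal.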
bryanck writes:
> One further question, how exactly can a CachedPlan leak? The OpenStreetMap
> data load does use plsql and triggers are involved as well. Is there
> something new to 9.2 that changes the way CachedPlans are cleaned up?
9.2 is more aggressive about caching plans, but you've provid
One further question, how exactly can a CachedPlan leak? The OpenStreetMap
data load does use plsql and triggers are involved as well. Is there
something new to 9.2 that changes the way CachedPlans are cleaned up?
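For background: each SQL statement inside a plpgsql function or trigger is
planned the first time it runs in a session, and every such plan lives in its
own CachedPlan memory context for the life of the backend. A minimal sketch of
how a trigger accumulates them (table names are hypothetical):

    CREATE OR REPLACE FUNCTION touch_node() RETURNS trigger AS $$
    BEGIN
        -- Each statement below is planned once per session; the resulting
        -- CachedPlan contexts are what show up in memory-context dumps like
        -- the ones in the log excerpt below.
        UPDATE node_stats SET n = n + 1 WHERE node_id = NEW.id;
        INSERT INTO audit_log(node_id, at) VALUES (NEW.id, now());
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;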
In the logs, after I got the OOM, I noticed I had several thousand lines
similar to the following:
CachedPlan: 7168 total in 3 blocks; 2472 free (0 chunks); 4696 used
CachedPlan: 7168 total in 3 blocks; 3200 free (0 chunks); 3968 used
CachedPlan: 1024 total in 1 blocks; 64 free (0 chun
I recently upgraded from PostgreSQL 9.0.10 to 9.2.1. I am now running into
problems with PostgreSQL running out of memory during large data operations,
more specifically loading the OpenStreetMap data into the database. The load
under 9.0 went fine and there were no memory issues. This is on the
On Sat, Sep 29, 2012 at 4:21 PM, Scott Marlowe wrote:
> Whoa aren't you running pg bouncer? If so then leave pg alone, adjust
> pg bouncer. Revert that db side change, examine pgbouncer config etc.
Let me expand a bit on that point. The reason to use pgbouncer is
that you can have hundreds of client connections multiplexed over a much
smaller pool of actual database connections.
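For reference, a minimal pgbouncer.ini sketch of that shape (values are
illustrative, not from this thread):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    pool_mode = transaction
    max_client_conn = 500       ; clients the bouncer will accept
    default_pool_size = 20      ; real connections to PostgreSQL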
Ack! This fixed it:
ln -s /usr/local/lib/libuuid.so.16 /lib/x86_64-linux-gnu/libuuid.so.16
I dream of the day when UUIDs just work out of the box in Postgres.
Mike
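Once the symlink is in place, the fix can be verified from psql (assuming the
packaged extension scripts are present):

    CREATE EXTENSION "uuid-ossp";
    SELECT uuid_generate_v4();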
On Sat, Sep 29, 2012 at 5:24 PM, Mike Christensen wrote:
> I built 1.6 directly from the source and ran a make install. I'm
>
I built 1.6 directly from the source and ran a make install. I'm
pretty sure this library is installed:
root@Database:~# locate libuuid.so
/lib/x86_64-linux-gnu/libuuid.so.1
/lib/x86_64-linux-gnu/libuuid.so.1.3.0
/root/uuid/uuid-1.6.0/.libs/libuuid.so
/root/uuid/uuid-1.6.0/.libs/libuuid.so.16
/ro
Yea, this is already installed.. I can type "uuid" from the command
line and get a UUID generated..
On Sat, Sep 29, 2012 at 5:07 PM, Adrian Klaver wrote:
> On 09/29/2012 04:54 PM, Mike Christensen wrote:
>>
>> I'm having problems installing the uuid-ossp extension on 9.2, running
>> on Ubuntu.
On 09/29/2012 04:54 PM, Mike Christensen wrote:
I'm having problems installing the uuid-ossp extension on 9.2, running
on Ubuntu. This is a fresh install, and I downloaded the binaries off
of EnterpriseDB (I did not build from source).. According to:
select * from pg_available_extensions;
The
I'm having problems installing the uuid-ossp extension on 9.2, running
on Ubuntu. This is a fresh install, and I downloaded the binaries off
of EnterpriseDB (I did not build from source).. According to:
select * from pg_available_extensions;
The extension is available. However, when I run:
CREATE EXTENSION "uuid-ossp";
On Sat, Sep 29, 2012 at 2:21 PM, Alexander Farber wrote:
> Hello Scott and others,
>
> On Sat, Sep 29, 2012 at 9:38 PM, Scott Marlowe
> wrote:
>> On Sat, Sep 29, 2012 at 11:27 AM, Alexander Farber wrote:
>>> I've finally doubled up RAM to 32 GB for my Quad core
>>> CentOS 6.3 server and have
Hello Scott and others,
On Sat, Sep 29, 2012 at 9:38 PM, Scott Marlowe wrote:
> On Sat, Sep 29, 2012 at 11:27 AM, Alexander Farber wrote:
>> I've finally doubled up RAM to 32 GB for my Quad core
>> CentOS 6.3 server and have changed postgresql.conf to
>>
>> max_connections = 100
>> shared_buffers = 4096MB
On Sat, Sep 29, 2012 at 11:27 AM, Alexander Farber wrote:
> Hello,
>
> I've finally doubled up RAM to 32 GB for my Quad core
> CentOS 6.3 server and have changed postgresql.conf to
>
> max_connections = 100
> shared_buffers = 4096MB
> work_mem = 16MB
>
> But don't see any speed improvement
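For a rough reference point, common starting values for a dedicated 32 GB
machine of that era look something like this (illustrative, not advice from
this thread; measure before and after any change):

    shared_buffers = 4096MB            # as already set
    effective_cache_size = 24GB        # planner hint: RAM available for caching
    work_mem = 16MB                    # per sort/hash node, per backend
    maintenance_work_mem = 1GB         # speeds VACUUM, CREATE INDEX
    checkpoint_segments = 32           # 9.2-era setting; spreads write bursts
    checkpoint_completion_target = 0.9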
On 09/29/12 11:43 AM, Jeff Janes wrote:
> http://serverfault.com/questions/433281/doubled-up-ram-to-32-gb-now-how-to-speed-up-a-lapp-server
If you expand the "COMMAND" field of the "top" display (by hitting "c"
on my Linux box), you can probably see what the top "postmaster" process
is doing.
Anyway
On Sat, Sep 29, 2012 at 10:27 AM, Alexander Farber wrote:
> Hello,
>
> I've finally doubled up RAM to 32 GB for my Quad core
> CentOS 6.3 server and have changed postgresql.conf to
>
> max_connections = 100
> shared_buffers = 4096MB
> work_mem = 16MB
>
> But don't see any speed improvement
-- Weekend practice : 2012-09-30
--
-- Having recently discovered Common Table Expressions, I wanted to see
-- whether the powerful (threaded) SQL engines could run sophisticated
-- algorithms (not "brute force") more quickly than classic languages.
--
-- result : "could" = yes, "more quickly" = no, bu
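For readers who have not met them yet, here is the shape of a recursive CTE
(illustrative; not the poster's code), the construct that lets plain SQL
express this kind of iterative algorithm:

    WITH RECURSIVE fib(i, a, b) AS (
        SELECT 1, 0::bigint, 1::bigint
        UNION ALL
        SELECT i + 1, b, a + b FROM fib WHERE i < 20
    )
    SELECT i, a AS fib_i FROM fib;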
On 29 September 2012 18:08, Andreas wrote:
> Hi
> I encountered something that puzzled me a bit.
> Is it expected that DROP table CASCADE drops this table and just the
> foreign key constraints but not the referring tables?
>
> PG 9.2.1 on WinXP
>
>
>
Hi Andreas,
yes, it is expected, at least according to the documentation.
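A small sketch of that behavior (hypothetical tables):

    CREATE TABLE parent (id int PRIMARY KEY);
    CREATE TABLE child  (id int PRIMARY KEY,
                         parent_id int REFERENCES parent(id));

    DROP TABLE parent CASCADE;    -- drops parent plus child's FK constraint
    -- child itself survives, minus the foreign key:
    SELECT count(*) FROM child;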
Hi
I encountered something that puzzled me a bit.
Is it expected that DROP table CASCADE drops this table and just the
foreign key constraints but not the referring tables?
PG 9.2.1 on WinXP
On 2012-09-24, Robert James wrote:
> I have some code which creates a function in Postgres, taken from
> http://wiki.postgresql.org/wiki/Array_agg .
>
> DROP AGGREGATE IF EXISTS array_agg(anyelement);
> CREATE AGGREGATE array_agg(anyelement) (
>     SFUNC=array_append,
>     STYPE=anyarray,
>     INITCOND='{}'
> );
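One caveat for anyone copying that snippet: array_agg has been built in since
PostgreSQL 8.4, so on current servers the DDL above is unnecessary and will
collide with the built-in aggregate. Usage is simply:

    SELECT array_agg(relname) FROM pg_class WHERERE relkind = 'r';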
On 2012-09-27, Dennis Gearon wrote:
> The errors were always 'illegal command', 10s of thousands of them. As far as
> what I did 2 years ago, I can't remember 2 days ago, sorry about that ;-) Yes
> it is a plain text dump. I don't have huge databases yet, so to make it easier
> to go betw