be greatly appreciated :)
Reference:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-captun.html
Thanks
Bruce
On 16/06/2014 14:21, Bruce McAlister wrote:
Hi,
On 16/06/2014 14:15, Andres Freund wrote:
Hi,
On 2014-06-16 13:5
Hi,
On 16/06/2014 14:15, Andres Freund wrote:
Hi,
On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:
[1] 3 x ESX VMs
[a] 8 vCPUs each
[b] 16GB memory each
# Don't hand out more memory than necessary
vm.overcommit_memory = 2
So you ha
Hi All,
I need some assistance with a particular out of memory issue I am
currently experiencing, your thoughts would be greatly appreciated.
Configuration:
[1] 3 x ESX VMs
[a] 8 vCPUs each
[b] 16GB memory each
[2] CentOS 6.5 64-bit on each
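A minimal sketch of checking and applying the overcommit settings quoted above; the 80% ratio is purely illustrative and not a value from the thread:
  # Check the current overcommit policy and ratio
  sysctl vm.overcommit_memory vm.overcommit_ratio

  # With vm.overcommit_memory = 2 the kernel refuses allocations beyond
  # swap + overcommit_ratio% of RAM, so the ratio usually needs raising too;
  # 80 below is purely illustrative.
  sysctl -w vm.overcommit_memory=2
  sysctl -w vm.overcommit_ratio=80

  # CommitLimit / Committed_AS in /proc/meminfo show how close the system
  # is to the hard limit that triggers allocation failures.
  grep -i commit /proc/meminfo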
>
> Bizarre ... I've never heard of a Unix system that didn't consider that
> a default place to look. Unless this is a 64-bit machine and uuid
> should have installed itself in /usr/lib64?
>
It is a rather peculiar issue; I also assumed that it would check the
standard locations, but I thought
>
> The 1.6.2 stable version which you use is right.
>
Thanks, we managed to get it working now. Thanks for the pointers.
>
> Huh. Nothing obvious in your info about why it wouldn't work. I think
> you'll need to dig through the config.log output to see why these link
> tests are failing. (They'll be a few hundred lines above the end of the
> log, because the last part of the log is always a dump of configure's
>
>
> Um ... did you run PG's configure script with --with-ossp-uuid?
> It looks like either you didn't do that, or configure doesn't know
> to look in the place where you put the ossp-uuid header files.
>
Doh, I missed that. However, I have now included that option, but it
still does not find the
Hi All,
I am trying to build the uuid-ossp contrib module for PostgreSQL 8.3.4.
I am building on Solaris x86 with Sun Studio 12.
I built the ossp-uuid version 1.6.2 libraries and installed them,
however, whenever I attempt to build the contrib module I always end up
with the following error:
---
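A rough sketch of pointing PostgreSQL 8.3's configure at a non-default ossp-uuid install; the /usr/local prefix and the Sun ld -R runpath flag are assumptions, not values from the thread:
  # Assumes ossp-uuid 1.6.2 was installed under /usr/local; adjust to the real prefix
  CPPFLAGS="-I/usr/local/include" \
  LDFLAGS="-L/usr/local/lib -R/usr/local/lib" \
  ./configure --with-ossp-uuid

  # Build and install just the contrib module
  cd contrib/uuid-ossp
  gmake && gmake install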
Looks like you're out of disk space on:
/dev/sda2      ext3    9.9G  9.5G     0  100%  /var
Is this where your database resides?
Phoenix Kiula wrote:
> I am getting this message when I start the DB:
>
>
> psql: FATAL: could not access status of transaction 0
> DETAIL: Could not write to file "
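A quick sketch of checking the disk-space theory; the data directory path is an assumption:
  # How full is the filesystem holding the cluster?
  df -h /var

  # Size of the transaction-status and WAL directories (path is an assumption)
  du -sh /var/lib/pgsql/data/pg_clog /var/lib/pgsql/data/pg_xlog

  # Largest consumers under /var, to see what can be cleared
  du -xk /var | sort -rn | head -20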
> 2. Mac OS X is proprietary even down to the hardware. That is enough for
> me to not use it. I gave up the whole IBM/SUN/SGI/HP fiasco of closed-door
> unix and hardware a decade ago.
Wow :) Maybe you need to revisit Sun gear: OpenSolaris and
OpenCluster are only a subset of the unix tool
Denis Gasparin wrote:
> Martijn van Oosterhout ha scritto:
>> On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote:
>>
>>> I just want to verify that I understand you correctly here, do you mean
>>> that the temporary table is created by specific
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>
>> Alvaro Herrera wrote:
>>
>>> Bruce McAlister wrote:
>>>
>>>> Martijn van Oosterhout wrote:
>>>>
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Martijn van Oosterhout wrote:
>>> All the values here look OK, except one:
>>>
>>> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>>>> blueface-crm=# select oid, relfrozen
Alban Hertroys wrote:
> Bruce McAlister wrote:
>> Which brings me onto a possibly related question. I've noticed that in
>> this particular database, that there are temporary tables that are
>> created. I'm not 100% sure how/why these temporary tables are being
>
Martijn van Oosterhout wrote:
> All the values here look OK, except one:
>
> On Thu, Jun 28, 2007 at 07:50:36AM +0100, Bruce McAlister wrote:
>> blueface-crm=# select oid, relfrozenxid from pg_class where relkind in
>> ('r', 't');
>>oid | relfr
Bruce McAlister wrote:
> I will run with DEBUG2 for a while and see if my output looks anything
> like this :)
I've been running in DEBUG2 mode for a couple of days now and I can see the
extra information being logged into the log file, but it looks like the
autovacuum is not actually s
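For reference, a minimal sketch of turning on DEBUG2 logging; the paths assume a stock install with logging to files under pg_log:
  # postgresql.conf (path is an assumption):
  #   log_min_messages = debug2
  grep log_min_messages /var/lib/pgsql/data/postgresql.conf

  # Reload so the change takes effect without a restart
  pg_ctl -D /var/lib/pgsql/data reload

  # Autovacuum decisions then show up in the server log (location is an assumption)
  tail -f /var/lib/pgsql/data/pg_log/postgresql-*.log | grep -i autovacuum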
Alvaro Herrera wrote:
> Alvaro Herrera wrote:
>
>>> How much is the age decremented by on a vacuum run then?
>> It should be decremented to the vacuum_freeze_min_age. However, I'm
>> running some experiments with your settings and apparently it's not
>> working as it should.
>
Okay, if it's dec
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Alvaro Herrera wrote:
>>> Bruce McAlister wrote:
>>>
>>> Ok now this is interesting:
>>>
>>>> select datname, age(datfrozenxid) from pg_database;
>>>> datname |
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>
> Ok now this is interesting:
>
>> select datname, age(datfrozenxid) from pg_database;
>>    datname    |    age
>> --------------+-----------
>>  blueface-crm | 441746613
>
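The same check can be narrowed down to individual tables; a small sketch (the database name is from the thread, the query itself is generic):
  # Per-table transaction-ID age in the affected database, oldest first
  psql -d blueface-crm -c "
    SELECT relname, age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind IN ('r', 't')
    ORDER BY age(relfrozenxid) DESC
    LIMIT 20;"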
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>
>> I have just checked pg_stat_all_tables in the pg_catalog schema and I can
>> see the per-table counters (index scans, etc.) incrementing. The data in
>> the tables seems to be updating. Just an FYI, I've enabled manual
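As a hedged sketch, the same view also shows when (auto)vacuum last touched each table, which is an easy way to confirm whether autovacuum is really running (columns exist from 8.2 onward):
  # Last manual and automatic vacuum per table (database name is an assumption)
  psql -d blueface-crm -c "
    SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_all_tables
    ORDER BY relname;"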
Alvaro Herrera wrote:
> Bruce McAlister wrote:
>> Hi All,
>>
>> I have enabled autovacuum in our PostgreSQL cluster of databases. What I
>> have noticed is that the autovacuum process keeps selecting the same
>> database to perform autovacuums on and does not selec
Albe Laurenz wrote:
> Richard Huxton wrote:
>>> In our environment it takes approx 2 hours to perform a PIT backup of
>>> our live system:
>>>
>>> [1] select pg_start_backup('label');
>>> [2] cpio & compress database directory (exclude wals)
>>> [3] select pg_stop_backup()
>>>
>>> However, if we per
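Steps [1]-[3] above, written out as a rough script; the paths and the backup label are placeholders rather than the thread's actual values:
  # [1] Mark the start of the base backup
  psql -d postgres -c "SELECT pg_start_backup('nightly_base');"

  # [2] Archive the cluster directory, excluding WAL segments (paths are assumptions)
  cd /var/lib/pgsql/data
  find . -path ./pg_xlog -prune -o -print | cpio -o | gzip > /backup/base_backup.cpio.gz

  # [3] Mark the end of the backup
  psql -d postgres -c "SELECT pg_stop_backup();"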
Richard Huxton wrote:
> Bruce McAlister wrote:
>> That's exactly what I think. There is something strange going on. At the
>> moment I think it is the disk I am writing the data to that is slow,
>> possibly due to the fact that it is mounted up as "forcedirectio", so
Richard Huxton wrote:
> Bruce McAlister wrote:
>> Hi All,
>>
>> Is it at all possible to "roll forward" a database with archive logs
>> when it has been recovered using a dump?
>>
>> Assuming I have the archive_command and archive_timeout parameters
Hi All,
Is it at all possible to "roll forward" a database with archive logs
when it has been recovered using a dump?
Assuming I have the archive_command and archive_timeout parameters set
on our "live" system, then I follow these steps:
[1] pg_dump -d database > /backup/database.dump,
[2] initd
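For reference, a minimal sketch of the two settings mentioned above; the values and paths are illustrative only. Note that archived WAL replays on top of a file-level base backup taken between pg_start_backup()/pg_stop_backup(), not on top of a logical pg_dump restore.
  # postgresql.conf on the live system (illustrative values):
  #   archive_command = 'cp %p /backup/wal_archive/%f'
  #   archive_timeout = 300    # force a WAL segment switch at least every 5 minutes
  grep -E 'archive_(command|timeout)' /var/lib/pgsql/data/postgresql.conf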
Hi All,
I have enabled autovacuum in our PostgreSQL cluster of databases. What I
have noticed is that the autovacuum process keeps selecting the same
database to perform autovacuums on and does not select any of the others
within the cluster. Is this normal behaviour or do I need to do
something m
eter
freeze_min_age integer Custom vacuum_freeze_min_age parameter
freeze_max_age integer Custom autovacuum_freeze_max_age parameter
Thanks
Bruce
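A hedged sketch against the 8.2/8.3-era pg_autovacuum catalog: the freeze_min_age and freeze_max_age columns match the listing above, while the join through vacrelid is assumed from that catalog's documented layout:
  # Which tables carry per-table autovacuum freeze overrides? (8.2/8.3-era sketch)
  psql -d blueface-crm -c "
    SELECT c.relname, a.enabled, a.freeze_min_age, a.freeze_max_age
    FROM pg_autovacuum a
    JOIN pg_class c ON c.oid = a.vacrelid;"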
"Bruce McAlister" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi Tom,
>
> Thanks for the sug
Hi Ashish Karalkar,
That's because the shell is waiting for input.
Try something like this:
su - postgres -c "${PG_PATH}/pg_dump -d qsweb -U postgres -p 5432 -Ft -f
/usr/local/pgsql/data/backup/BACKUP_QSWEB_${TIMEDUMP}.tar"
That should run the command as the postgres user.
Than