On Tue, Aug 30, 2011 at 12:54 AM, Florian Weimer wrote:
> * Scott Marlowe:
>
>> On a machine with lots of memory, I've run into pathological behaviour
>> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
>> eating up CPU and swap io like mad, while doing essentially nothing.
>>
On Tue, Aug 30, 2011 at 10:39 AM, Scott Marlowe wrote:
> On Tue, Aug 30, 2011 at 12:54 AM, Florian Weimer wrote:
>> * Scott Marlowe:
>>
>>> On a machine with lots of memory, I've run into pathological behaviour
>>> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
>>> eating up CPU and swap io like mad, while doing essentially nothing.
On a machine with lots of memory, I've run into pathological behaviour
with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
eating up CPU and swap io like mad, while doing essentially nothing.
Setting swappiness to 0 delayed this behaviour but did not stop it.
Yes, a few hundred MB of swap, and it's definitely making a huge
difference. Upon restarting postgres, it's all freed up, and then perf
is good again. Also, this box only has 1GB of swap total, so it's
never going to get up to a few dozen GB.
Anyway, here's some of the top output f
On Tue, Aug 30, 2011 at 2:50 AM, Sim Zacks wrote:
>
> On a machine with lots of memory, I've run into pathological behaviour
> with both the RHEL 5 and Ubuntu 10.04 kernels where the kswapd starts
> eating up CPU and swap io like mad, while doing essentially nothing.
> Setting swappiness to 0 delayed this behaviour but did not stop it.
On Tue, Aug 30, 2011 at 2:50 AM, Sim Zacks wrote:
>
> Instead of restarting the database try swapoff -a && swapon -a and see if
> that helps performance. If it is that little swap in use, it might be
> something else clogging up the works.
Check to see if kswapd is going crazy or not. If it is,
Hi,
On 2011-08-29 at 22:36, Lonni J Friedman wrote:
> ... I read that
> (max_connections * work_mem) should never exceed physical RAM, and if
> that's accurate, then I suspect that's the root of my problem on
> systemA (below).
work_mem is process-local memory, so (max_connections * work_mem) is roughly the
worst case that all sessions could allocate at once; in fact a single query can use
several multiples of work_mem, one per sort or hash step.
It is recommended to identify the queries that genuinely need a lot of work_mem
and raise work_mem for those at the session level, keeping the global setting modest.
In this case, all of the connections using their maximum work_mem at once is the
potential threat. As Zoltan said, work_mem is very high, and shared_buffers as well.
Other considera
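A minimal psql sketch of the session-level approach mentioned above (the 256MB value is purely illustrative, not a recommendation from this thread):
-- keep the global work_mem modest in postgresql.conf, then raise it only
-- for the session or transaction that runs the memory-hungry sort/hash:
SET work_mem = '256MB';             -- lasts until the session ends or is RESET
-- or, scoped to a single transaction:
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the large report query here ...
COMMIT;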
Hello Everyone,
I have a situation here -
I am trying to restore the production online backup and recover the same.
- I had initially rsynced the data directory (excluding pg_log) and then
tarred and zipped it
- SCP'd the tarball to a different server and untarred and unzipped it there
- I go
Lonni J Friedman wrote:
> ok, I'll do my best to capture this data, and then reply back.
If using Linux, you should find interesting data on per-process swap and
memory usage in /proc/${pid}/smaps
Also consider the script here:
http://northernmost.org/blog/find-out-what-is-using-your-swa
On Tue, Aug 30, 2011 at 1:26 AM, Greg Smith wrote:
> I doubt this has anything to do with your problem, just pointing this out as
> future guidance. Until there's a breakthrough in the PostgreSQL buffer
> cache code, there really is no reason to give more than 8GB of dedicated
> memory to the dat
On Mon, Aug 29, 2011 at 6:54 PM, peixubin wrote:
> You should monitor the PageTables value in /proc/meminfo. If the value is larger
> than 1GB, I suggest enabling hugepages.
>
> To monitor PageTables:
> # cat /proc/meminfo |grep -i pagetables
$ cat /proc/meminfo |grep -i pagetables
PageTables: 608
On Tue, Aug 30, 2011 at 3:00 AM, Boszormenyi Zoltan wrote:
> Hi,
>
> On 2011-08-29 at 22:36, Lonni J Friedman wrote:
>> ... I read that
>> (max_connections * work_mem) should never exceed physical RAM, and if
>> that's accurate, then I suspect that's the root of my problem on
>> systemA (below).
On Mon, Aug 29, 2011 at 5:42 PM, Tom Lane wrote:
> Lonni J Friedman writes:
>> I have several Linux-x86_64 based dedicated PostgreSQL servers where
>> I'm experiencing significant swap usage growth over time. All of them
>> have fairly substantial amounts of RAM (not including swap), yet the
>>
hi guys (and hopefully also ladies)
I use postgresql as a backend for freeradius with a coova-chilli hotspot.
We have an installation with plenty of concurrent users and a lot of
traffic; however, the database is not under that huge a load.
Normally all is working fine, but from time to time I get t
On Aug 30, 2011, at 10:19 AM, Peter Warasin wrote:
> The message tells me furthermore that freeradius tries to insert a
> record with a radacctid which already exists.
>
> But how can that happen when it is bigserial?
Postgres only assigns the value if it is not explicitly provided. Any client,
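To illustrate the point with a hypothetical table (not the actual freeradius schema): a bigserial column draws from its sequence only when the INSERT omits it, and a client that supplies the id explicitly bypasses the sequence entirely.
CREATE TABLE acct_demo (
    id   bigserial PRIMARY KEY,
    info text
);
INSERT INTO acct_demo (info) VALUES ('row A');         -- id 1 assigned from the sequence
INSERT INTO acct_demo (id, info) VALUES (1, 'row B');  -- explicit id: duplicate key violation,
                                                       -- and the sequence is never consulted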
Hi,
When I run select datname, procpid, current_query from pg_stat_activity; I
get 26 rows of queries. How can I set postgres to automatically
close connections that have finished their queries and now sit idle?
Thanks!
-JD
I am trying a simple access of a table and get an out of memory error.
How do I avoid this issue? It seems I have some configuration set wrong.
Our system has 24GB of memory and is dedicated to the postgres database.
B
Hi,
I have a server running PostgreSQL 8.4 (Scientific Linux release 6.0).
I'm running a process which receives messages from a remote server and
logs them into a table. Here is the table structure:
CREATE TABLE messages.message_log
(
message_id text,
message_timestamp timestamp with time zon
My friend, thanks for your reply; however, how can I prove your view?
I need to understand why this command fails:
nevada=# copy statdata to
'/home/rshepard/projects/nevada/queenstake/stats/chem.csv' with delimiter '|'
null as 'NA' CSV HEADER;
ERROR: could not open file
"/home/rshepard/projects/nevada/queenstake/stats/chem.csv" for writing:
Permission denied
On Aug 30, 2011, at 10:03 AM, JD Wong wrote:
> How can I set postgres to automatically close connections that have finished
> their queries and now sit idle?
They haven't finished their queries. They've opened transactions, and then are
sitting there doing nothing. In other words, this is a bug
On Aug 30, 2011, at 11:14 AM, Rich Shepard wrote:
> The permissions on that directory are 755 and it's owned by me. Since I
> have no problems writing other files to that directory I must have the
> command syntax incorrect but I don't see where.
Where is the server and where are you? You are iss
On Aug 30, 2011, at 8:22 AM, Dan Scott wrote:
> Perhaps because I'm locking the table with my query?
Do you mean you're explicitly locking the table? If so, why???
--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
On Tue, Aug 30, 2011 at 1:20 PM, Scott Ribe wrote:
> On Aug 30, 2011, at 11:14 AM, Rich Shepard wrote:
>
> > The permissions on that directory are 755 and it's owned by me. Since I
> > have no problems writing other files to that directory I must have the
> > command syntax incorrect but I don't s
Hello
if the table is large, then the client can raise this exception too.
Try setting FETCH_COUNT to 1000:
http://www.postgresql.org/docs/8.4/interactive/app-psql.html
Regards
Pavel Stehule
2011/8/30 Don :
> I am trying a simple access of a table and get an out of memory error. How
> do I avoid this i
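A psql sketch of the FETCH_COUNT suggestion above (the table name is only a placeholder): with FETCH_COUNT set, psql retrieves the result through a cursor in batches instead of loading every row into client memory at once.
\set FETCH_COUNT 1000
SELECT * FROM some_large_table;   -- psql now fetches 1000 rows at a time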
Hi
thank you for answering!
On 30/08/11 18:56, Scott Ribe wrote:
>> But how can that happen when it is bigserial?
>
> Postgres only assigns the value if it is not explicitly provided. Any client,
> freeradius included, could be assigning ids and could have bugs. Allowing pg
> to assign the val
On Tue, 30 Aug 2011, Scott Ribe wrote:
Where is the server and where are you? You are issuing a command to the
server to create a file at that path on the server.
It's sitting right here next to my desk. That host is the network server
and my workstation. Yes, my home directory (and all othe
On Tue, 30 Aug 2011, Scott Mead wrote:
In this case, it's not about YOU and your permissions, it's about the
server. The COPY command writes data as the 'postgres' operating system
user (or whichever user owns the postgres backend process).
Scott,
Ah so. User 'postgres' is in the same group
On 08/30/11 7:28 AM, Don wrote:
I am trying a simple access of a table and get an out of memory
error. How do I avoid this issue? It seems I have some configuration
set wrong.
Our system has 24GB of memory and is dedicated to the postgres database.
Back ground information
aquarec=> explain
On Tue, 30 Aug 2011, Rich Shepard wrote:
Ah so. User 'postgres' is in the same group ('users') as I am, so I need
to change the perms on the data directory to 775 to give postgres write
access.
That did the trick. Thanks for the lesson, Scott.
Rich
Peter Warasin wrote:
> The message tells me furthermore that freeradius tries to insert a
> record with a radacctid which already exists.
No, the message you quoted tells about the other unique constraint, the one
named radacct_unique. It's not related to the bigserial primary key.
Best
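For what it's worth, a quick way to see exactly which columns radacct_unique covers (assuming the table is named radacct, as the constraint name suggests) is to look it up in the catalogs, or simply run \d radacct in psql:
SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = 'radacct'::regclass
  AND contype = 'u';   -- 'u' = unique constraints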
Hey, I am trying to upgrade a CentOS 5.4 32bit test server running postgres
8.3.4 to postgres 9.1 RC1 and am running into an error I haven't seen
mentioned in the forums (at least dealing with the upgrade process). The
steps I ran through for the upgrade are...
>Stop postgres
>move /usr/local/pgsq
hi
On 30/08/11 19:43, Daniel Verite wrote:
>> The message tells me furthermore that freeradius tries to insert a
>> record with a radacctid which already exists.
>
> No, the message you quoted tells about the other unique constraint, the one
> named radacct_unique. It's not related to the bigseri
Dan Scott wrote:
> the insert process is unable to insert new rows into the database
You should probably provide the error message on insert or otherwise describe
how it's not working. Normally reading does not unintentionally prevent
writing in a concurrent session.
Best regards,
--
Da
On 08/30/2011 02:13 PM, Scott Ribe wrote:
On Aug 30, 2011, at 10:03 AM, JD Wong wrote:
How can I set postgres to automatically close connections that have finished
their queries and now sit idle?
AFAIK you can't; you should check the pg_terminate_backend function and see
if it is useful for you.
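A hedged sketch of that approach (column names match the pre-9.2 pg_stat_activity used elsewhere in this digest: procpid, current_query, and the '<IDLE>' marker); it requires superuser, and whether terminating idle sessions is appropriate at all depends on the application - as Scott notes above, the real problem may be sessions left idle in an open transaction:
-- terminate backends that have been idle for more than 10 minutes
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > interval '10 minutes';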
On Tue, Aug 30, 2011 at 11:17 AM, Lonni J Friedman wrote:
> On Mon, Aug 29, 2011 at 5:42 PM, Tom Lane wrote:
>> Lonni J Friedman writes:
>>> I have several Linux-x86_64 based dedicated PostgreSQL servers where
>>> I'm experiencing significant swap usage growth over time. All of them
>>> have fa
On Tue, Aug 30, 2011 at 11:03 AM, JD Wong wrote:
> Hi,
> When I run select datname, procpid, current_query from pg_stat_activity; I
> get 26 rows of queries. How can I set postgres to automatically
> close connections that have finished their queries and now sit idle?
you don't. this should be
Merlin Moncure writes:
> On Tue, Aug 30, 2011 at 11:17 AM, Lonni J Friedman wrote:
>> In the past 18 hours, swap usage has nearly doubled on systemA:
>> $ free -m
>>              total       used       free     shared    buffers     cached
>> Mem:         56481      56210        271          0
On Tue, Aug 30, 2011 at 12:48 PM, Justin Arnold wrote:
> Hey, I am trying to upgrade a CentOS 5.4 32bit test server running postgres
> 8.3.4 to postgres 9.1 RC1 and am running into an error I haven't seen
> mentioned in the forums (at least dealing with the upgrade process). The
> steps I ran thro
Merlin Moncure writes:
> It looks like some time after 8.3 was released that function was
> changed from returning 'record'. This is making me wonder if the
> upgrade process was ever tested/verified on 8.3.
Not lately, apparently :-(
> I absolutely do not
> advise doing this without taking a l
Hi,
I have a set of servers in the rack running 9.0.3. The production
server is doing streaming replication and that is working fine. I have
some quarterly reports that are select only so I've been running them
against the replica.
I have one part of that report that consistently dies with
Thanks Tom and Merlin, I removed that logic from check.c, rebuilt, and it
worked fine.
On Tue, Aug 30, 2011 at 2:47 PM, Tom Lane wrote:
> Merlin Moncure writes:
> > It looks like some time after 8.3 was released that function was
> > changed from returning 'record'. This is making me wonder if
I wrote:
> I think it'd be a lot safer to modify (or just remove) the test in
> pg_upgrade. It looks like a one-liner:
Specifically, the attached patch takes care of the problem. Thanks
for reporting it!
regards, tom lane
diff --git a/contrib/pg_upgrade/check.c b/contr
On 08/30/11 12:18 PM, Tom Lane wrote:
>>              total       used       free     shared    buffers     cached
>> Mem:         56481      55486        995          0         15      53298
>> -/+ buffers/cache:      2172      54309
>> Swap:         1099         18       1081
> This is totall
On Tue, Aug 30, 2011 at 2:55 PM, John R Pierce wrote:
> On 08/30/11 12:18 PM, Tom Lane wrote:
>>              total       used       free     shared    buffers     cached
>> Mem:         56481      55486        995          0         15      53298
>> -/+ buffers/cache: 2
We recently took a copy of our production data (running on 8.4.2), scrubbed
many data fields, and then loaded it onto a qa server (running 8.4.8). We're
seeing some odd planner performance that I think might be a bug, though I'm
hoping it's just idiocy on my part. I've analyzed things and looked
On Tue, Aug 30, 2011 at 5:05 PM, Lonni J Friedman wrote:
> On Tue, Aug 30, 2011 at 2:55 PM, John R Pierce wrote:
>> On 08/30/11 12:18 PM, Tom Lane wrote:
>
> total used free shared buffers cached
> >> Mem: 56481 55486 995
On Wed, Aug 31, 2011 at 5:51 AM, Jeff Ross wrote:
> Is there a setting in this or something else that I should tweak so this
> query can complete against the replica? Google turned up some threads on
> the error code associated with the error but I didn't find much else that
> seems applicable.
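One setting commonly pointed at for long read-only reports that get cancelled on a 9.0 streaming replica is max_standby_streaming_delay on the standby (an assumption here, since the actual error text is cut off above); it controls how long WAL replay will wait behind a conflicting query:
-- in the replica's postgresql.conf (a reload is enough to apply it):
--   max_standby_streaming_delay = 600s    -- or -1 to let replay wait indefinitely
-- check the active value from psql on the replica:
SHOW max_standby_streaming_delay;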
On 30/08/2011 6:59 PM, Venkat Balaji wrote:
Hello Everyone,
I have a situation here -
I am trying to restore the production online backup and recover the same.
- I had initially rsynced the data directory (excluding pg_log) and then
tarred and zipped it
Did you do that after pg_start_b
On 31/08/2011 4:51 AM, Jeff Ross wrote:
On my workstation using psql this query runs in about 1.5 minutes. I can
choose the quarter the query uses and I'm virtually positive that no
rows in that set will be updated or deleted so the error message to me
seems wrong.
AFAIK: There may be other da
On 31/08/2011 1:28 AM, Peter Warasin wrote:
Hi
thank you for answering!
On 30/08/11 18:56, Scott Ribe wrote:
But how can that happen when it is bigserial?
Postgres only assigns the value if it is not explicitly provided. Any client,
freeradius included, could be assigning ids and could have
On 31/08/2011 1:34 AM, Rich Shepard wrote:
On Tue, 30 Aug 2011, Scott Mead wrote:
In this case, it's not about YOU and your permissions, it's about the
server. The COPY command writes data as the 'postgres' operating system
user (or whichever user owns the postgres backend process).
Scott,
A
On Wed, 31 Aug 2011, Craig Ringer wrote:
Yeah, or use the client/server copy protocol via psql's \copy command.
Craig,
I was aware there was a back-slash version but did not recall when its use
is appropriate nor just how to use it.
Thanks,
Rich
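For reference, a sketch of the client-side form Craig mentions; \copy runs in psql and writes the file as the operating system user running psql, not as the postgres server process (path reused from the earlier COPY command):
\copy statdata to '/home/rshepard/projects/nevada/queenstake/stats/chem.csv' with delimiter '|' null as 'NA' csv header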
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Scott Marlowe
> Sent: Tuesday, August 30, 2011 3:52 AM
> To: Sim Zacks
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] heavy swapping, not sure why
>
>
On Tue, Aug 30, 2011 at 8:36 PM, mark wrote:
>
> Scott,
> 1000 max connections? I thought that was several times more than
> recommended these days, even for 24 or 48 core machines. Or am I living in
> the past? (I admit that my most recent runs of pgbench showed the best
> throughput at around
On Tue, Aug 30, 2011 at 8:36 PM, mark wrote:
> To the broader list, regarding troubles with kswapd: I am curious what
> others are seeing from /proc/zoneinfo for DMA pages (not dma32 or normal) -
> basically whether it sits at 1 or not. Setting swappiness to 0 did not have any
> effect for us on kswap i
Dear all,
Today I am researching how to fetch all the table names in a
particular database.
There is the \dt command, but I need to fetch it from the metadata.
I found some commands, as below:
1. SELECT table_name FROM information_schema.tables WHERE table_schema
= 'public';
2. SELECT tablename F
On Tue, Aug 30, 2011 at 11:26 PM, Adarsh Sharma
wrote:
> Dear all,
>
> Today I am researching about fetching all the table names in a particular
> database.
> There is \dt command but I need to fetch it from metadata.
> I find some commands as below :
>
> 1. SELECT table_name FROM information_sche
On Tue, Aug 30, 2011 at 11:30 PM, Scott Marlowe wrote:
>> But I need to specify a particular database & then fetch tables in that.
>
> Try this, start psql with the -E switch, then run \d and copy and edit
> the query(s) that gives you.
P.s. I think you have to connect to the database you want to
Below is the output of the \d command
SELECT n.nspname as "Schema",
c.relname as "Name",
CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i'
THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_c
On Tue, Aug 30, 2011 at 11:38 PM, Adarsh Sharma
wrote:
> Below is the output of the \d command
>
> SELECT n.nspname as "Schema",
> c.relname as "Name",
> CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
> 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "T
On Tue, Aug 30, 2011 at 11:42 PM, Scott Marlowe wrote:
> On Tue, Aug 30, 2011 at 11:38 PM, Adarsh Sharma
> wrote:
>> Below is the output of the \d command
>>
>> SELECT n.nspname as "Schema",
>> c.relname as "Name",
>> CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
>>
On 08/30/11 10:26 PM, Adarsh Sharma wrote:
Dear all,
Today I am researching how to fetch all the table names in a
particular database.
There is the \dt command, but I need to fetch it from the metadata.
I found some commands, as below:
1. SELECT table_name FROM information_schema.tables WHERE
table
I understand. So there is no way to fetch another database's table names in a
single query. The only way is:
1. Connect to demo
2. Execute the query 'SELECT n.nspname as "Schema", c.relname as
"Name", CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN
'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' TH
On Tue, Aug 30, 2011 at 11:50 PM, Adarsh Sharma
wrote:
> I understand. So there is no way to fetch another database's table names in a
> single query. The only way is:
>
> 1. Connect demo
> 2. Execute the query 'SELECT n.nspname as "Schema", c.relname as "Name",
> CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'v
pdc_uima=# select table_name from information_schema.tables where
table_schema='pdc_uima';
table_name
(0 rows)
But filtering on 'public', it gives the result:
pdc_uima=# select * from information_schema.tables where
table_schema='public';
table_catalog | table_schema |tabl
On Wed, Aug 31, 2011 at 12:10 AM, Adarsh Sharma
wrote:
> Coming back to the original problem: I have 10 databases with different names;
> you have to go into each database with the \c command to fetch its table names.
Again, in PostgreSQL databases are very separate objects. In mysql
they are closer to sc
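A small sketch of that distinction: table_schema in information_schema.tables names a schema inside the current database (for example 'public'), not a database, so listing another database's tables means connecting to it first.
-- tables in the database you are currently connected to:
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY 1, 2;

-- databases that exist (then \c into each one to list its tables):
SELECT datname FROM pg_database WHERE NOT datistemplate;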
Thanks, Craig!
Below is what I did -
1. pg_start_backup()
2. rsync the data dir
3. pg_stop_backup()
I believe the backup is valid because I was able to bring up the cluster
without any issues (of course with data loss).
+ve signs -
I am able to bring up the cluster with the online backup, but,