Now let me answer it myself. When I changed memory.limit_in_bytes to 300M, it worked. Memory before and after the SQL statement execution is:
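For reference, this is how I convert the new limit to bytes before applying it; a minimal sketch, assuming the /cgroup/memory/test1 path from my cgconfig.conf below (the write itself needs root, so it is shown commented out):

```shell
# Convert the desired limit to bytes; cgroup v1 also accepts suffixed
# values like 300M, but an explicit byte count avoids ambiguity.
LIMIT_MB=300
LIMIT_BYTES=$((LIMIT_MB * 1024 * 1024))
echo "$LIMIT_BYTES"   # 314572800

# As root, apply it to the running cgroup without restarting cgconfig:
# echo "$LIMIT_BYTES" > /cgroup/memory/test1/memory.limit_in_bytes
# cat /cgroup/memory/test1/memory.limit_in_bytes
```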
[postgres@cent6 Desktop]$ free -m
             total       used       free     shared    buffers     cached
Mem:          2006        537       1469          0         30        242
-/+ buffers/cache:        265       1741
Swap:         4031          0       4031
[postgres@cent6 Desktop]$ free -m
             total       used       free     shared    buffers     cached
Mem:          2006        687       1318          0         27        388
-/+ buffers/cache:        272       1734
Swap:         4031        129       3902
[postgres@cent6 Desktop]$

Best Regards

2013/9/9 高健 <luckyjack...@gmail.com>

> Hello:
>
> Sorry for disturbing.
> In order to make my question clear,
> I wrote this one as a separate question.
>
> If I use cgroup, I find that wget works well.
> But for postgresql, when I deal with a huge amount of data, it still reports
> an out of memory error. In fact I hope postgresql can work under a limited
> amount of memory.
>
> My test is as this:
>
> Step 1: To set a memory limit through cgroup:
> [postgres@cent6 Desktop]$ cat /etc/cgconfig.conf
> ...
> mount {
>     cpuset  = /cgroup/cpuset;
>     cpu     = /cgroup/cpu;
>     cpuacct = /cgroup/cpuacct;
>     memory  = /cgroup/memory;
>     devices = /cgroup/devices;
>     freezer = /cgroup/freezer;
>     net_cls = /cgroup/net_cls;
>     blkio   = /cgroup/blkio;
> }
>
> group test1 {
>     perm {
>         task {
>             uid = postgres;
>             gid = postgres;
>         }
>         admin {
>             uid = root;
>             gid = root;
>         }
>     }
>     memory {
>         memory.limit_in_bytes = 30M;
>     }
> }
>
> [postgres@cent6 Desktop]$
> [postgres@cent6 Desktop]$ cat /etc/cgrules.conf
> # /etc/cgrules.conf
> ...
> #<user>      <controllers>   <destination>
> postgres     memory          test1/
> #
> [postgres@cent6 Desktop]$
> [postgres@cent6 Desktop]$ chkconfig --list cgconfig
> cgconfig   0:off  1:off  2:on  3:on  4:on  5:on  6:off
> [postgres@cent6 Desktop]$ chkconfig --list cgred
> cgred      0:off  1:off  2:on  3:on  4:on  5:on  6:off
> [postgres@cent6 Desktop]$
>
> Step 2: To see memory:
> [postgres@cent6 Desktop]$ free -m
>              total       used       free     shared    buffers     cached
> Mem:          2006        384       1622          0         26        138
> -/+ buffers/cache:        219       1787
> Swap:         4031         87       3944
> [postgres@cent6 Desktop]$
>
> Step 3: To start postgresql and deal with data:
> postgres=# select count(*) from test01;
>  count
> -------
>      0
> (1 row)
>
> postgres=# insert into test01 values(generate_series(1,614400),
> repeat(chr(int4(random()*26)+65),1024));
>
> Then, in the psql client, I got:
> The connection to the server was lost. Attempting reset: Failed.
> !>
>
> In postgresql's log, I found:
> [postgres@cent6 pgsql]$
> LOG:  database system was shut down at 2013-09-09 16:20:29 CST
> LOG:  database system is ready to accept connections
> LOG:  autovacuum launcher started
> LOG:  server process (PID 2697) was terminated by signal 9: Killed
> DETAIL:  Failed process was running: insert into test01
> values(generate_series(1,614400),repeat(chr(int4(random()*26)+65),1024));
> LOG:  terminating any other active server processes
> WARNING:  terminating connection because of crash of another server process
> DETAIL:  The postmaster has commanded this server process to roll back the
> current transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
> HINT:  In a moment you should be able to reconnect to the database and
> repeat your command.
> FATAL:  the database system is in recovery mode
> LOG:  all server processes terminated; reinitializing
> LOG:  database system was interrupted; last known up at 2013-09-09 17:35:42 CST
> LOG:  database system was not properly shut down; automatic recovery in progress
> LOG:  redo starts at 1/9E807C90
> LOG:  unexpected pageaddr 1/946BE000 in log file 1, segment 159, offset 7069696
> LOG:  redo done at 1/9F6BDB50
> LOG:  database system is ready to accept connections
> LOG:  autovacuum launcher started
>
> When I look at dmesg, I can see:
>
> [postgres@cent6 Desktop]$ dmesg | grep post
> [ 2673]   500  2673    64453      200   0       0       0  postgres
> [ 2675]   500  2675    64494       79   0       0       0  postgres
> [ 2676]   500  2676    64453       75   0       0       0  postgres
> [ 2677]   500  2677    64453       77   0       0       0  postgres
> [ 2678]   500  2678    64667       80   0       0       0  postgres
> [ 2679]   500  2679    28359       72   0       0       0  postgres
> [ 2697]   500  2697    64764      100   0       0       0  postgres
> [ 2673]   500  2673    64453      200   0       0       0  postgres
> [ 2675]   500  2675    64494       79   0       0       0  postgres
> [ 2676]   500  2676    64453       75   0       0       0  postgres
> [ 2677]   500  2677    64453       77   0       0       0  postgres
> [ 2678]   500  2678    64667       80   0       0       0  postgres
> [ 2679]   500  2679    28359       72   0       0       0  postgres
> [ 2697]   500  2697    64764      100   0       0       0  postgres
> [ 2673]   500  2673    64453      208   0       0       0  postgres
> [ 2675]   500  2675    64494       79   0       0       0  postgres
> [ 2676]   500  2676    64453       98   0       0       0  postgres
> [ 2677]   500  2677    64453      782   0       0       0  postgres
> [ 2678]   500  2678    64667      133   0       0       0  postgres
> [ 2679]   500  2679    28359       86   0       0       0  postgres
> [ 2697]   500  2697    73075     3036   0       0       0  postgres
> Memory cgroup out of memory: Kill process 2697 (postgres) score 1000 or
> sacrifice child
> Killed process 2697, UID 500, (postgres) total-vm:292300kB,
> anon-rss:8432kB, file-rss:3712kB
> [postgres@cent6 Desktop]$
>
> Although I encountered an out of memory error, it is a cgroup out of memory
> kill, so cgroup really did influence postgresql.
> But not in the way I hoped: I want PG to work under the limit, not to be
> killed by the OOM killer.
>
> My above sql statement quickly failed.
> That is: a PG server process was killed by the OOM killer because it was
> going to use more memory than the 30M limited by cgroup.
>
> But if I run wget, wget works well under the same memory limit,
> such as: wget
> http://centos.arcticnetwork.ca/6.4/isos/x86_64/CentOS-6.4-x86_64-LiveCD.iso
>
> So my question is:
> When under cgroup's memory limit, why does PG crash but wget does not?
>
> Best Regards
>
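A rough back-of-envelope shows why the 30M limit was hit so quickly in my test: each inserted row carries a 1024-character string, so the statement generates roughly 600 MiB of row data, and cgroup v1's memory.limit_in_bytes accounts for the page cache those dirty pages occupy, not just the backend's private heap:

```shell
# Approximate volume of row data produced by the failing insert:
ROWS=614400
PAYLOAD=1024                          # bytes of repeated characters per row
TOTAL_MIB=$((ROWS * PAYLOAD / 1024 / 1024))
echo "${TOTAL_MIB} MiB"               # 600 MiB, against a 30 MiB limit
```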
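One way to investigate the wget difference would be to watch the cgroup's own accounting while each program runs: in cgroup v1, memory.stat under the group directory splits usage into page cache (which the kernel can drop or write back when the limit is hit, and which is most of what wget generates) and anonymous rss (which can only be swapped out). The sketch below parses a made-up sample of that file, since the real values depend on the running workload; only the file format is real, the numbers are invented:

```shell
# Made-up memory.stat sample in cgroup v1 format; real data would come from
# /cgroup/memory/test1/memory.stat while wget or postgres is running.
cat > /tmp/memory.stat.sample <<'EOF'
cache 27262976
rss 2097152
mapped_file 0
EOF

# cache pages are reclaimable when the limit is hit; rss pages have to be
# swapped, which is one reason the two workloads behave so differently.
awk '$1 == "cache" || $1 == "rss" { printf "%s %d MiB\n", $1, $2 / 1048576 }' \
    /tmp/memory.stat.sample
```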