Thank you all very much!
I have got it now.
Best regards
2013/10/10 Stuart Bishop
> On Wed, Oct 9, 2013 at 9:58 AM, 高健 wrote:
>
> > The most important part is:
> >
> > 2013-09-22 09:52:47 JST[28297][51d1fbcb.6e89-2][0][XX000]FATAL: Could
> not
> > receive data f
py
archive wal log, isn't it the old warm standby (file shipping)?
Best Regards
jian gao
2013/10/9 Adrian Klaver
> On 10/08/2013 07:58 PM, 高健 wrote:
>
>> Hello:
>>
>> My customer encountered some connection timeout, while using one
>> primary-one standby strea
egards
jian gao
2013/10/9 Jeff Janes
> On Tue, Oct 8, 2013 at 1:54 AM, 高健 wrote:
>
>> Hello:
>>
>> Sorry for disturbing:
>>
>> I have one question about checkpoints. That is: can a checkpoint be
>> parallel?
>>
>
> PostgreSQL does not curren
Hello:
My customer encountered some connection timeouts while using one-primary,
one-standby streaming replication.
The original log is in Japanese; because there are no error codes like
Oracle's ORA-xxx,
I tried to translate the Japanese information into English, but the
translation may not be correct English.
Hello
My customer asked me about the relationship between PostgreSQL's following
processes:
wal writer process
writer process
checkpoint process
Currently my understanding is:
If I execute some DML, then first the related operation or data will be
written to the WAL buffer.
Secondly, the related dat
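The ordering described above (changes go to the WAL buffer first, and data pages are flushed only later) is the write-ahead rule. Here is a toy Python sketch of that rule; it is illustrative only, not PostgreSQL code, and every name in it is hypothetical:

```python
# Toy model of the write-ahead logging rule (illustrative only, not
# PostgreSQL code): a dirty data page may be flushed to disk only after
# the WAL describing its last change has itself been flushed.

class ToyWal:
    def __init__(self):
        self.next_lsn = 1        # hypothetical log sequence number counter
        self.flushed_lsn = 0     # highest LSN durably on disk

    def append(self, record):
        """Write a record to the WAL buffer; return its LSN."""
        lsn = self.next_lsn
        self.next_lsn += 1
        return lsn

    def flush(self):
        """The wal writer flushes the WAL buffer to disk."""
        self.flushed_lsn = self.next_lsn - 1

def can_flush_page(wal, page_lsn):
    """A dirty page may be written only if WAL is flushed up to the
    page's last-modified LSN."""
    return wal.flushed_lsn >= page_lsn

wal = ToyWal()
page_lsn = wal.append("UPDATE tuple")   # DML goes to the WAL buffer first
before = can_flush_page(wal, page_lsn)  # WAL not yet on disk
wal.flush()                             # wal writer flushes the buffer
after = can_flush_page(wal, page_lsn)   # now the page may be written
```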
Hello:
Sorry to disturb you.
I have one question about checkpoints. That is: can a checkpoint be
parallel?
It is said that a checkpoint will be activated when either condition holds:
1) After the last checkpoint, checkpoint_timeout seconds have passed.
2) Since the last checkpoint, the WAL written has exceeded checkpoint_segm
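The two activation conditions can be sketched as a small predicate. This is an illustrative Python sketch, assuming condition 2 means "checkpoint_segments WAL segments have been filled since the last checkpoint"; the defaults of 300 seconds and 3 segments match the 9.2 defaults for checkpoint_timeout and checkpoint_segments, but the function itself is hypothetical:

```python
def checkpoint_due(seconds_since_last, wal_segments_filled,
                   checkpoint_timeout=300, checkpoint_segments=3):
    """A checkpoint is requested when either condition holds:
    1) checkpoint_timeout seconds have passed since the last checkpoint, or
    2) checkpoint_segments WAL segments have been filled since it.
    Illustrative sketch only; defaults mirror the 9.2 GUC defaults."""
    return (seconds_since_last >= checkpoint_timeout
            or wal_segments_filled >= checkpoint_segments)

checkpoint_due(100, 1)   # neither threshold reached
checkpoint_due(301, 0)   # timeout exceeded
checkpoint_due(10, 3)    # enough WAL filled
```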
on, Oct 7, 2013 at 12:02 PM, 高健 wrote:
>>
>>> Hello :
>>>
>>>
>>> I found that for PG9.2.4, there is parameter max_wal_senders,
>>>
>>> But there is no parameter of max_wal_receivers.
>>>
>>>
>>>
>> max_
Hello :
I found that for PG 9.2.4 there is a parameter max_wal_senders,
but there is no parameter max_wal_receivers.
Is that to say that if max_wal_senders is 3,
then 3 WAL senders will be activated,
and on the standby server there will be 3 "receivers" as
counterparts?
Th
388
-/+ buffers/cache:272 1734
Swap: 4031129 3902
[postgres@cent6 Desktop]$
Best Regards
2013/9/9 高健
> Hello:
>
> Sorry for disturbing,
> In order to make my question clear,
> I wrote this one as a separate question.
>
>
> If u
Hello:
Sorry to disturb you.
In order to make my question clear,
I wrote this one as a separate question.
Using cgroups, I found that wget works well under the limit.
But for PostgreSQL, when I deal with a huge amount of data, it still reports
an out-of-memory error. In fact I hope PostgreSQL can work under a limit a
=root;
}
} memory {
memory.limit_in_bytes=500M;
}
}
[root@cent6 Desktop]#
[root@cent6 Desktop]# service cgconfig status
Running
[root@cent6 Desktop]#
When I start postgres and run the above SQL statement, it still consumes too
much memory, as if cgroups does not work.
Best Regards
2013/9
Thanks, I'll consider it carefully.
Best Regards
2013/9/3 Jeff Janes
> On Sun, Sep 1, 2013 at 6:25 PM, 高健 wrote:
> >>To spare memory, you would want to use something like:
> >
> >>insert into test01 select generate_series,
> >>repeat(chr(int4(random
memory. Is the
ulimit command a good idea for PG?
Best Regards
2013/9/1 Jeff Janes
> On Fri, Aug 30, 2013 at 2:10 AM, 高健 wrote:
> >
> >
> > postgres=# insert into test01 values(generate_series(1,2457600),repeat(
> > chr(int4(random()*26)+65),1024));
>
> The con
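Back-of-the-envelope arithmetic for the INSERT quoted above shows why it is memory-hungry if the whole generated row set were materialized at once. The figures are taken from the statement itself (2,457,600 rows, each with a 1024-character repeat() value); all overheads are ignored:

```python
# Rough payload size of the generated row set from the INSERT above.
# Figures come from the statement: generate_series(1, 2457600) rows,
# each carrying repeat(..., 1024), i.e. 1024 characters of text.

rows = 2_457_600
text_bytes = 1024                   # repeat(..., 1024) payload per row
total_bytes = rows * text_bytes     # text payload alone, overheads ignored
total_gib = total_bytes / 1024**3   # roughly 2.3 GiB
```

Materializing a couple of gigabytes in one evaluation easily exceeds a modest machine's free memory, which is consistent with the out-of-memory reports above.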
Hello:
Thank you all.
I have understood this.
Best Regards
2013/8/31 Kevin Grittner
> 高健 wrote:
>
> > So I think that in a mission critical environment, it is not a
> > good choice to turn full_page_writes on.
>
> If full_page_writes is off, your database can be cor
Hello:
I have done the following experiment to test:
PG's behavior when dealing with data which is bigger in size than the total
memory of the whole OS.
The result is:
PG says:
WARNING: terminating connection because of crash of another server process
DETAIL:
Hello
Thanks for replying.
It is really a complicated concept.
So I think that in a mission-critical environment, it is not a good choice
to turn full_page_writes on.
Best Regards
2013/8/27 Jeff Janes
> On Sun, Aug 25, 2013 at 7:57 PM, 高健 wrote:
> > Hi :
> >
> > Th
other data will still remain in backend process's memory.
Is my understanding right?
Best Regards
2013/8/27 Jeff Janes
> On Sun, Aug 25, 2013 at 11:08 PM, 高健 wrote:
> > Hello:
> >
> > Sorry for disturbing.
> >
> > I am now encountering a serious problem: mem
Hello:
Sorry to disturb you.
I am now encountering a serious problem: memory is not enough.
My customer reported that when they run a program, they found that total
memory and disk I/O usage both reached the threshold value (80%).
That program is written in Java.
It uses JDBC to pull out data
content of each disk page to WAL"?
The id=1 val=1 data is "very old", and has not even been read into memory.
Why should it go disk -> memory -> WAL via the wal writer?
Maybe I have many misunderstandings about it. Thanks for replying!
Best Regards
2013/8/23 Alvaro Herrera
>
Hello:
Sorry to disturb you.
I have one question: will a checkpoint cause WAL writes to happen?
I found the following info at:
http://www.postgresql.org/docs/9.2/static/wal-configuration.html
...
Checkpoints are fairly expensive, first because they require writing out
all currently dirty buffe
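The "fairly expensive" part of the quoted documentation can be made concrete with a rough estimate. This is an illustrative sketch: the 8 kB page size is PostgreSQL's default block size, but the dirty-buffer count below is entirely hypothetical:

```python
# Rough I/O volume of a checkpoint: every currently dirty shared buffer
# (one 8 kB page each) must be written out. The dirty-buffer count is
# a made-up example; only the page size matches the PostgreSQL default.

page_size = 8192                 # default PostgreSQL block size, bytes
dirty_buffers = 100_000          # hypothetical number of dirty pages
checkpoint_write_bytes = dirty_buffers * page_size
checkpoint_write_mib = checkpoint_write_bytes / 1024**2   # ~781 MiB
```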
Hi:
Thank you all for kindly replying.
I think that I need this: pg_stat_user_tables.n_tup_hot_upd
And Adrian's information is pretty good material for me to understand
the internals.
Best regards
2013/8/22 Adrian Klaver
> On 08/21/2013 07:20 PM, 高健 wrote:
>
>>
Hi:
I have heard that Heap-Only Tuples were introduced in 8.3,
and I am searching for information about them.
How can I get detailed information about HOT?
For example:
for a given table, how many tuples are heap-only tuples, and how many
are not?
And also, are there any options which can influence
Thank you !
Best Regards
2013/7/9 Adrian.Vondendriesch
> Hello,
>
> Am 09.07.2013 11:29, schrieb 高健:
> > Hello:
> >
> >
> >
> > I have found the following wiki about autonomous transaction:
> >
> > https://wiki.postgresql.org/wiki/Autonomous
Hello:
I have found the following wiki about autonomous transaction:
https://wiki.postgresql.org/wiki/Autonomous_subtransactions
But when I tested it, I found the following error:
pgsql=# BEGIN;
BEGIN
pgsql=# INSERT INTO tab01 VALUES (1);
INSERT 0 1
pgsql=# BEGIN SUBTRANSACTION;
ERROR:
result = CommandIdGetDatum(HeapTupleHeaderGetRawCommandId(tup->t_data));
break;
code end---
2013/7/2 高健
> Hello:
>
>
> I have question for cmin and cmax.
>
> It is said:
>
> cmin is: The command identifier (starting at zero) within the
> inserti
Hello:
I have a question about cmin and cmax.
It is said:
cmin is: The command identifier (starting at zero) within the
inserting transaction.
cmax is: The command identifier within the deleting transaction, or
zero.
http://www.postgresql.org/docs/9.1/static/ddl-system-columns.html
s from proc->xmin.
And I heard that when a process is working, it gets a transaction id from
the system, then uses it as xmin when inserting a record.
Why can proc->xmin be 0? Is it a bug?
2013/6/25 高健
> Hello:
>
> Sorry for disturbing again.
>
> I traced source code of P
xecuted,
then, because VirtualXactLockTableWait(old_snapshots[i]) is running, index
creation is blocked.
For similar SQL statements the source code takes different paths, so I
think there might be something wrong in the source code.
2013/6/21 高健
> Thanks Jeff
>
> But What I can't understand is:
do it on PG, it really confused me...
sincerely yours
Jian
2013/6/21 Jeff Janes
> On Thu, Jun 20, 2013 at 1:27 AM, 高健 wrote:
>
>> Hello:
>>
>>
>>
>> I have question about PG's "create index concurrently". I think it is a
>> bug perhaps.
>>
Hello:
I have a question about PG's "create index concurrently"; I think it is
perhaps a bug.
I made two tables, tab01 and tab02; they have no relationship.
I thought that "create index concurrently" on tab02 would not be influenced
by a transaction on tab01.
But the result differs:
My first prog
mation of all kinds of paths.
So even when I supply a different parameter, it just executes the same
finished plan.
2013/6/19 Jeff Janes
> On Tue, Jun 18, 2013 at 2:09 AM, 高健 wrote:
>
>
>
>> postgres=# explain execute s(2);
>&
Hello:
I have some questions about parameterized paths.
I have heard that they are a new feature in PG 9.2.
I dug for information on parameterized paths but found little (maybe my
method is not right).
My FIRST question is:
What is a "parameterized path" for?
Is the following a correct example
couple of times.
Thank you. I think it is an exciting point for PG.
It makes PG "clever" in choosing plans for frequently executed SQL.
Thanks!
2013/6/18 Albe Laurenz
> 高健 wrote:
> > I change my Java program by adding the following:
> >
> > org.postgresql.PGS
tatement as prepared, then parsed it and held the plan.
-
Is my understanding right?
Thanks
2013/6/17 Albe Laurenz
> 高健 wrote:
> > I have one question about prepared statement.
Hello:
I have one question about prepared statements.
I use Java via JDBC, then send a prepared statement to execute.
I thought that the pg_prepared_statements view would have one record after
my execution.
But I can't find it.
Does JDBC's prepared statement differ from SQL executed by prepar
2013/6/14 Stephen Frost
> * 高健 (luckyjack...@gmail.com) wrote:
> > So I can draw a conclusion:
> >
> > Prepared statement is only for use in the same session at which it has
> > been executed.
>
> Prepared statements are session-local.
>
> > It can not be
Hello everybody:
Sorry to disturb you.
I experimented with PostgreSQL's prepared statements via psql and have one
question:
In terminal A:
I prepared:
postgres=# prepare test(int) AS
postgres-# select * from customers c where c.cust_id = $1;
PREPARE
postgres=#
Then run:
postgres=# e
n,
the < if (hashclauses) > condition is false, so there is no chance to
try a hash join path.
That is:
when I use a where condition such as ,
PostgreSQL is clever enough to know it is better to do a seqscan and
filter?
2013/6/13 Tom Lane
> Stephen Frost writes:
Hi :
Sorry to disturb you. I don't know if it is OK to put this question here.
I want to learn more about the hash join's cost calculation.
And I found the following function in PostgreSQL 9.2.1, where the hash join
cost is calculated.
But what confused me is a reduction calculation:
qp_qual_cos
Hi :
Sorry for replying late.
I tried to take the commit statement out of the function, and it works
well.
Thank you!
2013/6/10 Kevin Grittner
> 高健 wrote:
>
> > CREATE OR REPLACE Function ...
>
> > BEGIN
> > BEGIN
>
> > UPDATE ...
> > C
Hello:
Would somebody please kindly tell me why my function runs but can't update
the table via a cursor?
I have a table like this:
create table course_tbl(course_number integer, course_name varchar(4),
instructor varchar(10));
insert into course_tbl values (1,'','TOM'), (2,'','JACK');
Hello:
I created a table, and found that the file created for that table is about
10 times the size I estimated!
The following is what I did:
postgres=# create table tst01(id integer);
CREATE TABLE
postgres=#
postgres=# select oid from pg_class where relname='tst01';
oid
---
16384
(1 row)
Then
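Per-tuple overhead largely explains a roughly 10x gap between a naive 4-bytes-per-integer estimate and the file on disk. The sketch below uses the usual PostgreSQL heap layout figures (23-byte tuple header, 8-byte MAXALIGN, 4-byte line pointer); exact padding can vary by platform, so treat it as an estimate, not a definitive layout:

```python
# Why an integer-only table is ~10x bigger than rows * 4 bytes:
# each heap tuple carries a header plus alignment padding, and each
# occupies a 4-byte line pointer in its page. Figures follow the usual
# PostgreSQL heap layout; exact padding is platform-dependent.

def maxalign(n, align=8):
    """Round n up to the next multiple of the alignment boundary."""
    return (n + align - 1) // align * align

tuple_header = 23                      # HeapTupleHeaderData, padded to 24
data = 4                               # one integer column
tuple_bytes = maxalign(maxalign(tuple_header) + data)   # 32 bytes
line_pointer = 4                       # ItemIdData slot in the page
per_row = tuple_bytes + line_pointer   # 36 bytes actually consumed
ratio = per_row / data                 # 9.0, close to the observed ~10x
```

Page headers and free space inside each 8 kB block push the real ratio a little higher still.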
g-autovacuum.html
>
>
> 2013/5/24 高健
>
>> Hello all:
>>
>> I found that during postgresql running, there are so many processes
>> being created and then died.
>> I am interested in the reason.
>>
>> Here is the detail:
>> I installed from p
Hello all:
I found that while PostgreSQL is running, many processes are being
created and then dying.
I am interested in the reason.
Here is the detail:
I installed from postgresql-9.2.1.tar.bz2.
I put some debug code in fd.c's PathNameOpenFile function:
fprintf(stderr,"+++While Calling
Scan using ctest02_id_idx on ctest02 ptest
(cost=0.00..13.75 rows=1 width=9)
Index Cond: (id = 600)
(14 rows)
postgres=#
2012/11/12 Craig Ringer
> On 11/12/2012 10:39 AM, 高健 wrote:
> > The selection used where condition for every partition table, w
Hi all:
I made partition tables:
postgres=# create table ptest(id integer, name varchar(20));
CREATE TABLE
postgres=# create table ctest01(CHECK(id<500)) inherits (ptest);
CREATE TABLE
postgres=# create table ctest02(CHECK(id>=500)) inherits (ptest);
CREATE TABLE
postgres=#
postgres=# crea
think.
I hope in the future the architecture of PostgreSQL can keep committed data
and uncommitted data apart,
or even put them on separate physical disks. That would help improve
performance, I think.
Jian Gao
2012/11/9 Albe Laurenz
> 高健 wrote:
> > I have one question about the
Hi all:
I have one question about the visibility of the explain plan.
Firstly, I was inserting data into a table. I used: [ insert into
ptest select * from test02; ]
And the test02 table has 10,000,000 records. And ptest is a parent table,
which has two distribution child tables --- cte
012 at 11:17 PM, 高健 wrote:
> > Hi all:
> >
> >
> >
> > I want to see the explain plan for a simple query. My question is :
> How
> > is the cost calculated?
> >
> >
> >
> > The cost parameter is:
> >
> >
>
Hi Jeff
Thank you for your reply.
I will try to learn about effective_cache_size .
Jian gao
2012/11/9 Jeff Janes
> On Wed, Nov 7, 2012 at 11:41 PM, 高健 wrote:
>
>> Hi all:
>>
>>
>>
>> What confused me is that: When I select data using order by clause, I
Hi all:
What confused me is that when I select data using an ORDER BY clause, I get
the following execution plan:
postgres=# set session enable_indexscan=true;
SET
postgres=# explain SELECT * FROM pg_proc ORDER BY oid;
QUERY PLAN
---
Hi all:
I want to see the explain plan for a simple query. My question is: how
is the cost calculated?
The cost parameters are:
random_page_cost = 4
seq_page_cost = 1
cpu_tuple_cost = 0.01
cpu_operator_cost = 0.0025
And the table and its index physica
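For a plain sequential scan, the cost formula is simple enough to reproduce by hand from the parameters listed above. This is an illustrative sketch: the GUC values are the ones quoted in the question, but the table's page and row counts below are hypothetical:

```python
# Planner cost of a sequential scan, reproduced by hand:
#   total = pages * seq_page_cost + rows * cpu_tuple_cost
# GUC values are the ones listed above; the table size is hypothetical.

seq_page_cost = 1.0
cpu_tuple_cost = 0.01

def seqscan_cost(pages, rows):
    """Disk cost for reading every page plus CPU cost per tuple."""
    return pages * seq_page_cost + rows * cpu_tuple_cost

cost = seqscan_cost(pages=100, rows=10_000)   # 100*1 + 10000*0.01 = 200.0
```

Comparing this number with the `cost=` figure EXPLAIN prints for a Seq Scan is a good way to check one's understanding of the parameters.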
Hi Tom
At first I thought that the database parsed my explain statement,
so the pre-compiled execution plan would be re-used, which made the
statement's second run quick.
I think that what you said is right.
Thank you
2012/11/7 Tom Lane
> 高健 writes:
> > It might not be
Hi all:
I have one question about cache clearing.
If I use the following soon after database startup (or the first time I use
it):
postgres=# explain analyze select id,deptno from gaotab where id=200;
QUERY PLAN
Hi all:
I am trying to understand when the bgwriter writes.
I thought that bgwriter.c's call chain is:
BackgroundWriterMain -> BgBufferSync -> SyncOneBuffer
And in my postgresql.conf, bgwriter_delay = 200ms.
I did the following:
postgres=# select * from testtab;
id | val
----+-----
Hi:
I am now reading bgwriter.c's source code, and found:
pqsignal(SIGHUP, BgSigHupHandler); /* set flag to read config file */
So I tried to use kill -s SIGHUP to confirm it.
I found that if I directly send SIGHUP to the bgwriter, it has no response.
If I send SIGHUP to its parent, postgres, I
In src/backend/postmaster/bgwriter.c, I can find the following source
code (PostgreSQL 9.2):
/*
 * GUC parameters
 */
int BgWriterDelay = 200;
...
rc = WaitLatch(&MyProc->procLatch,
               WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
               BgWriterDelay /* ms */ );
...
if (rc == WL_TIMEOUT && can_hi
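The truncated condition leads into the bgwriter's hibernation behavior: when the latch wait times out and there is nothing to do, the process sleeps much longer than BgWriterDelay. Here is a toy Python sketch of that sleep decision; the names and the multiplier are hypothetical, only loosely modeled on the 9.2 bgwriter.c logic:

```python
# Toy sketch of the bgwriter's sleep decision: normally it naps for
# BgWriterDelay milliseconds, but after an idle timeout it "hibernates"
# with a much longer sleep. The multiplier and names below are
# hypothetical, loosely modeled on the 9.2 bgwriter.c logic.

BGWRITER_DELAY_MS = 200          # matches the BgWriterDelay default
HIBERNATE_FACTOR = 50            # hypothetical idle-sleep multiplier

def next_sleep_ms(timed_out, can_hibernate):
    """Pick the next wait duration for the latch loop."""
    if timed_out and can_hibernate:
        return BGWRITER_DELAY_MS * HIBERNATE_FACTOR   # long idle sleep
    return BGWRITER_DELAY_MS                          # normal cadence

next_sleep_ms(True, True)    # long sleep while idle
next_sleep_ms(True, False)   # normal delay when work may be pending
```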
On /src/include/storage/proc.h:
I saw the following line:
extern PGDLLIMPORT PGPROC *MyProc;
I want to know why PGDLLIMPORT is used here?
Does it simply mean: extern PGPROC *MyProc; ?
I am new to PostgreSQL's SPI (Server Programming Interface).
I can understand PostgreSQL's examples of using SPI. But I am not sure about
SPI_prepare's parameters.
void * SPI_prepare(const char * command, int nargs, Oid * argtypes)
Can somebody kindly give an example of using SPI_prepare?