Re: [HACKERS] [PERFORM] unusual performance for vac following 8.2 upgrade

2007-01-11 Thread Kim
Our pgstat.stat file is 40M on 8.2; on 8.1 it was 33M. Our schema size
hasn't grown *that* much in the two weeks since we upgraded


I'm not sure if this sheds any more light on the situation, but in 
scanning down through the process output from truss, it looks like the 
first section of output was a large chunk of reads on pgstat.stat, 
followed by a larger chunk of reads on the global directory and 
directories under base. This whole section probably went on for a good 
6-7 minutes, though I would say the reads on pgstat.stat likely finished 
within a couple of minutes. Following this there was a phase where it 
did a lot of seeks and reads on files under pg_clog, and it was while 
doing this (or perhaps after it had finished with the clogs) that it 
dropped into the send()/SIGUSR1 loop that ran for another several minutes.


Kim


Tom Lane wrote:


I wrote:
 


(2) Reconsider whether last-vacuum-time should be sent to the collector
unconditionally.
   



Actually, now that I look, the collector already contains this logic:

   /*
    * Don't create either the database or table entry if it doesn't already
    * exist.  This avoids bloating the stats with entries for stuff that is
    * only touched by vacuum and not by live operations.
    */

and ditto for analyze messages.  So my idea that the addition of
last-vac-time was causing an increase in the statistics file size
compared to 8.1 seems wrong.

How large is your $PGDATA/global/pgstat.stat file, anyway?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend

 



Re: [HACKERS] [PERFORM] unusual performance for vac following 8.2 upgrade

2007-01-17 Thread Kim

Hello again Tom,

We have our upgrade to 8.2.1 scheduled for this weekend, and we noticed 
your message regarding the vacuum patch being applied to 8.2 and 
back-patched. I expect I know the answer to this next question :) but I 
was wondering if the patch referenced below has also been bundled into 
the normal source download of 8.2.1, or if we would still need to apply 
it manually?


- Fix a performance problem in databases with large numbers of tables
 (or other types of pg_class entry): the function
 pgstat_vacuum_tabstat, invoked during VACUUM startup, had runtime
 proportional to the number of stats table entries times the number
 of pg_class rows; in other words O(N^2) if the stats collector's
 information is reasonably complete.  Replace list searching with a
 hash table to bring it back to O(N) behavior.  Per report from kim
 at myemma.com.  Back-patch as far as 8.1; 8.0 and before use
 different coding here.

Thanks,
Kim


Tom Lane wrote:


I wrote:
 


What I think we need to do about this is
(1) fix pgstat_vacuum_tabstats to have non-O(N^2) behavior; I'm thinking
of using a hash table for the OIDs instead of a linear list.  Should be
a pretty small change; I'll work on it today.
   



I've applied the attached patch to 8.2 to do the above.  Please give it
a try and see how much it helps for you.  Some limited testing here
confirms a noticeable improvement in VACUUM startup time at 10000
tables, and of course it should be 100X worse with 100000 tables.

I am still confused why you didn't see the problem in 8.1, though.
This code is just about exactly the same in 8.1.  Maybe you changed
your stats collector settings when moving to 8.2?

regards, tom lane

 




Index: pgstat.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/postmaster/pgstat.c,v
retrieving revision 1.140
diff -c -r1.140 pgstat.c
*** pgstat.c	21 Nov 2006 20:59:52 -0000	1.140
--- pgstat.c	11 Jan 2007 22:32:30 -0000
***************
*** 159,164 ****
--- 159,165 ----
 static void pgstat_read_statsfile(HTAB **dbhash, Oid onlydb);
 static void backend_read_statsfile(void);
 static void pgstat_read_current_status(void);
+ static HTAB *pgstat_collect_oids(Oid catalogid);
 
 static void pgstat_setheader(PgStat_MsgHdr *hdr, StatMsgType mtype);

 static void pgstat_send(void *msg, int len);
***************
*** 657,666 ****
 void
 pgstat_vacuum_tabstat(void)
 {
!   List   *oidlist;
!   Relationrel;
!   HeapScanDesc scan;
!   HeapTuple   tup;
PgStat_MsgTabpurge msg;
HASH_SEQ_STATUS hstat;
PgStat_StatDBEntry *dbentry;
--- 658,664 ----
 void
 pgstat_vacuum_tabstat(void)
 {
!   HTAB   *htab;
PgStat_MsgTabpurge msg;
HASH_SEQ_STATUS hstat;
PgStat_StatDBEntry *dbentry;
***************
*** 679,693 ****
/*
 * Read pg_database and make a list of OIDs of all existing databases
 */
!   oidlist = NIL;
!   rel = heap_open(DatabaseRelationId, AccessShareLock);
!   scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
!   while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)
!   {
!   oidlist = lappend_oid(oidlist, HeapTupleGetOid(tup));
!   }
!   heap_endscan(scan);
!   heap_close(rel, AccessShareLock);
 
 	/*

 * Search the database hash table for dead databases and tell the
--- 677,683 ----
/*
 * Read pg_database and make a list of OIDs of all existing databases
 */
!   htab = pgstat_collect_oids(DatabaseRelationId);
 
 	/*

 * Search the database hash table for dead databases and tell the
***************
*** 698,709 ****
{
Oid dbid = dbentry->databaseid;
 
! 		if (!list_member_oid(oidlist, dbid))

pgstat_drop_database(dbid);
}
 
 	/* Clean up */

!   list_free(oidlist);
 
 	/*

 * Lookup our own database entry; if not found, nothing more to do.
--- 688,701 ----
{
Oid dbid = dbentry->databaseid;
 
! 		CHECK_FOR_INTERRUPTS();
! 
! 		if (hash_search(htab, (void *) &dbid, HASH_FIND, NULL) == NULL)

pgstat_drop_database(dbid);
}
 
 	/* Clean up */

!   hash_destroy(htab);
 
 	/*

 * Lookup our own database entry; if not found, nothing more to do.
***************
*** 717,731 ****
/*
 * Similarly to above, make a list of all known relations in this DB.
 */
!   oidlist = NIL;
!   rel = heap_open(RelationRelationId, AccessShareLock);
!   scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
!   while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)
!   {
!   oidlist = lappend_oid(oidlist, HeapTupleGetOid(tup));

[HACKERS] Call to built-in operator from new operator

2005-04-18 Thread kim
Greetings,

I don't really know if this is the correct place to ask this question; if
not, please direct me to the correct mailing list.

I'm trying to develop a new operator for PostgreSQL (actually for
TelegraphCQ, which is an extension of PostgreSQL). Part of the operator's
procedure is the @-operator. So now my question is: how do I call the
on_pb function from inside my own function? on_pb takes the argument
'PG_FUNCTION_ARGS', which is defined in fmgr.h as 'FunctionCallInfo
fcinfo', which in turn is a pointer to 'struct FunctionCallInfoData'. So
my question boils down to:

What do I put into this struct to call 'on_pb' with two arguments from
inside my own function?


Further, is there a way to access data in tables other than those given
as arguments to the function? And how?

Sincerely
Kim Bille
Department of Computer Science
Aalborg University
Denmark

-- 
"Minds are like parachutes --- they only work when open"

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] JPUG wants to have a copyright notice on the translated doc

2016-03-06 Thread Ioseph Kim
Thanks too!

Korean User Group has been translating PGDoc since 2013.

I'll append Copyright in PGDoc.kr too. 
http://postgresql.kr/docs/current/

Regards, Ioseph.


2016-03-04 (금), 17:55 -0800, Joshua D. Drake:
> On 03/04/2016 05:39 PM, Tatsuo Ishii wrote:
> > JPUG (Japan PostgreSQL Users Group) would like to add a copyright
> > notice to the Japanese translated docs.
> >
> > http://www.postgresql.jp/document/9.5/html/
> >
> > Currently "Copyright 1996-2016 The PostgreSQL Global Development
> > Group" is shown on the translated doc (of course in Japanese). What
> > JPUG wants is to add something like "Copyright 2016 Japan
> > PostgreSQL Users Group" to this.
> 
> As I understand it the translation would be copyrighted by the people 
> that do the translation so it is perfectly reasonable to have JPUG hold 
> the copyright for the .jp translation.
> 
> >
> > The reason for this is, "Copyright 1996-2016 The PostgreSQL Global
> > Development Group" may not be effective under Japanese law,
> > because "The PostgreSQL Global Development Group" is not a valid
> > entity to be a copyright holder according to Japan's laws. To prevent
> > abuses of the translated docs, JPUG thinks that JPUG needs to add
> > the copyright notice by JPUG to the Japanese translated docs (note
> > that JPUG is a registered non-profit organization and can be a
> > copyright holder).
> 
> Considering they are BSD licensed, I am not sure what abuses could be taken?
> 
> Sincerely,
> 
> JD
> 
> -- 
> Command Prompt, Inc.  http://the.postgres.company/
>  +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Everyone appreciates your honesty, until you are honest with them.
> 
> 




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JPUG wants to have a copyright notice on the translated doc

2016-03-06 Thread Ioseph Kim
Thanks.

The Korean documentation is currently being translated. When it is done,
I will announce it to the official site. Currently, translation progress
is about 50%. This work depends only on the voluntary support of 6
people, so translation is very slow. :)

I also hope that many Korean documents appear on the official site.

Regards, Ioseph

2016-03-07 (월), 11:19 +0900, Tatsuo Ishii:
> You'd better ask to have the link below placed at
> http://www.postgresql.org/docs/.  Currently only French and Japanese
> are listed. I'm sure there are more local translations.
> 
> Not only people living in Korea are interested in Korean translated
> docs; there are many Korean-speaking people in the world. The same
> can be said of other languages, of course.
> 
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
> 
> > Thanks too!
> > 
> > Korean User Group has been translating PGDoc since 2013.
> > 
> > I'll append Copyright in PGDoc.kr too. 
> > http://postgresql.kr/docs/current/
> > 
> > Regards, Ioseph.
> > 
> > 
> > 2016-03-04 (금), 17:55 -0800, Joshua D. Drake:
> >> On 03/04/2016 05:39 PM, Tatsuo Ishii wrote:
> >> > JPUG (Japan PostgreSQL Users Group) would like to add a copyright
> >> > notice to the Japanese translated docs.
> >> >
> >> > http://www.postgresql.jp/document/9.5/html/
> >> >
> >> > Currently "Copyright 1996-2016 The PostgreSQL Global Development
> >> > Group" is shown on the translated doc (of course in Japanese). What
> >> > JPUG wants is to add something like "Copyright 2016 Japan
> >> > PostgreSQL Users Group" to this.
> >> 
> >> As I understand it the translation would be copyrighted by the people 
> >> that do the translation so it is perfectly reasonable to have JPUG hold 
> >> the copyright for the .jp translation.
> >> 
> >> >
> >> > The reason for this is, "Copyright 1996-2016 The PostgreSQL Global
> >> > Development Group" may not be effective under Japanese law,
> >> > because "The PostgreSQL Global Development Group" is not a valid
> >> > entity to be a copyright holder according to Japan's laws. To prevent
> >> > abuses of the translated docs, JPUG thinks that JPUG needs to add
> >> > the copyright notice by JPUG to the Japanese translated docs (note
> >> > that JPUG is a registered non-profit organization and can be a
> >> > copyright holder).
> >> 
> >> Considering they are BSD licensed, I am not sure what abuses could be 
> >> taken?
> >> 
> >> Sincerely,
> >> 
> >> JD
> >> 
> >> -- 
> >> Command Prompt, Inc.  http://the.postgres.company/
> >>  +1-503-667-4564
> >> PostgreSQL Centered full stack support, consulting and development.
> >> Everyone appreciates your honesty, until you are honest with them.
> >> 
> >> 
> > 
> > 






[HACKERS] propose: detail binding error log

2016-03-14 Thread Ioseph Kim
Hi, hackers.

I got an error message while using PostgreSQL:

"ERROR:  42804: column "a" is of type boolean but expression is of type
integer at character 25
LOCATION:  transformAssignedExpr, parse_target.c:529"

This is a Java JDBC binding error: the column type is boolean but the
bind variable is an integer.

I want to see the value of the bind variable in the server log.

java code:
pstmt = connection.prepareStatement("insert into test values (?)");
pstmt.setInt(1, 1);

I could not get that value into the server log by changing any server
configuration; this case can only be debugged on the client.

So, I propose that the erroneous value of the bind variable be displayed
in the server log.

At parse_target.c:529, the input parameter node does not contain the
value that the client bound to the column.

I want a more detailed error log.

Regards, Ioseph Kim



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] propose: detail binding error log

2016-03-14 Thread Ioseph Kim
thanks for reply.

log_statement is already set to 'all'. I also set log_min_messages =
debug5, log_error_verbosity = verbose, and debug_print_parse = on,
but I could not get the client's value into the server log.

This case occurs only with a JDBC prepared statement and a wrong-type
binding.

regards, Ioseph.

 
2016-03-14 (월), 23:06 -0400, Tom Lane:
> Ioseph Kim  writes:
> > I want see that value of bind variable at a server log.
> 
> That's available if you turn on log_statements, IIRC.
> 
>   regards, tom lane
> 
> 




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] propose: detail binding error log

2016-03-15 Thread Ioseph Kim
thanks for reply.

Craig wrote:
>> At the time PostgreSQL parses the statement it doesn't know the 
>> parameter values yet, because PgJDBC hasn't sent them to it. It  
>> cannot log them even if they mattered, which they don't.

I already know that; what I wrote is how the erroneous value could be
shown in the server log.


case 1: a simple query in psql

ERROR:  42804: column "a" is of type integer but expression is of type
text at character 45
HINT:  You will need to rewrite or cast the expression.
LOCATION:  transformAssignedExpr, parse_target.c:529
STATEMENT:  prepare aaa (text) as insert into b values ($1);

In this case the server error log is right not to show a value, because
none has been assigned yet.

but:
case 2: in a JDBC program.
ERROR:  42804: column "a" is of type boolean but expression is of type
integer at character 25
HINT:  You will need to rewrite or cast the expression.
LOCATION:  transformAssignedExpr, parse_target.c:529
STATEMENT:  insert into test values ($1)

In this case the statement is an INSERT; I think the binding values have
already been sent to the server, so the server could display them.

I want to see:
"ERROR:  42804: column "a" is of type boolean but expression is of type
integer(input value = 24) at character 25"

Best regards, Ioseph.


2016-03-15 (화), 13:54 +0800, Craig Ringer:
> On 15 March 2016 at 10:52, Ioseph Kim  wrote:
> Hi, hackers.
> 
> I had a error message while using PostgreSQL.
> 
> "ERROR:  42804: column "a" is of type boolean but expression
> is of type
> integer at character 25
> LOCATION:  transformAssignedExpr, parse_target.c:529"
> 
> This error is a java jdbc binding error.
> column type is boolean but bind variable is integer.
> 
> I want see that value of bind variable at a server log.
> 
> 
> log_statement = 'all' will log bind var values, but only when the
> statement actually gets executed.
> 
> 
> This is an error in parsing or parameter binding, before we execute
> the statement. It's a type error and not related to the actual value
> of the bind variable - you could put anything in the variable and you
> would get the same error.
> 
> 
> PostgreSQL is complaining that you bound an integer variable and tried
> to insert it into a boolean column. There is no implicit cast from
> integer to boolean, so that's an error. It doesn't care if the integer
> is 1, 42, or null, since this is a type error. There's no need to log
> the value since it's irrelevant.
> 
> 
> Observe:
> 
> 
> postgres=# create table demo(col boolean);
> CREATE TABLE
> 
> 
> postgres=# prepare my_insert(boolean) AS insert into demo(col) values
> ($1);
> PREPARE
> 
> 
> postgres=# prepare my_insertint(integer) AS insert into demo(col)
> values ($1);
> ERROR:  column "col" is of type boolean but expression is of type
> integer
> LINE 1: ... my_insertint(integer) AS insert into demo(col) values
> ($1);
>^
> HINT:  You will need to rewrite or cast the expression.
> 
> 
> 
> 
> As you see, the error is at PREPARE time, when we parse and validate
> the statement, before we bind parameters to it. You can get the same
> effect without prepared statements by specifying the type of a literal
> explicitly:
> 
> 
> postgres=# insert into demo(col) values ('1'::integer);
> ERROR:  column "col" is of type boolean but expression is of type
> integer
> LINE 1: insert into demo(col) values ('1'::integer);
>   ^
> HINT:  You will need to rewrite or cast the expression.
> 
> 
> 
> 
> At the time PostgreSQL parses the statement it doesn't know the
> parameter values yet, because PgJDBC hasn't sent them to it. It
>  cannot log them even if they mattered, which they don't.
> 
> 
> -- 
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
> 






Re: [HACKERS] PGCon 2016 call for papers

2016-01-17 Thread Ioseph Kim
Hello,
I would like to submit a talk proposal for PGCon 2016. Currently I have
written only the title and an outline of the contents. The main title is
"PostgreSQL in Korea".

The proposal covers:

*  A short history of PostgreSQL in Korea (status of the Korean user
   group and its work)
*  A kt (Korea Telecom) report (on PostgreSQL)
*  To-dos in Korea (for the Korean user group)

What can I do for the next step?


2016-01-03 (일), 16:58 -0500, Dan Langille:
> In case you've overlooked it, you have about two weeks to submit your 
> proposal.
> 
> PGCon 2016 will be on 17-21 May 2016 at University of Ottawa.
> 
> * 17-18 (Tue-Wed) tutorials
> * 19 & 20 (Thu-Fri) talks - the main part of the conference
> * 17 & 21 (Wed & Sat) The Developer Unconference & the User Unconference 
> (both very popular)
> 
> PLEASE NOTE: PGCon 2016 is in May.
> 
> See http://www.pgcon.org/2016/
> 
> We are now accepting proposals for the main part of the conference (19-20 
> May).
> Proposals can be quite simple. We do not require academic-style papers.
> 
> If you are doing something interesting with PostgreSQL, please submit
> a proposal.  You might be one of the backend hackers or work on a
> PostgreSQL related project and want to share your know-how with
> others. You might be developing an interesting system using PostgreSQL
> as the foundation. Perhaps you migrated from another database to
> PostgreSQL and would like to share details.  These, and other stories
> are welcome. Both users and developers are encouraged to share their
> experiences.
> 
> Here are a some ideas to jump start your proposal process:
> 
> - novel ways in which PostgreSQL is used
> - migration of production systems from another database
> - data warehousing
> - tuning PostgreSQL for different work loads
> - replication and clustering
> - hacking the PostgreSQL code
> - PostgreSQL derivatives and forks
> - applications built around PostgreSQL
> - benchmarking and performance engineering
> - case studies
> - location-aware and mapping software with PostGIS
> - The latest PostgreSQL features and features in development
> - research and teaching with PostgreSQL
> - things the PostgreSQL project could do better
> - integrating PostgreSQL with 3rd-party software
> 
> Both users and developers are encouraged to share their experiences.
> 
> The schedule is:
> 
> 1 Dec 2015 Proposal acceptance begins
> 19 Jan 2016 Proposal acceptance ends
> 19 Feb 2016 Confirmation of accepted proposals
> 
> NOTE: the call for lightning talks will go out very close to the conference.
> Do not submit lightning talks proposals until then.
> 
> See also 
> 
> Instructions for submitting a proposal to PGCon 2016 are available
> from: 
> 






Re: [HACKERS] PGCon 2016 call for papers

2016-01-17 Thread Ioseph Kim
Sorry ^^

2016-01-18 (월), 16:10 +0900, Michael Paquier:
> On Mon, Jan 18, 2016 at 3:29 PM, Ioseph Kim  wrote:
> > What can I do for next step?
> 
> (pgsql-hackers is not the right place to ask that, it is a mailing
> list dedicated to the development and discussion of new features)
> 
> Follow the flow here:
> http://www.pgcon.org/2016/papers.php
> And Register here:
> https://papers.pgcon.org/submission/PGCon2016/
> And finally wait to see if your talk is accepted.






Re: [HACKERS] Figures in docs

2016-02-16 Thread Ioseph Kim
Hi. 

The figure tag is already supported in the DocBook 4.2 SGML DTD; it was
implemented for multiple output formats.

I remember that very old PostgreSQL documentation had some pictures (e.g.
system architecture, an ERD, ...). When a new version is released these
might need to change, but in practice they cannot be kept up to date.

This proposal is not about XML vs. SGML; it is about how figures can be
maintained for new documents.

regards ioseph.
 
2016-02-17 (수), 08:26 +0300, Oleg Bartunov:
> 
> 
> On Wed, Feb 17, 2016 at 4:17 AM, Tatsuo Ishii 
> wrote:
> It seems there's no figures/diagrams in our docs. I vaguely
> recall that
> we used to have a few diagrams in our docs. If so, was there
> any
> technical reason to remove them?
> 
> I don't know the reason, but it's a shame that we are still on SGML!
> 
> We already do our translations (as others do) in XML using custom
> scripting.  XML gives us better integration with available tools
> and the ability to insert graphics. Last time we asked on -docs about
> moving to XML, Alexander demonstrated acceptable speed of the XML
> build, but there was no reply from Peter, who is (?) responsible for
> our documentation infrastructure. Probably we should just create a
> patch and submit it to the commitfest.  You can check this thread:
> http://www.postgresql.org/message-id/1428009501118.85...@postgrespro.ru
> 
> 
> 
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
> 
> 
> --
> Sent via pgsql-hackers mailing list
> (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
> 
> 






Re: [HACKERS] Figures in docs

2016-02-17 Thread Ioseph Kim
On 수, 2016-02-17 at 12:14 +0300, Alexander Lakhin wrote:
> 17.02.2016 09:17, Tatsuo Ishii wrote:
> >> Hi. 
> >>
> >> In DocBook 4.2 sgml dtd, figure tag is supported already.
> >> that was implemented for multi output format.
> > Ok, there are no technical problems with figures then.  The MySQL docs
> > have some nice figures. I am jealous.
> The "figure" tag is just a placeholder in the Docbook
> (http://www.docbook.org/tdg/en/html/figure.html).
> The question is what to place inside this tag: "graphic" (not inline),
> "mediaobject/imageobject" (alternative object, supports inline contents
> and SVG), or something else.
> So you surely can insert some picture from an external file (.png,
> .jpg), but if it could contain title or some labels that should be
> translated, it's not a best solution.

I just want to talk about the figure tag. An SGML document can include an
external image file.

In SGML:

   <figure id="some-image">
    <title>Some Image</title>
    <graphic fileref="some_image.jpg"></graphic>
   </figure>

then the make html command generates output rendered as:

Figure E-1. Some Image

So, I asked how that some_image.jpg file can be maintained. I think that
is quite difficult, because a new release of the document might change
these too. And if the SVG module for DocBook is used, the document
sources become very messy.




[HACKERS] locale problem of bgworker: logical replication launcher and worker process

2017-08-21 Thread Ioseph Kim

Hi,
I tried ver. 10 beta3.
I got the messages below.

-
$ pg_ctl start
서버를 시작하기 위해 기다리는 중완료
서버 시작됨
2017-08-22 14:06:21.248 KST [32765] 로그:  IPv6, 주소: "::1", 포트 5433 
번으로 접속을 허용합니다
2017-08-22 14:06:21.248 KST [32765] 로그:  IPv4, 주소: "127.0.0.1", 포트 
5433 번으로 접속을 허용합니다
2017-08-22 14:06:21.364 KST [32765] 로그:  "/tmp/.s.PGSQL.5433" 유닉스 
도메인 소켓으로 접속을 허용합니다
2017-08-22 14:06:21.570 KST [32766] 로그:  데이터베이스 시스템 마지막 가동 
중지 시각: 2017-08-22 14:04:52 KST
2017-08-22 14:06:21.570 KST [32766] 로그:  recovered replication state of 
node 1 to 0/0
2017-08-22 14:06:21.638 KST [32765] 로그:  이제 데이터베이스 서버로 접속할 
수 있습니다
2017-08-22 14:06:21.697 KST [306] 로그:  logical replication apply worker 
for subscription "replica_a" has started
2017-08-22 14:06:21.698 KST [306] 오류:  발행 서버에 연결 할 수 없음: ??? 
??? ? ??: ??? ???

"localhost" (::1)  ??? ?? ???,
5432 ??? TCP/IP ???  ??.
??? ??? ? ??: ??? ???
"localhost" (127.0.0.1)  ??? ?? ???,
5432 ??? TCP/IP ???  ??.
-

The main postmaster messages are printed correctly in Korean, but the
bgworker process messages are not.

This problem seems to occur because the server's locale environment and
the client's are different.

How can I resolve this?

Regards ioseph.



Re: [HACKERS] locale problem of bgworker: logical replication launcher and worker process

2017-08-25 Thread Ioseph Kim

Thanks for the reply.

I resolved this problem.

The cause is that the dgettext() function uses the codeset of the
database's lc_ctype. The database below has lc_ctype C, while the locale
is ko_KR.UTF8. I created a new database with lc_ctype ko_KR.UTF8, and
the problem went away.

The work logs are here.


(10) postgres@postgres=# \l
   데이터베이스 목록
   이름|  소유주  | 인코딩 | Collate |Ctype|  액세스 권한
---+--++-+-+---
 krdb  | postgres | UTF8   | C   | ko_KR.UTF-8 |
 postgres  | postgres | UTF8   | C   | C   |
 template0 | postgres | UTF8   | C   | C   | 
=c/postgres  +
   |  || | | 
postgres=CTc/postgres
 template1 | postgres | UTF8   | C   | C   | 
=c/postgres  +
   |  || | | 
postgres=CTc/postgres

(4개 행)

(10) postgres@postgres=# \c
접속정보: 데이터베이스="postgres", 사용자="postgres".
(10) postgres@postgres=# create subscription sub1 connection 
'host=127.0.0.1 port=5432 client_encoding=C' publication pub1;
2017-08-25 18:13:34.556 KST [5401] 오류:  발행 서버에 연결 할 수 없음: ??? 
??? ? ??: ??? ???

"127.0.0.1"  ??? ?? ???,
5432 ??? TCP/IP ???  ??.
2017-08-25 18:13:34.556 KST [5401] 명령 구문:  create subscription sub1 
connection 'host=127.0.0.1 port=5432 client_encoding=C' publication pub1;

오류:  발행 서버에 연결 할 수 없음: ??? ??? ? ??: ??? ???
"127.0.0.1"  ??? ?? ???,
5432 ??? TCP/IP ???  ??.
(10) postgres@postgres=# \c krdb
접속정보: 데이터베이스="krdb", 사용자="postgres".
(10) postgres@krdb=# create subscription sub1 connection 'host=127.0.0.1 
port=5432 client_encoding=C' publication pub1;
2017-08-25 18:13:45.687 KST [5402] 오류:  발행 서버에 연결 할 수 없음: 
서버에 연결할 수 없음: 연결이 거부됨

"127.0.0.1" 호스트에 서버가 가동 중인지,
5432 포트로 TCP/IP 연결이 가능한지 살펴보십시오.
2017-08-25 18:13:45.687 KST [5402] 명령 구문:  create subscription sub1 
connection 'host=127.0.0.1 port=5432 client_encoding=C' publication pub1;

오류:  발행 서버에 연결 할 수 없음: 서버에 연결할 수 없음: 연결이 거부됨
"127.0.0.1" 호스트에 서버가 가동 중인지,
5432 포트로 TCP/IP 연결이 가능한지 살펴보십시오.


On 2017-08-23 22:40, Peter Eisentraut wrote:

On 8/22/17 01:19, Ioseph Kim wrote:

2017-08-22 14:06:21.697 KST [306] 로그:  logical replication apply
worker for subscription "replica_a" has started
2017-08-22 14:06:21.698 KST [306] 오류:  발행 서버에 연결 할 수 없음:
??? ??? ? ??: ??? ???
 "localhost" (::1)  ??? ?? ???,
 5432 ??? TCP/IP ???  ??.
 ??? ??? ? ??: ??? ???
 "localhost" (127.0.0.1)  ??? ?? ???,
 5432 ??? TCP/IP ???  ??.
-

main postmaster messages are printed correctly in Korean,
but the bgworker process messages are not.

This problem seems to have occurred because the server locale
environment and the client's are different.

I have tried it locally with a ko_KR locale, and it seems to work
correctly for me.  Still, I can imagine there are all kinds of ways this
could go wrong in particular configurations.  Could you construct a
reproducible test setup, including specific initdb and locale settings,
operating system, etc.?







[HACKERS] Mailing subscription test

2014-01-28 Thread Ioseph Kim
Sorry, I can't receive mail from the list :(




[HACKERS] questions about PG update performance

2015-10-25 Thread Kisung Kim
Dear hackers,

I have a question about the UPDATE performance of PG.

Because of the internal implementation of MVCC in PG, the update of a
row is actually the insertion of a new row version. So if a row is
large, this incurs some overhead compared to an in-place update
strategy.

Let's assume that a table has 200 columns, and a user updates one of the
columns of one row in the table. Then PG will rewrite the whole contents
of the updated row, including both the updated and the non-updated
columns.

I'm not sure about the implementation of Oracle's update, but if Oracle
can overwrite only the updated column, the performance difference
between Oracle and PG in that case may be significant.

I researched this issue in the mailing lists and on Google, but have
not found anything related to it.

Thank you.

Kisung Kim.






(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : ks...@bitnine.co.kr
Office phone : 070-4800-3321
Mobile phone : 010-7136-0834
Fax : 02-568-1332


Re: [HACKERS] questions about PG update performance

2015-10-25 Thread Kisung Kim
2015-10-26 11:12 GMT+09:00 Michael Paquier :

>
>
> On Mon, Oct 26, 2015 at 10:52 AM, Kisung Kim  wrote:
>
>> Because of the internal implementation of MVCC in PG
>> the update of a row is actually a insertion of a new version row.
>> So if the size of a row is huge, then it incurs some overhead compare to
>> in-place update strategy.
>>
>
> Yeah, that's how an UPDATE works in Postgres under MVCC. The xmax of the
> old row is updated, and a new row is inserted with an xmin equal to the
> previous xmax. So if you update tuple fields one by one, the cost is
> going to be high.
>
>
>> Let's assume that a table has 200 columns,
>> and a user updates one of the columns of one row in the table.
>> Then PG will rewrite the whole contents of the updated row
>> including the updated columns and not-updated columns.
>>
>
> When a table has a large number of columns, usually I would say that you
> have a normalization problem and such schemas could be split into a smaller
> set of tables, minimizing the UPDATE cost.
>
>
>> I'm not sure about the implementation of Oracle's update.
>> But if the Oracle can overwrite only the updated column,
>> the performance difference between Oracle and PG in that case may be
>> significant.
>>
>> I researched about this issues in mailing list and google.
>> But I've not found anything related to this issues.
>>
>
> What you are looking at here is columnar storage, Alvaro and 2nd Quadrant
> folks have been doing some work in this area recently:
> http://www.postgresql.org/message-id/20150831225328.GM2912@alvherre.pgsql
> Also, you may want to have a look at cstore_fdw:
> https://github.com/citusdata/cstore_fdw.
> Regards,
> --
> Michael
>

Thank you for your reply.
I already know about column stores and that they are optimized for the case
I described.

However, what I want to know is the update performance difference
between PG and Oracle, if there is any.
The case I described was chosen to exaggerate the difference between PG and
Oracle.

I want to be able to explain to our clients that PG's update performance is
comparable to Oracle's.
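The update-as-insert behavior discussed above can be made concrete with a toy model: under MVCC-style versioning, an UPDATE writes a complete new row version even when only one column changes. This is a minimal sketch in plain Python (illustrative only, not PG's heap code; column names and sizes are made up):

```python
# toy heap: an UPDATE appends a complete new row version, even if only
# one of the 200 columns changed (no in-place column overwrite)
row = {f'col{i}': 'x' * 10 for i in range(200)}

def row_bytes(r):
    # rough on-disk footprint of one row version
    return sum(len(k) + len(str(v)) for k, v in r.items())

def mvcc_update(heap, rowid, changes):
    old = heap[rowid][-1]            # latest live version
    new_version = {**old, **changes} # the whole row is copied
    heap[rowid].append(new_version)
    return row_bytes(new_version)    # bytes written for this update

heap = {1: [row]}
written = mvcc_update(heap, 1, {'col0': 'y' * 10})
print(written, row_bytes(row))  # the full row size is written, not ~10 bytes
```

The write amplification is the ratio of the full row size to the size of the single changed column, which is what makes wide-row updates comparatively expensive in this model.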

Regards,


Re: [HACKERS] problem with msvc linker - cannot build orafce

2015-11-24 Thread Kisung Kim
2015-11-24 8:12 GMT+09:00 Chapman Flack :

> On 11/23/15 15:14, Tom Lane wrote:
> > Lack of PGDLLIMPORT on the extern declaration, no doubt.
> >
> > The fact that we've not heard this before implies that either nobody has
> > ever tried to use orafce on Windows, or it only very recently grew a
> > dependency on session_timezone.
>
>
Actually, we encountered this situation a couple of months ago.
A client wanted to use orafce on Windows and hit the same build problem.
As a workaround we edited the PG source to export the unresolved
symbols,
which I don't think is a good solution.

2015-11-24 8:12 GMT+09:00 Chapman Flack :

> Has anyone got the stomach to try such a thing and see what happens?
> I don't have MSVC here.
>
> -Chap


We have an environment to test your ideas.
Can you explain them in more detail?







[HACKERS] request patch pg_recvlogical.c, pg_receivexlog.c for nls

2015-12-28 Thread Ioseph Kim
Hello,

In PostgreSQL 9.5rc1,
at line 934 in pg_recvlogical.c:

set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_recvlogical"));

This references a nonexistent message catalog, because the pg_basebackup,
pg_recvlogical, and pg_receivexlog commands all share the same catalog file,
pg_basebackup.mo.

The same applies to pg_receivexlog.c.

A patch, please.





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CP949 for EUC-KR?

2010-05-03 Thread Ioseph Kim
Hi, I'm Korean.

CP51949 is EUC-KR, correct,
so the defined code is correct too.

But in Korea, EUC-KR cannot represent all Korean characters.
In recent years, most people in Korea use the CP949 code;
the MS Windows code page is also CP949.

- Original Message - 
From: "Takahiro Itagaki" 
To: 
Sent: Tuesday, April 27, 2010 7:27 PM
Subject: [HACKERS] CP949 for EUC-KR?


>I heard pg_get_encoding_from_locale() failed in kor locale.
> 
>WARNING:  could not determine encoding for locale "kor": codeset is "CP949"
> 
> I found the following description in the web:
>CP949 is EUC-KR, extended with UHC (Unified Hangul Code).
>
> http://www.opensource.apple.com/source/libiconv/libiconv-13.2/libiconv/lib/cp949.h
> 
> but we define CP51949 for EUC-KR in chklocale.c.
>{PG_EUC_KR, "CP51949"}, /* or 20949 ? */
> 
> Which is the compatible codeset with our PG_EUC_KR encoding?
> 949, 51949, or 20949? Should we add (or replace) CP949 for EUC-KR?
> 
> Regards,
> ---
> Takahiro Itagaki
> NTT Open Source Software Center
> 
> 


Re: [HACKERS] Getting a bug tracker for the Postgres project

2011-05-30 Thread Kim Bisgaard

On 2011-05-30 04:26, Greg Stark wrote:

My biggest gripe about bugzilla was that it sent you an email with updates to 
the bug but you couldn't respond to that email.


Just checked Bugzilla's list of features, and it *now* lists that as supported:


File/Modify Bugs By Email

In addition to the web interface, you can send Bugzilla an email that will create a new bug, or will modify an existing bug. You can also 
very easily attach files to bugs this way.


http://www.bugzilla.org/features/#email-in

Regards,
Kim




[HACKERS] Proposal: add error message in backend/catalog/index.c

2016-12-12 Thread Ioseph Kim

Hi,

I propose adding an error message for the case where an index name and a
table name are the same.


Example:

postgres@postgres=# create table t (a int not null, constraint t primary
key (a));

ERROR:  relation "t" already exists


End users will be pretty confused: on seeing this message they will check
pg_class,

but they will not find a conflicting relation there.

In this case,

an error message like "index name must not be the same as the relation
name" would be better.


Some RDBMSes allow a table and a unique or primary key constraint to have
the same name.


If users meet that message (relation "t" already exists), it is
not clear.



Regards, Ioseph




[HACKERS] increase message string buffer size of watch command of psql

2016-06-12 Thread Ioseph Kim
Hello.
In po.ko (the Korean message catalog) for psql:

#: command.c:2971
#, c-format
msgid "Watch every %lds\t%s"
msgstr "%ld초 간격으로 지켜보기\t%s"

This message string gets cut off, because the buffer is too small.
At line 2946 in src/bin/psql/command.c:

char title[50];

The formatted message string for Korean is over 50 bytes
(on an at-least-80-column terminal, which is common).

Please increase the size of this title buffer;
50 bytes is too small for multi-language messages.
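The 50-byte limit bites because a Korean title fits comfortably in 50 characters but not in 50 bytes once encoded. A quick illustration in Python (the interval value and date string are hypothetical, chosen to mirror the \watch title format):

```python
# the formatted \watch title in Korean: 9 hangul syllables (3 bytes each
# in UTF-8) plus ASCII interval digits and a 24-char asctime()-style date
title = "2초 간격으로 지켜보기\t" + "Wed Jun 15 01:56:00 2016"

print(len(title))                  # 37 characters: looks like it fits
print(len(title.encode('utf-8')))  # 55 bytes: overflows char title[50]
```

The gap between character count and byte count is exactly why a fixed `char title[50]` truncates multibyte translations.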

Also, I suggest that the date string be localized,
or use the current_timestamp value.

Regards, Ioseph.





Re: [HACKERS] increase message string buffer size of watch command of psql

2016-06-14 Thread Ioseph Kim



2016년 06월 15일 01:56에 Tom Lane 이(가) 쓴 글:
I take it from the vast silence that nobody particularly cares one way 
or the other. On reflection I think that this would be a good change 
to make, so I'll go do so unless I hear complaints soon. regards, tom 
lane 


I propose changing from asctime() to the SQL current_timestamp value,
so users can change the date format with SET DateStyle.

Regards, Ioseph.





Re: [HACKERS] increase message string buffer size of watch command of psql

2016-06-15 Thread Ioseph Kim
Thanks, I agree that strftime() is better than asctime().

regards, Ioseph

2016년 06월 16일 08:37에 Tom Lane 이(가) 쓴 글:
> Alvaro Herrera  writes:
>> +1 to strftime("%c").  If we wanted to provide additional flexibility we
>> could have a \pset option to change the strftime format string to
>> something other than %c, but I don't think there's enough demand to
>> justify it.
> Agreed, that seems like something for later (or never).  Pushed that way.
>
> I also widened the buffer a bit in the back branches, to address the
> original complaint.
>
>   regards, tom lane





Re: [HACKERS] [PROPOSAL] timestamp informations to pg_stat_statements

2016-07-18 Thread Jason Kim
Hi guys,
Thank you for the feedback.

> I think that this is the job of a tool that aggregates things from 
> pg_stat_statements. It's unfortunate that there isn't a good 
> third-party tool that does that, but there is nothing that prevents 
> it. 

Right. We can do this if we aggregate it frequently enough. However,
third-party tools can do it better if they have more information;
I think these timestamps are fundamental information for strong
third-party tools to build on.

> The concern I've got about this proposal is that the results get very 
> questionable as soon as we start dropping statement entries for lack 
> of space.  last_executed would be okay, perhaps, but first_executed 
> not so much. 

If we set pg_stat_statements.max large enough, there can be long-lived
entries and short-lived ones simultaneously. In that case, per-call
statistics can be accurate, but per-second stats cannot.
That is why I named the columns created and last_updated (not
first_executed and last_executed): they represent the timestamps at which
the stats entry was created and last updated, so we can calculate
per-second stats with simple math.
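With a created/last_updated timestamp on each entry, per-second rates fall out of two snapshots by simple subtraction. A minimal sketch in plain Python (the snapshot dicts and their values are hypothetical, not the pg_stat_statements API):

```python
# two hypothetical snapshots of one pg_stat_statements entry,
# taken 60 seconds apart; 'calls' and the timestamps are all we
# need to derive a per-second rate
snap1 = {'calls': 1200, 'last_updated': 1000.0}
snap2 = {'calls': 4800, 'last_updated': 1060.0}

def calls_per_second(a, b):
    # rate over the interval between the two snapshots
    return (b['calls'] - a['calls']) / (b['last_updated'] - a['last_updated'])

print(calls_per_second(snap1, snap2))  # 3600 calls over 60 s -> 60.0
```

The same subtraction works for total_time, rows, or any other monotonically growing counter in the view.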

> Also, for what it's worth, I should point out to Jun that 
> GetCurrentTimestamp() should definitely not be called when a spinlock 
> is held like that. 

I moved it outside of the spinlock.

@@ -1204,6 +1209,11 @@ pgss_store(const char *query, uint32 queryId,
 */
volatile pgssEntry *e = (volatile pgssEntry *) entry;

+   /*
+* Read the timestamp before grabbing the spinlock
+*/
+   TimestampTz last_updated = GetCurrentTimestamp();
+
SpinLockAcquire(&e->mutex);

/* "Unstick" entry if it was previously sticky */
@@ -1251,6 +1261,7 @@ pgss_store(const char *query, uint32 queryId,
e->counters.blk_read_time +=
INSTR_TIME_GET_MILLISEC(bufusage->blk_read_time);
e->counters.blk_write_time +=
INSTR_TIME_GET_MILLISEC(bufusage->blk_write_time);
e->counters.usage += USAGE_EXEC(total_time);
+   e->counters.last_updated = last_updated;

SpinLockRelease(&e->mutex);
  }

pg_stat_statements_with_timestamp_v2.patch
<http://postgresql.nabble.com/file/n5912461/pg_stat_statements_with_timestamp_v2.patch>
  

Regards,
Jason Kim.







[HACKERS] Btree Index on PostgreSQL and Wiredtiger (MongoDB3.2)

2016-08-10 Thread Kisung Kim
Hi,

I've run YCSB (Yahoo! Cloud Serving Benchmark) on PostgreSQL and on MongoDB
with WiredTiger,
and found some interesting results and (maybe) some issues with the btree
index of PostgreSQL.

Here are my experiments and results.
YCSB is a document-store benchmark, and I built the following schema in PG:

CREATE TABLE usertable (
YCSB_KEY varchar(30) primary key,
FIELDS jsonb);

The benchmark generated JSON documents averaging 300 bytes in length and
loaded 100M rows into PG and Mongo.

First, I found a size difference between PG and Mongo
(I configured Mongo not to use any compression for either storage or index):

MongoDB index size: 2.1 GB
PostgreSQL index size: 5.5 GB

When I used the index bloating estimation script in
https://github.com/ioguix/pgsql-bloat-estimation,
the result is as follows:
-[ RECORD 1 ]----+-----------------
current_database | ycsb
schemaname       | public
tblname          | usertable
idxname          | usertable_pkey
real_size        | 5488852992
extra_size       | 2673713152
extra_ratio      | 48.7116917850949
fillfactor       | 90
bloat_size       | 2362122240
bloat_ratio      | 43.0348971532448
is_na            | f
It says the bloat_ratio is about 43 for the index.
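The ratios in that output are plain size arithmetic; recomputing them from the reported byte counts (a quick Python check) reproduces the figures in the table:

```python
real_size  = 5_488_852_992   # index size on disk, from the bloat query
extra_size = 2_673_713_152   # bytes beyond what a packed index would need
bloat_size = 2_362_122_240   # extra bytes after accounting for fillfactor

extra_ratio = 100.0 * extra_size / real_size
bloat_ratio = 100.0 * bloat_size / real_size
print(round(extra_ratio, 2), round(bloat_ratio, 2))  # 48.71 43.03
```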

So I rebuilt the index, and the result changed:

-[ RECORD 1 ]----+------------------
current_database | ycsb
schemaname       | public
tblname          | usertable
idxname          | usertable_pkey
real_size        | 3154264064
extra_size       | 339132416
extra_ratio      | 10.7515543758863
fillfactor       | 90
bloat_size       | 27533312
bloat_ratio      | 0.872891788428275
is_na            | f


I am curious about these results:
1) Why was the index bloated even though rows were only inserted, never
removed or updated?
2) And why does the bloat go away when the index is rebuilt?

I guess that btree page splits during row insertion caused the bloat, but
I don't know the exact reason.
And given that MongoDB's index is much smaller than PG's after handling the
same (insert-only) workload,
is there any chance to improve PG's index behavior?

Thank you very much.


-- 




Bitnine Global Inc., Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : ks...@bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192


Re: [HACKERS] Btree Index on PostgreSQL and Wiredtiger (MongoDB3.2)

2016-08-10 Thread Kisung Kim
Thank you for your information.
Here is the result:

After insertions:

ycsb=# select * from pgstatindex('usertable_pkey');
-[ RECORD 1 ]------+-----------
version            | 2
tree_level         | 3
index_size         | 5488721920
root_block_no      | 44337
internal_pages     | 4464
leaf_pages         | 665545
empty_pages        | 0
deleted_pages      | 0
avg_leaf_density   | 52
leaf_fragmentation | 11

After rebuild:


ycsb=# select * from pgstatindex('usertable_pkey');
-[ RECORD 1 ]------+-----------
version            | 2
tree_level         | 3
index_size         | 3154296832
root_block_no      | 41827
internal_pages     | 1899
leaf_pages         | 383146
empty_pages        | 0
deleted_pages      | 0
avg_leaf_density   | 90.08
leaf_fragmentation | 0


It seems that the rebuild reduces the number of internal and leaf pages
and produces denser leaf pages.



On Wed, Aug 10, 2016 at 6:47 PM, Lukas Fittl  wrote:

> On Wed, Aug 10, 2016 at 4:24 PM, Kisung Kim  wrote:
>>
>> When I used the index bloating estimation script in
>> https://github.com/ioguix/pgsql-bloat-estimation,
>> the result is as follows:
>>
>
> Regardless of the issue at hand, it might make sense to verify these
> statistics using pgstattuple - those bloat estimation queries can be wildly
> off at times.
>
> See also https://www.postgresql.org/docs/9.5/static/pgstattuple.html as
> well as https://www.keithf4.com/checking-for-postgresql-bloat/
>
> Best,
> Lukas
>
> --
> Lukas Fittl
>
> Skype: lfittl
> Phone: +1 415 321 0630
>





Re: [HACKERS] Btree Index on PostgreSQL and Wiredtiger (MongoDB3.2)

2016-08-12 Thread Kisung Kim
You're right. REINDEX improves the performance of the benchmark workloads
dramatically.
I'm gathering results and will announce them.

But I think we should note that the results before reindexing are poorer
than MongoDB's.
It seems that this is because of btree bloat (not the exact expression).
Lookup performance on the btree is the most crucial factor for these
results, because the workload is SELECT by primary key,
so a larger btree could mean fewer cache hits and slower query performance.
I think PG packs leaf nodes not to 90% but to about half, to leave room for
future insertions,
and because of this PG's btree is larger than MongoDB's
(I turned off prefix compression for the WiredTiger index and block
compression for storage).
But MongoDB (actually WiredTiger) seems to do this differently.

Is my speculation right? I'm not sure, because I haven't reviewed PG's
btree code yet.

And I want to point out that the loading performance of MongoDB is better
than PG's.
If PG packed pages more densely instead of leaving space for future
insertions, could we at least get faster loading performance?
Then can we conclude that there is room to improve PG's btree?
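The half-full-leaf speculation above can be probed with a toy leaf-split model in Python (illustrative only, not PG's nbtree code; real PostgreSQL splits rightmost pages asymmetrically and packs sequential keys much denser): naive 50/50 page splits leave leaves about half full under ordered keys and roughly 70% full under random keys, while a bulk rebuild can pack them to the fillfactor.

```python
import bisect
import random

def leaf_density(keys, cap):
    """Insert keys one by one into capacity-limited leaves, splitting 50/50."""
    leaves = [[keys[0]]]                        # sorted leaves, sorted keys
    for k in keys[1:]:
        firsts = [lf[0] for lf in leaves]       # lowest key of each leaf
        i = max(bisect.bisect_right(firsts, k) - 1, 0)
        bisect.insort(leaves[i], k)
        if len(leaves[i]) > cap:                # page overflow: split in half
            mid = len(leaves[i]) // 2
            leaves.insert(i + 1, leaves[i][mid:])
            leaves[i] = leaves[i][:mid]
    return len(keys) / (len(leaves) * cap)      # average leaf fill factor

seq = list(range(20000))
rnd = seq[:]
random.Random(42).shuffle(rnd)
print(round(leaf_density(seq, 100), 2))   # ~0.50 with ordered inserts
print(round(leaf_density(rnd, 100), 2))   # ~0.70 with random inserts
```

Under this model the observed ~52% avg_leaf_density after loading, versus 90.08% after REINDEX, is what you would expect from incremental splits followed by a packed rebuild.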

Best Regards,



On Fri, Aug 12, 2016 at 5:40 PM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:

> After examining the benchmark design - I see we are probably not being
> helped by the repeated insertion of keys all of form 'userxxx' leading
> to some page splitting.
>
> However your index rebuild gets you from 5 to 3 GB - does that really help
> performance significantly?
>
> regards
>
> Mark
>
>
> On 11/08/16 16:08, Kisung Kim wrote:
>
>> Thank you for your information.
>> Here is the result:
>>
>> After insertions:
>>
>> ycsb=# select * from pgstatindex('usertable_pkey');
>>  version | tree_level | index_size | root_block_no | internal_pages |
>> leaf_pages | empty_pages | deleted_pages | avg_leaf_density |
>> leaf_fragmentation
>> -+++---+
>> ++-+---+
>> --+
>>2 |  3 | 5488721920 | 44337 | 4464 |
>>  665545 |   0 | 0 |   52 | 11
>> (1 row)
>>
>> After rebuild:
>>
>>
>> ycsb=# select * from pgstatindex('usertable_pkey');
>>  version | tree_level | index_size | root_block_no | internal_pages |
>> leaf_pages | empty_pages | deleted_pages | avg_leaf_density |
>> leaf_fragmentation
>> -+++---+
>> ++-+---+
>> --+
>>2 |  3 | 3154296832 | 41827 | 1899 |
>>  383146 |   0 | 0 |90.08 |
>> 0
>>
>>
>> It seems like that rebuild has an effect to reduce the number of internal
>> and leaf_pages and make more dense leaf pages.
>>
>>
>>
>>
>




Re: [HACKERS] Server-side support of all encodings

2007-03-25 Thread Ioseph Kim

In Korea, the Johab encoding is very old;
the CP949 code page is what is really used in most environments.

Personally speaking, a Johab server encoding is not needed.
I think PostgreSQL should support UHC (CP949) as a server encoding.
This feature would be welcomed by many Koreans. :)
Unfortunately, the UHC code set has character sequences with bytes below 128.

I tried to patch this problem, but it is not simple. I gave up. :(
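The byte-level problem can be seen with Python's codecs (an illustration, assuming Python 3, whose cp949 codec implements UHC): a KS X 1001 character encodes in EUC-KR with all bytes >= 0x80, while a UHC-only character has a trailing byte below 0x80, which is exactly what makes it unsafe as a PostgreSQL server encoding.

```python
# '가' (U+AC00) is in KS X 1001: plain EUC-KR encodes it, all bytes >= 0x80
assert all(b >= 0x80 for b in '가'.encode('euc_kr'))

# '갂' (U+AC02) exists only in the UHC extension of CP949
uhc = '갂'.encode('cp949')
assert any(b < 0x80 for b in uhc)  # trailing byte below 128

try:
    '갂'.encode('euc_kr')          # not representable in plain EUC-KR
except UnicodeEncodeError:
    print('UHC-only character, trailing byte', hex(uhc[1]))
```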


- Original Message - 
From: "ITAGAKI Takahiro" <[EMAIL PROTECTED]>

To: "Tom Lane" <[EMAIL PROTECTED]>
Cc: 
Sent: Monday, March 26, 2007 11:29 AM
Subject: Re: [HACKERS] Server-side support of all encodings




Tom Lane <[EMAIL PROTECTED]> wrote:

> PostgreSQL suppots SJIS, BIG5, GBK, UHC and GB18030 as client 
> encodings,
> but we cannot use them as server encodings. Are there any reason for 
> it?


Very much so --- they aren't safe ASCII-supersets, and thus for example
the parser will fail on them.  Backend encodings must have the property
that all bytes of a multibyte character are >= 128.


But then, PG_JOHAB has already violated that rule. Please see
johab_to_utf8.map.

Trailing bytes of JOHAB can be less than 128.
It's true that the other server-supported encodings use only bytes >=
128.


Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate






Re: [HACKERS] ALTER TABLE ... ALTER COLUMN ... SET DISTINCT

2009-08-03 Thread Kim Bisgaard

Are there plans to make a small follow-up patch to make
CREATE UNIQUE INDEX on one column
(and variants in CREATE TABLE ... PRIMARY KEY) automatically do
SET STATISTICS DISTINCT?

It might not be as perfect a solution as teaching the planner to know 
about unique indexes, but it is better than nothing.


Good work!

Regards,
Kim


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tuple concurrently updated

2002-07-25 Thread Kangmo, Kim

I guess two transactions updated a tuple concurrently.
Because the versioning scheme allows old versions to be
read by another transaction, an old version can be updated too.

For example,

we have a row whose value is 1:
create table t1 (i1 integer);
insert into t1 values(1);

And:

Tx1 executes  update t1 set i1=2 where i1=1
Tx2 executes  update t1 set i1=10 where i1=1

Now suppose that
Tx1 reads i1 with value 1,
Tx2 reads i1 with value 1,
Tx1 updates i1 to 2 by inserting a 2 into t1 according to the versioning
scheme,
and Tx2 tries to update i1 to 10 but fails because it was already updated
by Tx1.

Because you created the two indexes in different transactions,
the concurrently updated tuple cannot be an index entry itself.

If the indexes are on the same table,
two concurrent CREATE INDEX commands can update pg_class.relpages
at the same time.

I guess this is not a bug in pgsql, but a weak point of an
MVCC DBMS.
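The Tx1/Tx2 scenario above can be sketched as a toy version-chain model in Python (illustrative only; real PostgreSQL blocks on the in-progress updater and then rechecks, rather than failing immediately, and the xid values here are made up):

```python
class Version:
    def __init__(self, val, xmin):
        self.val, self.xmin, self.xmax = val, xmin, None  # xmax=None: live

def mvcc_update(chain, xid, old_val, new_val):
    """Succeed only if a live version with old_val still exists."""
    for v in chain:
        if v.val == old_val and v.xmax is None:
            v.xmax = xid                     # old version superseded by xid
            chain.append(Version(new_val, xid))
            return True
    return False                             # tuple concurrently updated

t1 = [Version(1, xmin=100)]                  # insert into t1 values(1)
assert mvcc_update(t1, 101, 1, 2) is True    # Tx1: update ... set i1=2
assert mvcc_update(t1, 102, 1, 10) is False  # Tx2: the live version is gone
```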

"Curt Sampson" <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
>
> One of my machines has two CPUs, and in some cases I build a pair
> of indexes in parallel to take advantage of this.  (I can't seem
> to do an ALTER TABLE ADD PRIMARY KEY in parallel with an index
> build, though.) Recently, though, I received the message "ERROR:
> simple_heap_update: tuple concurrently updated." Can anybody tell
> me more about this error and its consequences?
>
> cjs
> --
> Curt Sampson  <[EMAIL PROTECTED]>   +81 90 7737 2974   http://www.netbsd.org
> Don't you know, in this new Dark Age, we're all light.  --XTC
>
>



Re: [HACKERS] tuple concurrently updated

2002-07-25 Thread Kangmo, Kim

A solution to this problem is not versioning the catalog tables but
updating them in place.
Of course, another transaction that wants to update the same row in the
catalog table would then have to wait,
which leads to poor concurrency.
But that problem can be solved by committing every DDL statement right
after its execution successfully ends.
Note that the catalog table updaters are DDL, not DML.
I think that's why Oracle commits on execution of every DDL statement.

"Kangmo, Kim" <[EMAIL PROTECTED]> wrote in message
ahple2$1rgh$[EMAIL PROTECTED]">news:ahple2$1rgh$[EMAIL PROTECTED]...
> I guess two transactions updated a tuple concurrently.
> Because versioning scheme allows old versions can be
> read by another transaction, the old version can be updated too.
>
> For example,
>
> We have a row whose value is 1
> create table t1 (i1 integer);
> insert into t1 values(1);
>
> And,
>
> Tx1 executes  update t1 set i1=2 where i1=1
> Tx2 executes  update t1 set i1=10 where i1=1
>
> Now suppose that
> Tx1 read i1 with value 1
> Tx2 read i1 with value 1
> Tx1 updates i1 to 2 by inserting 2 into t1 according to versioning scheme.
> Tx2 tries to update i1 to 10 but fails because it is already updated by
Tx1.
>
> Because you created different index with different transaction,
> The concurrently updated tuple cannot be any of index node.
>
> If the index on the same class,
> two concurrent CREATE INDEX command can update pg_class.relpages
> at the same time.
>
> I guess that is not a bug of pgsql, but a weak point of
> MVCC DBMS.
>
> "Curt Sampson" <[EMAIL PROTECTED]> wrote in message
> [EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> >
> > One of my machines has two CPUs, and in some cases I build a pair
> > of indexes in parallel to take advantage of this.  (I can't seem
> > to do an ALTER TABLE ADD PRIMARY KEY in parallel with an index
> > build, though.) Recently, though, I received the message "ERROR:
> > simple_heap_update: tuple concurrently updated." Can anybody tell
> > me more about this error and its consequences?
> >
> > cjs
> > --
> > Curt Sampson  <[EMAIL PROTECTED]>   +81 90 7737 2974   http://www.netbsd.org
> > Don't you know, in this new Dark Age, we're all light.  --XTC
> >
> >



[HACKERS] ERROR: MemoryContextAlloc: invalid request size (maybe bug ingist index)

2003-07-14 Thread Nikolay Kim
hi,

i use _int contrib module and gist index
here sample


t1.sql.gz
Description: GNU Zip compressed data



Re: [HACKERS] ERROR: MemoryContextAlloc: invalid request size

2003-07-14 Thread Nikolay Kim
thanks!

error is gone.

On Пнд, 2003-07-14 at 20:36, Teodor Sigaev wrote:
> Try
> create index data_idx on testkw using gist(data gist__intbig_ops);
> 
> 
> Default class gist__int_ops is usable for small arrays (with small number of 
> unique elements of all arrays)
> 
> 
> Nikolay Kim wrote:
> > hi,
> > 
> > i use _int contrib module and gist index
> > here sample
> > 
> > 
> > 
> > 
> > 





[HACKERS] problem of geometric operator in v8.2.1

2007-02-02 Thread Ioseph Kim

=> select @-@ lseg '((0,0),(1,0))';
?column?
--
   1
(1 row)

=> select @-@ path '((0,0),(1,0))';
?column?
--
   2
(1 row)

--

Maybe it's a bug in v8.2.1.





Re: [HACKERS] problem of geometric operator in v8.2.1

2007-02-02 Thread Ioseph Kim

When the @-@ operator is used on a path type,
the query below should perhaps return 1,
because this path is just a line.


- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "Ioseph Kim" <[EMAIL PROTECTED]>
Cc: 
Sent: Saturday, February 03, 2007 5:36 AM
Subject: Re: [HACKERS] problem of geometric operator in v8.2.1 




"Ioseph Kim" <[EMAIL PROTECTED]> writes:

=> select @-@ lseg '((0,0),(1,0))';
 ?column?
--
1
(1 row)



=> select @-@ path '((0,0),(1,0))';
 ?column?
--
2
(1 row)



It's maybe bug in v8.2.1


What do you think is wrong with those answers?

regards, tom lane



Re: [HACKERS] problem of geometric operator in v8.2.1

2007-02-02 Thread Ioseph Kim

I misunderstood. :)

path '((0,0),(1,0))' is a 'closed' path.

In this case the operator evidently counts the return leg of the path in
the length too.
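That accounts for the factor of two: a closed path's length includes the leg back to the starting point. A sketch of the arithmetic in plain Python (mirroring the behavior of the @-@ length operator on this example):

```python
from math import dist

def path_length(points, closed):
    segs = list(zip(points, points[1:]))
    if closed:
        segs.append((points[-1], points[0]))  # the leg back to the start
    return sum(dist(a, b) for a, b in segs)

pts = [(0, 0), (1, 0)]
print(path_length(pts, closed=False))  # 1.0, like @-@ lseg '((0,0),(1,0))'
print(path_length(pts, closed=True))   # 2.0, like @-@ path '((0,0),(1,0))'
```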

- Original Message - 
From: "Ioseph Kim" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, February 03, 2007 6:00 AM
Subject: Re: [HACKERS] problem of geometric operator in v8.2.1 




when @-@ operator used at path type,
below query maybe returns 1.
because this path is just line.


- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "Ioseph Kim" <[EMAIL PROTECTED]>
Cc: 
Sent: Saturday, February 03, 2007 5:36 AM
Subject: Re: [HACKERS] problem of geometric operator in v8.2.1 




"Ioseph Kim" <[EMAIL PROTECTED]> writes:

=> select @-@ lseg '((0,0),(1,0))';
 ?column?
--
1
(1 row)



=> select @-@ path '((0,0),(1,0))';
 ?column?
--
2
(1 row)



It's maybe bug in v8.2.1


What do you think is wrong with those answers?

regards, tom lane



Re: [HACKERS] Table clustering idea

2006-06-27 Thread Kim Bisgaard

Jim C. Nasby wrote:

On Sun, Jun 25, 2006 at 08:04:18PM -0400, Luke Lonergan wrote:
  

Other DBMS have index organized tables that can use either hash or btree
organizations, both of which have their uses.  We are planning to
implement btree organized tables sometime - anyone else interested in
this idea?



I'm curious how you'll do it, as I was once told that actually trying to
store heap data in a btree structure would be a non-starter (don't
remember why).
  
Ingres is now open source - they have clustering on btree/isam/hash 
(it's called "modify table xx to btree on col1,col2")


Regards,
Kim




[HACKERS] Call to build-in operator from new operator

2005-04-18 Thread Kim Bille
Greetings,

I don't really know if this is the correct place to ask this question; if not, please direct me to the correct mailing list.

I'm trying to develop a new operator for PostgreSQL (actually for
TelegraphCQ, which is an extension of PostgreSQL). Part of the
operator's procedure uses the @ operator. So my question is: how do I call
the on_pb function from inside my own function?
on_pb takes the argument 'PG_FUNCTION_ARGS', which is
defined in fmgr.h as 'FunctionCallInfo fcinfo', which in turn is defined
as a pointer to
'struct FunctionCallInfoData'. So my question boils down to:

What do I put into this struct to call 'on_pb' with two arguments from my own function?


Further, is there a way to access data in tables other than those given as arguments to the function? And if so, how?
Sincerely
Kim Bille
Department of Computer Science
Aalborg University
Denmark
-- "Mind are like parachutes --- they only work when open"


[HACKERS] Project proposal/comments please - query optimization

2005-08-11 Thread Kim Bisgaard
I have noticed a deficiency in the current query optimizer related to
full outer joins. Tom Lane has confirmed to me that a fix will not be 8.1
material, and I am not able to wait for 8.2.


I am in the lucky situation that my project has money to hire
consultants, so I would be very interested in hearing from anyone who
feels able to work on this, with cost estimates. The software developed
shall be freely available and will be given back to PostgreSQL, if the
project wants it. I actually think it should be a requirement that the
software is accepted into PostgreSQL, but I do not know how to phrase that
so that it is acceptable to all parties.


The specific problem can be illustrated with two example queries.
Query1:

SELECT x, y, av, bv
FROM at a
FULL OUTER JOIN bt b
USING (x, y)
WHERE x = 52981
 AND y = '2004-1-1 0:0:0';

Query2:

SELECT x, y, av, bv
FROM
 (SELECT x, y, av
   FROM at
   WHERE x = 52981 AND y = '2004-1-1 0:0:0') a
 FULL OUTER JOIN
 (SELECT x, y, bv
   FROM bt
   WHERE x = 52981 AND y = '2004-1-1 0:0:0') b
 USING (x, y);

Both queries select the same set of data (one record), but Query2 is
able to use the indexes in doing so. Looking at the EXPLAIN ANALYZE
output, it is clear that this is because the current PostgreSQL query
optimizer is not able to push the conditions (x = 52981 AND y =
'2004-1-1 0:0:0') down into the sub-queries, thus forcing a fetch of
all data from the tables before filtering out the few
records (zero to one row from each table).


The reason I say it is related to full outer joins is that if I
take Query1 and substitute "full" with "left", the optimizer is capable
of pushing the conditions down into the sub-selects, and is thus able to
use indexes.


I am looking forward to any comments. I am aware that there are
workarounds (like Query2, a union of two left joins, hand-coding the join
from a series of simple selects, ...), but I do not feel they are
practical for my use.
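The equivalence being relied on here, filtering after a full outer join versus filtering each input first, can be checked on a toy in-memory join in plain Python (the rows and the column names av/bv are hypothetical; the rewrite is valid in this example because the predicate is strict on the USING columns, so it rejects null-extended rows):

```python
def full_outer_join(a, b, keys):
    out, matched = [], set()
    for ra in a:
        hits = [rb for rb in b if all(rb[k] == ra[k] for k in keys)]
        if hits:
            for rb in hits:
                matched.add(id(rb))
                out.append({**ra, **rb})
        else:
            out.append({**ra, 'bv': None})   # null-extend the b side
    for rb in b:
        if id(rb) not in matched:
            out.append({'av': None, **rb})   # null-extend the a side
    return out

at = [{'x': 52981, 'y': 't0', 'av': 1}, {'x': 1, 'y': 't0', 'av': 2}]
bt = [{'x': 52981, 'y': 't0', 'bv': 9}, {'x': 2, 'y': 't0', 'bv': 8}]
pred = lambda r: r['x'] == 52981 and r['y'] == 't0'

# Query1 shape: join first, filter after
q1 = [r for r in full_outer_join(at, bt, ['x', 'y']) if pred(r)]
# Query2 shape: filter each input, then join
q2 = full_outer_join([r for r in at if pred(r)],
                     [r for r in bt if pred(r)], ['x', 'y'])
assert q1 == q2   # same single row either way
```

The second form touches only the matching rows of each input, which is the whole point of pushing the predicate down: an index on (x, y) can then serve each side.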


Regards,

--
Kim Bisgaard

Computer Department  Phone: +45 3915 7562 (direct)
Danish Meteorological Institute  Fax: +45 3915 7460 (division)




Re: [HACKERS] Project proposal/comments please - query optimization

2005-08-12 Thread Kim Bisgaard




Tom Lane wrote:

  Kim Bisgaard <[EMAIL PROTECTED]> writes:
  
  
I have noticed a deficiency in the current query optimizer related to 
"full outer joins". Tom Lane has confirmed to me that it will not be 8.1 
material.

  
  
... There
are related issues involving N-way joins that we're still not very
good at.
...

Consider this version of Kim Bisgaard's example:
	SELECT FROM a join (b full join c using (id)) using (id)
If A is small and B,C have indexes on ID then it is interesting to consider
a plan like
	Nest Loop
		Scan A
		Merge Full Join
			Indexscan B using id = outer.id
			Indexscan C using id = outer.id
We are fairly far from being able to do this. generate_outer_join_implications
could easily be modified to generate derived equalities (I think it works to
allow a deduction against any clause not overlapping the outerjoin itself)
but the planner would want to evaluate them at the wrong level, and the
executor doesn't have support for passing the outer variable down more than
one level of join.  This is why the existing hack works only for equalities
to pseudoconstants.  We could maybe mark join RestrictInfos as "valid only
below xxx" and ignore them when processing a join that includes all of the
indicated rels?  Still not clear how you get the planner to recognize the
above as an inner indexscan situation though.

The query samples I gave were the smallest tests I could find to provoke
the behavior. Tom is right in that the full case I ideally want
solved is of the form above, with many (fewer than 20) full-outer-joined
tables.

I am still a little intrigued by the effect of substituting "full" with
"left" in my examples; maybe an alternative to Tom's idea could be to
work in the direction of treating "full" more like "left"/"right".

There are some examples of my problems (on nightly builds of yesterday)
at the bottom of the mail.

Regards,
Kim.

obsdb=> explain analyse select
wmo_id,timeobs,temp_dry_at_2m,temp_grass
from station
 join (temp_grass
 full join temp_dry_at_2m using (station_id, timeobs)
 ) using (station_id)
where wmo_id=6065 and '2004-1-2 6:0' between startdate and enddate
and timeobs = '2004-1-2 06:0:0';

 
QUERY PLAN
---
 Hash Join  (cost=5.04..372928.92 rows=1349 width=28) (actual
time=23986.480..46301.966 rows=1 loops=1)
   Hash Cond: (COALESCE("outer".station_id, "outer".station_id) =
"inner".station_id)
   ->  Merge Full Join  (cost=0.00..338124.90 rows=6957100 width=32)
(actual time=23965.761..46281.380 rows=76 loops=1)
 Merge Cond: (("outer".timeobs = "inner".timeobs) AND
("outer".station_id = "inner".station_id))
 Filter: (COALESCE("outer".timeobs, "inner".timeobs) =
'2004-01-02 06:00:00'::timestamp without time zone)
 ->  Index Scan using temp_grass_idx on temp_grass 
(cost=0.00..75312.59 rows=2406292 width=16) (actual
time=12.436..4916.043 rows=2406292 loops=1)
 ->  Index Scan using temp_dry_at_2m_idx on temp_dry_at_2m 
(cost=0.00..210390.85 rows=6957100 width=16) (actual
time=13.696..21363.054 rows=6956994 loops=1)
   ->  Hash  (cost=5.03..5.03 rows=1 width=8) (actual
time=19.612..19.612 rows=0 loops=1)
 ->  Index Scan using wmo_idx on station  (cost=0.00..5.03
rows=1 width=8) (actual time=19.586..19.591 rows=1 loops=1)
   Index Cond: ((wmo_id = 6065) AND ('2004-01-02
06:00:00'::timestamp without time zone >= startdate) AND
('2004-01-02 06:00:00'::timestamp without time zone <= enddate))
 Total runtime: 46302.208 ms
(11 rows)

obsdb=> explain analyse select
wmo_id,timeobs,wind_dir_10m,temp_dry_at_2m,temp_grass
from station
join (temp_grass
full join temp_dry_at_2m using (station_id, timeobs)
full join wind_dir_10m using (station_id, timeobs)
) using (station_id)
where wmo_id=6065 and '2004-1-2 6:0' between startdate and enddate
and timeobs = '2004-1-2 06:0:0';

 
QUERY PLAN
---
 Hash Join  (cost=1249443.54..1700082.70 rows=1392 width=40) (actual
time=331437.803..384389.174 rows=1 loops=1)
   Hash Cond: (COALESCE(COALESCE("outer".station_id,
"outer".station_id), "outer".station_id) = "inner".station_id)
   ->  Merge Full Join  (cost=1249438.51..166

[HACKERS] how to optimize for ia64

2003-10-23 Thread jinwoo Kim




Hello,

This is my first time to post. I have several questions.

Currently there is PostgreSQL for ia64. I was wondering whether the
previous version was optimized for ia64 or just ported.

Now I am looking for some way to optimize PostgreSQL specifically for
ia64 machines (I am thinking about modifying the cache part of
PostgreSQL).

Do you have any ideas about this? What kinds of methods are possible to
improve performance on ia64? And how can I get more specific information
about this?

Thanks.
 


Re: [HACKERS] source documentation tool doxygen

2006-01-16 Thread Kim Bisgaard

Try following the link (the Doxygen icon) - it has both a tutorial and
extensive documentation.

Regards,
Kim Bisgaard

Thomas Hallgren wrote:

I wish I'd had this when I started working with PostgreSQL. This
looks really good. Very useful indeed, even without the comments. What
kind of changes are needed in order to get the comments in?


Regards,
Thomas Hallgren

Joachim Wieland wrote:


I've created a browsable source tree "documentation", it's done with the
doxygen tool.

http://www.mcknight.de/pgsql-doxygen/cvshead/html/

There was a discussion about this some time ago, Jonathan Gardner
proposed it here:

http://archives.postgresql.org/pgsql-hackers/2004-03/msg00748.php

quite a few people found it useful but it somehow got stuck. The
reason was apparently that you'd have to tweak your comments in order
to profit from it as much as possible.

However I still think it's a nice-to-have among the online documentation,
and it gives people quite new to the code the possibility to easily check
what is defined where, and how... What do you think?

doxygen can also produce a PDF but I haven't succeeded in doing that
so far; pdflatex keeps bailing out. Has anybody else succeeded in
building it yet?




Joachim






Re: [HACKERS] Question about ALTER TABLE SET TABLESPACE locing

2006-02-02 Thread Kim Bisgaard

Hannu Krosing wrote:

> On Wed, 2006-02-01 at 18:08, Tom Lane wrote:
>> Hannu Krosing <[EMAIL PROTECTED]> writes:
>>> Does ALTER TABLE SET TABLESPACE lock the table
>>
>> It had better ... see nearby discussion about relaxing locking for
>> TRUNCATE.
>
> Is it some recent discussion?

it is "[PERFORM] partitioning and locking problems" (on the performance
list)


Regards,



[HACKERS] Re: [ADMIN] User administration tool

2001-03-31 Thread Jan T. Kim

On Fri, Mar 30, 2001 at 10:48:54AM -0500, Bruce Momjian wrote:
> > Bruce Momjian writes:
> > 
> > > I have started coding a user/group administration tool that allows you
> > > to add/modify/delete users and groups.  I should have something working
> > > in a week.  I will look similar to my pgmonitor tool.
> > 
> > Pgaccess already does part of this.  If you're going to write it in Tcl/Tk
> > anyway, I think you might as well integrate it there.
> 
> Wow, I see.  I never suspected it did that too.  :-)  Seems I don't need
> to write anything, except perhaps add group capabilities to pgaccess.

Isn't phpPgAdmin yet another tool of this type? I haven't tried it myself,
(no need, myself being the only user...) but the web page
(http://www.greatbridge.org/project/phppgadmin/projdisplay.php) says:

Features include: 

* create and drop databases 
* create, copy, drop and alter
  tables/views/sequences/functions/indices/triggers 
* edit and add fields (to the extent Postgres allows) 
* execute any SQL-statement, even batch-queries 
* manage primary and unique keys 
* create and read dumps of tables 
* administer one single database 
* administer multiple servers 
* administer postgres users and groups 

Greetinx, Jan
-- 
 +- Jan T. Kim ---+
 |  *NEW* -->  email: [EMAIL PROTECTED]   |
 |  *NEW* -->  WWW:   http://www.inb.mu-luebeck.de/staff/kim.html |
 *-=<  hierarchical systems are for files, not for humans  >=-*




[HACKERS] The problem of making new system catalog

2004-03-09 Thread Kyoung Hwa Kim



Hello,

I made a new system catalog, and I think it was successful. But I got an
error when I tried a SELECT statement to get the contents of this
catalog. The error is "catalog is missing 2 attribute(s) for relid
16652". So I found the source code where this error occurs: it is in
relcache.c, which tries to get the attributes from pg_attribute. The
problem with my new system catalog is that the OID is not just 16652; it
has a blank before 16652. When I try to find this OID in pg_attribute
like "select * from pg_attribute where attrelid = 16652", I can't get a
result. But when I run the query like "select * from pg_attribute where
attrelid = %16652", I do get a result. I didn't touch anything for the
OID in pg_attribute. How can I solve this problem?

Thank you