ower(bin_range)+1, upper(bin_range)+5) as r
> From bin_data) as b;
>
The value '(,)' signifies a range with no lower bound and no upper
bound. So '{(,)}' is the multirange that contains just that range
(unbounded in either direction)---in other words not an empty list but
ev
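For anyone following along, a quick sketch of the distinction (PostgreSQL 14+, using int4multirange as a concrete instance):

```sql
-- '{}' is the empty multirange; '{(,)}' holds a single unbounded range
SELECT isempty('{}'::int4multirange);      -- t
SELECT isempty('{(,)}'::int4multirange);   -- f
SELECT '{(,)}'::int4multirange @> 42;      -- t: it contains every integer
```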
Hello,
I have been struggling to link against a current version of libpq on macOS
in Xcode. It seems that nothing works; as far as I have found so far, nobody on
the web has succeeded either. I am trying to write a small
sample client. I started with the public source download to
Hi Team,
My name is Sabjar Ali, working for one of the MNC companies. I am planning
to do PostgreSQL professional certification, but I don't
find any link for it on the postgresql.org site.
Please advise me on how I can approach my PostgreSQL professional
certification.
Thanks
Sabjar Ali
Dear pgsql-general,
I have successfully sent an (updated) email list message to
pgsql-hack...@lists.postgresql.org
which can be viewed in the archive here:
https://www.postgresql.org/message-id/Rm7WBtZZpYL1NWuunuN_16EY0JcyejqV22z_JlUfvD5FYPenGDd_ZGUddwAcMNi2KNSyvqAhBRzj2JxtJoNmWAzykBQU4Z1AzBp0G
I started using the latest postgres 14.2 and the query plans there for a
simplified query joining just 2 tables look like:
explain analyze select t.*, ta.* from team as t, team_aliases as ta where
ta.team_id = t.id and t.id > 2 order by t.id limit 1000;
QUERY P
imizer would be able to infer it itself.
Thank you
On Thu, Feb 3, 2022 at 4:24 PM Tomas Vondra
wrote:
>
>
>
> On 2/3/22 20:32, A Shaposhnikov wrote:
> > I made a mistake yesterday claiming that the created statistics
> > changed the row counts in the estimates - it did no
y low cost regardless of size. Another guess is that it almost
ignores the "limit" parameter or uses it in some incorrect way.
On Thu, Feb 3, 2022 at 12:31 PM Vijaykumar Jain
wrote:
>
>
> On Fri, 4 Feb 2022 at 01:03, A Shaposhnikov wrote:
>>
>> I made a mistake yeste
I made a mistake yesterday claiming that the created statistics
changed the row counts in the estimates - it did not - I looked at
the wrong query yesterday. In the correct query plan the row estimate
still differs from the actual by many orders of magnitude:
Nested Loop (cost=1.01..27584834.53
Hi everyone.
Can someone tell me why these two equivalent queries, one involving a
"naked" EXISTS
versus one involving an EXISTS inside a SELECT statement perform so
differently?
I can see that the slow one scans the entire child table while the fast one
only scans children
that hav
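Since the original tables aren't shown, here is a hedged sketch of the two query shapes being compared, with hypothetical parent/child tables:

```sql
-- "naked" EXISTS in WHERE: the planner can turn this into a semi-join
SELECT p.*
FROM parent p
WHERE EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);

-- EXISTS wrapped in a scalar sub-SELECT: typically evaluated as a
-- subplan per row, which can force scanning far more of child
SELECT p.*
FROM parent p
WHERE (SELECT EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id));
```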
-In PostgreSQL, what is the difference between Roles, Groups, and Policies
exactly? What are they each, how do they relate to one another, and which
one(s) can be assigned to another?
-Is there a simple, immediate, straightforward way to temporarily disable
PostgreSQL database
use of all other
To whom it may concern,
I am trying to get a project completed to enhance PostgreSQL's arithmetic and
elementary-function prowess by means of two new High Precision mixed decimal
number types in a self-installing extension. I hope for this to be a free or
low-cost project.
Is there
Is there
'What I am trying to find is PostgreSQL support for High Precision numbers,
running inside the database engine itself. PostgreSQL, presently at default,
has only got limited function support, on the range of a double.'
My needs for High Precision Arithmetic require new types and lo
e.
I am aware of the pgmp project. While I have contacted it and am
awaiting their response, it is the case that the present state of pgmp
is not enough for my needs.
PostgreSQL does not have a private doubly linked list available,
and so I can't assemble all this for myself, given tha
1) Are there free scripts for CREATE TYPE (native type), more advanced,
or sorts of types out there, online, free for commercial use? With
function support, too? Can someone reply with a link or a suggestion?
2) How may I get PostgreSQL to output the create table statement(s) for one or
more
1. If I am using the COPY command for input and output around *.csv files,
how may I specify internal tuple delimiters of ',', but at the same time also
specify an end of line
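As a hedged sketch (hypothetical table and path): DELIMITER controls the in-tuple separator, while COPY itself always treats a newline (or \r\n) as the end of a row; the row terminator is not configurable:

```sql
COPY my_table (col1, col2)
FROM '/path/to/data.csv'
WITH (FORMAT csv, DELIMITER ',', HEADER true);
```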
Thanks, works.
Sent from my iPhone
> On Apr 3, 2021, at 11:02, Joe Conway wrote:
>
> On 4/2/21 7:06 PM, A. Reichstadt wrote:
>> Hello,
>> I am trying to deny access to all databases on my server if the user "postgres"
>> tries to connect from a non-local host. H
Hello,
I am trying to deny access to all databases on my server if the user "postgres"
tries to connect from a non-local host. Here is what I did in pg_hba.conf:
# TYPE   DATABASE   USER   ADDRESS   METHOD
# "local" is for Unix domain socket connections o
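For reference, a minimal pg_hba.conf sketch along those lines (hypothetical addresses and methods; the first matching line wins, so the reject entries must come after the local/loopback ones):

```
# TYPE   DATABASE   USER       ADDRESS         METHOD
local    all        postgres                   peer
host     all        postgres   127.0.0.1/32    md5
host     all        postgres   ::1/128         md5
host     all        postgres   0.0.0.0/0       reject
host     all        postgres   ::/0            reject
```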
lex except for tablex.col1 AND tablex.col2, and
> also select tabley.col1"
> ?
I take the proposal to mean this:
SELECT listOfColumns [EXCEPT listOfColumns] FROM ...
not this:
SELECT listOfColumns [EXCEPT (listOfColumns) [listOfColumns [EXCEPT
(listOfColumns)]]]... FROM ...
So there
Hi,
I'm working on configuring high availability for pgpool using watchdog.
Initially, I tried with two pgpool nodes (along with a pgmaster and
pgslave). In this scenario, assuming pgpool node 1 was started first and
became the leader. After some time, the node got disconnected with pgpool
Hello,
I'm reading the docs about the Postgres Rule system here:
https://www.postgresql.org/docs/12/rules-views.html
That page says:
> It turns out that the planner will collapse this tree into a two-level query
> tree: the bottommost SELECT commands will be “pulled up” into
> That said, I think it is reasonable that a PostgreSQL JSON function
> behaves in the way that JSON users would expect, so here is my +1 for
> interpreting an SQL NULL as a JSON null in the above case
Just to chime in as another application developer: the current
functionality does se
le the extra data at the master. I'm not sure what would happen but
> killall in general is a dangerous command.
>
> On Thu, 3 Oct 2019, 7:00 am Shital A, wrote:
>
>>
>>
>> On Thu, 3 Oct 2019, 00:08 Ravi Krishna, wrote:
>>
>>> >
>>> >
On Thu, 3 Oct 2019, 00:08 Ravi Krishna, wrote:
> >
> > As the failed primary is having more data, How is it possible that
> primary is committing transaction before they were applied on standby with
> synchronous_commit=remote_apply?
>
> If I am not mistaken remote_apply is only from ver 11.
>
H
Hello,
We are seeing a strange issue with postgresql streaming replication in sync
mode.
We are using postgresql 9.6, an old version because of some specific
requirements. We have set up a cluster with master-standby using pacemaker.
When we kill the master using killall -9 postgres, the failed primary
Hello All,
We are in the process of testing a Postgres 9.6 active-passive HA cluster using
streaming replication on Rhel 7.5.
We are seeing the below issue with the setup:
1. When a failover is triggered while data is being added to the primary
by killing the primary (killall -9 postgres), the failed
On Mon, 23 Sep 2019, 00:46 Shital A, wrote:
>
> Hello,
>
> We have set up an active-passive cluster using streaming replication on Rhel
> 7.5. We are testing pacemaker for automated failover.
> We are seeing below issues with the setup :
>
> 1. When a failover is triggered whe
On Sun, Sep 22, 2019 at 11:53 AM Tom Lane wrote:
> I thought of an easier-to-maintain approach to that part than having
> a reference file. Binary send functions are invokable from SQL,
> so we could just imagine adding test cases along the lines of
>
> regression=# sel
Hello,
We have set up an active-passive cluster using streaming replication on Rhel
7.5. We are testing pacemaker for automated failover.
We are seeing below issues with the setup :
1. When a failover is triggered when data is being added to the primary by
killing primary (killall -9 postgres), the
ge report says they aren't tested, nor are the send/recv
functions for ranges or arrays. When does Postgres actually use these
functions? Is there a good way to test them?
Thanks,
Paul
[1] https://www.postgresql.org/docs/11/xtypes.html
[2]
http://www.myrkraverk.com/blog/2019/08/postgresql-r
Hello,
We are working on a payments system which uses postgreSql 9.6 as the backend
DB and blockchain technology. The database is set up in HA master-standby
mode using pacemaker on Linux 7.6.
We are new to postgres and need help in deciding how to set the value for
max_connections on the DB.
1. How
On Tue, Aug 27, 2019 at 3:27 PM stan wrote:
> I need to write a function that, given a month and a year as input, returns
> the "work hours" in that month. In other words, something like
>
> 8 * (the count of all days in the range Monday to Friday) within that
> calendar m
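One way to sketch that (hypothetical month; this ignores holidays, and isodow runs 1 for Monday through 7 for Sunday):

```sql
SELECT 8 * count(*) AS work_hours
FROM generate_series(date '2024-02-01',
                     (date '2024-02-01' + interval '1 month') - interval '1 day',
                     interval '1 day') AS d
WHERE extract(isodow FROM d) < 6;   -- 168 for February 2024 (21 weekdays)
```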
On Mon, Aug 26, 2019 at 8:12 AM Tom Lane wrote:
>
> Paul A Jungwirth writes:
> > Is it expected for flinfo to be null when using DirectFunctionCall*?
>
> Yes, per the comment on those functions:
>
> * These are for invocation of a specifically named function with
Hello,
I'm writing a multirange_cmp function which calls range_cmp to compare
individual elements. It looks like this:
cmp = DatumGetInt32(DirectFunctionCall2(
range_cmp,
RangeTypePGetDatum(r1),
RangeTypePGetDatum(r2)));
But I get a segfault when range_cmp tries to
On Mon, 19 Aug 2019, 23:36 Shital A, wrote:
>
>
> On Mon, 19 Aug 2019, 18:47 Shital A, wrote:
>
>> Hello,
>>
>> Need advise on below situation:
>>
>> Postgres 9.6
>> Pacemaker 1.1.19
>> Corosync 2.4.3
>>
>> We are testing HA s
On Mon, 19 Aug 2019, 18:47 Shital A, wrote:
> Hello,
>
> Need advise on below situation:
>
> Postgres 9.6
> Pacemaker 1.1.19
> Corosync 2.4.3
>
> We are testing HA setup on a two node cluster using pacemaker, corosync
> stack. The replication is streaming replica
On Tue, 13 Aug 2019, 11:50 Shital A, wrote:
>
>
> On Mon, 12 Aug 2019, 18:54 Adrien Nayrat,
> wrote:
>
>> On 8/12/19 2:57 PM, Shital A wrote:
>> >
>> >
>> > Hello,
>> >
>> > Postgres version : 9.6
>> > OS:Rhel 7.
On Mon, 12 Aug 2019, 18:54 Adrien Nayrat,
wrote:
> On 8/12/19 2:57 PM, Shital A wrote:
> >
> >
> > Hello,
> >
> > Postgres version : 9.6
> > OS:Rhel 7.6
> >
> > We are working on HA setup for postgres cluster of two nodes in
> active-pas
ow:
1. Cluster is set up like A - B with A as master.
2. Kill services on A; node B will come up as master.
3. When node A is ready to join the cluster, we have to delete the lock file it
creates on any one of the nodes and execute the cleanup command to get the
node back as standby.
Step 3 is manual, so HA i
On Fri, 9 Aug 2019, 21:25 Adrian Klaver, wrote:
> On 8/9/19 8:14 AM, Shital A wrote:
> >
>
> > Hello,
>
> >
> > 4) What techniques have you tried?
> > Insert into with With statement, inserting 200 rows at a time. This
> > takes 40 mins.
>
On Fri, 9 Aug 2019, 20:08 Adrian Klaver, wrote:
> On 8/9/19 4:12 AM, Shital A wrote:
> > Hello
> >
> > Postgresql 9.6
> >
> > Need to generate 1GB of test data in very little time. I got some techniques
> > online but they take around 40 mins for 400GB. Any quic
Hello
Postgresql 9.6
Need to generate 1GB of test data in very little time. I got some techniques
online but they take around 40 mins for 400GB. Any quicker way?
Thanks.
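A common quick approach is generate_series (a sketch; the table shape and row count are hypothetical, so tune the count until the relation reaches the target size):

```sql
CREATE TABLE testdata AS
SELECT g AS id,
       md5(g::text) AS payload
FROM generate_series(1, 10000000) AS g;

-- check how big the table actually came out
SELECT pg_size_pretty(pg_total_relation_size('testdata'));
```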
this imply that either his table is partitioned or he
> > regularly moves records from the main table to the archive table?
> >
>
> Hi,
>
> Yes, he will need to do something to meet his goal of both a 100k TPS
> and have older archives online. He could also use something like
> po
On Mon, 5 Aug 2019, 12:42 Ron, wrote:
> On 8/5/19 1:30 AM, Shital A wrote:
> > Hello,
> >
> > Need inputs on below:
> >
> > We are working on setting up a new highly transactional (tps 100k)
> OLTP
> > system for payments using blockchain and postgr
Hello,
Need inputs on below:
We are working on setting up a new highly transactional (tps 100k) OLTP
system for payments using blockchain and postgresql 9.6 as the DB on Rhel 7.6.
The Postgres version is 9.6 and not the latest because of the specs of the
blockchain component.
There is a requirement for data
Hi,
I'm exploring the options to migrate the postgresql master and slave, along with
pgpool, from ubuntu 14.04 to 18.04. Please help me understand the following.
1. What are the available options for migrating from one version of the O/S to
the other (here ubuntu 14.04 to 18.04)?
2. Document references.
3. An
Thanks for your response.
Please check the following details.
1) Size of Postgres cluster.
A. The number of nodes per cluster is 3 - 1 pgmaster, 1 pgslave and 1
pgpool.
There are clusters of multiple sizes - 500 GB, < 500 GB and a few higher
than that.
2) Method of migration:
Dump/rest
Hi,
I'm working on migrating postgresql 9.6 database from ubuntu 16.04 to
18.04. Please let me know if there are any document references and best
practices that can be followed.
Thanks,
venkatesh.
Hi!
Is it possible to determine which RLS policies failed on INSERT OR UPDATE?
For instance, this is the error returned when inserting a new record:
**ERROR: new row violates row-level security policy for table "my_table"**
Is it possible to obtain the name of the policy(ies) that
Dear all,
I am new to postgresql and I am creating a c program that can receive requests
from different users. I want to maximize the performance of postgresql, so I
intend to create just 1 connection that would service all queries of different
users. How do I do this?
Do I create a new
Hi,
I am using Jboss EAP 6.1. For the database I am using postgres
9.3. Can I upgrade to postgres 9.6 without any issues? I am also
considering implementing BDR 2.0; please let me know how I can get the
licensing details.
--
with regards
A H S Phanindra
May I ask the proper (fast) way of inserting bulk data? If possible, an
example would be very much appreciated.
Please note that the data I would like to insert contains arrays (float8[]).
By the way, my environment is Windows 10, PostgreSQL 11 and compiling with
Visual Studio 2017.
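Not to preempt the answers, but COPY is usually the fastest bulk path, and it accepts array values as quoted text literals; a minimal sketch with a hypothetical table, driven from a psql session:

```sql
CREATE TABLE samples (id int, vals float8[]);

-- in libpq the same stream can be fed through PQputCopyData
COPY samples FROM STDIN WITH (FORMAT csv);
1,"{1.5,2.5,3.5}"
2,"{4.0,5.0}"
\.
```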
On Fri, Nov 2, 2018 at 6:21 AM Laurenz Albe wrote:
> I have no idea how to improve that, sorry.
Heh, thank you for looking. From your response I've got a vision of a
medical drama where the doctor looks over some test results and sadly
informs the patient "I'm sorry, there
Well, so as to not spam the list here's a link to a folder with copies
of the output:
https://drive.google.com/open?id=1lSlx7UMUMNgRft2B3Rq2zc4WIRJLLOqU
On Fri, Nov 2, 2018 at 4:12 AM James A. Robinson wrote:
>
> Thank you. Do you need the entire output of 'EXPLAIN (ANALYZE,
>
Thank you. Do you need the entire output of 'EXPLAIN (ANALYZE,
BUFFERS) ...', or will just the sub-section that I initially e-mailed for the
plain 'EXPLAIN ...' do?
On Thu, Nov 1, 2018 at 10:50 AM Ravi Krishna wrote:
> [...] What I need is a constant refresh.
> We plan to use it daily to replicate data from db2 to pg.
Perhaps you've already considered and discarded the idea, but your use
case made me think back to when I was looking at AWS SCT
On Thu, Nov 1, 2018 at 10:28 AM Ravi Krishna wrote:
>
> I have a project to develop a script/tool to copy data from DB2 to PG. The
> approach I am thinking is
>
> 1. Export data from db2 in a text file, with, say pipe as delimiter.
> 2. Load the data from the text file to PG
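Step 2 could then be a single COPY (hypothetical path and table name; NULL '' maps empty fields to SQL NULL):

```sql
COPY target_table FROM '/tmp/db2_export.txt'
WITH (FORMAT text, DELIMITER '|', NULL '');
```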
Hello,
I'm newly exposed to a Postgres 9.4 database system, and am
trying to understand how I might optimize a query that is taking
a long time to return.
What I'm observing is an uncached query that takes much much
longer to complete, sometimes minutes longer, when
enable_bitmapsc
Hi, and thanks for the answer.
Nope. The problem is suggested, if not exactly clearly explained,
by the error message: casting a literal to ddate isn't a guaranteed
fixed process. That wasn't clear enough to me. For example, suppose you created this table and then
did
alter domain ddate
Thank you very much. Creating a function seems to be a good idea :)
-- Original message --
From: "David G. Johnston";
Sendtime: Thursday, Aug 30, 2018 8:31 PM
To: "a"<372660...@qq.com>;
Cc: "pgsql-general";
Subject: R
Sendtime: Friday, Aug 31, 2018 6:24 AM
To: "a"<372660...@qq.com>;
Cc: "pgsql-general";
Subject: Re: Ways to deal with large amount of columns;
a <372660...@qq.com> writes:
> Hi all:
>
>
> I need to make a table contains projected monthly cashflow f
Hi all:
I need to make a table that contains projected monthly cashflow for multiple
agents (around 10,000).
Therefore, the column number would be 1000+.
I would need to perform simple aggregate functions such as count, sum or average
on each projected cashflow.
So if there is any way of doing
On Fri, Aug 24, 2018 at 2:01 AM, a <372660...@qq.com> wrote:
>
> Say if I have an float8 array:
>
> id| data
> --|---
> a | {1,2}
> b | {2,4}
>
> . . .
>
> Since I would have around 200,000 rows, I would prefer it having enough
> capacit
Say if I have an float8 array:
id| data
--|---
a | {1,2}
b | {2,4}
If I could using query to make it looks like this:
id | data[1] | data[2]
---|---------|--------
a  | 1       | 2
b  | 2       | 4
Since I would have around 200,000 rows, I would prefer it having
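For the reshaping itself, plain subscripts already do it (hypothetical table name t); whether that is efficient enough for 200,000 rows is a separate question:

```sql
-- array subscripts address elements directly, one output column each
SELECT id, data[1] AS data1, data[2] AS data2
FROM t;
```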
Hey, I fixed the problem by using web pgAdmin, along with an upgrade to 4.3.1.
I had installed pgAdmin 4.3 under .\PostgreSQL\10\pgadmin, so it would
start up a window instead of a web viewer.
I deleted it, re-installed 4.3.1, and started it with a web browser; the
problem
Hi, since I'm not at the test environment, the answers may not be very exact.
1. The data was set to load from the oracle server at 0 am of each work day.
There are 3 ways to determine the right results:
a. data from the oracle server;
b. there is going to be a statement summary in
Hi
I'm doing a normal query in pgAdmin; my server platform is win server 2008, my
laptop is win10, both using pgsql 10.
The results of the query show that 8488 rows are selected, which is correct.
However, when I drag down in the data output window, it shows more than 10
rows.
Further
b.com/pjungwir/range_agg
In the README I give instructions to create a new aggregate with a
custom range type, and it is actually not bad at all, since the same C
function can serve as implementation in all cases.
Thanks again,
Paul
Hello,
I want to make an aggregate function range_agg(anyrange) that returns
anyrange[]. But when I try to define it, Postgres tells me it doesn't
know what an anyrange[] is. I get this error:
ERROR: type anyrange[] does not exist
I also tried taking an anyrange and returning an anyarray, which
Hi
I'm using pgAdmin 4, pgAgent and postgresql 10 on windows server.
I created a job but, for some reason, it's running a long time. Is there a way
that I can terminate it?
Thanks
Shore
Sorry I was in the wrong db last time
TESTDB=# \dn
   Name   |  Owner
----------+----------
 MjorTbl  | postgres
 Rate     | postgres
 public   | postgres
-- Original --
From: "Adrian Klaver";;
Date: Jun 14, 2018
To: "a&qu
Hey thank you~
postgres=# \dn
  Name   |  Owner
---------+----------
 pgagent | postgres
 public  | postgres
(2 rows)
-- Original --
From: "Adrian Klaver";
Date: Thursday, Jun 14, 2018 12:06 PM
To: "a"<372660...@
rom public, while I would like to create a table using
the composite type, it reports an error:
create type "MjorTbl".decrmt as(
nodecrmt int4,
mortality "MjorTbl"."mort"
);
ERROR: type "MjorTbl.mort" does
Hi
I want to use the C library to connect to the pgsql server, and I have 2 questions:
1. Can I fetch a row's data directly instead of using PQgetvalue?
2. PQgetvalue will return all data as text, so I would have to convert it into
the relative data type. Is there a way of getting data in its original f
Thanks a lot for your reply.
I tried to set the host/client encoding; however, when I create new tables
using Chinese, it reported an encoding error, also with other queries.
May I ask the encoding of pgAgent? Might it be reading from a text file? Or I
can use notepad++ to convert and then
Hi, I'm using pgAgent to set up daily or monthly tasks.
I have tables with Chinese or Japanese names. They run OK in a normal
sql script, but are reported as non-recognizable in pgAgent SQL jobs.
I'm using windows server 2008, pgsql 10, and pgAgent 3.4; might there be a way of
Hey, the following is the table :
1 1 1 "f" 1 "2018-06-11 11:38:33.163731+08"
"00:00:00.033002" "Couldn't get a connection to the database!"
2 2 1 "f" 1 "2018-06-11
Hey, thanks so much.
I found the error msg: "Couldn't get a connection to the database!"
I've searched the internet for the issue; some say to replace pgpass.conf as
follows:
"localhost:5432:[dbname]:postgres:[password]"
but after I changed it, the job returns a
Hi all
I set up a pgagent job, but it says the job failed.
Where can I have a look at the error report message?
I'm using pgadmin 4.3, pgagent 3.4, postgresql 10; the platform is windows
server 2008.
Thank you so much.
Hi
I'm setting up agent jobs for postgresql. I'm on win server 2008 and
postgresql 10.
I set up the jobs using pgadmin 4.3.
However, the jobs are not executing; may I ask why?
Thank you so much for your suggestion; it is probably better to normalize
the data into policy data using multiple tables.
The millions of tables is not true (but there are around 60 database instances),
and the hundreds of lines of query are the actual queries provided by the
current db
Thank you so much. Did you mean section 8.15.6?
-- Original message --
From: "David G. Johnston";
Sendtime: Wednesday, May 23, 2018 9:18 PM
To: "a"<372660...@qq.com>;
Cc: "amul sul"; "pgsql-general";
Subje
Thank you so much. Let me discuss more of what I am doing, so that it might be
clearer, and perhaps you would be so kind as to provide your opinions
on this.
The target industry is the insurance industry, and the table is used to hold
policy data for an insurance company.
The key reason for
The key reason for
Thanks for your reply...
Honestly, I do not use java and don't really know json. All I understand is that
it is a text format that allows some customization.
However, as long as it can solve my problem, I'm happy to learn it.
Now I do have a complex structure of data to store
Hi, say I have a composite type and table
create type A as(
x float8,
y float8
);
create table B(
Ay A[]
);
insert into B
values(array[
(1,2)::A,
(3,4)::A]
);
How could I select the elements of Ay that satisfy x=3?
Thank you so much!!
Shore
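One way to sketch it: unnest the composite array and filter on a field of each element (the parentheses around elem are required for field access on a composite):

```sql
SELECT (elem).x, (elem).y
FROM B, unnest(B.Ay) AS elem
WHERE (elem).x = 3;
```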
From: "amul sul";
Send time: Wednesday, May 23, 2018 5:11 PM
To: "a"<372660...@qq.com>;
Cc: "pgsql-general";
Subject: Re: How do I copy an element of composite type array into csv file?
On Wed, May 23, 2018 at 2:05 PM, a <372660...@qq.com> wrote:
> Hi supp
Hi, suppose I have a composite type and table
create type A as(
x float8,
y float8
);
create table B(
Ay A[]
);
insert into B
values(array[
(1,2)::A,
(3,4)::A]
);
Now I would like to export the first element of table B into a csv file:
COPY B(Ay[1])
to 'E:/product
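COPY's column list only accepts plain column names, not subscripted expressions, so a hedged workaround is to COPY the result of a query instead (hypothetical output path):

```sql
COPY (SELECT Ay[1] FROM B) TO 'E:/out.csv' WITH (FORMAT csv);
```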
Thank you so much. BTW, may I ask one more question: how should I select
every first element of the array?
I know that "select p[:] from f" will print all elements of the p array, but I
would probably want to present the result as "select p[:].a from f", but the
sta
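A sketch of the per-row version: subscript first, then field access, with an extra set of parentheses around the subscripted expression:

```sql
-- field "a" of the first element of p, for every row of f
SELECT (p[1]).a FROM f;
```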
Hi:
I would like to import data from a csv table. But the table that it is going to
be inserted into is constructed with arrays and composite types, also with
arrays of composites.
I have tried many ways of inserting but failed. Can anyone help? Thank you so
much.
Hi:
I am writing a trigger that will record all adjustments of the row into the last
column of the row. I have written an iterator to produce the text record, and
the last step would be to add the text record into the last column (which is
designed as an array of a user-defined type, with the output function
BTW, I found another function called "lookup_type_cache", which returns
a pointer to a "TypeCacheEntry" structure.
The structure contains "TypeCacheEntry.typtype", which may also help to identify
the type.
But I'm concerned about possible memory le
Hey, thank you!!
Could you provide some more detailed information?
What function should I use to get the FormData_pg_type structure?
Thanks a lot.
-- Original message --
From: "Laurenz Albe";
Sendtime: Monday, May 14, 2018 4:11 PM
To: "a&qu
Hi, I am writing something that needs to identify the attribute category
inside a C function, and I need a way of accessing it quickly, but the
information is not stored in the FormData_pg_attribute structure.
Can anyone help? Thanks a lot!!!
Shore
Say that I am writing a C function that may face various types. Is there
a postgresql function that will automatically call the relative to_string
function and transform them to strings?
Thanks a lot. Would you please be so kind as to tell me more about what a top
level query is?
-- Original message --
From: "Pavel Stehule";
Sendtime: Thursday, May 10, 2018 6:38 PM
To: "a"<372660...@qq.com>;
Cc: "pgsql-general"
Hi, I would like to write a trigger that records every sql statement under the
affected entry. So, is it possible to retrieve the sql statement within a
trigger?
Hey all:
As far as I know, composite types and rows (in trigger functions) are passed
between PostgreSQL and C functions as tuples. However, I rarely find anything
relating to how to manipulate a tuple under C in the documentation.
The only thing I can find from the doc is very simple stuff like
Thank you for your reply;
Please allow me to ask a few more questions:
1. Since I'm writing a C trigger function, is there any method for me to get
some of the basic information like the following:
(1) Total number of rows;
(2) Rows' names;
(3) Values of OLD and NEW;
2
Hey:
What I want is to add a log entry in the last column of each row, which will
automatically record the history of updates and inserts when the relative
statement is processed.
I have read the documentation on triggers, which helps a lot. However, I may
have a few more extra requirements to complete my