On Fri, Jun 27, 2014 at 11:53 AM, Ravi Kiran
wrote:
> hi,
>
> I am using the Eclipse environment for running the programs in the
> executor; whenever I set breakpoints in a specific program in Eclipse,
> control goes to main.c and the process never comes back to the
> actual
hi,
I am using the Eclipse environment for running the programs in the
executor; whenever I set breakpoints in a specific program in Eclipse,
control goes to main.c and the process never comes back to the
actual program.
Is there any way that the process can be constrained only t
On 06/26/2014 02:14 AM, Rémi Cura wrote:
Hey,
thanks for your answer !
Yep, you are right, the functions I would like to test are going to be
called a lot (100k times), so even 15 ms per call matters.
I got to thinking about this.
100K over what time frame?
How is it being called?
--
Adri
The database is functioning fine now but I am anticipating a much higher
workload in the future. The table in question is probably going to have a
few million rows per day inserted into it when it gets busy, if it gets
very busy it might be in the tens of millions per day but that's
speculation at
This is what I was thinking, but I am worried about two things.
1. If there is a very large set of data in the table that needs to be moved,
this will be slow and might take locks, which would impact the performance
of the inserts and the updates.
2. Constantly deleting large chunks of data might ca
James Le Cuirot writes:
> Tom Lane wrote:
>> PG is not capable of executing queries that are not in transactions,
>> so yes, PQsendQuery will create a single-statement transaction if you
>> haven't sent BEGIN. However, there's a huge difference for the
>> purposes we're discussing here: PQsendQu
On Thu, 26 Jun 2014 11:02:09 -0700
Tom Lane wrote:
> James Le Cuirot writes:
> > This got me wondering what Rails uses. I dug into ActiveRecord and
> > found that apart from the odd call to PQexec with hardcoded single
> > statements, it uses PQsendQuery. The libpq docs state a few of the
> > di
A full dump and restore would definitely help. I tend not to suggest
that often because I work with very large databases that are usually
extremely cumbersome to dump and restore.
But yeah, if you can get a successful pg_dump from your database, a
restore should obviously clean up all of you
>
> > So here are my questions:
> >
> > 1) Is there any way to control this behavior of daterange(), or is it
> just
> > best to (for example) add 1 to the upper bound argument if I want an
> > inclusive upper bound?
>
> See link for question #3; namely use the three-arg version of daterange
> (typ
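For reference, a minimal sketch of the three-argument form of daterange() that the answer points at. The dates here are made up for illustration; the third argument is the bounds specification, and '[]' asks for an inclusive upper bound, which PostgreSQL then canonicalizes to the equivalent half-open [) form:

```sql
-- Two-argument form defaults to '[)': upper bound exclusive.
SELECT daterange('2014-01-01', '2014-06-30');
--  [2014-01-01,2014-06-30)

-- Three-argument form with '[]' makes the upper bound inclusive;
-- the canonical output shifts the upper bound up by one day.
SELECT daterange('2014-01-01', '2014-06-30', '[]');
--  [2014-01-01,2014-07-01)
```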
Hi,
We run postgres 9.3.3 on Centos 6.3, kernel 2.6.32-431.3.1. Every once in a
while, we see postgres processes spinning on semop:
Here is an output from an strace on a delete process:
root@site-db01a:~ # strace -p 53744
Process 53744 attached - interrupt to quit
semop(21692498, {{6, 1, 0}}, 1
On 25/06/2014 23:19, Dennis Ryan wrote:
> I am having trouble with the correct syntax to get this trigger function
> to compile. I have tried every combination of removing the ‘;’ characters,
> but the function will not compile. Can someone tell me what I am doing
> wrong? I am stumped. I will be adding
Adrian Klaver writes:
> On 06/26/2014 02:14 AM, Rémi Cura wrote:
>> On another internet page (can't find it anymore) somebody mentioned this
>> module loading at server startup, one way or another, but gave no
>> details. It seems that the "plpy" python module gets loaded by default,
>> wouldn't
On Thu, Jun 26, 2014 at 2:14 AM, Rémi Cura wrote:
> Hey,
> thanks for your answer !
>
> Yep, you are right, the functions I would like to test are going to be called
> a lot (100k times), so even 15 ms per call matters.
>
> I'm still a bit confused by a topic I found here :
> http://stackoverflow.co
James Le Cuirot writes:
> This got me wondering what Rails uses. I dug into ActiveRecord and
> found that apart from the odd call to PQexec with hardcoded single
> statements, it uses PQsendQuery. The libpq docs state a few of the
> differences but don't mention whether PQsendQuery automatically c
2014-06-26 18:26 GMT+02:00 Raymond O'Donnell :
> On 25/06/2014 23:19, Dennis Ryan wrote:
> > I am having trouble with the correct syntax to get this trigger function
> > to compile. I have tried every combination of removing the ‘;’ characters,
> > but the function will not compile. Can someone tell me
2014-06-26 18:28 GMT+02:00 Shaun Thomas :
> On 06/25/2014 05:19 PM, Dennis Ryan wrote:
>
>> CASE
>> WHEN NEW.period = 201001
>> THEN INSERT INTO sn_dm_b.pm201001 VALUES (NEW.*);
>> END;
>
> You can't just have a bare CASE statement in plpgsql. Try this:
>
>
> CREATE OR REPL
On 26/06/2014 17:26, Raymond O'Donnell wrote:
> On 25/06/2014 23:19, Dennis Ryan wrote:
>> I am having trouble with the correct syntax to get this trigger function
>> to compile. I have tried every combination of removing the ‘;’ characters,
>> but the function will not compile. Can someone tell me what
On 06/25/2014 05:19 PM, Dennis Ryan wrote:
CASE
WHEN NEW.period = 201001
THEN INSERT INTO sn_dm_b.pm201001 VALUES (NEW.*);
END;
You can't just have a bare CASE statement in plpgsql. Try this:
CREATE OR REPLACE FUNCTION sn_dm_b.pm_insert_trigger()
RETURNS TRIGGER AS $$
BEG
Hello
You are using the PL/pgSQL CASE statement; it starts with the CASE keyword
and finishes with the END CASE keywords.
CREATE OR REPLACE FUNCTION sn_dm_b.pm_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
CASE
WHEN NEW.period = 201001
THEN INSERT INTO sn_dm_b.pm201001 VALUES (NEW.*);
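The snippet above is cut off, but a complete version of the trigger function it describes would look roughly like this. The ELSE branch and the RETURN NULL are assumptions on my part (the original message is truncated before they appear); the key point is that a PL/pgSQL CASE statement is closed with END CASE, not a bare END:

```sql
CREATE OR REPLACE FUNCTION sn_dm_b.pm_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    CASE
        WHEN NEW.period = 201001
            THEN INSERT INTO sn_dm_b.pm201001 VALUES (NEW.*);
        -- additional WHEN arms go here, one per partition table
        ELSE
            RAISE EXCEPTION 'no partition for period %', NEW.period;
    END CASE;      -- plpgsql CASE ends with END CASE, not END;
    RETURN NULL;   -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;
```

Attached to the parent table with a standard BEFORE INSERT FOR EACH ROW trigger, this routes each row into its period partition.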
Hi Shaun,
We reindexed all the primary and unique keys of all the tables, but we
did not reindex the tables themselves. Do you think we should do that as well?
Yes, you need to reindex. Part of the problem with this kind of table
corruption is that PostgreSQL has applied data and index page
modification
Thanks Shaun.
We reindexed all the primary and unique keys of all the tables, but we
did not reindex the tables themselves. Do you think we should do that as well?
Also, do you think we should do a clean dump and restore to eliminate all
data inconsistencies?
One more query:
We managed to get the old server
I am having trouble with the correct syntax to get this trigger function to
compile. I have tried every combination of removing the ‘;’ characters, but
the function will not compile. Can someone tell me what I am doing wrong? I
am stumped. I will be adding additional WHEN clauses to the CASE statement once
On 06/26/2014 10:34 AM, Karthik Iyer wrote:
Any inputs here? You think a pgdump and restore would help more ?
A full dump and restore would definitely help. I tend not to suggest
that often because I work with very large databases that are usually
extremely cumbersome to dump and restore.
On 06/26/2014 10:47 AM, Marti Raudsepp wrote:
This deserves a caveat: in the default "read committed" isolation
level, this example can delete more rows than it inserts;
This is only true because I accidentally inverted the date resolutions.
It should have been:
BEGIN;
INSERT INTO my_table_
On Thu, Jun 26, 2014 at 5:49 PM, Shaun Thomas wrote:
> Then you create a job that runs however often you want, and all that job
> does, is move old rows from my_table, to my_table_stable. Like so:
>
> BEGIN;
> INSERT INTO my_table_stable
> SELECT * FROM ONLY my_table
> WHERE date_col >= now() - I
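The quote above is truncated, so here is a sketch of what such a rotation job would look like in full. The table names come from the quote; the interval, the direction of the comparison, and the paired DELETE are my assumptions (note the quoted snippet's `>=` appears to be the inversion Shaun mentions correcting elsewhere in the thread, since a job moving *old* rows should select dates *before* the cutoff):

```sql
-- Periodic job: move rows older than the cutoff from the hot table
-- to the stable one. Run in a single transaction so no reader sees
-- a row missing from both tables. now() is fixed at transaction
-- start, so both statements use the same cutoff.
BEGIN;

INSERT INTO my_table_stable
SELECT * FROM ONLY my_table               -- ONLY: skip child tables
WHERE  date_col < now() - INTERVAL '1 day';

DELETE FROM ONLY my_table
WHERE  date_col < now() - INTERVAL '1 day';

COMMIT;
```

Per the read-committed caveat raised later in the thread, a concurrent transaction that commits an old-dated row between the two statements could have that row deleted without it ever being copied; running the job at SERIALIZABLE, or locking out such writers, sidesteps that.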
On 06/26/2014 02:29 AM, Tim Uckun wrote:
I have a use case in which the most recent data experiences a lot of
transactions (inserts and updates) and then the churn kind of calms
down. Eventually the data is relatively static and will only be
updated in special and sporadic events.
I was thin
On 06/26/2014 09:44 AM, Karthik Iyer wrote:
We reindexed all the primary and unique keys of all the tables, but we
did not reindex the tables themselves. Do you think we should do that as well?
Yes, you need to reindex. Part of the problem with this kind of table
corruption is that PostgreSQL has applied d
On 06/26/2014 04:29 AM, Tim Uckun wrote:
I don't think partitioning is a good idea in this case because the
partitions will be for small time periods (5 to 15 minutes).
Actually, partitioning might be exactly what you want, but not in the
way you might think. What you've run into is actually
On Thu, Jun 26, 2014 at 7:59 AM, James Le Cuirot
wrote:
> On Thu, 26 Jun 2014 07:23:02 -0500
> Merlin Moncure wrote:
>> To be clear, Tom was advising not to rely on some of the quirky
>> aspects of -c. psql as it stands right now has some limitations:
>> single transaction mode does not work w
On 06/26/2014 02:14 AM, Rémi Cura wrote:
Hey,
thanks for your answer !
Yep, you are right, the functions I would like to test are going to be
called a lot (100k times), so even 15 ms per call matters.
I'm still a bit confused by a topic I found here :
http://stackoverflow.com/questions/15023080/h
On Thu, 26 Jun 2014 07:23:02 -0500
Merlin Moncure wrote:
> On Thu, Jun 26, 2014 at 4:30 AM, James Le Cuirot
> wrote:
> > On Wed, 25 Jun 2014 13:21:44 -0500
> > Merlin Moncure wrote:
> >
> >> > The cookbook currently uses PQexec so multiple SQL commands are
> >> > wrapped in a transaction unless
On Thu, Jun 26, 2014 at 4:30 AM, James Le Cuirot
wrote:
> On Wed, 25 Jun 2014 13:21:44 -0500
> Merlin Moncure wrote:
>
>> > The cookbook currently uses PQexec so multiple SQL commands are
>> > wrapped in a transaction unless an explicit transaction
>> > instruction appears. I don't want to change
On Wed, 25 Jun 2014 17:30:15 +0200
hubert depesz lubaczewski wrote:
> On Wed, Jun 25, 2014 at 5:18 PM, James Le Cuirot
> wrote:
>
> > > Also - I have no idea what "peer authentication" has to do with Pg
> > > gem - care to elaborate? The gem is for client, and authentication
> > > happens in se
On Wed, 25 Jun 2014 10:34:57 -0500
Jerry Sievers wrote:
> > The cookbook currently uses PQexec so multiple SQL commands are
> > wrapped in a transaction unless an explicit transaction
> > instruction appears. I don't want to change this behaviour but
> > the only way to get exactly the same effec
On Wed, 25 Jun 2014 13:21:44 -0500
Merlin Moncure wrote:
> > The cookbook currently uses PQexec so multiple SQL commands are
> > wrapped in a transaction unless an explicit transaction
> > instruction appears. I don't want to change this behaviour but
> > the only way to get exactly the same effe
I have a use case in which the most recent data experiences a lot of
transactions (inserts and updates) and then the churn kind of calms down.
Eventually the data is relatively static and will only be updated in
special and sporadic events.
I was thinking about keeping the high churn data in a di
On Wed, 25 Jun 2014 09:04:44 -0700
Tom Lane wrote:
> James Le Cuirot writes:
> > hubert depesz lubaczewski wrote:
> >> Perhaps you can explain what is the functionality you want to
> >> achieve, as I, for one, don't understand. Do you want transactions?
> >> Or not?
>
> > I want an implicit tr
Hey,
thanks for your answer !
Yep, you are right, the functions I would like to test are going to be called
a lot (100k times), so even 15 ms per call matters.
I'm still a bit confused by a topic I found here :
http://stackoverflow.com/questions/15023080/how-are-import-statements-in-plpython-handle