Hi,
I have created a database that has a function that disables triggers
on tables, but when I execute the function (I created the
database with the same user that I'm using to execute the function):
[code]
select triggerall(false);
[/code]
it returns:
[code]
ERROR: permission denied: "RI_Con
On 10/18/2011 02:52 PM, Mark Priest wrote:
I am getting an Out of Memory error in my server connection process
while running a large insert query.
Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
OS: Windows 7 Professional (v.6.1, buil
Hi,
Thanks for the reply!
But I don't want to check whether the table exists; I want to see the
result of the SELECT query, i.e. whether a row is present or not.
tmp_tbl is a dynamically generated table name, but when I write the
code without EXECUTE, I get a syntax error too.
In this case how can I check if a SELEC
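A minimal sketch of the usual EXECUTE ... INTO approach, assuming PostgreSQL 9.0 or
later for the DO wrapper (my_tmp_tbl is only a placeholder; inside an existing
plpgsql function, only the EXECUTE part is needed):
[code]
DO $$
DECLARE
    tmp_tbl  text := 'my_tmp_tbl';  -- placeholder for the dynamically generated name
    has_rows boolean;
BEGIN
    -- EXECUTE does not set FOUND, so capture the result of the dynamic query instead
    EXECUTE 'SELECT EXISTS (SELECT 1 FROM ' || quote_ident(tmp_tbl) || ')'
        INTO has_rows;
    IF has_rows THEN
        RAISE NOTICE '% contains at least one row', tmp_tbl;
    ELSE
        RAISE NOTICE '% is empty', tmp_tbl;
    END IF;
END
$$;
[/code]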
On 10/18/2011 03:52 PM, Andre Lopes wrote:
Hi,
I have created a database that has a function that disables triggers
on tables, but when I execute the function (I created the
database with the same user that I'm using to execute the function):
[code]
select triggerall(false);
[/code]
retu
On 18 October 2011 09:57, wrote:
> Hi,
>
> Thanks for the reply!
> But I don't want to check whether the table exists; I want to see the
> result of the SELECT query, i.e. whether a row is present or not.
So you want to check that the table contains data? In that case it
makes no sense to create the table if it
Hi all
I'm once again trying to figure out which row of a 5000-record insert is
violating a constraint, and can't help thinking how nice it'd be if Pg
would report the contents of the row violating the constraint, or at
least the values that were tested by the constraint check.
It's really,
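A common workaround, sketched with made-up names (target_table, key_col, and the file
path are all placeholders): load the batch into a constraint-free staging table first,
then query for the rows that would fail.
[code]
-- Stage the 5000 rows with no constraints attached
CREATE TEMP TABLE staging (LIKE target_table INCLUDING DEFAULTS);
COPY staging FROM '/path/to/batch.csv' CSV;

-- Rows that would collide with data already in the target (e.g. a unique constraint on key_col)
SELECT s.*
FROM staging s
JOIN target_table t USING (key_col);

-- Duplicates within the batch itself
SELECT key_col, count(*)
FROM staging
GROUP BY key_col
HAVING count(*) > 1;
[/code]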
Hi,
I have created a database and all tables with a user, but I can't
execute this alter table:
[code]
xxx_database=> ALTER TABLE tdir_categories DISABLE TRIGGER ALL;
ERROR: permission denied: "RI_ConstraintTrigger_25366" is a system trigger
[/code]
What can I do to solve this?
Best Regards,
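Two approaches that usually get past the system-trigger restriction, sketched against
the tdir_categories table from the error message; note that both effectively suspend
foreign-key enforcement while they are in effect, and the second needs superuser rights.
[code]
-- Disable only user-defined triggers; the RI_ConstraintTrigger_* system triggers are left alone
ALTER TABLE tdir_categories DISABLE TRIGGER USER;

-- Or, as a superuser, keep triggers (including FK triggers) from firing for this session
SET session_replication_role = replica;
-- ... bulk work here ...
SET session_replication_role = DEFAULT;
[/code]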
Hello,
I have many SQL script files to update the schema, delete data, run unit tests, etc.
I want to run all the files in one transaction using a shell script to ease the
installation procedure. I can do that from the psql client by using the \i
option
BEGIN;
\i /.../module1.sql
\i /.../
On 18 October 2011 14:11, salah jubeh wrote:
> Hello,
>
> I have many SQL script files to update schema, delete data, unit test
> etc. I want to run all the files in one transaction using shell script
> to ease the installation procedure. I can do that from the psql client by
> using the \i o
Hello,
Thanks for the reply.
I considered cat as an option but did not go for it, because the number
of SQL files I have is large, which makes the code less readable.
The second and more important thing is that I get some advantages
with using -f, such as the line number which c
2011/10/18 salah jubeh :
> Hello,
> Thanks for the reply.
> I considered cat as an option but I did not go for it, because of the
> number of sql files I have is large which makes the code not readable
> The second thing, which is more important is because I have some advantages
> with using -f
On Tue, Oct 18, 2011 at 7:57 AM, Cédric Villemain
wrote:
> 2011/10/18 salah jubeh :
>> Hello,
>> Thanks for the reply.
>> I considered cat as an option but I did not go for it, because of the
>> number of sql files I have is large which makes the code not readable
>> The second thing, which is m
On Tuesday, October 18, 2011 2:42:00 am Andre Lopes wrote:
> Hi,
>
> I have created a database and all tables with a user, but I can't
> execute this alter table:
>
> [code]
> xxx_database=> ALTER TABLE tdir_categories DISABLE TRIGGER ALL;
> ERROR: permission denied: "RI_ConstraintTrigger_25366"
Thanks guys. As you have pointed out, I think the best solution is to go for cat
and set the appropriate options for psql (a sketch follows below).
Regards
From: Merlin Moncure
To: Cédric Villemain
Cc: salah jubeh ; "andr...@a-kretschmer.de"
; pgsql
Sent: Tuesday, October 18, 20
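A sketch of that cat-plus-psql approach, with placeholder file and database names:
[code]
# run all scripts as one transaction, stopping at the first error
cat module1.sql module2.sql module3.sql | psql --single-transaction -v ON_ERROR_STOP=1 -d mydb
[/code]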
Mark Priest writes:
> I am getting an Out of Memory error in my server connection process
> while running a large insert query.
> Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
> GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
> OS: Windows 7 Professional (v.6.1, build 7601 service
I was looking for an easier, more automatic way, but I wrote a few
scripts that wrapped the boolean fields in case statements as suggested.
Thanks,
Viktor
Henry Drexler wrote:
> couldn't you just wrap it in a case statement to change the t to true
> etc...?
>
> On Mon, Oct 17, 2011 at 2:29 PM,
I have Postgres set up for streaming replication and my slave box went down.
My question is, how long can that box stay down before it causes a material
impact on the master?
The archive_command that I use will not archive logs while the slave is down.
I know the obvious problems:
* you're no
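For the streaming side, the usual knob bounding how far behind a standby may fall when
the archive cannot be relied on is wal_keep_segments on the master; a hedged
postgresql.conf sketch:
[code]
# postgresql.conf on the master (9.0-style streaming replication)
# each WAL segment is 16 MB, so 512 segments keeps roughly 8 GB of WAL
# around for a returning standby to catch up from
wal_keep_segments = 512
[/code]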
Hi,
I am using PostgreSQL 8.4 on Windows XP, and also using Slony-I for
implementing master/slave support. I have a database with some tables
and a tablespace with some other tables in another location. My issue
is that after 30 days of continuous running, some of the files in the
PostgreSQL table sp
Dear Yogesh,
To get the best answers from community members, you need to provide complete
information like PG version, server/hardware info, etc., so that it helps
members assist you in the right way.
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
---
Regards,
Raghavendra
EnterpriseDB
On Tue, Oct 18, 2011 at 4:58 PM, David Kerr wrote:
> I have Postgres set up for streaming replication and my slave box went down.
>
> My question is, how long can that box stay down before it causes a material
> impact on the master?
>
> The archive_command that I use will not archive logs while
On 10/18/2011 09:44 AM, Simon Riggs wrote:
On Tue, Oct 18, 2011 at 4:58 PM, David Kerr wrote:
I have Postgres set up for streaming replication and my slave box went down.
My question is, how long can that box stay down before it causes a material
impact on the master?
The archive_command tha
In response to "Deshpande, Yogesh Sadashiv (STSD-Openview)"
:
> Hello ,
>
> We have a setup wherein there are around 100 processes running in parallel
> every 5 minutes, and each one of them opens a connection to the database. We are
> observing that for each connection, Postgres also creates a sub
>
> > We need the following information:
> >
> > 1. Is there any configuration we can do that would pool the connection
> > requests rather than erroring out with "connection limit exceeded"?
>
> Use pgpool or pgbouncer.
>
>
Use pgbouncer, which is a lightweight connection pooling tool, if you are
not optin
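A minimal pgbouncer.ini sketch, with placeholder names and pool sizes, just to show the
shape of the configuration:
[code]
[databases]
; placeholder connection string
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; clients queue in the pooler instead of hitting max_connections
max_client_conn = 500
; actual server connections opened per database/user pair
default_pool_size = 20
pool_mode = transaction
[/code]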
On 10/18/2011 06:57 AM, Deshpande, Yogesh Sadashiv (STSD-Openview) wrote:
Hello ,
We have a setup wherein there are around 100 processes running in
parallel every 5 minutes, and each one of them opens a connection to the
database. We are observing that for each connection, Postgres also
creates a su
The log is from PostgreSQL 9.0.4.
Basically we set up a streaming replication hot-standby slave while the master was
under heavy load.
The slave started but is not accepting read-only queries;
every request triggers the "FATAL: the database system is starting up"
error.
The slave will eventually
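For later readers: a 9.0 standby only starts accepting read-only queries once
hot_standby is enabled and recovery has reached a consistent point; the two settings
involved, as a sketch:
[code]
# postgresql.conf on the master
wal_level = hot_standby

# postgresql.conf on the standby
hot_standby = on
[/code]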
I am not able to find a binary distribution of pgbouncer for Windows. Can you
point me to the location?
From: Raghavendra [mailto:raghavendra@enterprisedb.com]
Sent: Tuesday, October 18, 2011 10:33 PM
To: Bill Moran
Cc: Deshpande, Yogesh Sadashiv (STSD-Openview); pgsql-general@postgresql.org
S
Hello Raghavendra,
Following are the details:
PostgreSQL 9.0; we are running our application on a 4-CPU, 8 GB RAM system.
Thanks
Yogesh
From: Raghavendra [mailto:raghavendra@enterprisedb.com]
Sent: Tuesday, October 18, 2011 9:46 PM
To: Deshpande, Yogesh Sadashiv (STSD-Openview)
Cc: pgsql-general@po
On 18 Oct 2011, at 17:54, Viktor Rosenfeld wrote:
> I was looking for an easier, more automatic way, but I wrote a few
> scripts that wrapped the boolean fields in case statements as suggested.
You are aware that COPY accepts a query as well?
You could also have created a VIEW over that table th
Frédéric Rejol writes:
> I created a custom CAST to cast from one table type to another.
> pg_dump does not include my custom CAST.
Hmm. The reason for that is that the table types aren't considered
dumpable objects. I suppose we need to fix that, but in the meantime
you'd
Here you go..
http://winpg.jp/~saito/pgbouncer/pgbouncer-1.4-win32.zip
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Tue, Oct 18, 2011 at 11:08 PM, Deshpande, Yogesh Sadashiv (STSD-Openview)
wrote:
> I am not able to find binary distribution of pgbo
On 10/18/11 9:51 AM, Bill Moran wrote:
Basically we wanted to limit the number of processes so that client code
doesn't have to retry when a connection or sub-process is unavailable, but
Postgres takes care of the queuing?
pgpool and pgbouncer handle some of that, but I don't know if they do
ex
Hi Alban,
in the end I used a COPY statement with a query and a CASE statement as
suggested by Henry.
Cheers,
Viktor
Alban Hertroys wrote:
> On 18 Oct 2011, at 17:54, Viktor Rosenfeld wrote:
>
> > I was looking for an easier, more automatic way, but I wrote a few
> > scripts that wrapped the b
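For the archives, a sketch of that kind of COPY statement (the table and column names
here are invented, not Viktor's actual schema):
[code]
-- export a boolean column as the words true/false instead of t/f
COPY (
    SELECT id,
           CASE WHEN is_active THEN 'true' ELSE 'false' END AS is_active
    FROM some_table
) TO '/tmp/some_table.csv' WITH CSV HEADER;
[/code]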
Hello,
I created a custom CAST to cast from one table type to another.
pg_dump does not include my custom CAST.
Here is an example:
CREATE TABLE foo_source(id integer);
CREATE TABLE foo_target(id integer);
CREATE OR REPLACE FUNCTION cast_ident(foo_source)
RETURNS foo_target
AS
$BODY$
DECLARE
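The example is cut off above; a minimal self-contained sketch of the same idea, using a
plain SQL function body purely for illustration (not the original PL/pgSQL code), would
look like this:
[code]
CREATE TABLE foo_source(id integer);
CREATE TABLE foo_target(id integer);

-- illustrative body only
CREATE OR REPLACE FUNCTION cast_ident(foo_source)
RETURNS foo_target AS
$BODY$
    SELECT ROW($1.id)::foo_target;
$BODY$ LANGUAGE sql;

CREATE CAST (foo_source AS foo_target)
    WITH FUNCTION cast_ident(foo_source);
[/code]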
Hi all,
Wondering if you can help. We have a PostgreSQL database that we are
using to store our data and we are using Access as the front-end. I
have linked all the tables and up to now have had no problem with
creating the forms/queries based on this data. Until now that is. One
of the tables whi
Hello ,
We have a setup wherein there are around 100 processes running in parallel every
5 minutes, and each one of them opens a connection to the database. We are observing
that for each connection, Postgres also creates a sub-process. We have set
max_connections to 100. So the number of sub proce
Hello,
I see the errors
ERROR: value too long for type character varying(32)
CONTEXT: SQL statement "update pref_users set first_name = $1 ,
last_name = $2 , female = $3 , avatar = $4 , city = $5 , last_ip =
$6 , login = now() where id = $7 "
PL/pgSQL function "pref_update_users"
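A sketch of a catalog query that narrows down which columns of pref_users are
varchar(32) (the table name is taken from the error context; adjust as needed):
[code]
SELECT a.attname,
       format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
WHERE a.attrelid = 'pref_users'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND format_type(a.atttypid, a.atttypmod) = 'character varying(32)';
[/code]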
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alexander Farber
Sent: Tuesday, October 18, 2011 3:44 PM
To: pgsql-general
Subject: [GENERAL] value too long - but for which column?
Hello,
I see the errors
ERROR: value
On Tue, Oct 18, 2011 at 12:43 PM, John R Pierce wrote:
> On 10/18/11 9:51 AM, Bill Moran wrote:
>>>
>>> Basically we wanted to limit the number of processes so that client code
>>> doesn't have to retry when a connection or sub-process is unavailable,
>>> but Postgres takes care of the queuing?
>>
I have a large, frequently accessed table that needs a primary key
constraint added. The table already has a unique index (but not a unique
constraint) on one of its columns. The overly simplified schema looks like
this:
CREATE TABLE table_without_pk (not_a_pk integer not null, some_data text);
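On PostgreSQL 9.1 and later, an existing unique index can be promoted to the primary
key directly; a hedged sketch with an invented index name:
[code]
-- build (or reuse) a unique index without blocking writes, then attach it as the PK
CREATE UNIQUE INDEX CONCURRENTLY table_without_pk_not_a_pk_idx
    ON table_without_pk (not_a_pk);

ALTER TABLE table_without_pk
    ADD CONSTRAINT table_without_pk_pkey
    PRIMARY KEY USING INDEX table_without_pk_not_a_pk_idx;
[/code]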
Dear all,
I am new to PostgreSQL and need your assistance with my problem.
I tried to install PostgreSQL v9.0, which I downloaded from the PostgreSQL
website for Windows x64. My computer runs Windows 7 Professional 64-bit.
I followed the installation steps with no problems until the last step
> Thanks, Craig.
>
> There are no triggers on the tables and the only constraints are the
> primary keys.
>
> I am thinking that the problem may be that I have too many full self
> joins on the simple_group table. I am probably getting a
> combinatorial explosion when postgres does cross joins
Thanks, Craig.
There are no triggers on the tables and the only constraints are the
primary keys.
I am thinking that the problem may be that I have too many full self
joins on the simple_group table. I am probably getting a
combinatorial explosion when postgres does cross joins on all the
deriv
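One hedged way to check that without running the full insert (the join condition below
is a placeholder, since the real query isn't shown) is to look at the planner's row
estimates; a missing or non-selective join condition shows up as a huge estimated row
count.
[code]
EXPLAIN
SELECT *
FROM simple_group g1
JOIN simple_group g2 ON g1.parent_id = g2.group_id;
[/code]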
Mark Priest writes:
> However, I am still curious as to why I am getting an out of memory
> error. I can see how the performance might be terrible on such a
> query but I am surprised that postgres doesn't start using the disk at
> some point to reduce memory usage. Could it be that postgres tr
On 10/19/2011 09:21 AM, Wendi Adrian wrote:
Can anyone help me solve this problem? Or does PostgreSQL not
support Windows 7 Professional 64-bit?
PostgreSQL does support Windows 7 Pro 64-bit; that's one of the OSes I
use and it works fine.
It would be helpful to know which language yo
Hi Craig,
thanks for your response.
I installed Windows 7 Professional on my workstation, which is connected to the
office network.
I tried to reinstall after disconnecting from the network, but it still failed. So I
am assuming that PostgreSQL cannot be installed on a workstation (because it does
successfully