1. Remove the foreign key reference from table searchcache.
2. Take a backup of the data in searchcache.
3. Create the partitions of table product.
4. Add the constraints back on table searchcache (if necessary, drop and
recreate searchcache after taking the backup). A rough SQL sketch of these
steps follows below.
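A rough SQL sketch of those steps, assuming searchcache references
product(id) through a constraint named searchcache_product_fk (all object
names here are made up; adjust them to the real schema):

-- 1. drop the foreign key so product can be restructured
ALTER TABLE searchcache DROP CONSTRAINT searchcache_product_fk;

-- 2. back up the searchcache data
CREATE TABLE searchcache_backup AS SELECT * FROM searchcache;

-- 3. create the partitions of product (pre-10 style: child tables with
--    CHECK constraints inheriting from the parent)
CREATE TABLE product_2013 (
    CHECK (created >= DATE '2013-01-01' AND created < DATE '2014-01-01')
) INHERITS (product);

-- 4. recreate the constraints on searchcache; note that a foreign key that
--    points at an inheritance parent does not see rows stored in the child
--    partitions, which is why dropping (or recreating searchcache) is
--    suggested above.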
-
Thanks and Regards,
Sachin Kotwal
Please look at the following topic:
http://www.postgresql.org/docs/9.0/static/functions-admin.html
and try using:
#SELECT pg_terminate_backend(2738);
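If you first need to find the backend PID to terminate, the usual way is
pg_stat_activity (this assumes 9.2 or later; on 9.0/9.1 the columns are
procpid and current_query instead of pid and query):

SELECT pid, usename, state, query
FROM pg_stat_activity;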
-
Thanks and Regards,
Sachin Kotwal
NTT-DATA-OSS Center (Pune)
John DeSoi wrote:
> I have a 9.2.4 (CentOS 6.5) standby that has been working with no problems
> using log shipping. I wanted to add streaming replication which I thought
> would be as simple as adding primary_conninfo to recovery.conf and
> restarting the standby. But on restart there is no me
On Sep 24, 2013, at 5:48 AM, Albe Laurenz wrote:
>> Here is what I have on the standby:
>>
>> postgresql.conf
>>
>> hot_standby = on
>> max_wal_senders = 2
>> wal_level = hot_standby
>
> You should set the same parameters on the primary, else it won't work.
On the primary I have
wal_level
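For reference, a minimal sketch of the 9.2-era settings involved on each side
(host, user, and address below are placeholders, not values from this
thread):

On the primary, postgresql.conf:
  wal_level = hot_standby
  max_wal_senders = 2
plus a replication entry in pg_hba.conf, e.g.:
  host  replication  replicator  192.168.0.10/32  md5

On the standby, postgresql.conf:
  hot_standby = on
and recovery.conf:
  standby_mode = 'on'
  primary_conninfo = 'host=primary.example.com port=5432 user=replicator'

Changing wal_level or max_wal_senders requires a restart of the primary, not
just a reload.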
Support at its finest.
Thinking maybe ALLOCATED transactions zero out
when the allocated qty is shipped, but I would like to save
the original allocated qty ... maybe add another field in my
transaction table to save the allocated transaction qty.
Also don't see any problem with deleting the ALLOCAT
Think I'll just do an UPDATE which changes the ALLOCATED
transaction to a SHIP transaction and uses the current
Ship Date/Time
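A minimal sketch of that UPDATE; the table inv_transactions and every column
name below are made up for illustration, none of them come from the thread:

UPDATE inv_transactions
SET trans_type = 'SHIP',
    ship_date  = now()            -- current Ship Date/Time
WHERE trans_type = 'ALLOCATED'
  AND item_id = 42;               -- however the row is identified

Saving the original allocated qty would then just be one more column (say
allocated_qty) filled in by the same UPDATE.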
On Tue, 2013-09-24 at 07:38 -0700, Bret Stern wrote:
> Support at its finest.
> Thinking maybe ALLOCATED transactions zero out
> when the allocated qty is shipped, but I
On 09/23/2013 05:47 PM, Andreas wrote:
Am 24.09.2013 02:25, schrieb Adrian Klaver:
On 09/23/2013 05:19 PM, Andreas wrote:
I need to import some log files of an application [...]
The import would be easy if the files had a constant name, but the app
creates CSV files with names like "ExportM
Greetings,
I've got two different 9.3 clusters set up, a & b (on Linux, if that
matters). On cluster b, I have a table (nppsmoke) that is partitioned
by date (month) and uses a function, called by a trigger, to
manage INSERTs (exactly as documented in the official documentation
for partiti
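The trigger-based setup that documentation describes looks roughly like this
(only a sketch: the date column created_at and the partition names are
assumptions, not taken from the message):

CREATE OR REPLACE FUNCTION nppsmoke_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- route each row to the child table for its month
    IF NEW.created_at >= DATE '2013-09-01'
       AND NEW.created_at < DATE '2013-10-01' THEN
        INSERT INTO nppsmoke_2013_09 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'date out of range, add a partition';
    END IF;
    RETURN NULL;  -- the row is not also stored in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_nppsmoke
    BEFORE INSERT ON nppsmoke
    FOR EACH ROW EXECUTE PROCEDURE nppsmoke_insert_trigger();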
To be very straightforward:
your bash script will dynamically create the SQL query in a string, then
send it to the database using psql.
You can also use pipes.
For example:
$4 -c "COPY $1 FROM '/tmp/pipe_ply_binaire_vers_ply_ascii_"$1"' WITH
CSV DELIMITER AS ' ';";
where $4 is the psql command to
On 24/09/2013 18:18, Rémi Cura wrote:
To be very straightforward:
your bash script will dynamically create the SQL query in a string,
then send it to the database using psql.
You can also use pipes.
For example:
$4 -c "COPY $1 FROM '/tmp/pipe_ply_binaire_vers_ply_ascii_"$1"' WITH
CSV DELIMITER
On 24/09/2013 02:14, Adrian Klaver wrote:
Just a thought:
1) Create a function that encapsulates the above logic where the
argument is the file name.
2) Create a shell script. Use substitution to take the filename passed
to the shell script to build a string and in the script do
psql -d db
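A minimal sketch of that idea (the database name mydb and the server-side
function import_log_file() are hypothetical, not from this thread):

#!/bin/bash
# usage: ./import_log.sh ExportM_whatever.csv
FILE="$1"
# build the statement around the file name and hand it to psql
psql -d mydb -c "SELECT import_log_file('${FILE}');"

The same substitution works if the script issues the COPY directly, along
the lines of the earlier example in this thread.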
I have a PostgreSQL 9.2 database with PostGIS 2.0 in production service. Due,
I think, to a bad PostGIS upgrade, both old and new functions are
present. I am planning to install PostgreSQL 9.3 and PostGIS 2.0.1 and then copy
the data over with pg_upgrade. My question is: Will this also cop
I am attempting to benchmark a number of queries over a 15GB dataset with ~
10mil records. When I run Linux time on the query execution (single column
projection), it returns 1 minute, but the \timing command returns only 15
seconds. Can someone explain the difference? 1 minute is consistent with
r
On Tue, Sep 24, 2013 at 1:13 PM, Daniel Tahara wrote:
> I am attempting to benchmark a number of queries over a 15GB dataset with
> ~ 10mil records. When I run linux time on the query execution (single
> column projection), it returns 1 minute, but the \timing command returns
> only 15 seconds?
>
On Tue, Sep 24, 2013 at 6:07 PM, Jeff Janes wrote:
> On Tue, Sep 24, 2013 at 1:13 PM, Daniel Tahara wrote:
>
>> I am attempting to benchmark a number of queries over a 15GB dataset with
>> ~ 10mil records. When I run linux time on the query execution (single
>> column projection), it returns 1 mi
On Tue, Sep 24, 2013 at 6:37 PM, Daniel Tahara wrote:
> time echo '\\timing select msg from test' | $PG_ROOT/bin/psql >>
> out.txt
This should be:
time echo '\\timing select msg from test' | $PG_ROOT/bin/psql test >>
out.txt
Sorry for the typo, and thanks for the help.
Daniel
Comput
I'm curious what the following would show:
time echo '\\timing select 1' | $PG_ROOT/bin/psql test >>
out.txt
Greg Haase
On Sep 24, 2013 4:01 PM, "Daniel Tahara" wrote:
>
> On Tue, Sep 24, 2013 at 6:37 PM, Daniel Tahara wrote:
>
>> time echo '\\timing select msg from test' |
On 09/24/2013 04:07 PM, Gregory Haase wrote:
I'm curious what the following would show:
time echo '\\timing select 1' | $PG_ROOT/bin/psql test
>> out.txt
Also try ...>/dev/null to see if writing to local disk or buffer is part
of the delay.
Cheers,
Steve
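A quick way to act on that suggestion and separate query time from
output-writing time, using the same table and psql invocation the thread has
been using:

# full run, results appended to a file on disk
time echo '\\timing select msg from test' | $PG_ROOT/bin/psql test >> out.txt

# same query with the output discarded; if this is much faster, the extra
# time was spent formatting and writing ~10 million rows, not executing
# the query
time echo '\\timing select msg from test' | $PG_ROOT/bin/psql test > /dev/null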
Hi all,
I am planning to use pgBadger for analyzing queries. The documentation for
pgBadger says "Do not enable log_statement as their log format will not be
parsed by pgBadger."
I have
log_min_duration_statement = 0
I do see quite a few SELECTs, INSERTs, etc. in the log files, function calls
too, for e
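For what it is worth, the logging settings usually recommended for pgBadger
with stderr logging look like this (check the pgBadger README for the exact
log_line_prefix your version expects; the prefix below is one common choice,
not something stated in this message):

log_min_duration_statement = 0
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0

with log_statement left at its default of 'none', per the warning quoted
above.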
Hi all!
I import many, many rows of data into a table, from three or more computers, 4
times per hour. I have a primary key, and the query I use to import the data is
supposed to dedup before inserting, but I still get primary key violations.
The import process is:
* Load CSV data into temp ta
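For context, the usual pre-9.5 pattern here is an anti-join from the temp
table (table and column names below are placeholders, since the message is
truncated). Note that it is not race-free: two of the importing machines can
both pass the NOT EXISTS check and then collide on the primary key, which
would produce exactly these intermittent violations (INSERT ... ON CONFLICT
only arrives in 9.5):

-- staging is the temp table; target has primary key (id)
INSERT INTO target (id, payload)
SELECT DISTINCT ON (s.id) s.id, s.payload
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

Serializing the imports (for example with pg_advisory_lock around the insert)
or simply retrying on a unique-violation error are the usual workarounds.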