Hi Tomas,
Thanks. Increasing the wal_keep_segments to 4000 did the trick.
I will set up WAL archive as well.
Thanks again.
Ashish.
On Sat, Nov 19, 2011 at 5:59 PM, Tomas Vondra wrote:
> Hi,
>
> On 19 November 2011, 10:44, Ashish Gupta wrote:
> > I searched on various forums, where people encou
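For reference, a minimal postgresql.conf sketch of the settings discussed in this thread (the archive path is a placeholder and the values are only the ones mentioned above, not recommendations):

    wal_keep_segments = 4000
    archive_mode = on                                   # needs a server restart
    archive_command = 'cp %p /path/to/wal_archive/%f'   # any command that copies %p to the archive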
On Sun, 2011-11-20 at 12:09 +0200, Andrus wrote:
> Debian seems to require update-rc.d and CentOS chkconfig.
> How can I use a single command for every distro?
apt-get install chkconfig
> "/etc/init.d/postgresql start" works in all distros. Adding to
> postgresql to startup requires different commands
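For reference, the distro-specific commands in question look roughly like this (a sketch, assuming the init script is installed as "postgresql"):

    # Debian / Ubuntu
    update-rc.d postgresql defaults
    # RHEL / CentOS
    chkconfig postgresql on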
On 21 November 2011, 4:17, David Johnston wrote:
> On Nov 20, 2011, at 20:50, Phoenix Kiula wrote:
>
>> On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
>> wrote:
>>
>>> How about having 2 indexes: one on each of ip & url_md5? Pg will
>>> combine the
>>> indexes as required, or will just use one if
> On Nov 20, 2011, at 21:33, Phoenix Kiula wrote:
>
> My big table now has about 70 million rows, with the following columns:
>
> alias | character varying(35)
> url | text
> modify_date | timestamp without time zone
> ip | bigint
>
>
> For each IP addres
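A minimal sketch of the two-index suggestion quoted above (the table name here is a placeholder; ip and url_md5 are the columns discussed in the thread):

    CREATE INDEX bigtable_ip_idx      ON bigtable (ip);
    CREATE INDEX bigtable_url_md5_idx ON bigtable (url_md5);
    -- For a query filtering on both ip and url_md5, the planner can combine
    -- the two indexes with a BitmapAnd, or use just one when that is cheaper.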
Hello,
is it possible to archive the WAL files received by a hot-standby server? I
noticed nothing about this in the pgsql docs. The idea is to archive logs in
two locations, at the primary site and at the replica site (over a WAN), in
order to be able to perform a PITR at the replica site as well.
Got it. Thank you very much for your help. I found out about this problem too
late, and there is no backup.
Luckily there was not too much data involved, and my app keeps running
without errors.
I am not sure if they are related, but I could not use pg_restore to import
data dumped by "pg_dump -Fc";
p
On Mon, Nov 21, 2011 at 10:58 AM, Enrico Sirola wrote:
> is it possible to archive the WAL files received by a hot-standby server? I
> noticed nothing about this in the pgsql docs. The idea is to archive logs in
> two locations, at the primary site and at the replica site (over a wan) in
> or
On Monday, November 21, 2011 6:39:55 am Yan Chunlu wrote:
> Got it. Thank you very much for your help. I found out about this problem too
> late, and there is no backup.
>
> Luckily there was not too much data involved, and my app keeps running
> without errors.
>
> I am not sure if they are related b
Hi all,
I have installed PostgreSQL server on a Windows Server 2008 server and I
need to write a more complex parser than the default one in PostgreSQL.
Searching on the internet I found this example:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/HOWTO-parser-tsearch2.html
where the p
On 11/21/11 1:51 AM, Antonio Franzoso wrote:
I have installed PostgreSQL server on a Windows Server 2008 server and
I need to write a more complex parser than the default one in
PostgreSQL. Searching on the internet I found this example:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/
This may be a duplicate response.
On 11/20/2011 11:05 AM, Tom Lane wrote:
> Rob Sargent writes:
>> On 11/20/2011 09:24 AM, Tom Lane wrote:
>>> It appears that on Ubuntu, libbsd defines those symbols, which confuses
>>> configure into supposing that they're provided by libc, and then the
>>> link
I've seen a couple backup scripts that query the metadata to determine the
list of databases to back up. I like this approach, but have a few
databases which don't get backed up for various reasons, e.g. testing
databases which we'd prefer to recreate on the off chance we lose them,
rather than h
Mike Blackwell, 21.11.2011 17:50:
I've seen a couple backup scripts that query the metadata to
determine the list of databases to back up. I like this approach,
but have a few databases which don't get backed up for various
reasons, e.g. testing databases which we'd prefer to recreate on the
off
On Nov 21, 2011, at 11:50, Mike Blackwell wrote:
> Might there be a way to
> tag those databases somehow so the backup script knows to skip them?
Add a table to each database that can be queried by the backup script to store
this additional metadata?
Michael Glaesemann
grzm seespotcode net
> What about using the comments on the database to control this?
That sounds close, though the comments are already being used for general
descriptions. I suppose it wouldn't hurt to add 'no_backup' to the
existing comments where appropriate. I was hoping maybe I'd missed a
'user-defined databas
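A hedged sketch of what querying database comments for a 'no_backup' tag could look like (the tag itself is the one suggested above; the rest is standard catalog layout):

    SELECT d.datname
    FROM pg_database d
    LEFT JOIN pg_shdescription s
           ON s.objoid = d.oid
          AND s.classoid = 'pg_database'::regclass
    WHERE NOT d.datistemplate
      AND COALESCE(s.description, '') NOT LIKE '%no_backup%';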
On 11/20/2011 06:21 PM, Phoenix Kiula wrote:
*SNIP*
Forgive me if I accidentally rehash something already discussed...
Divide and conquer:
First, verify that you can connect directly to your database *using
TCP*, i.e. "psql -h 127.0.0.1 -U youruser -p 5432 yourdb". If you are
using psql witho
I have a custom inventory system that runs on PG 9.1. I realize this is
not a Postgres-specific question, but I respect the skills of the members of
this list and was hoping for some general advice.
The system is not based on any ERP and was built from scratch.
My customer requested some supply f
google 'weeks of supply'
On Mon, Nov 21, 2011 at 1:18 PM, Jason Long
wrote:
> I have a custom inventory system that runs on PG 9.1. I realize this is
> not a Postgres-specific question, but I respect the skills of the members of
> this list and was hoping for some general advice.
>
> The system i
On 21 November 2011 18:18, Jason Long wrote:
> My customer requested some supply forecasting to see when there will be a
> surplus or shortage of parts based on delivery dates and production dates
> that will require these items.
Take a look at http://en.wikipedia.org/wiki/Newsvendor_model
--
P
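For reference, the newsvendor model's standard result (stated here from general knowledge, not from the thread) is the critical-fractile order quantity:

    q* = F^-1( Cu / (Cu + Co) )

where Cu is the per-unit underage (shortage) cost, Co the per-unit overage (surplus) cost, and F the cumulative demand distribution.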
Hi,
Is it possible, and if so how, to export a single column of a table into
a separate file per row? I have a table with ~21000 rows that have a
column "body1" containing ASCII text and I want to have 21000 separate
ASCII files, each containing that column "body1". The name of the file
does not m
On 21 November 2011 19:10, Joost Kraaijeveld wrote:
> Hi,
>
> Is it possible, and if so how, to export a single column of a table into
> a separate file per row? I have a table with ~21000 rows that have a
> column "body1" containing ASCII text and I want to have 21000 separate
> ASCII files, each
Thanks for the reply. Weeks of Supply (WOS) is not exactly what I am
looking for, but might lead to a solution.
Here is a better description of the problem.
I know the following:
Delivery dates and quantities for items on order or in transit.
A manager will forecast manually what the pending ite
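A hedged sketch of the kind of projection this describes, computing a running balance over time; the tables "deliveries" (incoming) and "demands" (outgoing) and their columns are made up for illustration:

    SELECT t.day,
           sum(t.qty) OVER (ORDER BY t.day) AS projected_on_hand
    FROM (
        SELECT delivery_date AS day,  qty FROM deliveries
        UNION ALL
        SELECT need_date     AS day, -qty FROM demands
    ) t
    ORDER BY t.day;
    -- Days where projected_on_hand goes negative indicate a projected shortage.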
Joost Kraaijeveld writes:
> Hi,
>
> Is it possible, and if so how, to export a single column of a table
> into a separate file per row? I have a table with ~21000 rows that
> have a column "body1" containing ASCII text and I want to have 21000
> separate ASCII files, each containing that column "
Hi,
On 22 November 2011 06:10, Joost Kraaijeveld wrote:
> Is it possible, and if so how, to export a single column of a table into
> a separate file per row? I have a table with ~21000 rows that have a
> column "body1" containing ASCII text and I want to have 21000 separate
> ASCII files, each co
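One possible way to do the per-row export discussed above, as a sketch only: COPY ... TO writes files on the server and needs superuser rights, DO requires 9.0+ and format() requires 9.1, and the table name and key column ("id") are assumptions since the question does not give them. Note too that COPY's text format escapes newlines, so a client-side script may be preferable if the files must match the column contents byte for byte.

    DO $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT id FROM my_table LOOP
            -- one file per row, named after the (assumed numeric) key
            EXECUTE format(
                'COPY (SELECT body1 FROM my_table WHERE id = %s) TO %L',
                r.id, '/tmp/body1_' || r.id || '.txt');
        END LOOP;
    END
    $$;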
Hello all,
I have been trying to install PostgreSQL on a 64-bit Acer machine running
Windows 7. Both PostgreSQL 8.4 and 9.0 give an error:
"An error occurred executing the Microsoft VC++ runtime installer" for 8.4,
while 9.0 gives me the error "Unable to write inside TEMP
environment variable".
I have tried to check for W
On 21/11/11 22:51, Antonio Franzoso wrote:
Hi all,
I have installed PostgreSQL server on a Windows Server 2008 server and
I need to write a more complex parser than the default one in
PostgreSQL. Searching on the internet I found this example:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
> Might there be a way to tag those databases somehow so the backup
> script knows to skip them? I'd rather not hard code the list in
> the script.
Give them a unique connection limit, higher than max_connections,
e.g. ALTER DATABASE testdb
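As a hedged illustration of the connection-limit idea (the sentinel value below is entirely arbitrary), the backup script could then select only databases whose limit is not that sentinel:

    -- Tag a database to be skipped (value is an arbitrary example):
    ALTER DATABASE testdb CONNECTION LIMIT 12345;
    -- In the backup script:
    SELECT datname
    FROM pg_database
    WHERE NOT datistemplate
      AND datconnlimit IS DISTINCT FROM 12345;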
Let's say that the primary key column is A. I am trying to select all
the rows with duplicated values in columns B, C, and D.
I am not too experienced in SQL syntax, and I've used the following:
select A from table_name where B+C+D in (select B+C+D from table_name
group by B+C+D having count(*)>1
In postgresql.org/docs/9.1/static/transaction-iso.html I read
13.2.1. Read Committed Isolation Level
. . . two successive SELECT commands can see different data, even though they
are within a single transaction . . .
Please consider this code being executed by postgres:
= = = = = = = = = =
selec
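For reference, a minimal illustration of the documented behaviour (the table name is made up):

    -- Session 1, default READ COMMITTED isolation:
    BEGIN;
    SELECT count(*) FROM accounts;   -- returns, say, 10
    -- Session 2 now inserts a row into accounts and commits.
    SELECT count(*) FROM accounts;   -- can return 11, inside the same transaction
    COMMIT;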
I think you should not "add columns", but concatenate them.
Instead of
select A from table_name where B+C+D in (select B+C+D from table_name
group by B+C+D having count(*)>1 )
use "B || '/' || C || '/' || D"
select A from table_name where B || '/' || C
Another option is to perform a self-join on columns B, C, and D (filtering out
the 'same' record where a=a) instead of using the sub-select. This may yield
better performance depending on the size of the table. Also, I don't believe
the concatenation / sub-select will work if all of B, C, and
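A hedged variant along those lines, using a row-value comparison so that no concatenation (and its possible false matches) is needed; the column and table names are the ones used in the question:

    SELECT A
    FROM table_name
    WHERE (B, C, D) IN (
        SELECT B, C, D
        FROM table_name
        GROUP BY B, C, D
        HAVING count(*) > 1
    );
    -- Note: rows with NULL in B, C or D will not match themselves with IN;
    -- an EXISTS-based self-join can be used if that matters.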
I cannot find a way to programmatically:
1. Given a table name, find all foreign key fields in the given
table by field name (column name)
2. Given a single foreign key field name, programmatically look up
the corresponding reference table name and the reference primary key field
so h
Take a look at
http://www.postgresql.org/docs/9.1/interactive/information-schema.html and
http://www.postgresql.org/docs/9.1/interactive/catalogs.html. I think
you'll find what you need. The former is relatively stable between
releases, while the latter has more detail but is subject to change.
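As a hedged sketch of the sort of information_schema query this points at (the table name is a placeholder):

    SELECT kcu.column_name,
           ccu.table_name  AS referenced_table,
           ccu.column_name AS referenced_column
    FROM information_schema.table_constraints tc
    JOIN information_schema.key_column_usage kcu
         ON kcu.constraint_name = tc.constraint_name
        AND kcu.table_schema    = tc.table_schema
    JOIN information_schema.constraint_column_usage ccu
         ON ccu.constraint_name = tc.constraint_name
        AND ccu.table_schema    = tc.table_schema
    WHERE tc.constraint_type = 'FOREIGN KEY'
      AND tc.table_name = 'my_table';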
and the database will stop receiving the following data after it detects an
error?
That means that while using pg_restore, no errors are allowed to happen; otherwise
the database will stop receiving data and the import will fail.
I found only one record in psql's log:
duplicate key value violates unique constra
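For reference, pg_restore's default is to report errors and keep going; the relevant switches (a sketch, with placeholder file and database names) are:

    pg_restore --exit-on-error      -d mydb dump.fc   # stop at the first error
    pg_restore --single-transaction -d mydb dump.fc   # all-or-nothing restore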
On Nov 21, 2011, at 17:23, jeffrey wrote:
> Let's say that the primary key column is A. I am trying to select all
> the rows with duplicated values in columns B, C, and D.
>
> I am not too experienced in SQL syntax, and I've used the following:
> select A from table_name where B+C+D in (select B
On Nov 21, 2011, at 21:11, David Johnston wrote:
> On Nov 21, 2011, at 17:23, jeffrey wrote:
>
>> Let's say that the primary key column is A. I am trying to select all
>> the rows with duplicated values in columns B, C, and D.
>>
>> I am not too experienced in SQL syntax, and I've used the fol
writes:
> In postgresql.org/docs/9.1/static/transaction-iso.html I read
> 13.2.1. Read Committed Isolation Level
> . . . two successive SELECT commands can see different data, even though they
> are within a single transaction . . .
> Please consider this code being executed by postgres:
> = = =
On Monday, November 21, 2011 4:53:21 pm Yan Chunlu wrote:
> and the database will stop receiving the following data after it detects an
> error?
> That means that while using pg_restore, no errors are allowed to happen; otherwise
> the database will stop receiving data and the import will fail.
>
> I found only o
If I insert a NULL value explicitly into a column declared to be NOT NULL
DEFAULT 0 in PostgreSQL 8.4, the column ends up with the default value. If I
do the same in PostgreSQL 9.0, I get an error about how I am inserting a
null value into a NOT NULL column.
i.e.: insert into table1 (column1, column
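For reference, a small sketch of how to request the default explicitly instead of inserting NULL (any column names beyond column1 are assumptions):

    CREATE TABLE table1 (column1 integer NOT NULL DEFAULT 0, column2 text);
    INSERT INTO table1 (column1, column2) VALUES (DEFAULT, 'a');  -- column1 = 0
    INSERT INTO table1 (column1, column2) VALUES (NULL, 'b');     -- ERROR: null value in column "column1"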
Hello
2011/11/22 J.V. :
>
> I cannot find a way to programmatically:
> 1. Given a table name, find all foreign key fields in the given table by
> field name (column name)
> 2. Given a single foreign key field name, programmatically look up the
> corresponding reference table name and the ref
Tanmay Patel writes:
> If I insert a NULL value explicitly into a column declared to be NOT NULL
> DEFAULT 0 in postgreSQL 8.4 the column ends up with the default value. If I
> do the same in postgreSQL 9.0 I get an error about how I am inserting a
> null value into a NOT NULL column.
I'm sorry,
On Mon, Nov 21, 2011 at 5:27 PM, Tanmay Patel wrote:
> If I insert a NULL value explicitly into a column declared to be NOT NULL
> DEFAULT 0 in postgreSQL 8.4 the column ends up with the default value. If I
> do the same in postgreSQL 9.0 I get an error about how I am inserting a null
> value into
On Tue, Nov 22, 2011 at 3:36 AM, Twaha Daudi wrote:
> Hello all,
> I have been trying to install PostgreSQL on a 64-bit Acer machine running
> Windows 7. Both PostgreSQL 8.4 and 9.0 give an error:
> "An error occurred executing the Microsoft VC++ runtime installer" error
> for 8.4, while 9.0 gives me the error "Unable to writ
Hello,
I have a data set where each of the objects is represented in a metric
space with 32 dimensions (i.e., each object is represented by 32 numbers).
Is there a way to represent these objects in PostgreSQL so that I can perform
KNN?
Thanks,
Benjamin
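One possible representation, as a sketch only: the contrib cube type can store the 32 numbers per object and order by distance to a query point; whether that ordering can use an index depends on the PostgreSQL version, and all names below are made up.

    CREATE EXTENSION cube;
    CREATE TABLE objects (
        id     serial PRIMARY KEY,
        coords cube                       -- built with cube(ARRAY[...]), 32 values per row
    );
    -- 10 nearest neighbours of a query point (abbreviated to 3 coordinates here;
    -- the real case would list all 32):
    SELECT id
    FROM objects
    ORDER BY cube_distance(coords, cube(ARRAY[0.1, 0.2, 0.3]))
    LIMIT 10;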
Hi list.
I'm migrating a bunch of old SUSE 9.3 systems with PostgreSQL 8.2 databases
to openSUSE 11.4 systems with 8.2 databases (the exact same version -
8.2.14). From there, the databases will be migrated to PostgreSQL 9.x with a
custom process.
Let's assume that 9.3 machine is machine A, and new
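One common way to move clusters between hosts when the server version is the same, as a rough sketch (host names follow the message and everything else is a placeholder):

    pg_dumpall -h machineA -U postgres | psql -h machineB -U postgres -d postgres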
On Tue, Nov 22, 2011 at 1:05 PM, Twaha Daudi wrote:
> Hello Ashesh,
> here is the output of the command:
> C:\>echo %TEMP%
> C:\Users\User\AppData\Local\Temp
>
> It looks like the variable is set properly, but there is still a problem.
> Any help?
>
>
> On Tue, Nov 22, 2011 at 7:13 AM, Ashesh Vashi <
On 11/21/11 11:20 PM, Benjamin Arai, Ph.D. wrote:
I have a data set where each of the objects is represented in a
metric space with 32 dimensions (i.e., each object is represented by
32 numbers). Is there a way to represent these objects in PostgreSQL so
that I can perform KNN?
Would an arra