On 9/14/07, Ansgar -59cobalt- Wiechers <[EMAIL PROTECTED]> wrote:
>
> On 2007-09-14 soni de wrote:
> > In Postgres 7.2.4, the COPY command works fine even if the table has 6
> > fields but we are copying only 5 fields from the file
> >
> > But in Postgres 8.2.
Hello,
In Postgres 7.2.4, the COPY command works fine even if the table has 6 fields
but we are copying only 5 fields from the file.
But in Postgres 8.2.0, if the table has 6 fields and we need to copy data for only 5
of them, then we also need to specify the column names in the COPY command.
Is there any
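For what it's worth, the 8.x behaviour can be sketched like this (a sketch against a hypothetical table abc with six columns a..f, where the data file supplies only the first five; it needs a live server to run):

```sql
-- Hypothetical table: abc(a, b, c, d, e, f); the file has 5 fields.
-- In 8.x the target columns are listed explicitly; f gets its default.
COPY abc (a, b, c, d, e) FROM '/tmp/abc.dat';
```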
Hello,
We have installed Postgres 8.2.0.
The default time zone the Postgres server is using is:
template1=# SHOW timezone;
 TimeZone
-----------
 ETC/GMT-5
(1 row)
But we want to set this time zone parameter to IST.
Our system time zone is also IST. We are using Solaris.
Please provide me some help.
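A sketch of how the zone can be changed, assuming the intended IST is the Indian one ('Asia/Calcutta', UTC+05:30 — the zone name is an assumption about which IST is meant; needs a live server):

```sql
-- Per-session:
SET timezone TO 'Asia/Calcutta';
SHOW timezone;
-- Cluster-wide: set  timezone = 'Asia/Calcutta'  in postgresql.conf,
-- then reload the server (e.g. pg_ctl reload).
```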
Any response?
On 10/27/06, soni de <[EMAIL PROTECTED]> wrote:
Hello,
My server crashed in PQfinish. Below are the core file details:
=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40, 0x3106f8), at 0xfded10e4
  [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4
Thanks a lot for your help.
Thanks,
Soni
On 10/17/06, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
On 10/17/06, soni de <[EMAIL PROTECTED]> wrote:
I didn't understand the "Bitmap Scan" and the sentence "indexes will be dynamically converted to bitmaps in memory"
Hello,
My server crashed in PQfinish. Below are the core file details:
=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40, 0x3106f8), at 0xfded10e4
  [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at 0xfded0c64
  [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbf
Hello,
I was going through the Performance Enhancements of 8.1.0, and in them I read about "Bitmap Scan":
"Bitmap Scan: indexes will be dynamically converted to bitmaps in memory when appropriate, giving up to twenty times faster index performance on complex queries against very large tables. This
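To make the quoted note concrete, here is a sketch of where a bitmap scan typically shows up in a plan (the table, columns, and index names are hypothetical; needs a live server):

```sql
-- Two single-column indexes can be combined in memory:
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
-- A plan using both indexes might look like:
--   Bitmap Heap Scan on t
--     ->  BitmapAnd
--           ->  Bitmap Index Scan on t_a_idx
--           ->  Bitmap Index Scan on t_b_idx
```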
Fri, 2006-08-25 at 21:23 +0530, soni de wrote:
> Hello,
>
> I want to ask, is there any way to insert records from an XML
> file to the postgres database?

Try the contrib/xml2 module. Alas, that module will not help you much
with the insertion of records. It is more a
Hello,
I want to ask, is there any way to insert records from an XML file into the Postgres database?
Please provide me some help regarding the above query.
The Postgres version we are using is 7.2.4.
Thanks,
Sonal
Hello,
I am getting the following error while inserting a row into the "abc" table:
ERROR: fmgr_info: function 2720768: cache lookup failed
Table "abc" has one trigger called "abct".
Its definition is as follows:
BEGIN;
LOCK TABLE abc IN SHARE ROW EXCLUSIVE MODE;
CREATE TRIGGER abct
AF
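This error usually means the trigger still points at a function OID that no longer exists, e.g. because the function behind it was dropped and re-created. A sketch of the usual repair (the function name abct_func is an assumption, since the definition above is truncated; needs a live server):

```sql
-- Re-create the trigger so it binds to the current function OID:
DROP TRIGGER abct ON abc;
CREATE TRIGGER abct
    AFTER INSERT ON abc
    FOR EACH ROW EXECUTE PROCEDURE abct_func();  -- hypothetical name
```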
Hello,
We have to take a backup of the database, and we know about the pg_dump utility of PostgreSQL.
But may I know, is there any API for this pg_dump utility so that we can call it from a C program? Or is only script support possible for this?
I think script support is a bit risky because if anything
?
Would there be any data loss, or in this case will ALTER also block all new accesses to the table?
Thanks,
Soni
On 6/7/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Wed, Jun 07, 2006 at 06:13:11PM +0530, soni de wrote:
> Hello,
>
> We have database on which continuous
Hello,
We have a database on which continuous INSERT, DELETE, and UPDATE operations are going on. In the meantime, irrespective of the INSERTs and UPDATEs, we want to ALTER some fields of a table; can we do that?
Would the ALTER command on a heavily loaded database create any performance problem?
I
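For context, ALTER TABLE takes an exclusive lock on the table, so concurrent INSERT/UPDATE/DELETE statements block until it commits; there is no data loss, only waiting. A sketch (the column name is hypothetical; needs a live server):

```sql
BEGIN;
-- Holds an exclusive lock on the table until COMMIT, blocking other
-- sessions; adding a column without a default is itself quick.
ALTER TABLE wan ADD COLUMN note text;
COMMIT;
```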
Hello,
I have tried the query SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50; and it works great.
EXPLAIN ANALYSE of the above query is:
pdb=# EXPLAIN ANALYZE select * from wan order by stime desc limit 50;
NOTICE: QUERY PLAN:

Limit (cost=0.00..12.10 rows=50 width=95) (actual tim
I don't want to query exactly 81900 rows into a set. I just want to fetch 50 or 100 rows at a time, in decreasing order of stime (i.e., 50 or 100 rows starting from the last and going back).
If we fetch sequentially, there is also a problem in fetching all the records (select * from wan where kname='pluto' orde
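One common workaround for large OFFSETs is keyset paging: remember the last stime seen and continue from there, so an index on stime stays usable. A sketch (the placeholder value is hypothetical; needs a live server):

```sql
-- First page:
SELECT * FROM wan ORDER BY stime DESC LIMIT 50;
-- Next page: continue below the smallest stime of the previous page.
SELECT * FROM wan
WHERE stime < 41300          -- last stime seen (placeholder value)
ORDER BY stime DESC
LIMIT 50;
```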
Please provide me some help regarding how could I use cursor in following cases? :
I want to fetch 50 records at a time starting from largest stime.
Total no. of records in the "wan" table:
82019
pdb=# \d wan
        Table "wan"
  Column  |  Type  |
----------+--------+
          | bigint |
 b_outpkt | bigint |
Primary key: lan_pkey
Check constraints: "lan_stime" ((stime >= 0) AND (stime < 86400))
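A cursor-based sketch of the paging described above (needs a live server; the cursor name is arbitrary):

```sql
BEGIN;
DECLARE wan_cur CURSOR FOR
    SELECT * FROM wan ORDER BY stime DESC;
FETCH 50 FROM wan_cur;   -- the rows with the largest stime
FETCH 50 FROM wan_cur;   -- the next 50, and so on
CLOSE wan_cur;
COMMIT;
```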
On 4/10/06, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
Rajesh Kumar Mallah wrote:
>> what is the query
Hello,
I have difficulty in fetching records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than 1 hour, and that is slowing down the performance.
Please provide some help regarding improving the performance and how do I run q
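A generic first step for this kind of slowdown, sketched here (the column and index names are assumptions carried over from the earlier messages; needs a live server): index the filter column and check the plan.

```sql
CREATE INDEX wan_kname_idx ON wan (kname);
ANALYZE wan;
EXPLAIN ANALYZE SELECT * FROM wan WHERE kname = 'pluto';
```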