On Mon, 2007-07-23 at 05:34 -0400, Chuck Payne wrote:
>
> Hey,
>
> I have spent the last several days looking for a website or how-to
> that would show me how to call postgresql in a bash script. I know that
> in mysql I can do it like this
>
> for i in `cat myfile.txt` ; do mysql -uxxx -p -Ass
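The psql equivalent, as a sketch: -A (unaligned) and -t (tuples only) roughly
match mysql's -Ass, and -c runs a single command. The user, db, table, and
column below are made up; a ~/.pgpass file or PGPASSWORD supplies the password.

for i in `cat myfile.txt` ; do
    # one query per input value; adjust names to your schema
    psql -U myuser -d mydb -A -t -c "select name from widgets where id = '$i'"
done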
New to PG, just wondering if there's any way to say: I want a full
backup of DB-Sample. Can I just tar up the directory containing that
tablespace, copy it to another PG server, and then re-attach it?
Thanks
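A tarred tablespace directory isn't self-contained (the system catalogs that
describe it live elsewhere), so the portable route is a logical dump; a sketch:

# on the source server: custom-format (compressed) dump of one database
pg_dump -U postgres -Fc DB-Sample -f db_sample.dump
# on the target server: create an empty database and restore into it
createdb -U postgres DB-Sample
pg_restore -U postgres -d DB-Sample db_sample.dump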
Can anyone shed some light on this? I just would like to know if queries
for raw data (not aggregates) are expected to take a long time.
Running times are between 30 minutes and 2 hours for large dataset pulls.
They involve lots of joins on very large tables (min 1 million rows each
table, 300 columns per table).
On Fri, 2007-08-03 at 07:55 -0600, Josh Tolley wrote:
> On 8/3/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > Can anyone shed some light on this. I just would like to know if
> queries
> > for raw data (not aggregates) is expected to take a long time.
> > Running
On Wed, 2007-07-25 at 19:32 +0300, Devrim GÜNDÜZ wrote:
> Hi,
>
> On Sat, 2007-07-21 at 15:57 -0700, Steve Wampler wrote:
> > I need the Java and Python interfaces supplied with
> > (from 8.1.9):
> >
> >postgresql-jdbc-8.1.4-1.centos.1
> >postgresql-python-8.1.9-1.el4s1.1
>
> The actual
Hi,
Writing a script to pull data from SQL server into a flat-file (or just
piped in directly to PG using Perl DBI)
Just wondering if the copy command is able to do a replace if there is
existing data in the DB already. (This is usually the case with updates
to specific rows, and there be a time
I'm seeing an obstacle in my aim to migrate from mysql to PG mainly from
the manner in which PG handles duplicate entries either from primary
keys or unique entries.
Data is taken from perl DBI into (right now) CSV based files to be used
via psql's \copy command to insert into the table.
In MySql
On Tue, 2007-08-14 at 10:16 -0500, Scott Marlowe wrote:
> On 8/14/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> >
> > In MySql, I was using mysqlimport --replace which essentially provided
> > the means to load data into the DB, while at the same time, would
>
On Thu, 2007-08-23 at 16:42 +0800, Phoenix Kiula wrote:
> On 23/08/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> >
> > Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
> > for quite some time now. We had it hitting a pg7.2 db back in the
> > day, when hip kids rode around in r
On Tue, 2007-08-14 at 10:16 -0500, Scott Marlowe wrote:
> On 8/14/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > I'm seeing an obstacle in my aim to migrate from mysql to PG mainly from
> > the manner in which PG handles duplicate entries either from primary
On Mon, 2007-08-27 at 11:55 +0800, Ow Mun Heng wrote:
> I just ran into trouble with this. This rule seems to work when I do
> simple inserts, but as what I will be doing will be doing \copy
> bulkloads, it will balk and fail.
> Now would be a good time to teach me how to skin the cat
I'm trying to see if pgloader will make my work easier for bulkloads.
I'm testing it out and I'm stuck, basically because it can't find the
module TextReader or CSVreader.
Googling doesn't help, as there seems to be no reference to a module
named textreader or csvreader.
I'm on Python 2.4.4
Tha
On Mon, 2007-08-27 at 12:22 +0200, Dimitri Fontaine wrote:
> On Monday 27 August 2007, Ow Mun Heng wrote:
> > I'm trying to see if pgloader will make my work easier for bulkloads.
> > I'm testing it out and I'm stuck, basically because it can't find the
On Mon, 2007-08-27 at 11:27 +0200, Dimitri Fontaine wrote:
> We've just made some tests here with 2.2.1 and as this release contains the
> missing files, it works fine without any installation.
Yep.. I can confirm that it works.. I am using the csv example.
Goal : similar functionality much lik
Hi,
Does psql's \copy command run as a transaction? I think it does, but
somehow when I cancel (in a script) a running import, it "seems" like a
few lines/rows get inserted anyway (I can't seem to duplicate it on the
CLI, though)..
Is it possible to name a (composite) primary key myself, rather than
have pg default to table_name_pkey?
I tried something like
primary key pkey_table_short_form_name (a,b,c)
but it didn't work.
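The constraint can be named inline; a sketch (table and columns are made up):

CREATE TABLE t (
    a int,
    b int,
    c int,
    CONSTRAINT pkey_t PRIMARY KEY (a, b, c)  -- explicit name, not t_pkey
);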
On Mon, 2007-08-27 at 21:03 -0500, Erik Jones wrote:
> On Aug 27, 2007, at 8:50 PM, Ow Mun Heng wrote:
>
> > Is it possible to name a (composite) primary key myself, rather
> > than have pg default to table_name_pkey?
> >
> > I tried some
Continuing with my efforts to get functionality similar to mysql's
mysqlimport --replace, I want to ask for the list's opinion on which is
better.
What currently is happening (see the sketch after this list):
1. select from mssql (into CSV via PerlDBI)
2. psql\copy into PG
3. pg chokes on duplicate pkeys as there's no --replace o
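For reference, a minimal sketch of the staging-table pattern that usually
stands in for --replace on 8.2 (no native upsert yet; the table and key
names are assumptions):

BEGIN;
CREATE TEMP TABLE foo_staging (LIKE foo);
\copy foo_staging from 'data.csv' with csv
-- replace any rows that already exist, then load everything
DELETE FROM foo WHERE pkey IN (SELECT pkey FROM foo_staging);
INSERT INTO foo SELECT * FROM foo_staging;
COMMIT;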
On Tue, 2007-08-28 at 08:19 +0100, Richard Huxton wrote:
> Ow Mun Heng wrote:
> > Continuing with my efforts to get functionality similar to mysql's
> > mysqlimport --replace, I want to ask for the list's opinion on which is
> > better
>
> I would sugges
Is there a way to do a dump of a database using a select statement?
eg: \copy trd to 'file' select * from table limit 10
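8.2 added COPY (SELECT ...) TO, and psql's \copy takes the same form, so
a sketch using the names from the question would be:

\copy (select * from trd limit 10) to 'file.csv' with csv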
On Thu, 2007-08-30 at 09:14 +0200, A. Kretschmer wrote:
> On Thu, 30.08.2007 at 14:59:06 +0800, Ow Mun Heng wrote:
> > Is there a way to do a dump of a database using a select statement?
>
> A complete database or just a simple table?
a simple table.. couple millio
Hi all,
I'm sure some of you guys use Perl DBI to access PG. I need some
pointers (PG-specific, I guess).
1. Is it possible to execute queries to PG using multiple statements?
eg:
prepare("A")
bind_param($A)
execute()
prepare("BB")
bind_param($B)
execute()
prepare("CC")
bind_param($C)
execute()
right
On Mon, 2007-08-27 at 18:41 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > Does the psql's \copy command run as a transaction?
>
> Certainly.
>
> > I think it does, but
> > somehow when I cancel (in a script) a running import, "
Same query, executed twice, once with seqscan enabled and once
with it disabled. The difference is nearly night and day.
How can I persuade PG to use the index w/o resorting to setting seqscan
= false?
(actually, I don't know what the pros and cons are; I read posts from the
archives as far back as
On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> >
> > How can I persuade PG to use the index w/o resorting to setting seqscan
> > = false
>
> The usual knob to fiddle with is random_page_cost. If yo
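Following that advice, a session-level experiment might look like this (the
table name is from the plan further down; the column is a placeholder):

-- lower random_page_cost for this session only, then re-check the plan
-- (4.0 was the default at the time; lower values favour index scans)
SET random_page_cost = 2.0;
EXPLAIN ANALYZE SELECT * FROM drv WHERE some_indexed_col = 42;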
On Tue, 2007-09-04 at 10:06 +0800, Ow Mun Heng wrote:
> On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> > "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> > >-> Bitmap Heap Scan on drv (cost=30.44..4414.39
> > > rows=1291 wid
I just browsed to my $PGDATA location and noticed that there are some
tables whose files have an ending of .1
# ls -lahS | egrep '(24694|24702|24926)'
-rw------- 1 postgres postgres 1.0G Sep 3 22:56 24694
-rw------- 1 postgres postgres 1.0G Sep 3 22:52 24702
-rw------- 1 postgres postgres 1.0G Sep 3 22:5
Hi,
I'm running out of space on one of my partitions and I still have not
gotten all the data loaded yet. I've read that one could symlink the
pg_xlog directory to another drive. I'm wondering if I can do the
same for specific tables as well.
Thanks.
I've already done a pg_dump of the entire
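For the table-moving part, tablespaces do at the SQL level what the symlink
trick does by hand; a sketch (the path and table name are made up):

-- create a tablespace on the roomier drive and move a table into it
CREATE TABLESPACE spill LOCATION '/mnt/bigdrive/pgdata';
ALTER TABLE big_table SET TABLESPACE spill;
-- note: SET TABLESPACE rewrites the table and holds an exclusive lock meanwhile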
On Tue, 2007-09-04 at 05:15 +0100, Gregory Stark wrote:
> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
>
> > On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> >> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> >> >
> >>
On Tue, 2007-09-04 at 07:09 +0300, Mikko Partio wrote:
>
>
> On 9/4/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm running out of space on one of my partitions and I still
> have not
> gotten all the data l
On Tue, 2007-09-04 at 07:49 +0300, Mikko Partio wrote:
>
> No (changing tablespaces does not change your logical schema).
I just tested this "feature" with a temp table and it works as
advertised. (In the process of moving a table there now, actually.)
2nd question.. reading the docs, it says that
On Tue, 2007-09-04 at 00:51 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > I just browsed to my $PGDATA location and noticed that there are some
> > tables whose files have an ending of .1
>
> TFM has some useful background knowledge for that sort of thing:
Table is like
create table foo (
number int,
subset int,
value int
)
select * from foo;
number | subset | value
-------+--------+------
     1 |      1 |     1
     1 |      2 |     2
     1 |      3 |    10
     1 |      4 |     3
current query is like
select number,
avg(case when subset = 1 then value else null end) as v1,
avg(ca
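For reference, the full shape of that pivot as a sketch (a CASE with no
ELSE already yields NULL, which avg() ignores):

SELECT number,
       avg(CASE WHEN subset = 1 THEN value END) AS v1,
       avg(CASE WHEN subset = 2 THEN value END) AS v2,
       avg(CASE WHEN subset = 3 THEN value END) AS v3,
       avg(CASE WHEN subset = 4 THEN value END) AS v4
FROM foo
GROUP BY number;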
I have a table in PG, pulled from SQL Server using Perl DBI (w/o using
ChopBlanks), and have ended up with a column where the "space" is being
interpreted as a value.
eg:
"ABC " when it should be "ABC"
The column is defined as varchar(4).
I've already pulled the relevant columns with
create foo
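For what it's worth, the trailing blanks can also be stripped in place; a
sketch using the thread's (admittedly confusing) column name:

UPDATE foo
SET    number = rtrim(number)
WHERE  number <> rtrim(number);  -- touch only the rows that actually need it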
On Thu, 2007-09-06 at 14:13 +0800, Ow Mun Heng wrote:
> I have a table in PG, Pulled from SQL Server using Perl DBI (w/o using
> chopblanks) and have ended up with a column where the "space" is being
> interpreted as a value.
>
> eg:
>
> "ABC " when it s
On Thu, 2007-09-06 at 04:07 -0500, Ron Johnson wrote:
> On 09/06/07 01:13, Ow Mun Heng wrote:
> > update org_column set number = foo.number where foo.unique_id =
> > org_column.unique_id;
>
> Number? Where does "number" come from? Unless you've got weird
>
On Thu, 2007-09-06 at 11:08 +0200, Alban Hertroys wrote:
> Ow Mun Heng wrote:
> > I found 2 new ways to do this.
> >
> > option 1
> > ---
> >
> > create table foo as select unique_id, rtrim(number) as number from foo;
> > alter table add primary
On Thu, 2007-09-06 at 04:47 -0500, Ron Johnson wrote:
> Maybe there's an English language "issue", or maybe I'm just
> excessively picky, but using "number" in this context is confusing.
My Bad.. hehe..
> Then I agree with Alban:
> update table set number = trim(number);
> or, if you need the
On Thu, 2007-09-06 at 20:20 -0500, Michael Glaesemann wrote:
> On Sep 6, 2007, at 19:58 , Ow Mun Heng wrote:
>
> > Don't denormalise the table?
>
> Yes. Don't denormalize the tables.
I would believe performance would be better with it denormalised (in
this case
On Thu, 2007-09-06 at 20:19 -0700, Joe Conway wrote:
> Ow Mun Heng wrote:
> > => select code, round(avg(case when subset = '0' then value else null
> > end),0) as v0,
> > round(avg(case when subset = '1' then value else null end),0) as v1,
> > rou
On Fri, 2007-09-07 at 00:17 -0500, Erik Jones wrote:
> On Sep 6, 2007, at 10:54 PM, Ow Mun Heng wrote:
>
> > In either of the above, I would like to know which one is able to help
> > me to like connect to a remote DB, use the table there and join to a
> > local table in
I'm confused as to the difference between dblink and dbi-link.
dblink is included in the contrib directory and dbi-link is available
from pgfoundry.
dblink seems like it creates a view of a remote DB and is static, which
means that it needs to be refreshed each time.
dbi-link seems like it uses
On 9/7/07, Ron Johnson <[EMAIL PROTECTED]> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> On 09/06/07 20:53, Merlin Moncure wrote:
> [snip]
> >
> > arrays are interesting and have some useful problems. however, we
> > must first discuss the problems...first and foremost if you need to
> > read any
>Datahouse or "data warehouse"?
OLTP data warehouse.
On Fri, 2007-09-07 at 10:16 +0200, Albe Laurenz wrote:
> Ow Mun Heng wrote:
> True, you'll have to write some stuff that does something like:
> - pull the data over from the other database
> - replace the data in the local table
Already doing that currently for some data table
On Fri, 2007-09-07 at 00:18 -0500, Ron Johnson wrote:
> On 09/06/07 21:26, Ow Mun Heng wrote:
> I've not arrived at any conclusion but merely
> > exploring my options on which way would be the best to tread. I'm
> > asking the list because I'm new in PG and aft
On Thu, 2007-09-06 at 20:57 -0500, Michael Glaesemann wrote:
> On Sep 6, 2007, at 20:46 , Ow Mun Heng wrote:
> > I would believe performance would be better with it denormalised (in
> > this case)
>
> I assume you've arrived at the conclusion because you have
On Thu, 2007-09-06 at 21:53 -0400, Merlin Moncure wrote:
> On 9/6/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > Table is like
> >
> > create table foo (
> > number int,
> > subset int,
> > value int
> > )
> >
> > select * from
Nobody has any comments on this??
On Thu, 2007-09-06 at 12:22 +0800, Ow Mun Heng wrote:
> Table is like
>
> create table foo (
> number int,
> subset int,
> value int
> )
>
> select * from foo;
> number | subset | value
>      1 |      1 |     1
>      1 |      2
On Wed, 2007-08-22 at 20:41 +0100, Richard Huxton wrote:
> Murali Maddali wrote:
> > This is what I am doing, I am reading the data from SQL Server 2005 and
> > dumping to out to Postgresql 8.2 database.
My 2 cents.. I'm doing roughly the same thing, but I'm using perl and
DBI to do it.
> Fastest
On Thu, 2007-09-06 at 19:52 -0500, Michael Glaesemann wrote:
> On Sep 6, 2007, at 19:37 , Ow Mun Heng wrote:
>
> > Nobody has any comments on this??
>
> Don't do it.
Don't do what?
Don't denormalise the table?
Don't put them into arrays?
Thing is, end-re
New to PG and wondering about the extent of PG's table partitioning
capability.
I have a huge table, >18 million rows (growth rate ~8 million a week),
which I'd like to partition.
Initially the plan is to just partition it by date. eg: 1 partition per
month.
Now, I'm thinking if it's possible
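For the monthly scheme, the 8.x mechanism is inheritance plus CHECK
constraints; a minimal sketch with assumed names:

-- one child table per month; the CHECK lets the planner skip children
-- when constraint_exclusion is on
CREATE TABLE big_t (
    test_time timestamp NOT NULL,
    val       int
);
CREATE TABLE big_t_2007_09 (
    CHECK (test_time >= DATE '2007-09-01' AND test_time < DATE '2007-10-01')
) INHERITS (big_t);

SET constraint_exclusion = on;
SELECT count(*) FROM big_t WHERE test_time >= DATE '2007-09-15';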
On Wed, 2007-09-12 at 09:42 +0100, Richard Huxton wrote:
> Ow Mun Heng wrote:
> > New to Pg and wondering the extent of PG's table partitioning
> > capability.
> >
> > I have a huge table > 18 million rows(growth rate ~8 million a week)
> > which
On Mon, 2007-09-10 at 13:00 -0600, RC Gobeille wrote:
> Or this one:
> http://schemaspy.sourceforge.net/
Can't seem to get it to connect to PG using the example.
java -jar schemaSpy_3.1.1.jar -t pgsql -u operator -p operator -o
test_db -host localhost -db test_db
Using database properties:
[sc
I've created a new custom datatype
eg:
create type foo as (
id smallint,
data1 int,
data2 int
)
to view the datatype in psql I do
=> \d foo
what is the equivalent in pgadmin3 or any other SQL query tool?
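Any tool that runs plain SQL can read the same definition out of the
catalogs; a sketch (composite types have a pg_class row with relkind = 'c'):

SELECT a.attname,
       format_type(a.atttypid, a.atttypmod) AS data_type
FROM   pg_class c
JOIN   pg_attribute a ON a.attrelid = c.oid
WHERE  c.relname = 'foo'
AND    c.relkind = 'c'
AND    a.attnum > 0   -- skip system columns
ORDER  BY a.attnum;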
As I understand from the docs, I can't alter / change the datatype
definition. If I need
On Mon, 2007-09-17 at 09:21 +0800, Ow Mun Heng wrote:
> Hi,
>
> I want to use a SRF to return multi rows.
>
> current SRF is pretty static.
>
> create type foo_type as (
> id smallint,
> data1 int,
> data2 int
> )
>
> CREATE OR REPLACE FUNCTION foo_
On Mon, 2007-09-17 at 17:28 -0300, Roberto Spier wrote:
> Ow Mun Heng wrote:
> > I've created a new custom datatype
> > to view the datatype in psql I do
> > => \d foo
> >
> > what is the equivalent in pgadmin3 or any other SQL query tool?
> >
>
Just wondering how everyone is doing aggregation of production data.
Data volume runs in the vicinity of a few million rows a week.
What are the methods which will effectively provide the
min/max/average/count/stdev of the weekly sample size based on different
products/software mixes etc.
and still be
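As a baseline, a plain GROUP BY roll-up covers all of those statistics; a
sketch (table and column names are placeholders):

SELECT date_trunc('week', test_time) AS week_start,
       product,
       count(*)      AS n,
       min(value)    AS min_v,
       max(value)    AS max_v,
       avg(value)    AS avg_v,
       stddev(value) AS stdev_v
FROM   production_data
GROUP  BY 1, 2
ORDER  BY 1, 2;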
On Mon, 2007-09-17 at 22:14 -0700, Joshua D. Drake wrote:
> Phoenix Kiula wrote:
> > So a YES/NO question:
> >
> > Q: To get rid of index bloat, is a VACUUM ANALYZE enough? Or must I
> > reindex/cluster indexes?
>
> If you overrun your max_fsm_pages, no:
> else yes;
Maybe my English sucks, but
Just a short background.
Using Perl DBI to pull data from mssql into PG and \copy it into a temp
table, where the following is done.
my $query1 = "DELETE FROM $table_name
WHERE $unique_id in
(SELECT $unique_id from $table_name_loading)";
my $query2 = "INSERT INTO $table_n
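Spelled out, the two statements amount to the SQL below; running both inside
one transaction keeps the delete-then-insert replace atomic (table names
follow the script's variables):

BEGIN;
DELETE FROM foo
WHERE  unique_id IN (SELECT unique_id FROM foo_loading);
INSERT INTO foo SELECT * FROM foo_loading;
TRUNCATE foo_loading;  -- clear the staging table for the next batch
COMMIT;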
On Mon, 2007-09-17 at 09:42 +0800, Ow Mun Heng wrote:
> On Mon, 2007-09-17 at 09:21 +0800, Ow Mun Heng wrote:
> > Hi,
> >
> > I want to use a SRF to return multi rows.
> >
> > current SRF is pretty static.
> >
> > create type foo_type as
Hi,
I have 3 tables
foo
foo_loading_source1
foo_loading_source2
which is something like
create table foo (a int, b int, c int)
create table foo_loading_source1 (a int, b int, c int)
create table foo_loading_source2 (a int, b int, c int)
Is there an easier way to keep these 3 t
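One way is to derive the loading tables from foo so the definitions stay in
lock-step; a sketch:

-- option 1: copy foo's column list at creation time
CREATE TABLE foo_loading_source1 (LIKE foo INCLUDING DEFAULTS);
-- option 2: inherit, so a later ALTER TABLE foo ADD COLUMN propagates
CREATE TABLE foo_loading_source2 () INHERITS (foo);

With INHERITS, note that plain selects on foo also see the children's rows
unless you query FROM ONLY foo.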
Final specs for the server is just an ordinary desktop fitted w/ 3
7200rpm 500GB drives & 1 7200 80GB drive / 1 GB ram / 2G processor
(single core)
The number of records will be between 3 and 30 million rows.
Currently the process is
1. pull from mssql
2. \copy into PG temp table
3. insert into fina
On Tue, 2007-09-18 at 06:01 -0400, Bill Moran wrote:
> * (with newer version) reduce the fill factor and REINDEX
What is fill factor?
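Fill factor is how densely each index (or table) page is packed when it is
built; leaving headroom reduces page splits from later inserts. A sketch of
the 8.2-era syntax (index and table names made up):

-- rebuild an index leaving 30% free space per page
CREATE INDEX foo_a_idx ON foo (a) WITH (fillfactor = 70);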
On Tue, 2007-09-18 at 06:52 -0400, Geoffrey wrote:
> [EMAIL PROTECTED] wrote:
> > Robert Wickert would like to recall the message, "August Monthly
> > techdata split file printers for France and Denmark ".
>
> In my experience, attempting to 'recall' an email message is a fruitless
> endeavor.
Hi,
I want to use an SRF to return multiple rows.
My current SRF is pretty static:
create type foo_type as (
id smallint,
data1 int,
data2 int
)
CREATE OR REPLACE FUNCTION foo_func()
RETURNS SETOF foo_type AS
$BODY$
SELECT
TRH.ID,
TRH.data1,
TRH.data2
FROM D
On Tue, 2007-09-18 at 09:55 -0500, Scott Marlowe wrote:
> On 9/17/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
>
> > Just wondering how everyone is doing aggregration of production data.
>
> Where I work, we aggregate by the minute in the application, then dump
> t
On Wed, 2007-09-19 at 07:57 +0200, A. Kretschmer wrote:
> On Mon, 17.09.2007 at 9:21:22 +0800, Ow Mun Heng wrote:
> > CREATE OR REPLACE FUNCTION foo_func(fromdate timestamp, todate
> > timestamp, code text)
> > RETURNS SETOF foo AS
> > $BODY$
>
On Tue, 2007-09-18 at 09:56 +0100, Filip Rembiałkowski wrote:
> 2007/9/18, Ow Mun Heng <[EMAIL PROTECTED]>:
> > Hi,
> >
> > I have 3 tables
> >
> > foo
> > foo_loading_source1
> > foo_loading_source2
> >
> > which is something like
On Tue, 2007-09-18 at 08:37 -0400, [EMAIL PROTECTED] wrote:
> Ow Mun Heng wrote:
> > Hi,
> >
> > create table foo (a int, b int, c int)
> > create table foo_loading_source1 (a int, b int, c int)
> > create table foo_loading_source2 (a int, b int, c int)
> &g
On Wed, 2007-09-19 at 11:05 +0100, Filip Rembiałkowski wrote:
> 2007/9/19, Ow Mun Heng <[EMAIL PROTECTED]>:
>
> (...)
>
> > simulate a delete
> > => delete from parent where id in (select id from child);
> > DELETE 6
> >
> > => select * fr
On Wed, 2007-09-19 at 19:31 +0800, Ow Mun Heng wrote:
> On Wed, 2007-09-19 at 11:05 +0100, Filip Rembiałkowski wrote:
> > 2007/9/19, Ow Mun Heng <[EMAIL PROTECTED]>:
> >
> > (...)
> >
> > > simulate a delete
> > > => delete from par
On Wed, 2007-09-19 at 08:40 -0500, Ron Johnson wrote:
> > Yes, I think that it's a bit excessive but the company can afford it so why
> > not... :)
>
> Lucky SOB.
>
> I can't get my company to spring for a dual-core 2GB system with
> SATA drives.
>
hehe.. I'll end up running it on a low-end de
Anyone? I know this is a low-end server, so I have to make the best of
it..
On Tue, 2007-09-18 at 16:06 +0800, Ow Mun Heng wrote:
> Final specs for the server is just an ordinary desktop fitted w/ 3
> 7200rpm 500GB drives & 1 7200 80GB drive / 1 GB ram / 2G processor
> (single core
I've got 2 nearly identical SRFs to retrieve data from the DB (pg 8.2.4)
which go something like this
== Function 1 ==
CREATE OR REPLACE FUNCTION hot(fromdate timestamp without time zone, todate
timestamp without time zone, code text[])
RETURNS SETOF trh_hot
On Thu, 2007-09-20 at 07:55 -0500, Ron Johnson wrote:
> On 09/20/07 05:43, Ow Mun Heng wrote:
> > hehe.. I'll end up running it on a low-end desktop w/ 1GB ram and a
> > celeron 2G processor w/ ~30GB data/month.
>
> I probably would too, if I wasn't half-way a
On Sat, 2007-09-22 at 16:05 -0400, Robert Treat wrote:
> On Friday 21 September 2007 22:43, Merlin Moncure wrote:
> > On 9/21/07, Robert Treat <[EMAIL PROTECTED]> wrote:
> > > On Thursday 20 September 2007 18:38, Merlin Moncure wrote:
> > > > On 9/20/07, Collin <[EMAIL PROTECTED]> wrote:
> > > > >
On Tue, 2007-09-18 at 02:24 -0700, Trevor Talbot wrote:
> On 9/17/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
>
> > > > CREATE OR REPLACE FUNCTION foo_func(fromdate timestamp, todate
> > > > timestamp, code text)
>
> > > > LANGUAGE 'sql'
On Mon, 2007-09-24 at 00:18 -0400, Alvaro Herrera wrote:
> Ow Mun Heng wrote:
>
> > how can I debug or diagnose where the issue lies? Explain analyze
> > doesn't do much since this is a Function Scan anyway.
>
> Take them out of the function and EXPLAIN ANALYZE t
On Mon, 2007-09-24 at 02:31 -0400, Alvaro Herrera wrote:
> Ow Mun Heng wrote:
>
> > Okay.. I tried that, but it seems like there's an issue w/ the CASE
> > statements.
> >
> > When I tried the prepare w/
> >
> > AND (CASE WHEN $3 IS NULL THEN true
What's the best method for migrating data from my laptop to the final
server DB?
The data currently on my laptop is using SQL_ASCII encoding and the
deployment target defaults to UTF-8 (this should not be a concern, as
the encoding should be handled via the dump, right?).
However a concern wou
Input is of form
'ppp','aaa','bbb'
I want it to be stripped of quotes to become
ppp,aaa,bbb
escaping the quote would work but it means I will have to do some magic
on the input as well to escape it prior to replacing it.
select replace('AB\'A','\'','C') this works
Can I buy a clue here?
oh
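Two ways around the backslash gymnastics, as a sketch: standard SQL doubles
the quote, and dollar-quoting avoids escaping entirely.

SELECT replace('''ppp'',''aaa'',''bbb''', '''', '');  -- doubled-quote escape
SELECT replace($$'ppp','aaa','bbb'$$, $$'$$, '');     -- dollar-quoted, no escapes
-- both return: ppp,aaa,bbb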
Just wondering if anyone here uses Excel to connect to PG via ODBC.
I'm using it extensively as my platform to get data from PG/MSSQL
directly into Excel. (Excel uses the msqry32.exe file, which is like a
stripped-down SQL query tool and returns data directly into Excel.)
When using mssql, connecting
On Wed, 2007-10-10 at 09:11 +0200, Albe Laurenz wrote:
> Ow Mun Heng wrote:
> > Input is of form
> >
> > 'ppp','aaa','bbb'
> >
> > I want it to be stripped of quotes to become
> >
> > ppp,aaa,bbb
> >
> > e
On Wed, 2007-10-10 at 10:46 +0200, Albe Laurenz wrote:
> Ow Mun Heng wrote:
> >>> Input is of form
> >>>
> >>> 'ppp','aaa','bbb'
> >>>
> >>> I want it to be stripped of quotes to become
> >>
On Wed, 2007-10-10 at 10:22 -0500, Scott Marlowe wrote:
> On 10/10/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > Just wonder if anyone here uses Excel to connect to PG via ODBC.
> >
> > I'm using it extensively as my platform to get data from PG/MSSQL
> >
On Wed, 2007-10-10 at 12:01 -0400, Gauthier, Dave wrote:
> > -Original Message-
> >From: [EMAIL PROTECTED]
> [mailto:pgsql-general->[EMAIL PROTECTED] On Behalf Of Scott Marlowe
> >Sent: Wednesday, October 10, 2007 11:23 AM
> >To: Ow Mun Heng
> >Cc: pgs
On Mon, 2007-10-15 at 23:59 -0400, Robert James wrote:
> Can you clarify the need / value added for reporting tool, over just
> running queries and packaging the output in HTML (with a little CSS
> for styling, you can get near PDF quality). This can be done in SQL
> and a tad of PHP (or Ruby).
>
Does anyone from this list use pgagent from pgadmin?
It's a job scheduler much like cron which is sort of integrated w/
pgadmin3. The only issue I've found so far, which I don't quite like, is
that to be able to use pgagent, I have to also pull in the entire
pgadmin3 incl wxGTK and X. This i
On Tue, 2007-10-16 at 09:56 +0100, Dave Page wrote:
> Ow Mun Heng wrote:
> > Does anyone from this list here uses pgagent from pgadmin?
> >
> > it's a job scheduler much like cron which is sort of integrated w/
> > pgadmin3. Only issue which I've found so far
On Tue, 2007-10-16 at 10:35 +0100, Dave Page wrote:
> Ow Mun Heng wrote:
> > I've only have 1.4.3 version for centos. (and for gentoo)
> > so.. I have to use that version.
>
> We have RPMs for later versions on the pgAdmin site - do they not work?
I'm not sure.
On Tue, 2007-10-16 at 11:10 +0100, Dave Page wrote:
> Ow Mun Heng wrote:
> > I see.. and in the docs, I was told to import pgagent.sql into the
> > postgres DB so I did that. but actually. I want to deploy it to the
> > target DB, let's call it "MyDB"
Query2 is way faster mainly because the plan does not choose a seq scan
on a table w/ >20million rows.
The only difference between Query 1 and query 2 is that the
audio_key_dtime is chosen from a table rather than provided on the
query.
I'm not sure why this is the case and why it chooses such pla
On Wed, 2007-10-17 at 20:37 +0100, Richard Huxton wrote:
> Ow Mun Heng wrote:
> > Query2 is way faster mainly because the plan does not choose a seq scan
> > on a table w/ >20million rows.
> > The only difference between Query 1 and query 2 is that the
> > audio_ke
On Wed, 2007-10-17 at 22:47 -0300, Alvaro Herrera wrote:
> Ow Mun Heng wrote:
>
> > Index Cond: ((audit_key_dtime >= $0) AND (audit_key_dtime < $1))
> >
> >
> > Index Cond: ((audit_key_dtime >= '2007-08-08 18:00:00'::timestamp without
> >
On Thu, 2007-10-18 at 07:28 +0100, Richard Huxton wrote:
> Ow Mun Heng wrote:
> > On Wed, 2007-10-17 at 20:37 +0100, Richard Huxton wrote:
> >> Your query plans don't seem to match your queries. That makes it
> >> difficult to provide meaningful advice.
>
On Sun, 2007-10-21 at 22:54 -0400, Guy Rouillier wrote:
> The current app uses stored procedures
> for all inserts, and PG didn't do well with that approach; substituting
> embedded inserts fixed that problem. So PG can definitely "handle" very
Can you explain what embedded inserts are?
--
On Sun, 2007-10-21 at 20:35 -0700, Shwe Yee Than wrote:
> I'm new to postgresql. I'm running postgresql database on Linux
> platform. I just wanna know the command to view all the tables inside
> a specific database. Can anyone tell me?
\d would be what you use in psql
or just do a \? for he
I'm wondering if what I'm doing is redundant.
I have a primary key on columns (A,B,C,D)
and I've also defined an index based on the same columns (A,B,C,D)
and sometimes in the query explain, I see the pkey being used for the
scan instead of the index.
So.. That made me think perhaps the addition
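Since the primary key is enforced by its own unique index on exactly those
columns, the standalone index is indeed redundant; once confirmed, it can be
dropped (the index name here is hypothetical):

-- the PK's underlying index already covers (a,b,c,d)
DROP INDEX IF EXISTS foo_a_b_c_d_idx;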
On Sun, 2007-10-21 at 20:49 -0700, Joshua D. Drake wrote:
> Ow Mun Heng wrote:
> > I'm wondering if what I'm doing is redundant.
> >
> > I have a primary key on columns (A,B,C,D)
> > and I've also defined an index based on the same columns (A,B,C,D)
>