On Tue, 2008-08-12 at 08:38 -0700, Lennin Caro wrote:
> you can use a cron job
>
I have my cron set up to do database-wide vacuums each night, and it
usually takes between 4 and 6 hours on a ~200 GB database.
On days when there is heavy activity, it can drag on for 15+ hours.
I've recently dropped
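For reference, the nightly job described above is typically just a crontab entry driving vacuumdb; the schedule, paths, and log file below are made-up placeholders:

```
# nightly database-wide vacuum with analyze, 1am, run as the postgres user
0 1 * * *  /usr/bin/vacuumdb --all --analyze --quiet >> /var/log/pg_vacuum.log 2>&1
```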
-Original Message-
From: Scott Marlowe <[EMAIL PROTECTED]>
>If you throw enough drives on a quality RAID controller at it you can
>get very good throughput. If you're looking at read only / read
>mostly, then RAID5 or 6 might be a better choice than RAID-10. But
>RAID 10 is my default choice.
On Mon, 2008-08-18 at 11:01 -0400, justin wrote:
> Ow Mun Heng wrote:
> > -Original Message-
> > From: Scott Marlowe <[EMAIL PROTECTED]>
> >
> > > If you're looking at read only / read
> > > mostly, then RAID5 or 6 might be a better choice
On Tue, 2008-08-19 at 02:28 -0400, David Wilson wrote:
> On Fri, Aug 15, 2008 at 11:42 PM, Amber <[EMAIL PROTECTED]> wrote:
> > Dear all:
> >We are currently considering using PostgreSQL to host a read-only
> > warehouse,
> we would like to get some experiences, best practices and performance
-Original Message-
From: johnf <[EMAIL PROTECTED]>
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] schema name in SQL statement.
Date: Tue, 19 Aug 2008 22:25:14 -0700
On Tuesday 19 August 2008 10:06:55 pm Scott Marlowe wrote:
> On Tue, Aug 19, 2008 at 10:53 PM, johnf <[EMAIL PROTEC
Hi,
I want to find out if there's a method to change this
select to_char('1 day 09:18:42.37996'::interval,'HH24:MI:SS')
to something like
24+9 hours = 33:18:42, instead of it being returned as 09:18:42
I've not found a way to do this (yet)
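One workaround (there is no to_char format that folds days into hours): compute the hour count from the interval's epoch yourself and let to_char format the minutes and seconds. A sketch:

```sql
-- fold days into hours: 1 day 09:18:42 -> 33:18:42
SELECT floor(extract(epoch FROM i) / 3600)::int || to_char(i, ':MI:SS') AS hhmmss
  FROM (SELECT '1 day 09:18:42.37996'::interval AS i) AS s;
-- hhmmss = 33:18:42
```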
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
On Thu, 2008-08-21 at 11:53 +0200, Tomasz Ostrowski wrote:
> On 2008-08-21 11:09, Ow Mun Heng wrote:
>
> > I want to find out if there's a method to change this
> > select to_char('1 day 09:18:42.37996'::interval,'HH24:MI:SS')
> > to something like
CREATE INDEX idx_d_trh_code_id_partial
ON xmms.d_trh_table
USING btree
(code_id) where code_id not in ('P000','000') and code_id is not null;
ERROR: functions in index predicate must be marked IMMUTABLE
Just trying something new.
I want to create partial indexes on code_id values which are not null.
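A hedged workaround worth trying (the thread below suggests the error hinges on code_id's declared type): spell the predicate as plain <> comparisons instead of NOT IN:

```sql
-- same predicate spelled with explicit comparisons; whether this avoids
-- the IMMUTABLE error depends on code_id's actual type (see the thread)
CREATE INDEX idx_d_trh_code_id_partial
    ON xmms.d_trh_table USING btree (code_id)
 WHERE code_id IS NOT NULL
   AND code_id <> 'P000'
   AND code_id <> '000';
```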
On Mon, 2008-08-25 at 12:23 +0800, Ow Mun Heng wrote:
> CREATE INDEX idx_d_trh_code_id_partial
> ON xmms.d_trh_table
> USING btree
> (code_id) where code_id not in ('P000','000') and code_id is not null;
> ERROR: functions in index predicate must be marked IMMUTABLE
On Mon, 2008-08-25 at 10:18 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > On Mon, 2008-08-25 at 12:23 +0800, Ow Mun Heng wrote:
> >> CREATE INDEX idx_d_trh_code_id_partial
> >> ON xmms.d_trh_table
> >> USING btree
> >>
On Tue, 2008-08-26 at 00:58 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > On Mon, 2008-08-25 at 10:18 -0400, Tom Lane wrote:
> >> I suppose code_id is varchar or some such?
>
> > Yep
> > After a few more investigations into the usefulness
I posed this question to the Slony List as well, but no response yet.
I'll post it here as well, to elicit some responses, as there's a larger
community of people using PG+Slony who may also be on slony-list.
The question that I have is, I'm trying to determine if there's a
possibility that I can
On Tue, 2008-09-02 at 22:56 -0400, Robert Treat wrote:
> On Tuesday 02 September 2008 17:21:12 Asko Oja wrote:
> > On Tue, Sep 2, 2008 at 2:09 AM, Michael Nolan <[EMAIL PROTECTED]> wrote:
> > > Oracle handles connecting to multiple databases (even on multiple/remote
> > > computers) fairly seamlessly
Hi,
I frequently query PG for between 10k and 65k rows of data and was
wondering if I should be considering the use of cursors.
I'm not too well versed in their purpose, but based on the docs they are
supposed to be more efficient and also give the impression of
responsiveness.
Currently, when I do t
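For batches in the 10k-65k range, the basic cursor pattern is short; the query and batch size here are placeholders:

```sql
BEGIN;
DECLARE big_rows CURSOR FOR
    SELECT * FROM some_table WHERE some_flag = true;  -- hypothetical query
FETCH FORWARD 5000 FROM big_rows;  -- first batch; repeat until 0 rows return
CLOSE big_rows;
COMMIT;
```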
Hi, I'd appreciate it if someone could help shed some light on what I may be
doing wrong.
I know there are caveats on using constraint exclusion to reduce the number
of partitions scanned.
pg:8.2.9
create table test (
code varchar,
dummy_col1 int,
dummy_col2 int
)
create table test_experimental_code (
code
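For context, the usual 8.2-era shape of this test, with the main caveat: exclusion only fires for plan-time constants. The child table's CHECK and the 'EXP' value are assumptions based on the names above:

```sql
-- child carries a CHECK the planner can compare against the WHERE clause
CREATE TABLE test_experimental_code (
    CHECK (code = 'EXP')
) INHERITS (test);

SET constraint_exclusion = on;
EXPLAIN SELECT * FROM test WHERE code = 'EXP';  -- non-matching children pruned
-- a parameter (code = $1) or a join key defeats exclusion, a common 8.2 caveat
```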
Hi,
I'm playing around with triggers to implement partitioning.
I hit an error I don't understand, and I don't have internet access here
at work to look up the cause:
ERROR: stack depth limit exceeded
I see that max_stack_depth is one of the options in postgresql.conf, but I
don't know exactly what
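The usual cause of that error in partitioning triggers is the trigger function inserting back into its own table and firing itself recursively; a sketch of the non-recursive shape (table and child names are made up):

```sql
CREATE OR REPLACE FUNCTION parent_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- insert into the child directly; inserting back into the parent
    -- would fire this trigger again until the stack limit is hit
    INSERT INTO parent_child_a VALUES (NEW.*);
    RETURN NULL;  -- suppress the row landing in the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parent_partition_ins
    BEFORE INSERT ON parent_table
    FOR EACH ROW EXECUTE PROCEDURE parent_insert_trigger();
```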
On Fri, 2008-09-05 at 10:35 +0200, Magnus Hagander wrote:
> Ow Mun Heng wrote:
> > Hi,
> >
> > I'm playing around with triggers to implement partitioning.
> > I hit something which I don't know what and I don't have internet here
> > at work to
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Friday, September 05, 2008 11:22 PM
To: Magnus Hagander
Cc: Ow Mun Heng; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_stack_depth Exceeded
Magnus Hagander <[EMAIL PROTECTED]> writes:
> Ow Mun H
Hi,
I'm (more than a bit) confused as to the difference between the TEXT and
varchar data types.
AFAIK, varchar will have a max limit of, if I'm not mistaken, ~65k chars? But
TEXT is more like a BLOB, and there's supposed to be no limit?
Anyhow, searching the archives (in my mail client - no internet
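For what it's worth: in PostgreSQL, text and unconstrained varchar behave identically (the ~65k limit is a MySQL-ism; a single value can be up to roughly 1 GB), and only varchar(n) enforces a declared length. A quick test, with made-up table and column names:

```sql
-- only varchar(n) enforces a length; text and bare varchar do not
CREATE TEMP TABLE t_demo (a varchar(5), b text);
INSERT INTO t_demo VALUES ('12345', 'any length is fine here');  -- ok
INSERT INTO t_demo VALUES ('123456', 'x');
-- ERROR: value too long for type character varying(5)
```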
Quoted from
http://www.nabble.com/Group-Sets-For-Replication-w--Partition-d19369646.html
> Basically, what you would need to do, weekly, is to run a slonik
> "execute script" script where the SQL script consists of something
> similar to what's in tests/testpartition/gen_ddl_sql.sh:
> -> crea
Sorry for the noise.
PS: I can't count how many times I've banged my head against the wall on
these sorts of things, only to solve the problem right after deciding to send
an email to the list asking for help. :-)
-Original Message-
From: Ow Mun Heng
Sent: Saturday, October 18, 2008 4:07 PM
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Vick Khera
> Here are two (not necessarily mutually exclusive) options for you:
>1) use triggers/rules on the master table and do all your inserts
>directed to it and hav
On Mon, 2009-01-12 at 14:42 +0800, Phoenix Kiula wrote:
> > ALTER COLUMN TYPE is intended for cases where actual transformation of
> > the data is involved. Obviously varchar(20) to varchar(35) doesn't
> > really require any per-row effort, but there's no operation in the
> > system that handles t
>>On Behalf Of Asko Oja
>>Hello
>>We use plProxy (RUN ON ALL) to run queries in parallel.
>>We split our database into 16 shards and distributed it over 4 servers.
>>So now we are running queries on 16 cpu's in parallel :)
Wow... query time improved how many fold? Any idea?
I don't think I understand how PG implements full-text searching, or whether
my search actually needs to use full-text search.
Basically, I have a btree index on a SERIAL_NUMBER, which holds values like
ABC12345, AAA123434 or AAB131441.
I would like to search on specific text within the SERIAL_NUMBER
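For anchored prefixes, a plain btree can already serve this without full-text search; a sketch against a hypothetical parts table, using the text_pattern_ops operator class so LIKE can use the index even in a non-C locale:

```sql
-- btree opclass that supports LIKE 'ABC%' regardless of locale
CREATE INDEX idx_sn_prefix ON parts (serial_number text_pattern_ops);

SELECT * FROM parts WHERE serial_number LIKE 'ABC%';    -- can use the index
SELECT * FROM parts WHERE serial_number LIKE '%1234%';  -- cannot: unanchored
```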
I was trying to clean up my database and after 2 days of vacuum full on
a 20GB table, I gave up and used pg_reorg which is seriously fast.
However, now I'm not sure if it is the cause of my unstable DB (8.2.13).
I can connect and initiate a simple query, and the DB will go down.
The logs say:
The post
Hi,
Currently doing some level of aggregate tables for some data. These
data will be used for slice/dice activity, and we want to be able to
play with/manipulate the data such that I can get mean and stddev figures.
Eg: For each Original Column eg:
population_in_town : (I get derivatives)
- mean # of
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
On Tue, May 12, 2009 at 01:23:14PM +0800, Ow Mun Heng wrote:
>> | sum of count | sum_of_count_squared | qty | qty < 100 | qty < 500 |
>>
>>
>> I'm thinking of lump
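If the rollup keeps sum(x) and sum(x*x) per group alongside the count, sample stddev can be recovered later without the raw rows; a sketch against a hypothetical agg_table(grp, n, sum_x, sum_sq):

```sql
-- sample stddev from pre-aggregated sums:
-- stddev = sqrt( (sum_sq - sum_x^2 / n) / (n - 1) )
SELECT grp,
       sqrt((sum_sq - sum_x * sum_x / n) / (n - 1)) AS stddev_samp
  FROM agg_table
 WHERE n > 1;
```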
Is there a method to do this without traversing the whole 20GB table?
What about manipulating the pg_attribute table and changing atttypid just
like we can manipulate atttypmod to change from varchar(4) to varchar(8)?
Thanks
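For the record, the atttypmod trick alluded to looks like the following. It only works for widening a varchar length (atttypmod stores the declared length plus a 4-byte header), it is unsupported, and changing atttypid to a different type is NOT safe this way, since the on-disk representations differ:

```sql
-- unsupported catalog hack: widen varchar(4) to varchar(8) in place.
-- atttypmod = declared length + VARHDRSZ (4). Take a backup first.
UPDATE pg_attribute
   SET atttypmod = 8 + 4
 WHERE attrelid = 'my_table'::regclass   -- hypothetical table/column
   AND attname  = 'my_col';
```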