Thank you all very much for your thorough replies. To be honest, I had not
heard of many of the things you mention before.
It seems that clustering by index must be the way to go. I now have to
check whether most queries will benefit from it.
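For reference, a minimal sketch of the clustering idea (table and index
names here are illustrative, not from the thread):

  -- CLUSTER physically reorders the table to match the index, once;
  -- it is not maintained automatically, so it needs re-running as rows churn.
  CLUSTER orders USING orders_customer_idx;  -- 8.3+ syntax
  -- pre-8.3 syntax: CLUSTER orders_customer_idx ON orders;
  ANALYZE orders;  -- refresh planner statistics afterwards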
I do not directly control the installation parameters.
I've built some functions purely for encapsulation and to avoid passing
the same parameter over and over.
I'd like to access the result from Excel, but it seems (and I'm not sure
this is the definitive answer) that Excel can only pull data from views
and tables.
- can someone confirm?
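If it helps, a minimal sketch of the usual workaround, assuming a
set-returning function (the names below are made up): wrap the function in
a view, so clients that only see tables and views can still query it.

  CREATE VIEW report_for_excel AS
      SELECT * FROM my_report_function(42);  -- parameter is frozen in the view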
On Tue, 26 May 2009 11:41:50 +0200
Ivan Sergio Borgonovo wrote:
> I've built some functions purely for encapsulation and to avoid
> passing the same parameter over and over.
>
> I'd like to access the result from Excel, but it seems (and I'm not
> sure this is the definitive answer) that Excel can only pull data
> from views and tables.
Hi,
Sergey Samokhin writes:
> The problem I'm talking about in this message relates to how we deal
> with extremely large amounts of data being inserted into the DB. Recently I
> switched to using buffers.
You might appreciate this blog entry:
http://www.depesz.com/index.php/2007/07/05/how-to-insert-
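For flavor, a minimal sketch of the kind of batching such posts recommend
(table and column names are illustrative):

  BEGIN;
  -- multi-row INSERT (8.2+): one statement carries many rows
  INSERT INTO measurements (ts, val) VALUES
      ('2009-05-26 10:00', 1.5),
      ('2009-05-26 10:01', 1.7),
      ('2009-05-26 10:02', 1.4);
  COMMIT;

  -- or, faster still, COPY (server-side file; use \copy in psql
  -- for a client-side file):
  COPY measurements (ts, val) FROM '/tmp/measurements.tsv';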
On 2009-05-22, Brandon Metcalf wrote:
> g == gryz...@gmail.com writes:
>
> g> you should use it whenever you need the DB to keep its own key internally.
> g> An advantage of a sequence is also the fact that you can have the sequence
> g> value used on different columns/tables.
>
> g> My rule of thumb
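A minimal sketch of that point about sharing one sequence across tables
(names illustrative):

  CREATE SEQUENCE global_id_seq;

  CREATE TABLE customers (
      id integer PRIMARY KEY DEFAULT nextval('global_id_seq'),
      name text
  );

  CREATE TABLE suppliers (
      id integer PRIMARY KEY DEFAULT nextval('global_id_seq'),
      name text
  );
  -- ids are unique across both tables because both draw from one sequence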
On 2009-05-22, Brandon Metcalf wrote:
> Assume I have an UPDATE statement that looks like
>
> UPDATE foo
>   SET
>     pattern = '$pattern',
>     shape = '$shape',
>     length = $length,
>     comment = '$comment'
>   WHERE foo_id = $foo_id
>
> and length is defined as NUMERIC.
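The trouble, as the replies below make clear, is the case where $length
interpolates to an empty string, which is not a valid NUMERIC literal.
Besides the CASE approach suggested further down, a hedged sketch of one
alternative is to fold the empty string back to the current value:

  UPDATE foo
    SET
      pattern = '$pattern',
      shape = '$shape',
      -- keep the existing value when '$length' interpolates to '':
      length = COALESCE(NULLIF('$length', '')::numeric, length),
      comment = '$comment'
    WHERE foo_id = $foo_id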
On 2009-05-21, Timmy wrote:
> Yes, most of your guesses are correct.
> I'm using PostgreSQL 8.3.x and MS Windows 2000.
> The compressed string is saved to the table in binary
> format using psycopg. I set the table field to the bytea data type.
> I want to use plpythonu to decompress the sto
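A minimal sketch of such a function, assuming zlib compression and a
PL/Python version that hands bytea to Python as raw bytes (on older
releases the value may arrive in bytea's escaped text form and need
unescaping first):

  CREATE OR REPLACE FUNCTION decompress_bytea(data bytea) RETURNS text AS $$
      import zlib
      return zlib.decompress(data)  # assumes zlib-compressed input
  $$ LANGUAGE plpythonu;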
j == ja...@xnet.co.nz writes:
j> On 2009-05-22, Brandon Metcalf wrote:
j> > Assume I have an UPDATE statement that looks like
j> >
j> > UPDATE foo
j> >   SET
j> >     pattern = '$pattern',
j> >     shape = '$shape',
j> >     length = $length,
j> >     comment = '$comment'
PG 8.1.17
For a given week number (2009w22) I need to calculate the beginning and ending
dates that make up that particular week of the year. I want to use the
beginning/ending dates as part of a CHECK constraint on an inherited table, with
each child table based on a week of the year.
This is
On May 26, 2009, at 6:37 PM, Brandon Metcalf wrote:
j> option 2: case when '$length' = '' ...
j> you can use case like this:
j> UPDATE foo
j>   SET
j>     pattern = '$pattern',
j>     shape = '$shape',
j>     length = case when '$length' = '' then length else '$length' end,
d == dal...@solfertje.student.utwente.nl writes:
d> On May 26, 2009, at 6:37 PM, Brandon Metcalf wrote:
d> > j> option 2: case when '$length' = '' ...
d> >
d> > j> you can use case like this:
d> >
d> > j> UPDATE foo
d> > j>   SET
d> > j>     pattern = '$pattern',
d> > j>     s
So for the calculated week value (i.e. 2009w22) I need to be able to
calculate the first and last day of the week (05/25/2009 and
05/31/2009). Is there a clean / fairly easy way to do this? I can
think of doing some string comparisons and walking through date values
to figure it out but was
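On versions where to_date() handles the ISO week fields reliably (8.1 is
doubtful, which is presumably why the date arithmetic further down was
used), a sketch; ISO weeks run Monday to Sunday, matching the range above:

  SELECT to_date('2009w22', 'IYYY"w"IW')     AS week_start,  -- 2009-05-25
         to_date('2009w22', 'IYYY"w"IW') + 6 AS week_end;    -- 2009-05-31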
On May 26, 2009, at 8:03 PM, Keaton Adams wrote:
PG 8.1.17
For a given week number (2009w22) I need to calculate the beginning
and ending dates that make up that particular week of the year. I
want to use the beginning/ending dates as part of a CHECK constraint
on an inherited table, with
Brandon Metcalf writes:
> d == dal...@solfertje.student.utwente.nl writes:
> d> On May 26, 2009, at 6:37 PM, Brandon Metcalf wrote:
> d> > The issue here is that these reduce back to my original problem. For
> d> > example, if I use a CASE statement and I fall through to the ELSE,
> d> > then
"A deep unwavering belief is a sure sign that you're missing something."
-- Unknown
I had no intention of sparking an ideological discussion.
I read Joe's article referenced previously - a simple case for using a
normalized database. I
This looks great and is a much easier solution to the problem than what I had
planned.
Thanks!
Keaton
mydb=# select ( date('2009-01-01') + ('1 day'::interval * (21-1)*7)) - ('1 day'::interval * (select 7 - extract(dow from timestamp '2009-01-01')));
  ?column?
------------
 2009-05-18
On 26/05/09 20:48, Benjamin Smith wrote:
> "A deep unwavering belief is a sure sign that you're missing
> something." -- Unknown
>
> I had no intention of sparking an ideological discussion.
I know, my apologies for going off-topic. I just had a déjà vu when I
saw Celko's article about EAV disast
t == t...@sss.pgh.pa.us writes:
t> Brandon Metcalf writes:
t> > d == dal...@solfertje.student.utwente.nl writes:
t> > d> On May 26, 2009, at 6:37 PM, Brandon Metcalf wrote:
t> > d> > The issue here is that these reduce back to my original problem. For
t> > d> > example, if I use a CASE
Alban Hertroys wrote:
> I recall using generate_series() and EXTRACT(week FROM ...) to populate
> the table in one pass for several years, but I don't have the exact
> incantation at hand now. I'd have to experiment a bit to get that back
> again, I don't have access to it anymore.
There's som
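A rough, hedged reconstruction of that incantation (extract(isoyear ...)
needs 8.3+, so the year column would need different handling on 8.1):

  -- 2008-12-29 is the Monday starting ISO week 2009-01; 370 days later,
  -- 2010-01-03, is the Sunday ending ISO week 2009-53.
  SELECT extract(isoyear FROM d)::int AS iso_year,
         extract(week FROM d)::int    AS iso_week,
         min(d)                       AS week_start,
         max(d)                       AS week_end
  FROM (SELECT date '2008-12-29' + g AS d
        FROM generate_series(0, 370) AS g) AS days
  GROUP BY 1, 2
  ORDER BY 1, 2;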
Hi all,
I have a table with more than 10 million records in PostgreSQL, which is
running on Windows 2003. During the night, nobody searches the database.
In the morning, when people start the application, it takes more than 30
seconds to get the database back. After several runs of the same search, the
zxo102 ouyang wrote:
Hi all,
I have a table with more than 10 million records in PostgreSQL,
which is running on Windows 2003. During the night, nobody searches
the database. In the morning, when people start the application, it
takes more than 30 seconds to get the database back. After several tim
While debugging applications interacting with pg, the log file (under
pg_log) gets quite long. The log file typically has a name such as
postgresql-2009-05-26_00.log; sometimes a new file will be
generated on the same day and the portion of the name that is all
zeroes will change to a larger
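Those names come from log_filename; a sketch of the postgresql.conf
settings involved (values illustrative; the collector setting was called
redirect_stderr before 8.3):

  logging_collector = on
  log_directory = 'pg_log'
  log_filename = 'postgresql-%Y-%m-%d_%H.log'  # %H supplies the trailing _00
  log_rotation_age = 1d     # start a new file daily...
  log_rotation_size = 10MB  # ...or sooner, when the file reaches this size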
ray wrote:
While debugging applications interacting with pg, the log file (under
pg_log) gets quite long. The log file typically has a name such as
postgresql-2009-05-26_00.log; sometimes a new file will be
generated on the same day and the portion of the name that is all
zeroes will change
On Tue, May 26, 2009 at 5:57 PM, zxo102 ouyang wrote:
> Hi all,
> I have a table with more than 10 million records in PostgreSQL, which is
> running on Windows 2003. During the night, nobody searches the database.
> In the morning, when people start the application, it takes more than 30
> seconds
On Tue, 26 May 2009, Scott Marlowe wrote:
Also, in the morning, have a cron job crank up that does "select * from
mybigtable" for each big table to load it into cache.
Just to clarify: on 8.3 and later versions, doing this doesn't do what
some people expect. Sequential scans like that will
Greg Smith writes:
> On Tue, 26 May 2009, Scott Marlowe wrote:
>> Also, in the morning, have a cron job crank up that does "select * from
>> mybigtable" for each big table to load it into cache.
> Just to clarify: on 8.3 and later versions, doing this doesn't do what
> some people expect. Seq
On Tue, May 26, 2009 at 7:43 PM, Tom Lane wrote:
> Greg Smith writes:
>> On Tue, 26 May 2009, Scott Marlowe wrote:
>>> Also, in the morning, have a cron job crank up that does "select * from
>>> mybigtable" for each big table to load it into cache.
>
>> Just to clarify: on 8.3 and later versions
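A minimal sketch of the cron-job idea, with the caveat above in mind
(names illustrative); count(*) still reads the whole table but avoids
shipping every row to the client, and on 8.3+ it mostly warms the OS
cache rather than shared_buffers:

  -- from cron, e.g.:  0 6 * * *  psql -d mydb -c 'SELECT count(*) FROM mybigtable'
  SELECT count(*) FROM mybigtable;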
I won't go into details about why I'm using this field as an array, but
how would I select all the rows that have the first name 'Tom' in
the 'names' field?
CREATE TABLE test (
    id integer,
    names character varying[]
);
INSERT INTO test VALUES (1, '{"''Josh Berkus''","''Peter Eisentraut''",
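Assuming "first name" means the first array element, a sketch (note that
if the stored elements really do carry the embedded quotes shown above,
the literals would need them too):

  -- rows whose first element is 'Tom' (PostgreSQL arrays are 1-based):
  SELECT * FROM test WHERE names[1] = 'Tom';

  -- or rows where 'Tom' appears anywhere in the array:
  SELECT * FROM test WHERE 'Tom' = ANY (names);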