> The most efficient way for you to store data would be like this:
> main table
> id address
> 3 67.92
> 7 69.5
>
> child table
> id value_type value
> 3 uptime 0.3
> 3 memory 37
> 7 uptime 1.1
> 7 memory 15
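A minimal sketch of that layout as PostgreSQL DDL (only the table and
column names above come from the example; the types and the foreign key
are assumptions):

  CREATE TABLE main (
      id      integer PRIMARY KEY,
      address numeric                -- e.g. 67.92
  );

  CREATE TABLE child (
      id         integer REFERENCES main (id),
      value_type text,               -- 'uptime', 'memory', ...
      value      numeric
  );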
Actual
of maintainability was greatly enhanced.
Thomas
Jason Hihn wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Greg
Spiegelberg
Sent: Wednesday, October 08, 2003 3:11 PM
To: PgSQL Performance ML
Subject: Re: [PERFORM] Compare rows
Josh Berkus wrote
Greg Spiegelberg wrote:
Josh Berkus wrote:
As you can see, the NULLs are not stored, making this system much more
efficient on storage space.
Tomorrow I'll (hopefully) write up how to query this for
comparisons. It would help if you gave a little more detail about
what specific comparison
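Until that write-up, one plausible comparison query over the child table
sketched above might look like this (a guess at the shape of the query,
not Josh's actual method):

  -- Which metrics differ between id 3 and id 7, NULL-safely.
  SELECT value_type,
         a.value AS value_3,
         b.value AS value_7
  FROM  (SELECT value_type, value FROM child WHERE id = 3) AS a
  FULL OUTER JOIN
        (SELECT value_type, value FROM child WHERE id = 7) AS b
        USING (value_type)
  WHERE a.value IS DISTINCT FROM b.value;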
Per Josh's recommendation to implement a Vertical Child Table, I came
up with 3 possible tables to handle the 3 possible value types: varchar,
numeric, and bigint. Each table has 7 columns: 1 to denote the time the
data was collected, 4 which identify where the data came from, 1 to
tell me the value
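A hedged sketch of what one of those three tables might look like (the
identity columns here are hypothetical placeholders, since the message
is cut off before Greg names them):

  CREATE TABLE metric_numeric (
      collected_at timestamptz NOT NULL,  -- when the data was collected
      system_id    integer     NOT NULL,  -- four columns identifying where
      device       text        NOT NULL,  -- the data came from; the real
      subsystem    text        NOT NULL,  -- names are not given in the
      instance     text        NOT NULL,  -- message
      metric_name  text        NOT NULL,  -- which metric this row holds
      value        numeric(24,12)         -- the value itself
  );
  -- metric_bigint and metric_varchar would differ only in the type of
  -- the value column (int8 and varchar(32) respectively).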
Josh Berkus wrote on Thu, 09.10.2003 at 08:36:
> Chris,
> > The need to do a lot of joins would likely hurt performance somewhat,
> > as well as the way that it greatly increases the number of rows.
> > Although you could always split it into several tables, one for each
> > "value_type", and UNIO
Josh Berkus wrote:
Greg,
You lost me on that one. What's a "vertical child table"?
Currently, you store data like this:
id  address  uptime  speed  memory  tty
3   67.92    0.3     11.2   37      6
7   69.5     1.1     NULL   15
Christopher Browne wrote:
Wow, that takes me back to a paper I have been looking for for
_years_.
Some time in the late '80s, probably '88 or '89, there was a paper
presented in Communications of the ACM that proposed using this sort
of "hypernormalized" schema as a way of having _really_ narrow sc
The world rejoiced as [EMAIL PROTECTED] (Josh Berkus) wrote:
> Chris,
>> Some time in the late '80s, probably '88 or '89, there was a paper
>> presented in Communications of the ACM that proposed using this sort
>> of "hypernormalized" schema as a way of having _really_ narrow schemas
>> that would
Chris,
> Some time in the late '80s, probably '88 or '89, there was a paper
> presented in Communications of the ACM that proposed using this sort
> of "hypernormalized" schema as a way of having _really_ narrow schemas
> that would be exceedingly expressive. They illustrated an example of
> The
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Josh Berkus)
transmitted:
> child table
> id  value_type  value
> 3 uptime 0.3
> 3 speed 11.2
> 3 memory 37
> 3 tty 6
> 7 uptime 1.1
> 7 m
Greg,
> You lost me on that one. What's a "vertical child table"?
Currently, you store data like this:
id  address  uptime  speed  memory  tty
3   67.92    0.3     11.2   37      6
7   69.5     1.1     NULL   15      NULL
9   65.5
Greg,
On Wed, Oct 08, 2003 at 03:07:30PM -0400, Greg Spiegelberg wrote:
> Dror,
>
> I gave this some serious thought at first. I only deal with
> int8, numeric(24,12) and varchar(32) columns which I could
> reduce to 3 different tables. Problem was going from 1700-3000
I'm not sure how the dat
Here is what I think you can use:
One master table without duplicates and one ancillary table with
duplicates for the day.
Insert the result of the select from the ancillary table into the master
table, then truncate the ancillary table.
select distinct on ({all the fields except day}) * from table
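Spelled out as SQL, that workflow might look like this (a sketch with
hypothetical table and column names; col1..col3 stand in for "all the
fields except day"):

  -- Keep one row per distinct combination of the non-day columns,
  -- then empty the ancillary table for the next day's load.
  INSERT INTO master
      SELECT DISTINCT ON (col1, col2, col3) *
        FROM ancillary
       ORDER BY col1, col2, col3, day;

  TRUNCATE TABLE ancillary;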
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Greg
> Spiegelberg
> Sent: Wednesday, October 08, 2003 3:11 PM
> To: PgSQL Performance ML
> Subject: Re: [PERFORM] Compare rows
>
>
> Josh Berkus wrote:
> > Gre
Josh Berkus wrote:
Greg,
The data represents metrics at a point in time on a system for
network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
speed, and whatever else can be gathered.
We arrived at this one 642 column table after testing the whole
process from data gathering, method
Dror,
I gave this some serious thought at first. I only deal with
int8, numeric(24,12) and varchar(32) columns which I could
reduce to 3 different tables. Problem was going from 1700-3000
rows to around 300,000-1,000,000 rows per system per day that
is sending data to our database.
BTW, the int8
It's still not quite clear what you're trying to do. Many people's gut
reaction is that you're doing something strange with so many columns in
a table.
Using your example, a different approach might be to do this instead:
Day   | Name | Value
------+------+-------
Oct 1 |
Joe Conway wrote:
Greg Spiegelberg wrote:
The reason for my initial question was this. We save changes only.
In other words, if system S has row T1 for day D1 and if on day D2
we have another row T1 (excluding our time column) we don't want
to save it.
It still isn't entirely clear to me what y
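One way to express that "save changes only" rule in SQL (a sketch only;
the metrics and staging tables and their column names are hypothetical,
not Greg's actual schema):

  -- Insert a staged row only if it differs from the most recent row
  -- already stored for the same system, ignoring the time column.
  INSERT INTO metrics (day, system_id, col1, col2 /* ... */)
  SELECT s.day, s.system_id, s.col1, s.col2 /* ... */
    FROM staging s
   WHERE NOT EXISTS (
           SELECT 1
             FROM metrics m
            WHERE m.system_id = s.system_id
              AND m.day = (SELECT max(day) FROM metrics
                            WHERE system_id = s.system_id)
              AND (m.col1, m.col2 /* ... */) IS NOT DISTINCT FROM
                  (s.col1, s.col2 /* ... */)
         );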
See below.
Shridhar Daithankar wrote:
Greg Spiegelberg wrote:
The data represents metrics at a point in time on a system for
network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
speed, and whatever else can be gathered.
We arrived at this one 642 column table after testing the whol
Comment interjected below.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Greg
> Spiegelberg
> Sent: Wednesday, October 08, 2003 12:28 PM
> To: PgSQL Performance ML
> Subject: Re: [PERFORM] Compare rows
>
>
Greg,
> The data represents metrics at a point in time on a system for
> network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
> speed, and whatever else can be gathered.
>
> We arrived at this one 642 column table after testing the whole
> process from data gathering, methods of tem
Greg Spiegelberg wrote:
The reason for my initial question was this. We save changes only.
In other words, if system S has row T1 for day D1 and if on day D2
we have another row T1 (excluding our time column) we don't want
to save it.
It still isn't entirely clear to me what you are trying to do,
Greg Spiegelberg wrote:
The data represents metrics at a point in time on a system for
network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
speed, and whatever else can be gathered.
We arrived at this one 642 column table after testing the whole
process from data gathering, methods
Josh Berkus wrote:
Greg,
Anyone have any suggestions on how to efficiently compare
rows in the same table? This table has 637 columns to be
compared and 642 total columns.
637 columns? Are you sure that's normalized? It's hard for me to conceive
of a circumstance where that many columns w
Greg,
> Anyone have any suggestions on how to efficiently compare
> rows in the same table? This table has 637 columns to be
> compared and 642 total columns.
637 columns? Are you sure that's normalized? It's hard for me to conceive
of a circumstance where that many columns would be necessa
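For the wide table itself, one way to find the rows that changed between
two days without writing 637 hand-typed comparisons (a sketch with
hypothetical table and column names; in practice the column list could
be generated from information_schema.columns):

  -- Rows for Oct 8 whose compared columns differ from Oct 7.
  SELECT system_id, col1, col2 /* ..., col637 */
    FROM metrics
   WHERE day = DATE '2003-10-08'
  EXCEPT
  SELECT system_id, col1, col2 /* ..., col637 */
    FROM metrics
   WHERE day = DATE '2003-10-07';

EXCEPT treats NULLs as equal when deciding whether two rows are
duplicates, which is usually what you want for this kind of diff.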
All,
Anyone have any suggestions on how to efficiently compare
rows in the same table? This table has 637 columns to be
compared and 642 total columns.
TIA,
Greg
--
Greg Spiegelberg
Sr. Product Development Engineer
Cranel, Incorporated.
Phone: 614.318.4314
Fax: 614.431.8388
Email: [EMAIL PROTECTED]