Any help will be highly appreciated, thanks in advance!
Poul Jensen
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Thank you very much for your response! It leads to another couple of
questions:
I'm building a database containing key parameters for ~500,000 data
files. The design I found logical is
Two tables for each file:
1) Larger table with detailed key parameters
(10-15 columns, ~1000 rows), call it ...
you want to create 1 million tables, all with one of
2 schemas?
I started out with a schema for each file, thinking I could utilize
the schema
structure in queries, but I don't see how. Schemas are useful for grouping
tables according to users/owners. Other than that, do they add anything?
Look into inheritance. It makes this easier. However, I don't care
which RDBMS you use, management of 1000 identical tables is going to
be a real pain and I think that everyone here will probably suggest
that it is not exactly a sane thing to do.
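The inheritance approach Chris mentions could be sketched roughly like this in PostgreSQL DDL (table and column names are made up for illustration, borrowing from the meteorological example below):

```sql
-- One parent table defines the common schema; rows live in per-file children.
CREATE TABLE beamdata (
    stat_id integer,
    yr      integer,
    d_o_y   integer,
    hr      integer,
    mn      integer,
    tmp     real,
    wind    real
);

-- One child per data file. A SELECT on the parent automatically
-- scans all children as well.
CREATE TABLE beamdata_file0001 () INHERITS (beamdata);
CREATE TABLE beamdata_file0002 () INHERITS (beamdata);
```

Queries against `beamdata` then see all files at once, while each file's rows stay in their own table.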
Thank you, Chris. I have omitted two important details:
I have ~500,000 data files each containing ~1,000 records that I want to
put into a database for easy access.
Fictive example for illustration: File w. meteorological data from a
given station.
stat_id | yr | d_o_y | hr | mn | tmp | wind
--------|----|-------|----|----|-----|-----
     78 | ...
...knowledge about SQL, but I'm hoping somebody can see what I'm trying to
suggest.
As someone else mentioned, you could do it with a union all view.
http://cvs.distributed.net/viewcvs.cgi/stats-sql/logdb/ has an example
of this.
Thank you - it does look as if some union all views could be useful.
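The union all view idea from the linked example could be sketched like this (the per-station table names are hypothetical):

```sql
-- Hypothetical per-station tables combined into one queryable relation.
-- Queries against all_stations see every station's rows.
CREATE VIEW all_stations AS
          SELECT * FROM station_78
UNION ALL SELECT * FROM station_79
UNION ALL SELECT * FROM station_80;
```

With constraints and a planner that can exclude branches, queries filtered on `stat_id` need not scan every underlying table.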
using run-length encoding. Imagine you could just throw all your
data into one table, run OPTIMIZE TABLE and you'd be done. With SQL
being all about tables I'm surprised this idea (or something even
better) hasn't been implemented already.
Poul Jensen
Martijn van Oosterhout wrote:
for (i=0; i < ...
Apologies. I already read this in the docs, but also forgot it again.
:-| There is a little more to the solution since I need another array to
save the retrieved data after each query. So in the hope to help others,
here's how I did it:
int *all_va
It has millions of rows, by the way...
Thanks for any advice.
Poul Jensen
(segmentation fault) in function scmp regardless of what I try. Not an SQL
error, I know, but if anybody can tell me why, I'd be grateful.
Any suggestions?
Thanks,
Poul Jensen
Martijn van Oosterhout wrote:
Please don't use "reply" to start a new thread, thanks.
On Fri, Sep 08, 2006 at 05:55:44AM -0800, Poul Jensen wrote:
I need to fetch strings from a database with ECPG and then sort them in
C. Here is one of my failed attempts:
I actually have two questions.
1) It seems like the fastest way to find the # of distinct elements in a
column is using GROUP BY. With ECPG, if I try
EXEC SQL SELECT filenm FROM beamdata GROUP BY filenm;
I will get "sql error Too few arguments". Why? Can I correct the
query to avoid the error?
Joachim Wieland wrote:
On Fri, Sep 15, 2006 at 02:40:49AM -0800, Poul Jensen wrote:
1) It seems like the fastest way to find the # of distinct elements in a
column is using GROUP BY. With ECPG, if I try
EXEC SQL SELECT filenm FROM beamdata GROUP BY filenm;
I will get "sql error Too few arguments".
...it's hard to
accept. Is there no way to get away with a loop?
Poul Jensen