Hi Again
So far we have had the data acquisition system running for just one
ship, and we are writing the code to display the data. In less than 20
days we already have 6M rows.
I gave TimescaleDB a shot, installed locally as an extension, and it
looks much nicer than having to do all the partition management by hand
or with other tools. However, it is more than just an extension: it is
practically a complete engine with its own background workers. So it
looks like something new and big, something to commit to for the long
term and invest in, on top of the 25+ year commitment we already have
with PostgreSQL itself.
This is a serious decision, so please share your stories with
TimescaleDB.
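To make it concrete, what I tried locally is roughly the following
sketch; the table and column names are only illustrative, not our
actual schema:

    -- assuming the extension is already installed in the database
    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- a narrow measurements table, to be time-partitioned by the extension
    CREATE TABLE vessel_data (
        ts        timestamptz NOT NULL,
        vessel_id integer     NOT NULL,
        tag_name  text        NOT NULL,
        tag_value double precision
    );

    -- TimescaleDB takes over chunk/partition management from here
    SELECT create_hypertable('vessel_data', 'ts',
                             chunk_time_interval => interval '1 month');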
On 2/14/25 12:46, Achilleas Mantzios - cloud wrote:
On 2/14/25 11:34, Allan Kamau wrote:
On Thu, Feb 13, 2025 at 9:44 PM Thiemo Kellner
<thi...@gelassene-pferde.biz> wrote:
13.02.2025 10:54:05 Achilleas Mantzios - cloud
<a.mantz...@cloud.gatewaynet.com>:
> If we followed a strictly normalized approach then we would
additionally create 11 tables, one for each tag of type c). And we are
not guaranteed that the same tags will have the same structure
across the whole fleet / across manufacturers. So we are thinking of
putting all semi-structured data of tags of type c) into one table with
a single col of type jsonb. From what I read, timescaledb plays
nice with jsonb (or at least not badly).
>
> Do you people see any gotcha with this approach ?
>
> For starters we will not convert to timescaledb yet, but will store
them and handle them like normal tables, at least until we grasp
the ins and outs of this.
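A minimal sketch of that single-table, single-jsonb-column idea (the
table name, tag names and payload keys below are made up):

    CREATE TABLE tag_semistructured (
        ts        timestamptz NOT NULL,
        vessel_id integer     NOT NULL,
        tag_name  text        NOT NULL,
        payload   jsonb       NOT NULL
    );

    -- a GIN index helps containment/key lookups inside the jsonb payload
    CREATE INDEX ON tag_semistructured USING gin (payload);

    -- retrieval has to dig the fields back out of the payload, e.g.
    SELECT ts, payload->>'rpm' AS rpm
      FROM tag_semistructured
     WHERE tag_name = 'main_engine'
       AND ts >= now() - interval '1 day';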
I have not come to see the real advantage of putting data into
something like JSON or XML columns. Sure, you make life easier
initially, saving the time to put them into columns one by one; on
the other end you have the hassle of dissecting the JSON, XML, you
name it, when you retrieve/select the data, in every query. While the
query might stay stable, the computational effort of that dissection
will have to be spent on every query execution.
For c) you could go to 6th normal form, or whatever number that is,
with a table like TAG(id, tag_name, tag_value). You would have to
convert the values to strings to enter them. It is a pain in the
butt, imho, retrieving them again. You can alleviate this by having a
specific column/table per data type, e.g. TAG(id,
tag_name, tag_value_varchar, tag_value_date, ...).
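A rough sketch of that typed-column variant, with placeholder column
names, might be:

    CREATE TABLE tag (
        id                bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        tag_name          text NOT NULL,
        tag_value_varchar varchar,      -- filled when the value is textual
        tag_value_numeric numeric,      -- filled when the value is numeric
        tag_value_date    timestamptz   -- filled when the value is a timestamp
    );

    -- retrieval has to pick the right column per tag, e.g. collapse to text:
    SELECT tag_name,
           coalesce(tag_value_varchar,
                    tag_value_numeric::text,
                    tag_value_date::text) AS tag_value
      FROM tag;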
What is the downside of having a table for each of the special tags?
More effort in setting them up.
The upside is less effort in retrieval, and a much more
understandable model.
If your tag structure is volatile, you might use generic column
names on a table and map them in a view to speaking names. Anchor
modelling (https://www.anchormodeling.com/) takes this idea further.
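As an illustration of the generic-columns-plus-view idea (all names
below are invented):

    -- generic storage: the columns carry no business meaning by themselves
    CREATE TABLE tag_generic (
        id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        tag_name text NOT NULL,
        col1     numeric,
        col2     numeric,
        col3     text
    );

    -- one view per tag maps the generic columns to speaking names
    CREATE VIEW main_engine AS
    SELECT id, col1 AS rpm, col2 AS load_pct, col3 AS status
      FROM tag_generic
     WHERE tag_name = 'main_engine';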
Though you haven't asked for comments on database design, I have a
suggestion: have you considered using table partitioning based on a
well-defined date interval, maybe monthly?
Here your DB data population application would determine the name of
the table based on the current date, check for the existence of that
table, create the child (inherited) table if it does not exist, and
then populate the child table accordingly.
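Allan describes the application-managed, inheritance-based approach;
with native declarative partitioning (PostgreSQL 10+) the same monthly
scheme looks roughly like the sketch below, where only the "create the
next partition" step still has to be scheduled. Table and partition
names are illustrative only:

    CREATE TABLE measurements (
        ts        timestamptz NOT NULL,
        vessel_id integer     NOT NULL,
        tag_name  text        NOT NULL,
        payload   jsonb
    ) PARTITION BY RANGE (ts);

    -- the loader (or a cron job) creates the upcoming month's partition up front
    CREATE TABLE IF NOT EXISTS measurements_2025_03
        PARTITION OF measurements
        FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');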
Absolutely, this will become a necessity sooner or later. The way I see
it going, there will be a new TimescaleDB instance functioning at the
start as a logical replica of our maindb, and timescaledb will handle
all the partitioning (I hope). At some point we would switch the app
(both the data loading from the vessels and the queries) to the
TimescaleDB instance, and free the space in the maindb, or something
along those lines.
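The logical-replication leg of that plan would presumably be along
these lines; the publication/subscription names, table name and
connection string are placeholders, and whether the subscriber side can
feed TimescaleDB hypertables directly is exactly the kind of experience
I am asking about:

    -- on the main DB (publisher)
    CREATE PUBLICATION vessel_pub FOR TABLE vessel_data;

    -- on the new TimescaleDB instance (subscriber), after creating the
    -- same table structure there
    CREATE SUBSCRIPTION vessel_sub
        CONNECTION 'host=maindb dbname=fleet user=replicator'
        PUBLICATION vessel_pub;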
This may prove useful in the future.
-Allan.