I thought that might be an answer around here :)

I guess I was looking for what might be a better approach. 

Is dynamically creating a table for each entity a bad idea? I can see something 
like creating a schema for each project (a group of related entities) and then 
creating a table for each entity within that schema. I don't expect to have more 
than a few thousand projects anytime soon. We have a relatively targeted audience.
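
For illustration, a schema-per-project layout might look something like this 
(all schema, table, and column names here are hypothetical, just to sketch the 
idea):

```sql
-- One schema per project keeps each project's entity tables namespaced.
CREATE SCHEMA project_42;

-- A partially user-driven entity: common photo fields up front, with
-- custom columns added later as the user edits the entity mapping.
CREATE TABLE project_42.photo (
    id        bigserial PRIMARY KEY,
    taken_at  timestamptz NOT NULL,
    file_path text NOT NULL
);

-- Applied at runtime when the user adds a custom field to the entity.
ALTER TABLE project_42.photo ADD COLUMN camera_model text;
```

Postgres can handle thousands of schemas and tables, but I understand the 
catalog grows with every table, which can slow things like backups, autovacuum 
scans, and per-connection relation caches, so that's where I'd expect the 
pain to show up first.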

Or would it be better to use jsonb data types and create a denormalized index 
elsewhere?
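
A minimal sketch of what I mean by the jsonb approach (again, table and field 
names are made up):

```sql
-- One generic table; each row's structure lives in the jsonb column,
-- so user-defined fields need no DDL changes.
CREATE TABLE entity_record (
    id        bigserial PRIMARY KEY,
    entity_id integer NOT NULL,  -- which user-defined entity type this row is
    parent_id bigint,            -- optional Parent-Child link to another record
    data      jsonb NOT NULL
);

-- A GIN index supports containment queries across arbitrary user fields.
CREATE INDEX entity_record_data_idx ON entity_record USING gin (data);

-- Find records whose user-defined "camera_model" field matches a value.
SELECT id, data
FROM entity_record
WHERE entity_id = 7
  AND data @> '{"camera_model": "X100"}';
```

The trade-off I see is that per-field types and constraints would have to be 
enforced in application code (or CHECK constraints) rather than by the schema 
itself.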



> On Apr 11, 2017, at 5:17 PM, Dorian Hoxha <dorian.ho...@gmail.com> wrote:
> 
> If you are asking if you should go nosql, 99% you should not.
> 
>> On Tue, Apr 11, 2017 at 10:06 PM, Poul Kristensen <bcc5...@gmail.com> wrote:
>> dataverse.org uses PostgreSQL and is well documented + it is completely user 
>> driven. Maybe the concept could be useful for you. I have installed and 
>> configured a few to be used by researchers.
>> 
>> regards 
>> Poul   
>> 
>> 
>> 2017-04-11 19:46 GMT+02:00 Rj Ewing <ewing...@gmail.com>:
>>> I'm looking for thoughts on the best way to handle dynamic schemas.
>>> 
>>> The application I am developing revolves around user defined entities. Each 
>>> entity is a tabular dataset with user defined columns and data types. 
>>> Entities can also be related to each other through Parent-Child 
>>> relationships. Some entities will be 100% user driven, while others (such 
>>> as an entity representing a photo) will be partially user driven (all photo 
>>> entities will have common fields + custom user additions).
>>> 
>>> I was hoping to get opinions on whether postgresql would be a suitable 
>>> backend. A couple of options I have thought of are:
>>> 
>>> 1. Each entity is represented as a table in psql. The schema would be 
>>> dynamically updated (with limits) when an entity mapping is updated. I 
>>> believe that this would provide the best data constraints and allow the 
>>> best data normalization. A concern I have is that there could be an 
>>> enormous amount of tables generated and the performance impacts this might 
>>> have in the future. I could then run elasticsearch as a denormalized cache 
>>> for efficient querying and full-text-search.
>>> 
>>> 2. Use a nosql database. This provides the "dynamic" schema aspect. A 
>>> concern here is the lack of relation support, thus leading to a more 
>>> denormalized data structure and the potential for the data to become 
>>> corrupted.
>>> 
>>> Any opinions on the use of psql for this case, or other options would be 
>>> greatly appreciated!
>>> 
>>> RJ
>> 
>> 
>> 
>> -- 
>> Med venlig hilsen / Best regards
>> Poul Kristensen
>> Linux-OS/Virtualization expert and Oracle DBA
> 
