I'm looking for thoughts on the best way to handle dynamic schemas.
The application I am developing revolves around user defined entities. Each
entity is a tabular dataset with user defined columns and data types.
Entities can also be related to each other through Parent-Child
relationships. Some
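For context, a minimal sketch of one way such user-defined entities are often modelled in Postgres; the table and column names here are hypothetical, not taken from the application:

-- Metadata tables describing user-defined entities and their columns (illustrative only).
CREATE TABLE entity_defs (
    id        serial PRIMARY KEY,
    name      text NOT NULL,
    parent_id integer REFERENCES entity_defs(id)  -- parent-child relationship between entities
);

CREATE TABLE attribute_defs (
    id        serial PRIMARY KEY,
    entity_id integer NOT NULL REFERENCES entity_defs(id),
    name      text NOT NULL,
    data_type text NOT NULL,                      -- e.g. 'text', 'integer', 'date'
    UNIQUE (entity_id, name)
);

The row data itself then has to live somewhere flexible, which is what the rest of the thread (EAV vs. jsonb) is about.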
Poul,
I took a quick look at the demo site, but didn't see anything where the user
was defining the fields. It looks like they can choose from a list of
predetermined metadata fields. Looking at the code, but without actually seeing the
full db schema, it seems they might be using the EAV pattern
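For reference, the EAV (entity-attribute-value) pattern being described typically stores one row per (record, attribute) pair; a rough sketch with hypothetical names, not the demo site's actual schema:

CREATE TABLE records (
    id        serial PRIMARY KEY,
    entity_id integer NOT NULL              -- which user-defined entity this record belongs to
);

CREATE TABLE record_values (
    record_id integer NOT NULL REFERENCES records(id),
    attribute text    NOT NULL,
    value     text    NOT NULL,             -- everything stored as text; cast on read
    PRIMARY KEY (record_id, attribute)
);

Reassembling a full record means one join or aggregate per attribute, which is where the query-performance concerns in this thread come from.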
>> configuration a few to be used for researchers.
>>
>> regards
>> Poul
>>
>>
>> 2017-04-11 19:46 GMT+02:00 Rj Ewing :
>>> I'm looking for thoughts on the best way to handle dynamic schemas.
>>>
>>> The application I am developi
eav likely provide better query performance?
On Wed, Apr 12, 2017 at 7:43 AM, Merlin Moncure wrote:
> On Tue, Apr 11, 2017 at 12:46 PM, Rj Ewing wrote:
> > I'm looking for thoughts on the best way to handle dynamic schemas.
> >
> > The application I am developing r
are <25% populated.
On Fri, Apr 14, 2017 at 11:23 AM, Vincent Elschot wrote:
>
> On 14/04/2017 at 19:03, Rj Ewing wrote:
>
> We do know where we want to end up. We've had the application running for
> a while using a triple store db. We're looking to move away from
I am evaluating postgres as a datastore for our webapp. We are moving
away from a triple store db due to performance issues.
Our data model consists of sets of user-defined attributes. Approximately 10% of
the attributes tend to be 100% filled, while about 50% of the attributes are
only around 25% filled. This
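Since most attributes are sparsely populated, one commonly suggested alternative to wide, mostly-NULL columns is a jsonb document per row with a GIN index; a rough sketch, with hypothetical names:

CREATE TABLE samples (
    id    serial PRIMARY KEY,
    attrs jsonb NOT NULL DEFAULT '{}'       -- only the attributes actually present on this row
);

-- The GIN index supports containment queries on arbitrary user-defined keys.
CREATE INDEX samples_attrs_gin ON samples USING gin (attrs);

-- e.g. find samples where a user-defined attribute has a given value:
SELECT id FROM samples WHERE attrs @> '{"collector": "Smith"}';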
A step in the right direction for me; however, it doesn't appear to support
per-field full text searching.
It is exciting though!
On Tue, Apr 18, 2017 at 3:00 PM, Bruce Momjian wrote:
> On Tue, Apr 18, 2017 at 02:38:15PM -0700, Rj Ewing wrote:
> > I am evaluating postgres as a
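Per-field full text search over a jsonb column can still be approximated with an expression index per field that needs it; a sketch, assuming a samples table with a jsonb attrs column and a hypothetical 'locality' field:

-- One expression index per field that needs its own full text search.
CREATE INDEX samples_attrs_locality_fts
    ON samples USING gin (to_tsvector('english', attrs->>'locality'));

SELECT id
FROM samples
WHERE to_tsvector('english', attrs->>'locality') @@ to_tsquery('english', 'river');

This only scales to a handful of searchable fields, since each one needs its own index.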
eries on a table with 44 million rows?
RJ
On Tue, Apr 18, 2017 at 10:35 PM, George Neuner
wrote:
> On Tue, 18 Apr 2017 14:38:15 -0700, Rj Ewing
> wrote:
>
> >I am evaluating postgres as a datastore for our webapp. We are moving
> >away from a triple store db due to perf
amples_lg_txt table?
something like:
SELECT COUNT(*) FROM samples WHERE id IN (
    SELECT DISTINCT s.id
    FROM samples_lg_txt s
    JOIN keys k ON s.key = k.id
    WHERE (k.name = 'key1' AND s.tsv @@ to_tsquery('value1'))
      AND (k.name = 'key2' AND s.tsv @@ to_tsquery('value2'))
);
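As written, the two per-key predicates are ANDed inside the same WHERE clause, but a single samples_lg_txt row has only one key, so no row can satisfy both and the count comes back zero. If the intent is "both keys match for the same sample", the usual rewrite is to aggregate per sample id; a sketch against the same (assumed) layout:

SELECT COUNT(*) FROM (
    SELECT s.id
    FROM samples_lg_txt s
    JOIN keys k ON s.key = k.id
    WHERE (k.name = 'key1' AND s.tsv @@ to_tsquery('value1'))
       OR (k.name = 'key2' AND s.tsv @@ to_tsquery('value2'))
    GROUP BY s.id
    HAVING COUNT(DISTINCT k.name) = 2   -- both keys matched for this sample
) matched;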
On Wed, Apr 19, 2017 at 6:44 PM, George Neuner wrote:
>
> On Wed, 19 Apr 2017 11:57:26 -0700, Rj Ewing
> wrote:
>
> >I did some testing using a secondary table with key and value columns.
> >However, I don't think this will provide the performance that we need.
>
On Wed, Apr 19, 2017 at 8:09 PM, Jeff Janes wrote:
>
> Your best bet might be to ignore the per-field searching in the initial
> (indexed) pass of the query to get everything that has all the search
> terms, regardless of which field they occur in. And then re-check whether
> each of the found va
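A sketch of that two-pass idea, assuming the per-field values are also folded into one combined tsvector per sample (the all_tsv column below is hypothetical): filter on the combined, indexed tsvector first, then re-check the per-field conditions on the survivors.

-- Pass 1: indexed search on a combined tsvector covering all fields.
-- Pass 2: re-check that each term occurs in the required field.
SELECT sm.id
FROM samples sm
WHERE sm.all_tsv @@ to_tsquery('value1 & value2')          -- uses the GIN index on all_tsv
  AND EXISTS (SELECT 1 FROM samples_lg_txt s JOIN keys k ON s.key = k.id
              WHERE s.id = sm.id AND k.name = 'key1'
                AND s.tsv @@ to_tsquery('value1'))
  AND EXISTS (SELECT 1 FROM samples_lg_txt s JOIN keys k ON s.key = k.id
              WHERE s.id = sm.id AND k.name = 'key2'
                AND s.tsv @@ to_tsquery('value2'));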
On Wed, Apr 19, 2017 at 9:55 PM, George Neuner wrote:
> On Wed, 19 Apr 2017 16:28:13 -0700, Rj Ewing
> wrote:
>
> >Okay, messing around a bit more with the secondary k/v table, it seems like
> >this could be a good solution.
> >
> >I created a keys table
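For completeness, a guess at the secondary key/value layout being described; this DDL is hypothetical, not taken from the thread:

CREATE TABLE keys (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE
);

CREATE TABLE samples_lg_txt (
    id    integer NOT NULL,                    -- sample id
    key   integer NOT NULL REFERENCES keys(id),
    value text,
    tsv   tsvector,                            -- to_tsvector of value, maintained by trigger
    PRIMARY KEY (id, key)
);

CREATE INDEX samples_lg_txt_tsv_gin ON samples_lg_txt USING gin (tsv);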