On Apr 11, 2013, at 01:27, "Davide D'Alto" wrote:
> > I think I would have put this contract on the dialect rather than the
> > provider. What was your reasoning?
>
> The dialect seems involved in the conversion between the element used by the
> data store (node, relationship, document, etc.
I am currently working on a solution for dynamically adding new shards
to Hibernate Search (for example one per tenant, with the list growing over time).
https://hibernate.atlassian.net/browse/HSEARCH-472
Things are going well but there is an interesting problem related to a
subsequent feature
https://hibern
IMHO passing the shard identifier in the Properties entries is a weak
solution in the long term.
I would prefer breaking the SPI, but I have no rational arguments to back
that gut feeling.
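For the sake of discussion, a rough and purely hypothetical sketch of what a dedicated SPI could look like instead of passing the shard identifier through Properties; none of these names are the actual Hibernate Search contract:

import java.io.Serializable;
import java.util.Properties;
import java.util.Set;

// Hypothetical SPI sketch, not the real Hibernate Search API.
public interface DynamicShardIdentifierProvider {

    // Called once at bootstrap with the statically known configuration.
    void initialize(Properties properties);

    // All shard identifiers currently known; the set may grow at runtime,
    // e.g. one new shard per tenant as tenants are added.
    Set<String> getAllShardIdentifiers();

    // Resolve the shard for an entity being indexed, e.g. from its tenant id.
    String getShardIdentifier(Class<?> entityType, Serializable id);
}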
Niko
2013/4/11 Emmanuel Bernard :
> I am currently working on a solution for dynamically adding new shards
> to Hib
Sorry all but I won't be able to make the meeting. I'm in the middle of internet
connection provider setup. Of course they showed up right at the time of the
meeting :(
Anyway I just have a cell connection atm.
Man, your simple question is actually super complex.
Conclusion first: I think it's important we can always identify any
index just with a simple String, but you're very welcome to add some
kind of registry mapping indexName -> StuffWeKnowAboutIt.
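A trivial sketch of such a registry, with IndexMetadata standing in for whatever "StuffWeKnowAboutIt" ends up being (both names are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a registry keyed by the plain index name.
public class IndexRegistry {

    // Placeholder for "StuffWeKnowAboutIt".
    public static class IndexMetadata {
    }

    private final Map<String, IndexMetadata> byName = new ConcurrentHashMap<>();

    public void register(String indexName, IndexMetadata metadata) {
        byName.put(indexName, metadata);
    }

    public IndexMetadata lookup(String indexName) {
        return byName.get(indexName); // null if the index name is unknown
    }
}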
This has been biting us in several forms. Let's recap the dif
Although this change fixes query lookup,
it causes a horrible performance hit:
Running CapeDwarf cluster QueryTest:
with HSEARCH-1296
21:00:27,188 INFO [org.hibernate.search.indexes.impl.DirectoryBasedIndexManager] (http-/192.168.1.102:8080-1) HSEARCH000168: Serialization service Avro SerializationP
Are you sure that the async version actually had applied all writes to the
index in the measured interval?
On Apr 11, 2013 8:13 PM, "Ales Justin" wrote:
> Although this change fixes query lookup,
> it causes a horrible performance hit:
>
> Running CapeDwarf cluster QueryTest:
>
> with HSEARCH-1296
>
> 2
You could try the new sync version but set the blackhole backend on the
master node to remove the indexing overhead from the picture.
On Apr 11, 2013 8:39 PM, "Sanne Grinovero" wrote:
> Are you sure that the async version actually had applied all writes to the
> index in the measured interva
No, not in those 800 ms, hence the failing test.
But if I add a 2 sec sleep between the delete and the query,
the test passes.
Which is still 25x better. :)
On Apr 11, 2013, at 21:39, Sanne Grinovero wrote:
> Are you sure that the async version actually had applied all writes to the
> index in the mea
What do you mean?
On Apr 11, 2013, at 21:41, Sanne Grinovero wrote:
> You could try the new sync version but set the blackhole backend on the
> master node to remove the indexing overhead from the picture.
> On Apr 11, 2013 8:39 PM, "Sanne Grinovero" wrote:
>> Are you sure that the async
I just made a change to help catch runtime problems that kept cropping
up. The change was to org.hibernate.mapping.Value#getColumnIterator.
The problem is that code in many modules (hem, envers) that actually
deals with mapping code was making a bad assumption here. The returned
iterator actually yields Selectable instances, which can be Formulas as
well as Columns, so blindly casting every element to Column is wrong.
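To illustrate the assumption that bites (not the actual module code), assuming the iterator is typed as Iterator<Selectable> after the change:

import java.util.Iterator;
import org.hibernate.mapping.Column;
import org.hibernate.mapping.Formula;
import org.hibernate.mapping.Selectable;
import org.hibernate.mapping.Value;

// Illustrative only: the consuming modules were doing the equivalent of the commented cast.
public class ColumnIterationExample {

    void process(Value value) {
        Iterator<Selectable> it = value.getColumnIterator();
        while (it.hasNext()) {
            Selectable selectable = it.next();
            // Bad assumption: Column column = (Column) selectable;  // CCE when it is a Formula
            if (selectable instanceof Column) {
                Column column = (Column) selectable;
                // handle a real column
            } else if (selectable instanceof Formula) {
                // handle (or explicitly reject) the formula
            }
        }
    }
}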
One option just came to mind. Looking at the code, I have to assume
envers really just does not support formulas. Otherwise, all these
compile errors would eventually have resulted in runtime (CCE)
exceptions.
If that is really the case, I could add detection of that and throw a
more meaningful exception.
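A minimal sketch of what such detection could look like; the exception type and message are illustrative, not a committed design:

import java.util.Iterator;
import org.hibernate.MappingException;
import org.hibernate.mapping.Formula;
import org.hibernate.mapping.Selectable;
import org.hibernate.mapping.Value;

// Illustrative sketch: fail fast with a clear message instead of a later ClassCastException.
public final class FormulaDetection {

    static void assertNoFormulas(Value value, String propertyName) {
        Iterator<Selectable> it = value.getColumnIterator();
        while (it.hasNext()) {
            if (it.next() instanceof Formula) {
                throw new MappingException(
                        "Property '" + propertyName + "' is mapped to a formula, which is not supported here");
            }
        }
    }

    private FormulaDetection() {
    }
}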
There is a "blackhole" indexing backend, which pipes all indexing
requests > /dev/null
Set this as an Infinispan Query configuration property:
default.worker.backend = blackhole
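For reference, a sketch of how that property might be passed when configuring an indexed cache programmatically; this assumes Infinispan's ConfigurationBuilder API of that era, and the exact builder methods may differ between versions:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Sketch only: route all indexing work to the "blackhole" backend, i.e. discard it.
public class BlackholeBackendConfig {

    static Configuration indexedCacheWithBlackholeBackend() {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.indexing()
               .enable()
               .addProperty("default.worker.backend", "blackhole");
        return builder.build();
    }
}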
Of course that means that the index will not be updated: you might
need to adapt your test to tolerate that, but t
Is this provider Google Fiber? :)
On Apr 11, 2013, at 11:04 PM, Steve Ebersole wrote:
> Sorry all but I won't be able to make the meeting. I'm in the middle of internet
> connection provider setup. Of course they showed up right at the time of the
> meeting :(
>
> Anyway I just have a cell connection atm.