The business rules we have here are currently embedded in Hive code. They
range from basic standardization using CASE blocks to complex multi-column
validation.
Thanks.
On Mon, Apr 16, 2018 at 5:03 PM Jörn Franke wrote:
The question is what do your rules do? Do you need to maintain a factbase or do
they just check data quality within certain tables?
Ok.
Rough ideas:
To keep the business logic outside code, I was thinking to give a custom UI.
Next, read the rules from the UI data and build UDFs that apply the rules
defined outside the UDF.
One UDF per data object.
Not sure; these are just thoughts.
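The idea above (rules maintained outside the code, one UDF per data object) can be sketched in plain Java. This is only an illustration under assumptions: the class and rule names are made up, and an in-memory map stands in for whatever store the custom UI would write to; a real Hive GenericUDF would wrap this logic inside initialize()/evaluate().

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch: rule definitions live outside the code (this map stands in
// for the UI's rule store), and one validator per data object applies them.
public class RuleBasedValidator {
    private final Map<String, Predicate<String>> rules;

    public RuleBasedValidator(Map<String, Predicate<String>> rules) {
        this.rules = rules;
    }

    // Returns the names of the rules the value violates (empty = valid).
    public List<String> violations(String columnValue) {
        List<String> failed = new ArrayList<>();
        for (Map.Entry<String, Predicate<String>> rule : rules.entrySet()) {
            if (!rule.getValue().test(columnValue)) {
                failed.add(rule.getKey());
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        // Hypothetical rules for one data object, e.g. "customer"; in the
        // proposed design these would be read from the UI, not hard-coded.
        Map<String, Predicate<String>> customerRules = new LinkedHashMap<>();
        customerRules.put("not_blank", v -> v != null && !v.trim().isEmpty());
        customerRules.put("max_len_10", v -> v != null && v.length() <= 10);

        RuleBasedValidator udf = new RuleBasedValidator(customerRules);
        System.out.println(udf.violations("OK"));                   // []
        System.out.println(udf.violations("way too long a value")); // [max_len_10]
    }
}
```

Keeping the predicates keyed by rule name means the UI only has to persist name/definition pairs, and the UDF body never changes when rules do.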
On Mon, Apr 16, 2018 at 1:40 PM Jörn Franke wrote:
Hi Pivonka, we are more inclined towards using open source products and
closer integration with Hive, since most of our ETL is in Hive.
Thanks.
On Mon, Apr 16, 2018 at 12:51 PM Al Pivonka wrote:
I'd suggest logging the stack trace of the call; the logs attached don't
really give much information about where the calls occur during query
compilation/execution.
Try logger.info("Inside testUdf Initialize***", new
Exception("initialize"));
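The trick above works because most logging APIs print the stack trace of a Throwable passed alongside the message, and a freshly constructed (never-thrown) Exception carries the current call stack. A self-contained sketch with java.util.logging, whose equivalent call is log(Level, msg, thrown); the class and method names are placeholders:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// A fresh Exception is never thrown here; it only captures the current
// call stack so the log record shows where initialize() was invoked from.
public class UdfTraceDemo {
    private static final Logger LOG = Logger.getLogger(UdfTraceDemo.class.getName());

    static void initialize() {
        // java.util.logging's equivalent of logger.info(msg, throwable):
        LOG.log(Level.INFO, "Inside testUdf initialize", new Exception("initialize"));
    }

    public static void main(String[] args) {
        initialize(); // the emitted log record includes the full call stack
    }
}
```

The same pattern applied inside a UDF's initialize() would reveal whether the calls happen at compile time, during split generation, or per-task at execution time.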
I would not use Drools with Spark; it does not scale to the distributed setting.
You could translate the rules to Hive queries, but this would not be exactly the
same thing.
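"Translate the rules to Hive queries" could be as simple as generating a SELECT that flags violating rows for each rule. A minimal sketch; the table and column names, and the range-check rule itself, are invented for illustration:

```java
// Sketch: a condition that might live in a Drools rule becomes a Hive
// query string that returns the rows violating it.
public class RuleToHiveQuery {
    // Hypothetical rule: column must be non-null and within [min, max].
    static String rangeCheckQuery(String table, String column, long min, long max) {
        return "SELECT * FROM " + table +
               " WHERE " + column + " IS NULL" +
               " OR " + column + " < " + min +
               " OR " + column + " > " + max;
    }

    public static void main(String[] args) {
        // Prints the validation query for a made-up orders table.
        System.out.println(rangeCheckQuery("orders", "quantity", 1, 1000));
    }
}
```

As the reply notes, this is not exactly the same thing as a rules engine: there is no fact base or rule chaining, only an independent query per rule.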
I am not the product owner and have not implemented it yet.
I would check out
http://cask.co/products/rules-engine/
On Mon, Apr 16, 2018 at 11:59 AM, Joel D wrote:
Hi,
Any suggestions on how to implement Business Rules Engine with Hive ETLs?
For spark based Etl jobs, I was exploring Drools but not sure about Hive.
Thanks.
Do you use Tez session pool along with LLAP (as Thai suggests in the
previous reply)? If a new query finds an idle AM in Tez session pool, there
will be no launch cost for AM. If no idle AM is found or if you specify a
queue name, a new AM should start in order to serve the query. This is
explained
The best approach would be to use daemonized containers, such as Hive LLAP
+ Tez session pool or Hive on Spark.
I’m not that familiar with Hive on Spark so I can’t comment on it, but Hive
on LLAP has worked really well for me when coupled with a Tez session pool.
You’ll have to specify how many Tez A
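The pool sizing mentioned above (how many Tez AMs HiveServer2 keeps warm) is usually configured in hive-site.xml. A hedged sketch: the property names below are standard HiveServer2 Tez session-pool settings, but the values are purely illustrative and depend on cluster capacity and concurrency needs.

```xml
<!-- Sketch: HiveServer2 Tez session pool in hive-site.xml; values illustrative. -->
<property>
  <name>hive.server2.tez.initialize.default.sessions</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.tez.default.queues</name>
  <value>default</value>
</property>
<property>
  <name>hive.server2.tez.sessions.per.default.queue</name>
  <value>2</value>
</property>
```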
Hi All,
We have a use case where we need to return output in < 10 sec. We have
evaluated a set of different tools for execution, and they work fine, but they
do not cover all cases and they are not reliable (since they are still in an
evolving phase). Hive, however, works well in this context.
Using Hive LLAP,
Hello,
I am new to Hive JDBC, and I tried to copy code from the HiveServer2 JDBC
client example and NiFi.
Now I am able to create a Hive connection with the proxy user indicated in the URL.
However, that means I have to create a connection pool for every user if they
have multiple sessions.
I want to know tha
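The per-user connection pattern described above follows from how the proxy user is specified: the hive.server2.proxy.user session setting is part of the JDBC URL, so it is fixed per connection. A sketch under assumptions; the host, port, database, and credentials are placeholders, and actually opening the connection requires the hive-jdbc driver on the classpath:

```java
// Sketch: building a HiveServer2 JDBC URL that proxies as a given user.
// Because the proxy user is baked into the URL, each user needs its own
// connection (or pool), which is the situation described above.
public class ProxyUrlBuilder {
    static String proxyUrl(String host, int port, String db, String proxyUser) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db +
               ";hive.server2.proxy.user=" + proxyUser;
    }

    public static void main(String[] args) {
        String url = proxyUrl("hs2.example.com", 10000, "default", "alice");
        System.out.println(url);
        // With hive-jdbc on the classpath you would then open it roughly as:
        // try (java.sql.Connection c =
        //         java.sql.DriverManager.getConnection(url, "hive", "")) { ... }
    }
}
```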