That did the trick.
Thanks Carla!
Matt Tucker
-----Original Message-----
From: carla.stae...@nokia.com [mailto:carla.stae...@nokia.com]
Sent: Friday, March 30, 2012 11:31 AM
To: user@hive.apache.org
Subject: RE: Variable Substitution Depth Limit
Sorry, hit send too fast... As a workaround for
-----Original Message-----
From: carla.stae...@nokia.com
Sent: Friday, March 30, 2012 11:30
To: user@hive.apache.org
Subject: RE: Variable Substitution Depth Limit
Yeah, you're not the only one who's run into that issue. There is an open Jira
item for it, so they're aware we'd like it configurable anyway...
https://issues.ap
-----Original Message-----
Sent: Friday, March 30, 2012 11:19
To: user@hive.apache.org
Subject: Re: Variable Substitution Depth Limit
Unfortunately, you are going to have to roll your own Hive build for now.
The substitution mechanism is a concept we borrowed from Hadoop, which is
why it does not support more than 40 levels of substitution. We can
probably make the limit configurable via a hive-site property.
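As a rough sketch of what that could look like once such a setting exists (the property name hive.variable.substitute.depth is an assumption here; the release discussed in this thread has the limit hard-coded), the limit could be raised in hive-site.xml or per session:

    -- hypothetical property; name assumed, not present in the release discussed above
    SET hive.variable.substitute.depth=100;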
On Fri, Mar 30, 2012 at 10:59 AM, Tucker, Matt wrote:
I'm trying to modify a script to allow for more code reuse by building the
table names from a variable.
For example: CREATE TABLE etl_${hiveconf:table}_traffic AS ...
The problem I'm running into is that after building all of these etl_* tables,
I use a final query to join all of the tables, and that query fails with the
variable substitution depth limit error.
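For reference, a minimal sketch of the pattern described above (the variable value, script name, and source table below are made up for illustration):

    -- build_traffic.hql (hypothetical script name)
    -- run once per table, e.g.: hive -hiveconf table=clicks -f build_traffic.hql
    CREATE TABLE etl_${hiveconf:table}_traffic AS
    SELECT *
    FROM raw_${hiveconf:table};  -- illustrative source table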