The table format is something like:
user_id  visiting_time  visiting_web_page
user1    time11         page_string_11
user1    time12         page_string_12 with keyword 'abc'
user1    time13         page_string_13
user1    time14         page_strin
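For a table shaped like this, a query that pulls the visits whose page string contains the keyword might look like the following. This is only a sketch: the table name `web_visits` is a placeholder, and the column names are taken from the header above, not from any DDL shown in the thread.

```sql
-- Hypothetical table name; columns follow the layout sketched above.
SELECT user_id, visiting_time, visiting_web_page
FROM web_visits
WHERE visiting_web_page LIKE '%abc%';
```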
Hi Peter,
While it looks like the map-reduce task may have succeeded, it appears the
ALTER INDEX itself actually failed. You should look into the execution log to
see what the exception is. Without knowing why the DDL task failed, it's hard
to pinpoint the problem.
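For anyone following along, the index rebuild under discussion would have been issued with a statement along these lines (a sketch only; the index and table names are placeholders, not the ones from Peter's setup). As the thread suggests, the rebuild runs map-reduce stages first and then a DDL task, so the M/R stage can succeed while the DDL stage still fails.

```sql
-- Placeholder names: the real index/table names are not shown in the thread.
ALTER INDEX my_index ON my_table REBUILD;
```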
As for the original problem with the jar a
Hit send too soon...
I'm glad the ADD JAR hack appeared to work. You might verify whether the
temporary files mentioned are still there, and also verify that you have
write permissions for the target index directories. Other than that, I'm
not sure what to suggest. I haven't really used indexing much, b
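For reference, the ADD JAR workaround mentioned above is the Hive session command that puts a jar on the job classpath for the current session; LIST JARS shows what has been added. The path below is illustrative only; the actual jar involved is not spelled out at this point in the thread.

```sql
-- Illustrative path; substitute the jar the job actually needs.
ADD JAR /path/to/some-dependency.jar;
LIST JARS;
```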
Wow. Lots of quirks. I'm glad the ADD JAR
On Fri, Nov 2, 2012 at 6:59 PM, Peter Marron <
peter.mar...@trilliumsoftware.com> wrote:
> Hi Dean,
>
>
>
> At this stage I’m really not worried about this being a hack.
>
> I just want to get it to work, and I’m grateful for all your help.
Hi Dean,
At this stage I'm really not worried about this being a hack.
I just want to get it to work, and I'm grateful for all your help.
I did as you suggested and now, as far as I can see, the Map/Reduce
has succeeded. When I look in the log for the last reduce I no longer
find an error. However
Oh, I saw this line in your Hive output and just assumed you were running
in a cluster:
Hadoop job information for Stage-1: number of mappers: 511; number of
reducers: 138
I haven't tried running a job that big in pseudo-distributed mode either,
but that's beside the point.
So it seems to be
Hi Dean,
I'm running everything on a single physical machine in pseudo-distributed mode.
Well, it certainly looks like the reducer is looking for a derby.jar, although
I must confess I don't really understand why it would be doing that.
In an effort to fix that I copied the derby.jar (derby-10.4.