My Tez query seems to error out.
I have a map join in which the smaller tables together are 200 MB, and I'm
trying to have one block of the main table processed by one Tez task.
I'm using the following formula to calculate the Tez container size:
small table size + each block size + memory for sort + some overhead.
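As a sketch, that formula might translate into settings like the ones below. The numbers are assumptions for illustration only (200 MB of small tables, a 256 MB block, a 256 MB sort buffer, plus headroom); tune them to your actual data:

```sql
-- Hedged sketch: sizing the Tez container for a map join.
SET hive.auto.convert.join.noconditionaltask.size=209715200;  -- ~200 MB small-table total
SET tez.runtime.io.sort.mb=256;        -- sort memory inside the container
SET hive.tez.container.size=1024;      -- MB: 200 + 256 + 256 sort + headroom
SET hive.tez.java.opts=-Xmx820m;       -- heap ~80% of the container size
```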
Hey Viral,
Is there a similar config for Tez?
Thanks
On Mon, Mar 9, 2015 at 6:36 PM, Viral Bajaria wrote:
> We use the hive.job.name property to set meaningful job names. Look into
> using that before submitting queries.
>
> Thanks,
> Viral
>
>
> On Mon, Mar 9, 2015 at 2:47 PM, Alex Bohr wrote:
Hello Everyone,
I was able to look up a Hive query using hive.query.name from the job
history server, but I wasn't able to find a similar parameter for Tez.
Is there a way to find out all the queries that ran in a Tez
session?
Thanks
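I'm not aware of an exact equivalent, but as a starting point (a sketch; output columns vary by YARN version) you can at least enumerate the Tez applications a session launched, since Hive-on-Tez application names typically embed the session id:

```shell
# List all Tez applications known to the ResourceManager; Hive-on-Tez
# app names usually look like "HIVE-<session id>".
yarn application -list -appTypes TEZ -appStates ALL
```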
Hello everyone,
I'm trying to monitor the performance of a Tez query that runs every hour.
It was easy to pull the query info when using MapReduce; however, I'm
finding it difficult to pull this info from the Tez logs.
Essentially I need the application ID, the query, and the time it took.
I'd really appreciate any ideas that you might have.
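One option is to pull Tez DAG records from the YARN Timeline Server and summarize them. The sketch below shows the parsing side only; the JSON shape (field names like `dagName` and `otherinfo`) is an assumption for one ATS version, so verify it against an actual response from your cluster:

```python
import json

# Hedged sketch: summarize one Tez DAG entity (application id, DAG/query
# name, elapsed time). Field names are assumptions; check your ATS output.
def summarize_dag(entity):
    other = entity.get("otherinfo", {})
    return {
        "applicationId": other.get("applicationId"),
        "dagName": entity.get("dagName"),  # Hive usually puts a query id/name here
        "elapsed_ms": other.get("endTime", 0) - other.get("startTime", 0),
    }

# Hypothetical sample of what the Timeline Server might return:
sample = json.loads("""
{
  "entity": "dag_1419899870643_0001_1",
  "dagName": "hive_20150309_example_query",
  "otherinfo": {"applicationId": "application_1419899870643_0001",
                "startTime": 1425950000000,
                "endTime": 1425950042000}
}
""")

summary = summarize_dag(sample)
print(summary["applicationId"], summary["elapsed_ms"])
```

Running this hourly against the real endpoint and logging the three fields would give the per-run history being asked about.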
limit 10;
> Query ID = navis_20141230100808_09c0a077-442e-4943-a136-710cba6e94d1
> Total jobs = 1
> Launching Job 1 out of 1
>
> Status: Running (Executing on YARN cluster with App id
> application_1419899870643_0001)
>
>
> You mean in the case of a JDBC client?
>
Hello everyone,
Is there any way to figure out the query associated with an application ID
when using Tez as the execution engine?
Thanks
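If the YARN Timeline Server is enabled, one sketch (hostname and port 8188 are placeholders for your cluster, and the filter name is an assumption about how Tez indexes its DAG entities) is to query it for the DAGs tied to that application:

```shell
# Hedged sketch: ask the Timeline Server for Tez DAGs belonging to
# one application id; the DAG name typically identifies the Hive query.
curl "http://timelineserver:8188/ws/v1/timeline/TEZ_DAG_ID?primaryFilter=applicationId:application_1419899870643_0001"
```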
a Hive thing. :)
>
> So your second approach was close. Just omit the partition columns yr,
> mo, day.
>
>
> On Wed, Mar 26, 2014 at 8:18 AM, P lva wrote:
>
>> Hello,
>>
>> I'm trying to convert a managed partitioned text table into a compressed ORC
>> partitioned table.
Hello,
I'm trying to convert a managed partitioned text table into a compressed ORC
partitioned table.
I created a new table with the same schema, but when I try inserting
data the error says there is a different number of columns.
I tried doing
From table a insert into table b(yr=2014, mo=01,
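For reference, the dynamic-partition form suggested in the reply might look like the sketch below (table and column names are placeholders). The key point is that yr, mo, day appear only in the PARTITION clause and as the last columns of the SELECT, not in a target column list:

```sql
-- Hedged sketch: text -> compressed ORC with dynamic partitions.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

FROM table_a a
INSERT OVERWRITE TABLE table_b PARTITION (yr, mo, day)
SELECT a.col1, a.col2,      -- data columns first, matching table_b's schema
       a.yr, a.mo, a.day;   -- partition columns last, in PARTITION-clause order
```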
Hi,
I have a Flume stream that stores data in a directory which is the source
for an external table in Hive.
However, Flume creates tmp files that cannot be read by Hive, and queries
break.
Is there any way to keep Hive from reading these tmp files?
Thanks
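One common workaround (a sketch; the agent and sink names are placeholders for your Flume configuration) is to make Flume's in-flight files start with an underscore or a dot, since Hive's input formats skip such files:

```properties
# Flume HDFS sink: hide in-progress files from Hive by prefixing them
# with "_" (MapReduce/Hive input formats ignore files starting with _ or .).
agent.sinks.hdfsSink.hdfs.inUsePrefix = _
agent.sinks.hdfsSink.hdfs.inUseSuffix = .tmp
```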
...accordingly.
>
> works very well for me.
>
> cheers,
> Stephen.
> PS no need to "export" or "import" data.
>
>
>
> On Wed, Jan 15, 2014 at 10:25 AM, P lva wrote:
>
>> Hello,
>>
>> I'm trying to move a Hive table to a different cluster.
Hello,
I'm trying to move a Hive table to a different cluster. My table is
partitioned. However, when I use Hive export, distcp, and import, the
partition directories come out reversed: /warehouse/db/table/a/b/c on the
old cluster becomes /warehouse/db/table/c/b/a on the new cluster.
How do I avoid this?
Thanks
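The quoted reply's approach (copy the files directly and skip export/import) might look roughly like this sketch; the namenode addresses, paths, and table names are placeholders:

```shell
# Copy the partitioned table's directory tree as-is between clusters,
# preserving the partition path layout.
hadoop distcp hdfs://oldnn:8020/warehouse/db/table hdfs://newnn:8020/warehouse/db/
# On the new cluster: create the table with the same DDL, then have the
# metastore discover the copied partition directories.
hive -e "MSCK REPAIR TABLE db.table;"
```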