Check the base directory in the Hive warehouse.
On Tue, Dec 2, 2014 at 12:42 PM, vic0777 wrote:
> Hi All,
>
> I am trying to use the new transaction feature in Hive-0.14. According to
> its documentation, every transaction table has a base directory and one delta
> directory for each transaction in HDFS
Hi All,
I am trying to use the new transaction feature in Hive-0.14. According to its
documentation, every transaction table has a base directory and one delta directory
for each transaction in HDFS for data storage. But I cannot find where the
base directory is in HDFS; there are only delta directories
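If it helps, the base directory normally appears only after a major compaction
has run; until then a transactional table's directory holds only delta_*
subdirectories. A minimal sketch of how to check this, assuming the default
warehouse path and an illustrative table name my_txn_table:

  # list the table's storage location; expect only delta_* dirs before compaction
  hdfs dfs -ls /user/hive/warehouse/my_txn_table

  # request a major compaction; once the compactor finishes, a base_N dir appears
  hive -e "ALTER TABLE my_txn_table COMPACT 'major';"
  hdfs dfs -ls /user/hive/warehouse/my_txn_table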
You need to build your Spark assembly from the Spark 1.2 branch. This should
give you both a Spark build and a spark-assembly jar, which you need
to copy to Hive's lib directory. A snapshot is fine; Spark 1.2 hasn't been
released yet.
--Xuefu
On Mon, Dec 1, 2014 at 7:41 PM, yuemeng1 wrote:
>
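A rough sketch of the build and copy steps described above, assuming a git
checkout of Apache Spark; the Maven profiles, Hadoop version, and jar name are
illustrative and depend on your environment:

  # build a Spark distribution from the 1.2 branch (profiles are examples)
  git clone https://github.com/apache/spark.git
  cd spark
  git checkout branch-1.2
  ./make-distribution.sh --name hadoop2.4 --tgz -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0

  # copy the resulting assembly jar into Hive's lib directory
  cp assembly/target/scala-2.10/spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar $HIVE_HOME/lib/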
I think your Spark cluster needs to be a build from the latest Spark-1.2
branch. You need to build it yourself.
--Xuefu
On Mon, Dec 1, 2014 at 7:59 PM, yuemeng1 wrote:
> I got a spark-1.1.0-bin-hadoop2.4 from (
> http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/) and
> replaced the Spark
I tried to update my record in a previous Hive version and also tried out
UPDATE in Hive 0.14.0, the newer version which supports it.
I created a table with 3 buckets holding 180 MB. In my warehouse the data gets
stored into 3 different blocks:
delta_012_012
--- Block ID: 1073751752
--- Block I
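For reference, a minimal sketch of the kind of bucketed, transactional table
that produces such delta directories (table and column names and the bucket
count are illustrative; ORC storage and the transactional table property are
what Hive 0.14 ACID expects):

  hive -e "
    CREATE TABLE acid_demo (id INT, name STRING)
    CLUSTERED BY (id) INTO 3 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');
  "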
I got a spark-1.1.0-bin-hadoop2.4
from (http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/) and
replaced the Spark assembly with the 1.2.x one,
http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.2.0-SNAPSHOT-hadoop2.3.0-cdh5.1.2.jar, but
when I run a join query, I
Hi Xuefu,
Thanks a lot for your information, but as far as I know the latest Spark
version on GitHub is spark-snapshot-1.3; there is no spark-1.2, only
a branch-1.2 with spark-snapshot-1.2. Can you tell me which Spark
version I should build? And for now, that's
spark-assembly-1.2.0-SNAPSHO
What should I be checking for?
I have logs in the system but am unsure what I should look for; any pointers?
The errors happen so quickly that I am not able to trap the calls and look
at the MySQL side to see what's happening.
With regards to the query not loading data into a partition even though
th
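One thing that sometimes helps here, as a rough sketch: run the client with
DEBUG logging and temporarily enable MySQL's general query log so the
metastore calls can be watched as they happen (the MySQL commands assume admin
access and are meant only for a short debugging session):

  # run the Hive CLI with verbose logging on the console
  hive --hiveconf hive.root.logger=DEBUG,console

  # temporarily capture every statement the metastore sends to MySQL
  mysql -u root -p -e "SET GLOBAL general_log_file = '/tmp/mysql-general.log'; SET GLOBAL general_log = 'ON';"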
It seems that the wrong class, HiveInputFormat, is loaded. The stack trace is
way off from the current Hive code. You need to build Spark 1.2 and copy the
spark-assembly jar to Hive's lib directory, and that's it.
--Xuefu
On Mon, Dec 1, 2014 at 6:22 PM, yuemeng1 wrote:
> Hi, I built a Hive on Spark package and
Hi, I built a Hive on Spark package and my Spark assembly jar is
spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar. When I run a query in the Hive
shell, before executing it
I set all the properties Hive needs for Spark, and I execute a join
query:
select distinct st.sno,sname from student st j
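For completeness, a sketch of the kind of session properties meant by "all the
properties Hive needs for Spark" (the master URL, memory size, and the trailing
query are only placeholders for illustration):

  hive -e "
    set hive.execution.engine=spark;
    set spark.master=yarn-cluster;
    set spark.executor.memory=2g;
    set spark.serializer=org.apache.spark.serializer.KryoSerializer;
    select count(*) from student;
  "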
Can you please check your MySQL service, which stores the metadata?
On Dec 2, 2014 4:18 AM, "Viral Bajaria" wrote:
> Hi,
>
> I have been running into 2 issues with dynamic partitioning queries.
>
> The query runs fine and I get errors like:
>
> *Failed with exception
> MetaException(message:javax.jdo.JDOD
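A minimal sketch of what checking the MySQL service behind the metastore might
look like (service name, user, and database are placeholders; the real
connection details are in hive-site.xml under javax.jdo.option.ConnectionURL):

  # is the MySQL server that backs the metastore running?
  service mysqld status

  # can the metastore database be reached with the credentials from hive-site.xml?
  mysql -u hive -p -h localhost -e "USE metastore; SHOW TABLES;"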
Hi,
I have been running into 2 issues with dynamic partitioning queries.
The query runs fine and I get errors like:
*Failed with exception
MetaException(message:javax.jdo.JDODataStoreException: Insert of object
"org.apache.hadoop.hive.metastore.model.MPartition@5bbd373e" using
statement "INSERT
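For context, a sketch of the general shape of such a dynamic-partition insert
(table and column names are made up; the two set statements are the usual
prerequisites for dynamic partitioning):

  hive -e "
    set hive.exec.dynamic.partition=true;
    set hive.exec.dynamic.partition.mode=nonstrict;
    INSERT OVERWRITE TABLE events PARTITION (event_date)
    SELECT user_id, payload, event_date FROM staging_events;
  "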
Hi,
I am trying to connect to Hive via a JDBC client (Java code) but the connection
hangs forever. The same issue occurs with the beeline client. I googled around and
added "auth=noSasl" to my JDBC URL but still no luck.
HiveServer2 startup :
nohup hive --service hiveserver2
Beeline client :
beelin
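For reference, a sketch of the startup and connection commands (host, port,
and user are placeholders; the ;auth=noSasl part is only needed when the
server has hive.server2.authentication set to NOSASL):

  # start HiveServer2 in the background
  nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &

  # connect with beeline; drop ";auth=noSasl" unless the server is configured for NOSASL
  beeline -u "jdbc:hive2://localhost:10000/default;auth=noSasl" -n hive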
Hi,
I am trying to make an avro backed table that will hold a map in one of
it's columns. One of map's keys has a highly skewed values so I want to
declare that in table definition. However when I do it like so:
CREATE TABLE my_table
PARTITIONED BY (rec_date date)
SKEWED BY (myDataMap['skewed
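As far as I can tell, SKEWED BY only accepts plain column names (with ON
listing the skewed values), not expressions such as a map key, so a sketch of
the supported form would look something like this (names and values made up):

  hive -e "
    CREATE TABLE my_table (id BIGINT, my_data_map MAP<STRING,STRING>, country STRING)
    PARTITIONED BY (rec_date DATE)
    SKEWED BY (country) ON ('US') STORED AS DIRECTORIES;
  "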
Hi all,
I am trying to derive a temporary column/variable in a select query and use the
column/variable to derive a few other columns in Hive.
I believe Hive does not support this style of programming and that I should use
nested queries. But with the nested structure, my query is taking much longer
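One common way to express this is to compute the temporary column once in a
subquery or a WITH clause (available since Hive 0.13) and reuse it in the
outer select; a sketch with made-up table and column names:

  hive -e "
    WITH base AS (
      SELECT id, price * qty AS gross FROM sales
    )
    SELECT id, gross, gross * 0.9 AS net FROM base;
  "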
It is running. As I mentioned earlier, I get the error only after setting the
transaction properties; I was able to run normal Hive queries when I
disabled the transaction-related properties!
On Mon, Dec 1, 2014 at 1:52 PM, unmesha sreeveni
wrote:
>
> On Mon, Dec 1, 2014 at 12:31 PM, yogendra reddy
> w
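For comparison, the transaction-related properties Hive 0.14 documents for
ACID tables look roughly like this (the compactor settings normally belong in
the metastore's hive-site.xml rather than the client session, and the worker
thread count is illustrative):

  hive -e "
    set hive.support.concurrency=true;
    set hive.enforce.bucketing=true;
    set hive.exec.dynamic.partition.mode=nonstrict;
    set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    set hive.compactor.initiator.on=true;
    set hive.compactor.worker.threads=1;
  "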
On Mon, Dec 1, 2014 at 12:31 PM, yogendra reddy
wrote:
> WARN metastore.RetryingMetaStoreClient: MetaStoreClient lost connection.
>
It looks like the hive-metastore service is not running.
Can you please check the same?
--
*Thanks & Regards*
*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
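A minimal sketch of how to check and (re)start the metastore service (the
default Thrift port 9083 and the nohup invocation are assumptions about a
typical standalone-metastore setup):

  # is anything listening on the metastore's default Thrift port?
  netstat -tlnp | grep 9083

  # if not, start the metastore service in the background
  nohup hive --service metastore > metastore.log 2>&1 &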