I stand corrected. I'm actually bundling hadoop-hdfs-2.7.3, because
otherwise I'd get a NoClassDefFoundError for the CanUnbuffer class. On top
of that, I added the following to my core-site.xml:

<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.LocalFileSystem</value>
  <description>The FileSystem for file: uris.</description>
</property>
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  <description>The FileSystem for hdfs: uris.</description>
</property>

to no avail.
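
One thing I still have to try: from what I've read, "No FileSystem for
scheme: hdfs" often means the shaded/assembly jar keeps only one
META-INF/services/org.apache.hadoop.fs.FileSystem file instead of merging
them, so the hdfs entry is lost no matter which jars are bundled. A sketch
of the fix, assuming a Maven build with the shade plugin (untested on my
side):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- Merge META-INF/services files (FileSystem registry included)
           instead of letting one jar's copy overwrite the others. -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
    </transformers>
  </configuration>
</plugin>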


2017-07-10 21:31 GMT+02:00 Federico D'Ambrosio <fedex...@gmail.com>:

> Thanks for your reply! Your comment made me realize that the table I was
> trying to write to didn't have any partitions, while I was trying to write
> to a specific partition:
>
> val mapper: DelimitedRecordHiveMapper = new DelimitedRecordHiveMapper()
>   .withColumnFields(new Fields(colNames))
>   .withTimeAsPartitionField("YYYY/MM/DD")
>
> could that be the problem?
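>
> For context, a sketch of the two setups I'd expect to be self-consistent
> (the column names below are placeholders, not my real schema). A side
> note: if the pattern follows SimpleDateFormat conventions, "yyyy/MM/dd"
> may be intended rather than "YYYY/MM/DD", since YYYY is week-year and DD
> is day-of-year:
>
> // Placeholder column list standing in for my actual colNames.
> val colNames = java.util.Arrays.asList("id", "name", "value")
>
> // Non-partitioned target table: no partition field at all.
> val plainMapper = new DelimitedRecordHiveMapper()
>   .withColumnFields(new Fields(colNames))
>
> // Table PARTITIONED BY a time column: lower-case SimpleDateFormat
> // pattern (yyyy = year, MM = month, dd = day-of-month).
> val partitionedMapper = new DelimitedRecordHiveMapper()
>   .withColumnFields(new Fields(colNames))
>   .withTimeAsPartitionField("yyyy/MM/dd")
>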
> Anyway, I tried commenting out the withTimeAsPartitionField call and I am
> now getting a totally different error, which could really be the actual
> issue (the complete stack trace is attached):
>
> java.io.IOException: No FileSystem for scheme: hdfs
>
> which makes me think I'm bundling the wrong HDFS jar in the application
> jar I'm building. Still, the bundled version is HDFS 2.6.1, while the
> version on the cluster is 2.7.3.2.5.5.0-157 (HDP 2.5); shouldn't they be
> compatible?
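>
> As a sanity check on the bundling, something along these lines (plain JVM
> reflection, nothing Storm-specific) should print which jar actually
> provides the HDFS FileSystem class at runtime:
>
> // Prints the jar DistributedFileSystem was loaded from; getCodeSource
> // can be null for bootstrap classes, hence the Option wrapper.
> val cls = Class.forName("org.apache.hadoop.hdfs.DistributedFileSystem")
> println(Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation))
>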
>
> Any suggestion?
>
>
> 2017-07-10 20:02 GMT+02:00 Eugene Koifman <ekoif...@hortonworks.com>:
>
>> Are you able to write to an existing Hive partition? (The stack trace
>> shows that the partition is being created.)
>>
>> From: Federico D'Ambrosio <fedex...@gmail.com>
>> Reply-To: "d...@hive.apache.org" <d...@hive.apache.org>
>> Date: Monday, July 10, 2017 at 7:38 AM
>> To: "user@hive.apache.org" <user@hive.apache.org>, "d...@hive.apache.org"
>> <d...@hive.apache.org>
>> Subject: Non-local session path expected to be non-null trying to
>> write on Hive using storm-hive
>>
>> Greetings,
>>
>> I'm trying to get a working dataflow stack on a 6-node cluster (2 masters
>> + 4 slaves, no Kerberos) using Kafka (2.10_0.10), Storm (1.0.1) and Hive2
>> (1.2.1). Storm is able to communicate with Kafka, but seemingly can't
>> operate on Hive (on master-1), even though it manages to connect to its
>> metastore.
>>
>> I originally thought it was a permissions problem on either HDFS or the
>> local filesystem, but the issue persists even after setting 777
>> permissions on /tmp/hive.
>>
>> In core-site.xml:
>>
>>    - hadoop.proxyuser.hcat.groups
>>    - hadoop.proxyuser.hcat.hosts
>>    - hadoop.proxyuser.hdfs.groups
>>    - hadoop.proxyuser.hdfs.hosts
>>    - hadoop.proxyuser.hive.groups
>>    - hadoop.proxyuser.hive.hosts
>>    - hadoop.proxyuser.root.groups
>>    - hadoop.proxyuser.root.hosts
>>
>> are all set to '*'.
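>>
>> For example, each of those entries has the form:
>>
>> <property>
>>   <name>hadoop.proxyuser.hive.hosts</name>
>>   <value>*</value>
>> </property>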
>>
>> Hive2, as far as I can see, is correctly set up for transactions: the
>> target table has transactional=true, is stored as ORC, and is bucketed.
>> In hive-site.xml:
>>
>>    - hive.compactor.worker.threads = 1
>>    - hive.compactor.initiator.on = true
>>    - hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
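>>
>> To make that concrete, a table set up the way I describe would look
>> roughly like this (placeholder names, not my actual schema):
>>
>> CREATE TABLE acid_target (id INT, name STRING, value DOUBLE)
>> CLUSTERED BY (id) INTO 4 BUCKETS
>> STORED AS ORC
>> TBLPROPERTIES ('transactional'='true');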
>>
>> I get a NullPointerException; you may find the stack trace among the
>> attached files.
>>
>> From what I can gather, the NullPointerException is thrown in the
>> following method inside SessionState:
>>
>> public static Path getHDFSSessionPath(Configuration conf) {
>>   SessionState ss = SessionState.get();
>>   if (ss == null) {
>>     String sessionPathString = conf.get(HDFS_SESSION_PATH_KEY);
>>     Preconditions.checkNotNull(sessionPathString,
>>         "Conf non-local session path expected to be non-null");
>>     return new Path(sessionPathString);
>>   }
>>   Preconditions.checkNotNull(ss.hdfsSessionPath,
>>       "Non-local session path expected to be non-null");
>>   return ss.hdfsSessionPath;
>> }
>>
>> Specifically, by:
>>
>> Preconditions.checkNotNull(ss.hdfsSessionPath,
>>     "Non-local session path expected to be non-null");
>>
>> So it seems to be an HDFS-related issue, but I can't understand why
>> it's happening.
>>
>> From what I gather, this occurs when Hive tries to retrieve the local
>> path of the session, which is stored in the _hive.local.session.path
>> configuration variable. The value of this variable is assigned each time a
>> new Hive session is created, and it is formed by joining the path for user
>> temporary files (hive.exec.local.scratchdir) with the session ID
>> (hive.session.id).
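>>
>> In other words, something like this (my paraphrase of the behaviour, not
>> actual Hive source):
>>
>> // Hypothetical sketch of how the local session path gets composed.
>> val localScratchDir = conf.get("hive.exec.local.scratchdir")
>> val sessionId = conf.get("hive.session.id")
>> val localSessionPath = new Path(localScratchDir, sessionId)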
>>
>> If it is indeed a permissions issue, what should I look into to find its
>> origin?
>>
>> Thanks for your help,
>>
>> Federico
>>
>
>
>
> --
> Federico D'Ambrosio
>



-- 
Federico D'Ambrosio
