Re: hive failure after HDP 2.3 upgrade

2015-11-19 Thread Brian Jeltema
On Nov 19, 2015 11:39 AM, "Brian Jeltema" <bdjelt...@gmail.com> wrote: > Following up, I turned on logging in the MySQL server to capture the failing > query. The query being logged by MySQL is > > SELECT `A0`.`NAME` AS NUCORDER0 FROM `DBS` `A0` WHERE

Re: hive failure after HDP 2.3 upgrade

2015-11-19 Thread Brian Jeltema
the backslash in the ESCAPE clause should be doubled. How can I fix this? Brian > On Nov 19, 2015, at 7:28 AM, Brian Jeltema wrote: > > Originally posted in the Ambari users group, but probably more appropriate > here: > > I’ve done a rolling upgrade to HDP 2.3 and everything a

hive failure after HDP 2.3 upgrade

2015-11-19 Thread Brian Jeltema
Originally posted in the Ambari users group, but probably more appropriate here: I’ve done a rolling upgrade to HDP 2.3 and everything appears to be working now except for Hive. The HiveServer2 process is shown as ‘Started’, but it’s really broken, as is the Hive Metastore. HiveServer2 is not li

Re: EXPORTing multiple partitions

2015-06-25 Thread Brian Jeltema
, Brian Jeltema wrote: > Using Hive 0.13, I would like to export multiple partitions of a table, > something conceptually like: > > EXPORT TABLE foo PARTITION (id=1,2,3) to ‘path’ > > Is there any way to accomplish this? > > Brian

EXPORTing multiple partitions

2015-06-25 Thread Brian Jeltema
Using Hive 0.13, I would like to export multiple partitions of a table, something conceptually like: EXPORT TABLE foo PARTITION (id=1,2,3) to ‘path’ Is there any way to accomplish this? Brian
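As far as I know, EXPORT in Hive 0.13 accepts only a single partition spec per statement, so the usual workaround is to emit one EXPORT per partition. A minimal sketch (the table and column names `foo`/`id` come from the thread; the target path layout is an assumption):

```python
# Sketch: generate one EXPORT statement per partition value, since
# EXPORT TABLE takes a single partition spec at a time.
def export_statements(table, column, values, base_path):
    """Build one Hive EXPORT statement per partition value."""
    return [
        f"EXPORT TABLE {table} PARTITION ({column}={v}) TO '{base_path}/{column}={v}';"
        for v in values
    ]

for stmt in export_statements("foo", "id", [1, 2, 3], "/tmp/export"):
    print(stmt)
```

The generated script can then be run with `hive -f`, giving one export directory per partition.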

writing to bucketed table in MR job

2015-01-13 Thread Brian Jeltema
I have a table that I would like to define to be bucketed, but I also need to write to new partitions using HCatOutputFormat (or similar) from an MR job. I’m getting an unsupported operation error when I try to do that. Is there some way to make this work? I suppose I could write to a temporary
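The temporary-table idea at the end of the message is the usual answer: Hive expects bucket file i of a bucketed table to contain exactly the rows whose bucket-column hash mod N equals i, and an external writer such as HCatOutputFormat would have to reproduce that placement, so Hive disallows it. Writing to a plain staging table and letting an `INSERT ... SELECT` re-cluster the data sidesteps the problem. A toy sketch of the mod-N placement (Hive's actual hash function differs; this only models the assignment):

```python
# Toy illustration of bucket placement in a bucketed table: row -> file
# index is hash(bucket_column) % num_buckets. An external writer would
# have to reproduce this exactly, which is why direct writes are rejected.
def bucket_of(key: int, num_buckets: int) -> int:
    return key % num_buckets  # stand-in for Hive's hash-mod placement

rows = [(7, "a"), (12, "b"), (3, "c"), (8, "d")]
buckets = {}
for key, val in rows:
    buckets.setdefault(bucket_of(key, 4), []).append((key, val))
print(buckets)
```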

UPDATE implementation

2014-12-03 Thread Brian Jeltema
I’m anticipating using UPDATE statements in Hive 0.14. In my use case, I may need to perform 30 or so updates at a time. Will each UPDATE result in an MR job doing a full partition scan? Brian
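If each UPDATE does turn out to run as its own job over the partition, one common mitigation is to fold many small updates into a single statement with a CASE expression, so the data is scanned once. A sketch that generates such a statement (the table and column names `events`/`status`/`id` are hypothetical):

```python
# Sketch: fold many single-row UPDATEs into one statement so the target
# is scanned once rather than once per UPDATE. Names are hypothetical.
def batch_update(table, set_col, key_col, changes):
    """changes: {key_value: new_value} -> a single UPDATE with a CASE."""
    cases = " ".join(
        f"WHEN {k} THEN '{v}'" for k, v in sorted(changes.items())
    )
    keys = ", ".join(str(k) for k in sorted(changes))
    return (
        f"UPDATE {table} SET {set_col} = CASE {key_col} {cases} END "
        f"WHERE {key_col} IN ({keys});"
    )

print(batch_update("events", "status", "id", {1: "done", 2: "failed"}))
```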

silent mode isn't silent

2014-08-27 Thread Brian Jeltema
Hive 0.13, I execute a query in silent mode, persisting the output as: hive -S -f query.hql >/tmp/output.txt but I’m getting logging output in the output file, such as: 2014-08-27 14:53:02,741 [main] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hdfs/hive_2014-08-27_14-52-58_968_6
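A first diagnostic step is to check which stream the WARN lines arrive on: if they go to stderr, a `2>/dev/null` redirect keeps them out of the output file; if they share stdout with the results, the log4j configuration has to change instead (e.g. via a `-hiveconf hive.root.logger=...` override, depending on the build). A generic sketch of separating the streams, with a stand-in shell command replacing the real `["hive", "-S", "-f", "query.hql"]` invocation:

```python
# Generic diagnostic sketch: capture stdout and stderr separately to see
# which stream log noise uses. The hive CLI is replaced by a stand-in
# command here so the example is self-contained.
import subprocess

proc = subprocess.run(
    ["sh", "-c", "echo result_row; echo 'WARN some logger noise' >&2"],
    capture_output=True, text=True,
)
print("stdout:", proc.stdout.strip())  # query results only
print("stderr:", proc.stderr.strip())  # log noise, if it goes to stderr
```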

Re: UDF with dependent JARs

2014-08-03 Thread Brian Jeltema
Thanks In Advance ;^) > Regards, > Sankar S > > > On Sat, Aug 2, 2014 at 5:17 PM, Brian Jeltema > wrote: > I've written a small UDF and placed it in a JAR (a.jar). > > The UDF has a dependency on a class in another JAR (b.jar). > > in Hive, I do: >

UDF with dependent JARs

2014-08-02 Thread Brian Jeltema
I've written a small UDF and placed it in a JAR (a.jar). The UDF has a dependency on a class in another JAR (b.jar). in Hive, I do: add jar a.jar; add jar b.jar; create temporary function .; but when I execute the UDF, the dependency in b.jar is not found (NoClassDefFoundError). If I
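When `add jar` for the dependency doesn't propagate to the tasks (behavior varied across Hive versions), a blunt but reliable workaround is a "fat jar" that bundles b.jar's classes into the UDF jar, so a single ADD JAR carries everything. In a real build this is the Maven Shade or Assembly plugin's job; the toy sketch below just shows the merge idea with `zipfile` (jar names follow the thread; class names are made up):

```python
# Toy "fat jar" sketch: merge the entries of a dependency jar into the
# UDF jar so one ADD JAR carries both. First jar wins on duplicates.
import os
import tempfile
import zipfile

def merge_jars(primary, dependency, merged):
    """Copy entries from both jars into one combined jar."""
    with zipfile.ZipFile(merged, "w") as out:
        seen = set()
        for jar in (primary, dependency):
            with zipfile.ZipFile(jar) as zf:
                for name in zf.namelist():
                    if name not in seen:
                        seen.add(name)
                        out.writestr(name, zf.read(name))

work = tempfile.mkdtemp()
a_jar, b_jar, ab_jar = (os.path.join(work, n) for n in ("a.jar", "b.jar", "ab.jar"))
with zipfile.ZipFile(a_jar, "w") as z:
    z.writestr("com/example/Udf.class", b"udf bytecode")
with zipfile.ZipFile(b_jar, "w") as z:
    z.writestr("com/example/Helper.class", b"helper bytecode")
merge_jars(a_jar, b_jar, ab_jar)
merged_names = sorted(zipfile.ZipFile(ab_jar).namelist())
print(merged_names)
```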

HCat and non-string partition types

2014-07-23 Thread Brian Jeltema
I have some Hive tables that are partitioned by an int field. When I tried to do a Sqoop import using Sqoop's HCatalog support, it failed complaining that HCatalog only supports string partitions. However, I’ve used HCatalog in mapReduce jobs with int partitions successfully. The docs that I’ve s

Re: DECIMAL precision is too small

2014-06-29 Thread Brian Jeltema
Right, but in my case the numbers are never negative. On Jun 29, 2014, at 9:52 AM, Edward Capriolo wrote: > That does not work if you're sorting negative numbers btw. As you would have to > - pad and reverse negative numbers. > > > On Sun, Jun 29, 2014 at 6:35 AM, Brian Je
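The zero-padding trick being discussed can be sketched concretely: store the unsigned 128-bit value as a fixed-width decimal string so lexicographic order matches numeric order (2^128 − 1 has 39 digits, one more than DECIMAL's 38-digit maximum). As the exchange notes, this only works for non-negative values, which is Brian's case; negatives would need the pad-and-reverse handling Edward mentions.

```python
# Sketch of the padding trick from the thread: fixed-width decimal
# strings sort lexicographically in the same order as the numbers they
# encode, for non-negative values only.
MAX_U128 = 2**128 - 1
WIDTH = len(str(MAX_U128))  # 39 digits, one more than DECIMAL's 38 max

def encode_u128(n: int) -> str:
    if not 0 <= n <= MAX_U128:
        raise ValueError("value out of unsigned 128-bit range")
    return str(n).zfill(WIDTH)

values = [MAX_U128, 0, 10**20, 42]
encoded = sorted(encode_u128(v) for v in values)
assert [int(s) for s in encoded] == sorted(values)  # order preserved
print(WIDTH)
```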

Re: DECIMAL precision is too small

2014-06-29 Thread Brian Jeltema
ble, we could include > it in the documentation.) > > -- Lefty > > > On Sat, Jun 28, 2014 at 10:08 AM, Brian Jeltema > wrote: > Hive doesn’t support a BigDecimal data type, as far as I know. It supports a > Decimal type that > is based on BigDecimal, but the precisi

Re: DECIMAL precision is too small

2014-06-28 Thread Brian Jeltema
ghosh wrote: > Did you try BigDecimal? It is the same datatype as Java BigDecimal. > > > On Thursday, 26 June 2014 8:34 AM, Brian Jeltema > wrote: > > > Sorry, I meant 128 bit > > On Jun 26, 2014, at 11:31 AM, Brian Jeltema > wrote: > > > I need

Re: DECIMAL precision is too small

2014-06-26 Thread Brian Jeltema
Sorry, I meant 128 bit On Jun 26, 2014, at 11:31 AM, Brian Jeltema wrote: > I need to represent an unsigned 64-bit value as a Hive DECIMAL. The current > precision maximum is 38, > which isn’t large enough to represent the high-end of this value. Is there an > alternative? > > Brian >

DECIMAL precision is too small

2014-06-26 Thread Brian Jeltema
I need to represent an unsigned 64-bit value as a Hive DECIMAL. The current precision maximum is 38, which isn’t large enough to represent the high-end of this value. Is there an alternative? Brian

Re: hive/hbase integration

2014-06-25 Thread Brian Jeltema
r install environment. Also replace $HBASE_HOME > with the full path of your hbase install. > > -Deepesh > > On Mon, Jun 23, 2014 at 9:14 AM, Brian Jeltema > wrote: > I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying > to use HBase integration

hive/hbase integration

2014-06-23 Thread Brian Jeltema
I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying to use HBase integration. Hive generated Map/Reduce jobs are failing with: Error: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.TableSplit this is discussed in several discussion threads, but

Re: HCatalog access from a Java app

2014-06-21 Thread Brian Jeltema
I’m also experimenting with version 0.13, and see that it differs from 0.12 significantly. Can you give me a code example for 0.13? Thanks Brian On Jun 13, 2014, at 9:25 AM, Brian Jeltema wrote: > Version 0.12.0. > > I’d like to obtain the table’s schema, scan a table partition, an

reading from non-string partitions

2014-06-20 Thread Brian Jeltema
I have defined a table that is partitioned on a value of type int. The ReadEntity.Builder.withPartition method accepts a Map object to define the partition to read. I assumed that I had to convert the int to a string to create the map, and that it would be automatically converted back to the corre

Re: HCatalog access from a Java app

2014-06-16 Thread Brian Jeltema
ig = readerContext.getConfig(); > > Step 4: Get records > > a) for each input split get the reader: > > HCatReader hcatReader = DataTransferFactory.getHCatReader(inputSplit, config); > > Iterator records = hcatReader.read(); > > b) Iterate over the records for th

Re: HCatalog access from a Java app

2014-06-16 Thread Brian Jeltema
obInfo); > HCatSchema s = HCatInputFormat.getTableSchema(job); > > > 3. To read the HCat records > > It depends on how you' like to read the records ... will you be reading ALL > the records remotely from the client app > or you will get input splits an

Re: HCatalog access from a Java app

2014-06-13 Thread Brian Jeltema
Doing this, with the appropriate substitutions for my table, jarClass, etc: > 2. To get the table schema... I assume that you are after HCat schema > > > import org.apache.hadoop.conf.Configuration; > import org.apache.hadoop.mapreduce.InputSplit; > import org.apache.hadoop.mapreduce.Job; > im

Re: HCatalog access from a Java app

2014-06-13 Thread Brian Jeltema
emoved in Hive 0.14.0. I can provide you with the code sample if you > tell me what you are trying to do and what version of Hive you are using. > > > On Fri, Jun 13, 2014 at 7:33 AM, Brian Jeltema > wrote: > I’m experimenting with HCatalog, and would like to be able to access

HCatalog access from a Java app

2014-06-13 Thread Brian Jeltema
I’m experimenting with HCatalog, and would like to be able to access tables and their schema from a Java application (not Hive/Pig/MapReduce). However, the API seems to be hidden, which leads me to believe that this is not a supported use case. Is HCatalog use limited to one of the support