On Nov 19, 2015 11:39 AM, "Brian Jeltema" <bdjelt...@gmail.com> wrote:
> Following up, I turned on logging in the MySQL server to capture the failing query. The query being logged by MySQL is
>
> SELECT `A0`.`NAME` AS NUCORDER0 FROM `DBS` `A0` WHERE
The backslash in the ESCAPE clause should be doubled. How can I fix this?
Brian
Originally posted in the Ambari users group, but probably more appropriate here:

I’ve done a rolling upgrade to HDP 2.3 and everything appears to be working now except for Hive. The HiveServer2 process is shown as ‘Started’, but it’s really broken, as is the Hive Metastore. HiveServer2 is not li
Using Hive 0.13, I would like to export multiple partitions of a table, something conceptually like:
EXPORT TABLE foo PARTITION (id=1,2,3) to ‘path’
Is there any way to accomplish this?
Brian
I have a table that I would like to define to be bucketed, but I also need to write to new partitions using HCatOutputFormat (or similar) from an MR job. I’m getting an unsupported operation error when I try to do that. Is there some way to make this work?

I suppose I could write to a temporary
I’m anticipating using UPDATE statements in Hive 0.14.
In my use case, I may need to perform 30 or so updates at a time. Will each UPDATE result in an MR job doing a full partition scan?
Brian
Using Hive 0.13, I execute a query in silent mode, persisting the output as:
hive -S -f query.hql >/tmp/output.txt
but I’m getting logging output in the output file, such as:
2014-08-27 14:53:02,741 [main] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hdfs/hive_2014-08-27_14-52-58_968_6
Thanks In Advance ;^)
I've written a small UDF and placed it in a JAR (a.jar).
The UDF has a dependency on a class in another JAR (b.jar).
In Hive, I do:
add jar a.jar;
add jar b.jar;
create temporary function .;
but when I execute the UDF, the dependency in b.jar is not found (NoClassDefFoundError).

If I
I have some Hive tables that are partitioned by an int field. When I tried to do a Sqoop import using Sqoop’s HCatalog support, it failed complaining that HCatalog only supports string partitions. However, I’ve used HCatalog in MapReduce jobs with int partitions successfully. The docs that I’ve s
Right, but in my case the numbers are never negative.
On Jun 29, 2014, at 9:52 AM, Edward Capriolo wrote:
> That does not work if you're sorting negative numbers btw. As you would have to pad and reverse negative numbers.
> ble, we could include it in the documentation.)
>
> -- Lefty
> On Sat, Jun 28, 2014 at 10:08 AM, Brian Jeltema wrote:
> Hive doesn’t support a BigDecimal data type, as far as I know. It supports a Decimal type that is based on BigDecimal, but the precisi
ghosh wrote:
> Did you try BigDecimal? It is the same datatype as Java BigDecimal.
Sorry, I meant 128 bit
On Jun 26, 2014, at 11:31 AM, Brian Jeltema wrote:
I need to represent an unsigned 64-bit value as a Hive DECIMAL. The current precision maximum is 38, which isn’t large enough to represent the high-end of this value. Is there an alternative?
Brian
r install environment. Also replace $HBASE_HOME with the full path of your hbase install.

-Deepesh
I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying to use HBase integration. Hive generated Map/Reduce jobs are failing with:

Error: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.TableSplit

this is discussed in several discussion threads, but
I’m also experimenting with version 0.13, and see that it differs from 0.12 significantly.
Can you give me a code example for 0.13?
Thanks
Brian
On Jun 13, 2014, at 9:25 AM, Brian Jeltema wrote:
> Version 0.12.0.
>
> I’d like to obtain the table’s schema, scan a table partition, an
I have defined a table that is partitioned on a value of type int. The ReadEntity.Builder.withPartition method accepts a Map object to define the partition to read. I assumed that I had to convert the int to a string to create the map, and that it would be automatically converted back to the corre
> config = readerContext.getConfig();
>
> Step 4: Get records
>
> a) for each input split get the reader:
>
> HCatReader hcatReader = DataTransferFactory.getHCatReader(inputSplit, config);
>
> Iterator records = hcatReader.read();
>
> b) Iterate over the records for th
obInfo);
> HCatSchema s = HCatInputFormat.getTableSchema(job);
>
>
> 3. To read the HCat records
>
> It depends on how you’d like to read the records ... will you be reading ALL the records remotely from the client app or you will get input splits an
Doing this, with the appropriate substitutions for my table, jarClass, etc:
> 2. To get the table schema... I assume that you are after HCat schema
>
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.mapreduce.InputSplit;
> import org.apache.hadoop.mapreduce.Job;
> im
removed in Hive 0.14.0. I can provide you with the code sample if you tell me what you are trying to do and what version of Hive you are using.
I’m experimenting with HCatalog, and would like to be able to access tables and their schema from a Java application (not Hive/Pig/MapReduce). However, the API seems to be hidden, which leads me to believe that this is not a supported use case. Is HCatalog use limited to one of the support