AFAIK, Hive uses its default delimiters for nested data structures. There is no
workaround for this at the moment.
// From Hive's LazySimpleSerDe: only the first few separators are configurable;
// deeper nesting levels are hard-coded to '\004', '\005', and so on.
for (int i = 3; i < serdeParams.separators.length; i++) {
  serdeParams.separators[i] = (byte) (i + 1);
}
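If you only need custom delimiters for the outer levels, those can still be set
in the table DDL; a minimal sketch, with a placeholder table name and delimiters
(anything nested more deeply falls back to the hard-coded defaults above):

CREATE TABLE nested_example (m MAP<STRING, MAP<STRING, STRING>>)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '|'
  COLLECTION ITEMS TERMINATED BY ','
  MAP KEYS TERMINATED BY '=';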
Thanks,
Aniket
On Wed, Feb 8, 2012 at 10:15 PM, Hao Cheng wrote:
> Hi,
>
> My
Thanks, all, for your replies.
My problem is solved using --hive-drop-import-delims.
Now I am getting the same count as in MS SQL Server.
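For reference, a sketch of the kind of Sqoop invocation this applies to
(connection string, credentials, and table name are placeholders):

sqoop import \
  --connect 'jdbc:sqlserver://host:1433;database=mydb' \
  --username user --password pass \
  --table tblName \
  --hive-import \
  --hive-drop-import-delims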
But I want to ask one more thing: if I continue to use the
--hive-drop-import-delims option every time for all tables (in the case of
sqoop-import-all-tables) whi
Hi,
My data contain some map-of-map structures with customized delimiters.
As per the Hive documentation, by default '\001' is the field separator and,
starting from '\002', every two consecutive characters are the delimiters of
one nesting level. My data do not follow this rule in terms of delimiters. I mostly just need
Hi, I met the same problem once; after I changed the number of imported
columns it worked fine. Sometimes blank rows are generated by Sqoop... I
do not actually know what the problem really is.
2012/2/9 Bhavesh Shah
> Hello All,
>
> I have imported nearly 10 tables into Hive from
Hi all,
Yesterday I installed Hive 0.8.1 on Hadoop 1.0.0, and it was fine.
But today I installed Hive 0.8.1 on HDFS 0.20.1, and it throws exceptions
when I execute the Hive shell:
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.hadoop.hive.cli.CliSessionState.setIsVerbose(Z)
Hello All,
I have imported nearly 10 tables into Hive from MS SQL Server. But when I
try to cross-check the records in Hive in one of the tables, I find
more records when I run the query (select count(*) from tblName;).
Then I dropped that table and imported it into Hive again. I have
Hi,
I'm trying to build the HiveODBC driver. The Hive source code base I'm
using is 0.8.0. I'm following the instructions from
https://cwiki.apache.org/Hive/hiveodbc.html. Basically, I had to build
thrift/thrift-fb303 on my own, and I'm running the ant build command as
ant compile
Hi Koert,
we have a similar situation and this is what we did.
In our case, the partitions correspond to dates. We also have multiple
external tables set up this way.
The upstream process updates a status file with the earliest and
latest date available. I scan the DFS for new partitions (scan prog
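A rough sketch of that scan, assuming a hypothetical table src partitioned by
dt with data under /data/src (names, paths, and date format are placeholders):

for dir in $(hadoop fs -ls /data/src | awk '{print $NF}' | grep 'dt='); do
  dt=${dir##*dt=}
  hive -e "ALTER TABLE src ADD PARTITION (dt='$dt') LOCATION '$dir'"
done

Partitions that are already registered will make the ALTER TABLE statement fail,
so either track which dates you have already added or, if your Hive version
supports it, use ADD IF NOT EXISTS PARTITION.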
Hi Koert,
That's because the Hive metastore doesn't know about the partitions you added. I
was in a similar situation, but I use Amazon EMR, and in their version of Hive
one can run the command "alter table src recover partitions", which goes through
the directory structure of the table (src, in this case)
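Spelled out at the Hive prompt, that is simply (src being the example table):

hive> alter table src recover partitions;

(Note that RECOVER PARTITIONS is an Amazon EMR extension, hence the ADD
PARTITION approaches suggested elsewhere in this thread for stock Hive.)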
Hi,
I'm trying to create a custom input format that will work with Hive version 0.7.0
& Hadoop 0.20.205 (the current Amazon EMR setup).
Attached are the dummy input format, record reader & input split I created.
Below are the steps I'm performing to try to make it work (without success), so if I'm
missing some
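For comparison, a minimal sketch of an input format that Hive 0.7 on Hadoop 0.20
will accept; note that Hive expects the old org.apache.hadoop.mapred API, not
org.apache.hadoop.mapreduce (the class name is a placeholder):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

public class DummyTextInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    // Delegate to the stock line reader; custom parsing logic would go here.
    return new LineRecordReader(job, (FileSplit) split);
  }
}

It can then be attached to a table with CREATE TABLE ... STORED AS INPUTFORMAT
'DummyTextInputFormat' OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'.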
Hi Koert
As you are creating directories/subdirectories using MapReduce jobs outside of Hive,
Hive is unaware of these subdirectories. There is no way in such cases other than an
ADD PARTITION DDL statement to register the directory as a Hive partition.
If you are using Oozie or a shell script to trigger your jobs, you can accom
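For a shell-triggered job, that can be as simple as appending one statement after
the MapReduce step (table name, partition column, and path are placeholders):

hive -e "ALTER TABLE mytable ADD PARTITION (partitionid='pv1') LOCATION '/data/mytable/partitionid=pv1'"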
Hello all,
We have an external partitioned table in Hive.
We add to this table by having MapReduce jobs (so not from Hive) create
new subdirectories with the right format (partitionid=partitionvalue).
However, Hive doesn't pick them up automatically. We have to go into the Hive
shell and run "alter
Hi all,
Today we found one of our Hive processes stuck; all the threads are
stopped at java.util.HashMap.getEntry(HashMap.java:347). Here's the stack (the
full stack log is attached):
"Thread-903" prio=10 tid=0x2aaab820 nid=0x2838 runnable
[0x4261b000]
java.lang.Th