Hello all, I ran into a problem while using Hive and would like to ask for your help. The details are below.
Platform: hive on spark
Error: java.io.IOException: Too many bytes before delimiter: 2147483648
Description: When testing with small files in the same format there is no problem, but when running against the full data set (about 140 GB) the error above is reported.
Table creation statement:
DROP TABLE IF EXISTS test;
CREATE EXTERNAL TABLE test (
     json string
     )
     ROW FORMAT DELIMITED
     FIELDS TERMINATED BY '\r\n'
     STORED AS TEXTFILE
LOCATION 'hdfs:/userdata/test';
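In case it is relevant: my understanding is that FIELDS TERMINATED BY accepts only a single character, so the two-character '\r\n' may effectively be ignored, and the reader then scans for a line delimiter it never finds until it passes the 2 GB (Integer.MAX_VALUE) limit. A variant I am considering trying (unverified on my setup; textinputformat.record.delimiter is the Hadoop TextInputFormat property for custom record delimiters) is:

SET textinputformat.record.delimiter='\r\n';
DROP TABLE IF EXISTS test;
CREATE EXTERNAL TABLE test (
     json string
     )
     STORED AS TEXTFILE
LOCATION 'hdfs:/userdata/test';

That is, treat '\r\n' as the record (row) delimiter at the input-format level rather than as a field delimiter, since the table has only one string column anyway.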


The big data platform I use is TDH Community Edition 5.0.2.
Any help is much appreciated!