Is there a reason you have more than 2 GB on a single line?

The Hadoop line-reading library we use has a 2 GB limit per line. The JVM
also can't hold a single String that long.
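As a workaround, a file holding one big JSON array can be rewritten into the one-object-per-line form that Spark SQL's json reader expects. A minimal Python sketch (file paths are placeholders; it parses the whole array in memory, so a multi-GB input would need a streaming JSON parser instead):

```python
import json

def array_to_json_lines(src_path, dst_path):
    # Parse the entire file as one JSON array (loads it all into memory).
    with open(src_path) as src:
        records = json.load(src)
    # Write one JSON object per line, so no line exceeds the 2 GB limit
    # as long as individual records stay small.
    with open(dst_path, "w") as dst:
        for record in records:
            dst.write(json.dumps(record) + "\n")
```

The resulting file can then be loaded with sqlContext.read.json as usual.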



On Wed, Aug 26, 2015 at 11:38 AM, gsvic <victora...@gmail.com> wrote:

> Yes, it contains one line
>
> On Wed, Aug 26, 2015 at 8:20 PM, Yin Huai-2 [via Apache Spark Developers
> List] wrote:
>
>> The JSON support in Spark SQL handles a file with one JSON object per
>> line or one JSON array of objects per line. What is the format of your
>> file? Does it contain only a single line?
>>
>> On Wed, Aug 26, 2015 at 6:47 AM, gsvic wrote:
>>
>>> Hi,
>>>
>>> I have the following issue. I am trying to load a 2.5 GB JSON file from
>>> a 10-node Hadoop cluster. Specifically, I am trying to create a
>>> DataFrame using
>>> sqlContext.read.json("hdfs://master:9000/path/file.json").
>>>
>>> The JSON file contains a parsed table (relation) from the TPC-H
>>> benchmark.
>>>
>>> After finishing some tasks, the job fails with several
>>> java.io.IOExceptions. For smaller files (e.g., 700 MB) it works fine. I
>>> am posting part of the log and the whole stack trace below:
>>>
>>> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 10.1 in stage 1.0 (TID 47, 192.168.5.146, ANY, 1416 bytes)
>>> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 11.1 in stage 1.0 (TID 48, 192.168.5.150, ANY, 1416 bytes)
>>> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 4.1 in stage 1.0 (TID 49, 192.168.5.149, ANY, 1416 bytes)
>>> 15/08/26 16:31:44 INFO TaskSetManager: Starting task 8.1 in stage 1.0 (TID 50, 192.168.5.246, ANY, 1416 bytes)
>>> 15/08/26 16:31:53 INFO TaskSetManager: Finished task 10.0 in stage 1.0 (TID 17) in 104681 ms on 192.168.5.243 (27/35)
>>> 15/08/26 16:31:53 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 15) in 105541 ms on 192.168.5.193 (28/35)
>>> 15/08/26 16:31:55 INFO TaskSetManager: Finished task 11.0 in stage 1.0 (TID 18) in 107122 ms on 192.168.5.167 (29/35)
>>> 15/08/26 16:31:57 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 12) in 109583 ms on 192.168.5.245 (30/35)
>>> 15/08/26 16:32:08 INFO TaskSetManager: Finished task 4.1 in stage 1.0 (TID 49) in 24135 ms on 192.168.5.149 (31/35)
>>> 15/08/26 16:32:13 WARN TaskSetManager: Lost task 2.0 in stage 1.0 (TID 9, 192.168.5.246): java.io.IOException: Too many bytes before newline: 2147483648
>>>
>>>         at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:249)
>>>         at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>>>         at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:134)
>>>         at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>>>         at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
>>>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
>>>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>>>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
>>>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>>>         at org.apache.spark.scheduler.Task.run(Task.scala:70)
>>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>         at java.lang.Thread.run(Thread.java:745)
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-developers-list.1001551.n3.nabble.com/SQLContext-read-json-path-throws-java-io-IOException-tp13841.html
>>> Sent from the Apache Spark Developers List mailing list archive at
>>> Nabble.com.
>>>
>>
>>
>
>
>
> --
> Victor Giannakouris - Salalidis
>
> Researcher
> Computing Systems Laboratory (CSLab),
> National Technical University of Athens
>
> Software Engineer
> isMOOD Data Technology Services
>
> LinkedIn:
> http://gr.linkedin.com/pub/victor-giannakouris-salalidis/69/585/b23/
>
>
