> On Oct. 23, 2018, 7:50 p.m., Sahil Takiar wrote:
> > ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java
> > Line 49 (original), 52 (patched)
> > <https://reviews.apache.org/r/69107/diff/1/?file=2101701#file2101701line52>
> >
> >     i think volatile long is sufficient here and is probably cheaper. atomics might be expensive when done per row

I first used volatile, but replaced it with AtomicLong because rowNumber needs to be
incremented, and rowNumber++ on a volatile variable is not an atomic operation: it is a
read-modify-write, so concurrent increments can be lost. What do you think?
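
As a minimal standalone illustration (not part of this patch; the class and counter names below are made up), this sketch shows the lost-update problem with ++ on a volatile long versus AtomicLong.incrementAndGet():

    import java.util.concurrent.atomic.AtomicLong;

    // Two threads increment both counters 100,000 times each. The volatile
    // counter usually ends up below 200,000 because ++ is a non-atomic
    // read-modify-write; the AtomicLong always reaches exactly 200,000.
    public class VolatileVsAtomicDemo {
        private static volatile long volatileCounter = 0;
        private static final AtomicLong atomicCounter = new AtomicLong();

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    volatileCounter++;               // lost updates possible
                    atomicCounter.incrementAndGet(); // atomic increment
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("volatile counter: " + volatileCounter);
            System.out.println("atomic counter:   " + atomicCounter.get());
        }
    }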


- Bharathkrishna


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/69107/#review209935
-----------------------------------------------------------


On Oct. 20, 2018, 7:13 p.m., Bharathkrishna Guruvayoor Murali wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/69107/
> -----------------------------------------------------------
> 
> (Updated Oct. 20, 2018, 7:13 p.m.)
> 
> 
> Review request for hive, Antal Sinkovits, Sahil Takiar, and Vihang 
> Karajgaonkar.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Improve record and memory usage logging in SparkRecordHandler
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java 88dd12c05ade417aca4cdaece4448d31d4e1d65f
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMergeFileRecordHandler.java 8880bb604e088755dcfb0bcb39689702fab0cb77
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java cb5bd7ada2d5ad4f1f654cf80ddaf4504be5d035
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkReduceRecordHandler.java 20e7ea0f4e8d4ff79dddeaab0406fc7350d22bd7
> 
> 
> Diff: https://reviews.apache.org/r/69107/diff/1/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Bharathkrishna Guruvayoor Murali
> 
>
