Thanks for reporting: https://issues.apache.org/jira/browse/SPARK-11032
You can probably work around this by aliasing the count and filtering on
that value afterwards.
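
A minimal sketch of that workaround in a pyspark shell (the ‘people’ table
and its columns come from the examples data; the query shape is illustrative,
not the exact failing query):

    # Alias the aggregate inside the subquery, then filter on the alias
    # in the outer query instead of repeating count(1) there.
    sqlContext.sql("""
        SELECT name, cnt FROM (
            SELECT name, count(1) AS cnt
            FROM people
            GROUP BY name
        ) t
        WHERE cnt > 0
    """).show()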
On Thu, Oct 8, 2015 at 8:47 PM, Jeff Thompson <
jeffreykeatingthomp...@gmail.com> wrote:
After upgrading from 1.4.1 to 1.5.1, I found some of my Spark SQL queries no
longer worked. The failures seem to be related to using count(1) or count(*)
in a nested query. I can reproduce the issue in a pyspark shell with the
sample code below. The ‘people’ table is from spark-1.5.1-bin-hadoop2.4/examples/
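
A minimal sketch of the kind of repro described, assuming the people.json
file shipped with the Spark examples and the sqlContext predefined by the
pyspark shell (not the reporter's original snippet):

    # Load the example data and register it as the 'people' table.
    df = sqlContext.read.json("examples/src/main/resources/people.json")
    df.registerTempTable("people")

    # count(1) in a HAVING clause over a nested query: a pattern of this
    # shape reportedly worked on 1.4.1 and fails on 1.5.1.
    sqlContext.sql("""
        SELECT MIN(t0.age)
        FROM (SELECT * FROM people WHERE age > 0) t0
        HAVING COUNT(1) > 0
    """).show()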