Hi
The na functions replace nulls with some default value, and not all my
columns are of type string, so for the other data types (long/int, etc.) I
would have to provide some default value.
But ideally those values should stay null.
Actually, the null column drop is happening in this step:
df.selectExpr( "
See also here:
https://stackoverflow.com/questions/44671597/how-to-replace-null-values-with-a-specific-value-in-dataframe-using-spark-in-jav
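For context, here is a minimal sketch of the pattern I believe triggers this (the column names and the to_json(struct(*)) shape are illustrative, not the exact code): when the Kafka value is built with to_json, fields that are null are omitted from the generated JSON, so a column that is null on every row disappears from the payload entirely.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical frame where column "b" is null on every row.
val df = Seq(("x", None: Option[Long]), ("y", None: Option[Long])).toDF("a", "b")

// to_json drops null fields, so "b" never appears in the serialized value.
df.selectExpr("to_json(struct(*)) AS value").show(false)
```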
On Mon, Apr 29, 2019 at 5:27 PM Jason Nerothin wrote:
Spark SQL has had an na.fill function on it since at least 2.1. Would that
work for you?
https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/sql/DataFrameNaFunctions.html
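A sketch of how that could look with per-column defaults (the column names and default values below are made up for illustration); fill accepts a Map, so string and numeric columns can each get their own placeholder:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((null.asInstanceOf[String], None: Option[Long]))
  .toDF("name", "count")

// Per-column defaults keyed by column name; types must match the column type.
val filled = df.na.fill(Map("name" -> "unknown", "count" -> 0L))
```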
On Mon, Apr 29, 2019 at 4:57 PM Shixiong(Ryan) Zhu wrote:
Hey Snehasish,
Do you have a reproducer for this issue?
Best Regards,
Ryan
On Wed, Apr 24, 2019 at 7:24 AM SNEHASISH DUTTA wrote:
Hi,
While writing to Kafka using Spark Structured Streaming, if all the values
in a certain column are null, the column gets dropped.
Is there any way to override this, other than using the na.fill functions?
Regards,
Snehasish