Never mind, I found the error and it has nothing to do with Flink.
Sorry
On Tue, Mar 20, 2018 at 12:12 PM, karim amer wrote:
here is the output after fixing the Scala issues:
https://gist.github.com/karimamer/9e3bcf0a6d9110c01caa2ebd14aa7a8c
On Tue, Mar 20, 2018 at 11:39 AM, karim amer wrote:
Never mind, after importing

import org.apache.flink.api.scala._

these errors went away and I still have the original problem.
Sorry, my bad
On Tue, Mar 20, 2018 at 11:04 AM, karim amer wrote:
To clarify, should I file a bug report on sbt hiding the errors in the
previous email?
On Tue, Mar 20, 2018 at 9:44 AM, karim amer wrote:
After switching to Maven from sbt I got these errors:

Error:(63, 37) could not find implicit value for evidence parameter of type TypeInformation[...]
... Unspecified value parameter evidence$7.

val namedStream = dataStream.map((value: String) => {

Should I file a bug report?
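
For context, this is the standard Scala API compile error when no implicit TypeInformation is in scope; the import org.apache.flink.api.scala._ mentioned above is the usual fix. A minimal sketch of the pattern (the surrounding pipeline is assumed, not code from this thread):

import org.apache.flink.api.scala._ // provides the implicit TypeInformation (evidence$7)
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}

object EvidenceSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val dataStream: DataStream[String] = env.fromElements("1,1,Hi")

    // Without the wildcard import above, this map fails to compile with
    // "could not find implicit value for evidence parameter ...".
    val namedStream = dataStream.map((value: String) => {
      val fields = value.split(",")
      (fields(0).toInt, fields(1).toLong, fields(2))
    })

    namedStream.print()
    env.execute("evidence sketch")
  }
}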
On Tue, Mar 20, 2018 at 9:30 AM, karim amer wrote:
> Hi Fabian,
> Sorry if I confused you. The first error is from Nico's code, not my code
> or snippet.
>
> ... you would keep error message and code consistent.
> Otherwise it's not possible to figure out what's going on.
>
> Best, Fabian
>
> 2018-03-20 0:24 GMT+01:00 karim amer:
>
>> Hi Nico,
>>
>> I tried to reproduce your code, but registerDataStream keeps ...
>
> env.execute("this job")
> }
>
> def get3TupleDataStream(env: StreamExecutionEnvironment): DataStream[(Int, Long, String)] = {
>   val data = new mutable.MutableList[(Int, Long, String)]
>   data.+=((1, 1L, "Hi"))
>   data.+=((2, 2L, "Hello"))
>   ...
Hi There,
I am trying to write a CSV sink to disk but it's not getting written. I
think the file is getting overwritten or truncated once the stream process
finishes. Does anyone know why the file is getting overwritten or truncated,
and how I can fix this?
tableEnv.registerDataStream("table", w
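
The snippet above is cut off in the archive. For reference, a minimal Flink 1.4-era sketch of the usual Table API CSV sink pattern; the table name, schema, query, and output path below are assumptions, not details from this thread:

import org.apache.flink.core.fs.FileSystem.WriteMode
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.table.sinks.CsvTableSink

object CsvSinkSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    val stream: DataStream[(Int, Long, String)] =
      env.fromElements((1, 1L, "Hi"), (2, 2L, "Hello"))

    // "table" is an SQL keyword, so a non-reserved name is safer here.
    tableEnv.registerDataStream("myTable", stream, 'id, 'ts, 'name)

    val result = tableEnv.sqlQuery("SELECT id, name FROM myTable")

    // numFiles = 1 writes a single file instead of a directory of part files;
    // WriteMode.OVERWRITE replaces an existing file rather than failing.
    result.writeToSink(new CsvTableSink("/tmp/out.csv", ",", 1, WriteMode.OVERWRITE))

    env.execute("csv sink job")
  }
}

One common cause of empty or truncated output is that the CSV output format buffers and only reliably flushes on a clean job finish, so a job that is killed rather than finished can leave a partial file.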
Any Google or DuckDuckGo search puts the Flink 1.3 version of the docs at
the top of the results instead of 1.4 or latest.
Hi there,
I have a CSV file with the timestamp deconstructed into 3 fields, and I was
wondering what the best way is to specify that those 3 fields are the event
time. Should I extend CsvTableSource and do the preprocessing, or can
CsvTableSource.builder() handle it? Or is there a better way?
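
For what it's worth, one common pattern here (a sketch under assumptions: the year/month/day field layout, names, and path below are made up, since the thread doesn't show the schema) is to skip CsvTableSource, parse the file as a DataStream, combine the three fields into one epoch timestamp, assign it as the event time, and register the stream with a rowtime attribute:

import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

// One record per CSV line; the year/month/day split is an assumed layout.
case class Event(year: Int, month: Int, day: Int, value: String)

object EventTimeSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // Parse lines by hand, since the timestamp needs preprocessing
    // that CsvTableSource.builder() has no hook for.
    val events: DataStream[Event] = env
      .readTextFile("/tmp/input.csv")
      .map { line =>
        val f = line.split(",")
        Event(f(0).toInt, f(1).toInt, f(2).toInt, f(3))
      }

    // Combine the three fields into epoch millis and assign it as event time.
    val withTs = events.assignTimestampsAndWatermarks(
      new BoundedOutOfOrdernessTimestampExtractor[Event](Time.seconds(5)) {
        override def extractTimestamp(e: Event): Long =
          java.time.LocalDate.of(e.year, e.month, e.day)
            .atStartOfDay(java.time.ZoneOffset.UTC).toInstant.toEpochMilli
      })

    // 'ts.rowtime appends the assigned event time as a rowtime attribute.
    tableEnv.registerDataStream("events", withTs, 'year, 'month, 'day, 'value, 'ts.rowtime)
    // ... query the "events" table with windows over 'ts, add a sink, then env.execute(...)
  }
}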