Sorry if this is a novice question, but I can't figure out how to extract
the weights vector from a multiple linear regression model. I can
fit/predict, but I can't get the weight vector.
Any advice would be appreciated (even snide "go read the docs" comments, so
long as they point me to the applicable docs).
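In case it helps later readers: with FlinkML's MultipleLinearRegression (a Scala API), the fitted parameters are kept in its weightsOption field, as far as I know. A minimal sketch, with made-up training data:

import org.apache.flink.api.scala._
import org.apache.flink.ml.common.LabeledVector
import org.apache.flink.ml.math.DenseVector
import org.apache.flink.ml.regression.MultipleLinearRegression

val env = ExecutionEnvironment.getExecutionEnvironment

// toy training set: label plus feature vector
val training = env.fromElements(
  LabeledVector(1.0, DenseVector(1.0, 2.0)),
  LabeledVector(2.0, DenseVector(2.0, 4.0)))

val mlr = MultipleLinearRegression().setIterations(10)
mlr.fit(training)

// the learned weights (plus intercept) are a DataSet[WeightVector]
mlr.weightsOption.get.collect().foreach(println)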
ok, thanks! :)
I will try that!
> On 07.10.2015 at 21:35, Lydia Ickler wrote:
>
> Hi,
>
> stupid question: Why is this not saved to a file?
> I want to transform an array to a DataSet but the graph stops at collect().
>
> // Transform Spectrum to DataSet
> List<Tuple2<Integer, double[]>> dataList = new LinkedList<Tuple2<Integer, double[]>>();
Hi, as far as I know only collect(), print(), and execute() actually trigger
the execution. What you're missing is a call to env.execute() after the
writeAsCsv call.
Hope this helps.
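For completeness, a minimal sketch of the fix (Scala API here, but the Java API is analogous; the data is made up):

import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val data = env.fromElements((1, 2.0), (2, 3.5))

// writeAsCsv only declares a sink; nothing has run yet
data.writeAsCsv("file:///tmp/spectrum.csv")

// this call actually triggers the job and materializes the file
env.execute("write spectrum")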
On Wed, Oct 7, 2015 at 9:35 PM, Lydia Ickler wrote:
> Hi,
>
> stupid question: Why is this not saved to a file?
> I want to transform an array to a DataSet but the graph stops at collect().
Hi,
stupid question: Why is this not saved to a file?
I want to transform an array to a DataSet but the graph stops at collect().

// Transform Spectrum to DataSet
// (the generic types were eaten by the archive; Tuple2<Integer, double[]> is a guess)
List<Tuple2<Integer, double[]>> dataList = new LinkedList<Tuple2<Integer, double[]>>();
double[][] arr = filteredSpectrum.getAs2DDoubleArray();
for (int i = 0; i < arr.length; i++) {
    dataList.add(new Tuple2<Integer, double[]>(i, arr[i]));
}
Hi,
yes, once this PR is merged https://github.com/apache/flink/pull/1238 you
can switch between time characteristics and also use the aggregation
functions such as sum(...). I'm hoping to merge it by tonight. The tests
are still running right now. :D
Cheers,
Aljoscha
On Wed, 7 Oct 2015 at 17:
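To illustrate what that switch looks like once the PR is in (a sketch in the Scala API, assuming the post-merge method names; with IngestionTime the runtime attaches timestamps automatically, while EventTime additionally needs a timestamped source):

import java.util.concurrent.TimeUnit
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment
// changing this one line switches between processing, ingestion, and event time
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime)

env.fromElements(("a", 1), ("b", 2), ("a", 3))
  .keyBy(0)
  .timeWindow(Time.of(5, TimeUnit.SECONDS))
  .sum(1)
  .print()

env.execute("windowed sum")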
Thanks!
This works, with the exception that I have to use the reduceWindow() method
when summing up the contents of the window.
There still seems to be some work to do.
With the finished API, will I be able to switch from event-time to
processing- or ingestion-time without having to adjust my code?
Hi,
right now, the 0.10-SNAPSHOT is in a bit of a weird state. We still have
the old windowing API in there alongside the new one. To make your example
use the new API, which actually uses the timestamps and watermarks, you would
use the following code:
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
Hi guys,
I'm trying to use the event-time windowing feature, but the windowing does
not work as expected.
What I've been doing is to write my own source, which implements the
EventTimeSourceFunction interface and uses the collectWithTimestamp method.
Additionally, I'm emitting a watermark after each element.
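For reference, this is roughly what such a source looks like (a sketch against the EventTimeSourceFunction interface mentioned above, as it existed in the 0.10-SNAPSHOT; names may have shifted since):

import org.apache.flink.streaming.api.functions.source.EventTimeSourceFunction
import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext
import org.apache.flink.streaming.api.watermark.Watermark

class TimestampedSource extends EventTimeSourceFunction[(String, Int)] {
  @volatile private var running = true

  override def run(ctx: SourceContext[(String, Int)]): Unit = {
    var ts = 0L
    while (running && ts < 100) {
      // attach the event timestamp to the element itself
      ctx.collectWithTimestamp(("event", 1), ts)
      // promise that no element with a timestamp <= ts will follow
      ctx.emitWatermark(new Watermark(ts))
      ts += 1
    }
  }

  override def cancel(): Unit = { running = false }
}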
I've tried to split my huge file by line count (using the bash command
split -l) in 2 different ways:
1. small line count (huge number of small files)
2. big line count (small number of big files)
I can't understand why the time required to effectively start the job is
more or less the same in both cases.
The split functionality is in the FileInputFormat, and the functionality
that takes care of lines across splits is in the DelimitedInputFormat.
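If you want to see the split policy in action, you can ask the format for its splits directly (a sketch, Scala calling the Java classes, with a made-up path):

import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.configuration.Configuration
import org.apache.flink.core.fs.Path

val format = new TextInputFormat(new Path("file:///tmp/huge.txt"))
format.configure(new Configuration())

// ask for at least 4 splits; the boundaries are byte ranges, and the
// DelimitedInputFormat re-aligns them to record delimiters at read time
val splits = format.createInputSplits(4)
splits.foreach(s => println(s"start=${s.getStart} length=${s.getLength}"))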
On Wed, Oct 7, 2015 at 3:24 PM, Fabian Hueske wrote:
> I'm sorry there is no such documentation.
> You need to look at the code :-(
>
> 2015-10-07 15:19
I'm sorry there is no such documentation.
You need to look at the code :-(
2015-10-07 15:19 GMT+02:00 Flavio Pompermaier :
> And what is the split policy for the FileInputFormat? Does it depend on the
> fs block size?
> Is there a pointer to the various Flink input formats and a description of
> their internals?
And what is the split policy for the FileInputFormat? Does it depend on the
fs block size?
Is there a pointer to the various Flink input formats and a description of
their internals?
On Wed, Oct 7, 2015 at 3:09 PM, Fabian Hueske wrote:
> Hi Flavio,
>
> it is not possible to split by line count because that would mean reading
> and parsing the file just to split it.
Hi Flavio,
it is not possible to split by line count because that would mean reading
and parsing the file just to split it.
Parallel processing of data sources depends on the input splits created by
the InputFormat. Local files can be split just like files in HDFS. Usually,
each file corresponds to at least one input split.
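So in practice you don't split by lines yourself; you raise the parallelism and let the format create byte-range splits (a sketch, Scala API, made-up path):

import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4) // four source tasks, each consuming one or more splits

// the file is divided into byte ranges, not line counts; records that
// cross a split boundary are still read exactly once
val lines = env.readTextFile("file:///tmp/huge.txt")
lines.map(_.toUpperCase).first(5).print()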
Hi to all,
is there a way to split a single local file by line count (e.g. a split
every 100 lines) in a LocalEnvironment to speed up a simple map function?
It is not very clear to me how local files (the files inside a directory if
recursive=true) are managed by Flink... Is there any reference to these
internals?
Perhaps we can put our hands on it during Flink Forward. :-D I have updated
the ticket description after finding out that the issue is triggered by
performing a join just after the cross. See you in Berlin!
Regards,
Stefano
2015-10-06 9:39 GMT+02:00 Till Rohrmann :
> Hi Stefano,
>
> we'll definitely look into it.