[ https://issues.apache.org/jira/browse/FLINK-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15263021#comment-15263021 ]
ASF GitHub Bot commented on FLINK-2259:
---------------------------------------

Github user rawkintrevo commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1898#discussion_r61504020

--- Diff: flink-libraries/flink-ml/src/test/scala/org/apache/flink/ml/preprocessing/SplitterITSuite.scala ---
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.ml.preprocessing
+
+import org.apache.flink.api.scala.ExecutionEnvironment
+import org.apache.flink.api.scala._
+import org.apache.flink.test.util.FlinkTestBase
+import org.scalatest.{Matchers, FlatSpec}
+import org.apache.flink.ml.math.Vector
+import org.apache.flink.api.scala.utils._
+
+
+class SplitterITSuite extends FlatSpec
+  with Matchers
+  with FlinkTestBase {
+
+  behavior of "Flink's DataSet Splitter"
+
+  import MinMaxScalerData._
+
+  it should "result in datasets with no elements in common and all elements used" in {
+    val env = ExecutionEnvironment.getExecutionEnvironment
+
+    val dataSet = env.fromCollection(data)
+
+    val splitDataSets = Splitter.randomSplit(dataSet.zipWithIndex, 0.5)
+
+    (splitDataSets(0).count() + splitDataSets(1).count()) should equal(dataSet.count())
+
+    splitDataSets(0).join(splitDataSets(1)).where(0).equalTo(0).count() should equal(0)
+  }
+
+  it should "result in datasets of an expected size when precise" in {
+    val env = ExecutionEnvironment.getExecutionEnvironment
+
+    val dataSet = env.fromCollection(data)
+
+    val splitDataSets = Splitter.randomSplit(dataSet, 0.5)
+
+    val expectedLength = dataSet.count().toDouble * 0.5
+
+    splitDataSets(0).count().toDouble should equal(expectedLength +- 5.0)
--- End diff --

can and statistically will. removing

> Support training Estimators using a (train, validation, test) split of the
> available data
> -----------------------------------------------------------------------------------------
>
>                 Key: FLINK-2259
>                 URL: https://issues.apache.org/jira/browse/FLINK-2259
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Theodore Vasiloudis
>            Assignee: Trevor Grant
>            Priority: Minor
>              Labels: ML
>
> When there is an abundance of data available, a good way to train models is
> to split the available data into 3 parts: Train, Validation and Test.
> We use the Train data to train the model, the Validation part is used to
> estimate the test error and select hyperparameters, and the Test is used to
> evaluate the performance of the final model and assess its generalization [1].
> This is a common approach when training Artificial Neural Networks, and a
> good strategy to choose in data-rich environments. Therefore we should have
> some support for this data-analysis process in our Estimators.
> [1] Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. The Elements of
> Statistical Learning. Vol. 1. Springer Series in Statistics. Berlin: Springer, 2001.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
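
For context on the removed assertion: the fraction passed to randomSplit is only an expected proportion, so the resulting counts fluctuate from run to run. For example, if the test data held 100 elements, the size of one half would be roughly Binomial(100, 0.5) with a standard deviation of 5, so a fixed +-5 tolerance would fail in a sizable share of runs — hence "can and statistically will".

The sketch below illustrates the (train, validation, test) workflow described in the issue by composing two calls to Splitter.randomSplit, the only API exercised in the test above. It assumes randomSplit(input, fraction) returns two DataSets indexed 0 and 1, as in the test; the helper name trainValidationTestSplit and the 60/20/20 fractions are illustrative assumptions, not part of this pull request.

import scala.reflect.ClassTag

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.scala._
import org.apache.flink.ml.preprocessing.Splitter

object TrainValidationTestSketch {

  // Hypothetical helper (not part of the PR): build a (train, validation, test)
  // split by calling Splitter.randomSplit twice. Fractions are expected values
  // only; randomSplit samples elements, so actual counts vary run to run.
  def trainValidationTestSplit[T: TypeInformation: ClassTag](
      input: DataSet[T],
      trainFraction: Double = 0.6,
      validationFraction: Double = 0.2): (DataSet[T], DataSet[T], DataSet[T]) = {

    // First pass: carve off the training portion.
    val firstSplit = Splitter.randomSplit(input, trainFraction)
    val train = firstSplit(0)
    val rest = firstSplit(1)

    // Second pass: split the remainder into validation and test. The fraction is
    // rescaled because `rest` only holds (1 - trainFraction) of the data.
    val secondSplit = Splitter.randomSplit(rest, validationFraction / (1.0 - trainFraction))

    (train, secondSplit(0), secondSplit(1))
  }

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    val data = env.fromCollection(1 to 1000)
    val (train, validation, test) = trainValidationTestSplit(data)

    // Expected sizes are roughly 600 / 200 / 200, but only in expectation.
    println(s"train=${train.count()} validation=${validation.count()} test=${test.count()}")
  }
}

Rescaling the second fraction by (1 - trainFraction) keeps the expected sizes proportional to the requested 60/20/20 split; any assertion on the resulting counts should still allow for the statistical variation noted in the review comment.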