I now see that the test that fails for me already has this check, but I don't 
think it is implemented correctly.

https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/test/java/org/apache/flink/runtime/fs/hdfs/HdfsBehaviorTest.java
has a verifyOS() method, but a failed assumption in that method doesn't stop 
createHDFS() from executing, and createHDFS() itself has no check — so the test 
still fails.

I think we need to remove verifyOS() and move the check directly into 
createHDFS() instead.
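For illustration, a minimal JUnit 4 sketch of what I mean (the class and test names here are hypothetical, and the actual cluster setup is omitted — only Assume/@BeforeClass are the real mechanism):

```java
import static org.junit.Assume.assumeTrue;

import org.junit.BeforeClass;
import org.junit.Test;

// Illustrative sketch: with the assumption at the top of the @BeforeClass
// method itself, the expensive HDFS cluster setup below it is never reached
// on Windows, and JUnit reports the whole class as skipped instead of failed.
public class HdfsStyleTest {

    /** Pure helper so the OS check itself is easy to unit-test. */
    public static boolean isWindows(String osName) {
        return osName.toLowerCase().startsWith("windows");
    }

    @BeforeClass
    public static void createHDFS() throws Exception {
        // A failed assumption in @BeforeClass skips every test in the class.
        assumeTrue("HDFS tests cannot run on Windows.",
                !isWindows(System.getProperty("os.name")));
        // ... start the MiniDFSCluster here (omitted) ...
    }

    @Test
    public void testSomethingOnHdfs() {
        // Runs only when the assumption above held.
    }
}
```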


A similar problem may exist in:
flink-connectors\flink-connector-filesystem\src\test\java\org\apache\flink\streaming\connectors\fs\bucketing\BucketingSinkMigrationTest.java
flink-fs-tests\src\test\java\org\apache\flink\hdfstests\ContinuousFileProcessingMigrationTest.java
flink-fs-tests\src\test\java\org\apache\flink\hdfstests\HDFSTest.java

-----Original Message-----
From: Chesnay Schepler [mailto:ches...@apache.org] 
Sent: Tuesday, July 10, 2018 3:10 PM
To: dev@flink.apache.org; NEKRASSOV, ALEXEI <an4...@att.com>
Subject: Re: 'mvn verify' fails at flink-hadoop-fs

That flat-out disables all tests in the module, even those that could run on 
Windows.

We commonly add an OS check to respective tests that skip the tests, with an 
"Assume.assumeTrue(os!=windows)" statement in a "@BeforeClass" 
method.

On 10.07.2018 21:00, NEKRASSOV, ALEXEI wrote:
> I added the lines below to flink-hadoop-fs/pom.xml, and that allowed me to 
> turn off the tests that were failing for me.
> Do we want to add this change to master?
> If so, do I need to document this new switch somewhere?
>
> (
> the build then hangs for me at flink-runtime, but that's a different issue:
> Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.969 sec - in org.apache.flink.runtime.taskmanager.TaskManagerRegistrationTest
> )
>
>       <build>
>               <plugins>
>                       <plugin>
>                               <groupId>org.apache.maven.plugins</groupId>
>                               <artifactId>maven-surefire-plugin</artifactId>
>                               <configuration>
>                                       <skipTests>${skipHdfsTests}</skipTests>
>                               </configuration>
>                       </plugin>
>               </plugins>
>       </build>
>
> -----Original Message-----
> From: Chesnay Schepler [mailto:ches...@apache.org]
> Sent: Tuesday, July 10, 2018 10:36 AM
> To: dev@flink.apache.org; NEKRASSOV, ALEXEI <an4...@att.com>
> Subject: Re: 'mvn verify' fails at flink-hadoop-fs
>
> There's currently no workaround except going in and manually disabling them.
>
> On 10.07.2018 16:32, Chesnay Schepler wrote:
>> Generally, any test that uses HDFS will fail on Windows. We've 
>> disabled most of them, but some slip through from time to time.
>>
>> Note that we do not provide any guarantees for all tests passing on 
>> Windows.
>>
>> On 10.07.2018 16:28, NEKRASSOV, ALEXEI wrote:
>>> I'm running 'mvn clean verify' on Windows with no Hadoop libraries 
>>> installed, and the build fails (see below).
>>> What's the solution? Is there a switch to skip Hadoop-related tests?
>>> Or I need to install Hadoop libraries?
>>>
>>> Thanks,
>>> Alex
>>>
>>>
>>> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.726 sec <<< FAILURE! - in org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest
>>> org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest  Time elapsed: 1.726 sec  <<< ERROR!
>>> java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
>>>         at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
>>>         at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:570)
>>>         at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:996)
>>>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:484)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:293)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:891)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:503)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:559)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:724)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:708)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1358)
>>>         at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:996)
>>>         at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:867)
>>>         at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:702)
>>>         at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
>>>         at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
>>>         at org.apache.flink.runtime.fs.hdfs.HdfsBehaviorTest.createHDFS(HdfsBehaviorTest.java:65)
>>>
>>> Running org.apache.flink.runtime.fs.hdfs.HdfsKindTest
>>> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in org.apache.flink.runtime.fs.hdfs.HdfsKindTest
>>> Running org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
>>> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in org.apache.flink.runtime.fs.hdfs.LimitedConnectionsConfigurationTest
>>>
>>> Results :
>>>
>>> Tests in error:
>>>     HdfsBehaviorTest.createHDFS:65 » UnsatisfiedLink org.apache.hadoop.io.nativeio...
>>>
>>> Tests run: 24, Failures: 0, Errors: 1, Skipped: 1
>>>
>>> [INFO]
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] Reactor Summary:
>>> [INFO]
>>> [INFO] force-shading ...................................... SUCCESS [  2.335 s]
>>> [INFO] flink .............................................. SUCCESS [ 29.794 s]
>>> [INFO] flink-annotations .................................. SUCCESS [  2.198 s]
>>> [INFO] flink-shaded-hadoop ................................ SUCCESS [  0.226 s]
>>> [INFO] flink-shaded-hadoop2 ............................... SUCCESS [ 11.015 s]
>>> [INFO] flink-shaded-hadoop2-uber .......................... SUCCESS [ 16.343 s]
>>> [INFO] flink-shaded-yarn-tests ............................ SUCCESS [ 13.653 s]
>>> [INFO] flink-shaded-curator ............................... SUCCESS [  1.386 s]
>>> [INFO] flink-test-utils-parent ............................ SUCCESS [  0.191 s]
>>> [INFO] flink-test-utils-junit ............................. SUCCESS [  3.318 s]
>>> [INFO] flink-metrics ...................................... SUCCESS [  0.212 s]
>>> [INFO] flink-metrics-core ................................. SUCCESS [  3.502 s]
>>> [INFO] flink-core ......................................... SUCCESS [01:30 min]
>>> [INFO] flink-java ......................................... SUCCESS [01:31 min]
>>> [INFO] flink-queryable-state .............................. SUCCESS [  0.186 s]
>>> [INFO] flink-queryable-state-client-java .................. SUCCESS [  4.099 s]
>>> [INFO] flink-filesystems .................................. SUCCESS [  0.198 s]
>>> [INFO] flink-hadoop-fs .................................... FAILURE [  8.672 s]
>>>
>>>
>>>
>>>
>>
>
