Github user sryza commented on a diff in the pull request:

    https://github.com/apache/spark/pull/102#discussion_r10460757
  
    --- Diff: project/SparkBuild.scala ---
    @@ -236,7 +236,8 @@ object SparkBuild extends Build {
             "com.novocode"      % "junit-interface" % "0.10"   % "test",
             "org.easymock"      % "easymock"        % "3.1"    % "test",
             "org.mockito"       % "mockito-all"     % "1.8.5"  % "test",
    -        "commons-io"        % "commons-io"      % "2.4"    % "test"
    +        "commons-io"        % "commons-io"      % "2.4"    % "test",
    +        "org.apache.hadoop" % hadoopClient       % hadoopVersion % hadoopScope excludeAll(excludeNetty, excludeAsm, excludeCommonsLogging, excludeSLF4J, excludeOldAsm)
    --- End diff ---
    
    When I try adding Hadoop to only those ones, I get a bunch of errors like these:
    
    [error] /home/sandy/spark/spark/bagel/src/main/scala/org/apache/spark/bagel/Bagel.scala:197: bad symbolic reference. A signature in SparkContext.class refers to term io
    [error] in package org.apache.hadoop which is not available.
    [error] It may be completely missing from the current classpath, or the version on
    [error] the classpath might be incompatible with the version used when compiling SparkContext.class.
    [error]       sc, vertices, messages, new DefaultCombiner(), None, part, numPartitions, storageLevel)(
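    
    In other words, any subproject that compiles against SparkContext.class (bagel here) needs the Hadoop classes on its compile classpath too. A minimal sketch of what that looks like, assuming the `hadoopClient`, `hadoopVersion`, `hadoopScope`, and `exclude*` vals already defined in SparkBuild.scala, would be to put the dependency in the settings shared by all subprojects rather than in a single module:
    
    ```scala
    // Sketch only, not the final patch: hoist the Hadoop client into the
    // shared settings so every subproject compiling against SparkContext
    // can resolve org.apache.hadoop.io.* at compile time.
    // hadoopClient, hadoopVersion, hadoopScope, and the exclude* vals are
    // the ones this build file already defines.
    libraryDependencies += "org.apache.hadoop" % hadoopClient % hadoopVersion % hadoopScope excludeAll(
      excludeNetty, excludeAsm, excludeCommonsLogging, excludeSLF4J, excludeOldAsm
    )
    ```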


