Hadoop – Exception in thread "main" java.lang.NullPointerException

I'm trying to use Apache Hadoop for Windows Platform by following this tutorial (the Eclipse part): http://www.codeproject.com/Articles/757934/Apache-Hadoop-for-Windows-Platform?fid=1858035. Everything worked fine until the last step. When I run the program I get:

    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Exception in thread "main" java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:631)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
        at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:277)
        at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Unknown Source)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
        at Recipe.main(Recipe.java:82)

The code is:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    import com.google.gson.Gson;

    public class Recipe {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();
            Gson gson = new Gson();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
                Roo roo = gson.fromJson(value.toString(), Roo.class);
                if (roo.cookTime != null) {
                    word.set(roo.cookTime);
                } else {
                    word.set("none");
                }
                context.write(word, one);
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            @SuppressWarnings("deprecation")
            Job job = new Job(conf, "Recipe");
            job.setJarByClass(Recipe.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            //FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            //FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
            FileInputFormat.addInputPath(job, new Path("hdfs://127.0.0.1:9000/in"));
            FileOutputFormat.setOutputPath(job, new Path("hdfs://127.0.0.1:9000/output"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
            // job.submit();
        }
    }

    class Id {
        public String oid;
    }

    class Ts {
        public long date;
    }

    class Roo {
        public Id _id;
        public String name;
        public String ingredients;
        public String url;
        public String image;
        public Ts ts;
        public String cookTime;
        public String source;
        public String recipeYield;
        public String datePublished;
        public String prepTime;
        public String description;
    }

This only happens when I try to run it from Eclipse. From the command line (CMD) it works fine:

    javac -classpath C:\hadoop-2.3\share\hadoop\common\hadoop-common-2.3.0.jar;C:\hadoop-2.3\share\hadoop\common\lib\gson-2.2.4.jar;C:\hadoop-2.3\share\hadoop\common\lib\commons-cli-1.2.jar;C:\hadoop-2.3\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.3.0.jar Recipe.java
    jar -cvf Recipe.jar *.class
    hadoop jar c:\Hwork\Recipe.jar Recipe /in /out

Any idea how I can solve this?

I fixed it with help from http://qnalist.com/questions/4994960/run-spark-unit-test-on-windows-7, which describes the same problem and a workaround.

The workaround is:

  1. Download the compiled winutils.exe from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master
  2. Extract this archive directly into d:\winutil; d:\winutil\bin should be created.
  3. Add System.setProperty("hadoop.home.dir", "d:\\winutil\\"); to the code before the Job is instantiated, e.g. as the first line of the main method (see the sketch below).
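
For illustration, here is a minimal sketch (not from the original post) of how the beginning of Recipe.main above would look with the workaround applied, assuming the archive was extracted to d:\winutil as in step 2:

    public static void main(String[] args) throws Exception {
        // Point Hadoop at the directory containing bin\winutils.exe. This must
        // run before any Hadoop class is used, because in Hadoop 2.x the Shell
        // class resolves the winutils.exe path in a static initializer.
        System.setProperty("hadoop.home.dir", "d:\\winutil\\");

        Configuration conf = new Configuration();
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "Recipe");
        // ... the rest of the job setup stays exactly as in the code above ...
    }

Setting the HADOOP_HOME environment variable to d:\winutil in the Eclipse run configuration should have the same effect, since Shell falls back to HADOOP_HOME when the hadoop.home.dir system property is not set; that is also why the job runs fine from CMD, where HADOOP_HOME is already configured.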