Internet of Things and Cloud Computing

Special Issue

Hadoop MapReduce

  • Submission Deadline: Dec. 30, 2015
  • Status: Submission Closed
  • Lead Guest Editor: Seyed Reza Pakize
About This Special Issue
Hadoop is a Java-based programming framework that supports the storage and processing of large data sets in a distributed computing environment, making it well suited to high volumes of data. It uses HDFS to store data and MapReduce to process it. MapReduce is a popular programming model for data-intensive applications on shared-nothing clusters; its main objective is to parallelize job execution across multiple nodes. In recent years, much of the attention of researchers and companies has turned to Hadoop, and many scheduling algorithms have been proposed as a result. Three scheduling issues are particularly important in MapReduce: locality, synchronization, and fairness. The most common objective of a scheduling algorithm is to minimize the completion time of a parallel application while also addressing these issues. This special issue focuses on new scheduling algorithms for Hadoop MapReduce, on scheduling issues, and on new trends in Hadoop.
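To make the programming model concrete, the canonical WordCount example below (adapted from the Apache Hadoop MapReduce tutorial) shows the two phases a scheduler must place: map tasks, which the framework prefers to run on nodes holding the corresponding HDFS blocks (locality), and reduce tasks, which can only complete once the map output they depend on is available (synchronization). The input and output paths are placeholders supplied on the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: runs in parallel, one task per HDFS input split,
      // and emits a (word, 1) pair for every token it sees.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }

      // Reduce phase: receives all counts for one word (gathered by the
      // framework's shuffle) and sums them into the final total.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

A job like this is packaged into a jar and submitted with, for example, hadoop jar wordcount.jar WordCount /in /out, where the jar name and paths are hypothetical; how the resulting map and reduce tasks are assigned to cluster nodes is exactly the scheduling problem this special issue addresses.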

Aims and Scope:

1. Hadoop
2. Hadoop MapReduce issues
3. Scheduling algorithms for MapReduce
4. Hadoop problems
5. Benefits and risks of using Hadoop
6. Hadoop and cloud services
Lead Guest Editor
  • Seyed Reza Pakize

    Department of Computer, Islamic Azad University, Yazd, Iran

Guest Editors
  • Mandana Abdollahzade Tanourjeh

    Computer Department, Imamreza International University, Torbat Heydarie, Iran