We implemented our approach in Perl and used Hadoop (version 0.20.1), an open-source implementation of MapReduce.
All experiments were carried out on a local cluster of five nodes, each equipped with a quad-core AMD Opteron(TM) 6234 processor running at 2.40 GHz and 4 GB of memory.
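Since Hadoop's native API is Java, Perl mappers and reducers are typically run through Hadoop Streaming. The job-submission sketch below illustrates how such a Perl job could be launched under Hadoop 0.20.x; the script names and HDFS paths are hypothetical placeholders, not taken from the paper:

```shell
# Submit a Perl mapper/reducer pair through Hadoop Streaming (Hadoop 0.20.x).
# mapper.pl, reducer.pl, and the HDFS paths are illustrative placeholders.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-0.20.1-streaming.jar \
  -input  /user/hadoop/graphs/input \
  -output /user/hadoop/graphs/output \
  -mapper  mapper.pl \
  -reducer reducer.pl \
  -file mapper.pl \
  -file reducer.pl
```

The `-file` options ship the local Perl scripts to every worker node so that each map and reduce task can execute them.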
Effect of the partitioning method on the rate of lost subgraphs
Figure: (a) Cost of the DGP and MRGP partitioning methods; (b) effect of the number of buckets on the cost of the DGP method; (c) effect of the number of workers on the runtime of our MapReduce-based framework.
Chunk size and replication factor
Figure: (a) Effect of the chunk size on the runtime of our MapReduce-based framework; (b) effect of the number of data copies (replication factor) on the runtime of our MapReduce-based framework.
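In Hadoop 0.20.x, the chunk (block) size and the number of data copies are controlled by the `dfs.block.size` and `dfs.replication` HDFS properties. The configuration sketch below shows one way these parameters could be varied between runs; the paths and values are illustrative, not the ones used in the experiments:

```shell
# Upload the input with a 128 MB block (chunk) size for this run;
# dfs.block.size is given in bytes (128 * 1024 * 1024 = 134217728).
hadoop fs -D dfs.block.size=134217728 -put input.txt /user/hadoop/graphs/input

# Set the replication factor (number of copies) of the stored data to 2
# and wait (-w) until re-replication completes.
hadoop fs -setrep -w 2 /user/hadoop/graphs/input
```

Because the block size is fixed at write time, each chunk-size experiment requires re-uploading the data, whereas the replication factor can be changed in place with `setrep`.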