Amdahl’s Law

Amdahl’s Law provides a formula for finding the maximum improvement in the performance of an overall system when only a part of the system is improved. Amdahl’s Law is named after Gene Amdahl, www.computer.org/portal/web/awards/amdahl, a well-known computer architect who contributed to the design of the IBM mainframes.

Amdahl’s Law can succinctly be explained using a simple example. Say you have a process that runs for 5 hours and this process can be divided into sub-tasks that can be parallelized. Assume that you can parallelize all but a small part of the program, which takes 25 minutes to run. Then this part of the program, the one that takes 25 minutes to complete, ends up defining the best overall speedup the program can achieve. Essentially, the serial part of the program limits the performance.

In mathematical terms this example could be seen as follows:
» Total time taken for the program to run: 5 hours (300 minutes)
» Time taken for the serial part of the program: 25 minutes
» Percentage of the overall program that can be parallelized: ~91.7
» Percentage that cannot be parallelized (or is serial in nature): ~8.3
» Therefore, the maximum speedup of the parallelized version compared to the nonparallelized version is 1 / (1 – 0.917) ≈ 12
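The arithmetic above can be sketched in a few lines of Python (a minimal illustration, not from the original text):

```python
# Worked check of the example: a 300-minute job with a
# 25-minute serial portion caps the achievable speedup.
total_minutes = 300
serial_minutes = 25

serial_fraction = serial_minutes / total_minutes   # ~0.083
parallel_fraction = 1 - serial_fraction            # ~0.917

# With unlimited parallelism the parallel part's time goes to zero,
# so the maximum speedup is 1 / serial_fraction.
max_speedup = 1 / serial_fraction

print(f"parallel fraction: {parallel_fraction:.1%}")  # 91.7%
print(f"maximum speedup:   {max_speedup:.1f}x")       # 12.0x
```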

In other words, the completely parallelized version could be almost 12 times faster than the nonparallel version of the same program. Amdahl’s Law generalizes this calculation of speedup in an equation, which is as follows:

1 / ((1 – P) + P/S)

where P represents the proportion of the program that is parallelized and S the factor by which the parallelized part is sped up compared to its serial execution.
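The formula is easy to turn into a small helper function. The sketch below (hedged: the function name and the sample values of S are mine, not from the text) shows how the speedup of the earlier example grows with S and approaches the 12x ceiling:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the program is
    parallelized and that fraction runs s times faster."""
    return 1 / ((1 - p) + p / s)

p = 1 - 25 / 300                # ~91.7% parallelizable, as above

print(amdahl_speedup(p, 2))     # ~1.85x with a 2x-faster parallel part
print(amdahl_speedup(p, 12))    # ~6.26x with a 12x-faster parallel part
print(amdahl_speedup(p, 1e9))   # approaches the 12x ceiling
```

Note how doubling S keeps yielding less: the serial 8.3% dominates as S grows, which is exactly the limit Amdahl’s Law describes.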

This generalized equation takes into account different levels of speedup for different parts of a program. So, for example, a program can be parallelized into four parts, P1, P2, P3, and P4, where P1, P2, P3, and P4 are 10%, 30%, 40%, and 20% of the program, respectively. If P1 can be sped up by 2x, P2 by 3x, and P3 by 4x, but P4 cannot be sped up, then the running time, as a fraction of the original, is as follows:

0.10/2 + 0.30/3 + 0.40/4 + 0.20/1 = 0.45

Therefore, the maximum speedup is 1/0.45, or about 2.22, more than double the speed of the non-parallel program.
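The multi-part calculation is just a weighted sum of fraction/speedup terms, which a short Python sketch makes explicit (the list structure is my own illustration):

```python
# Each part of the program as (fraction of program, speedup factor).
parts = [(0.10, 2), (0.30, 3), (0.40, 4), (0.20, 1)]

# Remaining running time as a fraction of the original.
remaining_time = sum(fraction / speedup for fraction, speedup in parts)

overall_speedup = 1 / remaining_time

print(round(remaining_time, 2))   # 0.45
print(round(overall_speedup, 2))  # 2.22
```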

Amdahl’s Law applies as much to MapReduce parallelization as it does to multi-core programming.

Gustafson’s Law, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.85.6348, reevaluates Amdahl’s Law. It states that, given more computing power, more complex problems can be solved in the same time that a simpler problem takes with less computing power. Gustafson’s Law therefore challenges the scalability limits imposed by the serial part of the program, especially when large, complex, repetitive tasks are carried out using more computing resources.

Source of Information: NoSQL
