Meanwhile, in the 1990s, the idea of grid computing emerged in academia, when Ian Foster and Carl Kesselman published their book The Grid: Blueprint for a New Computing Infrastructure. The name drew an analogy with the electric power grid, which we can relate to a day-to-day example. When a device is plugged into a power outlet, we have no idea how the electricity is generated or how it reaches the outlet; we simply use it. This is the essence of virtualization: the underlying architecture and mechanisms are hidden from users, who nonetheless actively consume the service. In effect, electric power is virtualized; virtualization conceals a vast distribution grid and the power generation stations behind it. The same idea can be adapted to computing, where distinct distributed components, such as storage, data management and software assets, are integrated [3]. Innovations such as cluster, grid and now cloud computing have all focused on enabling access to large amounts of computing resources in a fully virtualized fashion, presenting a single unified view of the aggregated resources. These resources are offered to users, organizations and customers on a “pay-per-use” or “pay-as-you-go” basis (payment based on utilization).
In 1997, the term “cloud computing” was introduced in academia by Ramnath Chellappa, who defined it as a “computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone.” In 1999, Salesforce began delivering applications to its clients through simple websites. The applications themselves were developed and distributed over the web; in this manner, utility-based computing began to be used in the real world. Amazon set its own milestone by creating Amazon Web Services (AWS) in 2002, offering storage, computation and other services. Amazon allowed clients to integrate its vast online content into their own websites, and its web services and computing facilities have steadily expanded on demand. In 2006, Amazon launched its Elastic Compute Cloud (Amazon EC2) as a commercial web service that allows small enterprises and individuals to lease infrastructure (compute resources, storage, memory) on which they can deploy and run their own applications. With the introduction of Amazon’s storage service (Amazon S3), a “pay-per-use” model was also implemented. Google App Engine, Force.com, Eucalyptus, Windows Azure, Aneka and many more of their kind have since been capturing the cloud market.
The following sections discuss cluster, grid and mobile computing.
1.2 Cluster Computing
A computer cluster can be characterized as an arrangement of several coupled computers cooperating in such a way that the whole set of machines can be viewed as a single system image (SSI). Computer clusters arose from the convergence of a number of computing trends, including the availability of fast networks, low-cost microprocessors, and software for high-performance computing.
According to Sadashiv and Kumar [4], a cluster can be defined as a collection of distributed or parallel computers interconnected by high-speed networks such as SCI, Myrinet, Gigabit Ethernet and InfiniBand. They function collectively in the execution of data- and compute-intensive tasks that would not be feasible for a single computer to execute alone. Clusters are mostly used for load balancing (distributing tasks over the interconnected computers), for high availability of the required data, and for compute-intensive workloads. High availability is achieved by maintaining redundant nodes, which are used to deliver the required service when system components fail.
System performance is also improved in this arrangement: even if one node fails to complete a task, a backup node stands ready to take it over without any snags [5]. When numerous computers are connected in a computer cluster, they can easily share the computational workload as a single virtual computer. From the client’s perspective there are many machines, yet they work as a single virtual machine. A client’s request is received and distributed among all the independent computers that form the cluster. The result is a balanced and fairly shared computational workload among the machines, enhancing and improving the computational performance of the cluster system. Clusters are most often used for computational purposes rather than for handling IO-intensive activities.
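The load-balancing behaviour described above can be sketched in a few lines of Python, simulating cluster nodes with local worker processes. This is only an illustrative sketch: the function names, the chunking scheme and the node count are hypothetical, not part of any particular cluster middleware.

```python
from multiprocessing import Pool

def compute_chunk(chunk):
    # Each simulated "node" computes the sum of squares of its chunk.
    return sum(x * x for x in chunk)

def run_on_cluster(data, num_nodes=4):
    # Split the client's request into roughly equal chunks, one per node.
    chunks = [data[i::num_nodes] for i in range(num_nodes)]
    with Pool(processes=num_nodes) as pool:
        partial_results = pool.map(compute_chunk, chunks)
    # Partial results are combined so the client sees one answer,
    # as if a single virtual machine had done all the work.
    return sum(partial_results)

if __name__ == "__main__":
    print(run_on_cluster(list(range(1000))))  # sum of squares of 0..999
```

From the client's point of view only `run_on_cluster` is visible; the distribution of work across the four workers, like the distribution of a request across cluster nodes, is hidden behind that single interface.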
1.2.1 The Architecture of Cluster Computing Environment
Figure 1.2 depicts a cluster, which incorporates numerous independent computers, an operating system, a communication or network system, a high-performance interconnecting medium, middleware and diverse applications. Each computer is either a single- or multiprocessor system with memory, input/output facilities and an operating system. A computer cluster generally refers to two or more computers (nodes) interconnected with one another. The nodes can reside in a single cabinet or can be physically separated and connected through a fast LAN. The network interface hardware acts as a communication processor: it transmits and receives packets of data between cluster nodes via a network/switch. Communication software is responsible for fast and reliable data exchange among the cluster nodes and with the outside world. The cluster middleware sits between the many personal computers or workstations and the various applications; it acts as the single-system-image producer and the availability infrastructure. Programming environments offer efficient, portable and easy-to-use tools for application development. Computer clusters are used for the execution of both parallel and sequential applications.
Figure 1.2: Architecture of computer cluster.
1.2.2 Components of Computer Cluster
A typical computer cluster has some prominent components which are used to do a specific task [6]. The components are as follows:
1 Multiple high-performance computers (PCs, workstations or SMPs)
2 State-of-the-art operating systems (layered or micro-kernel based)
3 High-performance networks/switches (such as Gigabit Ethernet and Myrinet)
4 Network interface cards (NICs)
5 Fast communication protocols and services (such as active and fast messages)
6 Cluster middleware (single system image (SSI) and system availability infrastructure):
   - Hardware (such as digital (DEC) memory channel, hardware DSM, and SMP techniques)
   - Operating system kernel or gluing layer (such as Solaris MC and GLUnix)
   - Applications and subsystems (such as system management tools and electronic forms)
   - Runtime systems (such as software DSM and parallel file systems)
   - Resource management and scheduling software, such as LSF (Load Sharing Facility) and CODINE (Computing in Distributed Networked Environments)
7 Parallel programming environments and tools (such as compilers, PVM (parallel virtual machine), and MPI (message passing interface)), and
8 Applications of sequential, parallel or distributed computing.
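As a rough illustration of the message-passing style offered by environments such as PVM and MPI (component 7 above), the following sketch emulates ranked workers exchanging messages, using only Python's standard multiprocessing module. No actual MPI installation is assumed; the worker ranks, the sentinel protocol and the toy doubling computation are all hypothetical choices for this example.

```python
from multiprocessing import Process, Queue

def worker(rank, task_q, result_q):
    # Each ranked worker receives messages (tasks) until a sentinel arrives,
    # loosely mirroring the receive loop of a message-passing program.
    while True:
        msg = task_q.get()
        if msg is None:
            break
        result_q.put((rank, msg * 2))  # toy computation: double the value

def main(num_workers=3, num_tasks=6):
    task_q, result_q = Queue(), Queue()
    workers = [Process(target=worker, args=(r, task_q, result_q))
               for r in range(num_workers)]
    for w in workers:
        w.start()
    for task in range(num_tasks):
        task_q.put(task)          # send one message per task
    for _ in workers:
        task_q.put(None)          # one shutdown sentinel per worker
    results = sorted(result_q.get()[1] for _ in range(num_tasks))
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(main())  # [0, 2, 4, 6, 8, 10]
```

Real MPI programs replace the shared queues with explicit send/receive calls between ranks, but the overall structure, in which independent processes coordinate purely by exchanging messages, is the same.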
1.3 Grid Computing
Carl Kesselman and Ian Foster first coined the term grid computing in the 1990s, which is