
What is Grid computing? History of Grid computing


What is Grid computing?

Grid computing is a processor architecture that combines computer resources from various domains to reach a main objective. In grid computing, the computers on the network can work on a task together, thus functioning as a supercomputer.

Typically, a computing grid works on various tasks within a network, but it is also capable of working on specialized applications.

It is designed to solve problems that are too big for a supercomputer while maintaining the flexibility to process numerous smaller problems. Computing grids deliver a multiuser infrastructure that accommodates the discontinuous demands of large-scale information processing.

A grid is connected by parallel nodes that form a computer cluster, which typically runs on an operating system such as Linux or other free software. The cluster can vary in size from a small workstation to several networks. The technology is applied to a wide range of applications, such as mathematical, scientific, or educational tasks, through several computing resources. It is often used in structural analysis, Web services such as ATM banking, back-office infrastructure, and scientific or marketing research.

Grid computing is made up of applications used to solve computational problems that are connected in a parallel networking environment. It connects each PC (personal computer) and combines information to form one computation-intensive application, as the sketch below illustrates.
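To make this concrete, here is a minimal Python sketch of the scatter/gather idea behind grid computing: one computation-intensive job is split into independent work units, each unit is handed to a "node" (simulated here with local processes), and the partial results are combined. The node count, the work_unit function, and the sum-of-squares problem are purely illustrative assumptions; a real grid would dispatch such units to machines in different administrative domains through grid middleware.

    from concurrent.futures import ProcessPoolExecutor

    def work_unit(bounds):
        """One independent piece of the job: sum of squares over a sub-range."""
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    def run_on_grid(n, nodes=4):
        # Scatter: partition the problem into one sub-range per node.
        step = n // nodes
        units = [(k * step, n if k == nodes - 1 else (k + 1) * step)
                 for k in range(nodes)]
        # Gather: each node computes its unit; the results are combined centrally.
        with ProcessPoolExecutor(max_workers=nodes) as pool:
            return sum(pool.map(work_unit, units))

    if __name__ == "__main__":
        # Same answer as on a single machine, computed piecewise across nodes.
        print(run_on_grid(10_000_000))

Because each work unit is independent, the grid can grow simply by adding nodes and cutting the problem into more pieces.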

Grids draw on a variety of resources built from diverse software and hardware structures, computer languages, and frameworks, either within a single network or by using open standards with specific guidelines, all to achieve a common goal.

History of Grid computing:

The idea of grid computing originated in the 1990s as a metaphor for making computer power as easy to access as an electric power grid. Where parallel computing and supercomputers were primarily used in the '80s and '90s, grid computing began to take shape as an option by the mid-1990s. In 1995, the Information Wide Area Year (I-WAY) project was initiated, dedicated to the integration of existing high-bandwidth networks and the management of software run over them. This project stood out as one of the first major milestones toward true grid computing. Not long afterwards, CPU-scavenging and volunteer-computing projects such as distributed.net in 1997 and SETI@home in 1999 began to harness the power of networked PCs worldwide to solve CPU-intensive research problems.

Grid computing was further refined with Ian Foster and Carl Kesselman's widely regarded 1998 work The Grid: Blueprint for a New Computing Infrastructure, in which they set out to define and extend the concepts surrounding the idea. Along with software developer Steven Tuecke, the two had previously lent their expertise to the I-WAY project, and the trio went on to develop the Globus Toolkit, an open-source toolkit for grid computing. In 2007, the term cloud computing came into popularity (in terms of computing resources being consumed as electricity is from the power grid). Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems.


What is a Data grid?

A data grid is a set of computers that directly interact with each other to coordinate the processing of large jobs. The participating computers are typically spread across multiple geographically remote sites. Each site may have one or more of the computers in the grid, and each site shares data and resources with the other sites. The main goal of a data grid is to leverage the collective power of all the computers to accomplish a given task, in a practice known as grid computing. Software running on all the computers in a grid handles the coordination of tasks, and users' access to data, across the grid.
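As a toy illustration of that coordination, the Python sketch below assumes a hypothetical Site class in which each site owns one shard of a dataset; coordination software fans a query out to every site and merges the partial answers, so work runs where the data lives and only small results travel. Real data grids additionally handle authentication, replication, and failures through grid middleware; the site names and shard contents here are invented for the example.

    from concurrent.futures import ThreadPoolExecutor

    class Site:
        """One geographically remote site holding part of the grid's data."""
        def __init__(self, name, shard):
            self.name = name
            self.shard = shard  # this site's portion of the dataset

        def count_matching(self, predicate):
            # The computation runs locally at the site, next to its data.
            return sum(1 for record in self.shard if predicate(record))

    def grid_count(sites, predicate):
        # Coordinator: fan the job out to every site, then combine results.
        with ThreadPoolExecutor(max_workers=len(sites)) as pool:
            return sum(pool.map(lambda s: s.count_matching(predicate), sites))

    if __name__ == "__main__":
        sites = [Site("eu-1", range(0, 1000)),
                 Site("us-1", range(1000, 2000)),
                 Site("ap-1", range(2000, 3000))]
        # Count records divisible by 7 across all three sites.
        print(grid_count(sites, lambda r: r % 7 == 0))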

