Scientific computing often requires a massive number of computers to perform large-scale experiments. Traditionally, these needs have been met by high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing offers scientists a completely new model for using computing infrastructure.
Scientific computing involves constructing mathematical models and numerical solution techniques to solve scientific, social-scientific, and engineering problems. These models often require substantial computing resources to perform large-scale experiments or to bring their computational demands down to a reasonable time frame. Such needs were initially addressed with dedicated high-performance computing (HPC) infrastructures such as clusters.
Definition: “Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the datacentres that provide those services.”
A more structured definition is given by Buyya et al., who define a Cloud as a “type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements”.
A group of researchers led by Visiting Professor Takashi Yoshikawa developed the world's first system for flexibly providing high-performance computation through cloud computing. The group built a job-resource integrated management system that can change system components according to the nature of a job, along with the functions and performance required to run it.
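As a purely hypothetical illustration of matching resources to the nature of a job (this is not the Yoshikawa group's actual system), the sketch below picks an instance profile from a small catalogue based on a job's CPU, memory, and GPU needs; all profile names and numbers are invented for the example.

```python
# Toy sketch of job-aware resource selection (hypothetical, for illustration only;
# not the job-resource integrated management system described above).

JOB_PROFILES = {
    # profile name    -> (vCPUs, memory_gb, has_gpu)
    "small-serial":      (2,   8,   False),
    "large-memory":      (16,  128, False),
    "gpu-accelerated":   (8,   64,  True),
    "mpi-cluster-node":  (32,  64,  False),
}

def pick_profile(cores_needed, memory_gb_needed, needs_gpu):
    """Return the smallest profile (fewest vCPUs) that satisfies the job's needs."""
    candidates = [
        (name, spec) for name, spec in JOB_PROFILES.items()
        if spec[0] >= cores_needed and spec[1] >= memory_gb_needed
        and (spec[2] or not needs_gpu)
    ]
    if not candidates:
        raise ValueError("no profile satisfies the job requirements")
    return min(candidates, key=lambda item: item[1][0])[0]

# Example: a memory-hungry post-processing job gets the large-memory profile.
print(pick_profile(cores_needed=8, memory_gb_needed=96, needs_gpu=False))
```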
Infrastructure services (Infrastructure-as-a-Service), provided by cloud vendors, allow any user to provision a large number of compute instances fairly easily. Whether leased from public clouds or allocated from private clouds, using these virtual resources for data- and compute-intensive analyses requires different parallel runtimes to implement such applications. Among the many parallelizable problems, most “pleasingly parallel” applications can be implemented fairly easily using MapReduce technologies such as Hadoop, CGL-MapReduce, and Dryad.
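The MapReduce pattern behind such pleasingly parallel workloads can be illustrated with a minimal local sketch: independent map tasks produce partial results that a reduce step merges. Hadoop, CGL-MapReduce, and Dryad distribute the same phases over a cluster; here Python's multiprocessing merely stands in for that cluster, and the sample documents are made up for the example.

```python
# Minimal local illustration of the MapReduce pattern behind "pleasingly parallel"
# workloads. A real runtime (Hadoop, CGL-MapReduce, Dryad) distributes the same
# map and reduce phases across many machines.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(document):
    """Map: each worker independently counts words in its own document."""
    return Counter(document.lower().split())

def reduce_phase(left, right):
    """Reduce: merge partial counts into a single result."""
    left.update(right)
    return left

if __name__ == "__main__":
    documents = [
        "clouds provide elastic compute resources",
        "scientific computing needs large scale compute",
        "mapreduce makes pleasingly parallel jobs easy",
    ]
    with Pool() as pool:
        partial_counts = pool.map(map_phase, documents)        # parallel map phase
    totals = reduce(reduce_phase, partial_counts, Counter())   # reduce phase
    print(totals.most_common(3))
```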
The introduction of commercial cloud infrastructure services such as Amazon EC2/S3 and GoGrid allows users to provision compute clusters fairly easily and quickly, paying only for the duration for which the resources are used.
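A minimal sketch of leasing such instances from a public cloud is shown below, assuming the boto3 SDK and pre-configured AWS credentials; the AMI ID, instance type, and key-pair name are placeholders rather than values taken from this post.

```python
# Minimal sketch: lease a small compute "cluster" from Amazon EC2 with boto3.
# The AMI ID, instance type, and key name are placeholders; valid AWS
# credentials are assumed to be configured in the environment.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.xlarge",
    MinCount=4,                        # four worker nodes
    MaxCount=4,
    KeyName="my-keypair",              # placeholder key pair
)

for instance in instances:
    instance.wait_until_running()
    instance.reload()
    print(instance.id, instance.public_ip_address)

# Pay-per-use: terminate the instances as soon as the job finishes.
# for instance in instances:
#     instance.terminate()
```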
The availability of open-source cloud infrastructure software such as Nimbus and Eucalyptus, together with open-source virtualization stacks such as the Xen hypervisor, allows organizations to build private clouds and improve the utilization of their existing computing facilities.
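Since Eucalyptus exposes an EC2-compatible API, the same style of client code can often be pointed at a private cloud simply by overriding the service endpoint; the endpoint URL and credentials below are placeholders for illustration only.

```python
# Sketch: reuse EC2-style client code against a private Eucalyptus cloud by
# overriding the service endpoint. The URL and credentials are placeholders.
import boto3

private_ec2 = boto3.resource(
    "ec2",
    endpoint_url="https://cloud.example.org:8773/services/compute",  # placeholder
    aws_access_key_id="PRIVATE-CLOUD-ACCESS-KEY",
    aws_secret_access_key="PRIVATE-CLOUD-SECRET-KEY",
    region_name="eucalyptus",
)

# List machine images registered in the private cloud.
for image in private_ec2.images.filter(Owners=["self"]):
    print(image.id, image.name)
```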
If you like this post, please share and like it.
Call us @ 98256 18292.
Visit us @ tccicomputercoaching.com
To learn more about Computer Class, Computer Course, Computer Course in Ahmedabad, Computer Class in Ahmedabad