02 June 2009

Distributed computing: Volunteers at the service of science

Maxim Malakhovsky,
Distributed Computing Online Magazine

From the time the first arithmometer was created to the present day, scientists of every specialty have complained about the lack of computing resources.

Supercomputers, machines that occupy entire buildings and are extremely expensive to build and maintain, are used for particularly complex calculations. But the same tasks can be performed with ordinary personal computers and even game consoles – provided you enlist the whole world.

Gullivers and Lilliputians

A supercomputer is a computing system of enormous performance, consisting of a large number of computing nodes combined into a single hardware resource. For example, the Russian supercomputer SKIF MSU contains 1,250 powerful quad-core processors assembled into 625 computing nodes in 14 rack cabinets. Together with all its infrastructure, SKIF occupies a room the size of a small gymnasium.

GOLIATHS OF THE COMPUTING WORLD
The real computational performance of a supercomputer is usually determined by how quickly it solves the special Linpack problem – a system of thousands of linear equations – by Gaussian elimination. Performance is measured in flop/s (floating-point operations per second), the number of arithmetic operations on floating-point numbers performed each second. Supercomputers are capable of trillions of such operations per second (Tflop/s, or teraflops).
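To make the unit concrete, here is a minimal sketch in Python (not the official Linpack benchmark) that estimates the floating-point speed of an ordinary PC by timing the solution of a dense linear system with NumPy; the problem size n is an arbitrary assumption.

    import time
    import numpy as np

    n = 2000                                   # problem size, chosen arbitrarily for the sketch
    A = np.random.rand(n, n)
    b = np.random.rand(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # nominal operation count for solving Ax = b
    print(f"~{flops / elapsed / 1e9:.1f} Gflop/s on this machine")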
According to the TOP500 ranking of the world's most powerful supercomputers (www.top500.org), as of November 2008 the leader was Roadrunner at Los Alamos National Laboratory in New Mexico (USA), with a performance of 1,105 teraflops. Roadrunner is used to simulate underground nuclear weapons tests, which led the press to dub it a military supercomputer.
Russian supercomputers, unlike their American "counterpart", serve peaceful science. The MVS-100K supercomputer at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences (MSC RAS) runs at 71 teraflops and ranks 35th in the world, which does not prevent it from being the most powerful supercomputer in Russia and Eastern Europe. The second most powerful Russian machine, SKIF MSU, with a more modest 47 teraflops, sits in 54th place.
MVS-100K and other MSC RAS supercomputers participate in the pan-European DEISA system (Distributed European Infrastructure for Supercomputing Applications, www.deisa.eu), which unites 11 leading national European supercomputing centers.
Roadrunner, MVS-100K and SKIF MSU are cluster-type supercomputers, a design that is steadily displacing supercomputers built as monolithic integrated systems. The main advantage of cluster supercomputers is how easily their performance can be raised: new computing nodes assembled from off-the-shelf components can be added, and existing ones upgraded.


Roadrunner: first among the supercomputers.

Standing somewhat apart from supercomputers are less powerful computing systems, clusters, in which the role of computing nodes is played by autonomous computers connected by a high-speed data network. The computers in a cluster can be located in different rooms or even different buildings – anywhere a high-speed network cable can reach. Many research centers and large universities prefer clusters as a cheap alternative to supercomputers, pressing their existing fleets of computers into service.

Supercomputers and clusters differ mainly in appearance; in essence they work the same way and belong to the class of parallel network computing systems known as grids. This kind of computing relies on high-speed interaction between the computing nodes, which makes it possible to simulate complex natural processes as they unfold. For example, in 2005 a research group led by Eugene Izhikevich at the Neurosciences Institute (San Diego, USA) used a cluster of 27 computers to simulate 1 second of activity of 100 billion neurons connected by a quadrillion synapses – a model on the scale of the human brain – and the run took 50 days.


The figure shows a simplified diagram of thalamo-cortical interactions from "Large-scale model of mammalian thalamocortical systems", an article published in PNAS in 2008 by Izhikevich and Nobel laureate Gerald Edelman. The supplementary materials to the article include two videos showing the results of their work in real time.

Weather forecasting also demands significant computing power – the more accurate the forecast we want, the more power it takes. It was for this purpose that the Hydrometeorological Center of Russia recently acquired a new 30-ton supercomputer with a capacity of 27 teraflops. The machine has significantly increased the detail and speed of hydrodynamic modeling of processes in the atmosphere and ocean, yielding more accurate and more timely weather forecasts.

Every hour of a supercomputer's time is scheduled many months in advance, and research groups all but fight one another for access to the consoles of these computing giants. Yet idle around the world sits hardware whose combined capacity could outclass the entire TOP500 list of the most powerful supercomputers. This barely tapped reserve is our home and office personal computers. Any of them can be connected to a global distributed computing network – networks that have already supplied dozens of scientific projects with the computing resources they lacked.

On home and office computers the central processor idles most of the time: its load rarely exceeds a few percent, even when the user is working in a text editor or listening to music. By donating this wasted processor time to science, the PC owner notices no inconvenience (except perhaps fan noise from the processor's increased heat output) – the calculations run at minimal priority and do not interfere with other programs, games, movies or any other pastime. An Internet connection is needed only to send results to the server and receive new tasks, so the extra traffic is negligible compared with ordinary web browsing.
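As an illustration of the "minimal priority" idea, here is a small Python sketch (for illustration only – real clients such as BOINC manage scheduling far more carefully): a worker process lowers its own priority so that it consumes only CPU time nobody else wants.

    import os
    import multiprocessing

    def crunch():
        os.nice(19)                    # raise niceness as far as possible (Unix-only; Windows needs a different API)
        total = 0
        for i in range(10_000_000):    # stand-in for a real work unit
            total += i * i
        print("work unit finished:", total)

    if __name__ == "__main__":
        worker = multiprocessing.Process(target=crunch)
        worker.start()                 # runs in the background at low priority
        worker.join()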

Distributed computing copes superbly with exhaustive-search tasks – hunting for prime numbers, say, or matching the configurations of candidate drug molecules against target proteins – as well as with processing raw observations in radio astronomy and astrophysics, large-scale modeling of natural processes, and developing the mathematical apparatus of scientific research. This specialization is explained by the difficulty of establishing efficient interaction between computers scattered around the world. The overall computational task is therefore not "swallowed" whole, as in grid networks, but divided into separate blocks that are processed on different computers in arbitrary order and independently of one another.
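A minimal sketch of this work-unit model (illustrative only, not any project's actual protocol): a search range is cut into independent blocks, and each block can be processed on any machine, in any order. Here the volunteers' PCs are stood in for by a local process pool.

    from concurrent.futures import ProcessPoolExecutor

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def work_unit(block):
        lo, hi = block                      # each block is self-contained
        return [n for n in range(lo, hi) if is_prime(n)]

    if __name__ == "__main__":
        blocks = [(lo, lo + 10_000) for lo in range(0, 100_000, 10_000)]
        with ProcessPoolExecutor() as pool:             # stand-in for volunteer machines
            results = pool.map(work_unit, blocks)       # completion order does not matter
        primes = [p for chunk in results for p in chunk]
        print(len(primes), "primes found below 100,000")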

Although each individual PC, laptop or server in a distributed computing network falls far short of a professional cluster node in per-processor performance, they win by sheer numbers: their army grows rapidly, whereas the launch of a new supercomputer is a rare event that makes the news.

A striking example of how effective enlisting personal computer users in research can be is the Folding@home project, organized by Vijay Pande of Stanford University to model the folding of human proteins. More than 400,000 permanently connected processors alone have "overclocked" the project to 5,000 teraflops. No supercomputer can compete with a distributed network of such performance, one whose computing potential grows daily as new participants join. Even the record-holding Roadrunner can boast only 1,105 teraflops from its 129,600 processor cores.

THE ALL-IMPORTANT BUILDING BLOCKS OF LIFE
It is hard to overestimate the importance of proteins for the human body: enzymes drive metabolism, thrombin helps blood to clot, immunoglobulins protect against pathogenic bacteria and viruses, hemoglobin carries oxygen to the tissues, and so on. Proteins begin to perform these functions only after folding from an initial linear chain of amino acids into a strictly defined three-dimensional molecular structure. Deviations from this shape impair the protein's properties or render it completely inactive, and the accumulation of misfolded proteins in the body causes a number of serious diseases: many forms of cancer, Alzheimer's, Parkinson's and Huntington's diseases, sclerosis, mad cow disease, type II diabetes and many others.
Over its nine-year history the Folding@home project has managed to shed light on many problems of molecular biology and to come close to the long-standing dream of biophysicists: solving the mystery of folding. Hundreds of thousands of volunteers have joined the project, and their processors, video cards and game consoles have created the most powerful computing system in the world. This system has helped model the structure of the first artificial protein in history, investigate gene mutations that cause hereditary diseases, and study the influence of many external and internal factors on the folding of small and large human proteins.

The famous SETI@home, a project searching for radio signals from deep space, has only 315 thousand active processors with a combined speed of 500 teraflops. These are not the highest figures in the world of distributed computing, but even they would be enough to take an honorable third place in the TOP500, pushing NASA's Pleiades supercomputer (487 teraflops) down to fourth.

Most distributed computing projects are non-commercial, although some offer their participants a monetary reward for a sought-after result. For example, the organizers of GIMPS (Great Internet Mersenne Prime Search), a project hunting for Mersenne primes, promise to share a $150,000 prize for finding a prime number of 100 million digits. This is the exception rather than the rule: most scientists who organize distributed computing projects have no plans to enrich themselves with the help of volunteers' computers and publish all their scientific results openly.

THE GOLDEN NUMBER
Prime numbers are numbers divisible only by one and by themselves. Mersenne primes are prime numbers of the form 2^n - 1, where n is a natural number; they are named after Marin Mersenne, a mathematician who lived and worked in France in the 17th century. Mersenne primes are of great importance in number theory and cryptography.
The largest prime number known to mankind was discovered on August 23, 2008 on a computer in the Mathematics Department of the University of California, Los Angeles (UCLA) as part of the GIMPS project. This 45th known Mersenne prime consists of 12,978,189 digits and is written as 2^43,112,609 - 1. The authors of the discovery won the Electronic Frontier Foundation's $100,000 prize for finding a prime number containing more than 10 million digits.
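The figures above can be checked with a short Python sketch: the Lucas-Lehmer test (shown here only in a naive form) decides whether 2^p - 1 is prime, and the number of decimal digits of 2^p - 1 is floor(p x log10(2)) + 1.

    import math

    def lucas_lehmer(p: int) -> bool:
        """Return True if the Mersenne number 2^p - 1 is prime (requires p > 2, p prime)."""
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Small exponents only: the record holders take weeks even on fast machines.
    print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
    # -> [3, 5, 7, 13, 17, 19]   (2^11 - 1 = 2047 = 23 * 89, so 11 drops out)

    digits = math.floor(43_112_609 * math.log10(2)) + 1
    print(digits)   # 12978189 digits, matching the number quoted above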

When there were no computers yet

Distributed computing has a history that began back in the days when the abacus reigned supreme as the main computing tool.

At the end of the 18th century, the French government decided to substantially improve its logarithmic and trigonometric tables in anticipation of the introduction of the metric system. The work involved a huge amount of calculation for that time, and so it was entrusted to the head of the census bureau, Baron Gaspard de Prony. The result was his famous "computing manufactory".

The baron boldly borrowed the idea of the division of labor and applied its principles to the computing process. The project's workers were divided into three levels. At the lowest level were ordinary human calculators, required only to perform arithmetic operations accurately. At the second level were educated accountants who organized the routine, distributing tasks and processing the data produced by the calculators. The highest level was occupied by outstanding French mathematicians, among them Adrien-Marie Legendre and Lazare Carnot, who prepared the mathematical "software" for the computing manufactory and summarized the results. Thanks to a clear chain of command and a well-organized system for distributing work among the calculators, de Prony managed to reduce extremely complex tasks to a set of routine operations. Unfortunately, the work was never completed because of the revolutionary events of 1799 in France.

De Prony's ideas prompted Charles Babbage to design his Analytical Engine, the first prototype of a computer in history. The steam-powered machine never worked, but "computing manufactories" continued to be used in research projects until the middle of the 20th century; in particular, they were employed in the development of the first nuclear bombs in the United States and the Soviet Union.

When computers were big

The idea of pooling the computing resources of several machines arose at the dawn of the computer age.

In 1973, John Shoch and Jon Hupp of the famous California research center Xerox PARC (Palo Alto Research Center) wrote a program that was released into the PARC local network at night; it spread across the working computers and set them to performing calculations.

A qualitative leap in getting many computers to work together on a single task came with the first personal computers and e-mail. In 1988, Arjen Lenstra and Mark Manasse wrote a program for factoring long numbers, that is, decomposing them into prime factors. To speed up the process, the program could be run on several unrelated machines, each of which processed its own small piece of the problem. New blocks of work were sent to the participants' computers from the project's central server by ordinary e-mail. It took this community two years and several hundred personal computers to factor a hundred-digit number. With the successful completion of the Lenstra-Manasse project, a new and viable branch, distributed computing, had grown on the evolutionary tree of computing systems.

Meanwhile, grid networks continued to develop along their own path, marked in 1994 by the appearance at NASA of the Beowulf virtual cluster, whose hardware and software were prepared by Donald Becker and Thomas Sterling. Beowulf consisted of 16 ordinary computers joined in a single high-speed network with constant exchange of information between processors. Since the advent of cluster computing systems, supercomputers have lost their monopoly as the "kings" of the computing world.

In the wake of the Lenstra-Manasse project, various mathematical research efforts got under way. In 1993 the participants of one such project set about factoring a 129-digit number, and later a 130-digit one. Then came a fashion for prime hunting. These projects were notable neither for technical sophistication nor for large numbers of participants – but that did not last long.

When there were many computers

On January 28, 1997, RSA Data Security launched a contest to crack the 56-bit RC5-32/12/7 encryption key by brute-force search. Thanks to good technical and organizational preparation, the effort, run by the non-profit association distributed.net, quickly became widely known and drew the world's attention to distributed computing.

On May 17, 1999, David Gedye and Craig Kasnoff of the Space Sciences Laboratory at the University of California, Berkeley launched SETI@home (SETI stands for Search for Extraterrestrial Intelligence), a distributed search for signals from extraterrestrial civilizations that remains one of the most massive projects to this day. Its huge popularity owed much to the fact that, for the first time, an intriguing scientific question – far removed from dull factoring or cracking yet another key – had been put on the rails of distributed computing.

NOISY NEIGHBORS
On the night of August 15, 1977, the Big Ear radio telescope of Ohio State University recorded, at a wavelength of 21 cm, a radio burst of unusual character coming from the constellation Sagittarius. The signal stood out sharply against the background of cosmic noise, and its power rose and fell along a Gaussian curve – that is, it matched the expected profile of an artificial signal of extraterrestrial origin. The equipment that night was being monitored by Jerry Ehman, who marked the signal on the printout with an enthusiastic "Wow!", and under that name it entered the history of the SETI project. The signal was caught only once: however much the Big Ear scanned that patch of sky afterwards, nothing but ordinary noise was ever picked up again.
Another well-known detection of an anomalous signal happened with the help of the SETI@home distributed computing project. Task blocks "sliced" from recordings of cosmic noise made by the Arecibo Observatory radio telescope in Puerto Rico are sent out to participants' computers. The analysis focuses on a narrow band around 1420 MHz, theoretically the most promising radio frequency for detecting signals from sources within a radius of a thousand light-years. In March 2003 a mysterious signal designated SHGb02+14a was caught at this frequency, coming from a region of space between the constellations Aries and Pisces. Its power significantly exceeded the usual background noise, but its frequency "drifted" at a rate of up to 37 Hz per second. The artificial nature of the signal was therefore cast into serious doubt, although the event itself greatly excited the SETI community and strengthened its resolve to continue the search for extraterrestrial intelligence.
The most recent mysterious burst of radio waves was recorded at the end of 2006 by researchers at West Virginia University. The peak power of the millisecond-long fading signal was so great by radio astronomy standards that it could not be identified with any known cosmic body. The distance to the source of the burst is estimated at 1 billion light-years, in the direction of the Small Magellanic Cloud. The signal was caught at a frequency of about 1.5 GHz, which the SETI@home project does not cover – or rather, did not cover until the summer of 2008, when the Astropulse subproject for analyzing a wide range of wavelengths was launched within SETI@home. Now we have a better chance of not missing "news" from our brothers in mind, and perhaps even of discovering cosmic objects of a nature as yet unknown to science.

Distributed computing owes a great deal to the Berkeley organizers of SETI@home, above all for the universal BOINC platform (Berkeley Open Infrastructure for Network Computing) for launching new projects. BOINC was initially developed exclusively for SETI@home, but other scientific teams soon came to appreciate the advantages of the software package. Today the number of projects on this platform has passed one hundred, and for this contribution to science the BOINC developers have repeatedly received awards from the U.S. National Science Foundation.

The client part of the platform, the so-called BOINC client, is installed on users' computers. This convenient program lets you connect to several projects at once, keep statistics on your participation and monitor the progress of the calculations. Almost anyone with basic programming skills and a scientific idea worth supporting can organize their own distributed computing project on top of BOINC. That is what physicist Konstantin Metlov of the Donetsk Institute of Physics and Technology (DonFTI) did: almost single-handedly, he launched the Magnetism@home project to calculate the magnetic configurations of cylindrical nanoelements. Despite its abstruse scientific subject, the project quickly attracted the computing resources it needed.

Distributed computing continues to develop by leaps and bounds. Even video cards and game consoles have been pressed into the service of scientific progress. It is unusual to think of such devices as a computing resource, but in practice they can outperform the most powerful desktop computer. The Folding@home project, for example, has been tapping the 8-core Cell processors of PlayStation 3 game consoles since the summer of 2006; each is capable of about 20 gigaflops, an order of magnitude more than an ordinary office computer. Thanks to an agreement with Sony, a program for modeling the dynamics of protein folding is built into the consoles from the start, but the owner of the device decides whether or not to connect to the project. In the autumn of the same year Folding@home mastered the computing capabilities of the graphics processors of ATI video cards, and in 2008 the turn came to NVIDIA cards. Multi-core GPUs have lived up to every expectation, showing a phenomenal performance of 100 gigaflops. This technological breakthrough has made Folding@home the most powerful computing system on the planet.

For every taste and color

Physics, astronomy, biology, mathematics and cryptography, chemistry, information technology, ecology – the cutting edge of science in all these fields is widely represented in the world of distributed computing, and each has numerous supporters.

The very popular LHC@home project is a case in point: its participants first helped the European Organization for Nuclear Research (CERN) design the famous Large Hadron Collider, and now they calculate the orbits of protons and heavy ions in preparation for experiments at this largest of charged-particle accelerators. A similar story is unfolding in the Muon1 project, where volunteers' computers work out design parameters for the Neutrino Factory, a future accelerator capable of generating streams of light neutral particles – neutrinos.

Astronomy offers a wide field for applying the power of our computers. After SETI@home, the Einstein@home project – the joint brainchild of scientists from the Albert Einstein Institute in Germany, the Massachusetts Institute of Technology and other organizations – has no equal in probing the mysteries of the Universe. The project observes rotating neutron stars (pulsars) in an effort to detect the gravitational waves predicted by Einstein in the framework of General Relativity. To that end, more than 100 thousand computers of active participants analyze, around the clock, data from the two interferometers of the LIGO gravitational-wave observatory (Laser Interferometer Gravitational-Wave Observatory) and from the German GEO 600 interferometer.

The Milky Way, the galaxy in which our Sun shines, has long been "absorbing" a dwarf companion in the constellation Sagittarius. The Milkyway@home project, supported by Rensselaer Polytechnic Institute, aims to answer how much the map of our galaxy will be redrawn in the future by the powerful tidal streams of stars generated by the merger. A grander task was set by the organizers of the Cosmology@home project at the University of Illinois: to find the cosmological model of the Universe that best matches astronomical observations and the measured values of the physical constants. Against this background, Orbit@home, a project that tracks asteroids, comets and other small Solar System bodies that might threaten to collide with the Earth, seems almost mundane. It uses data from a network of ground-based telescopes and is funded by NASA.

Besides Folding@home, other projects study proteins: Rosetta@home, Predictor@home, SIMAP, Human Proteome Folding (WCG) and more. And that does not exhaust the list of bioinformatics research using distributed computing. Much in this area has been done by the World Community Grid (WCG) and its main sponsor, IBM. WCG provides organizational and technical support for a whole group of projects, most of them related to human medicine. For example, the FightAIDS@home project, organized by scientists at the Scripps Research Institute in La Jolla, California, is developing new means of combating acquired immunodeficiency syndrome (AIDS). The Help Conquer Cancer project of the Ontario Cancer Institute analyzes X-ray crystallography data on proteins involved in the development of cancer; the results will help us better understand the nature of cancer and develop new ways to diagnose and treat it. Discovering Dengue Drugs is a project of the University of Texas Medical Branch at Galveston, which is successfully searching for promising inhibitors of the proteases of viruses in the Flaviviridae family. These viruses are responsible for outbreaks of dengue fever, yellow fever and West Nile fever, which hit developing countries particularly hard.

Connect – it's easy!

  • Register on the World Community Grid website.
  • Download, install and launch the BOINC Manager.
  • Switch to Advanced View and find "Add project" in the "Tools" menu. From the list of projects, select World Community Grid (or another project, if you like it better).
  • Choose the research programs you want to take part in (it is advisable to check the box next to "If there is no work available for my computer for the projects I have selected above, please send me work from another project").
  • BOINC will then prompt you for your username and password and start downloading the project files.
  • Visit your account page and join the Distributed Computing team Russia, or any other team, or even create your own. (For those who prefer the command line, a scripted alternative to these steps is sketched below.)
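The attach step can also be done without the graphical manager. A minimal Python sketch, assuming the boinccmd command-line tool that ships with the BOINC client is on the PATH; the project URL and the account key are illustrative placeholders, not working credentials.

    import subprocess

    PROJECT_URL = "http://www.worldcommunitygrid.org/"   # assumed project URL for WCG
    ACCOUNT_KEY = "YOUR_ACCOUNT_KEY"                      # placeholder: copy it from your account page

    # Attach the locally running BOINC client to the project...
    subprocess.run(["boinccmd", "--project_attach", PROJECT_URL, ACCOUNT_KEY], check=True)

    # ...and list the work units it downloads.
    subprocess.run(["boinccmd", "--get_tasks"], check=True)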

Mathematical projects were the first to exploit the possibilities of distributed computing and have not slowed down since. In the Seventeen or Bust project, the search for the smallest Sierpinski number continues. PrimeGrid hunts for primes of several kinds at once, while Wieferich primes are sought in the Czech project Wieferich@home. Goldbach's conjecture is being verified by the participants of Goldbach Conjecture Verification, and new divisors of Fermat numbers are computed on the machines connected to the Fermat Search project. And this is only a small part of the mathematical projects. In the field of cryptography, which is closely related to mathematics, much has been done by the distributed.net community, which launched a series of projects to test the RC5 encryption algorithm and to search for optimal Golomb rulers (OGR). For a change of pace, you can also join the Enigma@home project, which is decrypting the last still-undeciphered German radiograms from 1942.
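To give a flavor of what such number-theory projects do, here is a toy Python version of a Goldbach check, on a range vastly smaller than the real project's: it confirms that every even number up to a chosen limit can be written as the sum of two primes.

    def prime_sieve(n: int) -> bytearray:
        """Sieve of Eratosthenes: flags[i] is 1 if i is prime."""
        flags = bytearray([1]) * (n + 1)
        flags[0] = flags[1] = 0
        for i in range(2, int(n ** 0.5) + 1):
            if flags[i]:
                flags[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
        return flags

    LIMIT = 100_000                      # a toy limit; the real project checks far, far higher
    is_prime = prime_sieve(LIMIT)

    for even in range(4, LIMIT + 1, 2):
        if not any(is_prime[p] and is_prime[even - p] for p in range(2, even // 2 + 1)):
            print("counterexample found:", even)
            break
    else:
        print(f"Goldbach's conjecture holds for every even number up to {LIMIT:,}")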

Scientists today are having great difficulty developing artificial intelligence. Perhaps the problem will first be solved in the distributed Artificial Intelligence System project of the Canadian company Intelligence Realm, which is building a global neural network on participants' computers. Another angle of attack is taken by the FreeHAL@home project, an attempt to recreate human conversational behavior by computer modeling of the linguistic way of knowing the world.

Theoretical chemistry has also won its place in the sun of distributed computing. Scientists at the University of Münster, for example, are working out the applicability of Monte Carlo statistical methods to problems of quantum chemistry within the QMC@home (Quantum Monte Carlo At Home) project. Using the same Monte Carlo approach, a methodology for modeling interatomic interactions in solids is being refined at the University of Texas at Austin; the eOn project was created to provide computational support for this research, and progress has already been made in the study of catalytic reactions in the presence of nanoparticles.

Climatologists are among the scientists who need especially large computing resources to improve their modeling methods. One tool of this kind is ClimatePrediction, a University of Oxford project for studying climate change. Since 2002 its participants have tested more than 400 thousand variants of climate models with a total model time of 40 million years, significantly improving the accuracy of forecasts of our climate future.

As you can see, distributed computing has penetrated many branches of science and become a reliable partner for scientists. Millions of people have turned from spectators of scientific progress into its direct participants. The international audience of distributed computing keeps growing, uniting people from different countries in a common desire to uncover the secrets of the universe.

An abridged version of the article was published in Popular Mechanics No. 6-2009

Portal "Eternal youth" http://vechnayamolodost.ru/02.06.2009
