The usual description of a supercomputer is as one of the most advanced computers in terms of execution, or processing speed (particularly speed of calculation), as well as permanent storage capacity that can be rapidly accessed in ‘real’ time. Supercomputer operating systems are often variants of UNIX or the popular Linux. The base language of supercomputer code is often Fortran, and sometimes C, also using special libraries (such as MPI) to share data between computing nodes or ‘cores’. Some websites currently restrict this description of a supercomputer to mainframe ones, by ‘defining’ a supercomputer as: “a mainframe computer that is among the largest, fastest, or most powerful of those available at a given time”. For example, the Los Alamos system, IBM’s ‘Roadrunner’ (a cluster of 3240 computers, each with 40 processing cores), may be a challenge for the Cray XT5 supercomputer at Oak Ridge National Laboratory called the ‘Jaguar’. The latter system, only the second one to break the petaflop/s barrier, has a top performance of 1.059 petaflop/s in running the Linpack benchmark application, with a petaflop/s representing one quadrillion floating point operations per second.
Thus, a supercomputer can currently, and somewhat narrowly, be described in practice as any computer capable of at least one quadrillion floating point operations per second when running the Linpack benchmark application. Obviously, this is a somewhat arbitrary choice, but it suffices as a practical example of a present-day supercomputer.
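To make this petaflop/s rating concrete: the Linpack benchmark credits the solution of a dense $n \times n$ linear system with approximately $2n^3/3 + 2n^2$ floating point operations, and divides by the measured solve time. A minimal sketch follows; the problem size and solve time are hypothetical, chosen only to land near the 1.059 petaflop/s figure quoted above:

```python
# Sketch: how a Linpack-style flop/s rating is derived.
# The benchmark solves a dense n-by-n linear system; the standard
# operation count credited is 2/3 * n^3 + 2 * n^2 floating point operations.

def linpack_flops(n):
    """Floating point operations credited for solving an n x n system."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def flops_rate(n, seconds):
    """Achieved rate in flop/s, given the measured solve time."""
    return linpack_flops(n) / seconds

PETAFLOP = 1.0e15  # one quadrillion floating point operations per second

# Hypothetical numbers: a system of 2,000,000 unknowns solved in ~5035 s
# corresponds to roughly one petaflop/s.
rate = flops_rate(2_000_000, 5035.0)
print(f"{rate / PETAFLOP:.2f} petaflop/s")  # 1.06 petaflop/s
```

The cubic term dominates for large $n$, which is why the benchmark uses very large matrices: the rating then reflects sustained arithmetic throughput rather than startup overhead.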
Any computer, including the supercomputer, can however be defined in general as a six-tuple extending a quintuple automaton (or sequential machine) $A$, which is capable of making calculations, usually by executing a set of logical, sequential instructions or (software) programs that can all be defined recursively. Let us recall here also that an automaton, $A$, is a five-tuple $(S, \Sigma, \delta, S_0, F)$, consisting of:
a non-empty set $S$ of (sequential machine) states,
a non-empty set $\Sigma$ of symbols; a pair $(s, \sigma)$ of a state and a symbol is called a configuration,
a rule $\delta$ associating with every configuration $(s, \sigma)$ a subset $\delta(s, \sigma)$ of states; $\delta$ is called a next-state relation, or a transition relation,
a non-empty set $S_0 \subseteq S$ of starting states, and
a set $F \subseteq S$ of final states or terminating states.
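The five-tuple above can be made concrete in code. The following is a minimal sketch of such an automaton, with $\delta$ as a genuine next-state *relation* (each configuration maps to a subset of states); the particular example automaton is an illustrative choice of ours, not taken from the text:

```python
# A minimal sketch of the five-tuple automaton (S, Sigma, delta, S0, F):
# delta is a next-state relation, so each configuration (state, symbol)
# is associated with a subset of states.

def accepts(S, Sigma, delta, S0, F, word):
    """Return True if the (nondeterministic) automaton accepts the word."""
    current = set(S0)  # non-empty set of starting states
    for symbol in word:
        assert symbol in Sigma
        # take the union of delta over every current configuration
        current = set().union(*(delta.get((s, symbol), set()) for s in current))
        if not current:
            return False  # no next state: the machine is stuck
    return bool(current & F)  # did we reach a final state?

# Illustrative example: an automaton over {0, 1} accepting words ending in 1.
S = {"a", "b"}
Sigma = {"0", "1"}
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"}}
S0 = {"a"}
F = {"b"}
print(accepts(S, Sigma, delta, S0, F, "0101"))  # True
print(accepts(S, Sigma, delta, S0, F, "0110"))  # False
```

Because $\delta$ returns a subset rather than a single state, the simulation tracks the whole set of reachable states at once; a deterministic sequential machine is the special case where every such subset is a singleton.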
A supercomputer architecture, such as, currently, a cluster of MIMD multiprocessors in which each processor is a SIMD machine, can thus be regarded as a concrete realization of a first-order automata supercategory.
A supercomputer is capable of running many programs in parallel, and at extremely high speeds for most programs. Thus, any computer, including a supercomputer, can at least in principle be simulated by a Universal Turing Machine (UTM). Quite remarkably, the UTM cannot do this for the thinking human brain, and thus the human brain cannot be adequately described as ‘one giant supercomputer’.
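The simulation claim can be made concrete with a bare-bones Turing machine interpreter: a single program of this kind can run any machine described by a rule table, which is the sense in which a UTM simulates other machines. The rule-table format and the unary successor example below are illustrative assumptions, not part of the original definition:

```python
# A minimal sketch of a Turing machine simulator. One fixed interpreter
# runs any machine given as a rule table, illustrating universality.

def run_tm(rules, tape, state="q0", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, written_symbol, move in {-1, +1})."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example rule table: unary successor -- walk right over the 1s,
# write one more 1, and halt.
rules = {
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("halt", "1", +1),
}
print(run_tm(rules, "111"))  # 1111
```

What a supercomputer adds over this model is speed and parallelism, not computational power: any function it computes is, in principle, computable by such a machine.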
There are also, however, some dissenting contenders who claim that personal supercomputing (http://www.nvidia.com/object/personal_supercomputing.html) is also possible, and that such personal supercomputers, which are not mainframes in the established sense of the word, are also becoming available (see for example the ‘TOP500 Supercomputing Sites’ at http://www.top500.org/). However, the top computing speed of such ‘personal supercomputers’, which are parallel processing machines, is in the range of only 4 teraflop/s, yet this is still some 250 times faster than the fastest PCs. Thus, one notes that an inexpensive quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer that was used in the early 1990s for much the same tasks.
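The quoted 250-fold ratio can be checked with back-of-the-envelope arithmetic; the figure of roughly 16 gigaflop/s for a fast PC is an assumption inferred from the stated ratio, not given in the text:

```python
# Back-of-the-envelope check of the speed ratio quoted above.
TERA, GIGA = 1.0e12, 1.0e9

personal_super = 4 * TERA   # 'personal supercomputer' at 4 teraflop/s
fast_pc = 16 * GIGA         # assumed fast-PC rate (inferred, not from the text)

print(f"{personal_super / fast_pc:.0f}x")  # 250x
```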
Whereas many ‘pure’ mathematicians or theoretical physicists may claim that they do not need any supercomputer, this is not the case for the majority of applied mathematicians, computer scientists, mathematical and/or experimental physicists, and engineers, who increasingly find the need for access to a supercomputer: the faster the device, and the more accessible it is, the better the supercomputer is. One petaflop/s today, and hopefully one thousand petaflop/s tomorrow, with petabyte storage capability that is not too slow to access.
Unclassified supercomputing is currently used “for highly calculation-intensive tasks such as problems involving quantum physics” (QCD, QED, QFT, AQFT), molecular dynamics and the modeling/computing of structures and properties of small molecules, polymers, biopolymers/biological macromolecules (or even crystal lattices), physical simulations of all kinds including fluid dynamics and aerodynamic testing, ‘long-term’ weather forecasting and climate research, theoretical nuclear fusion research, cryptographic analysis, and many others. Supercomputers are increasingly utilized at all major universities and scientific research laboratories for both academic and industrial purposes. One also recalls that parallel processing was the key in the race to assemble the first human genome map, in two parallel projects that topped a billion dollars over a five-year period; today, the cost of a single human ‘whole’ genome analysis has dropped by a factor of one million, and the analysis time has been reduced to one day.
Date of creation: 2013-03-22 18:44:14
Last modified on: 2013-03-22 18:44:14
Last modified by: bci1 (20947)