AVAILABLE WIKIBOOKS

ART
- Great Painters
BUSINESS & LAW
- Accounting
- Fundamentals of Law
- Marketing
- Shorthand
CARS
- Concept Cars
GAMES & SPORT
- Videogames
- The World of Sports
COMPUTER TECHNOLOGY
- Blogs
- Free Software
- Google
- My Computer
- PHP Language and Applications
- Wikipedia
- Windows Vista
EDUCATION
- Education
LITERATURE
- Masterpieces of English Literature
LINGUISTICS
- American English
- English Dictionaries
- The English Language
MEDICINE
- Medical Emergencies
- The Theory of Memory
MUSIC & DANCE
- The Beatles
- Dances
- Microphones
- Musical Notation
- Music Instruments
SCIENCE
- Batteries
- Nanotechnology
LIFESTYLE
- Cosmetics
- Diets
- Vegetarianism and Veganism
TRADITIONS
- Christmas Traditions
NATURE
- Animals
- Fruits And Vegetables

ARTICLES IN THE BOOK

  1. Adobe Reader
  2. Adware
  3. Altavista
  4. AOL
  5. Apple Macintosh
  6. Application software
  7. Arrow key
  8. Artificial Intelligence
  9. ASCII
  10. Assembly language
  11. Automatic translation
  12. Avatar
  13. Babylon
  14. Bandwidth
  15. Bit
  16. BitTorrent
  17. Black hat
  18. Blog
  19. Bluetooth
  20. Bulletin board system
  21. Byte
  22. Cache memory
  23. Celeron
  24. Central processing unit
  25. Chat room
  26. Client
  27. Command line interface
  28. Compiler
  29. Computer
  30. Computer bus
  31. Computer card
  32. Computer display
  33. Computer file
  34. Computer games
  35. Computer graphics
  36. Computer hardware
  37. Computer keyboard
  38. Computer networking
  39. Computer printer
  40. Computer program
  41. Computer programmer
  42. Computer science
  43. Computer security
  44. Computer software
  45. Computer storage
  46. Computer system
  47. Computer terminal
  48. Computer virus
  49. Computing
  50. Conference call
  51. Context menu
  52. Creative commons
  53. Creative Commons License
  54. Creative Technology
  55. Cursor
  56. Data
  57. Database
  58. Data storage device
  59. Debuggers
  60. Demo
  61. Desktop computer
  62. Digital divide
  63. Discussion groups
  64. DNS server
  65. Domain name
  66. DOS
  67. Download
  68. Download manager
  69. DVD-ROM
  70. DVD-RW
  71. E-mail
  72. E-mail spam
  73. File Transfer Protocol
  74. Firewall
  75. Firmware
  76. Flash memory
  77. Floppy disk drive
  78. GNU
  79. GNU General Public License
  80. GNU Project
  81. Google
  82. Google AdWords
  83. Google bomb
  84. Graphics
  85. Graphics card
  86. Hacker
  87. Hacker culture
  88. Hard disk
  89. High-level programming language
  90. Home computer
  91. HTML
  92. Hyperlink
  93. IBM
  94. Image processing
  95. Image scanner
  96. Instant messaging
  97. Instruction
  98. Intel
  99. Intel Core 2
  100. Interface
  101. Internet
  102. Internet bot
  103. Internet Explorer
  104. Internet protocols
  105. Internet service provider
  106. Interoperability
  107. IP addresses
  108. iPod
  109. Joystick
  110. JPEG
  111. Keyword
  112. Laptop computer
  113. Linux
  114. Linux kernel
  115. Liquid crystal display
  116. List of file formats
  117. List of Google products
  118. Local area network
  119. Logitech
  120. Machine language
  121. Mac OS X
  122. Macromedia Flash
  123. Mainframe computer
  124. Malware
  125. Media center
  126. Media player
  127. Megabyte
  128. Microsoft
  129. Microsoft Windows
  130. Microsoft Word
  131. Mirror site
  132. Modem
  133. Motherboard
  134. Mouse
  135. Mouse pad
  136. Mozilla Firefox
  137. MP3
  138. MPEG
  139. MPEG-4
  140. Multimedia
  141. Musical Instrument Digital Interface
  142. Netscape
  143. Network card
  144. News ticker
  145. Office suite
  146. Online auction
  147. Online chat
  148. Open Directory Project
  149. Open source
  150. Open source software
  151. Opera
  152. Operating system
  153. Optical character recognition
  154. Optical disc
  155. Output
  156. PageRank
  157. Password
  158. Pay-per-click
  159. PC speaker
  160. Peer-to-peer
  161. Pentium
  162. Peripheral
  163. Personal computer
  164. Personal digital assistant
  165. Phishing
  166. Pirated software
  167. Podcasting
  168. Pointing device
  169. POP3
  170. Programming language
  171. QuickTime
  172. Random access memory
  173. Routers
  174. Safari
  175. Scalability
  176. Scrollbar
  177. Scrolling
  178. Scroll wheel
  179. Search engine
  180. Security cracking
  181. Server
  182. Simple Mail Transfer Protocol
  183. Skype
  184. Social software
  185. Software bug
  186. Software cracker
  187. Software library
  188. Software utility
  189. Solaris Operating Environment
  190. Sound Blaster
  191. Soundcard
  192. Spam
  193. Spamdexing
  194. Spam in blogs
  195. Speech recognition
  196. Spoofing attack
  197. Spreadsheet
  198. Spyware
  199. Streaming media
  200. Supercomputer
  201. Tablet computer
  202. Telecommunications
  203. Text messaging
  204. Trackball
  205. Trojan horse
  206. TV card
  207. Unicode
  208. Uniform Resource Identifier
  209. Unix
  210. URL redirection
  211. USB flash drive
  212. USB port
  213. User interface
  214. Vlog
  215. Voice over IP
  216. Warez
  217. Wearable computer
  218. Web application
  219. Web banner
  220. Web browser
  221. Web crawler
  222. Web directories
  223. Web indexing
  224. Webmail
  225. Web page
  226. Website
  227. Wiki
  228. Wikipedia
  229. WIMP
  230. Windows CE
  231. Windows key
  232. Windows Media Player
  233. Windows Vista
  234. Word processor
  235. World Wide Web
  236. Worm
  237. XML
  238. X Window System
  239. Yahoo
  240. Zombie computer
 



MY COMPUTER
This article is from:
http://en.wikipedia.org/wiki/Supercomputer

All text is available under the terms of the GNU Free Documentation License: http://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License 

Supercomputer

From Wikipedia, the free encyclopedia

 

A supercomputer is a computer that leads the world in terms of processing capacity, particularly speed of calculation, at the time of its introduction. The term "Super Computing" was first used by the New York World newspaper in 1920 to refer to the large custom-built tabulators that IBM made for Columbia University.

Overview

Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). Cray himself never used the word "supercomputer"; a little-remembered fact is that he only recognized the word "computer". In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, which purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.

The Cray-2 was the world's fastest computer from 1985 to 1989.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's normal computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard, with typical numbers of processors in the range 4–16. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. (This is commonly and humorously referred to in the industry as the attack of the killer micros.) Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, IA-64, or x86-64, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, and open-source software solutions such as Beowulf and openMosix, which facilitate the creation of a sort of "virtual supercomputer" from a collection of ordinary workstations or servers. Technologies like ZeroConf (Rendezvous/Bonjour) pave the way for the creation of ad hoc computer clusters: an example is the distributed rendering function in Apple's Shake compositing application, where computers running the Shake software merely need to be in proximity to each other, in networking terms, to automatically discover and use each other's resources. While no one has yet built an ad hoc computer cluster that rivals even yesteryear's supercomputers, the line between desktop, or even laptop, and supercomputer is beginning to blur, and is likely to continue to blur as built-in support for parallelism and distributed processing increases in mainstream desktop operating systems. An easy programming language for supercomputers remains an open research topic in computer science.
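
These APIs are message-passing interfaces: each process runs the same program, learns its rank within the job, works on its own share of the data, and exchanges results over the network. A minimal sketch in Python follows, assuming the mpi4py bindings (an assumption; the article names only MPI and PVM themselves) are installed and the script is launched with mpiexec.

    # Minimal message-passing sketch using mpi4py (assumed installed).
    # Run with: mpiexec -n 4 python sum_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of processes in the job

    # Each rank sums its own slice of the problem.
    local = sum(x for x in range(1_000_000) if x % size == rank)

    # Combine the partial sums onto rank 0.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"total = {total}")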

Uses

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.

A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires semi-infinite computing resources.

Design

Processor board of a CRAY YMP vector computer

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times—in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
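
The cost of a badly used memory hierarchy is easy to demonstrate even on a desktop machine. The sketch below (a rough illustration using NumPy, which is an assumption rather than anything the article discusses) sums the same number of array elements twice: once from a contiguous block and once with a large stride that defeats the caches; the strided walk is typically several times slower.

    # Rough demonstration of memory-hierarchy effects (assumes NumPy).
    import time
    import numpy as np

    a = np.ones(32 * 1024 * 1024, dtype=np.float64)   # ~256 MB of data

    t0 = time.perf_counter()
    s1 = a[: len(a) // 16].sum()    # contiguous: cache-friendly
    t1 = time.perf_counter()
    s2 = a[::16].sum()              # same element count, stride 16: cache-hostile
    t2 = time.perf_counter()

    print(f"contiguous: {t1 - t0:.4f}s   strided: {t2 - t1:.4f}s")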

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to accelerate the remaining bottlenecks.
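
Amdahl's law makes the cost of residual serialization concrete. In LaTeX notation (the formula is standard; P is the parallelizable fraction of the work and N the processor count):

    S(N) = \frac{1}{(1 - P) + \frac{P}{N}}, \qquad
    \lim_{N \to \infty} S(N) = \frac{1}{1 - P}

For example, a program that is 95% parallel (P = 0.95) can never run more than 20 times faster than its serial version, no matter how many processors are used.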

Supercomputer challenges, technologies

  • A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
  • Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds (see the short calculation after this list). Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his famous Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
  • Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
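
The speed-of-light bound mentioned above is simple arithmetic: light covers roughly 0.3 m per nanosecond, so a signal crossing a 10 m machine needs at least about 33 ns one way. A two-line check (pure arithmetic, with an assumed machine size):

    # Lower bound on one-way latency across a hypothetical 10 m machine.
    C = 299_792_458          # speed of light in m/s
    size_m = 10              # assumed longest signal path, in meters

    print(f"{size_m / C * 1e9:.1f} ns minimum")   # ~33.4 ns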

Technologies developed for supercomputers include:

  • Vector processing
  • Liquid cooling
  • Non-Uniform Memory Access (NUMA)
  • Striped disks (the first instance of what was later called RAID)
  • Parallel filesystems

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TFLOPS. The applications to which this power can be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: general-purpose computing on graphics processing units (GPGPU).
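
The SIMD idea, one instruction applied to many data elements at once, can be sketched from Python with NumPy (an assumption; the article does not name any particular library), whose array operations are typically compiled down to exactly the kind of SIMD instructions described above:

    # SIMD-style versus scalar-style computation (assumes NumPy).
    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Scalar style: an explicit per-element loop.
    c_scalar = [x * y for x, y in zip(a, b)]

    # Vector/SIMD style: one expression over all elements; NumPy's
    # kernels use the CPU's SIMD units internally.
    c_vector = a * b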

Operating systems

Supercomputers predominantly run some variant of Linux or UNIX; Linux has been the most popular since 2004.

Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). Because these computers, often priced at millions of dollars, are sold to a very small market, their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.)

Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as NVIDIA, which have been able to produce cheap, feature-rich, high-performance, and innovative products thanks to the vast number of consumers driving their R&D.

Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux).

For this reason, in the future, the highest performance systems are likely to have a UNIX flavor but with incompatible system-unique features (especially for the highest-end systems at secure facilities).

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose Fortran compilers can often generate faster code than C or C++ compilers, so Fortran remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are being used.
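
OpenMP itself is a compiler extension for C, C++, and Fortran, but its fork-join, split-the-loop-across-workers style can be approximated in Python with the standard multiprocessing module; the sketch below is that analogy (an assumption made for illustration, not one of the tools named above):

    # Fork-join work-sharing in the spirit of OpenMP's "parallel for".
    from multiprocessing import Pool

    def work(i):
        return i * i     # stand-in for one loop iteration's computation

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            # Iterations are divided among the worker processes, much as
            # "#pragma omp parallel for" divides them among threads.
            results = pool.map(work, range(100_000))
        print(sum(results))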

Modern supercomputer architecture

The Columbia Supercomputer at NASA's Advanced Supercomputing Facility at Ames Research Center

As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:

  • A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
  • A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, where the application-level software is indifferent to the number of processors. The processors share tasks using symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
  • An SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general-purpose commodity processor or a special-purpose vector processor. It could also be a high-performance processor or a low-power processor.

As of November 2006, the fastest machine is Blue Gene/L. This machine is a cluster of 65,536 computers, each with two processors, each of which processes two data streams concurrently. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.
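
Those figures multiply out directly (simple arithmetic on the numbers quoted above):

    # Concurrent data streams implied by the configurations above.
    blue_gene_l = 65_536 * 2 * 2   # computers x processors x streams = 262,144
    columbia    = 20 * 512 * 2     # machines  x processors x streams = 20,480
    print(blue_gene_l, columbia)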

As of 2005, Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and the design concepts that allowed past supercomputers to out-perform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production.

Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.
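
Coarse-grained splitting is the key property: each worker should receive one large piece of the problem and communicate rarely, so that network bandwidth, the weak point of a cluster of standard machines, never becomes the bottleneck. A schematic sketch (illustrative only, not tied to any particular cluster framework):

    # Coarse-grained parallelization: few large chunks, little communication.
    from multiprocessing import Pool

    def process_chunk(chunk):
        # Lots of computation per message received.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = range(10_000_000)
        n_chunks = 8
        chunks = [list(data[i::n_chunks]) for i in range(n_chunks)]
        with Pool(n_chunks) as pool:
            print(sum(pool.map(process_chunk, chunks)))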

Special-purpose supercomputers

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking.

Examples of special-purpose supercomputers:

  • Deep Blue, for playing chess
  • Reconfigurable computing machines or parts of machines
  • GRAPE, for astrophysics and molecular dynamics
  • Deep Crack, for breaking the DES cipher

The fastest supercomputers today

Measuring supercomputer speed

The speed of a supercomputer is generally measured in "FLOPS" (FLoating Point Operations Per Second) or TFLOPS (10^12 FLOPS); this measurement is based on a particular benchmark which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
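
A crude desktop version of this measurement can be sketched with NumPy and SciPy (an assumption; the actual rankings use the tuned HPL/LINPACK code): LU factorization of an n-by-n matrix costs about (2/3)n^3 floating-point operations, so dividing that count by the elapsed time yields a FLOPS estimate.

    # Crude FLOPS estimate in the LINPACK spirit (assumes NumPy and SciPy).
    import time
    import numpy as np
    from scipy.linalg import lu_factor

    n = 2000
    a = np.random.rand(n, n)

    t0 = time.perf_counter()
    lu_factor(a)                   # LU decomposition with partial pivoting
    elapsed = time.perf_counter() - t0

    flops = (2 / 3) * n**3 / elapsed
    print(f"~{flops / 1e9:.1f} GFLOPS")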

The Top500 list

Main article: TOP500

Since 1993, the fastest supercomputers have been ranked on the Top500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is the best current definition of the "fastest" supercomputer available at any given time.

Current fastest supercomputer system

A BlueGene/L cabinet. IBM's Blue Gene/L is the fastest supercomputer in the world.

On March 25, 2005, IBM's Blue Gene/L prototype became the fastest supercomputer in a single installation, using its 65,536 nodes to run at 135.5 TFLOPS. The Blue Gene/L is a cluster of nodes, each based on a customized version of IBM's PowerPC 440 processor with 512 MiB of local memory. The prototype was developed at IBM's Rochester, Minnesota facility, but production versions were rolled out to various sites, including Lawrence Livermore National Laboratory (LLNL).

On October 28, 2005 the machine reached 280.6 TFLOPS with 131,072 nodes. The LLNL system is expected to achieve at least 360 TFLOPS, and a future update will take it to 0.5 PFLOPS. Before this, a Blue Gene/L fitted with 32,768 nodes managed seven hours of sustained calculation at 70.7 TFLOPS, another first. [1] In November 2005 IBM's Blue Gene/L became number one on TOP500's list of the most powerful supercomputers, and it has held on to this top spot as predicted [2]. In June 2006 LLNL's 131,072-node machine broke another record, sustaining 207.3 TFLOPS [3].

References: [4] [5] [6]

The MDGRAPE-3 supercomputer, completed in June 2006, reportedly reached a calculation speed of one petaflop, though it may not qualify as a general-purpose supercomputer because its specialized hardware is optimized for molecular dynamics simulations. See: [7] [8] [9]

Quasi-supercomputing

Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme. One such example is the BOINC platform, a host for a number of distributed computing projects, which on April 17, 2006 recorded a processing power of over 418.6 TFLOPS spread across more than one million computers on the network [10]. On the same day, BOINC's largest project, SETI@home, reported a processing power of 250.1 TFLOPS spread across more than 900,000 computers [11].

On May 16, 2005, the distributed computing project Folding@home reported a processing power of 195 TFLOPS on its CPU statistics page [12]. Still higher figures have occasionally been recorded: on February 2, 2005, 207 TFLOPS were noted as coming from Windows, Mac, and Linux clients [13].

GIMPS's distributed Mersenne prime search currently achieves 20 TFLOPS.

Google's search engine system may be faster still, with an estimated total processing power of between 126 and 316 TFLOPS. Tristan Louis estimates the system to be composed of between 32,000 and 79,000 dual 2 GHz Xeon machines [14]. Since it would be logistically difficult to cool so many servers at one site, Google's system would presumably be another form of distributed computing project: grid computing. The New York Times estimates that the Googleplex and its server farms contain 450,000 servers. [1]

Research and Development

There are several announced projects to develop new supercomputers. As of November 2006, the next goal is the "petaflop supercomputer".

India is developing a supercomputer intended to reach one petaflop. The project is led by Dr. Karmarkar, the inventor of Karmarkar's algorithm, and is funded by the Tata group of companies. [15]

Another project is Cyclops64.


 

Timeline of supercomputers

For entries prior to 1993, this list refers to various sources[citation needed]. From 1993 to present, the list reflects the Top500 listing.


See also

General concepts, history
  • Beowulf cluster
  • Distributed computing
  • Flash mob computer
  • Grid computing
  • High-performance computing (HPC)
  • History of computing
  • MOSIX
  • Parallel computing
  • Metacomputing
  • Quantum computer
Other classes of computer
  • Minisupercomputer
  • Mainframe computer
  • Superminicomputer
  • Minicomputer
  • Microcomputer
Supercomputer companies in operation

These companies make supercomputer hardware and/or software, either as their sole activity, or as one of several activities.

  • Cray Inc.
  • Fujitsu
  • Groupe Bull
  • Param
  • IBM
  • nCUBE
  • NEC Corporation
  • Quadrics
  • Supercomputer Systems
  • SGI
Defunct supercomputer companies

These companies have either folded, or no longer operate in the supercomputer market.

  • Control Data Corporation (CDC)
  • Convex Computer
  • Kendall Square Research
  • MasPar Computer Corporation
  • Meiko Scientific
  • Sequent Computer Systems
  • Thinking Machines

Notes

  1. ^ The New York Times, June 14, 2006

External links

Information resources

  • TOP500 Supercomputer list
  • LinuxHPC.org Linux High Performance Computing and Clustering Portal
  • WinHPC.org Windows High Performance Computing and Clustering Portal
  • Cluster Resources
  • Cluster Builder
  • CDAC

Supercomputing centers, organizations

Organizations

  • DEISA Distributed European Infrastructure for Supercomputing Applications, a facility integrating eleven European supercomputing centers.
  • EPCC Edinburgh Parallel Computing Centre. Based in the University of Edinburgh.
  • HPC-UK strategic collaboration between the UK's three leading supercomputer centres - Manchester Computing, EPCC and Daresbury Laboratory
  • NAREGI Japanese NAtional REsearch Grid Initiative involving several supercomputer centers
  • TeraGrid, a national facility integrating nine US supercomputing centers

Centers

  • BSC Barcelona Supercomputing Center - Spanish national supercomputing facility and R&D center
  • HPCx UK national supercomputer service operated by EPCC and Daresbury Lab
  • CESCA Supercomputing Centre of Catalonia - Centre de Supercomputacio de Catalunya
  • CSAR UK national supercomputer service operated by Manchester Computing
  • GSIC Global Scientific Information and Computing Center at the Tokyo Institute of Technology
  • IRB
  • NASA Advanced Supercomputing facility
  • National Center for Atmospheric Research (NCAR)
  • National Center for Supercomputing Applications (NCSA)
  • Ohio Supercomputer Center (OSC)
  • Pittsburgh Supercomputing Center operated by University of Pittsburgh and Carnegie Mellon University.
  • San Diego Supercomputer Center (SDSC)
  • SARA
  • System X at Virginia Tech
  • Texas Advanced Computing Center (TACC)
  • WestGrid

Specific machines, general-purpose

  • Linux NetworX press release: Linux NetworX to build "largest" Linux supercomputer
  • ASCI White press release
  • Article about Japanese "Earth Simulator" computer
  • "Earth Simulator" website (in English)
  • NEC high-performance computing information
  • Superconducting Supercomputer

Specific machines, special-purpose

  • Google Supercomputer
  • Papers on the GRAPE special-purpose computer
  • More special-purpose supercomputer information
  • Information about the APEmille special-purpose computer
  • Information about the apeNEXT special-purpose computer
  • Information about the QCDOC project, machines


 

Retrieved from "http://en.wikipedia.org/wiki/Supercomputer"