According to Size
Micro Computers
A microcomputer, more commonly called a personal computer or PC, is a computer designed for use by an individual. A microcomputer contains a microprocessor (a central processing unit on a microchip), memory in the form of read-only memory and random access memory, I/O ports, and a bus or system of interconnecting wires, all mounted on a board usually called the motherboard. In an ascending
hierarchy of general computer sizes, we find:
• An embedded computer, which is built into another device and doesn't support direct human interaction but nevertheless meets all the other criteria of a microcomputer
• Microcomputer
• Workstation, used to mean a more powerful personal computer for special applications
• Minicomputer, now restyled a "mid-range server"
• Mainframe or mainframe computer, which is now usually referred to by its manufacturers as a "large server"
• Supercomputer, formerly almost a synonym for "Cray supercomputer" but now meaning a
very large server and sometimes including a system of computers using parallel
processing
• A parallel processing system is a system of interconnected computers that work on the same application together, sharing tasks that can be performed concurrently (a minimal sketch follows this list).
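To make the idea of sharing concurrent tasks concrete, here is a short Python sketch. Python and its multiprocessing module are used purely for illustration; the classification above does not prescribe any particular language or library.

    from multiprocessing import Pool

    def task(n):
        # One independent unit of work; nothing here depends on the other tasks.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        workloads = [10_000, 20_000, 30_000, 40_000]
        # Four worker processes share the task list and run the pieces concurrently.
        with Pool(processes=4) as pool:
            results = pool.map(task, workloads)
        print(results)

In a real parallel processing system the workers would be separate computers rather than processes on one machine, but the pattern of dividing independent work and collecting results is the same.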
Mini Computers
A minicomputer, a term no longer much used, is a computer of a size intermediate
between a microcomputer and a mainframe. Typically, minicomputers have been standalone
computers (computer systems with attached terminals and other devices) sold to
small and mid-size businesses for general business applications and to large enterprises
for department-level operations. In recent years, the minicomputer has evolved into the
"mid-range server" and is part of a network. IBM's AS/400e is a good example.
Mainframes
Mainframe is an industry term for a large computer, typically manufactured by a large
company such as IBM for the commercial applications of Fortune 1000 businesses and other
large-scale computing purposes. Historically, a mainframe is associated with centralized rather
than distributed computing. Today, IBM refers to its larger processors as large servers and
emphasizes that they can be used to serve distributed users and smaller servers in a
computing network.
Super Computers
A supercomputer is a computer that performs at or near the currently highest operational rate
for computers. A supercomputer is typically used for scientific and engineering applications
that must handle very large databases or do a great amount of computation (or both). At any
given time, there are usually a few well-publicized supercomputers that operate at extremely high speeds. The term is also sometimes applied to far slower (but still
impressively fast) computers. Most supercomputers are really multiple computers that perform
parallel processing. In general, there are two parallel processing approaches: symmetric
multiprocessing (SMP) and massively parallel processing (MPP).
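The contrast between the two approaches can be sketched, very loosely, on a single machine. The illustrative Python snippet below only mimics the two programming styles (real SMP and MPP systems are hardware architectures, not Python programs): one set of workers updates a single shared-memory counter, while another keeps private results and passes them back as messages.

    from multiprocessing import Process, Queue, Value

    def smp_worker(counter, n):
        # SMP-style: every worker updates the same shared-memory counter,
        # so access has to be coordinated with a lock.
        for _ in range(n):
            with counter.get_lock():
                counter.value += 1

    def mpp_worker(rank, n, queue):
        # MPP-style: each worker keeps private state and sends its result
        # back as a message instead of touching shared memory.
        queue.put((rank, n))

    if __name__ == "__main__":
        counter = Value("i", 0)
        workers = [Process(target=smp_worker, args=(counter, 1000)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print("shared-memory total:", counter.value)

        q = Queue()
        workers = [Process(target=mpp_worker, args=(r, 1000, q)) for r in range(4)]
        for w in workers:
            w.start()
        partials = [q.get() for _ in range(4)]  # drain messages before joining
        for w in workers:
            w.join()
        print("message-passing total:", sum(n for _, n in partials))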
Perhaps the best-known builder of supercomputers has been Cray Research, now a part of
Silicon Graphics. Some supercomputers are housed at "supercomputer centers," usually university research centers, some of which, in the United States, are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that
is part of this initiative.
At the high end of supercomputing are computers like IBM's "Blue Pacific," announced on
October 29, 1998. Built in partnership with Lawrence Livermore National Laboratory in California, Blue Pacific is reported to operate at 3.9 teraflops (trillion floating-point operations per second), 15,000 times faster than the average personal computer. It consists of 5,800 processors
containing a total of 2.6 trillion bytes of memory and interconnected with five miles of cable. It
was built to simulate the physics of a nuclear explosion. IBM is also building an academic
supercomputer for the San Diego Supercomputer Center that will operate at 1 teraflop.
It's based on IBM's RISC System/6000 and the AIX operating system and will have 1,000
microprocessors with IBM's own POWER3 chip. At the lower end of supercomputing, a new
trend, called clustering, suggests more of a build-it-yourself approach to supercomputing. The
Beowulf Project offers guidance on how to "strap together" a number of off-the-shelf personal computer processors, using the Linux operating system, and interconnect them with Fast Ethernet.
Applications must be written to manage the parallel processing.
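As a rough illustration of the kind of code such a cluster runs, here is a small sketch that assumes an MPI library and the mpi4py Python bindings are installed on every node (the Beowulf guidance above does not mandate MPI; it is simply one common choice for message passing on clusters).

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's position in the cluster
    size = comm.Get_size()   # total number of processes across the nodes

    # Each process sums its own slice of the range 0..n-1.
    n = 1_000_000
    chunk = n // size
    start = rank * chunk
    stop = n if rank == size - 1 else start + chunk
    partial = sum(range(start, stop))

    # Combine the partial sums on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print("total:", total)

Launched with something like mpirun -n 4 python sum.py (the filename is only an example), each process works on its own slice of the problem, and only the small partial results travel over the network.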