What is a Port?

Wednesday, July 25, 2012
On computer and telecommunication devices, a port (noun) is generally a specific place for physically connecting to some other device, usually with a socket and plug of some kind. Typically, a personal computer is provided with one or more serial ports and usually one parallel port.

Types of Ports:

Parallel
An interface on a computer that supports transmission of multiple bits at the same time; almost exclusively used for connecting a printer. On IBM or compatible computers, the parallel port uses a 25-pin connector.

Serial
A general-purpose personal computer communications port in which 1 bit of information is transferred at a time (a small simulation of the difference appears at the end of this section). In the past, most digital cameras were connected to a computer's serial port in order to transfer images to the computer. Recently, however, the serial port has been replaced by the much faster USB port on digital cameras as well as computers.

SCSI
Short for Small Computer System Interface. A port that is faster than the serial and parallel ports but slower and harder to configure than the newer USB port. A high-speed connection that enables devices, such as hard-disk drives and network adapters, to be attached to a computer.

USB
USB (Universal Serial Bus) is a plug-and-play hardware interface for peripherals such as the keyboard, mouse, joystick, scanner, printer, and modem. USB has a maximum bandwidth of 12 Mbits/sec, and up to 127 devices can be attached. With USB, a new device can be added to your computer without having to add an adapter card. The USB port is typically located at the back of the PC.

FireWire
FireWire is simply a very fast port that lets you connect computer peripherals and consumer electronics to your computer without the need to restart. It is a simple, common plug-in serial connector on the back of your computer. It can chain devices together in a number of different ways without terminators; for example, simply join two computers with a FireWire cable for instant high-speed networking.
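To make the serial/parallel distinction concrete, here is a minimal Python sketch. It is purely illustrative (no real port is driven): a serial link needs one clock tick per bit, while an 8-wire parallel link moves a whole byte per tick.

# Illustrative only: count the clock ticks each scheme would need
# to move the same bytes.

def to_bits(byte):
    """Return the 8 bits of a byte, most significant first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def serial_send(data):
    """Serial: one bit per tick, so 8 ticks for every byte."""
    ticks = 0
    for byte in data:
        for bit in to_bits(byte):
            ticks += 1          # each bit occupies one clock tick
    return ticks

def parallel_send(data):
    """Parallel: eight wires carry a whole byte per tick."""
    return len(data)            # one tick per byte

message = b"HELLO"
print("serial ticks:  ", serial_send(message))    # 40
print("parallel ticks:", parallel_send(message))  # 5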

Input Devices

Mouse
A mouse is a small device that a computer user pushes across a desk surface in order to point to a place on a display screen and to select one or more actions to take from that position. The mouse first became a widely used computer tool when Apple Computer made it a standard part of the Apple Macintosh. Today, the mouse is an integral part of the graphical user interface (GUI) of any personal computer. The mouse apparently got its name by being about the same size and shape as a toy mouse.
Keyboard
On most computers, a keyboard is the primary text input device. A keyboard on a computer is almost identical to a keyboard on a typewriter. Computer keyboards will typically have extra keys, however. Some of these keys (common examples include Control, Alt, and Meta) are meant to be used in conjunction with other keys just like shift on a regular typewriter. Other keys (common examples include Insert, Delete, Home, End, Help, function keys, etc.) are meant to be used independently and often perform editing tasks.
Joystick
In computers, a joystick is a cursor control device used in computer games. The joystick, which got its name from the control stick used by a pilot to control the ailerons and elevators of an airplane, is a hand-held lever that pivots on one end and transmits its coordinates to a computer. It often has one or more push-buttons, called switches, whose position can also be read by the computer.
Digital Camera
A digital camera records and stores photographic images in digital form that can be fed to a computer as the impressions are recorded or stored in the camera for later loading into a computer or printer. Currently, Kodak, Canon, and several other companies make digital cameras.
Microphone
A microphone is a device that converts sound waves into audio signals. Microphones are used for sound recording as well as voice chat over the Internet.
Scanner
A scanner is a device that captures images from photographic prints, posters, magazine pages, and similar sources for computer editing and display. Scanners come in hand-held, feed-in, and flatbed types, and for scanning black-and-white only or color. Very high resolution scanners are used for high-resolution printing, but lower resolution scanners are adequate for capturing images for computer display. Scanners usually come with software, such as Adobe's Photoshop product, that lets you resize and otherwise modify a captured image.

All computers have the following essential hardware components

Input
The devices used to give the computer data or commands are called input devices. Examples include the keyboard, mouse, and scanner.
Processor
A processor is the logic circuitry that responds to and processes the basic instructions that drive a computer.
The term processor has generally replaced the term central processing unit (CPU). The processor in a personal computer or embedded in small devices is often called a microprocessor.
The processor is the logic of a computer: it functions comparably to a human central nervous system, directing signals from one component to another and enabling everything to happen.
Memory
Memory is the electronic holding place for instructions and data that your computer's microprocessor can reach quickly. When your computer is in normal operation, its memory usually contains the main parts of the operating system and some or all of the application programs and related data that are being used. Memory is often used as a shorter synonym for random access memory (RAM). This kind of memory is located on one or more microchips that are physically close to the microprocessor in your computer. Most desktop and notebook computers sold today include at least 16 megabytes of RAM and are upgradeable to include more. The more RAM you have, the less frequently the computer has to access instructions and data from the much more slowly accessed hard disk.
Memory is also called primary or main memory.
Storage
Computer storage is the holding of data in an electromagnetic form for access by a computer processor. It is also called secondary storage. In secondary storage data resides on hard disks, tapes, and other external devices.
Primary storage is much faster to access than secondary storage because of the proximity of the storage to the processor or because of the nature of the storage devices. On the other hand, secondary storage can hold much more data than primary storage.
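A toy Python sketch of that trade-off (the 0.05-second "disk" delay and the read_from_disk function are made up for illustration): keeping recently used data in fast primary memory means the slow secondary storage is touched only once.

# Illustrative cache: fast memory in front of simulated slow storage.
import time

def read_from_disk(key):
    """Simulated secondary storage: correct but slow."""
    time.sleep(0.05)            # pretend seek + transfer delay
    return f"data-for-{key}"

cache = {}                      # "primary memory": fast but limited

def read(key):
    if key not in cache:        # miss: pay the disk penalty once
        cache[key] = read_from_disk(key)
    return cache[key]           # hit: near-instant

start = time.perf_counter()
for _ in range(100):
    read("config")              # only the first call touches "disk"
print(f"100 reads in {time.perf_counter() - start:.3f}s")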
Output
The devices to which the computer writes data are called output devices. They often convert the data into a human-readable form. The monitor and printer are output devices.

Computer Types According to Capability

Tuesday, July 24, 2012
Supercomputers
A supercomputer is a computer that performs at or near the currently highest operational rate for computers. A supercomputer is typically used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). At any given time, there are usually a few well-publicized supercomputers that operate at the highest speeds yet achieved. Perhaps the best-known builder of supercomputers has been Cray Research, now a part of Silicon Graphics. Some supercomputers are housed at supercomputer centers, usually university research centers, some of which, in the United States, are interconnected on an Internet backbone (a backbone is a larger transmission line that carries data gathered from smaller lines that interconnect with it) known as vBNS or NSFNet. At the high end of supercomputing are computers like IBM's "Blue Pacific," announced on October 29, 1998. Built in partnership with Lawrence Livermore National Laboratory in California, Blue Pacific is reported to operate at 3.9 teraflops (trillion floating-point operations per second), 15,000 times faster than the average personal computer. It consists of 5,800 processors containing a total of 2.6 trillion bytes of memory, interconnected with five miles of cable.
Mainframe Computers
A very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top, mainframes are just below supercomputers. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague (not clearly expressed), depending really on how the manufacturer wants to market its machines.
Servers / Minicomputers
A midsized computer. In size and power, minicomputers lie between workstations and mainframes. In the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a multiprocessing system capable of supporting from 4 to about 200 users simultaneously.
Desktops
These are also called microcomputers. Low-end desktops are called PCs and high-end ones workstations. They generally contain a single processor (sometimes two), along with megabytes of memory and gigabytes of storage. PCs are used for running productivity applications, Web surfing, and messaging. Workstations are used for more demanding tasks like low-end 3-D simulations and other engineering and scientific applications. They are not as reliable and fault-tolerant as servers. Workstations cost a few thousand dollars; a PC, around $1,000.
Portables
A portable computer is a personal computer that is designed to be easily transported and relocated, but is larger and less convenient to transport than a notebook computer. The earliest PCs designed for easy transport were called portables. As the size and weight of most portables decreased, they became known as laptop computers and later as notebook computers. Today, larger transportable computers continue to be called portable computers. Most of these are special-purpose computers - for example, those for use in industrial environments where they need to be moved about frequently.
PDA (personal digital assistant) is a term for any small mobile hand-held device that provides computing and information storage and retrieval capabilities for personal or business use, often for keeping schedule calendars and address book information handy. The term handheld is a synonym. Many people use the name of one of the popular PDA products as a generic term. These include Hewlett-Packard's Palmtop and 3Com's PalmPilot.
Most PDAs have a small keyboard. Some PDAs have an electronically sensitive pad on which handwriting can be received. Apple's Newton, which has been withdrawn from the market, was the first widely-sold PDA that accepted handwriting. Typical uses include schedule and address book storage and retrieval and note-entering. However, many applications have been written for PDAs. Increasingly, PDAs are combined with telephones and paging systems.
Some PDAs offer a variation of the Microsoft Windows operating system called Windows CE. Other products have their own or another operating system.
Ranking by installed numbers:
PCs
PDAs
Workstations
Servers
Wearables
Mainframes
Supercomputers

At the highest level, two things are required for computing:
Hardware
Computer equipment such as a CPU, disk drives, CRT, or printer.
Software
A computer program, which provides the instructions that enable the computer hardware to work.

Future of the Web: Semantic Web

The Semantic Web is an idea of World Wide Web inventor Tim Berners-Lee that the Web as a whole can be made more intelligent and perhaps even intuitive about how to serve a user's needs. Berners-Lee observes that although search engines index much of the Web's content, they have little ability to select the pages that a user really wants or needs. He foresees a number of ways in which developers and authors, singly or in collaborations, can use self-descriptions and other techniques so that context-understanding programs can selectively find what users want.

Who invented the Web & Why?

"CERN is a meeting place for physicists from all over the world, who collaborate on complex physics, engineering and information handling projects. Thus, the need for the WWW system arose "from the geographical dispersion of large collaborations, and the fast turnover of fellows, students, and visiting scientists," who had to get "up to speed on projects and leave a lasting contribution before leaving."
CERN possessed both the financial and computing resources necessary to start the project. In the original proposal, Berners-Lee outlined two phases of the project: First, CERN would "make use of existing software and hardware as well as implementing simple browsers for the user's workstations, based on an analysis of the requirements for information access needs by experiments."
Second, they would "extend the application area by also allowing the users to add new material."
Berners-Lee expected each phase to take three months "with the full manpower complement": he was asking for four software engineers and a programmer. The proposal talked about "a simple scheme to incorporate several different servers of machine-stored information already available at CERN."
Launched in 1989, the WWW quickly gained great popularity among Internet users. For instance, at 11:22 am on April 12, 1995, the WWW server at the SEAS of the University of Pennsylvania "responded to 128 requests in one minute."

What is a Web site?

A Web site is a related collection of World Wide Web (WWW) files that includes a beginning file called a home page. A company or an individual tells you how to get to their Web site by giving you the address of their home page. From the home page, you can get to all the other pages on their site. For example, the Web site for IBM has the home page address of http://www.ibm.com. IBM's home page address leads to thousands of pages, but a Web site can also be just a few pages.
https://www.google.com.pk/

URL (Uniform Resource Locator)

URL (Uniform Resource Locator, previously Universal Resource Locator) - pronounced YU-AHR-EHL or, in some quarters, UHRL - is the address of a file (resource) accessible on the Internet. The type of file or resource depends on the Internet application protocol. Using the World Wide Web's protocol, the Hypertext Transfer Protocol (HTTP), the resource can be an HTML page (like the one you're reading), an image file, or any other file supported by HTTP. The URL contains the name of the protocol required to access the resource, a domain name that identifies a specific computer on the Internet, and a pathname (hierarchical description of a file location) on the computer.
On the Web (which uses the Hypertext Transfer Protocol), an example of a URL is http://www.ietf.org/rfc/rfc2396.txt, which describes a Web page to be accessed with an HTTP (Web browser) application located on a computer named www.ietf.org. The pathname for the specific file on that computer is /rfc/rfc2396.txt.
An HTTP URL can point to any Web page or individual file, not just a home page.
Examples:
https://www.google.com.pk/
http://www.youtube.com/
http://www.facebook.com/
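To see the three parts of a URL (protocol, domain name, pathname) programmatically, Python's standard urllib.parse module can split one of the example addresses; a minimal sketch:

# Split a URL into the three parts named above.
from urllib.parse import urlparse

url = "http://www.ietf.org/rfc/rfc2396.txt"
parts = urlparse(url)
print(parts.scheme)   # 'http'            -> the protocol
print(parts.netloc)   # 'www.ietf.org'    -> the domain name
print(parts.path)     # '/rfc/rfc2396.txt' -> the pathname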

Browser

A browser is an application program that provides a way to look at and interact with all the information on the World Wide Web. The word "browser" seems to have originated prior to the Web as a generic term for user interfaces that let you browse (navigate through and read) text files online. By the time the first Web browser with a graphical user interface was generally available (Mosaic, in 1993), the term seemed to apply to Web content, too. Technically, a Web browser is a client program that uses the Hypertext Transfer Protocol (HTTP) to make requests of Web servers throughout the Internet on behalf of the browser user.
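At its core, that client behavior can be sketched in a few lines of Python with the standard http.client module. This is a bare-bones illustration, not how a real browser is built, and note that today's servers often answer a plain-HTTP request with a redirect to HTTPS rather than the page itself.

# What a browser does at its core: open a connection, send an HTTP
# request, read the response.
import http.client

conn = http.client.HTTPConnection("www.ietf.org")
conn.request("GET", "/rfc/rfc2396.txt")   # ask the server for one resource
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK, or a 301 redirect
body = response.read()
print(body[:80])                          # first bytes of the document
conn.close()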

Evolution of Computing

Turing Machine – 1936
Introduced by Alan Turing in 1936, Turing machines are one of the key abstractions used in modern computability theory, the study of what computers can and cannot do. A Turing machine is a particularly simple kind of computer, one whose operations are limited to reading and writing symbols on a tape, or moving along the tape to the left or right. The tape is marked off into squares, each of which can be filled with at most one symbol. At any given point in its operation, the Turing machine can only read or write on one of these squares, the square located directly below its "read/write" head.
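A Turing machine is simple enough to simulate in a few lines. The following Python sketch is illustrative only; the machine and its rule table (flip every bit, halt at the first blank) are made up for the example.

# A minimal Turing machine simulator. Each rule maps
# (state, symbol read) -> (new state, symbol to write, move L/R).
def run(tape, rules, state="start", blank=" "):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)
        tape[head] = write                 # write one symbol under the head
        head += 1 if move == "R" else -1   # move one square right or left
    return "".join(tape).rstrip()

rules = {
    ("start", "0"): ("start", "1", "R"),  # read 0 -> write 1, move right
    ("start", "1"): ("start", "0", "R"),  # read 1 -> write 0, move right
    ("start", " "): ("halt",  " ", "R"),  # blank square -> stop
}
print(run("1011", rules))  # -> "0100"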

The “Turing test”
A test proposed to determine if a computer has the ability to think. In 1950, Alan Turing (Turing, 1950) proposed a method for determining whether machines can think, known as the Turing Test. The test is conducted with two people and a machine. One person plays the role of an interrogator and is in a separate room from the machine and the other person. The interrogator knows the person and the machine only as A and B, and does not know which is the person and which is the machine. Using a teletype, the interrogator can ask A and B any question he/she wishes. The aim of the interrogator is to determine which is the person and which is the machine. The aim of the machine is to fool the interrogator into thinking that it is a person. If the machine succeeds, then we can conclude that machines can think.

Vacuum Tube – 1904:
A vacuum tube is just that: a glass tube surrounding a vacuum (an area from which all gases have been removed). What makes it interesting is that when electrical contacts are put on the ends, you can get a current to flow through that vacuum. A British scientist named John A. Fleming made a vacuum tube known today as a diode; at the time, the diode was known as a "valve."

ABC – 1939
The Atanasoff-Berry Computer was the world's first electronic digital computer. It was built by John Vincent Atanasoff and Clifford Berry at Iowa State University during 1937-42. It incorporated several major innovations in computing, including the use of binary arithmetic, regenerative memory, parallel processing, and separation of memory and computing functions.

Harvard Mark 1 – 1943:
Howard Aiken and Grace Hopper designed the MARK series of computers at Harvard University. The MARK series began with the Mark I in 1944. Imagine a giant roomful of noisy, clicking metal parts, 55 feet long and 8 feet high. The 5-ton device contained almost 760,000 separate pieces. Used by the US Navy for gunnery and ballistic calculations, the Mark I was in operation until 1959. The computer, controlled by pre-punched paper tape, could carry out addition, subtraction, multiplication, division, and reference to previous results. It had special subroutines for logarithms and trigonometric functions and used 23-decimal-place numbers. Data was stored and counted mechanically using 3,000 decimal storage wheels, 1,400 rotary dial switches, and 500 miles of wire. Its electromagnetic relays classified the machine as a relay computer. All output was displayed on an electric typewriter. By today's standards, the Mark I was slow, requiring 3-5 seconds for a multiplication operation.

ENIAC – 1946:
ENIAC (Electronic Numerical Integrator And Computer). The U.S. military sponsored its development: a calculating device was needed for writing artillery firing tables (the settings used for different weapons under varied conditions for target accuracy). John Mauchly was the chief consultant and J. Presper Eckert was the chief engineer. Eckert was a graduate student at the Moore School when he met John Mauchly in 1943. It took the team about one year to design the ENIAC, and 18 months and 500,000 tax dollars to build it. The ENIAC contained 17,468 vacuum tubes, along with 70,000 resistors and 10,000 capacitors.
Transistor – 1947
The transistor was invented in 1947 at Bell Labs by John Bardeen, Walter Brattain, and William Shockley. This was perhaps the most important electronics event of the 20th century, as it later made possible the integrated circuit and microprocessor that are the basis of modern electronics. Prior to the transistor (the name is a contraction of TRANSfer resISTOR), the only alternative for its current-regulation and switching functions was the vacuum tube, which could only be miniaturized to a certain extent and wasted a lot of energy in the form of heat. Compared to vacuum tubes, it offered:
smaller size
better reliability
lower power consumption
lower cost

Floppy Disk – 1950
Invented at the Imperial University in Tokyo by Yoshiro Nakamatsu.

UNIVAC 1 – 1951
The first commercially successful electronic computer, UNIVAC I, was also the first general-purpose computer, designed to handle both numeric and textual information. It was designed by J. Presper Eckert and John Mauchly, and its design was derived from the ENIAC (same developers). The implementation of this machine marked the real beginning of the computer era. Remington Rand delivered the first UNIVAC machine to the U.S. Bureau of the Census in 1951. This machine used magnetic tape for input. Key facts:
first client = U.S. Bureau of the Census
$1 million
48 systems built

Compiler - 1952
Grace Murray Hopper, an employee of Remington Rand, worked on the UNIVAC. She took up the concept of reusable software in her 1952 paper entitled "The Education of a Computer" and developed the first software that could translate symbols of higher computer languages into machine language: the compiler.
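To give the flavor of what such a translator does, here is a toy Python sketch, not Hopper's actual design: it "compiles" a two-operand arithmetic expression into instructions for an imaginary stack machine, then executes them. The instruction names are invented for the example.

# Toy compiler: human-readable "a OP b" -> primitive machine steps.
def compile_expr(source):
    left, op, right = source.split()
    opcode = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}[op]
    return [f"PUSH {left}", f"PUSH {right}", opcode]

def execute(program):
    stack = []
    for instr in program:
        if instr.startswith("PUSH"):
            stack.append(float(instr.split()[1]))
        else:                          # arithmetic opcode
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a / b}[instr])
    return stack.pop()

program = compile_expr("6 * 7")
print(program)            # ['PUSH 6', 'PUSH 7', 'MUL']
print(execute(program))   # 42.0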

ARPANET – 1969
The Advanced Research Projects Agency (ARPA) was formed with an emphasis on research, and thus was not oriented only to a military product. The formation of this agency was part of the U.S. reaction to the then Soviet Union's launch of Sputnik in 1957 (ARPA draft, III-6). ARPA was assigned to research how to utilize its investment in computers via Command and Control Research (CCR), and Dr. J.C.R. Licklider was chosen to head this effort. The resulting ARPANET, developed for the US DoD's Advanced Research Projects Agency, eventually connected some 60,000 computers for communication among research organizations and universities.

Intel 4004 – 1971
The 4004 was the world's first universal microprocessor. In the late 1960s, many scientists had discussed the possibility of a computer on a chip, but nearly everyone felt that integrated circuit technology was not yet ready to support such a chip. Intel's Ted Hoff felt differently; he was the first person to recognize that the new silicon-gated MOS technology might make a single-chip CPU (central processing unit) possible. Hoff and the Intel team developed such an architecture with just over 2,300 transistors in an area of only 3 by 4 millimeters. With its 4-bit CPU, command register, decoder, decoding control, control monitoring of machine commands, and interim register, the 4004 was one heck of a little invention. Today's 64-bit microprocessors are still based on similar designs, and the microprocessor is still the most complex mass-produced product ever, with more than 5.5 million transistors performing hundreds of millions of calculations each second - numbers that are sure to be outdated fast.

Altair 8800 – 1975
By 1975 the market for the personal computer was demanding a product that did not require an electrical engineering background, and thus the first mass-produced and marketed personal computer (available both as a kit or assembled) was welcomed with open arms. Developers Edward Roberts, William Yates, and Jim Bybee spent 1973-1974 developing the MITS (Micro Instrumentation and Telemetry Systems) Altair 8800. The price was $375; it contained 256 bytes of memory (not 256K), but had no keyboard, no display, and no auxiliary storage device. Later, Bill Gates and Paul Allen wrote their first product for the Altair -- a BASIC interpreter. (The Altair itself was reportedly named after a destination in a Star Trek episode.)

Cray 1 – 1976
It looked like no other computer before, or for that matter, since. The Cray 1 was the world's first "supercomputer," a machine that leapfrogged existing technology when it was introduced in 1976. And back then, you couldn't just order up fast processors from Intel. "There weren't any microprocessors," says Gwen Bell of The Computer Museum History Center. "These individual integrated circuits that are on the board performed different functions." Each Cray 1, like this one at The Computer Museum History Center, took months to build. The hundreds of boards and thousands of wires had to fit just right. "It was really a hand-crafted machine," adds Bell. "You think of all these wires as a kind of mess, but each one has a precise length."

IBM PC – 1981
On August 12, 1981, IBM released their new computer, named the IBM PC. The "PC" stood for "personal computer," making IBM responsible for popularizing the term "PC." The first IBM PC ran on a 4.77 MHz Intel 8088 microprocessor. The PC came equipped with 16 kilobytes of memory, expandable to 256K. It came with one or two 160K floppy disk drives and an optional color monitor. The price tag started at $1,565, which would be nearly $4,000 today.

Apple Macintosh – 1984
Apple introduced the Macintosh to the nation on January 22, 1984. The original Macintosh had 128 kilobytes of RAM, although this first model was simply called "Macintosh" until the 512K model came out in September 1984. The Macintosh retailed for $2495. It wasn't until the Macintosh that the general population really became aware of the mouse-driven graphical user interface.

World Wide Web – 1989
"CERN is a meeting place for physicists from all over the world, who collaborate on complex physics, engineering and information handling projects. Thus, the need for the WWW system arose "from the geographical dispersion of large collaborations, and the fast turnover of fellows, students, and visiting scientists," who had to get "up to speed on projects and leave a lasting contribution before leaving." CERN possessed both the financial and computing resources necessary to start the project. In the original proposal, Berners-Lee outlined two phases of the project: First, CERN would "make use of existing software and hardware as well as implementing simple browsers for the user's workstations, based on an analysis of the requirements for information access needs by experiments." Second, they would "extend the application area by also allowing the users to add new material." Berners-Lee expected each phase to take three months "with the full manpower complement": he was asking for four software engineers and a programmer. The proposal talked about "a simple scheme to incorporate several different servers of machine-stored information already available at CERN." Set off in 1989, the WWW quickly gained great popularity among Internet users. For instance, at 11:22 am of April 12, 1995, the WWW server at the SEAS of the University of Pennsylvania "responded to 128 requests in one minute. Between 10:00 and 11:00

Quantum Computing with Molecules
Factoring a number with 400 digits--a numerical feat needed to break some security codes--would take even the fastest supercomputer in existence billions of years. But a newly conceived type of computer, one that exploits quantum-mechanical interactions, might complete the task in a year or so, thereby defeating many of the most sophisticated encryption schemes in use. Sensitive data are safe for the time being, because no one has been able to build a practical quantum computer. But researchers have now demonstrated the feasibility of this approach. Such a computer would look nothing like the machine that sits on your desk; surprisingly, it might resemble the cup of coffee at its side. Several research groups believe quantum computers based on the molecules in a liquid might one day overcome many of the limits facing conventional computers. Roadblocks to improving conventional computers will ultimately arise from the fundamental physical bounds to miniaturization (for example, because transistors and electrical wiring cannot be made slimmer than the width of an atom). Or they may come about for practical reasons--most likely because the facilities for fabricating still more powerful microchips will become prohibitively expensive. Yet the magic of quantum mechanics might solve both these problems.
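A back-of-the-envelope Python sketch shows why classical factoring hits a wall: naive trial division must try on the order of sqrt(N) candidate divisors in the worst case. (Real attacks use much faster sieve methods, but those are still super-polynomial; a quantum computer running Shor's algorithm would be polynomial.)

# Rough worst-case division count for naive trial division of an
# n-digit number: about sqrt(10^n) candidates.
import math

def trial_division_steps(digits):
    """Roughly sqrt(10^digits) divisions in the worst case."""
    return math.isqrt(10 ** digits)

for d in (10, 20, 400):
    steps = trial_division_steps(d)
    print(f"{d}-digit number: ~1e{len(str(steps)) - 1} divisions")
# A 400-digit number needs ~1e200 divisions: hopeless at any clock speed.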

Introduction

Monday, July 23, 2012
Charles Babbage (1791-1871)
Creator of the Analytical Engine - the first general-purpose digital computer (1833). The Analytical Engine was not built until 1943, in the form of the Harvard Mark I.
The Analytical Engine
A programmable, mechanical, digital machine
Could carry out any calculation
Could make decisions based upon the results of the previous calculation
Components: input; memory; processor; output
Ada, Countess of Lovelace (1815-52)
Ada: the mother of computer programming? She wrote a program for computing the Bernoulli numbers on the Analytical Engine - the world's first computer program. Ada, a programming language specifically designed by the US Dept of Defense for developing military applications, was named to honor her contributions towards computing.
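Her program computed Bernoulli numbers; for comparison, here is a short modern Python version. It uses the standard recurrence sum over j from 0 to m of C(m+1, j)·B_j = 0, a restatement of the mathematics rather than a transcription of her program.

# Bernoulli numbers B_0..B_n as exact fractions, via the recurrence
# B_m = -(1/(m+1)) * sum_{j=0}^{m-1} C(m+1, j) * B_j, with B_0 = 1.
from fractions import Fraction
from math import comb

def bernoulli(n):
    B = []
    for m in range(n + 1):
        acc = Fraction(0)
        for j in range(m):
            acc += comb(m + 1, j) * B[j]
        B.append(Fraction(1) if m == 0 else -acc / (m + 1))
    return B

print([str(b) for b in bernoulli(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']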
A lesson that we all can learn from Babbage’s Life
Charles Babbage had huge difficulties raising money to fund his research. As a last resort, he designed a clever mathematical scheme along with Ada, the Countess of Lovelace. It was designed to improve their odds while gambling: they bet money on horse races to raise enough money to support their research experiments. Guess what happened in the end? They lost every penny that they had.

Here is a fact:

In 1997 Deep Blue, a supercomputer designed by IBM, beat Garry Kasparov, the World Chess Champion. That computer was exceptionally fast and did not get tired or bored. It just kept on analyzing the situation and kept on searching until it found the perfect move from its list of possible moves...
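The search idea behind such chess programs can be sketched with minimax: look ahead through every line of play and pick the move whose worst outcome is best. The tiny game tree below is hypothetical, and Deep Blue's real search and evaluation were vastly more sophisticated.

# Minimax over a toy game tree: inner lists are choice points,
# integers are final position scores from our point of view.
def minimax(node, maximizing=True):
    if isinstance(node, int):        # leaf: a position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

game_tree = [
    [3, 12],     # move A: opponent answers with the worst for us -> 3
    [8, 4],      # move B: opponent picks 4
    [2, 14],     # move C: opponent picks 2
]
best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], maximizing=False))
print("best move:", "ABC"[best])     # B: guarantees a score of at least 4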
 
