In the earliest days of computing, the instructions in a program and the data on which it performed calculations were treated as entirely different kinds of information, and they were given to the computer in completely different ways. For instance, data might be stored on punched cards while instructions were entered by manipulating levers. In particular, this meant that instructions had to be entered individually as they were needed, so that the same instruction might have to be given thousands of times.
John von Neumann (1903–1957) realized that, because the instructions were given in the form of symbols, the computer itself could process them in the same way that it processed data. This had major consequences for computing; von Neumann has even been called the father of computing as a result. One important consequence is that a program can be stored within the computer itself and set running with a single command, which saves both operator and computer time, and which makes possible the large programs used today (even the simplest word-processing system, for example, would have been far beyond the reach of computing in the 1930s or 1940s).
It was not until the 1980s that computers began to move beyond von Neumann's ideas. By this time the von Neumann architecture (in which an individual instruction is read in from memory and processed, then the next, and so on) had become the main brake on the speed at which computers could work. The computers being developed today employ parallel processing, in which several processing units, connected together in particular ways, execute different instructions at the same time. SMcL