Human society, as it developed, mastered not only matter and energy but also information. With the appearance and mass distribution of computers, people gained a powerful means of using information resources effectively and of enhancing their intellectual activity. From that moment (the mid-20th century) began the transition from industrial society to information society, in which information becomes the main resource.
The ability of members of society to use complete, timely and reliable information depends to a large extent on the degree of development and mastery of new information technologies, whose basis is the computer. Let us consider the main milestones in the history of their development.
The word “computer” means “calculator”, i.e. a device for calculations. The need to automate data processing, including computing, arose long ago. More than 1,500 years ago, counting sticks, pebbles and similar objects were used for counting.
This topic is relevant because computers have reached all spheres of human activity; nowadays it is difficult to imagine doing without them. Yet not so long ago, until the early 1970s, computers were available to a very limited circle of specialists, and their use, as a rule, remained shrouded in secrecy and little known to the general public. In 1971, however, an event occurred that fundamentally changed the situation and, with fantastic speed, turned the computer into an everyday working tool for tens of millions of people. In that remarkable year, the then almost unknown company Intel, from a small American town with the beautiful name of Santa Clara (California), released the first microprocessor. It is to this invention that we owe the appearance of a new class of computing systems: personal computers, which are now used by virtually everyone, from elementary school students and accountants to scientists and engineers.
In the 21st century it is impossible to imagine life without the personal computer. Computers have become indispensable, people's main assistants. Today there are many computers in the world, made by different companies and belonging to different classes of complexity, purpose and generation.
In this paper, I aim to give a fairly broad picture of the history of computer technology development.
Thus, the purpose of my paper is to review the development of computer technology from ancient times to the present, giving a brief overview of counting devices from the pre-mechanical period to modern computers.
COUNTING DEVICES BEFORE THE ADVENT OF THE COMPUTER
At all times people have needed to count. As to when mankind learned to count, we can only speculate, but we can say with confidence that for simple counting our ancestors used their fingers, a method we successfully use to this day. But what if you need to remember the results of calculations, or to count more than the fingers of your hands? In that case you can make notches on wood or on bone. This is most likely what the first people did, as archaeological excavations show. Probably the oldest such tool is a notched bone found at the ancient site of Dolní Věstonice in southern Moravia. This object, called the Vestonice bone, was presumably in use around 30,000 years B.C. Although some sophisticated counting systems were invented at the dawn of human civilization, the use of notches for counting continued for quite a long time. Finger counting is by far the oldest and simplest method of calculation, and among many peoples the fingers remained a counting tool even at higher stages of development. Among these peoples were the Greeks, who kept finger counting as a practical tool for a very long time.
Counting on stones
To make counting more convenient, primitive man began to use small stones instead of fingers. He would lay out a pyramid of stones and determine how many stones were in it, but if the number was large it was difficult to count them by eye. So he began to stack smaller pyramids of equal size, and because people have ten fingers, each pyramid was made of ten stones.
Counting on the Abacus
In the times of the most ancient cultures, man had to solve problems connected with trade calculations, with reckoning time, with determining the area of plots of land, and so on. The growth of such calculations even led to specially trained people, proficient in the technique of arithmetic, being invited from one country to another. So sooner or later devices to facilitate everyday calculations were bound to appear.
Thus, in Ancient Greece and Rome, counting instruments called abaci were invented (from the Greek word abakion, meaning “board covered with dust”); the Roman version is known as the Roman abacus. Calculations on them were carried out by moving counters and pebbles (calculi) along grooved strips in boards of bronze, stone, ivory or colored glass. In its primitive form the abacus was a board (later it took the form of a board divided into columns by partitions). Lines drawn on it divided it into columns, and the pebbles were arranged in these columns according to the same positional principle used to place numbers on a modern counting frame. These counting boards survived until the Renaissance.
In the countries of the Ancient East (China, Japan, Indochina) there was the Chinese abacus. Each rod or wire of this abacus carried five beads below the crossbar and two above it; counting was done in ones and fives.
The first device for performing multiplication was a set of wooden blocks known as Napier's rods. They were invented by the Scotsman John Napier (1550-1617); the set of wooden rods carried a multiplication table. Napier also invented logarithms.
Napier left a notable mark on history with his invention of logarithms, published in 1614. His tables, whose calculation was very time-consuming, were later “built into” a handy device that greatly sped up calculation, the slide rule, invented in the late 1620s. In 1617 Napier also invented another way of multiplying numbers. The instrument, called Napier's bones, consisted of a set of segmented rods that could be arranged so that, by adding the numbers in horizontally adjacent segments, one obtained the result of their multiplication.
Napier's theory of logarithms was destined to find extensive application. His “bones”, however, were soon superseded by the slide rule and other calculating devices, mainly mechanical ones, the first inventor of which was the brilliant Frenchman Blaise Pascal.
The slide rule
The development of calculating devices kept pace with advances in mathematics. Shortly after the discovery of logarithms, in 1623, the slide rule was invented.
In 1654 Robert Bissaker, and in 1657 independently Seth Partridge (England), developed the rectangular slide rule, a counting instrument that simplifies calculations by replacing operations on numbers with operations on the logarithms of those numbers. The design of the slide rule has largely survived to this day.
The slide rule was destined for a long life: from the 17th century to the present day. Calculations with a slide rule are simple and quick, but approximate, which makes it unsuitable for exact calculations, e.g. in finance.
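The principle behind the instrument can be sketched in a few lines of Python (the function name here is illustrative, not historical): multiplying two numbers is replaced by adding their logarithms, and the limited precision of reading a physical scale is modeled by rounding.

```python
import math

def slide_rule_multiply(a, b, precision=3):
    """Multiply a and b the way a slide rule does: add logarithms,
    then round as if reading the result off an engraved scale."""
    log_sum = math.log10(a) + math.log10(b)  # sliding the scales adds lengths
    result = 10 ** log_sum                   # reading the answer back off the scale
    return round(result, precision)          # a scale can only be read approximately

# The logarithmic detour gives 2 * 3 = 6 up to rounding error.
print(slide_rule_multiply(2, 3))
```

This is exactly why slide-rule results are quick but approximate: every reading is limited by how finely the scale can be engraved and read.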
Leonardo da Vinci (1452-1519) designed a mechanical thirteen-digit adding device with ten wheels. In modern times IBM built a working machine from these drawings for advertising purposes.
The first mechanical counting machine was made in 1623 by the mathematics professor Wilhelm Schickard (1592-1636). It mechanized the operations of addition and subtraction, and partly mechanized multiplication and division. But Schickard's machine was soon destroyed in a fire, so the history of mechanical calculating devices is usually traced to the adding machine made in 1642 by Blaise Pascal.
In 1673, another great mathematician, Gottfried Leibniz, developed a calculating machine that could already multiply and divide.
In 1880 W. T. Odhner created in Russia an arithmometer with a variable number of teeth, and in 1890 he set up mass production of improved arithmometers, which in the first quarter of the 20th century were the basic calculating machines used all over the world. A modernized version, the Felix, was produced in the USSR until the 1950s.
The idea of an automatic computing machine that would operate without human help was first proposed by the English mathematician Charles Babbage (1791-1864) in the early 19th century. In 1820-1822 he built a machine able to calculate tables of values of second-order polynomials.
Blaise Pascal’s machine
The first mechanical machine capable of addition and subtraction is said to have been invented in 1642 by the young, 19-year-old French mathematician and physicist Blaise Pascal. It was called the Pascaline.
This machine was designed to work with 6-8 digit numbers; it could only add and subtract, and it had a better way of recording the result than anything before it. Pascal's machine measured 36 × 13 × 8 centimeters; this small brass box was convenient to carry around. It was operated by several special handles and had a number of small toothed wheels. The first wheel counted ones, the second tens, the third hundreds, and so on. Addition on Pascal's machine is performed by turning the wheels forward; moving them backward performs subtraction.
Although the “Pascaline” was universally admired, it did not make the inventor rich. Nevertheless, the principle of coupled wheels he invented was the basis on which most calculating machines were built for the next three centuries. Pascal’s engineering ideas had a great influence on many other inventions in the field of computing.
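The coupled-wheel principle, in which a decimal wheel passing from 9 back to 0 advances its neighbour by one step, can be modeled loosely in Python (a sketch of the idea, not a description of the actual mechanism):

```python
class WheelRegister:
    """A row of decimal wheels, as in the Pascaline: ones, tens, hundreds..."""
    def __init__(self, n_wheels=6):
        self.wheels = [0] * n_wheels  # wheels[0] = ones, wheels[1] = tens, ...

    def add(self, amount):
        """Turn the ones wheel forward; carries ripple to the coupled wheels."""
        carry = amount
        for i in range(len(self.wheels)):
            total = self.wheels[i] + carry
            self.wheels[i] = total % 10  # each wheel shows a single digit
            carry = total // 10          # a full turn advances the next wheel
        # any carry left over is lost: the register simply overflows

    def value(self):
        return sum(d * 10**i for i, d in enumerate(self.wheels))

r = WheelRegister()
r.add(258)
r.add(347)
print(r.value())  # 258 + 347 = 605
```

Subtraction on the real machine was done by a complement trick, since the wheels could not be turned backward through the carry mechanism; the sketch above models addition only.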
The main disadvantage of the Pascaline was the inconvenience of performing all operations on it, except simple addition. The first machine that made it easy to do subtraction, multiplication and division was invented later in the 17th century in Germany. Gottfried Wilhelm Leibniz was credited with this invention.
Gottfried Leibniz’s machine
The next step was the invention of a machine that could multiply and divide. The German Gottfried Leibniz invented such a machine in 1671. While in Paris, Leibniz met the Dutch mathematician and astronomer Christiaan Huygens. Seeing how much calculation an astronomer had to do, Leibniz decided to invent a mechanical device that would make calculations easier: “for it is unworthy of such wonderful men to waste time, like slaves, on computational work that could be entrusted to anyone with the use of a machine.”
Although Leibniz's machine was similar to the Pascaline, it had a moving part and a handle that turned a special wheel and cylinders inside the machine. This mechanism made it possible to speed up the repeated additions required for multiplication, and the repetition itself was automatic.
In 1673 he made a mechanical calculator. But it was not this machine that chiefly made him famous; it was his creation of the differential and integral calculus. He also laid the foundations of the binary number system, which later found use in automatic calculating devices.
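The binary notation whose foundations Leibniz laid is exactly the system used in today's computers: any whole number can be written with only the digits 0 and 1. A short illustration in Python:

```python
def to_binary(n):
    """Express a non-negative integer using only the digits 0 and 1,
    as in Leibniz's binary notation."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder modulo 2 gives the next bit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(13))  # 13 = 8 + 4 + 1 -> "1101"
```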
Jacquard's punch cards
The next stage in the development of calculating devices seemed, at least at first, to have nothing to do with numbers. Throughout the 18th century, French silk-weaving factories experimented with various mechanisms that controlled the loom by means of perforated tape, punched cards, or wooden drums. In all three systems the thread was raised or lowered according to the presence or absence of holes, thus creating the desired fabric pattern.
The French weaver and mechanic Joseph Jacquard created the first example of a machine controlled by information fed into it. In 1802 he built a loom that facilitated the production of fabrics with complex patterns. In making such a fabric, each row of threads must be raised or lowered; the loom then pulls another thread between the raised and lowered threads. Next, each thread is raised or lowered in a certain order and the machine runs the thread through them again. This process is repeated many times until the desired length of patterned fabric is obtained. Jacquard used rows of holes on cards to set the pattern. If ten threads were used, each row of the card had room for ten holes. The card was mounted on the loom in a device that could detect the holes in it, using probes to check each row of holes.
The machine was programmed with a whole deck of punched cards, each of which controlled one pass of the shuttle. The information on the card controlled the machine.
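The loom's control principle is easy to model: each card row is a pattern of holes, and each hole decides whether the corresponding thread is raised for that pass of the shuttle. The card format below is invented purely for illustration:

```python
# Each string is one punched card row: '*' = hole (raise the thread),
# '.' = no hole (leave the thread down). One row = one pass of the shuttle.
deck = [
    "*.*.*.*.",
    ".*.*.*.*",
    "**..**..",
]

def weave(deck):
    """Turn a deck of cards into raise/lower instructions, thread by thread,
    the way the loom's probes read each row of holes."""
    passes = []
    for card in deck:
        passes.append(["up" if probe == "*" else "down" for probe in card])
    return passes

for row in weave(deck):
    print(" ".join(row))
```

The deck as a whole is, in effect, a stored program: change the cards and the same loom weaves a different pattern.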
Of all the inventors of the past centuries who contributed in one way or another to the development of computer technology, Charles Babbage of England was the closest to creating a computer in its modern sense.
Charles Babbage's Difference Engine
In 1812 the English mathematician Charles Babbage began working on the so-called Difference Engine, which was to calculate tables of functions, including trigonometric functions. In 1822 he built a counting device which he called the Difference Engine; it used toothed digit wheels to perform a number of mathematical operations. However, for lack of funds the machine was never completed, and it was turned over to the King's College Museum in London, where it is kept to this day.
This failure, however, did not stop Babbage, and in 1834 he embarked on a new project: the construction of an Analytical Engine that was to perform calculations without human intervention. To do so it had to be able to execute programs entered on punched cards (cards of thick paper with information encoded as holes, as in looms) and to have a “store” for remembering data and intermediate results (in modern terminology, memory). From 1842 to 1848 Babbage worked hard, spending his own money. The Analytical Engine, unlike its predecessor, was to solve not just mathematical problems of one particular type, but to carry out various computational operations according to instructions given by the operator. It was, in fact, nothing less than the first universal programmable computer. But if the Difference Engine had doubtful chances of success, the Analytical Engine looked altogether unrealistic. It simply could not be built and put into operation: in its final form the machine would have been no smaller than a railroad locomotive, its insides a jumble of steel, copper and wooden parts and clockwork mechanisms driven by a steam engine. The slightest instability of any tiny part would be amplified a hundredfold in other parts, and the whole machine would fall into disrepair.
Unfortunately, he was not able to complete the work on the Analytical Engine: it was too complicated for the technology of the time. But it is to Babbage's credit that he was the first to propose, and partly to realize, the idea of program-controlled computing. The Analytical Engine was in essence the prototype of the modern computer.
In 1985 the staff of the Science Museum in London decided to find out whether it was actually possible to build Babbage's calculating machine. After several years of hard work their effort was crowned with success: in November 1991, shortly before the bicentennial of the famous inventor's birth, the Difference Engine performed its first serious calculation.
It was not until 19 years after Babbage's death that one of the principles underlying the Analytical Engine, the use of punched cards, was embodied in a working device. It was a statistical tabulator built by the American Herman Hollerith to speed up the processing of the 1890 US census.
Herman Hollerith’s tabulator
In the late 19th century, more complex mechanical devices were created. The most important of these was the device developed by the American Herman Hollerith. Its uniqueness lay in the fact that it was the first to use the idea of punched cards, and that its calculations were carried out with the help of electric current. This combination made the machine so workable that it was widely used in its day. For example, in the 1890 U.S. census Hollerith's machines did in three years what would otherwise have taken a much larger number of people seven years to do by hand.
Konrad Zuse's binary digital machine
Babbage's machine did not catch the attention of engineers until about 100 years later. At the end of the 1930s, the German engineer Konrad Zuse developed the first binary digital machine, the Z1. His subsequent machines made extensive use of electromechanical relays, that is, mechanical switches actuated by electric current. In 1941 Zuse created the Z3, a machine that was fully program-controlled.
Howard Aiken's computing machine
World War II gave a major boost to the development of computer technology: the US military needed a computer.
In 1944 the American Howard Aiken, at one of the IBM company's plants, built the Mark-1, a computer quite powerful for its time. This machine used mechanical elements (counting wheels) to represent numbers and electromechanical relays for control. Programs were entered from punched tape. The machine measured about 15 m by 2.5 m and contained 750,000 parts. The Mark-1 could multiply two 23-digit numbers in 4 seconds.
THE BEGINNING OF THE ERA OF THE COMPUTER
ENIAC, the first electronic computer, was created at the end of 1945 in the USA.
The main ideas on which computing technology developed for many years were formulated in 1946 by the American mathematician John von Neumann. They came to be known as the von Neumann architecture.
In 1949 the first computer with the von Neumann architecture was built: the English EDSAC machine. A year later, the American EDVAC computer appeared.
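The core of the von Neumann architecture, a single memory holding both program and data, processed by a fetch-decode-execute cycle, can be sketched as a toy machine (the instruction set below is invented for illustration, not taken from any real machine):

```python
# Toy von Neumann machine: instructions and data share one memory.
# Each instruction is a (opcode, operand address) pair.
def run(memory):
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        op, addr = memory[pc]  # fetch the instruction at the program counter
        pc += 1
        if op == "LOAD":       # decode and execute
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: compute memory[6] = memory[4] + memory[5]
memory = [
    ("LOAD", 4),
    ("ADD", 5),
    ("STORE", 6),
    ("HALT", 0),
    20, 22, 0,
]
print(run(memory)[6])  # 42
```

Because program and data live in the same memory, a program could in principle read or even rewrite its own instructions; this is the key idea that distinguishes the von Neumann design from fixed-program machines like the Jacquard loom.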
Serial production of computers began in the 1950s.
It is customary to divide electronic computing technology into generations, associated with changes in the element base. Different generations of machines also differ in their logical architecture and software, speed, RAM size, and methods of input and output.
THE FIRST GENERATION OF COMPUTERS
The first generation of computers were the vacuum-tube machines of the 1950s. The counting speed of the fastest first-generation machines reached 20 thousand operations per second. Programs and data were entered from punched tape and punched cards. Because the internal memory of these machines was small (it could hold a few thousand numbers and program instructions), they were mainly used for engineering and scientific calculations not connected with processing large amounts of data. They were rather cumbersome constructions containing thousands of tubes, sometimes occupying hundreds of square meters and consuming hundreds of kilowatts of electricity. Programs for these machines were written in machine-command languages, so programming at that time was accessible to only a few.
THE SECOND GENERATION OF COMPUTERS
In 1949, the first semiconductor device to replace the electronic tube was created in the USA. It was called the transistor. In the 1960s, transistors became the element base of second-generation computers. The transition to semiconductor elements improved computers in every respect: they became more compact, more reliable and less power-hungry. The performance of most machines reached tens and hundreds of thousands of operations per second. The volume of internal memory increased hundreds of times compared with first-generation computers. External (magnetic) memory devices, magnetic drums and magnetic tape storage, were highly developed; this made it possible to create reference and retrieval systems on computers (driven by the need to store large amounts of information on magnetic media for a long time). In the second generation, high-level programming languages began to be actively developed; the first of them were FORTRAN, ALGOL and COBOL. Programming as an element of literacy began to spread widely, mainly among people with higher education.
THE THIRD GENERATION OF COMPUTERS
The third generation of computers was created on a new element base: integrated circuits. Complex electronic circuits were mounted on a small plate of semiconductor material with an area of less than 1 cm²; they were called integrated circuits (ICs). The first ICs contained dozens, then hundreds of elements (transistors, resistors, etc.). When the degree of integration (the number of elements) approached a thousand, they came to be called large-scale integrated circuits (LSI); later came very large-scale integrated circuits (VLSI). Third-generation computers began to be produced in the second half of the 1960s, when the American company IBM started producing its IBM-360 family of machines. In the Soviet Union, production of the ES series (Unified System of computers) began in the 1970s. The transition to the third generation was associated with significant changes in computer architecture. It became possible to execute several programs simultaneously on one machine; this mode of operation is called multiprogramming. The speed of the most powerful models reached several million operations per second. Third-generation machines acquired a new type of external memory device, the magnetic disk, and new types of input-output devices, displays and plotters, came into wide use. During this period the field of application of computers expanded considerably: databases, the first artificial-intelligence systems, computer-aided design (CAD) and automated control systems were created. In the 1970s a line of small (mini) computers developed powerfully.
THE FOURTH GENERATION OF COMPUTERS
Another revolutionary event in electronics occurred in 1971, when the American company Intel announced the creation of the microprocessor: a very large-scale integrated circuit capable of acting as the main unit of a computer, the processor. Initially microprocessors were built into various technical devices: machine tools, cars, aircraft. Combining a microprocessor with input/output devices and external memory produced a new type of computer, the microcomputer. Microcomputers belong to the fourth generation of machines. Their essential difference from their predecessors is their small size (comparable to a household television set) and comparative cheapness. This was the first type of computer to appear in retail sale.
The most popular type of computer today is the personal computer (PC). The first PC was born in 1976 in the United States. Since 1980 the “trendsetter” in the PC market has been the American company IBM, whose designers managed to create an architecture that became, in effect, the international standard for professional PCs. Machines of this series were called IBM PCs (Personal Computers). The appearance and spread of the PC is comparable in its significance for social development to the appearance of printing. It was the PC that made computer literacy a mass phenomenon, and with the development of this type of machine came the concept of “information technology”, without which most areas of human activity can no longer manage.
Another line in the development of computers of the fourth generation is the supercomputer. Machines of this class have a performance of hundreds of millions and billions of operations per second. A supercomputer is a multiprocessor computing complex.
THE FIFTH GENERATION OF COMPUTERS
Developments in the field of computing continue. Fifth-generation computers are machines of the near future. Their main quality is to be a high intellectual level: they will be capable of voice input, voice communication, machine “vision” and machine “touch”. Fifth-generation machines are intended to realize artificial intelligence.
The personal computer has quickly entered our lives. A few decades ago it was a rarity to see a personal computer: they existed, but they were very expensive, and not every firm could have a computer in its office. Now every home has a computer, and it has become deeply embedded in human life.
Modern computers represent one of the most significant achievements of human thought, the impact of which on the development of scientific and technological progress is difficult to overestimate. The field of application of computers is enormous and constantly expanding.
Even 30 years ago there were only about 2,000 different applications of microprocessor technology: production control (16%), transport and communications (17%), information and computing technology (12%), military equipment (9%), household appliances (3%), education (2%), aviation and space (15%), medicine (4%), scientific research, municipal and urban services, banking, metrology and other fields.
For many, a world without a computer is distant history, about as distant as the discovery of America or the October Revolution. But every time you turn on a computer, it is impossible not to marvel at the human genius that created this miracle.
Today's IBM PC-compatible personal computers are the most widely used kind of computer; their power is constantly growing and their field of application is expanding. These computers can be networked, allowing dozens or hundreds of users to exchange information easily and to access shared databases simultaneously. E-mail enables computer users to send text and facsimile messages to other cities and countries and to obtain information from large data banks over an ordinary telephone network.
The global electronic communication system, the Internet, provides at very low cost the possibility of obtaining information from all corners of the world, offers voice and fax communications, and facilitates the creation of corporate information-transfer networks for firms with offices in different cities and countries.
However, the information-processing capacity of IBM PC-compatible personal computers is still limited, and their use is not justified in every situation.
Personal computers, of course, have undergone significant changes during their victorious march across the planet, but they have also changed the world itself.