
Programming history and first languages
22.03.2024
The history of programming languages dates back to the 1940s, when the need first arose to interact with electronic computers by giving them commands. Since the advent of the first computer in 1945, both the machines themselves and the ways of interacting with them have grown significantly more complex.
The first programming languages were simple sets of binary codes, while the newest are complex object-oriented systems that allow programs with very rich functionality to be written with minimal time and resources. In this article, we will look in detail at the history and evolution of programming languages and analyze their features and influence on each other.
A Brief History of Programming Development
In fact, the history of programming languages began well before the invention of the computer. The foundation for all modern languages was the algorithm written by Ada Lovelace in 1843 for Charles Babbage's Analytical Engine.

Ada Lovelace (image from Wikipedia)
This device could not yet be called a computer, since it was entirely mechanical, but Lovelace's algorithm nevertheless made it possible to give the machine quite complex tasks (for example, solving an equation with several unknowns).

Charles Babbage's Analytical Engine

Charles Babbage
The modern history of programming is divided into five periods.
- The first languages were based on computer-readable binary code, which made the process of writing programs long and complex.
- Towards the end of the 1940s, a new syntax appeared—now code words, rather than sequences of numbers, were used to communicate with a computer.
- High-level languages of the third generation made it possible to engage in programming without a thorough knowledge of the structure and principles of the operation of a computer.
- The next “breakthrough” was the emergence of languages whose syntax was close to spoken English.
- The newest languages, the fifth generation, are object-oriented: the program consists of separate independent modules and operates with objects, not code words.
Binary codes
To better understand the history of the origin of programming languages, let's remember the structure of the computer. Computers perceive only sequences of zeros and ones, that is, the presence or absence of voltage in the circuit. That is why the first languages were binary; they consisted of zeros and ones arranged in a certain sequence.
Such algorithms were understandable to machines but very hard for people: errors were nearly impossible to spot visually. In addition, writing programs required knowing the structure of a specific computer and the operating features of its individual blocks. As a result, there were few programmers, and their work was complex and monotonous. Under such conditions, creating complex programs was out of the question.
The first low-level language
The transition from binary codes to human-readable algorithms occurred in the mid-1940s. The first “human-oriented” language can be considered Plankalkül (German for “plan calculus”), created by the German engineer Konrad Zuse.

Konrad Zuse
When the history of programming is discussed, this scientist is often forgotten, but in fact his contribution is hard to overstate.
The German engineer not only created the language but also developed his own computer, the Z1. To control the device, a keyboard from a typewriter was used, and the power source was a motor from an old vacuum cleaner. Zuse consistently improved his invention; the Z4 version can already be safely called the prototype of modern computers.

Z4 computer
It was for the Z4 that a concept innovative for its time was developed: dividing the operation of the device into two parts, the computer program and the calculations the machine performs on its basis (in effect, an analogue of modern hardware and software). Plankalkül used a symbolic rather than binary vocabulary and had its own assignment operators. The language could have been a real breakthrough, but Germany's defeat in World War II halted all development.
Assembly language
The advent of assemblers made it possible to simplify the programming process compared to binary codes. Now computer commands could be given not in long digital sequences, but in combinations of numbers and code words. Programming in assembly language has become much easier compared to binary code, but this language is still quite complex and cumbersome.
To be fair, it can only be called cumbersome in comparison with modern languages. For its time it was a breakthrough, since the size of programs shrank significantly, but a simpler architecture was still needed to write complex algorithms. In addition, like Plankalkül, assembly language is low-level: programs written in it suit only specific computers, and when moving to a new platform, the algorithm must be reworked.
In the 1950s, the history of programming moved to a new stage: high-level languages appeared. But assemblers are still used today. Since programs written in low-level languages run significantly faster than high-level ones, they remain popular with hackers, antivirus creators, and driver and computer-game developers.
High-level languages
The emergence of high-level languages has further simplified the process of writing programs. Third-generation languages were not tied to a specific computer model. To ensure interaction between the program and the machine, compilers were used, which translated the program code into a language “understandable” to a given computer.
Thus, there was no need to write code in a form that was understandable for a specific machine. Third-generation languages have become more abstract and understandable to humans. Now even people who did not understand all the intricacies of computer operation could become programmers. In addition, it became possible to concentrate all efforts on the logic of the algorithm being developed and not on the design features of a particular computer.
The first language in this category was Fortran, written for IBM computers (its name stands for FORmula TRANslator). Interestingly, the Fortran compiler was an optimizing one from the start, since programmers had no interest in code whose performance was inferior to assembly language. Initially, the attitude towards Fortran was skeptical, but its popularity subsequently grew, forcing other computer manufacturers to write compilers for their devices. Fortran is still used today, mainly because so many programs have already been written in it that rewriting them would be pointless.

An IBM computer running Fortran
Looking at the history of programming languages in detail, it is worth noting that Fortran is not the only known high-level language.
- ALGOL was developed in Zurich, Switzerland, as a European competitor to the American product. It was originally called IAL (International Algebraic Language), but since the word “Algol” came into common use, the language was renamed ALGOL 58.
- LISP was created for processing lists (its name stands for LISt Processing). The language works on a function basis, which makes it fairly easy to write and debug complex programs. LISP has several dialects that differ slightly in functionality.
- COBOL is intended primarily for developing software for business and economics. The language has a clear syntax and structure and provides efficient work with large amounts of data, but it is not suitable for complex engineering calculations.
It was the third generation of languages that made it possible to create programs whose functionality could be useful to ordinary users, not just scientists or financiers.
The only disadvantages of high-level languages were the larger size of programs and increased memory requirements compared to low-level ones. But the history of programming continued, and structured languages were already on the way.
Structural languages
In the 1970s, the history of programming took another step forward. The first structured languages were created, which represented a program as a visual hierarchical structure, and their syntax moved even closer to human language. Structures made it possible to combine different types of data, work with them as a single unit, and build interdependent sequences of them. This is much easier than working with individual variables, which become easy to confuse once the code grows long enough.
The most famous structured language that has had a significant impact on the entire programming industry as a whole is C, but it is far from the only one.
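The idea of combining different types of data into a single structure can be sketched in Python (a later language than those discussed here; the `Employee` record and the figures below are illustrative inventions, not from the article):

```python
from collections import namedtuple

# A record ("structure") combining three different data types into
# a single unit, instead of three loose parallel variables.
Employee = namedtuple("Employee", ["name", "age", "salary"])

staff = [
    Employee("Ada", 36, 5200.0),
    Employee("Konrad", 85, 4100.0),
]

def average_salary(records):
    # The records are processed as one sequence, which is far less
    # error-prone than keeping separate name/age/salary variables in sync.
    return sum(r.salary for r in records) / len(records)

print(average_salary(staff))  # -> 4650.0
```

The same grouping idea appears as `struct` in C, the structured language named above.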
One of the first fourth-generation languages was Prolog, created in 1972, which was based on using human logic to write programs. Today, various versions of Prolog are used to build functions for working with large amounts of data (including for search engines).
The Smalltalk language is interesting because it was during its development that the term “object-oriented” was first used. It was coined by developer Alan Kay, who sought to create a language structured like the cellular structure of an organism, in which individual cells exchange information with each other. It is also worth noting that Kay developed a user interface based on icons rather than a console.
Structural languages had one significant drawback: they did not allow working with complex and long program codes. Creating new, increasingly complex software required a new approach.
Object-Oriented Programming
Fifth-generation languages appeared in the 1980s and are currently the most advanced. Object-oriented languages work with classes: data types (for example, strings or numbers) on whose basis objects are created. An object has all the properties of its class but may differ in particular characteristics. A class contains not only the variables themselves but also the functions that work with them.
The program is written not as a whole "canvas," but in modules, which makes programming a simpler and, at the same time, more creative task. In addition, individual components are easier to create and edit. It also became possible to copy individual pieces of code, slightly changing them to perform related tasks, instead of writing everything from scratch. The testing process has also become significantly easier; you can check not the entire program but individual classes or modules.
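The class/object relationship described above can be sketched in Python, one of the OOP languages the article names (the `Shape`/`Rectangle` classes are an illustrative invention, not from the article):

```python
class Shape:
    """A class bundles variables with the functions that work on them."""

    def __init__(self, name):
        self.name = name

    def describe(self):
        # Shared behavior inherited by every subclass.
        return f"{self.name} with area {self.area():.1f}"

    def area(self):
        raise NotImplementedError


class Rectangle(Shape):
    # An object has all the properties of its class but differs
    # in its own characteristics (here: width and height).
    def __init__(self, width, height):
        super().__init__("rectangle")
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height


r = Rectangle(3, 4)
print(r.describe())  # -> rectangle with area 12.0
```

Each class can also be tested in isolation, which illustrates the modular testing advantage mentioned above.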
In fact, the first object-oriented language was Simula 67, created back in 1967. The language had built-in support for classes and subclasses, but it was too far ahead of its time and, moreover, was not implemented very efficiently, so it was undeservedly forgotten.
Still, it cannot be ignored in the history of programming languages: Bjarne Stroustrup, the developer of one of the most popular OOP languages, C++, relied specifically on Simula's concepts in his work.
Other examples of modern OOP are JavaScript, C#, Object Pascal, and Python.
Used terms
- Programming language ⟶ Is a formal set of instructions that can be used to produce various kinds of output, including software applications, algorithms, and data processing. Programming languages provide a way for developers to communicate with computers, enabling them to specify operations and control the behavior of hardware.