In some ways, the history of programming language theory predates even the development of programming languages themselves. The lambda calculus, developed by Alonzo Church and Stephen Cole Kleene in the 1930s, is considered by some to be the world's first programming language, even though it was intended to model computation rather than being a means for programmers to describe algorithms to a computer system. Many modern functional programming languages have been described as providing a "thin veneer" over the lambda calculus,[2] and many are easily described in terms of it.
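The "thin veneer" claim can be made concrete: because the lambda calculus builds everything from single-argument functions, any language with first-class functions can host it directly. The following sketch (illustrative, not from the source) encodes Church numerals, the lambda calculus's representation of natural numbers, using nothing but Python's `lambda`:

```python
# Church numerals: the number n is the function that applies f n times.
# Pure lambda-calculus terms, transliterated into Python syntax.
zero = lambda f: lambda x: x                      # λf.λx. x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # λn.λf.λx. f (n f x)
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral back to a native int by counting applications.
to_int = lambda n: n(lambda k: k + 1)(0)
```

For example, `to_int(plus(succ(zero))(succ(succ(zero))))` computes 1 + 2 entirely by function application.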
The first programming language to be invented was Plankalkül, which was designed by Konrad Zuse in the 1940s but not publicly known until 1972 (and not implemented until 1998). The first widely known and successful high-level programming language was Fortran, developed from 1954 to 1957 by a team of IBM researchers led by John Backus. The success of Fortran led to the formation of a committee of scientists to develop a "universal" computer language; the result of their effort was ALGOL 58. Separately, John McCarthy of MIT developed Lisp, the first language with origins in academia to be successful. With the success of these initial efforts, programming languages became an active topic of research in the 1960s and beyond.
Some other key events in the history of programming language theory since then:
In the 1950s, Noam Chomsky developed the Chomsky hierarchy in the field of linguistics, a discovery which has directly influenced programming language theory and other branches of computer science.
In 1964, Peter Landin was the first to realize that Church's lambda calculus could be used to model programming languages. He introduced the SECD machine, which "interprets" lambda expressions.
In 1966, Landin introduced ISWIM, an abstract computer programming language, in his article "The Next 700 Programming Languages". It was influential in the design of the languages leading to the Haskell programming language.
In 1966, Corrado Böhm introduced the programming language CUCH (Curry-Church).[3]
In 1972, logic programming and Prolog were developed, allowing computer programs to be expressed as mathematical logic.
In the 1970s, a team of scientists at Xerox PARC led by Alan Kay developed Smalltalk, an object-oriented language widely known for its innovative development environment.
In his 1977 Turing Award lecture, Backus assailed the state of industrial languages at the time and proposed a new class of programming languages now known as function-level programming languages.
In 1985, the release of Miranda sparked academic interest in lazily evaluated pure functional programming languages. A committee was formed to define an open standard, resulting in the release of the Haskell 1.0 standard in 1990.
There are several fields of study that either lie within programming language theory, or which have a profound influence on it; many of these have considerable overlap. In addition, PLT makes use of many other branches of mathematics, including computability theory, category theory, and set theory.
Formal semantics is the formal specification of the behaviour of computer programs and programming languages. Three common approaches to describing the semantics or "meaning" of a computer program are denotational semantics, operational semantics, and axiomatic semantics.
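Of the three, (small-step) operational semantics is the easiest to illustrate in code: the meaning of a program is defined by a relation that rewrites an expression one step at a time until a value remains. The following toy interpreter (an illustrative sketch; the tuple representation is an assumption, not a standard) does exactly that for an addition-only language:

```python
def step(e):
    """One small-step reduction. Expressions are ints (values)
    or tuples ('add', left, right)."""
    op, l, r = e
    if isinstance(l, tuple):       # congruence rule: reduce the left operand first
        return (op, step(l), r)
    if isinstance(r, tuple):       # then the right operand
        return (op, l, step(r))
    return l + r                   # both operands are values: apply the add rule

def evaluate(e):
    """The evaluation relation: iterate step until a value is reached."""
    while isinstance(e, tuple):
        e = step(e)
    return e
```

A denotational semantics for the same language would instead map each expression directly to a mathematical number; an axiomatic semantics would describe it via pre- and postconditions.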
Type theory is the study of type systems, which are "a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute".[4] Many programming languages are distinguished by the characteristics of their type systems.
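The quoted definition, classifying phrases by the kinds of values they compute, can be demonstrated with a tiny type checker (a hypothetical sketch over the same tuple-style toy expressions; the names are assumptions):

```python
def typecheck(e):
    """Assign a type ('int' or 'bool') to a toy expression, or raise
    TypeError: the 'certain program behaviors' ruled out here are
    adding booleans and mixing branch types."""
    if isinstance(e, bool):        # check bool before int: bool subclasses int in Python
        return 'bool'
    if isinstance(e, int):
        return 'int'
    if e[0] == 'add':              # ('add', l, r) : both operands must be int
        _, l, r = e
        if typecheck(l) == typecheck(r) == 'int':
            return 'int'
        raise TypeError("'add' expects two ints")
    if e[0] == 'if':               # ('if', c, t, f) : bool condition, matching branches
        _, c, t, f = e
        if typecheck(c) != 'bool':
            raise TypeError('condition must be bool')
        t_ty, f_ty = typecheck(t), typecheck(f)
        if t_ty != f_ty:
            raise TypeError('branches must have the same type')
        return t_ty
    raise TypeError('unknown expression form')
```

Crucially, the checker never runs the program: it proves the absence of these errors purely from the syntax, which is what makes the method "tractable".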
Program analysis is the general problem of examining a program and determining key characteristics (such as the absence of classes of program errors). Program transformation is the process of transforming a program in one form (language) into another form.
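A classic, small example of program transformation is constant folding: rewriting a program so that computations among constants are performed once, ahead of time. Python's standard `ast` module makes a sketch easy (the function name is an assumption; `ast.unparse` requires Python 3.9+):

```python
import ast

def fold_constants(src):
    """Toy source-to-source transformation: fold additions of
    integer literals in Python source code."""
    class Folder(ast.NodeTransformer):
        def visit_BinOp(self, node):
            self.generic_visit(node)            # fold children first (bottom-up)
            if (isinstance(node.op, ast.Add)
                    and isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)):
                return ast.Constant(node.left.value + node.right.value)
            return node

    tree = Folder().visit(ast.parse(src))
    return ast.unparse(ast.fix_missing_locations(tree))
```

Here the input and output languages happen to be the same (Python to Python); a compiler is the special case where they differ.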
Comparative programming language analysis
Comparative programming language analysis seeks to classify programming languages into different types based on their characteristics; broad categories of programming languages are often known as
programming paradigms.
Generic and metaprogramming
Metaprogramming is the generation of higher-order programs which, when executed, produce programs (possibly in a different language, or in a subset of the original language) as a result.
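A minimal illustration of this idea, a program whose output is itself a program, can be given in a few lines of Python (the function name and approach are illustrative assumptions, not a standard technique from the source):

```python
def make_power_fn(n):
    """Metaprogramming sketch: generate Python source for a function
    specialized to compute x**n, then compile and return it."""
    # The generated program: an unrolled product with no loop or exponent.
    src = "def power(x):\n    return " + " * ".join(["x"] * n)
    namespace = {}
    exec(src, namespace)          # execute the generated source
    return namespace["power"]
```

Here the generating program and the generated program share a language (Python), matching the article's note that the output may be in a subset of the original language.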
Domain-specific languages
Domain-specific languages are languages constructed to efficiently solve problems in a particular domain.
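Regular expressions are a familiar example: a small declarative language for the single domain of text matching, embedded in general-purpose languages such as Python via the standard `re` module:

```python
import re

# The pattern is written in the regex DSL: concise and declarative,
# compared with equivalent hand-written character-by-character matching code.
iso_date = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

m = iso_date.match("2024-06-01")
year = m.group(1)
```

The trade-off is typical of DSLs: great expressive power within the domain, and none outside it.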
Compiler theory is the theory of writing compilers (or more generally, translators): programs that translate a program written in one language into another form. The actions of a compiler are traditionally broken up into syntax analysis (scanning and parsing), semantic analysis (determining what a program should do), optimization (improving the performance of a program as indicated by some metric, typically execution speed), and code generation (generation and output of an equivalent program in some target language, often the instruction set of a CPU).
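The pipeline above can be sketched end to end for a toy arithmetic language, compiling to a made-up stack machine (every name and the instruction set are illustrative assumptions; semantic analysis is trivial here because the language has one type):

```python
import re

def scan(src):
    """Syntax analysis, step 1: split source into tokens."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    """Syntax analysis, step 2: recursive descent, with the usual
    precedence (* binds tighter than +). Produces an AST of ints
    and ('add'/'mul', left, right) tuples."""
    pos = 0
    def atom():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            node = expr(); pos += 1        # consume ")"
            return node
        return int(tok)
    def term():
        nonlocal pos
        node = atom()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            node = ("mul", node, atom())
        return node
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            node = ("add", node, term())
        return node
    return expr()

def codegen(node, out=None):
    """Code generation: emit postorder instructions for a stack machine."""
    if out is None:
        out = []
    if isinstance(node, int):
        out.append(("PUSH", node))
    else:
        op, l, r = node
        codegen(l, out)
        codegen(r, out)
        out.append(("ADD",) if op == "add" else ("MUL",))
    return out

def run(code):
    """A tiny stack-machine interpreter, standing in for the target CPU."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack[0]
```

For instance, `codegen(parse(scan("1+2*3")))` yields PUSH/PUSH/PUSH/MUL/ADD, and running it on the stack machine gives 7, respecting precedence.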
Gordon, Michael J. C. Programming Language Theory and Its Implementation. Prentice Hall.
Gunter, Carl and Mitchell, John C. (eds.). Theoretical Aspects of Object Oriented Programming Languages: Types, Semantics, and Language Design. MIT Press.