In mathematics, the logarithm is the inverse function to exponentiation. That means that the logarithm of a number $x$ to the base $b$ is the exponent to which $b$ must be raised to produce $x$. For example, since $1000 = 10^3$, the logarithm base 10 of 1000 is 3, or $\log_{10}(1000) = 3$. The logarithm of $x$ to base $b$ is denoted as $\log_b(x)$, or without parentheses, $\log_b x$, or even without the explicit base, $\log x$, when no confusion is possible, or when the base does not matter, such as in big O notation.
Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations.[1] They were rapidly adopted by navigators, scientists, engineers, surveyors, and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition. This is possible because the logarithm of a product is the sum of the logarithms of the factors:
$$\log_b(xy) = \log_b x + \log_b y,$$
provided that $b$, $x$, and $y$ are all positive and $b \ne 1$. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter $e$ as the base of natural logarithms.[2]
The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography.
Motivation
Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. The inverse of addition is subtraction, and the inverse of multiplication is division. Similarly, a logarithm is the inverse operation of exponentiation. Exponentiation is when a number $b$, the base, is raised to a certain power $y$, the exponent, to give a value $x$; this is denoted
$$x = b^y.$$
For example, raising 2 to the power of 3 gives 8: $2^3 = 8$.
The logarithm of base $b$ is the inverse operation, which provides the output $y$ from the input $x$. That is, $y = \log_b x$ is equivalent to $x = b^y$ if $b$ is a positive real number. (If $b$ is not a positive real number, both exponentiation and logarithm can be defined but may take several values, which makes definitions much more complicated.)
One of the main historical motivations of introducing logarithms is the formula
$$\log_b(xy) = \log_b x + \log_b y,$$
by which tables of logarithms allow multiplication and division to be reduced to addition and subtraction, a great aid to calculations before the invention of computers.
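This principle can be checked numerically. The following minimal Python sketch (the function name is ours, purely illustrative) replaces a multiplication by a log lookup, an addition, and an antilog:

```python
import math

def multiply_via_logs(x, y):
    """Multiply two positive numbers using only a logarithm lookup,
    an addition, and an antilog (raising 10 to a power) -- the
    principle behind historical logarithm tables."""
    return 10 ** (math.log10(x) + math.log10(y))

print(multiply_via_logs(36.4, 278.9))  # ~10151.96, matching 36.4 * 278.9
```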
Definition
Given a positive real number $b$ such that $b \ne 1$, the logarithm of a positive real number $x$ with respect to base $b$[nb 1] is the exponent by which $b$ must be raised to yield $x$. In other words, the logarithm of $x$ to base $b$ is the unique real number $y$ such that $b^y = x$.[3]
The logarithm is denoted "$\log_b x$" (pronounced as "the logarithm of $x$ to base $b$", "the base-$b$ logarithm of $x$", or most commonly "the log, base $b$, of $x$").
An equivalent and more succinct definition is that the function $\log_b$ is the inverse function to the function $x \mapsto b^x$.
Examples
$\log_2 16 = 4$, since $2^4 = 2 \times 2 \times 2 \times 2 = 16$.
Logarithms can also be negative: $\log_2 \frac{1}{2} = -1$, since $2^{-1} = \frac{1}{2}$.
$\log_{10} 150$ is approximately 2.176, which lies between 2 and 3, just as 150 lies between $10^2 = 100$ and $10^3 = 1000$.
For any base $b$, $\log_b b = 1$ and $\log_b 1 = 0$, since $b^1 = b$ and $b^0 = 1$, respectively.
Several important formulas, sometimes called logarithmic identities or logarithmic laws, relate logarithms to one another.[4]
Product, quotient, power, and root
The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the $p$-th power of a number is $p$ times the logarithm of the number itself; the logarithm of a $p$-th root is the logarithm of the number divided by $p$. The following list gives these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions $x = b^{\log_b x}$ or $y = b^{\log_b y}$ in the left hand sides.
Product: $\log_b(xy) = \log_b x + \log_b y$. Example: $\log_3 243 = \log_3(9 \cdot 27) = \log_3 9 + \log_3 27 = 2 + 3 = 5$.
Quotient: $\log_b\frac{x}{y} = \log_b x - \log_b y$. Example: $\log_2 16 = \log_2\frac{64}{4} = \log_2 64 - \log_2 4 = 6 - 2 = 4$.
Power: $\log_b(x^p) = p \log_b x$. Example: $\log_2 64 = \log_2(2^6) = 6 \log_2 2 = 6$.
Root: $\log_b \sqrt[p]{x} = \frac{\log_b x}{p}$. Example: $\log_{10}\sqrt{1000} = \frac{1}{2}\log_{10} 1000 = \frac{3}{2} = 1.5$.
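These identities can be verified numerically; a minimal Python check (the values chosen are arbitrary illustrations, not from the source):

```python
import math

# Numerical check of the product, quotient, power, and root identities.
b, x, y, p = 3.0, 9.0, 27.0, 6.0
log_b = lambda t: math.log(t, b)

assert math.isclose(log_b(x * y), log_b(x) + log_b(y))   # product
assert math.isclose(log_b(x / y), log_b(x) - log_b(y))   # quotient
assert math.isclose(log_b(x ** p), p * log_b(x))         # power
assert math.isclose(log_b(x ** (1 / p)), log_b(x) / p)   # root
```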
Change of base
The logarithm $\log_b x$ can be computed from the logarithms of $x$ and $b$ with respect to an arbitrary base $k$ using the following formula:[nb 2]
$$\log_b x = \frac{\log_k x}{\log_k b}.$$
Typical scientific calculators calculate the logarithms to bases 10 and $e$.[5] Logarithms with respect to any base $b$ can be determined using either of these two logarithms by the previous formula:
$$\log_b x = \frac{\log_{10} x}{\log_{10} b} = \frac{\ln x}{\ln b}.$$
Given a number $x$ and its logarithm $y = \log_b x$ to an unknown base $b$, the base is given by
$$b = x^{1/y},$$
which can be seen from taking the defining equation $x = b^{\log_b x} = b^y$ to the power of $1/y$.
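The change-of-base formula is how logarithms to arbitrary bases are computed in practice; a minimal Python sketch (the function name is illustrative):

```python
import math

def log_base(x, b):
    """Change of base: compute log_b(x) from natural logarithms."""
    return math.log(x) / math.log(b)

print(log_base(1000, 10))   # ~3.0
print(math.log(1000, 10))   # math.log also accepts the base directly
```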
Particular bases
Among all choices for the base, three are particularly common. These are $b = 10$, $b = e$ (the irrational mathematical constant $e \approx 2.71828183$), and $b = 2$ (the binary logarithm). In mathematical analysis, the logarithm base $e$ is widespread because of analytical properties explained below. On the other hand, base-10 logarithms (the common logarithm) are easy to use for manual calculations in the decimal number system:[6]
$$\log_{10}(10x) = \log_{10} 10 + \log_{10} x = 1 + \log_{10} x.$$
Thus, $\log_{10}(x)$ is related to the number of decimal digits of a positive integer $x$: the number of digits is the smallest integer strictly bigger than $\log_{10}(x)$.[7] For example, $\log_{10}(5986)$ is approximately 3.78. The next integer above it is 4, which is the number of digits of 5986.
Both the natural logarithm and the binary logarithm are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively.[8]
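The digit-count relation translates directly into code; a small Python sketch (function name is ours; note that floating-point log10 can misbehave for astronomically large integers):

```python
import math

def decimal_digits(n):
    """Number of decimal digits of a positive integer n: the smallest
    integer strictly greater than log10(n), i.e. floor(log10(n)) + 1."""
    return math.floor(math.log10(n)) + 1

print(decimal_digits(5986))  # 4  (log10(5986) ~ 3.78)
print(decimal_digits(1000))  # 4  (log10(1000) = 3 exactly)
```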
Binary logarithms are also used in computer science, where the binary system is ubiquitous; in music theory, where a pitch ratio of two (the octave) is ubiquitous and the number of cents between any two pitches is a scaled version of the binary logarithm, $1200 \log_2$, of the pitch ratio (that is, 100 cents per semitone in conventional equal temperament), or equivalently the log base $2^{1/1200}$; and in photography, where rescaled base-2 logarithms are used to measure exposure values, light levels, exposure times, lens apertures, and film speeds in "stops".[9]
Many disciplines write $\log x$ as an abbreviation for $\log_b x$ when the intended base can be inferred based on the context or discipline (or when the base is indeterminate or immaterial). In computer science, log usually refers to $\log_2$, and in mathematics log usually refers to $\log_e$.[10] In other contexts, log often means $\log_{10}$.[11]
The following list gives common notations for logarithms to these bases and the fields where they are used; the ISO designations are those suggested by the International Organization for Standardization.[12]
Base 2: ISO notation lb x; also written log2 x; used in computer science and information theory.
Base e: ISO notation ln x; also written loge x; used in mathematics, physics, and engineering.
Base 10: ISO notation lg x; also written log10 x or simply log x; used in engineering and in logarithm tables.
History
The history of logarithms in seventeenth-century Europe saw the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms).[19][20] Prior to Napier's invention, there had been other techniques of similar scope, such as prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600.[21][22] Napier coined the term for logarithm in Middle Latin, logarithmus, literally meaning "ratio-number", from the Greek logos "proportion, ratio, word" + arithmos "number".
The common logarithm of a number is the index of that power of ten which equals the number.[23] Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the "order of a number".[24] The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities.[25] Such methods are called prosthaphaeresis.
Before Euler developed his modern conception of complex natural logarithms, Roger Cotes had a nearly equivalent result when he showed in 1714 that[27]
$$ix = \ln(\cos x + i \sin x).$$
Logarithm tables, slide rules, and historical applications
By simplifying difficult calculations before calculators and computers became available, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms
"...[a]n admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer, and spares him the errors and disgust inseparable from long calculations."[28]
As the function $f(x) = b^x$ is the inverse function of $\log_b x$, it has been called an antilogarithm.[29] Nowadays, this function is more commonly called an exponential function.
Log tables
A key tool that enabled the practical use of logarithms was the table of logarithms.[30] The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention but with the innovation of using 10 as the base. Briggs' first table contained the common logarithms of all integers in the range from 1 to 1000, with a precision of 14 digits. Subsequently, tables with increasing scope were written. These tables listed the values of $\log_{10} x$ for any number $x$ in a certain range, at a certain precision. Base-10 logarithms were universally used for computation, hence the name common logarithm, since numbers that differ by factors of 10 have logarithms that differ by integers. The common logarithm of $x$ can be separated into an integer part and a fractional part, known as the characteristic and mantissa. Tables of logarithms need only include the mantissa, as the characteristic can be easily determined by counting digits from the decimal point.[31] The characteristic of $10 \cdot x$ is one plus the characteristic of $x$, and their mantissas are the same. Thus, using a three-digit log table, the logarithm of 3542 is approximated by
$$\log_{10} 3542 = \log_{10}(1000 \cdot 3.542) = 3 + \log_{10} 3.542 \approx 3 + \log_{10} 3.54 = 3.549.$$
The value of $10^x$ can be determined by reverse look-up in the same table, since the logarithm is a monotonic function.
Computations
The product and quotient of two positive numbers $c$ and $d$ were routinely calculated as the sum and difference of their logarithms. The product $cd$ or quotient $c/d$ came from looking up the antilogarithm of the sum or difference, via the same table:
$$cd = 10^{\log_{10} c + \log_{10} d}$$
and
$$\frac{c}{d} = 10^{\log_{10} c - \log_{10} d}.$$
For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities.
Calculations of powers and roots are reduced to multiplications or divisions and lookups by
$$c^d = 10^{d \log_{10} c}$$
and
$$\sqrt[d]{c} = c^{1/d} = 10^{\frac{1}{d} \log_{10} c}.$$
Trigonometric calculations were facilitated by tables that contained the common logarithms of trigonometric functions.
Slide rules
Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule, a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms. For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables.[32]
Analytic properties
A deeper study of logarithms requires the concept of a function. A function is a rule that, given one number, produces another number.[33] An example is the function producing the $x$-th power of $b$ from any real number $x$, where the base $b$ is a fixed number. This function is written as $f(x) = b^x$. When $b$ is positive and unequal to 1, we show below that $f$ is invertible when considered as a function from the reals to the positive reals.
Existence
Let $b$ be a positive real number not equal to 1 and let $f(x) = b^x$.
It is a standard result in real analysis that any continuous strictly monotonic function is bijective between its domain and range. This fact follows from the intermediate value theorem.[34] Now, $f$ is strictly increasing (for $b > 1$) or strictly decreasing (for $0 < b < 1$),[35] is continuous, has domain $\mathbb{R}$, and has range $\mathbb{R}_{>0}$. Therefore, $f$ is a bijection from $\mathbb{R}$ to $\mathbb{R}_{>0}$. In other words, for each positive real number $y$, there is exactly one real number $x$ such that $b^x = y$.
We let $\log_b \colon \mathbb{R}_{>0} \to \mathbb{R}$ denote the inverse of $f$. That is, $\log_b y$ is the unique real number $x$ such that $b^x = y$. This function is called the base-$b$ logarithm function or logarithmic function (or just logarithm).
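The existence argument is effectively constructive: because $b^x$ is continuous and monotonic, a logarithm can be bracketed and refined by bisection. A minimal Python sketch (assuming $b > 1$ and $y > 0$; illustrative, not an efficient method):

```python
def log_by_bisection(y, b, tol=1e-12):
    """Locate log_b(y) by bisection, using only that b**x is
    continuous and increasing (assumes b > 1 and y > 0)."""
    lo, hi = -1.0, 1.0
    while b ** lo > y:   # widen the bracket until it contains the answer
        lo *= 2
    while b ** hi < y:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if b ** mid < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(log_by_bisection(16, 2))  # ~4.0
```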
Characterization by the product formula
The function $\log_b x$ can also be essentially characterized by the product formula
$$\log_b(xy) = \log_b x + \log_b y.$$
More precisely, the logarithm to any base $b > 1$ is the only increasing function $f$ from the positive reals to the reals satisfying $f(b) = 1$ and[36]
$$f(xy) = f(x) + f(y).$$
Graph of the logarithm function
As discussed above, the function $\log_b$ is the inverse to the exponential function $x \mapsto b^x$. Therefore, their graphs correspond to each other upon exchanging the $x$- and the $y$-coordinates (or upon reflection at the diagonal line $x = y$): a point $(t, u = b^t)$ on the graph of $f$ yields a point $(u, t = \log_b u)$ on the graph of the logarithm and vice versa. As a consequence, $\log_b(x)$ diverges to infinity (gets bigger than any given number) if $x$ grows to infinity, provided that $b$ is greater than one. In that case, $\log_b(x)$ is an increasing function. For $b < 1$, $\log_b(x)$ tends to minus infinity instead. When $x$ approaches zero, $\log_b x$ goes to minus infinity for $b > 1$ (plus infinity for $b < 1$, respectively).
Derivative and antiderivative
Analytic properties of functions pass to their inverses.[34] Thus, as $f(x) = b^x$ is a continuous and differentiable function, so is $\log_b y$. Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of $f(x)$ evaluates to $\ln(b)\, b^x$ by the properties of the exponential function, the chain rule implies that the derivative of $\log_b x$ is given by[35][37]
$$\frac{d}{dx} \log_b x = \frac{1}{x \ln b}.$$
That is, the slope of the tangent touching the graph of the base-$b$ logarithm at the point $(x, \log_b(x))$ equals $1/(x \ln(b))$.
The derivative of $\ln(x)$ is $1/x$; this implies that $\ln(x)$ is the unique antiderivative of $1/x$ that has the value 0 for $x = 1$. It is this very simple formula that motivated qualifying the natural logarithm as "natural"; this is also one of the main reasons for the importance of the constant $e$.
The derivative with a generalized functional argument $f(x)$ is
$$\frac{d}{dx} \ln f(x) = \frac{f'(x)}{f(x)}.$$
Integral representation of the natural logarithm
The natural logarithm can also be defined as the definite integral
$$\ln t = \int_1^t \frac{1}{x}\,dx.$$
This definition has the advantage that it does not rely on the exponential function or any trigonometric functions; the definition is in terms of an integral of a simple reciprocal. As an integral, $\ln(t)$ equals the area between the $x$-axis and the graph of the function $1/x$, ranging from $x = 1$ to $x = t$. This is a consequence of the fundamental theorem of calculus and the fact that the derivative of $\ln(x)$ is $1/x$. Product and power logarithm formulas can be derived from this definition.[41] For example, the product formula $\ln(tu) = \ln(t) + \ln(u)$ is deduced as
$$\ln(tu) = \int_1^{tu} \frac{1}{x}\,dx \overset{(1)}{=} \int_1^{t} \frac{1}{x}\,dx + \int_t^{tu} \frac{1}{x}\,dx \overset{(2)}{=} \ln(t) + \int_1^{u} \frac{1}{w}\,dw = \ln(t) + \ln(u).$$
The equality (1) splits the integral into two parts, while the equality (2) is a change of variable ($w = x/t$). Geometrically, equality (2) says that rescaling the region under the curve between $t$ and $tu$ vertically by the factor $t$ and shrinking it horizontally by the same factor does not change its size; after this rescaling, the region fits the graph of $f(x) = 1/x$ again, between 1 and $u$. Therefore, the integral of $f(x)$ from $t$ to $tu$ is the same as the integral from 1 to $u$, which justifies the equality (2) by a more geometric proof.
The power formula $\ln(t^r) = r \ln(t)$ may be derived in a similar way:
$$\ln(t^r) = \int_1^{t^r} \frac{1}{x}\,dx = \int_1^{t} \frac{1}{w^r}\, r w^{r-1}\,dw = r \int_1^{t} \frac{1}{w}\,dw = r \ln(t),$$
using the substitution $x = w^r$.
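The integral definition also suggests a direct (if slow) way to compute $\ln$; a Python sketch using the midpoint rule (step count arbitrary):

```python
import math

def ln_by_integral(t, steps=100_000):
    """Approximate ln(t) as the integral of 1/x from 1 to t,
    via the midpoint rule."""
    h = (t - 1) / steps
    return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

print(ln_by_integral(2.0), math.log(2.0))  # both ~0.6931472
```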
Related to the integral representation, the difference between the harmonic number $H_n = 1 + \tfrac{1}{2} + \cdots + \tfrac{1}{n}$ and $\ln(n)$ converges (i.e. gets arbitrarily close) to a number known as the Euler–Mascheroni constant $\gamma = 0.5772...$. This relation aids in analyzing the performance of algorithms such as quicksort.[42]
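The convergence is easy to observe numerically (a minimal illustration):

```python
import math

# H_n - ln(n) approaches the Euler-Mascheroni constant 0.5772...
for n in (10, 1000, 100_000):
    H = sum(1 / k for k in range(1, n + 1))
    print(n, H - math.log(n))  # 0.6263..., 0.5777..., 0.5772...
```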
Calculation
Logarithms are easy to compute in some cases, such as $\log_{10}(1000) = 3$. In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision.[45][46] Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently.[47] Using look-up tables, CORDIC-like methods can be used to compute logarithms by using only the operations of addition and bit shifts.[48][49] Moreover, the binary logarithm algorithm calculates $\operatorname{lb}(x)$ recursively, based on repeated squarings of $x$, taking advantage of the relation
$$\log_2 x = \frac{1}{2} \log_2\left(x^2\right).$$
Power series
Taylor series
For any real number $z$ that satisfies $0 < z \le 2$, the following formula holds:[nb 5][50]
$$\ln z = (z-1) - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3} - \cdots = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} (z-1)^k.$$
Equating the function $\ln(z)$ to this infinite sum (series) is shorthand for saying that the function can be approximated to a more and more accurate value by the following expressions (known as partial sums):
$$(z-1),\quad (z-1) - \frac{(z-1)^2}{2},\quad (z-1) - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3},\quad \ldots$$
For example, with $z = 1.5$ the third approximation yields 0.4167, which is about 0.011 greater than $\ln(1.5) = 0.405465$, and the ninth approximation yields 0.40553, which is only about 0.0001 greater. The $n$-th partial sum can approximate $\ln(z)$ with arbitrary precision, provided the number of summands $n$ is large enough.
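The partial sums quoted above can be reproduced directly; a minimal Python sketch:

```python
import math

def ln_taylor(z, terms):
    """Partial sum of the Taylor series of ln at z = 1
    (valid for 0 < z <= 2)."""
    return sum((-1) ** (k + 1) * (z - 1) ** k / k
               for k in range(1, terms + 1))

print(ln_taylor(1.5, 3))   # 0.416666... (third partial sum)
print(ln_taylor(1.5, 9))   # 0.405532... (ninth partial sum)
print(math.log(1.5))       # 0.405465...
```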
In elementary calculus, the series is said to converge to the function $\ln(z)$, and the function is the limit of the series. It is the Taylor series of the natural logarithm at $z = 1$. The Taylor series of $\ln(z)$ provides a particularly useful approximation to $\ln(1 + z)$ when $z$ is small, $|z| < 1$, since then
$$\ln(1+z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots \approx z.$$
For example, with $z = 0.1$ the first-order approximation gives $\ln(1.1) \approx 0.1$, which is less than 5% off the correct value 0.0953.
Inverse hyperbolic tangent
Another series is based on the inverse hyperbolic tangent function:
$$\ln z = 2\,\operatorname{artanh}\frac{z-1}{z+1} = 2\left(\frac{z-1}{z+1} + \frac{1}{3}\left(\frac{z-1}{z+1}\right)^{3} + \frac{1}{5}\left(\frac{z-1}{z+1}\right)^{5} + \cdots\right),$$
for any real number $z > 0$. This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if $z$ is close to 1. For example, for $z = 1.5$, the first three terms of the second series approximate $\ln(1.5)$ with an error of about $3 \times 10^{-6}$. The quick convergence for $z$ close to 1 can be taken advantage of in the following way: given a low-accuracy approximation $y \approx \ln(z)$ and putting
$$A = \frac{z}{\exp(y)},$$
the logarithm of $z$ is
$$\ln z = y + \ln A.$$
The better the initial approximation $y$ is, the closer $A$ is to 1, so its logarithm can be calculated efficiently. $A$ can be calculated using the exponential series, which converges quickly provided $y$ is not too large. Calculating the logarithm of larger $z$ can be reduced to smaller values of $z$ by writing $z = a \cdot 10^b$, so that $\ln(z) = \ln(a) + b \cdot \ln(10)$.
A closely related method can be used to compute the logarithm of integers. Putting $z = \frac{n+1}{n}$ in the above series, it follows that
$$\ln(n+1) = \ln(n) + 2 \sum_{k=0}^{\infty} \frac{1}{(2k+1)(2n+1)^{2k+1}}.$$
If the logarithm of a large integer $n$ is known, then this series yields a fast converging series for $\ln(n+1)$, with a rate of convergence of $\left(\frac{1}{2n+1}\right)^{2}$.
Arithmetic–geometric mean approximation
The arithmetic–geometric mean yields high-precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work $\ln(x)$ is approximated to a precision of $2^{-p}$ (or $p$ precise bits) by the following formula (due to Carl Friedrich Gauss):[51][52]
$$\ln x \approx \frac{\pi}{2\, M\!\left(1, 2^{2-m}/x\right)} - m \ln 2.$$
Here $M(x, y)$ denotes the arithmetic–geometric mean of $x$ and $y$. It is obtained by repeatedly calculating the average $(x + y)/2$ (arithmetic mean) and $\sqrt{xy}$ (geometric mean) of $x$ and $y$, then letting those two numbers become the next $x$ and $y$. The two numbers quickly converge to a common limit, which is the value of $M(x, y)$. $m$ is chosen such that
$$x \cdot 2^{m} > 2^{p/2}$$
to ensure the required precision. A larger $m$ makes the $M(x, y)$ calculation take more steps (the initial $x$ and $y$ are farther apart, so it takes more steps to converge) but gives more precision. The constants $\pi$ and $\ln(2)$ can be calculated with quickly converging series.
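A minimal Python sketch of the AGM method, run here in double precision rather than the high precisions where it shines (function names are ours):

```python
import math

def agm(x, y, tol=1e-15):
    """Arithmetic-geometric mean of x and y."""
    while abs(x - y) > tol * max(x, y):
        x, y = (x + y) / 2, math.sqrt(x * y)
    return (x + y) / 2

def ln_agm(x, p=53):
    """Gauss's formula: ln(x) ~ pi / (2 M(1, 2^(2-m)/x)) - m ln 2,
    with m chosen so that x * 2^m > 2^(p/2)."""
    m = max(0, math.ceil(p / 2 - math.log2(x)))
    return math.pi / (2 * agm(1.0, 2.0 ** (2 - m) / x)) - m * math.log(2)

print(ln_agm(2.0), math.log(2.0))  # both ~0.693147, up to rounding
```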
Feynman's algorithm
While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit-processing algorithm to compute the logarithm that is similar to long division and was later used in the Connection Machine. The algorithm relies on the fact that every real number $x$ where $1 < x < 2$ can be represented as a product of distinct factors of the form $1 + 2^{-k}$. The algorithm sequentially builds that product $P$, starting with $P = 1$ and $k = 1$: if $P \cdot (1 + 2^{-k}) < x$, then it changes $P$ to $P \cdot (1 + 2^{-k})$. It then increases $k$ by one regardless. The algorithm stops when $k$ is large enough to give the desired accuracy. Because $\log(x)$ is the sum of the terms of the form $\log(1 + 2^{-k})$ corresponding to those $k$ for which the factor $1 + 2^{-k}$ was included in the product $P$, $\log(x)$ may be computed by simple addition, using a table of $\log(1 + 2^{-k})$ for all $k$. Any base may be used for the logarithm table.[53]
Applications
Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral.[54] Benford's law on the distribution of leading digits can also be explained by scale invariance.[55] Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions.[56] The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms.
Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function $\log(x)$ grows very slowly for large $x$, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation.
Scientific quantities are often expressed as logarithms of other quantities, using a logarithmic scale. For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios: 10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the loss of voltage levels in transmitting electrical signals,[citation needed] to describe power levels of sounds in acoustics,[57] and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels.[58] In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm.[59]
The strength of an earthquake is measured by taking the common logarithm of the energy emitted at the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times ($10^{1.5}$) and a 6.0 releases 1000 times ($10^3$) the energy of a 4.0.[60] Apparent magnitude measures the brightness of stars logarithmically.[61] In chemistry, the negative of the decimal logarithm, the decimal cologarithm, is indicated by the letter p.[62] For instance, pH is the decimal cologarithm of the activity of hydronium ions (the form hydrogen ions H⁺ take in water).[63] The activity of hydronium ions in neutral water is $10^{-7}$ mol·L⁻¹, hence a pH of 7. Vinegar typically has a pH of about 3. The difference of 4 corresponds to a ratio of $10^4$ of the activity, that is, vinegar's hydronium ion activity is about $10^{-3}$ mol·L⁻¹.
Semilog (log–linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, a semilog chart compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form $f(x) = a \cdot b^x$ appear as straight lines with slope equal to the logarithm of $b$.
Log-log graphs scale both axes logarithmically, which causes functions of the form $f(x) = a \cdot x^k$ to be depicted as straight lines with slope equal to the exponent $k$. This is applied in visualizing and analyzing power laws.[64]
Psychology
Logarithms occur in several laws describing human perception.[65][66] Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have.[67] Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target.[68] In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation, such as the actual vs. the perceived weight of an item a person is carrying.[69] (This "law", however, is less realistic than more recent models, such as Stevens's power law.[70])
Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 10 times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly.[71][72]
Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution.[74] Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence.[75]
Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables.[76]
Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal digit of an item in the data sample is $d$ (from 1 to 9) equals $\log_{10}(d + 1) - \log_{10}(d)$, regardless of the unit of measurement.[77] Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting.[78]
Computational complexity
Logarithms are valuable for describing algorithms that divide a problem into smaller ones and join the solutions of the subproblems. For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, $\log_2(N)$ comparisons, where $N$ is the list's length.[81] Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to $N \cdot \log(N)$.[82] The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model.[83]
A function $f(x)$ is said to grow logarithmically if $f(x)$ is (exactly or approximately) proportional to the logarithm of $x$. (Biological descriptions of organism growth, however, use this term for an exponential function.[84]) For example, any natural number $N$ can be represented in binary form in no more than $\log_2 N + 1$ bits. In other words, the amount of memory needed to store $N$ grows logarithmically with $N$.
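Python exposes exactly this quantity as int.bit_length(), which offers a quick check of the claim:

```python
import math

# The binary representation of N uses floor(log2(N)) + 1 bits,
# which Python exposes as int.bit_length().
for N in (1, 5986, 2 ** 20):
    assert N.bit_length() == math.floor(math.log2(N)) + 1
    print(N, bin(N), N.bit_length())
```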
Entropy and chaos
Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy $S$ of some physical system is defined as
$$S = -k \sum_i p_i \ln(p_i).$$
The sum is over all possible states $i$ of the system in question, such as the positions of gas particles in a container. Moreover, $p_i$ is the probability that the state $i$ is attained and $k$ is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of $N$ possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as $\log_2 N$ bits.[85]
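The information-theoretic version of this formula is a one-liner; a minimal Python sketch (function name is ours):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: -sum(p * log2(p)); terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# N equally likely messages carry log2(N) bits each.
print(shannon_entropy([0.25] * 4), math.log2(4))  # 2.0 and 2.0
```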
Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states.[86] At least one Lyapunov exponent of a deterministically chaotic system is positive.
Fractals
Logarithms occur in definitions of the dimension of fractals.[87] Fractals are geometric objects that are self-similar in the sense that small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure $\ln(3)/\ln(2) \approx 1.58$. Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question.
Music
Four different octaves shown on a linear scale, then shown on a logarithmic scale (as the ear hears them)
Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. For example, the note A has a frequency of 440 Hz and B-flat has a frequency of 466 Hz. The interval between A and B-flat is a semitone, as is the one between B-flat and B (frequency 493 Hz). Accordingly, the frequency ratios agree:
$$\frac{466}{440} \approx \frac{493}{466} \approx 1.059 \approx 2^{1/12}.$$
Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the base-$2^{1/12}$ logarithm of the frequency ratio, while the base-$2^{1/1200}$ logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments.[88]
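The cents measure is a rescaled binary logarithm; a minimal Python sketch (function name is ours):

```python
import math

def interval_in_cents(f2, f1):
    """Interval between two frequencies in cents:
    1200 * log2(f2 / f1), i.e. the base-2^(1/1200) logarithm."""
    return 1200 * math.log2(f2 / f1)

print(interval_in_cents(466, 440))  # ~99.4 cents, about one semitone
print(interval_in_cents(880, 440))  # 1200.0 cents, one octave
```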
Number theory
Natural logarithms are closely linked to counting prime numbers. The prime-counting function $\pi(x)$ gives the number of primes less than or equal to $x$; by the prime number theorem, $\pi(x)$ is approximately given by
$$\frac{x}{\ln(x)},$$
in the sense that the ratio of $\pi(x)$ and that fraction approaches 1 when $x$ tends to infinity.[89] As a consequence, the probability that a randomly chosen number between 1 and $x$ is prime is inversely proportional to the number of decimal digits of $x$. A far better estimate of $\pi(x)$ is given by the offset logarithmic integral function $\operatorname{Li}(x)$, defined by
$$\operatorname{Li}(x) = \int_2^{x} \frac{dt}{\ln(t)}.$$
Complex logarithm
All the complex numbers $a$ that solve the equation
$$e^{a} = z$$
are called complex logarithms of $z$, when $z$ is (considered as) a complex number. A complex number is commonly represented as $z = x + iy$, where $x$ and $y$ are real numbers and $i$ is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane. The polar form encodes a non-zero complex number $z$ by its absolute value, that is, the (positive, real) distance $r$ to the origin, and an angle between the real ($x$) axis Re and the line passing through both the origin and $z$. This angle is called the argument of $z$.
The absolute value $r$ of $z$ is given by
$$r = \sqrt{x^2 + y^2}.$$
Using the geometrical interpretation of sine and cosine and their periodicity in $2\pi$, any complex number $z$ may be denoted as
$$z = r\left(\cos\varphi + i\sin\varphi\right) = r\left(\cos(\varphi + 2k\pi) + i\sin(\varphi + 2k\pi)\right),$$
for any integer number $k$. Evidently the argument of $z$ is not uniquely specified: both $\varphi$ and $\varphi' = \varphi + 2k\pi$ are valid arguments of $z$ for all integers $k$, because adding $2k\pi$ radians or $k \cdot 360°$[nb 7] to $\varphi$ corresponds to "winding" around the origin counter-clockwise by $k$ turns. The resulting complex number is always $z$. One may select exactly one of the possible arguments of $z$ as the so-called principal argument, denoted $\operatorname{Arg}(z)$, with a capital A, by requiring $\varphi$ to belong to one, conveniently selected turn, e.g. $-\pi < \varphi \le \pi$[92] or $0 \le \varphi < 2\pi$.[93] These regions, where the argument of $z$ is uniquely determined, are called branches of the argument function.
Euler's formula, $e^{i\varphi} = \cos\varphi + i\sin\varphi$, connects the trigonometric functions to the complex exponential. Using this formula, and again the periodicity, the following identities hold:[94]
$$z = r\left(\cos\varphi + i\sin\varphi\right) = r\,e^{i(\varphi + 2k\pi)} = e^{\ln(r)}\,e^{i(\varphi + 2k\pi)} = e^{\ln(r) + i(\varphi + 2k\pi)} = e^{a_k},$$
where $\ln(r)$ is the unique real natural logarithm, $a_k$ denote the complex logarithms of $z$, and $k$ is an arbitrary integer. Therefore, the complex logarithms of $z$, which are all those complex values $a_k$ for which the $a_k$-th power of $e$ equals $z$, are the infinitely many values
$$a_k = \ln(r) + i(\varphi + 2k\pi),$$
for arbitrary integers $k$.
Taking $k$ such that $\varphi + 2k\pi$ is within the defined interval for the principal arguments, then $a_k$ is called the principal value of the logarithm, denoted $\operatorname{Log}(z)$, again with a capital L. The principal argument of any positive real number $x$ is 0; hence $\operatorname{Log}(x)$ is a real number and equals the real (natural) logarithm. However, the above formulas for logarithms of products and powers do not generalize to the principal value of the complex logarithm.[95]
When the arguments of $z$ are confined to the interval $(-\pi, \pi]$, the corresponding branch of the complex logarithm has discontinuities all along the negative real $x$ axis. This discontinuity arises from jumping to the other boundary in the same branch, when crossing a boundary, i.e. not changing to the corresponding $k$-value of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range restrictions on the argument makes the relations "argument of $z$", and consequently the "logarithm of $z$", multi-valued functions.
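Python's standard cmath module computes exactly this principal value, with the branch cut along the negative real axis, which makes the multi-valuedness easy to demonstrate:

```python
import cmath

# cmath.log returns the principal value Log(z): ln|z| + i*Arg(z),
# with the principal argument in (-pi, pi].
z = -1 + 0j
print(cmath.log(z))  # ~ 0 + 3.14159j

# The other complex logarithms differ by integer multiples of 2*pi*i:
k = 1
a_k = cmath.log(z) + 2j * cmath.pi * k
print(cmath.exp(a_k))  # ~ -1 + 0j, so a_k is also a logarithm of z
```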
In the context of finite groups, exponentiation is given by repeatedly multiplying one group element $b$ with itself. The discrete logarithm is the integer $n$ solving the equation
$$b^{n} = x,$$
where $x$ is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels.[99] Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field.[100]
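The asymmetry can be illustrated in the multiplicative group of integers modulo a prime: modular exponentiation is fast, while the sketch below recovers the exponent only by exhaustive search (the small prime and base are illustrative; real cryptographic groups are vastly larger):

```python
def discrete_log(b, x, p):
    """Brute-force discrete logarithm mod p: find n with b**n % p == x.
    Takes time proportional to the group order, which is what makes
    the problem hard for large, well-chosen groups."""
    value = 1
    for n in range(p - 1):
        if value == x:
            return n
        value = (value * b) % p
    return None

p, b = 101, 2          # small illustrative prime and generator
n = 71
x = pow(b, n, p)       # fast modular exponentiation
print(discrete_log(b, x, p))  # recovers 71, but only by brute force
```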
From the perspective of group theory, the identity $\log(cd) = \log(c) + \log(d)$ expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups.[103] By means of that isomorphism, the Haar measure (Lebesgue measure) $dx$ on the reals corresponds to the Haar measure $dx/x$ on the positive reals.[104] The non-negative reals not only have a multiplication, but also have addition, and form a semiring, called the probability semiring; this is in fact a semifield. The logarithm then takes multiplication to addition (log multiplication), and takes addition to log addition (LogSumExp), giving an isomorphism of semirings between the probability semiring and the log semiring.
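Both halves of this semiring isomorphism can be checked in a few lines of Python; the two-argument LogSumExp below is a standard numerically stable formulation (the function name is ours):

```python
import math

def log_add(a, b):
    """LogSumExp for two values: log(exp(a) + exp(b)), computed
    stably by factoring out the larger term."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# In the log semiring, multiplication becomes addition and
# addition becomes log_add:
x, y = 0.3, 0.2
print(math.log(x * y), math.log(x) + math.log(y))           # equal
print(math.log(x + y), log_add(math.log(x), math.log(y)))   # equal
```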
^ Proof: Taking the logarithm to base $k$ of the defining identity $x = b^{\log_b x}$, one gets $\log_k x = \log_b x \cdot \log_k b$. The formula follows by solving for $\log_b x$.
Some mathematicians disapprove of this notation. In his 1985 autobiography,
Paul Halmos criticized what he considered the "childish ln notation", which he said no mathematician had ever used.[15] The notation was invented by the 19th century mathematician
I. Stringham.[16][17]
^ Wegener, Ingo (2005). Complexity Theory: Exploring the Limits of Efficient Algorithms. Berlin / New York: Springer-Verlag. p. 20. ISBN 978-3-540-21045-0.
^ Goodrich, Michael T.; Tamassia, Roberto (2002). Algorithm Design: Foundations, Analysis, and Internet Examples. John Wiley & Sons. p. 23. "One of the interesting and sometimes even surprising aspects of the analysis of data structures and algorithms is the ubiquitous presence of logarithms ... As is the custom in the computing literature, we omit writing the base b of the logarithm when b = 2."
^ Napier, John (1614). Mirifici Logarithmorum Canonis Descriptio [The Description of the Wonderful Canon of Logarithms] (in Latin). Edinburgh, Scotland: Andrew Hart. The sequel ... Constructio was published posthumously: Napier, John (1619). Mirifici Logarithmorum Canonis Constructio [The Construction of the Wonderful Rule of Logarithms] (in Latin). Edinburgh: Andrew Hart. Ian Bruce has made an annotated translation of both books (2012), available from 17centurymaths.com.
^ Gonzales-Velasco, Enrique (2011). Journey through Mathematics – Creative Episodes in its History. §2.4 Hyperbolic logarithms, p. 117. Springer. ISBN 978-0-387-92153-2.
^ Spiegel, Murray R.; Moyer, R.E. (2006). Schaum's Outline of College Algebra. Schaum's Outline Series. New York: McGraw-Hill. p. 264. ISBN 978-0-07-145227-4.
^ Hart; Cheney; Lawson; et al. (1968). Computer Approximations. SIAM Series in Applied Mathematics. New York: John Wiley. Section 6.3, pp. 105–11.
^ Zhang, M.; Delgado-Frias, J.G.; Vassiliadis, S. (1994). "Table driven Newton scheme for high precision logarithm generation". IEE Proceedings – Computers and Digital Techniques. 141 (5): 281–92. doi:10.1049/ip-cdt:19941268. ISSN 1350-2387. Section 1 for an overview.
^ Meggitt, J.E. (April 1962). "Pseudo Division and Pseudo Multiplication Processes". IBM Journal of Research and Development. 6 (2): 210–26. doi:10.1147/rd.62.0210. S2CID 19387286.
^ Kahan, W. (20 May 2001). Pseudo-Division Algorithms for Floating-Point Logarithms and Exponentials.
^ Maling, George C. (2007). "Noise". In Rossing, Thomas D. (ed.), Springer Handbook of Acoustics. Berlin / New York: Springer-Verlag. Section 23.0.2. ISBN 978-0-387-30446-5.
^ Crauder, Bruce; Evans, Benny; Noell, Alan (2008). Functions and Change: A Modeling Approach to College Algebra (4th ed.). Boston: Cengage Learning. Section 4.4. ISBN 978-0-547-15669-9.
^ Wegener, Ingo (2005). Complexity Theory: Exploring the Limits of Efficient Algorithms. Berlin / New York: Springer-Verlag. pp. 1–2. ISBN 978-3-540-21045-0.
^ Wegener, Ingo (2005). Complexity Theory: Exploring the Limits of Efficient Algorithms. Berlin / New York: Springer-Verlag. p. 20. ISBN 978-3-540-21045-0.
^ Cherkassky, Vladimir; Mulier, Filip (2007). Learning from Data: Concepts, Theory, and Methods. Wiley Series on Adaptive and Learning Systems for Signal Processing, Communications, and Control. New York: John Wiley & Sons. p. 357. ISBN 978-0-471-68182-3.