# Zipf’s law

Zipf’s law (named for Harvard linguistics professor George Kingsley Zipf) models the occurrence of distinct objects in particular sorts of collections. Zipf’s law says that the $i$th most frequent object will occur with $1/i^{\theta}$ times the frequency of the most frequent object, or that the $i$th most frequent object from an object “vocabulary” of size $V$ occurs

$$O(i)=\frac{n}{i^{\theta}H_{\theta}(V)}$$

times in a collection of $n$ objects, where $H_{\theta}(V)=\sum_{i=1}^{V} 1/i^{\theta}$ is the harmonic number of order $\theta$ of $V$.
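The formula above can be computed directly. The following is a minimal sketch; the function names `harmonic_number` and `zipf_occurrences` are illustrative choices, not taken from any particular library:

```python
def harmonic_number(V, theta):
    """Harmonic number of order theta: H_theta(V) = sum_{i=1}^{V} 1 / i**theta."""
    return sum(1.0 / i**theta for i in range(1, V + 1))

def zipf_occurrences(i, n, V, theta):
    """Expected occurrences O(i) of the i-th most frequent object
    in a collection of n objects drawn from a vocabulary of size V."""
    return n / (i**theta * harmonic_number(V, theta))
```

Note that the occurrence counts sum to $n$ over the whole vocabulary, and the ratio $O(i)/O(1)$ is exactly $1/i^{\theta}$, matching the statement of the law above.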

Zipf’s law typically holds when the “objects” themselves have a property (such as length or size) which is modelled by an exponential distribution or other skewed distribution that places restrictions on how often “larger” objects can occur.

An example of where Zipf’s law applies is the frequency of word occurrence in English text. The lengths of English words follow an exponential distribution, and the nature of communication is such that it is more efficient to place emphasis on using shorter words. Hence the most common words tend to be short and to appear often, following Zipf’s law.

The value of $\theta$ typically ranges between 1 and 2, and lies between 1.5 and 2 in the English text case.
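One consequence of the law is that a rank–frequency plot is a straight line on log–log axes with slope $-\theta$, so $\theta$ can be recovered from observed counts by a least-squares fit. A minimal sketch of this, using the exact expected counts from the formula above (the names `zipf_expected_counts` and `fit_theta` are illustrative, not from any library):

```python
import math

def zipf_expected_counts(n, V, theta):
    """Expected counts O(1), ..., O(V) for a Zipf distribution with parameter theta."""
    H = sum(1.0 / i**theta for i in range(1, V + 1))
    return [n / (i**theta * H) for i in range(1, V + 1)]

def fit_theta(counts):
    """Estimate theta as minus the least-squares slope of log(count) vs. log(rank)."""
    xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
    ys = [math.log(c) for c in counts]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

Because $\log O(i) = \log\bigl(n/H_{\theta}(V)\bigr) - \theta \log i$ is exactly linear in $\log i$, the fit recovers $\theta$ from the ideal counts; on real data (word counts, city populations) the same fit gives an empirical estimate of $\theta$.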

Another example is the populations of cities. These follow Zipf’s law, with a few very populous cities falling off to a great many cities with small populations. In this case, societal forces supply the same type of “restrictions” that limit which lengths of English words are used most often.

A final example is the incomes of companies. Once again the ranked incomes follow Zipf’s law, with competitive pressures limiting the range of incomes available to most companies and determining the few most successful ones.

The underlying theme is that efficiency, competition, or attention with regard to resources or information tends to make Zipf’s law hold for the ranking of the objects or data of concern.