Heaps’ law describes the portion of a vocabulary which is represented by an instance document (or set of instance documents) consisting of words chosen from the vocabulary. This can be formulated as

$$V_R(n) = Kn^{\beta}$$

where $V_R(n)$ is the portion of the vocabulary $V$ represented by the instance text of size $n$. $K$ and $\beta$ are free parameters determined empirically.
With English text corpora, typically $K$ is between 10 and 100, and $\beta$ is between 0.4 and 0.6.
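As a minimal sketch, the growth predicted by Heaps’ law can be computed directly. The values $K = 10$ and $\beta = 0.5$ below are illustrative picks from the typical English-text ranges above, not measurements of any particular corpus:

```python
def heaps_vocabulary(n, K=10.0, beta=0.5):
    """Predicted number of distinct terms V_R(n) = K * n**beta
    for an instance text of n total words (K and beta are
    illustrative values from the typical English-text ranges)."""
    return K * n ** beta

# Sublinear growth: with beta = 0.5, quadrupling the text size
# only doubles the predicted vocabulary.
print(heaps_vocabulary(10_000))   # 10 * sqrt(10000) = 1000.0
print(heaps_vocabulary(40_000))   # 10 * sqrt(40000) = 2000.0
```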
[Figure: plot of Heaps’ law (generated by GNU Octave and gnuplot)]
Heaps’ law means that as more instance text is gathered, there will be diminishing returns in terms of discovery of the full vocabulary from which the distinct terms are drawn.
Note that Heaps’ law applies in the general case where the “vocabulary” is just some set of distinct types which are attributes of some collection of objects. For example, the objects could be people, and the types could be each person’s country of origin. If persons are selected randomly (that is, we are not selecting based on country of origin), then Heaps’ law says we will quickly have representatives from most countries (in proportion to their population), but it will become increasingly difficult to cover the entire set of countries by continuing this method of sampling.
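A small simulation illustrates this sampling behaviour. The 200 types and the Zipf-like weights below are hypothetical stand-ins for countries and their populations; the point is only that early samples discover new types quickly while later ones rarely do:

```python
import random

random.seed(42)

# 200 hypothetical types ("countries") with Zipf-like weights
# standing in for population sizes; the skew is an assumption.
num_types = 200
types = list(range(num_types))
weights = [1.0 / (rank + 1) for rank in types]

seen = set()
new_per_block = []        # new types discovered in each block of samples
block_size = 250
for _ in range(4):        # 4 blocks = 1000 samples total
    before = len(seen)
    seen.update(random.choices(types, weights=weights, k=block_size))
    new_per_block.append(len(seen) - before)

print(new_per_block)                  # discoveries taper off block by block
print(len(seen), "of", num_types)     # the rarest types remain uncovered
```

The first block discovers far more new types than the last, and even after 1000 samples the full set of 200 types is not exhausted, matching the diminishing-returns picture above.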
Date of creation: 2013-03-22 13:01:56
Last modified on: 2013-03-22 13:01:56
Last modified by: akrowne