Parts 1-3 present and criticize Partee and Kamp's well-known 1995 analysis of typicality effects. The main virtue of this analysis is its use of supermodels, rather than fuzzy models, to represent vagueness in predicate meaning. Its main problem is that the typicality of an item with respect to a predicate is represented by a value assigned by a measure function, indicating the proportion of supervaluations in which the item falls under the predicate. A number of phenomena cannot be correctly represented by this measure function, including typicality effects in sharp predicates, the conjunction fallacy, and the context dependence of typicality effects. In Parts 4-5, it is argued that these classical problems are solved if the typicality ordering is taken to be the order in which entities are learnt to be denotation members (or non-members) through contexts and their extensions. A modified formal model is presented, which clarifies the connections between typicality effects, predicate meaning, and the acquisition of that meaning.
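
For concreteness, the measure-function representation criticized above can be sketched as follows (the notation is illustrative, not Partee and Kamp's own): let S be the set of supervaluations (completions) of the partial base model, m a probability measure over S, and \(\llbracket P \rrbracket_s\) the extension of the predicate P in supervaluation s. The typicality of an item d in P is then

\[
\mathrm{typ}_P(d) \;=\; m\bigl(\{\, s \in S : d \in \llbracket P \rrbracket_s \,\}\bigr),
\]

that is, the measure of the set of supervaluations in which d falls under P.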