The routines implemented are CLUSTER, which offers 8 options for
hierarchical clustering, and PARTITION, which carries out
non-hierarchical clustering. We will look at the hierarchical options
first.
The automatic classification of the n row-objects of an n by m
table generally produces output in one of two forms: the assignments
to clusters found for the n objects; or a series of clusterings of
the n objects, from the initial situation, when each object may be
considered a singleton cluster, to the other extreme, when all objects
belong to one cluster. The former is non-hierarchical clustering, or
partitioning.
The latter is hierarchical clustering. Brief consideration will show
that a sequence of n-1 agglomerations is needed to successively
merge the two closest objects and/or clusters at each stage, so that
we obtain a set of n (singleton) clusters, n-1 clusters, ...,
2 clusters, 1 cluster. This is usually represented by a hierarchic
tree or dendrogram, and a "slice" through the dendrogram
defines a partition of the objects. Unfortunately, no rigid guideline
can be given for deriving such a partition from a dendrogram,
except that large increases in the cluster criterion values (which scale
the dendrogram) can indicate a partition of interest.
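As an illustration, here is a minimal sketch of building a dendrogram
and slicing it into a partition, using SciPy (which is not part of
these routines) on invented data; the Ward method used here
corresponds to the minimum variance criterion described below.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 4))      # n = 30 objects, m = 4 variables

    # The n-1 agglomerations are recorded in the (n-1) by 4 linkage
    # matrix Z; column 2 holds the criterion value at each merge.
    Z = linkage(X, method="ward")

    # "Slice" the dendrogram: requesting 3 clusters cuts the tree at
    # the corresponding height and yields a partition of the objects.
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(labels)                     # cluster assignment of each object

One might inspect np.diff(Z[:, 2]) for large jumps in the criterion
values when deciding where to slice.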
In carrying out the sequence of agglomerations, various criteria are
feasible for defining the newly-constituted cluster; a short sketch
comparing them follows the list:
- The minimum variance criterion (method MVAR) constructs clusters
which are of minimal variance internally (i.e. compact) and of maximal
variance externally (i.e. isolated). It is useful for synoptic
clustering, and for all clustering work where another method cannot be
explicitly justified.
- The minimum variance hierarchy (method MNVR): all options, with the
exception of MNVR, construct a set of Euclidean distances from the
input set of n vectors, so the internal storage required is large.
Option MNVR allows a minimum variance hierarchy (identical to that of
option MVAR) to be obtained without requiring storage of the
distances; its computational time is slightly higher than that of
MVAR.
- The single link method (method SLNK) often gives a very skew or
"chained" hierarchy. It is therefore not useful for summarising data,
but may indicate very anomalous or outlying objects; these will be
among the last to be agglomerated in the hierarchy.
- The complete link method (method CLNK) often does not differ
unduly from the minimum variance method, but its restrictive criterion
is not suitable if the data are noisy.
- The average link method (method ALNK) is a reasonable
compromise between the (lax) single link method and the (rigid)
complete link criterion. All of these methods may be of interest if a
graph representation of the clustering results is desired.
- The weighted average link method (method WLNK) does not take the
relative sizes of clusters into account when agglomerating them. This
method and the two following ones are included for completeness and
for consistency with other software packages, but are not recommended
for general use.
- The median method (method MEDN) replaces a cluster, on
agglomeration, with the median value. The criterion values are not
guaranteed to vary monotonically, which may complicate the
interpretation of the dendrogram representation.
- The centroid method (method CNTR) replaces a cluster, on
agglomeration, with the centroid value. As with the median method,
reversals or inversions in the hierarchy are possible.
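The following sketch (again SciPy, not the CLUSTER routine itself)
compares the hierarchies these criteria produce on the same data; the
mapping of the option names onto SciPy's linkage methods is an assumed
correspondence.

    import numpy as np
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 3))

    # Assumed mapping of the options above onto SciPy linkage methods;
    # MVAR and MNVR both realise the minimum variance (Ward) criterion.
    methods = {
        "MVAR/MNVR": "ward",     # minimum variance
        "SLNK": "single",        # single link
        "CLNK": "complete",      # complete link
        "ALNK": "average",       # average link
        "WLNK": "weighted",      # weighted average link
        "MEDN": "median",        # median (reversals possible)
        "CNTR": "centroid",      # centroid (reversals possible)
    }
    for name, method in methods.items():
        Z = linkage(X, method=method)
        # Z[-1, 2] is the criterion value of the final agglomeration
        print(f"{name:10s} final merge criterion = {Z[-1, 2]:.3f}")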
The Minimal Spanning Tree, which is closely related to the single link
method, has been used in such applications as interferogram analysis
and galaxy clustering studies. It is useful as a detector of
outlying data points (i.e. anomalous objects).
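A minimal sketch of such MST-based outlier detection, assuming SciPy
and invented two-dimensional data with one planted outlier:

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(2)
    # 25 ordinary points plus one anomalous object far from the rest
    X = np.vstack([rng.normal(size=(25, 2)), [[8.0, 8.0]]])

    D = squareform(pdist(X))        # full Euclidean distance matrix
    mst = minimum_spanning_tree(D)  # sparse matrix of the n-1 MST edges

    # An outlying object is attached to the rest of the tree by an
    # unusually long edge, so the longest MST edge flags it.
    edges = mst.tocoo()
    k = np.argmax(edges.data)
    print("longest edge:", edges.row[k], "-", edges.col[k],
          "length", round(edges.data[k], 2))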
Routine PARTITION offers two options. For both, a
partition of minimum variance, given the number of clusters, is
sought. Two iterative refinement algorithms (the minimum distance
method and the exchange method) constitute the options available.
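The minimum distance option is essentially the familiar iterative
reassign-and-recompute scheme sketched below in plain NumPy; the
function name and the random initialisation are illustrative, not
taken from PARTITION, and the exchange method is not shown.

    import numpy as np

    def min_distance_partition(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # Initialise the k cluster centres with random objects
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Assign each object to its nearest cluster centre ...
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # ... then recompute the centres; the total within-cluster
            # variance cannot increase, so the refinement converges.
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(loc=c, size=(15, 2)) for c in (0.0, 5.0)])
    labels, centers = min_distance_partition(X, k=2)
    print(labels)                  # partition of (locally) minimum variance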
Petra Nass
1999-06-15