# SimilarityEM


## Latest revision as of 22:03, 24 November 2006

The idea of this page is to create an iterative algorithm that assigns synsets to words in some corpus (probably by looking at limited windows) so as to maximize the sum of all pairwise values of some similarity function <math>S_{i,j}</math> over synsets <math>i</math> and <math>j</math>. We then recompute <math>S_{i,j}</math> for the synsets we have so as to maximize its value under the current synset assignments, and continue alternating until the similarities become stable.
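One way this alternation could be sketched in Python. Everything here beyond the page's description is an assumption: the function and variable names are hypothetical, the E-step does greedy per-occurrence reassignment within each window, and the M-step re-estimates <math>S_{i,j}</math> as normalized co-assignment frequency, which is only one plausible reading of "recompute <math>S_{i,j}</math>":

```python
from collections import defaultdict

def similarity_em(windows, word_senses, init_S, n_iters=10):
    """Hypothetical EM-style sketch of the alternation described above.
    E-step: reassign each word occurrence the synset that maximizes the
            summed pairwise similarity S with the other synsets in its window.
    M-step: re-estimate S_{i,j} from how often i and j are co-assigned
            in the same window (an assumption; the page leaves this open)."""
    S = dict(init_S)
    # Start from each word's first listed sense. This is a placeholder;
    # the page suggests seeding from empirical evocation data instead.
    assignments = [[word_senses[w][0] for w in win] for win in windows]
    for _ in range(n_iters):  # or loop until S stops changing
        # E-step: greedy per-occurrence reassignment
        for k, win in enumerate(windows):
            for i, w in enumerate(win):
                context = [assignments[k][j] for j in range(len(win)) if j != i]
                assignments[k][i] = max(
                    word_senses[w],
                    key=lambda s: sum(S.get((s, c), 0.0) + S.get((c, s), 0.0)
                                      for c in context))
        # M-step: S_{i,j} proportional to co-assignment counts
        counts = defaultdict(float)
        for syns in assignments:
            for i in range(len(syns)):
                for j in range(i + 1, len(syns)):
                    counts[(syns[i], syns[j])] += 1.0
        total = sum(counts.values()) or 1.0
        S = {pair: c / total for pair, c in counts.items()}
    return assignments, S
```

With a toy similarity seed linking one sense of "bank" to "money" and the other to "river", the two occurrences of "bank" end up with different synsets, which is the behavior the iteration is meant to produce.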

We would have to be careful not to simply assign all words to a single highly evocative synset, and instead make sure that all senses are appropriately represented. This would require a careful starting point, perhaps seeding from the empirical evocation data we already have and conserving evocation across the assignments.

A part of Wordnet_plus