WNImage

From CSWiki

Revision as of 20:18, 8 May 2006

Todo

  • Make the results make more sense
  • Put results on the web
  • Figure out some way to normalize to prevent "over-generalization"

CVS Access

The WNImage tools are in the repository under wnimage. The repository is named wnp. To access it, follow the instructions at [1].

Files

CVS Files

  • gen_Xsynsets.py - generates the Xsynset database file for a given list of words on stdin. The db gets pickled to Xsynsetdb.pkl. The max_depth parameter specifies how many links to follow. It currently follows only hypernyms, and it crawls all senses of each word.
  • gen_weighted_Xsynsets.py - generates the wXsynset database file from all the Xsynsets in Xsynsetdb.pkl and outputs the new pickled db as wXsynsetdb.pkl. Using alpha=0.5, it computes a weighted Xsynset, i.e. one which simply has a numerical value assigned to each word. The weight is currently alpha^path_length from source to target.
  • gen_caption_vects.py - generates a db of the weighted (alpha=0.5) Xsynset for each image caption. Pass in the captions file as the first parameter. You need to run this after gen_Xsynsets.py, and it expects the result of that to be named Xsynsetdb.pkl. Note: this generates a pretty large file and doesn't save much time, so it might be scrapped.
  • rank_caption_synsets.py - ranks the top N (second parameter) synsets for all the captions. The captions file is the first parameter.
  • extract_caption_words.sh - extracts and uniquifies all the words in the captions of a captions file. Input on stdin and output on stdout.
  • Xsynsettools.py - a library for utility functions relating to Xsynsets. Currently just has a function to generate Xsynsets.
  • similarity.py - a library for similarity computations. Currently just has cosine similarity.
  • captionstools.py - a library for utility functions relating to image caption manipulation (i.e. reading, vector extraction, etc.)
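As a rough illustration of the pipeline the tools above implement, here is a self-contained sketch: a toy hypernym dictionary stands in for WordNet, a breadth-first crawl plays the role of gen_Xsynsets.py, weighting by alpha^path_length mirrors gen_weighted_Xsynsets.py, and cosine similarity matches similarity.py. The graph, function names, and data shapes are illustrative assumptions, not the actual repository code.

```python
from collections import deque
from math import sqrt

# Toy hypernym graph standing in for WordNet (assumption: the real
# gen_Xsynsets.py follows WordNet hypernym links instead of this dict).
HYPERNYMS = {
    "dog": ["canine"],
    "canine": ["carnivore"],
    "carnivore": ["animal"],
    "cat": ["feline"],
    "feline": ["carnivore"],
}

def gen_xsynset(word, max_depth=3):
    """Breadth-first crawl of hypernym links up to max_depth,
    recording every (type, path) that reached each node."""
    xsynset = {word: [("hypernym", [word])]}
    frontier = deque([(word, [word])])
    while frontier:
        node, path = frontier.popleft()
        if len(path) > max_depth:
            continue
        for parent in HYPERNYMS.get(node, []):
            new_path = path + [parent]
            xsynset.setdefault(parent, []).append(("hypernym", new_path))
            frontier.append((parent, new_path))
    return xsynset

def weight_xsynset(xsynset, alpha=0.5):
    """Collapse an Xsynset to {node: alpha**path_length}, keeping the
    shortest (highest-weight) path per node."""
    weighted = {}
    for node, typed_paths in xsynset.items():
        shortest = min(len(path) - 1 for _, path in typed_paths)
        weighted[node] = alpha ** shortest
    return weighted

def cosine(u, v):
    """Cosine similarity between two sparse word->weight vectors."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

dog = weight_xsynset(gen_xsynset("dog"))
cat = weight_xsynset(gen_xsynset("cat"))
print(dog["dog"], dog["canine"])      # 1.0 0.5
print(round(cosine(dog, cat), 3))     # 0.059 (overlap only at carnivore/animal)
```

Ranking the top N synsets for a caption (as rank_caption_synsets.py does) then reduces to computing this cosine between the caption's weighted vector and each candidate and sorting.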

Experimental Files

These are large data files (too large and/or time-consuming to put in everybody's CVS). All files are on psy-build2 at /wnimage.

  • captions - This is the captions file that has been dos2unix-ified.
  • captionwords - A sorted and uniquified list of all the words that occur in the captions file.
  • Xsynsetdb.pkl - This is the database of Xsynsets generated using gen_Xsynsets.py. You should create a symlink from this file to your working dir.
  • wXsynsetdb.pkl - This is the database of weighted Xsynsets generated using gen_weighted_Xsynsets.py. You should create a symlink from this file to your working dir.
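Assuming you are working on psy-build2 (or can otherwise see the /wnimage directory), the symlinks can be created like this; the exact paths follow the description above:

```shell
# Link the shared pre-generated databases into the current working
# directory instead of copying them (they are large).
ln -s /wnimage/Xsynsetdb.pkl .
ln -s /wnimage/wXsynsetdb.pkl .
```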

The results subdirectory contains working results for experiments.

  • top5synsets - The top 5 synsets for each image caption using Xsynsetdb and cosine similarity.

Notes

5/5/06

  • The principal goal (or first milestone) of this project is to use Xsynsets to rank the synsets associated with a given image.
  • Each Xsynset will be implemented in Python as a dictionary. In summary, an Xsynset uses the following structures:
    • synset - the synset number within WordNet.
    • path - a list of synsets (starting node to ending node).
    • typed_path - a tuple where the first element is a WordNet connection type and the second is a path. This represents a path through WordNet where all the traversed edges are of the type specified in the first element of the tuple.
    • entry - a dictionary entry where the key is a synset and the value is a list of typed_paths. The typed_paths are all those paths which go from the Xsynset's generator synset to the given target synset while traversing only one type of connection.
    • Xsynset - a list of entries. If a synset does not appear as any key in the Xsynset, then it cannot be reached from the generator synset within the threshold number of steps.
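Laid out concretely, the structures above might look like this in Python; the synset ids are illustrative placeholders, not real WordNet numbers:

```python
# One hypothetical Xsynset fragment for the generator synset "dog.n.01".
path = ["dog.n.01", "canine.n.02", "carnivore.n.01"]   # path: list of synsets
typed_path = ("hypernym", path)                        # (connection type, path)
entry = {"carnivore.n.01": [typed_path]}               # target synset -> typed_paths
xsynset = [entry]                                      # Xsynset: list of entries

# Every typed_path in an entry runs from the generator synset ("dog.n.01")
# to the entry's key, traversing edges of a single connection type.
target, paths = next(iter(entry.items()))
print(target, paths[0][0], len(paths[0][1]) - 1)  # carnivore.n.01 hypernym 2
```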

5/3/06

  • JBG has done some basic disambiguation using Lesk.

Personal Work Notes