Ralph Grishman

New York University


Much of the information on the Web is encoded as text, in a form which is easy for people to use but hard for computers to manipulate. The role of information extraction is to make the structure of this information explicit, by creating database entries capturing specified types of entities, relations, and events in the text. We consider some of the challenges of information extraction and how they have been addressed. In particular, we consider what knowledge is required and how the means for creating this knowledge have developed over the past decade, shifting from hand-coded rules to supervised learning methods and now to semi-supervised and unsupervised techniques.
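To make the abstract's notion concrete, here is a minimal sketch of the kind of hand-coded extraction rule the talk describes as the starting point of the field: a single pattern that turns free text into a structured record of two entities and a relation. The pattern, relation name, and record format are illustrative assumptions, not taken from the talk.

```python
import re

# Toy hand-coded rule (illustrative assumption, not from the talk):
# match "<Person> works for <Organization>" and emit a structured record.
PATTERN = re.compile(
    r"(?P<person>[A-Z][a-z]+ [A-Z][a-z]+) works for (?P<org>[A-Z][A-Za-z ]+)"
)

def extract_employment(text):
    """Return (person, relation, organization) tuples found by the rule."""
    return [
        (m.group("person"), "employed_by", m.group("org").strip())
        for m in PATTERN.finditer(text)
    ]

records = extract_employment("Alice Smith works for Acme Corp.")
print(records)  # [('Alice Smith', 'employed_by', 'Acme Corp')]
```

The brittleness of such rules (every phrasing of the relation needs its own pattern) is exactly what motivated the shift, described in the abstract, toward supervised and then semi-supervised learning of extraction knowledge.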


Date: 2009-May-25     Time: 14:00:00     Room: 336
