Words are fundamental components of human language, but their meanings tend to change over time; e.g., face ('body part' -> 'facial expression'), gay ('happy' -> 'homosexual'), mouse ('rodent' -> 'device'). Changes like these make it challenging for computers to learn accurate representations of word meaning, a task that is crucial to natural language systems. This course explores data-driven computational approaches to representing word meaning and modeling semantic change. Topics include latent models of word meaning (e.g., LSA, word2vec), corpus-based detection of semantic change, probabilistic diachronic models of word meaning, and cognitive mechanisms of word sense extension (e.g., chaining, metaphor). The course involves a strong hands-on component focused on large-scale text analyses, along with seminar-style presentations.
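To give a flavour of the hands-on work, the sketch below illustrates one simple corpus-based approach to detecting semantic change with word2vec. It is not part of the course materials: it assumes the gensim implementation of word2vec and uses toy stand-ins for time-sliced corpora, training separate embeddings on an "early" and a "late" slice and comparing the nearest neighbours of mouse as a rough signal of change.

```python
# Minimal sketch (assumptions: gensim's word2vec; toy corpora in place of
# real diachronic data such as decade-sliced historical text).
from gensim.models import Word2Vec

early_corpus = [
    ["the", "mouse", "ran", "into", "the", "field"],
    ["a", "cat", "chased", "the", "mouse", "through", "the", "barn"],
    ["the", "small", "rodent", "hid", "under", "the", "floor"],
]
late_corpus = [
    ["click", "the", "mouse", "to", "open", "the", "file"],
    ["plug", "the", "mouse", "into", "the", "computer"],
    ["the", "cursor", "moves", "when", "you", "drag", "the", "mouse"],
]

def neighbours(corpus, word, topn=3):
    """Train word2vec on one time slice and return the word's nearest neighbours."""
    model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, seed=0)
    return {w for w, _ in model.wv.most_similar(word, topn=topn)}

early = neighbours(early_corpus, "mouse")
late = neighbours(late_corpus, "mouse")

# Low overlap between the neighbour sets of the two time slices is one
# simple (and noisy) indicator that the word's usage has shifted.
overlap = len(early & late) / len(early | late)
print(early, late, f"neighbour overlap = {overlap:.2f}")
```

On corpora this small the embeddings are essentially noise, so the output is only illustrative; the same neighbour-comparison idea applied to large diachronic corpora is one of several detection methods of the kind discussed in the course.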