2017 Seminar

Recently, artificial intelligence and deep learning have come to occupy a central role in applications. Despite their effectiveness, it is often hard to interpret the inner representation of data these systems produce. We will present one of the most popular architectures for generating word embeddings: a geometric representation of words that depends on the context in which they occur in a given dataset. We will then use this model to analyse the semantic shift of words when they are used in two different contexts. In particular, we will show how t-distributed stochastic neighbour embedding (t-SNE) can provide a reasonable low-dimensional representation of word embeddings, allowing us to explore their most "persistent" regions through topological methods.

Keywords: artificial intelligence, lyrics, word embedding, semantic shift

75-minute talk, 45-minute discussion.
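A minimal sketch of the dimensionality-reduction step mentioned above, using scikit-learn's t-SNE. The `emb` array of random vectors is a hypothetical stand-in for real word embeddings (which, in the talk's setting, would come from a word-embedding model trained on a corpus such as song lyrics); all parameter values are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for word embeddings: 50 "words", each a 100-dimensional vector.
# In practice these rows would be vectors learned from a corpus.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 100))

# Project to 2D with t-SNE. Perplexity must be smaller than the number
# of points, so it is lowered for this tiny example.
proj = TSNE(n_components=2, perplexity=10.0, random_state=0).fit_transform(emb)

# proj now holds one 2D point per word, suitable for plotting and for
# inspecting which neighbourhoods remain stable across contexts.
```

The resulting 2D points can then be plotted and their dense, stable clusters inspected; identifying the abstract's "persistent" regions would additionally require topological tools applied to such a projection.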
