Natural language processing



Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.
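As an illustration of LSTM-based language modeling, here is a minimal sketch in PyTorch. The class name, layer sizes, and dummy batch below are assumptions made for the example, not a reference implementation:

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Toy model: predicts the next token at each position of a sequence."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        x = self.embed(tokens)            # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)               # (batch, seq_len, hidden_dim)
        return self.out(h)                # logits over the vocabulary

vocab_size = 10_000
model = LSTMLanguageModel(vocab_size)
tokens = torch.randint(0, vocab_size, (4, 20))   # dummy token batch
logits = model(tokens)                           # (4, 20, 10000)
# Standard next-token objective: position t predicts token t+1.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1))
```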


Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, categorizing clinical narratives, and others.
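As a concrete sketch, the gensim library can train a word2vec embedding with negative sampling and then query the resulting vector space. The toy corpus and hyperparameters below are arbitrary choices for the example:

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens.
sentences = [
    ["deep", "learning", "improves", "machine", "translation"],
    ["word", "embeddings", "map", "words", "to", "vectors"],
    ["similar", "words", "get", "nearby", "vectors"],
]

# sg=1 selects the skip-gram model; negative=5 enables negative
# sampling with five noise words per positive example.
model = Word2Vec(sentences, vector_size=50, window=2,
                 min_count=1, sg=1, negative=5, epochs=50)

vec = model.wv["words"]          # the word's point in the vector space
print(vec.shape)                 # (50,)
print(model.wv.most_similar("words", topn=3))
```

On a real corpus, nearby points in this space correspond to words used in similar contexts, which is what makes the embedding useful as an input layer for downstream networks.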


Google Translate (GT) uses a large end-to-end long short-term memory network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system learns from millions of examples. It translates whole sentences at a time, rather than pieces. Google Translate supports over one hundred languages. The network encodes the semantics of the sentence rather than memorizing phrase-to-phrase translations. GT can translate directly from one language to another, rather than using English as an intermediate.
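GNMT itself is a large production system; the following is only a simplified sketch of the encoder-decoder idea it builds on, in which an LSTM encodes the whole source sentence before the decoder produces any target token. Attention, beam search, and subword vocabularies are omitted, and all names and sizes here are illustrative:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Toy encoder-decoder: the encoder reads the entire source
    sentence before the decoder emits the first target token."""
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the whole source sentence into a final (h, c) state.
        _, state = self.encoder(self.src_embed(src))
        # Decode conditioned on that sentence-level representation.
        h, _ = self.decoder(self.tgt_embed(tgt), state)
        return self.out(h)

model = Seq2Seq(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (2, 12))   # source token ids
tgt = torch.randint(0, 8000, (2, 10))   # target ids (teacher forcing)
logits = model(src, tgt)                # (2, 10, 8000)
```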







