We guarantee that all of our online courses will meet or exceed your
expectations. If you are not fully satisfied with a course, for any
reason at all, simply request a full refund. No hassles; that's our
promise to you.
Go ahead and order with confidence!
**Unit 01: Getting an Idea of NLP and Its Applications**
- Module 01: Introduction to NLP
- Module 02: By the End of This Section
- Module 03: Installation
- Module 04: Tips
- Module 05: U – Tokenization
- Module 06: P – Tokenization
- Module 07: U – Stemming
- Module 08: P – Stemming
- Module 09: U – Lemmatization
- Module 10: P – Lemmatization
- Module 11: U – Chunks
- Module 12: P – Chunks
- Module 13: U – Bag of Words
- Module 14: P – Bag of Words
- Module 15: U – Category Predictor
- Module 16: P – Category Predictor
- Module 17: U – Gender Identifier
- Module 18: P – Gender Identifier
- Module 19: U – Sentiment Analyzer
- Module 20: P – Sentiment Analyzer
- Module 21: U – Topic Modeling
- Module 22: P – Topic Modeling
- Module 23: Summary

**Unit 02: Feature Engineering**
- Module 01: Introduction
- Module 02: One-Hot Encoding
- Module 03: Count Vectorizer
- Module 04: N-grams
- Module 05: Hash Vectorizing
- Module 06: Word Embedding
- Module 07: FastText

**Unit 03: Dealing with Corpora and WordNet**
- Module 01: Introduction
- Module 02: In-Built Corpora
- Module 03: External Corpora
- Module 04: Corpora & Frequency Distribution
- Module 05: Frequency Distribution
- Module 06: WordNet
- Module 07: WordNet with Hyponyms and Hypernyms
- Module 08: The Average According to WordNet

**Unit 04: Create Your Vocabulary for Any NLP Model**
- Module 01: Introduction and Challenges
- Module 02: Building Your Vocabulary, Part 01
- Module 03: Building Your Vocabulary, Part 02
- Module 04: Building Your Vocabulary, Part 03
- Module 05: Building Your Vocabulary, Part 04
- Module 06: Building Your Vocabulary, Part 05
- Module 07: Tokenization Dot Product
- Module 08: Similarity Using the Dot Product
- Module 09: Reducing the Dimensions of Your Vocabulary Using Token Improvement
- Module 10: Reducing the Dimensions of Your Vocabulary Using N-grams
- Module 11: Reducing the Dimensions of Your Vocabulary Using Normalization
- Module 12: Reducing the Dimensions of Your Vocabulary Using Case Normalization
- Module 13: When to Use Stemming and Lemmatization?
- Module 14: Sentiment Analysis Overview
- Module 15: Two Approaches to Sentiment Analysis
- Module 16: Rule-Based Sentiment Analysis
- Module 17: Sentiment Analysis Using Machine Learning – 1
- Module 18: Sentiment Analysis Using Machine Learning – 2
- Module 19: Summary

**Unit 05: Word2Vec in Detail, and What Is Going On Under the Hood**
- Module 01: Introduction
- Module 02: Bag of Words in Detail
- Module 03: Vectorizing
- Module 04: Vectorizing and Cosine Similarity
- Module 05: Topic Modeling in Detail
- Module 06: Making Your Vectors Better Reflect the Meaning, or Topic, of the Document
- Module 07: Sklearn in Brief
- Module 08: Summary

**Unit 06: Find and Represent the Meaning or Topic of Natural Language Text**
- Module 01: Keyword Search vs. Semantic Search
- Module 02: Problems with TF-IDF Lead to Semantic Search
- Module 03: Transforming TF-IDF Vectors into Topic Vectors Under the Hood