AI for Natural Language Processing (NLP) – Introduction

Prepare text data and build introductory NLP models for classification, topic modelling, and text generation.

Audience: Developers · Data Analysts · Data Scientists

Duration: 3 days

Format: Lectures & hands-on labs (50% / 50%)

Overview

Modern NLP techniques let us understand and generate text at scale. This course starts with classic techniques (tokenisation, TF-IDF, Naïve Bayes) and advances to deep-learning models (RNNs, LSTMs, Transformers) using libraries such as NLTK, spaCy, TensorFlow, and Hugging Face.
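To give a flavour of the classic techniques the course opens with, here is a dependency-free sketch of a multinomial Naïve Bayes text classifier. The toy corpus, labels, and `tokenise` helper below are hypothetical illustrations; the course labs use NLTK and spaCy rather than this hand-rolled tokeniser.

```python
import math
import re
from collections import Counter, defaultdict

def tokenise(text):
    """Lower-case and split on runs of letters (toy stand-in for NLTK/spaCy)."""
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical toy training set: (document, label) pairs
train = [
    ("great movie loved the acting", "pos"),
    ("wonderful plot and great cast", "pos"),
    ("terrible movie boring plot", "neg"),
    ("awful acting and boring story", "neg"),
]

# Per-class document counts and word-frequency tables
class_docs = Counter()
word_counts = defaultdict(Counter)
for doc, label in train:
    class_docs[label] += 1
    word_counts[label].update(tokenise(doc))

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    tokens = tokenise(text)
    best_label, best_score = None, float("-inf")
    for label in class_docs:
        # log prior + sum of smoothed log likelihoods
        score = math.log(class_docs[label] / sum(class_docs.values()))
        total = sum(word_counts[label].values())
        for tok in tokens:
            score += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("loved the plot"))  # → pos
```

In the labs this same pipeline is built with library tooling, but the add-one smoothing and log-probability arithmetic shown here are exactly what those libraries compute under the hood.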

What You Will Learn

  • Text preprocessing (stemming, tokenisation, stop-word removal)
  • Bag-of-Words, TF-IDF, word-frequency techniques
  • Visualising text data & word clouds
  • Naïve Bayes & SVM for text classification
  • Word embeddings & Word2Vec
  • Topic modelling with Gensim
  • Deep learning for NLP: RNNs, LSTMs, and Transformers; pretrained language models (ELMo, ULMFiT, BERT)
  • Text generation with TensorFlow
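Two of the techniques listed above, Bag-of-Words and TF-IDF, can be sketched in plain Python. The three toy documents below are hypothetical; the labs build the same representations with library vectorisers.

```python
import math
from collections import Counter

# Hypothetical toy corpus
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs",
]
tokenised = [d.split() for d in docs]

# Bag-of-Words: raw term counts per document
bows = [Counter(tokens) for tokens in tokenised]

# Inverse document frequency: rarer terms across the corpus score higher
vocab = sorted({w for tokens in tokenised for w in tokens})
idf = {w: math.log(len(docs) / sum(1 for t in tokenised if w in t)) for w in vocab}

def tfidf(bow):
    """Term frequency (normalised count) weighted by inverse document frequency."""
    total = sum(bow.values())
    return {w: (c / total) * idf[w] for w, c in bow.items()}

print(tfidf(bows[0]))
```

Note how the common word "the" gets a low weight while "cat", which appears in only one document, is weighted up: this is the intuition behind TF-IDF that the course develops.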

Course Details

Prerequisites:
  • Programming background
  • Basic Python & Jupyter notebooks

Setup: Cloud-based lab (Google Colab recommended) · Laptop · Chrome

Detailed Outline

  • AI vocabulary
  • ML types
  • Hardware / software ecosystem
  • Filtering & stop-words
  • Stemming & tokenisation
  • Word clouds
  • Unicode handling
  • Lab
  • N-grams
  • Bag-of-Words
  • Vectorising text
  • Lab
  • Naïve Bayes
  • SVM
  • Lab
  • LDA
  • Gensim
  • Lab
  • Word embeddings
  • RNN & LSTM
  • Transformers & attention
  • Lab: text generation
  • Intro to Rasa framework
  • Group exercise on real dataset
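As a taste of the text-generation lab, here is a minimal bigram (n-gram) sketch in plain Python: it learns which word follows which and samples a chain. The toy corpus and `generate` helper are hypothetical, and the lab itself builds this idea with TensorFlow RNN/LSTM models rather than a lookup table.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus
corpus = "the cat sat on the mat and the dog sat on the log".split()

# Bigram model: map each word to the list of words observed after it
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Sample a chain of words; fall back to any corpus word at a dead end."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = follows.get(out[-1]) or corpus
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

A neural language model replaces this frequency table with learned probabilities over a much longer context, but the generation loop (predict the next token, append, repeat) is the same.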

Ready to Get Started?

Contact us to learn more about this course and schedule your training.