Term only SS (summer term)
Time Monday, 11.45h-13.15h and 14.15h-15.45h
Room 135
Credits 3 SWS / 5 ECTS
Exam Lab Work


  • First lesson in term SS 20: 20.04.2020

Important notice for the summer term 2020: The course will initially be offered as a synchronous distance-learning course during the SARS-CoV-2-related restrictions in term SS 20. I will post the corresponding Zoom link to the group of registered users in the Persönlicher Stundenplan before 20.04.2020. The timetable according to Starplan will apply. If and when we may meet in lecture halls again, the room indicated in Starplan will apply.

Prerequisites for participation

  • Successful completion of either the CSM Machine Learning Lecture or the CSM Object Recognition Lecture

Structure, Contents, Documents

In this lab, student groups implement selected applications from Artificial Intelligence, Machine Learning, Object Recognition, and Natural Language Processing. All applications must be implemented in Python Jupyter notebooks. The Python machine-learning libraries scikit-learn, TensorFlow, and Keras will be applied.

I recommend downloading Anaconda for Python 3.7 or 3.8. Then create a new virtual environment with conda create -n pia python=3.6 anaconda. In this virtual environment, use pip install to install tensorflow, keras, gensim, and the other required modules.
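Collected as a shell session, the setup described above looks roughly like this (the environment name pia and the Python version follow the command given; the package list may grow per exercise):

```shell
# create and activate a dedicated conda environment for the lab
conda create -n pia python=3.6 anaconda
conda activate pia

# install the deep learning and NLP packages used in the exercises
pip install tensorflow keras gensim
```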

Each of the lab exercises (applications) will be graded. The final grade of the course is the mean of the exercise grades.

Gitlab Repo for all resources

Lecture Contents Document Links
20.04.2020 Short Introduction and Registration
27.04.2020 Data Mining Process in Scikit-Learn Instruction [.html], Instruction [.ipynb], churnPrediction Data, Pics Folder (the Pics archive must be decompressed and saved in the same directory as the notebook)
04.05.2020 Introduction to Tensorflow Instruction [.html], Instruction [.ipynb]
11.05.2020 Generative Adversarial Networks (GAN) in Tensorflow Instruction [.html], Instruction [.ipynb]
18.05.2020 Generative Adversarial Networks (GAN) in Tensorflow
25.05.2020 Word Embeddings and Deep Neural Networks for Document Classification (Keras) Instruction [.html], Instruction [.ipynb], NewsFeed Data, Pics Folder (the Pics archive must be decompressed and saved in the same directory as the notebook)
08.06.2020 Reinforcement Learning (1) Instruction [.html], Instruction [.ipynb], Instruction [.html], Instruction [.ipynb], RL_pics Folder (the Pics archive must be decompressed and saved in the same directory as the notebook)
15.06.2020 Reinforcement Learning (2) Instruction Q-Learning [.html], Instruction Q-Learning [.ipynb], RL_pics Folder
22.06.2020 Deep Reinforcement Learning with Tensorflow Instruction DQN [.html], Instruction DQN [.ipynb], RL_pics Folder
29.06.2020 Deep Reinforcement Learning with Tensorflow
06.07.2020 Music Generation (Tensorflow) Instruction RNN Sequence Prediction [.html], Instruction RNN Sequence Prediction [.ipynb]

Description of the Lab-Exercises

1. Introduction to the Machine Learning Process

In this lab exercise, the entire data-mining process is implemented using a churn-prediction example:

  • Access and clean data
  • Understand data: Descriptive statistics and visualization
  • Transformation of categorical attributes
  • Feature selection and extraction
  • Normalisation and Scaling
  • Test, evaluate and optimize learned models
  • Performance measures and visualization of performance
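As an illustration, several of the steps above (transformation of categorical attributes, scaling, model fitting, evaluation) can be sketched as a scikit-learn pipeline. The data here is a synthetic stand-in for the churnPrediction set; the column names and the churn rule are invented for the example:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# hypothetical stand-in for the churn data: one categorical, two numeric attributes
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "contract": rng.choice(["monthly", "yearly"], n),
    "usage": rng.normal(50, 15, n),
    "tenure": rng.integers(1, 72, n),
})
# toy rule: churn happens for monthly contracts with short tenure
y = ((df["contract"] == "monthly") & (df["tenure"] < 24)).astype(int)

# transformation of categorical attributes + scaling, then a classifier
pre = ColumnTransformer([
    ("cat", OneHotEncoder(), ["contract"]),
    ("num", StandardScaler(), ["usage", "tenure"]),
])
model = Pipeline([("prep", pre), ("clf", LogisticRegression())])

# train/test split, fitting, and a simple performance measure
X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(round(acc, 2))
```

Bundling the preprocessing into the pipeline ensures that exactly the same transformations are applied to training and test data, which is one of the points of the exercise.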

Goals of this lecture:

  • Understand all steps required to implement and solve typical machine-learning tasks
  • Know how to define and design machine-learning projects

2. Introduction to Tensorflow

This lecture will introduce the Tensorflow framework for Deep Learning. The students will build a simple neural network to classify the MNIST dataset.

Goals of this lecture:

  • Understand the basics of Tensorflow Graphs, lazy execution and why this is important for performance
  • Build a basic classifier for the MNIST Dataset using Tensorflow
  • Train the classifier using mini-batches
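A minimal sketch of such a classifier, using the Keras API of TensorFlow 2 (where eager execution is the default and tf.function traces a graph for performance) and random data standing in for MNIST:

```python
import numpy as np
import tensorflow as tf

# synthetic stand-in for MNIST: 28x28 "images" with 10 class labels
rng = np.random.default_rng(0)
x = rng.random((256, 28, 28)).astype("float32")
y = rng.integers(0, 10, 256)

# a minimal dense classifier for 10 classes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# mini-batch training: each gradient step sees batch_size examples
model.fit(x, y, batch_size=32, epochs=1, verbose=0)
probs = model.predict(x[:5], verbose=0)
print(probs.shape)
```

With real MNIST data the only changes are loading the images (e.g. via tf.keras.datasets.mnist) and scaling the pixel values to [0, 1].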

3. MNIST GAN (Tensorflow)

In this lecture the students will build a generative adversarial neural network (GAN) to generate images similar to the MNIST dataset.

Goals of this lecture:

  • Understand the basics of GANs
  • Understand more advanced Tensorflow features (Variable Scopes, Variable reuse, Tensorboard)
  • Build and train a simple GAN network to generate MNIST-like images
  • Use the Deeplearn machines @ HdM
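A GAN reduces to two networks trained against each other. The following sketch shows one adversarial training step written in TensorFlow 2 style (tf.GradientTape instead of the variable scopes and reuse of TensorFlow 1 mentioned above); the network sizes and the random "images" are placeholders for the example:

```python
import tensorflow as tf

latent_dim = 8

# generator: latent vector -> flat 28*28 "image"
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
])
# discriminator: flat image -> probability of being real
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28 * 28,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

def train_step(real_images):
    noise = tf.random.normal((tf.shape(real_images)[0], latent_dim))
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise)
        real_out = discriminator(real_images)
        fake_out = discriminator(fake_images)
        # discriminator: push real -> 1 and fake -> 0; generator: fool the discriminator
        d_loss = bce(tf.ones_like(real_out), real_out) + bce(tf.zeros_like(fake_out), fake_out)
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return float(g_loss), float(d_loss)

real = tf.random.uniform((16, 28 * 28))  # stand-in for a batch of MNIST images
g_loss, d_loss = train_step(real)
```

Note that each optimizer only updates its own network's variables; updating both networks from one loss is a classic GAN bug.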

4. Deep Reinforcement Learning (Tensorflow)

In this lecture the students will build various reinforcement learning agents. Starting from simple gridworld environments, the goal is to implement and understand the core concepts of “classical” reinforcement learning. In the second part the students will build a Deep Q-Network agent (DQN) that is able to play Atari video games like Pong or Breakout.

Goals of this lecture:

  • Understand/recap the core concepts of reinforcement learning (agent-environment loop, observations, policies, exploration, etc.)
  • Solve a gridworld environment with dynamic programming
  • Implement a gridworld agent with tabular Q-learning
  • Implement a video game playing agent with Deep Q-Networks (DQN)
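Tabular Q-learning, the core of the first part, can be sketched on a minimal one-dimensional gridworld; the environment and hyperparameters here are invented for the example:

```python
import numpy as np

# a tiny 1-D gridworld: states 0..4, reward 1 for reaching the terminal state 4
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
goal = 4
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == goal else 0.0
    return s2, reward, s2 == goal

for _ in range(500):                # episodes
    s = 0
    for _ in range(100):            # cap episode length
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # tabular Q-learning update (bootstrap is zero at the terminal state)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)
print(policy)   # the greedy policy should move right in every non-terminal state
```

The same update rule carries over unchanged to larger gridworlds; DQN then replaces the table Q with a neural network.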

5. Word Embeddings and Text Classification (Keras)

This lecture comprises:

  • Access and preprocess text data
  • Distributional semantics and word-embeddings
  • Learn word-embeddings from Wikipedia dump
  • Apply word-embeddings for text classification with deep neural networks (CNNs)

Goals of this lecture:

  • Understand the concept and potential of word-embeddings
  • Know how to build a deep neural network for text-classification
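A minimal sketch of such a network in Keras, with randomly generated token sequences standing in for the NewsFeed data; in the exercise, the embedding layer could instead be initialized with word embeddings learned from the Wikipedia dump:

```python
import numpy as np
import tensorflow as tf

vocab_size, seq_len = 100, 20

# synthetic stand-in for tokenized documents: integer word indices + binary labels
rng = np.random.default_rng(0)
x = rng.integers(1, vocab_size, (200, seq_len))
y = rng.integers(0, 2, 200)

# embedding layer followed by a 1-D CNN for text classification
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

preds = model.predict(x[:3], verbose=0)
print(preds.shape)
```

The Conv1D layer slides over word positions, so each filter learns to detect a short n-gram-like pattern in the embedded text.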

6. Time-Series

This lecture will introduce how recurrent neural networks (RNNs) can learn from temporal sequences of data to predict future values. To this end, the students will build and train an LSTM network that composes new music given an initial seed of music. The LSTM will be trained on classical music.

Goals of this lecture:

  • Understand the basics of RNNs and LSTM networks
  • Understand how RNNs are able to predict the future (and why uncertainty increases over time)
  • Build a model that can compose new music
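The predict-and-feed-back loop behind such composition can be sketched with a toy sequence, a sine wave standing in for note sequences; the window length and network size are invented for the example:

```python
import numpy as np
import tensorflow as tf

# toy sequence task: predict the next value of a sine wave
t = np.linspace(0, 8 * np.pi, 800)
series = np.sin(t).astype("float32")

window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# "compose" by feeding each prediction back in as the newest element of the seed
seed = X[:1]
generated = []
for _ in range(5):
    nxt = model.predict(seed, verbose=0)                          # shape (1, 1)
    generated.append(float(nxt[0, 0]))
    seed = np.concatenate([seed[:, 1:, :], nxt[:, None, :]], axis=1)
```

Because each generated value is built on earlier predictions, errors compound, which is why uncertainty grows the further the model predicts into the future.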