This course may be taken individually or as part of the Professional Certificate Program in Machine Learning & Artificial Intelligence.
The use of machine learning models in industry continues to grow. The goal of this bootcamp is to teach participants how to use deep learning (DL) tools to process data in different modalities, ranging from text and images to graphs. In this course, you will learn how to formulate your problems in machine learning terms and how to effectively use existing deep learning packages to solve them. We will discuss issues that affect classification performance, anticipate likely hurdles, and explore possible remedies for improving model accuracy. This is a hands-on course in which lectures are supplemented by guided practical tutorials and in-class programming labs where participants learn how to implement, train, and improve supervised models using the PyTorch package.
Enrollment for this course is limited to 30 participants to allow for more personalized instruction.
- Understand broad opportunities for automation with machine learning
- Explore modern natural language processing and computer vision tools, formulations, and problems
- Understand how to formulate new problems into machine learning terms
- Learn alternative approaches to feature representation and modeling for graphs, text, and images
- Outline key aspects of practical problems that are likely to impact performance
- Gain hands-on experience implementing, debugging, and tuning neural models in PyTorch
- Discuss scaling issues (amount of data, dimensionality, storage, and computation)
- Identify limitations of existing tools
- Understand current machine learning trends and opportunities that they bring
Who Should Attend
This course is designed for people with basic analytic skills and familiarity with supervised learning. The course assumes an undergraduate degree in computer science or another technical area such as statistics, physics, or electrical engineering, with exposure to vectors and matrices and basic concepts of probability. A high-level understanding of programming (thinking in terms of programs) is also beneficial.
For professionals who work hands-on with data, the course aims to provide a deeper understanding of, and sharper intuitions about, translating their problems into deep learning terms, writing programs to implement those formulations, and choosing which methods to consider in which contexts.
Laptops with Google Chrome are required. Tablets will not be sufficient for the computing activities performed in this course.
Class runs from 9:00am until 5:00pm each day with one-hour lunch and two 15-minute breaks.
[9:00-10:30 am] Introduction to Machine Learning (Lecture)
[10:30am-noon] Training and Improving Classifiers with Sklearn (Practicum)
[1:00pm-2:30pm] Introduction to Neural Networks (Lecture)
[2:30pm-5:00pm] Building Neural Networks with PyTorch (Practicum)
The sklearn lab will include a tutorial on sentiment analysis and MNIST (via a Google Colab notebook), with emphasis on how to improve performance, followed by time for students to try their own classifiers on a separate sentiment analysis task.
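To give a flavor of the sklearn lab, here is a minimal text-classification pipeline of the kind participants will build. The four-example dataset and the particular model choices are illustrative assumptions, not the lab's actual materials:

```python
# Minimal sklearn text-classification pipeline: TF-IDF features + logistic regression.
# The tiny toy dataset below is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "terrible plot, boring",
         "wonderful acting", "awful and dull"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["loved the acting"]))
```

In the lab, the emphasis is on what to change when such a pipeline underperforms (features, regularization, model choice), not just on getting it to run.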
The PyTorch lab will have a tutorial on PyTorch and on building feed-forward networks for the same tasks as in the sklearn lab (with emphasis on how to improve performance), followed by time for students to build their own network for the separate sentiment analysis task.
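A sketch of the kind of feed-forward network built in the PyTorch lab follows. The 784-input/10-class shapes match MNIST; the random batch is a stand-in for real data, and the layer sizes are illustrative:

```python
# A small feed-forward network and training loop in PyTorch.
# Shapes (784 inputs, 10 classes) match MNIST; the data here is random.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),   # hidden layer -> 10 class logits
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 784)         # a fake batch of 32 flattened "images"
y = torch.randint(0, 10, (32,))  # fake labels

for _ in range(5):               # a few gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item())
```

The loop above is the template the lab iterates on: swap the architecture, loss, or optimizer to study their effect on performance.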
[9:00-10:30 am] Advanced Neural Networks for Sequential Data and Transfer Learning (Lecture)
[10:30am-noon] Building Advanced Networks for Text in PyTorch and Using PreTrained Models (Practicum)
[1:00pm-2:30pm] Advanced Neural Networks for Images and Graphs (Lecture)
[2:30pm-4:00pm] Building Advanced Networks for Images in PyTorch and Using PreTrained Models (Practicum)
[4:00pm-5:00pm] Discussion of Participants' Problems
The first lab will focus on sentiment analysis, walking participants through how to build an RNN/Transformer and how to extend a pretrained model, followed by time to tune their own model on a separate sentiment analysis task.
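The recurrent text classifier built in this lab looks roughly like the sketch below. The vocabulary size, layer dimensions, and random token ids are illustrative assumptions:

```python
# Sketch of a recurrent (GRU-based) text classifier for sentiment analysis.
# Vocabulary size, dimensions, and token ids are made up for illustration.
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)    # (batch, seq_len, embed_dim)
        _, hidden = self.rnn(embedded)      # hidden: (1, batch, hidden_dim)
        return self.out(hidden.squeeze(0))  # (batch, num_classes) logits

model = RNNClassifier()
batch = torch.randint(0, 1000, (4, 12))  # 4 fake sentences of 12 token ids each
logits = model(batch)
print(logits.shape)
```

Extending a pretrained model follows the same pattern: reuse a trained encoder in place of the embedding/GRU and train only a new output head.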
The second lab will walk participants through how to build a CNN to solve MNIST (or a similarly simple vision task) and how to leverage an ImageNet-pretrained CNN for a new task (along with tricks like data augmentation), followed by time to tune their own model on a separate image classification task. This may be a subset of MiniPlaces or CIFAR-10.
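A minimal sketch of both ideas in this lab, a small CNN for MNIST-sized images and the freeze-the-backbone pattern behind transfer learning, under the assumption of 28x28 grayscale inputs and illustrative layer sizes:

```python
# A small CNN for MNIST-sized (1x28x28) images, plus the usual transfer-learning
# pattern of freezing earlier layers and retraining only the final classifier.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 16x14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 32x7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # -> 10 class logits
)

# Transfer-learning pattern: freeze everything but the last layer, then
# retrain that layer for the new task (with a pretrained backbone in practice).
for param in cnn[:-1].parameters():
    param.requires_grad = False

x = torch.randn(8, 1, 28, 28)  # a fake batch of 8 grayscale images
print(cnn(x).shape)
```

With an ImageNet-pretrained backbone, the same pattern applies: swap the final layer for one matching the new task's classes and fine-tune from there.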