Machine learning is a rapidly expanding field with a diverse collection of tools and approaches. Successfully applying such methods to real tasks may seem to require expertise that many do not possess. However, all these methods share the same basic concepts and use the same building blocks.
Understanding these basics, their formulations, and when they are appropriate is key to using machine learning techniques successfully in practice. This foundational course covers the essential concepts and methods in machine learning, providing participants with the entry-level expertise they need to get started and quickly move ahead.
This course was previously titled Machine Learning for Big Data and Text Analysis.
EARN A PROFESSIONAL CERTIFICATE IN MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE
Machine Learning for Big Data and Text Processing: Foundations may be taken individually or as a core course for the Professional Certificate Program in Machine Learning and Artificial Intelligence.
It is highly recommended that you apply for a course at least 6-8 weeks before the start date to guarantee that space will be available. After that date you may be placed on a waitlist. Courses with insufficient enrollment may be cancelled up to 4 weeks before the start date. If you are able to access the online application form, then registration for that particular course is still open.
- Understand the basic machine learning concepts and methods including neural networks
- Learn how to formulate/set up problems as machine learning tasks
- Assess which types of methods are likely to be useful for a given class of problems
- Understand the strengths and weaknesses of learning algorithms
Who Should Attend:
This course is appropriate for anyone seeking a better understanding of machine learning basics. It is most suitable for those with an undergraduate degree in computer science or another related technical area. A high-level understanding of programming (thinking in terms of programs) is helpful.
This foundational course covers key concepts, formulations, algorithms, and practical knowledge for people who are getting started or need to brush up on machine learning, and provides participants with the core knowledge to succeed in the advanced-level course.
Laptops are required for this course. Tablets will not be sufficient for the computing activities performed in this course.
Day 1
9:00am: Introduction to ML (Barzilay)
10:00am: Formulation of ML problems (Barzilay)
11:00am: Coffee break
11:15am: Linear classification/regression (Barzilay)
12:15pm: Lunch (provided)
1:30pm: Non-linear classification (Jaakkola)
2:15pm: Feedforward neural networks (Jaakkola)
3:15pm: Coffee break
3:30pm: Feedforward neural networks (Jaakkola)
4:00pm: Tutorial on ML packages
5:00pm: Adjourn
Day 2
9:00am: Unsupervised learning, clustering (Barzilay)
10:00am: Collaborative filtering (Barzilay)
11:00am: Coffee break
11:15am: Convolutional networks (images, text)
12:15pm: Lunch break (on your own)
1:30pm: Recurrent neural networks (Jaakkola)
2:30pm: Coffee break
2:45pm: Reinforcement learning (Jaakkola)
4:00pm: Tutorial on DNN packages
Class runs 9:00 am to 5:00 pm each day.
Regina Barzilay is a professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Her research interests are in natural language processing. She is a recipient of various awards, including the NSF CAREER Award, the MIT Technology Review TR-35 Award, a Microsoft Faculty Fellowship, and several Best Paper Awards at NAACL and ACL. She received her Ph.D. in Computer Science from Columbia University and spent a year as a postdoc at Cornell University.
Tommi Jaakkola is a professor of Electrical Engineering and Computer Science and also a member of the Computer Science and Artificial Intelligence Laboratory. He received an M.Sc. in theoretical physics and a Ph.D. in computational neuroscience from MIT. His work pertains to inferential, algorithmic, and estimation questions in machine learning, including large-scale probabilistic distributed inference, deep learning, and causal inference. The applied side of his work has focused on problems in natural language processing such as parsing, regulatory models in computational biology, computational chemistry, and recommender systems. He received a Sloan Research Fellowship in 2002 and many awards for his publications across areas.
This course takes place on the MIT campus in Cambridge, Massachusetts. We can also offer this course for groups of employees at your location. Please complete the Custom Programs request form for further details.