[hcs-d] Fwd: Deep learning tutorial

Juan Perdomo jperdomo at college.harvard.edu
Mon Sep 11 17:38:17 EDT 2017

Hey everyone,

See Prof. Singer's message about a workshop given by Alex Smola on deep
learning that's going on tomorrow. Should be pretty interesting.

---------- Forwarded message ----------
From: Yaron Singer <yaron at seas.harvard.edu>
Date: Mon, Sep 11, 2017 at 5:05 PM
Subject: Fwd: Deep learning tutorial
To: Juan Perdomo <jperdomo at college.harvard.edu>

---------- Forwarded message ---------
From: Yaron Singer <yaron at seas.harvard.edu>
Date: Sun, Sep 10, 2017 at 3:20 PM
Subject: Deep learning tutorial
To: econcs-general at eecs.harvard.edu, Ml-all at seas.harvard.edu,
theory-group at eecs.harvard.edu
Cc: David Parkes <parkes at eecs.harvard.edu>, Smola, Alexander
<smola at amazon.com>, Alexander Rush <srush at seas.harvard.edu>, Finale
Doshi-Velez <finale at seas.harvard.edu>, Doyle, Kevin Patrick
<kevin_doyle at harvard.edu>, Margo Seltzer <margo at eecs.harvard.edu>

Hi all,

Courtesy of the Harvard Data Science Initiative, this Tuesday we'll have
Alex Smola from Amazon and CMU at Harvard to give a half-day tutorial on
deep learning.

This is a great opportunity to get up to speed on deep learning as well as
see some of the cutting-edge research Alex and his team are working on.

Coffee and refreshments will be provided to help you reenergize!

*Date:* Tuesday, September 12
*Time:* 12:30 -- 4:30
*Location:* Sever 213

Tutorial on Deep Learning with Apache MXNet Gluon


This tutorial introduces Gluon, a flexible new interface that pairs MXNet’s
speed with a user-friendly frontend. Symbolic frameworks like Theano
and TensorFlow offer speed and memory efficiency but are harder to program.
Imperative frameworks like Chainer and PyTorch are easy to debug but seldom
match symbolic code for speed. Gluon reconciles the two, removing this
pain point: it keeps an imperative programming style while relying on
just-in-time compilation and an efficient runtime engine for speed.
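To make the symbolic-vs.-imperative distinction concrete, here is a toy, framework-free sketch (plain Python, not MXNet code): the imperative version computes eagerly step by step, while the symbolic version first builds an expression graph and evaluates it later, which is what lets a symbolic runtime optimize the whole computation before running it.

```python
def imperative_square_sum(xs):
    # Imperative style: every step runs eagerly, so it is easy to debug.
    total = 0
    for x in xs:
        total += x * x
    return total

class Node:
    """A node in a tiny symbolic expression graph."""
    def __init__(self, op, *children):
        self.op, self.children = op, children

    def evaluate(self, env):
        # Nothing is computed until evaluate() walks the finished graph.
        if self.op == "var":
            return env[self.children[0]]
        if self.op == "mul":
            return self.children[0].evaluate(env) * self.children[1].evaluate(env)
        if self.op == "add":
            return self.children[0].evaluate(env) + self.children[1].evaluate(env)
        raise ValueError(self.op)

def symbolic_square_sum(names):
    # Symbolic style: build the graph first; no arithmetic happens here.
    graph = None
    for name in names:
        v = Node("var", name)
        sq = Node("mul", v, v)
        graph = sq if graph is None else Node("add", graph, sq)
    return graph

# Both styles compute 1^2 + 2^2 + 3^2 = 14.
print(imperative_square_sum([1, 2, 3]))  # 14
graph = symbolic_square_sum(["a", "b", "c"])
print(graph.evaluate({"a": 1, "b": 2, "c": 3}))  # 14
```

A hybrid system like Gluon lets you write the first style and then trade it for the second once the model is debugged.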

In this crash course, we’ll cover deep learning basics, the fundamentals of
Gluon, advanced models, and multiple-GPU deployments. We will walk
you through MXNet’s NDArray data structure and automatic differentiation
tools. We'll show you how to define neural networks both at the atomic level
and through Gluon's predefined layers. We'll demonstrate how to serialize
models and build dynamic graphs. Finally, we will show you how to hybridize
your networks, simultaneously enjoying the benefits of imperative and
symbolic deep learning.

Yaron Singer
Assistant Professor of Computer Science
John A. Paulson School of Engineering and Applied Sciences
Harvard University