APPLY TO OUR ARTIFICIAL INTELLIGENCE BOOTCAMP
Transform your career into that of an Artificial Intelligence or Machine Learning Engineer. The average starting salary for graduates is $115,000.
The bootcamp is organized into two tracks, “Theory” and “Practical” – recognizing that the AI practitioner needs a survey of concepts combined with strong application of practical techniques through labs. The foundational material and tools of the Data Science practitioner are presented first. Topics proceed quickly into exploratory data analysis and machine learning, where data is organized, characterized, and manipulated. From week three, the candidate moves from engineered models into five weeks of Deep Learning – models that learn their own representations from data.
Who Should Apply
Candidates should take this course if they have a STEM background and experience coding in Python. Strong applicants will have a Master’s or PhD degree in STEM. Candidates without a programming background will need to take our prerequisite Python course.
NYC Location: 137 West 25th Street, 11th Floor, New York, NY
SF Location: 690 Texas Street, San Francisco, CA 94107
$15,000 | Scholarships available for highly qualified candidates.
Weeks 1-8: Lectures, Labs, Guest Speakers
Pass Experfy Harvard Launch lab Certification
Weeks 9-12: Onsite or Offsite Paid Projects
Job Placement or join our AI Startup Incubator
Peter Morgan is a published author and computer science industry veteran. Before entering industry, he solved high energy physics problems while enrolled in the PhD program in physics at the University of Massachusetts at Amherst. Peter then spent several years in industry as a Solutions Architect, and three years as a Research Associate on an experiment led by Stanford University to measure the mass of the neutrino. He is currently Chief AI Officer at Ivy Data Science, where he oversees AI technology strategy and platform & training development.
Week 1 – Survey of Deep Learning
An overview of Deep Learning’s novel learning and modeling characteristics: learning features and models directly from the data through unsupervised and autoencoding methods. The recent general availability of high-level computational frameworks and hardware systems is enabling wider applicability and domain transfer of Deep Learning systems.
(1) SciPy, NumPy, Pandas, Scikit-learn (2) Cloud Computing, Databases (3) Statistics – moments, Gaussian (4) Algorithms, Data Structures
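As a taste of the Week 1 statistics-with-NumPy material, here is a minimal sketch of estimating the first two moments of a Gaussian from samples; the particular mean, scale, and sample size are illustrative choices, not part of the curriculum:

```python
import numpy as np

# Draw samples from a Gaussian and estimate its first two moments.
rng = np.random.default_rng(seed=0)
samples = rng.normal(loc=2.0, scale=3.0, size=100_000)

mean = samples.mean()   # first moment, should land near 2.0
std = samples.std()     # square root of the second central moment, near 3.0

print(f"mean = {mean:.2f}, std = {std:.2f}")
```

With 100,000 samples the estimates land within a few hundredths of the true parameters, a quick illustration of the law of large numbers covered in the statistics unit.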
Week 2 – Machine Learning Foundations
The goal of this week is to cover fundamental Machine Learning algorithms with special emphasis on Statistical Pattern Recognition.
(1) Supervised Learning – Feature Engineering, Regression, Decision Trees, Random Forest (2) Naïve Bayesian Classifier, SVM (3) PCA/ICA, Unsupervised Learning, Clustering, K-Means (4) Cross-validation, Ensemble Methods, Kernel Methods (5) Temporal Difference, Q-learning, Multi-agent systems
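The supervised-learning and cross-validation topics above fit in a few lines of scikit-learn; this sketch trains a random forest and scores it with 5-fold cross-validation. The Iris dataset and hyperparameters are illustrative choices, not part of the syllabus:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# A random forest classifier evaluated with 5-fold cross-validation
# on the classic Iris dataset.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)

print(f"mean accuracy: {scores.mean():.3f}")
```

Cross-validation reports one accuracy score per fold, giving a more honest estimate of generalization than a single train/test split.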
Week 3 – Computer Vision leads to Deep Learning Revolution
Several core innovations have contributed to the rise of Deep Learning from conventional Machine Learning. The first big gains came from Computer Vision competitions. This week, students will choose whether they want to be a cardiologist or a radiologist! For the cardiologists, the “core” track, an analysis of cardiac ECG signals will be performed to detect arrhythmias and predict impending cardiac events. For the radiologists, the “advanced” track, CT scans will be analyzed to discover nodules in lung tissue that may develop into cancer.
(1) Computer Vision, Convolutional Neural Networks (2) GPU computing, MXnet, Kaggle (3) Image classification, Object detection and recognition (4) Facial Recognition, LFW, Autonomous vehicles, SLAM
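At the heart of the convolutional networks surveyed this week is a single operation: sliding a small kernel across an image. This pure-NumPy sketch (not any framework’s API) applies a vertical-edge detector to a toy image whose left half is dark and right half is bright:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# Sobel-style vertical-edge kernel.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(image, sobel_x)
print(edges)   # strong responses only where the dark/bright boundary lies
```

A CNN learns many such kernels from data rather than hand-designing them, which is exactly the shift from engineered features to learned features discussed in Week 1.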
Week 4 – Super-human Capabilities
The theory track now transitions fully into the world of Deep Learning; topics come at a faster pace, with comprehensive coverage of both historical and contemporary material considered mandatory for the practitioner. In the practical lab work, participants refine existing models for ECG and CT data. Additional instruments are also added in the lab for visualizing and monitoring their learning environments. The AIs are given the power to augment their respective signal types and to generate anomaly alerts via instant messaging.
(1) Second order methods, Virtual Environments (2) Visualizing CNNs, TensorBoard (3) ImageNet Example, VGGNet example (4) Augmented Cognition, Partitioned Hessian methods
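The second-order and Hessian methods listed above exploit curvature information. As a minimal sketch (our own toy objective, not course code), Newton’s method uses the Hessian to reach the minimum of a quadratic in a single step, where gradient descent would need many:

```python
import numpy as np

# Objective: f(x) = 0.5 * x^T A x - b^T x, minimized where A x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite
b = np.array([1.0, 1.0])

x = np.zeros(2)                     # starting point
grad = A @ x - b                    # gradient of f at x
hessian = A                         # Hessian of a quadratic is constant

x = x - np.linalg.solve(hessian, grad)   # one Newton step

print(x, "residual:", A @ x - b)    # residual is zero: minimum reached
```

For non-quadratic deep-learning losses the Hessian is too large to form explicitly, which motivates the partitioned and approximate Hessian techniques covered in lecture.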
Week 5 – Upping the Complexity: Recurrent Neural Networks
While the theoretical track moves through a survey of RNNs, the laboratory track moves the MVP into the domain of online training and refinement of models. Core track participants will apply online training to update their ECG models. Advanced track participants will write code to extend their lung-nodule detection model and tune it for a particular patient.
(1) Recurrent Neural Networks overview, Natural Language Processing, LSTM, Time series data (2) Streaming data, Stream processing Deep Learning analysis (3) Text, Case studies, Penn Tree Bank Data Set (4) Sentiment analysis, Enron email dataset, Twitter data set
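The defining idea behind the recurrent networks surveyed this week is a hidden state carried across time steps: h_t = tanh(W_x x_t + W_h h_{t-1} + b). This pure-NumPy forward pass (random illustrative weights, no framework) shows the recurrence on a three-step sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 2)) * 0.5   # input-to-hidden weights
W_h = rng.normal(size=(4, 4)) * 0.5   # hidden-to-hidden (recurrent) weights
b = np.zeros(4)

sequence = [np.array([1.0, 0.0]),
            np.array([0.0, 1.0]),
            np.array([1.0, 1.0])]

h = np.zeros(4)                       # initial hidden state
for x_t in sequence:
    # Each step mixes the new input with the memory of all prior steps.
    h = np.tanh(W_x @ x_t + W_h @ h + b)

print(h)   # final hidden state summarizes the whole sequence
```

LSTMs, covered in lecture, replace this plain tanh recurrence with gated updates so that information can persist over much longer time series.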
Week 6 – Bringing RNN & CNN together
Headline-grabbing applications of Deep Learning are often models with highly recursive structures, in which neural activations feed back into sub-models or diverge into hybrid RNN/CNN pipelines. The theoretical track will comprehensively review case studies that exemplify these approaches across many different applications and industries. The practical track will continue work on the existing MVP, dealing with the issues that arise when modifying the models’ input parameters.
(1) Market data, Case study, Anomaly detection (2) Hybrid CNN/RNN – Image captioning, Images from words, Video processing (3) Sound, Speech, Music, TIMIT data set (4) IoT Overview, Hardware, Sensor data
Week 7 – Commercialization & Survey of Reinforcement Learning
While the theoretical track covers various topics on both reinforcement learning and new deep learning developments, the practical track focuses on tasks associated with the commercialization of the MVP. Code refactoring, unit testing, cloud orchestration, and distributed processing are some of the practical modifications that will help to illuminate the path from testing to production.
(1) Deep Q Networks, Gaussian Processes, Deepmind (2) Reinforcement Learning, OpenAI Playground Lab (3) One-shot Learning, Bayesian Inference (4) Neural Turing Machine, GAN (5) MXNet, Robotics
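Deep Q Networks scale up the tabular Q-learning introduced in Week 2. As a minimal sketch of the underlying temporal-difference update (a toy corridor environment of our own invention, not course material), an agent learns to walk right toward a goal:

```python
import random

# States 0..4 on a line; actions move left (-1) or right (+1).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

random.seed(0)
for _ in range(500):                           # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)  # walls clamp movement
        r = 1.0 if s2 == GOAL else 0.0
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, -1)], Q[(s2, +1)])
        # Temporal-difference (Q-learning) update.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

policy = [max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)   # the learned greedy policy moves right in every state
```

A DQN replaces the Q table with a neural network so the same update rule can handle high-dimensional inputs such as raw game frames.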
Week 8 – Survey of the Latest Industry Developments
Theory covers the latest developments in deep learning and neural networks. Participants sketch out plans for their capstone projects: they choose Deep Learning applications of interest, research available information, and determine features and market fit. Small POCs are worked through to determine the viability of each approach, and short slide decks proposing an MVP are completed and presented to the class.
(1) Connection with physical law, WaveNet (2) Neuromorphic computing, Biocomputing (3) AGI, Frameworks, Human Brain Project (4) Probabilistic programming, Automatic statistician, Turing (language), Visualization Tools