Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. But which features matter if you want to buy a new GPU? GPU RAM, cores, tensor cores, caches? How do you make a cost-efficient choice? This blog post delves into these questions, tackles common misconceptions, gives you an intuitive understanding of how to think about GPUs, and offers advice that will help you make the choice that is right for you.
[Read more…] about Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning
LLM.int8() and Emergent Features
When I attended NAACL, I wanted to do a little test. I had two pitches for my LLM.int8() paper. One pitch is about how I use advanced quantization methods to achieve transformer inference at scale with no performance degradation, which makes large models more accessible. The other pitch is about emergent outliers in transformers and how they radically change what transformers learn and how they function.
From that, I learned that quantization research is like printers. Nobody cares about printers. Nobody likes printers. But everybody is happy if printers do their job.
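To give a rough feel for why these emergent outlier features matter for quantization, here is a small NumPy toy example of my own (made-up numbers, not the LLM.int8() implementation, which uses vector-wise quantization together with a mixed-precision decomposition): with plain row-wise absmax int8 quantization, a single large-magnitude column inflates the quantization error for everything else.

```python
# Toy illustration only: row-wise absmax int8 quantization of a weight matrix,
# showing how a large-magnitude "outlier" column blows up the error elsewhere.
# This is a simplification for intuition, not the LLM.int8() implementation.
import numpy as np

def absmax_quantize(W):
    """Quantize each row of W to int8, using the row's absolute maximum as scale."""
    scale = np.abs(W).max(axis=1, keepdims=True) / 127.0
    return np.round(W / scale).astype(np.int8), scale

def dequantize(W_int8, scale):
    return W_int8.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)
W[:, 2] *= 20.0  # one emergent "outlier" dimension with large magnitudes

W_int8, scale = absmax_quantize(W)
print("max error with outlier column:   ", np.abs(W - dequantize(W_int8, scale)).max())

# Keeping the outlier column in higher precision and quantizing only the rest
# shrinks the error on the remaining dimensions considerably.
regular = np.delete(W, 2, axis=1)
r_int8, r_scale = absmax_quantize(regular)
print("max error without outlier column:", np.abs(regular - dequantize(r_int8, r_scale)).max())
```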
[Read more…] about LLM.int8() and Emergent Features
How to Choose Your Grad School
If you are reading this, then you probably finished the long and arduous journey to grad school. You emerged victorious, and this success is well-deserved. But which school should you choose? How do you make the right choice if all schools look great in their own way? This blog post is centered around these questions. It is most useful if you are a computer science student aiming to study machine learning and, in particular, natural language processing in the US, but most of the information here is equally valid for any field of research and any country.
The choice of grad school that is right for you can be tricky and confusing. We live in a time of hyper-competitiveness, where even undergrads need to optimize for metrics like paper count to make it to the next level — grad school. This heavily career-centered perspective was probably advantageous to get you into grad school, and it remains crucial to get you to the level after that: a great job in industry or academia. So choosing the school which is best for your career can feel like an obvious choice. However, a PhD is a very long journey, and choosing your grad school based on this perspective alone might make you more vulnerable to burn-out, disillusionment, and general dissatisfaction.
In this blog post, I will discuss this career-centered perspective in detail, but I will also provide three other views that hopefully help you make a balanced choice, one that leads not only to academic success but also to long-term satisfaction and a full, rich life. Balancing your decision across all four perspectives probably leads to a better choice than looking at one angle alone. Before I go into the details, let me briefly introduce these four perspectives: the Career Perspective, the Identity Perspective, the Stability Perspective, and the Variability Perspective.
On Creativity in Academia
I recently had a discussion about creativity with a colleague. We were discussing music and how creative many bands and groups are. At the end of our conversation, my friend told me, half sarcastically and half seriously, how much more creative people in the music industry are than he is, and that he just cannot find good ideas in his area of research even though he has tried so hard for such a long time. I was a bit surprised because I thought of him as someone very creative. However, it is not uncommon to hear scientists lament their lack of creativity compared to academic superstars. I think the common view of creativity in academia is a bit distorted, and a straighter view can help you feel less bad about your own creativity.
Sparse Networks from Scratch: Faster Training without Losing Performance
This blog post is about my work with Luke Zettlemoyer, Sparse Networks from Scratch: Faster Training without Losing Performance, on fast training of neural networks that we keep sparse throughout training. We show that by developing an algorithm, sparse momentum, we can initialize a neural network with sparse random weights and train it to dense performance levels — all while doing just a single training run. Furthermore, if we use optimized sparse convolution algorithms, we can speed up training from 3.5x for VGG to 12x for Wide Residual Networks. This stands in stark contrast to computationally expensive methods that require repeated prune-and-retrain cycles, as used by the Lottery Ticket Hypothesis (Frankle and Carbin, 2019) and other work. Thus we show that training sparse networks to dense performance levels does not require "winning the initialization lottery" but can be done reliably from random weights if combined with a method that moves weights around the network in a smart way. We call the paradigm that maintains sparsity throughout training while reaching dense performance levels sparse learning. While this work shows that sparse learning is possible, future work holds the promise of training larger and deeper networks on more data while requiring the same or fewer computational resources than current dense networks.
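To make the core idea a bit more concrete, here is a minimal PyTorch-style sketch of a single prune-and-regrow step for one weight tensor. This is my own simplified illustration under assumptions: the function name and prune_rate parameter are made up, and the actual sparse momentum algorithm additionally redistributes parameters across layers according to their mean momentum magnitude.

```python
# Hedged sketch of the prune-and-regrow idea behind sparse momentum for a
# single weight tensor; not the paper's implementation.
import torch

def prune_and_regrow(weight, momentum, mask, prune_rate=0.2):
    """Drop the smallest-magnitude active weights, regrow where momentum is largest."""
    with torch.no_grad():
        w, m, msk = weight.view(-1), momentum.view(-1), mask.view(-1)
        n_prune = int(prune_rate * msk.sum().item())

        # Prune: deactivate the smallest-magnitude weights among the active ones.
        active_magnitude = torch.where(msk.bool(), w.abs(), torch.full_like(w, float("inf")))
        prune_idx = torch.topk(active_magnitude, n_prune, largest=False).indices
        msk[prune_idx] = 0.0

        # Regrow: activate inactive positions where the momentum magnitude is largest.
        inactive_momentum = torch.where(msk.bool(), torch.zeros_like(m), m.abs())
        grow_idx = torch.topk(inactive_momentum, n_prune, largest=True).indices
        msk[grow_idx] = 1.0
        w[grow_idx] = 0.0   # regrown weights start from zero

        weight.mul_(mask)   # keep the pruned positions at exactly zero
    return weight, mask
```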
[Read more…] about Sparse Networks from Scratch: Faster Training without Losing Performance
A Full Hardware Guide to Deep Learning
Deep Learning is very computationally intensive, so you will need a fast CPU with many cores, right? Or is it maybe wasteful to buy a fast CPU? One of the worst things you can do when building a deep learning system is to waste money on hardware that is unnecessary. Here I will guide you step by step through the hardware you will need for a cheap high-performance system.
[Read more…] about A Full Hardware Guide to Deep Learning
Machine Learning PhD Applications — Everything You Need to Know
I studied in depth how to be successful in my PhD applications, and it paid off: I got admitted to Stanford, University of Washington, UCL, CMU, and NYU. This blog post is a mish-mash of everything you need to proceed in your PhD applications from A to Z. It discusses what is important and what is not, and it covers application materials like the statement of purpose (SoP) and how to make sense of them.
[Read more…] about Machine Learning PhD Applications — Everything You Need to Know
TPUs vs GPUs for Transformers (BERT)
On the computational side, there has been some confusion about how TPUs and GPUs relate to BERT. BERT base was trained with 4 TPUs (16 TPU chips) in 4 days and BERT large with 16 TPUs (64 TPU chips) in 4 days. Does this mean only Google can train a BERT model? Does this mean that GPUs are dead? There are two fundamental things to understand here: (1) A TPU is a matrix multiplication engine — it does matrix multiplication and matrix operations, but not much else. It is fast at computing matrix multiplications, but one has to understand that (2) the slowest part of matrix multiplication is getting the elements from main memory and loading them into the processing unit. In other words, the most expensive part of matrix multiplication is memory loads. Note that matrix multiplication should account for about 90% of BERT's computational load. From these facts, we can do a small technical analysis of this topic.
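As a rough back-of-the-envelope illustration of that point, here is a sketch of my own with assumed, idealized hardware numbers (roughly a V100's fp16 tensor-core peak and HBM2 bandwidth; nothing here is from the original analysis) that compares the estimated compute time and memory-transfer time of a transformer-sized matrix multiplication:

```python
# Roofline-style estimate of whether a matrix multiplication C = A @ B is
# limited by memory bandwidth or by raw compute. Hardware numbers are assumed,
# idealized peaks, and each matrix is assumed to touch main memory exactly once.

def matmul_roofline(M, K, N, bytes_per_elem=2, peak_tflops=112.0, bandwidth_gbs=900.0):
    flops = 2.0 * M * K * N                                  # multiply-adds
    bytes_moved = bytes_per_elem * (M * K + K * N + M * N)   # read A, B; write C
    compute_s = flops / (peak_tflops * 1e12)
    memory_s = bytes_moved / (bandwidth_gbs * 1e9)
    bound = "memory-bound" if memory_s > compute_s else "compute-bound"
    return compute_s, memory_s, bound

# A BERT-large-like projection: 4096 tokens (batch x sequence), hidden size 1024.
print(matmul_roofline(M=4096, K=1024, N=1024))
# Shrinking the token dimension pushes the same multiplication toward the memory-bound regime.
print(matmul_roofline(M=128, K=1024, N=1024))
```

In this idealized model, large, well-shaped matrices keep the multiplication compute-bound, but as the matrices shrink the same operation drifts toward being limited by memory bandwidth, which is why memory loads rather than raw FLOPS often decide real-world performance.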
Deep Learning Hardware Limbo
With the release of the Titan V, we have now entered deep learning hardware limbo. It is unclear whether NVIDIA will be able to keep its spot as the main deep learning hardware vendor in 2018, and both AMD and Intel Nervana will have a shot at overtaking NVIDIA. So for consumers, I cannot recommend buying any hardware right now. The most prudent choice is to wait until the hardware limbo passes. This might take as little as 3 months or as long as 9 months. So why did we enter deep learning hardware limbo just now?
Credit Assignment in Deep Learning
This morning I got an email about my blog post discussing the history of deep learning, which rattled me back to a time of my academic career that I would rather not think about. It was a low point which nearly ended my Master's studies at the University of Lugano, and it made me feel so bad about blogging that it took me two long years to recover. So what happened?