<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>High Performance Computing Archives &mdash; Tim Dettmers</title>
	<atom:link href="https://timdettmers.com/tag/high-performance-computing/feed/" rel="self" type="application/rss+xml" />
	<link>https://timdettmers.com/tag/high-performance-computing/</link>
	<description>Making deep learning accessible.</description>
	<lastBuildDate>Wed, 10 Dec 2025 15:06:20 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.0.11</generator>

<image>
	<url>https://i0.wp.com/timdettmers.com/wp-content/uploads/2025/12/cropped-profile_2026_400x400.png?fit=32%2C32&#038;ssl=1</url>
	<title>High Performance Computing Archives &mdash; Tim Dettmers</title>
	<link>https://timdettmers.com/tag/high-performance-computing/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">106749684</site>	<item>
		<title>Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/</link>
					<comments>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/#comments</comments>
		
		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Mon, 30 Jan 2023 15:50:00 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[CPU]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[Matrix Multiplication]]></category>
		<category><![CDATA[Parallel Computing]]></category>
		<category><![CDATA[PCIe Lanes]]></category>
		<category><![CDATA[Sparse Training]]></category>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4</guid>

					<description><![CDATA[<p>Making the right choice when it comes to buying a GPU is critical. So how do you select the GPU that is right for you? This blog post will delve into that question and lend you advice that will help you make a choice that is right for you.</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/">Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-container-1 wp-block-group eplus-nHlelL"><div class="wp-block-group__inner-container">
<p class="eplus-huy78r">Deep learning is a field with intense computational requirements, and your choice of GPU will fundamentally determine your deep learning experience. But what features are important if you want to buy a new GPU? GPU RAM, cores, tensor cores, caches? How to make a cost-efficient choice? This blog post will delve into these questions, tackle common misconceptions, give you an intuitive understanding of how to think about GPUs, and will lend you advice, which will help you to make a choice that is right for you.</p>



<span id="more-6"></span>



<p class="eplus-sxlaEI">This blog post is designed to give you different levels of understanding of GPUs and the new Ampere series GPUs from NVIDIA. You have the choice: (1) If you are not interested in the details of how GPUs work, what makes a GPU fast compared to a CPU, and what is unique about the new NVIDIA RTX 40 Ampere series, you can skip right to the performance and performance per dollar charts and the recommendation section. The cost/performance numbers form the core of the blog post and the content surrounding it explains the details of what makes up GPU performance.</p>



<p class="eplus-1g9jAa">(2) If you worry about specific questions, I have answered and addressed the most common questions and misconceptions in the later part of the blog post.</p>



<p class="eplus-ErgVnq">(3) If you want to get an in-depth understanding of how GPUs, caches, and Tensor Cores work, the best is to read the blog post from start to finish. You might want to skip a section or two based on your understanding of the presented topics.</p>





<h2 class="eplus-dYbLao"><strong>Overview</strong></h2>



<p class="eplus-xVYlmp">This blog post is structured in the following way. First, I will explain what makes a GPU fast. I will discuss CPUs vs GPUs, Tensor Cores, memory bandwidth, and the memory hierarchy of GPUs and how these relate to deep learning performance. These explanations might help you get a more intuitive sense of what to look for in a GPU. I discuss the unique features of the new NVIDIA RTX 40 Ampere GPU series that are worth considering if you buy a GPU. From there, I make GPU recommendations for different scenarios. After that follows a Q&amp;A section of common questions posed to me in Twitter threads; in that section, I will also address common misconceptions and some miscellaneous issues, such as cloud vs desktop, cooling, AMD vs NVIDIA, and others.&nbsp;</p>



<h2 class="eplus-TzWEWa">How do GPUs work?</h2>



<p class="eplus-YDMSkp">If you use GPUs frequently, it is useful to understand how they work. This knowledge will help you to undstand cases where are GPUs fast or slow. In turn, you might be able to understand better why you need a GPU in the first place and how other future hardware options might be able to compete. You can skip this section if you just want the useful performance numbers and arguments to help you decide which GPU to buy. The best high-level explanation for the question of how GPUs work is my following Quora answer:</p>



<span class="quora-content-embed" data-name="Why-are-GPUs-well-suited-to-deep-learning/answer/Tim-Dettmers-1">Read <a class="quora-content-link" data-width="560" data-height="260" href="https://www.quora.com/Why-are-GPUs-well-suited-to-deep-learning/answer/Tim-Dettmers-1" data-type="answer" data-id="21379913" data-key="bbb3732f88834d75dfa98d816eb9eccd" load-full-answer="False" data-embed="jqubkoa"></a><a href="https://www.quora.com/Tim-Dettmers-1">Tim Dettmers</a>&#8216; <a href="/Why-are-GPUs-well-suited-to-deep-learning?top_ans=21379913">answer</a> to <a href="/Why-are-GPUs-well-suited-to-deep-learning" ref="canonical"><span class="rendered_qtext">Why are GPUs well-suited to deep learning?</span></a> on <a href="https://www.quora.com">Quora</a><script type="text/javascript" src="https://www.quora.com/widgets/content"></script></span>



<p class="eplus-bfPm9F">This is a high-level explanation that explains quite well why GPUs are better than CPUs for deep learning. If we look at the details, we can understand what makes one GPU better than another.</p>



<h2 class="eplus-9AkYN1">The Most Important GPU Specs for Deep Learning Processing Speed</h2>



<p class="eplus-XflNML">This section can help you build a more intuitive understanding of how to think about deep learning performance. This understanding will help you to evaluate future GPUs by yourself. This section is sorted by the importance of each component. Tensor Cores are most important, followed by memory bandwidth of a GPU, the cache hierachy, and only then FLOPS of a GPU.</p>



<h3 class="eplus-YyGRdI">Tensor Cores</h3>



<p class="eplus-JKvKVO">Tensor Cores are tiny cores that perform very efficient matrix multiplication. Since the most expensive part of any deep neural network is matrix multiplication Tensor Cores are very useful. In fast, they are so powerful, that I do not recommend any GPUs that do not have Tensor Cores.</p>



<p class="eplus-JKvKVO">It is helpful to understand how they work to appreciate the importance of these computational units specialized for matrix multiplication. Here I will show you a simple example of A*B=C matrix multiplication, where all matrices have a size of 32&#215;32, what a computational pattern looks like with and without Tensor Cores. This is a simplified example, and not the exact way how a high performing matrix multiplication kernel would be written, but it has all the basics. A CUDA programmer would take this as a first “draft” and then optimize it step-by-step with concepts like double buffering, register optimization, occupancy optimization, instruction-level parallelism, and many others, which I will not discuss at this point. </p>



<p class="eplus-JQnNxT">To understand this example fully, you have to understand the concepts of cycles. If a processor runs at 1GHz, it can do 10^9 cycles per second. Each cycle represents an opportunity for computation. However, most of the time, operations take longer than one cycle. Thus we essentially have a queue where the next operations needs to wait for the next operation to finish. This is also called the latency of the operation.</p>



<p class="eplus-Nh32jj">Here are some important latency cycle timings for operations. These times can change from GPU generation to GPU generation. <a href="https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s33322/">These numbers are for Ampere GPUs</a>, which have relatively slow caches.</p>



<ul class="eplus-yOHNRh"><li>Global memory access (up to 80GB): ~380 cycles</li><li>L2 cache: ~200 cycles</li><li>L1 cache or Shared memory access (up to 128 kb per Streaming Multiprocessor): ~34 cycles</li><li>Fused multiplication and addition, a*b+c (FFMA): 4 cycles</li><li>Tensor Core matrix multiply: 1 cycle</li></ul>



<p class="eplus-xmhx78">Each operation is always performed by a pack of 32 threads. This pack is termed a warp of threads. Warps usually operate in a synchronous pattern — threads within a warp have to wait for each other. All memory operations on the GPU are optimized for warps. For example, loading from global memory happens at a granularity of 32*4 bytes, exactly 32 floats, exactly one float for each thread in a warp. We can have up to 32 warps = 1024 threads in a streaming multiprocessor (SM), the GPU-equivalent of a CPU core. The resources of an SM are divided up among all active warps. This means that sometimes we want to run fewer warps to have more registers/shared memory/Tensor Core resources per warp.</p>



<p class="eplus-5GehcT">For both of the following examples, we assume we have the same computational resources. For this small example of a 32&#215;32 matrix multiply, we use 8 SMs (about 10% of an RTX 3090) and 8 warps per SM.</p>



<p class="eplus-G9pv5H">To understand how the cycle latencies play together with resources like threads per SM and shared memory per SM, we now look at examples of matrix multiplication. While the following example roughly follows the sequence of computational steps of matrix multiplication for both with and without Tensor Cores, please note that these are very simplified examples. Real cases of matrix multiplication involve much larger shared memory tiles and slightly different computational patterns.</p>



<h4 class="eplus-Lj4Hjc">Matrix multiplication without Tensor Cores</h4>



<p class="eplus-eBAn6w">If we want to do an A*B=C matrix multiply, where each matrix is of size 32&#215;32, then we want to load memory that we repeatedly access into shared memory because its latency is about five times lower (200 cycles vs 34 cycles). A memory block in shared memory is often referred to as a memory tile or just a tile. Loading two 32&#215;32 floats into a shared memory tile can happen in parallel by using 2*32 warps. We have 8 SMs with 8 warps each, so due to parallelization, we only need to do a single sequential load from global to shared memory, which takes 200 cycles.</p>



<p class="eplus-bSRgVT">To do the matrix multiplication, we now need to load a vector of 32 numbers from shared memory A and shared memory B and perform a fused multiply-and-accumulate (FFMA). Then store the outputs in registers C. We divide the work so that each SM does 8x dot products (32&#215;32) to compute 8 outputs of C. Why this is exactly 8 (4 in older algorithms) is very technical. I recommend Scott Gray’s blog post on <a href="https://github.com/NervanaSystems/maxas/wiki/SGEMM">matrix multiplication</a> to understand this. This means we have 8x shared memory accesses at the cost of 34 cycles each and 8 FFMA operations (32 in parallel), which cost 4 cycles each. In total, we thus have a cost of:</p>



<p class="eplus-UJKQje">200 cycles (global memory) + 8*34 cycles (shared memory) + 8*4 cycles (FFMA) = 504 cycles</p>



<p class="eplus-G6zbco">Let&#8217;s look at the cycle cost of using Tensor Cores.</p>



<h4 class="eplus-dS0Vu0">Matrix multiplication with Tensor Cores</h4>



<p class="eplus-8bWcQj">With Tensor Cores, we can perform a 4&#215;4 matrix multiplication in one cycle. To do that, we first need to get memory into the Tensor Core. Similarly to the above, we need to read from global memory (200 cycles) and store in shared memory. To do a 32&#215;32 matrix multiply, we need to do 8&#215;8=64 Tensor Cores operations. A single SM has 8 Tensor Cores. So with 8 SMs, we have 64 Tensor Cores — just the number that we need! We can transfer the data from shared memory to the Tensor Cores with 1 memory transfers (34 cycles) and then do those 64 parallel Tensor Core operations (1 cycle). This means the total cost for Tensor Cores matrix multiplication, in this case, is:</p>



<p class="eplus-ZzxYmo">200 cycles (global memory) + 34 cycles (shared memory) + 1 cycle (Tensor Core) = 235 cycles.</p>



<p class="eplus-KVQYN0">Thus we reduce the matrix multiplication cost significantly from 504 cycles to 235 cycles via Tensor Cores. In this simplified case, the Tensor Cores reduced the cost of both shared memory access and FFMA operations.&nbsp;</p>



<p class="eplus-KVQYN0">This example is simplified, for example, usually each thread needs to calculate which memory to read and write to as you transfer data from global memory to shared memory. With the new Hooper (H100) architectures we additionally have the Tensor Memory Accelerator (TMA) compute these indices in hardware and thus help each thread to focus on more computation rather than computing indices.</p>



<h4>Matrix multiplication with Tensor Cores and Asynchronous copies (RTX 30/RTX 40) and TMA (H100)</h4>



<p>The RTX 30 Ampere and RTX 40 Ada series GPUs additionally have support for asynchronous transfers between global and shared memory. The H100 Hopper GPU extends this further by introducing the Tensor Memory Accelerator (TMA) unit. The TMA unit combines asynchronous copies and index calculation for reads and writes simultaneously — so each thread no longer needs to calculate which element to read next, and each thread can focus on doing more matrix multiplication calculations. This looks as follows.</p>



<p>The TMA unit fetches memory from global to shared memory (200 cycles). Once the data arrives, the TMA unit fetches the next block of data asynchronously from global memory. While this is happening, the threads load data from shared memory and perform the matrix multiplication via the Tensor Cores. Once the threads are finished, they wait for the TMA unit to finish the next data transfer, and the sequence repeats. </p>



<p>As such, due to the asynchronous nature, the second global memory read by the TMA unit is already progressing as the threads process the current shared memory tile. This means the second read takes only 200 &#8211; 34 &#8211; 1 = 165 cycles. </p>



<p>Since we do many reads, only the first memory access will be slow and all other memory accesses will be partially overlapped with the TMA unit. Thus on average, we reduce the time by 35 cycles.</p>



<p>165 cycles (wait for async copy to finish) + 34 cycles (shared memory) + 1 cycle (Tensor Core) = 200 cycles.</p>



<p>This accelerates the matrix multiplication by another 15%.</p>
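<p>To make the arithmetic concrete, here is a minimal Python sketch that reproduces the three cycle estimates from the worked examples above. It is purely illustrative arithmetic using the simplified numbers from this section, not a model of the actual hardware.</p>

<pre class="wp-block-code"><code># Minimal sketch: reproduce the cycle estimates from the simplified 32x32 examples.
GLOBAL_LOAD   = 200  # load from global to shared memory (cycles)
SHARED_ACCESS = 34   # one shared memory access (cycles)
FFMA          = 4    # fused multiply-add (cycles)
TENSOR_CORE   = 1    # one Tensor Core matrix multiply (cycles)

# Without Tensor Cores: 8 shared memory accesses and 8 FFMA steps.
no_tc = GLOBAL_LOAD + 8 * SHARED_ACCESS + 8 * FFMA                 # 504 cycles

# With Tensor Cores: one shared memory transfer, one Tensor Core operation.
tc = GLOBAL_LOAD + SHARED_ACCESS + TENSOR_CORE                     # 235 cycles

# With async copies / TMA: only 200 - 34 - 1 = 165 cycles of the next load remain visible.
tc_async = (GLOBAL_LOAD - SHARED_ACCESS - TENSOR_CORE) + SHARED_ACCESS + TENSOR_CORE  # 200 cycles

print(no_tc, tc, tc_async)          # 504 235 200
print(round(no_tc / tc, 2))         # ~2.14x speedup from Tensor Cores
print(round(1 - tc_async / tc, 2))  # ~0.15, i.e. roughly 15% faster with overlap
</code></pre>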



<p class="eplus-2785xF">From these examples, it becomes clear why the next attribute, memory bandwidth, is so crucial for Tensor-Core-equipped GPUs. Since global memory is the by far the largest cycle cost for matrix multiplication with Tensor Cores, we would even have faster GPUs if the global memory latency could be reduced. We can do this by either increasing the clock frequency of the memory (more cycles per second, but also more heat and higher energy requirements) or by increasing the number of elements that can be transferred at any one time (bus width).</p>


<div class="crp_related   crp_related_block  mobile-only crp-rounded-thumbs"><ul><li><a href="https://timdettmers.com/2026/01/23/my-journey-towards-coding-agents-building-sera/"     class="crp_link post-1248"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="My Journey Towards Coding Agents: Building SERA" title="My Journey Towards Coding Agents: Building SERA" data-recalc-dims="1" /></figure><span class="crp_title">My Journey Towards Coding Agents: Building SERA</span></a></li><li><a href="https://timdettmers.com/2025/12/10/why-agi-will-not-happen/"     class="crp_link post-1233"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="Why AGI Will Not Happen" title="Why AGI Will Not Happen" data-recalc-dims="1" /></figure><span class="crp_title">Why AGI Will Not Happen</span></a></li><li><a href="https://timdettmers.com/2026/01/13/use-agents-or-be-left-behind/"     class="crp_link post-1238"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work" title="Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work" data-recalc-dims="1" /></figure><span class="crp_title">Use Agents or Be Left Behind? A Personal Guide to Automating&hellip;</span></a></li></ul><div class="crp_clear"></div></div>


<h3 class="eplus-7uVl7z">Memory Bandwidth</h3>



<p class="eplus-Sc9M0X">From the previous section, we have seen that Tensor Cores are very fast. So fast, in fact, that they are idle most of the time as they are waiting for memory to arrive from global memory. For example, during GPT-3-sized training, which uses huge matrices — the larger, the better for Tensor Cores — we have a Tensor Core TFLOPS utilization of about 45-65%, meaning that even for the large neural networks about 50% of the time, Tensor Cores are idle.</p>



<p class="eplus-RsGhaC">This means that when comparing two GPUs with Tensor Cores, one of the single best indicators for each GPU’s performance is their memory bandwidth. For example, The A100 GPU has 1,555 GB/s memory bandwidth vs the 900 GB/s of the V100. As such, a basic estimate of speedup of an A100 vs V100 is 1555/900 = 1.73x.</p>



<h3 class="eplus-Nd29of">L2 Cache / Shared Memory / L1 Cache / Registers</h3>



<p class="eplus-Qs2qZL">Since memory transfers to the Tensor Cores are the limiting factor in performance, we are looking for other GPU attributes that enable faster memory transfer to Tensor Cores. L2 cache, shared memory, L1 cache,  and amount of registers used are all related. To understand how a memory hierarchy enables faster memory transfers, it helps to understand how matrix multiplication is performed on a GPU.</p>



<p class="eplus-ecxtvX">To perform matrix multiplication, we exploit the memory hierarchy of a GPU that goes from slow global memory, to faster L2 memory, to fast local shared memory, to lightning-fast registers. However, the faster the memory, the smaller it is. </p>



<p>While logically, L2 and L1 memory are the same, the L2 cache is larger, and thus the average physical distance that needs to be traversed to retrieve a cache line is larger. You can see the L1 and L2 caches as organized warehouses where you want to retrieve an item. You know where the item is, but going there takes, on average, much longer for the larger warehouse. This is the essential difference between L1 and L2 caches. Large = slow, small = fast.</p>



<p class="eplus-ecxtvX">For matrix multiplication we can use this hierarchical separate into smaller and smaller and thus faster and faster chunks of memory to perform very fast matrix multiplications. For that, we need to chunk the big matrix multiplication into smaller sub-matrix multiplications. These chunks are called memory tiles, or often for short just tiles.</p>



<p class="eplus-ecxtvX">We perform matrix multiplication across these smaller tiles in local shared memory that is fast and close to the streaming multiprocessor (SM) — the equivalent of a CPU core. With Tensor Cores, we go a step further: We take each tile and load a part of these tiles into Tensor Cores which is directly addressed by registers. A matrix memory tile in L2 cache is 3-5x faster than global GPU memory (GPU RAM), shared memory is ~7-10x faster than the global GPU memory, whereas the Tensor Cores’ registers are ~200x faster than the global GPU memory.&nbsp;</p>



<p class="eplus-ifo0Kt">Having larger tiles means we can reuse more memory. I wrote about this in detail in my <a href="https://timdettmers.com/2018/10/17/tpus-vs-gpus-for-transformers-bert/">TPU vs GPU</a> blog post. In fact, you can see TPUs as having very, very, large tiles for each Tensor Core. As such, TPUs can reuse much more memory with each transfer from global memory, which makes them a little bit more efficient at matrix multiplications than GPUs.</p>



<p class="eplus-h6CECa">Each tile size is determined by how much memory we have per streaming multiprocessor (SM) and how much we L2 cache we have across all SMs. We have the following shared memory sizes on the following architectures:</p>



<ul class="eplus-CmF7pW"><li>Volta (Titan V): 128kb shared memory / 6 MB L2</li><li>Turing (RTX 20s series): 96 kb shared memory / 5.5 MB L2</li><li>Ampere (RTX 30s series): 128 kb shared memory / 6 MB L2</li><li>Ada (RTX 40s series): 128 kb shared memory / 72 MB L2</li></ul>



<p class="eplus-mGR5cO">We see that Ada has a much larger L2 cache allowing for larger tile sizes, which reduces global memory access. For example, for BERT large during training, the input and weight matrix of any matrix multiplication fit neatly into the L2 cache of Ada (but not other Us). As such, data needs to be loaded from global memory only once and then data is available throught the L2 cache, making matrix multiplication about 1.5 &#8211; 2.0x faster for this architecture for Ada. For larger models the speedups are lower during training but certain sweetspots exist which may make certain models much faster. Inference, with a batch size larger than 8 can also benefit immensely from the larger L2 caches.</p>



<h2 class="eplus-Uq5SHY">Estimating Ada / Hopper Deep Learning Performance</h2>



<p class="eplus-gsZ188">This section is for those who want to understand the more technical details of how I derive the performance estimates for Ampere GPUs. If you do not care about these technical aspects, it is safe to skip this section.</p>



<h3 class="eplus-t3NVr4">Practical Ada / Hopper Speed Estimates</h3>



<p class="eplus-H6ehT0">Suppose we have an estimate for one GPU of a GPU-architecture like Hopper, Ada, Ampere, Turing, or Volta. It is easy to extrapolate these results to other GPUs from the same architecture/series. Luckily, NVIDIA already <a href="https://developer.nvidia.com/deep-learning-performance-training-inference">benchmarked the A100 vs V100 vs H100</a> across a wide range of computer vision and natural language understanding tasks. Unfortunately, NVIDIA made sure that these numbers are not directly comparable by using different batch sizes and the number of GPUs whenever possible to favor results for the H100 GPU. So in a sense, the benchmark numbers are partially honest, partially marketing numbers. In general, you could argue that using larger batch sizes is fair, as the H100/A100 GPU has more memory. Still, to compare GPU architectures, we should evaluate unbiased memory performance with the same batch size.</p>



<p class="eplus-3xKGXX">To get an unbiased estimate, we can scale the data center GPU results in two ways: (1) account for the differences in batch size, (2) account for the differences in using 1 vs 8 GPUs. We are lucky that we can find such an estimate for both biases in the data that NVIDIA provides.&nbsp;</p>



<p class="eplus-LO0cd2">Doubling the batch size increases throughput in terms of images/s (CNNs) by 13.6%. I benchmarked the same problem for transformers on my RTX Titan and found, surprisingly, the very same result: 13.5% — it appears that this is a robust estimate.</p>



<p class="eplus-obwQUJ">As we parallelize networks across more and more GPUs, we lose performance due to some networking overhead. The A100 8x GPU system has better networking (NVLink 3.0) than the V100 8x GPU system (NVLink 2.0) — this is another confounding factor. Looking directly at the data from NVIDIA, we can find that for CNNs, a system with 8x A100 has a 5% lower overhead than a system of 8x V100. This means if going from 1x A100 to 8x A100 gives you a speedup of, say, 7.00x, then going from 1x V100 to 8x V100 only gives you a speedup of 6.67x.&nbsp; For transformers, the figure is 7%.&nbsp;</p>



<p class="eplus-M8lGMx">Using these figures, we can estimate the speedup for a few specific deep learning architectures from the direct data that NVIDIA provides. The Tesla A100 offers the following speedup over the Tesla V100:</p>



<ul class="eplus-2N544K"><li>SE-ResNeXt101: 1.43x</li><li>Masked-R-CNN: 1.47x</li><li>Transformer (12 layer, Machine Translation, WMT14 en-de): 1.70x</li></ul>



<p class="eplus-hLEvfM">Thus, the figures are a bit lower than the theoretical estimate for computer vision. This might be due to smaller tensor dimensions, overhead from operations that are needed to prepare the matrix multiplication like img2col or Fast Fourier Transform (FFT), or operations that cannot saturate the GPU (final layers are often relatively small). It could also be artifacts of the specific architectures (grouped convolution).</p>



<p class="eplus-K95gae">The practical transformer estimate is very close to the theoretical estimate. This is probably because algorithms for huge matrices are very straightforward. I will use these practical estimates to calculate the cost efficiency of GPUs.</p>



<h3 class="eplus-CUQjZa">Possible Biases in Estimates</h3>



<p class="eplus-UQWjwg">The estimates above are for H100, A100 , and V100 GPUs. In the past, NVIDIA sneaked unannounced performance degradations into the “gaming” RTX GPUs: (1) Decreased Tensor Core utilization, (2) gaming fans for cooling, (3) disabled peer-to-peer GPU transfers. It might be possible that there are unannounced performance degradations in the RTX 40 series compared to the full Hopper H100. </p>



<p class="eplus-IQX3ZD">As of now, one of these degradations was found for Ampere GPUs: Tensor Core performance was decreased so that RTX 30 series GPUs are not as good as Quadro cards for deep learning purposes. This was also done for the RTX 20 series, so it is nothing new, but this time it was also done for the Titan equivalent card, the RTX 3090. The RTX Titan did not have performance degradation enabled.</p>



<p>Currently, no degradations for Ada GPUs are known, but I will update this post with news on this and let my followers on <a href="https://twitter.com/Tim_Dettmers">Twitter</a> know.</p>



<h2 class="eplus-hICahu"><strong>Advantages and Problems for RTX40 and RTX 30 Series</strong></h2>



<p class="eplus-0zpF2c">The new NVIDIA Ampere RTX 30 series has additional benefits over the NVIDIA Turing RTX 20 series, such as sparse network training and inference. Other features, such as the new data types, should be seen more as an ease-of-use-feature as they provide the same performance boost as Turing does but without any extra programming required. </p>



<p>The Ada RTX 40 series has even further advances, like 8-bit Float (FP8) Tensor Cores. The RTX 40 series also has power and temperature issues similar to the RTX 30. The issue of melting power connector cables on the RTX 40 can be easily prevented by connecting the power cable correctly.</p>



<h3 class="eplus-2ZdClk">Sparse Network Training</h3>



<p class="eplus-vRUUJh">Ampere allows for fine-grained structure automatic sparse matrix multiplication at dense speeds. How does this work? Take a weight matrix and slice it into pieces of 4 elements. Now imagine 2 elements of these 4 to be zero. Figure 1 shows how this could look like. </p>


<div class="wp-block-image eplus-IiM2y7">
<figure class="aligncenter size-large is-resized"><img data-attachment-id="935" data-permalink="https://timdettmers.com/sparse_matrix_ampere/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matrix_ampere.png?fit=321%2C387&amp;ssl=1" data-orig-size="321,387" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Ampere Sparse Matrix Multiplication Tensor Cores Matrix" data-image-description="&lt;p&gt;Figure X: Structure supported by the sparse matrix multiplication feature in Ampere GPUs. Figure is taken from Jeff Pool&#8217;s GTC 2020 presentation on  &lt;a href=&quot;https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf&quot; rel=&quot;noopener noreferrer&quot; target=&quot;_blank&quot;&gt;Accelerating Sparsity in the NVIDIA Ampere Architecture&lt;/a&gt; by the courtesy of NVIDIA.&lt;/p&gt;
" data-image-caption="&lt;p&gt;Figure X: Structure supported by the sparse matrix multiplication feature in Ampere GPUs. Figure is taken from Jeff Pool&#8217;s GTC 2020 presentation on  &lt;a href=&quot;https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf&quot; rel=&quot;noopener noreferrer&quot; target=&quot;_blank&quot;&gt;Accelerating Sparsity in the NVIDIA Ampere Architecture&lt;/a&gt; by the courtesy of NVIDIA.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matrix_ampere.png?fit=249%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matrix_ampere.png?fit=321%2C387&amp;ssl=1" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matrix_ampere.png?resize=258%2C311&#038;ssl=1" alt="Figure 1: Structure supported by the sparse matrix multiplication feature in Ampere GPUs. The figure is taken from Jeff Pool's GTC 2020 presentation on  Accelerating Sparsity in the NVIDIA Ampere Architecture by the courtesy of NVIDIA." class="wp-image-935" width="258" height="311" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matrix_ampere.png?w=321&amp;ssl=1 321w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matrix_ampere.png?resize=249%2C300&amp;ssl=1 249w" sizes="(max-width: 258px) 100vw, 258px" data-recalc-dims="1" /><figcaption>Figure 1: Structure supported by the sparse matrix multiplication feature in Ampere GPUs. The figure is taken from Jeff Pool&#8217;s GTC 2020 presentation on <a href="https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf" rel="noreferrer noopener" target="_blank">Accelerating Sparsity in the NVIDIA Ampere Architecture</a> by the courtesy of NVIDIA.</figcaption></figure></div>


<p class="eplus-JGsMdm">When you multiply this sparse weight matrix with some dense inputs, the sparse matrix tensor core feature in Ampere automatically compresses the sparse matrix to a dense representation that is half the size as can be seen in Figure 2. After this compression, the densely compressed matrix tile is fed into the tensor core which computes a matrix multiplication of twice the usual size. This effectively yields a 2x speedup since the bandwidth requirements during matrix multiplication from shared memory are halved.</p>



<figure class="wp-block-image size-large eplus-O3WVqX"><img data-attachment-id="934" data-permalink="https://timdettmers.com/sparse_matmul/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?fit=1055%2C638&amp;ssl=1" data-orig-size="1055,638" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Sparse Matrix Multiplication in Ampere" data-image-description="&lt;p&gt;Figure X: The sparse matrix is compressed to a dense representation before the matrix multiplication is performed. Figure is taken from Jeff Pool&#8217;s GTC 2020 presentation on  &lt;a href=&quot;https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf&quot; rel=&quot;noopener noreferrer&quot; target=&quot;_blank&quot;&gt;Accelerating Sparsity in the NVIDIA Ampere Architecture&lt;/a&gt; by the courtesy of NVIDIA.&lt;/p&gt;
" data-image-caption="&lt;p&gt;Figure X: The sparse matrix is compressed to a dense representation before the matrix multiplication is performed. Figure is taken from Jeff Pool&#8217;s GTC 2020 presentation on  &lt;a href=&quot;https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf&quot; rel=&quot;noopener noreferrer&quot; target=&quot;_blank&quot;&gt;Accelerating Sparsity in the NVIDIA Ampere Architecture&lt;/a&gt; by the courtesy of NVIDIA.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?fit=300%2C181&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?fit=1024%2C619&amp;ssl=1" width="1024" height="619" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?resize=1024%2C619&#038;ssl=1" alt="Figure 2: The sparse matrix is compressed to a dense representation before the matrix multiplication is performed. " class="wp-image-934" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?resize=1024%2C619&amp;ssl=1 1024w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?resize=300%2C181&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?resize=768%2C464&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/sparse_matmul.png?w=1055&amp;ssl=1 1055w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /><figcaption>Figure 2: The sparse matrix is compressed to a dense representation before the matrix multiplication is performed. The figure is taken from Jeff Pool&#8217;s GTC 2020 presentation on  <a href="https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf" rel="noopener noreferrer" target="_blank">Accelerating Sparsity in the NVIDIA Ampere Architecture</a> by the courtesy of NVIDIA.</figcaption></figure>



<p class="eplus-dCX1Ah">I was working on <a href="https://arxiv.org/abs/1907.04840">sparse network training</a> in my research and I also wrote a <a href="https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/">blog post about sparse training</a>. One criticism of my work was that “You reduce the FLOPS required for the network, but it does not yield speedups because GPUs cannot do fast sparse matrix multiplication.” Well, with the addition of the sparse matrix multiplication feature for Tensor Cores, my algorithm, or <a href="https://arxiv.org/abs/2002.03231" rel="nofollow">other</a> <a href="https://arxiv.org/abs/2002.07376" rel="nofollow">sparse</a> <a href="https://arxiv.org/abs/1911.11134" rel="nofollow">training</a> <a href="https://arxiv.org/abs/1902.05967" rel="nofollow">algorithms</a>, now actually provide speedups of up to 2x during training.</p>



<figure class="wp-block-image size-large eplus-uspqxA"><a href="https://arxiv.org/abs/1907.04840"><img data-attachment-id="779" data-permalink="https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/sparse_momentum/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?fit=1096%2C528&amp;ssl=1" data-orig-size="1096,528" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Sparse Momentum Dettmers &#038; Zettlemoyer 2019" data-image-description="&lt;p&gt;Figure X: The sparse training algorithm developed has three stages: (1) Determine the importance of each layer. (2) Remove the smallest, unimportant weights. (3) Grow new weights proportional to the importance of each layers.&lt;/p&gt;
" data-image-caption="&lt;p&gt;Figure X: The sparse training algorithm developed has three stages: (1) Determine the importance of each layer. (2) Remove the smallest, unimportant weights. (3) Grow new weights proportional to the importance of each layers.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?fit=300%2C145&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?fit=1024%2C493&amp;ssl=1" width="1024" height="493" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?resize=1024%2C493&#038;ssl=1" alt="Figure 3: The sparse training algorithm that I developed has three stages: (1) Determine the importance of each layer. (2) Remove the smallest, unimportant weights. (3) Grow new weights proportional to the importance of each layer. Read more about my work in my sparse training blog post." class="wp-image-779" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?resize=1024%2C493&amp;ssl=1 1024w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?resize=300%2C145&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?resize=768%2C370&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2019/07/sparse_momentum.png?w=1096&amp;ssl=1 1096w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></a><figcaption>Figure 3: The <a href="https://arxiv.org/abs/1907.04840">sparse training algorithm</a> that I developed has three stages: (1) Determine the importance of each layer. (2) Remove the smallest, unimportant weights. (3) Grow new weights proportional to the importance of each layer. Read more about my work in my <a href="https://timdettmers.com/2019/07/11/sparse-networks-from-scratch/">sparse training blog post</a>.</figcaption></figure>



<p class="eplus-9YQAyM">While this feature is still experimental and training sparse networks are not commonplace yet, having this feature on your GPU means you are ready for the future of sparse training.</p>



<h3 class="eplus-uvNoM4">Low-precision Computation</h3>



<p class="eplus-AgG4J1">In my work, I’ve previously shown that new data types can improve stability during <a href="https://arxiv.org/abs/1511.04561">low-precision backpropagation</a>. </p>



<figure class="wp-block-image size-large eplus-4aBkKy"><img data-attachment-id="941" data-permalink="https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/8-bit_data_types/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?fit=869%2C268&amp;ssl=1" data-orig-size="869,268" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="8-bit_data_types" data-image-description="&lt;p&gt;Figure X: Low-precision deep learning 8-bit datatypes that I developed. Deep learning training benefits from highly specialized data types. My dynamic tree datatype uses a dynamic bit that indicates the beginning of a binary bisection tree that quantized the range [0, 0.9] while all previous bits are used for the exponent. This allows to dynamically represent large numbers and small numbers with high precision.&lt;/p&gt;
" data-image-caption="&lt;p&gt;Figure X: Low-precision deep learning 8-bit datatypes that I developed. Deep learning training benefits from highly specialized data types. My dynamic tree datatype uses a dynamic bit that indicates the beginning of a binary bisection tree that quantized the range [0, 0.9] while all previous bits are used for the exponent. This allows to dynamically represent large numbers and small numbers with high precision.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?fit=300%2C93&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?fit=869%2C268&amp;ssl=1" width="869" height="268" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?resize=869%2C268&#038;ssl=1" alt="Figure 4: Low-precision deep learning 8-bit datatypes that I developed. Deep learning training benefits from highly specialized data types. My dynamic tree datatype uses a dynamic bit that indicates the beginning of a binary bisection tree that quantized the range [0, 0.9] while all previous bits are used for the exponent. This allows to dynamically represent numbers that are both large and small with high precision." class="wp-image-941" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?w=869&amp;ssl=1 869w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?resize=300%2C93&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/8-bit_data_types.png?resize=768%2C237&amp;ssl=1 768w" sizes="(max-width: 869px) 100vw, 869px" data-recalc-dims="1" /><figcaption>Figure 4: Low-precision deep learning 8-bit datatypes that I developed. Deep learning training benefits from highly specialized data types. My dynamic tree datatype uses a dynamic bit that indicates the beginning of a binary bisection tree that quantized the range [0, 0.9] while all previous bits are used for the exponent. This allows to dynamically represent numbers that are both large and small with high precision.</figcaption></figure>



<p class="eplus-3DDVFs">Currently, if you want to have stable backpropagation with 16-bit floating-point numbers (FP16), the big problem is that ordinary FP16 data types only support numbers in the range [-65,504, 65,504]. If your gradient slips past this range, your gradients explode into NaN values. To prevent this during FP16 training, we usually perform loss scaling where you multiply the loss by a small number before backpropagating to prevent this gradient explosion.&nbsp;</p>



<p class="eplus-WQx9qD">The BrainFloat 16 format (BF16) uses more bits for the exponent such that the range of possible numbers is the same as for FP32: [-3*10^38, 3*10^38]. BF16 has less precision, that is significant digits, but gradient precision is not that important for learning. So what BF16 does is that you no longer need to do any loss scaling or worry about the gradient blowing up quickly. As such, we should see an increase in training stability by using the BF16 format as a slight loss of precision.</p>



<p class="eplus-NLJqlG">What this means for you: With BF16 precision, training might be more stable than with FP16 precision while providing the same speedups. With 32-bit TensorFloat (TF32) precision, you get near FP32 stability while giving the speedups close to FP16. The good thing is, to use these data types, you can just replace FP32 with TF32 and FP16 with BF16 — no code changes required!</p>



<p class="eplus-S1SD30">Overall, though, these new data types can be seen as lazy data types in the sense that you could have gotten all the benefits with the old data types with some additional programming efforts (proper loss scaling, initialization, normalization, using Apex). As such, these data types do not provide speedups but rather improve ease of use of low precision for training.</p>


<div class="crp_related   crp_related_block  mobile-only crp-rounded-thumbs"><ul><li><a href="https://timdettmers.com/2025/12/10/why-agi-will-not-happen/"     class="crp_link post-1233"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="Why AGI Will Not Happen" title="Why AGI Will Not Happen" data-recalc-dims="1" /></figure><span class="crp_title">Why AGI Will Not Happen</span></a></li><li><a href="https://timdettmers.com/2026/01/13/use-agents-or-be-left-behind/"     class="crp_link post-1238"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work" title="Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work" data-recalc-dims="1" /></figure><span class="crp_title">Use Agents or Be Left Behind? A Personal Guide to Automating&hellip;</span></a></li><li><a href="https://timdettmers.com/2026/01/23/my-journey-towards-coding-agents-building-sera/"     class="crp_link post-1248"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="My Journey Towards Coding Agents: Building SERA" title="My Journey Towards Coding Agents: Building SERA" data-recalc-dims="1" /></figure><span class="crp_title">My Journey Towards Coding Agents: Building SERA</span></a></li></ul><div class="crp_clear"></div></div>


<h3 class="eplus-UXUbWi">Fan Designs  and GPUs Temperature Issues</h3>



<p class="eplus-yw9sMm">While the new fan design of the RTX 30 series performs very well to cool the GPU, different fan designs of non-founders edition GPUs might be more problematic. If your GPU heats up beyond 80C, it will throttle itself and slow down its computational speed / power. This overheating can happen in particular if you stack multiple GPUs next to each other. A solution to this is to use PCIe extenders to create space between GPUs.</p>



<p class="eplus-CTvlva">Spreading GPUs with PCIe extenders is very effective for cooling, and other fellow PhD students at the University of Washington and I use this setup with great success. It does not look pretty, but it keeps your GPUs cool! This has been running with no problems at all for 4 years now. It can also help if you do not have enough space to fit all GPUs in the PCIe slots. For example, if you can find the space within a desktop computer case, it might be possible to buy standard 3-slot-width RTX 4090 and spread them with PCIe extenders within the case. With this, you might solve both the space issue and cooling issue for a 4x RTX 4090 setup with a single simple solution.</p>


<div class="wp-block-image eplus-dDY0Kb">
<figure class="aligncenter size-large"><img data-attachment-id="861" data-permalink="https://timdettmers.com/4x_rtx2080ti_desktop_extenders/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?fit=1920%2C2560&amp;ssl=1" data-orig-size="1920,2560" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;1.9&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Redmi Note 5&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1557156443&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;3.94&quot;,&quot;iso&quot;:&quot;1250&quot;,&quot;shutter_speed&quot;:&quot;0.05&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="4x_RTX2080Ti_desktop_extenders" data-image-description="&lt;p&gt;4x GPUs with PCIe extenders&lt;/p&gt;
" data-image-caption="&lt;p&gt;4x GPUs with PCIe extenders&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?fit=225%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?fit=768%2C1024&amp;ssl=1" width="768" height="1024" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders.jpg?resize=768%2C1024&#038;ssl=1" alt="Figure 5: 4x GPUs with PCIe extenders. It looks like a mess, but it is very effective for cooling. I used this rig for 2 years and cooling is excellent despite problematic RTX 2080 Ti Founders Edition GPUs." class="wp-image-861" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?resize=768%2C1024&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?resize=225%2C300&amp;ssl=1 225w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?resize=1152%2C1536&amp;ssl=1 1152w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?resize=1536%2C2048&amp;ssl=1 1536w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/4x_RTX2080Ti_desktop_extenders-scaled.jpg?w=1920&amp;ssl=1 1920w" sizes="(max-width: 768px) 100vw, 768px" data-recalc-dims="1" /><figcaption>Figure 5: 4x GPUs with PCIe extenders. It looks like a mess, but it is very effective for cooling. I used this rig for 4 years and cooling is excellent despite problematic RTX 2080 Ti Founders Edition GPUs.</figcaption></figure></div>


<h3 class="eplus-JbBDhp">3-slot Design and Power Issues</h3>



<p class="eplus-DovhM1">The RTX 3090 and RTX 4090 are 3-slot GPUs, so one will not be able to use it in a 4x setup with the default fan design from NVIDIA. This is kind of justified because it runs at over 350W TDP, and it will be difficult to cool in a multi-GPU 2-slot setting. The RTX 3080 is only slightly better at 320W TDP, and cooling a 4x RTX 3080 setup will also be very difficult.</p>



<p class="eplus-pfi9QE">It is also difficult to power a 4x 350W = 1400W or 4x 450W = 1800W system in the 4x RTX 3090 or 4x RTX 4090 case. Power supply units (PSUs) of 1600W are readily available, but having only 200W to power the <a href="https://timdettmers.com/2018/12/16/deep-learning-hardware-guide/">CPU and motherboard</a> can be too tight. The components’ maximum power is only used if the components are fully utilized, and in deep learning, the CPU is usually only under weak load. With that, a 1600W PSU might work quite well with a 4x RTX 3080 build, but for a 4x RTX 3090 build, it is better to look for high wattage PSUs (+1700W). Some of my followers have had great success with cryptomining PSUs — have a look in the comment section for more info about that. Otherwise, it is important to note that not all outlets support PSUs above 1600W, especially in the US. This is the reason why in the US, there are currently few standard desktop PSUs above 1600W on the market. If you get a server or cryptomining PSUs, beware of the form factor — make sure it fits into your computer case.</p>



<h3 class="eplus-wvweIn">Power Limiting: An Elegant Solution to Solve the Power Problem?</h3>



<p class="eplus-eq3oCc">It is possible to set a power limit on your GPUs. So you would be able to programmatically set the power limit of an RTX 3090 to 300W instead of their standard 350W. In a 4x GPU system, that is a saving of 200W, which might just be enough to build a 4x RTX 3090 system with a 1600W PSU feasible. It also helps to keep the GPUs cool. So setting a power limit can solve the two major problems of a 4x RTX 3080 or 4x RTX 3090 setups, cooling, and power, at the same time. For a 4x setup, you still need effective blower GPUs (and the standard design may prove adequate for this), but this resolves the PSU problem.</p>



<figure class="wp-block-image size-large eplus-uDERbv"><img data-attachment-id="933" data-permalink="https://timdettmers.com/power_limit_nvidia_smi/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?fit=1187%2C1195&amp;ssl=1" data-orig-size="1187,1195" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Power Limit Cooling Effect NVIDIA SMI" data-image-description="&lt;p&gt;Figure X: Reducing the power limit has a slight cooling effect. Reducing the RTX 2080 Ti power limit by 50-60 W decreases temperatures slightly and fans run more silent.&lt;/p&gt;
" data-image-caption="&lt;p&gt;Figure X: Reducing the power limit has a slight cooling effect. Reducing the RTX 2080 Ti power limit by 50-60 W decreases temperatures slightly and fans run more silent.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?fit=298%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?fit=1017%2C1024&amp;ssl=1" width="1017" height="1024" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?resize=1017%2C1024&#038;ssl=1" alt="Figure 6: Reducing the power limit has a slight cooling effect. Reducing the RTX 2080 Ti power limit by 50-60 W decreases temperatures slightly and fans run more silent." class="wp-image-933" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?resize=1017%2C1024&amp;ssl=1 1017w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?resize=298%2C300&amp;ssl=1 298w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?resize=150%2C150&amp;ssl=1 150w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?resize=768%2C773&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2020/09/power_limit_nvidia_smi.png?w=1187&amp;ssl=1 1187w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /><figcaption>Figure 6: Reducing the power limit has a slight cooling effect. Reducing the RTX 2080 Ti power limit by 50-60 W decreases temperatures slightly and fans run more silent.</figcaption></figure>



<p class="eplus-dIRoAn">You might ask, “Doesn’t this slow down the GPU?” Yes, it does, but the question is by how much. I benchmarked the 4x RTX 2080 Ti system shown in Figure 5 under different power limits to test this. I benchmarked the time for 500 mini-batches for BERT Large during inference (excluding the softmax layer). I choose BERT Large inference since, from my experience, this is the deep learning model that stresses the GPU the most. As such, I would expect power limiting to have the most massive slowdown for this model. As such, the slowdowns reported here are probably close to the maximum slowdowns that you can expect. The results are shown in Figure 7. </p>



<figure class="wp-block-image size-large eplus-WIGOOF"><img data-attachment-id="939" data-permalink="https://timdettmers.com/rtx-2080-ti-slowdown-vs-power-limit/" data-orig-file="https://timdettmers.com/wp-content/uploads/2020/09/RTX-2080-Ti-Slowdown-vs-Power-Limit.svg" data-orig-size="853,703" data-comments-opened="1" data-image-meta="[]" data-image-title="RTX 2080 Ti Slowdown vs Power Limit" data-image-description="&lt;p&gt;Figure 6: Measured slowdown for a given power limit on an RTX 2080 Ti. Measurements taken are mean processing times for 500 mini-batches of BERT Large during inference (excluding softmax layer).&lt;/p&gt;
" data-image-caption="&lt;p&gt;Figure 6: Measured slowdown for a given power limit on an RTX 2080 Ti. Measurements taken are mean processing times for 500 mini-batches of BERT Large during inference (excluding softmax layer).&lt;/p&gt;
" data-medium-file="https://timdettmers.com/wp-content/uploads/2020/09/RTX-2080-Ti-Slowdown-vs-Power-Limit.svg" data-large-file="https://timdettmers.com/wp-content/uploads/2020/09/RTX-2080-Ti-Slowdown-vs-Power-Limit.svg" width="853" height="703" src="https://timdettmers.com/wp-content/uploads/2020/09/RTX-2080-Ti-Slowdown-vs-Power-Limit.svg" alt="Figure 7: Measured slowdown for a given power limit on an RTX 2080 Ti. Measurements taken are mean processing times for 500 mini-batches of BERT Large during inference (excluding softmax layer)." class="wp-image-939"/><figcaption>Figure 7: Measured slowdown for a given power limit on an RTX 2080 Ti. Measurements taken are mean processing times for 500 mini-batches of BERT Large during inference (excluding softmax layer).</figcaption></figure>



<p class="eplus-FATkvQ">As we can see, setting the power limit does not seriously affect performance. Limiting the power by 50W — more than enough to handle 4x RTX 3090 — decreases performance by only 7%.</p>



<p></p>



<h3>RTX 4090s and Melting Power Connectors: How to Prevent Problems</h3>



<p>There was a misconception that RTX 4090 power cables melt because they were bent. However, it was found that only 0.1% of users had this problem, and that it occurred due to user error. Here is a video showing that the main problem is that <a href="https://www.youtube.com/watch?v=ig2px7ofKhQ&amp;t=1065s">cables were not inserted correctly</a>.</p>



<p>So using RTX 4090 cards is perfectly safe if you follow these installation instructions:</p>



<ol><li>If you use an old cable or old GPU, make sure the contacts are free of debris and dust.</li><li>Push the power connector into the socket until you hear a *click* — this is the most important part.</li><li>Test for a good fit by wiggling the power cable left to right. The cable should not move.</li><li>Check the contact with the socket visually; there should be no gap between cable and socket.</li></ol>



<h3>8-bit Float Support in H100 and RTX 40 series GPUs</h3>



<p>Support for the 8-bit Float (FP8) data type is a huge advantage for the RTX 40 series and H100 GPUs. With 8-bit inputs you can load the data for matrix multiplication twice as fast, and you can store twice as many matrix elements in your caches, which in the Ada and Hopper architectures are very large. With FP8 tensor cores you get 0.66 PFLOPS of compute for an RTX 4090 — this is more FLOPS than the entirety of the world's fastest supercomputer in 2007. 4x RTX 4090 with FP8 compute rival the fastest supercomputer in the world in 2010 (deep learning only started to work in 2009).</p>



<p>The main problem with using 8-bit precision is that transformers can get very unstable with so few bits and crash during training or generate nonsense during inference. I have written a <a href="https://arxiv.org/abs/2208.07339">paper about the emergence of instabilities in large language models</a>, and I have also written a more accessible <a href="https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/">blog post</a>.</p>



<p>The main takeaway is this: Using 8-bit instead of 16-bit makes things very unstable, but if you keep a couple of dimensions in high precision everything works just fine.</p>
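<p>To make the idea concrete, here is a minimal NumPy sketch of the decomposition behind LLM.int8(): the few feature dimensions with large outlier values stay in higher precision while everything else goes through an 8-bit matrix multiplication. The fixed threshold and the per-tensor quantization are simplifications for illustration, not the actual bitsandbytes implementation.</p>

<pre><code>
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization (the real method quantizes per row/column).
    scale = max(np.abs(x).max() / 127.0, 1e-8)
    return np.round(x / scale).astype(np.int8), scale

def mixed_precision_matmul(X, W, outlier_threshold=6.0):
    # Feature dimensions of X with large-magnitude values ("outliers") stay in
    # full precision; the rest goes through the int8 path.
    outliers = np.abs(X).max(axis=0) >= outlier_threshold
    out = np.zeros((X.shape[0], W.shape[1]), dtype=np.float32)
    if (~outliers).any():
        Xq, sx = quantize_int8(X[:, ~outliers])
        Wq, sw = quantize_int8(W[~outliers, :])
        out += (Xq.astype(np.int32) @ Wq.astype(np.int32)).astype(np.float32) * sx * sw
    if outliers.any():
        out += X[:, outliers] @ W[outliers, :]
    return out

X = np.random.randn(4, 512).astype(np.float32)
X[:, 0] *= 20.0                                  # inject one outlier feature dimension
W = np.random.randn(512, 256).astype(np.float32)
# Error stays small relative to the output scale despite the int8 path.
print(np.abs(mixed_precision_matmul(X, W) - X @ W).max())
</code></pre>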



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?ssl=1"><img data-attachment-id="1146" data-permalink="https://timdettmers.com/llm_int8_zeroshot_emergence/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?fit=1808%2C1462&amp;ssl=1" data-orig-size="1808,1462" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="LLM_int8_zeroshot_emergence" data-image-description="" data-image-caption="&lt;p&gt;Main results from my work on 8-bit matrix multiplication for Large Language Models (LLMs). We can see that the best 8-bit baseline fails to deliver good zero-shot performance. The method that I developed, LLM.int8(), can perform Int8 matrix multiplication with the same results as the 16-bit baseline.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?fit=300%2C243&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?fit=1024%2C828&amp;ssl=1" width="1024" height="828" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?resize=1024%2C828&#038;ssl=1" alt="" class="wp-image-1146" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?resize=1024%2C828&amp;ssl=1 1024w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?resize=300%2C243&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?resize=768%2C621&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?resize=1536%2C1242&amp;ssl=1 1536w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/LLM_int8_zeroshot_emergence.png?w=1808&amp;ssl=1 1808w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></a><figcaption>Main results from my work on 8-bit matrix multiplication for Large Language Models (LLMs). We can see that the best 8-bit baseline fails to deliver good zero-shot performance. The method that I developed, LLM.int8(), can perform Int8 matrix multiplication with the same results as the 16-bit baseline.</figcaption></figure>



<p>But Int8 was already supported by the RTX 30 / A100 / Ampere generation GPUs, so why is FP8 in the RTX 40 series another big upgrade? The FP8 data type is much more stable than the Int8 data type, and it is easy to use in functions like layer norm or non-linear functions, which are difficult to do with integer data types. This makes it very straightforward to use in training and inference. I think this will make FP8 training and inference relatively common in a couple of months. </p>



<p>If you want to read more about the advantages of Float vs Integer data types you can read my recent paper about <a href="https://arxiv.org/abs/2212.09720">k-bit inference scaling laws</a>. Below you can see one relevant main result for Float vs Integer data types from this paper. We can see that, bit for bit, the FP4 data type preserves more information than the Int4 data type and thus improves the mean LLM zero-shot accuracy across 4 tasks.</p>
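<p>To build intuition for why a float data type can preserve more information per bit, you can compare the quantization error of a non-uniform, float-style 4-bit grid against a uniform Int4 grid on normally distributed values. The E2M1-style value set and the per-tensor absmax scaling below are assumptions for illustration only; this is not the quantization scheme used in the paper.</p>

<pre><code>
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000).astype(np.float32)

# Assumed E2M1-style FP4 values vs. uniform signed Int4 levels, both normalized
# to [-1, 1] and then scaled by the absmax of the tensor (a simplification).
fp4_pos = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
fp4_grid = np.concatenate([-fp4_pos[::-1], fp4_pos]) / 6.0
int4_grid = np.arange(-7, 8) / 7.0

def quantize_to_grid(x, grid):
    # Round every value to the nearest representable level.
    levels = grid * np.abs(x).max()
    idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

for name, grid in [("FP4-like", fp4_grid), ("Int4", int4_grid)]:
    err = np.mean((x - quantize_to_grid(x, grid)) ** 2)
    print(f"{name}: mean squared quantization error = {err:.5f}")

# The float-style grid packs more levels near zero, where normally distributed
# weights and activations concentrate, so its error comes out lower.
</code></pre>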



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?ssl=1"><img data-attachment-id="1151" data-permalink="https://timdettmers.com/pythia_4bit_datatypes2/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?fit=1159%2C875&amp;ssl=1" data-orig-size="1159,875" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="pythia_4bit_datatypes2" data-image-description="" data-image-caption="&lt;p&gt;4-bit Inference scaling laws for Pythia Large Language Models for different data types. We see that bit-by-bit, 4-bit float data types have better zeroshot accuracy compared to the Int4 data types.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?fit=300%2C226&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?fit=1024%2C773&amp;ssl=1" width="1024" height="773" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?resize=1024%2C773&#038;ssl=1" alt="" class="wp-image-1151" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?resize=1024%2C773&amp;ssl=1 1024w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?resize=300%2C226&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?resize=768%2C580&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/pythia_4bit_datatypes2.png?w=1159&amp;ssl=1 1159w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></a><figcaption>4-bit Inference scaling laws for Pythia Large Language Models for different data types. We see that bit-by-bit, 4-bit float data types have better zeroshot accuracy compared to the Int4 data types.</figcaption></figure>



<h2 class="eplus-A4G25G">Raw Performance Ranking of GPUs</h2>



<p class="eplus-xyHWc1">Below we see a chart of raw relevative performance across all GPUs. We see that there is a gigantic gap in 8-bit performance of H100 GPUs and old cards that are optimized for 16-bit performance.</p>



<p></p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?ssl=1"><img data-attachment-id="1160" data-permalink="https://timdettmers.com/gpus_ada_raw_performance3/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?fit=1703%2C1673&amp;ssl=1" data-orig-size="1703,1673" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="GPUS_Ada_raw_performance3" data-image-description="" data-image-caption="&lt;p&gt;Shown is raw relative performance of GPUs. For example, an RTX 4090 has about 0.33x performance of a H100 SMX for 8-bit inference. In other words, a H100 SMX is three times faster for 8-bit inference compared to a RTX 4090.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?fit=300%2C295&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?fit=1024%2C1006&amp;ssl=1" width="1024" height="1006" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?resize=1024%2C1006&#038;ssl=1" alt="" class="wp-image-1160" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?resize=1024%2C1006&amp;ssl=1 1024w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?resize=300%2C295&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?resize=768%2C754&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?resize=1536%2C1509&amp;ssl=1 1536w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUS_Ada_raw_performance3.png?w=1703&amp;ssl=1 1703w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></a><figcaption>Shown is raw relative transformer performance of GPUs. For example, an RTX 4090 has about 0.33x performance of a H100 SMX for 8-bit inference. In other words, a H100 SMX is three times faster for 8-bit inference compared to a RTX 4090.</figcaption></figure>



<p class="eplus-gzQ71E">For this data, I did not model 8-bit compute for older GPUs.  I did so, because 8-bit Inference and training are much more effective on Ada/Hopper GPUs because of the 8-bit Float data type and Tensor Memory Accelerator (TMA) which saves the overhead of computing read/write indices which is particularly helpful for 8-bit matrix multiplication. Ada/Hopper also have FP8 support, which makes in particular 8-bit training much more effective.</p>



<p class="eplus-gzQ71E">I did not model numbers for 8-bit training because to model that I need to know the latency of L1 and L2 caches on Hopper/Ada GPUs, and they are unknown and I do not have access to such GPUs. On Hopper/Ada, 8-bit training performance can well be 3-4x of 16-bit training performance if the caches are as fast as rumored. </p>



<p class="eplus-gzQ71E">But even with the new FP8 tensor cores there are some additional issues which are difficult to take into account when modeling GPU performance. For example, FP8 tensor cores do not support transposed matrix multiplication which means backpropagation needs either a separate transpose before multiplication or one needs to hold two sets of weights — one transposed and one non-transposed — in memory. I used two sets of weight when I experimented with Int8 training in my <a href="https://arxiv.org/abs/2208.07339">LLM.int8()</a> project and this reduced the overall speedups quite significantly. I think one can do better with the right algorithms/software, but this shows that missing features like a transposed matrix multiplication for tensor cores can affect performance.</p>



<p class="eplus-gzQ71E">For old GPUs, Int8 inference performance is close to the 16-bit inference performance for models below 13B parameters. Int8 performance on old GPUs is only relevant if you have relatively large models with 175B parameters or more. If you are interested in 8-bit performance of older GPUs, you can read the Appendix D of my <a href="https://arxiv.org/abs/2208.07339">LLM.int8() paper</a> where I benchmark Int8 performance.</p>



<h2 class="eplus-1SzBFE">GPU Deep Learning Performance per Dollar</h2>



<p>Below we see the chart for performance per US dollar for all GPUs, sorted by 8-bit inference performance. You can use the chart to find a suitable GPU as follows:</p>



<ol><li>Determine the amount of GPU memory that you need (rough heuristic: at least 12 GB for image generation; at least 24 GB for work with transformers; see the rough memory sketch after this list).</li><li>While 8-bit inference and training are experimental today, they will become standard within 6 months. You might need to do some extra, difficult coding to work with 8-bit in the meantime. Is that OK for you? If not, select for 16-bit performance.</li><li>Using the metric determined in (2), find the GPU with the highest relative performance/dollar that has the amount of memory you need.</li></ol>
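<p>For step (1), a rough back-of-the-envelope sketch of weight memory can help (an assumption-laden heuristic, not an exact rule): parameter count times bytes per parameter, with activations, gradients, optimizer states, and the KV cache coming on top.</p>

<pre><code>
def rough_weight_memory_gb(n_params_billions, bits_per_param=16):
    # Weights only; training can easily need several times this number once
    # gradients, optimizer states, and activations are included.
    return n_params_billions * 1e9 * bits_per_param / 8 / 1024**3

print(rough_weight_memory_gb(13, bits_per_param=16))  # ~24 GB just for 16-bit weights
print(rough_weight_memory_gb(13, bits_per_param=8))   # ~12 GB with 8-bit weights
</code></pre>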



<p>We can see that the RTX 4070 Ti is the most cost-effective for 8-bit and 16-bit inference, while the RTX 3080 remains the most cost-effective for 16-bit training. While these GPUs are the most cost-effective, they are not necessarily recommended as they do not have sufficient memory for many use-cases. However, they might be ideal cards to get started on your deep learning journey. Some of these smaller GPUs are also excellent for Kaggle competitions, where one can often rely on smaller models and where how you work matters more than model size. </p>



<p>The best GPUs for academic and startup servers seem to be A6000 Ada GPUs (not to be confused with A6000 Turing). The H100 SXM GPU is also very cost-effective, has high memory, and offers very strong performance. If I were to build a small cluster for a company or academic lab, I would use 66-80% A6000 GPUs and 20-33% H100 SXM GPUs. If I got a good deal on L40 GPUs, I would pick them instead of the A6000, so you can always ask for a quote on these.</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?ssl=1"><img data-attachment-id="1192" data-permalink="https://timdettmers.com/gpus_ada_performance_per_dollar6/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?fit=1703%2C1673&amp;ssl=1" data-orig-size="1703,1673" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="GPUs_Ada_performance_per_dollar6" data-image-description="" data-image-caption="&lt;p&gt;Shown is relative performance per US Dollar of GPUs normalized by the cost for a desktop computer and the average Amazon and eBay price for each GPU. Additionally, the electricity cost of ownership for 5 years is added with an electricity price of 0.175 USD per kWh and a 15% GPU utilization rate. The electricity cost for a RTX 4090 is about $100 per year. How to read and interpret the chart: a desktop computer with RTX 4070 Ti cards owned for 5 years yields about 2x more 8-bit inference performance per dollar compared to a RTX 3090 GPU.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?fit=300%2C295&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?fit=1024%2C1006&amp;ssl=1" width="1024" height="1006" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?resize=1024%2C1006&#038;ssl=1" alt="" class="wp-image-1192" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?resize=1024%2C1006&amp;ssl=1 1024w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?resize=300%2C295&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?resize=768%2C754&amp;ssl=1 768w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?resize=1536%2C1509&amp;ssl=1 1536w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/GPUs_Ada_performance_per_dollar6.png?w=1703&amp;ssl=1 1703w" sizes="(max-width: 1000px) 100vw, 1000px" data-recalc-dims="1" /></a><figcaption>Shown is relative performance per US Dollar of GPUs normalized by the cost for a desktop computer and the average Amazon and eBay price for each GPU. Additionally, the electricity cost of ownership for 5 years is added with an electricity price of 0.175 USD per kWh and a 15% GPU utilization rate. The electricity cost for a RTX 4090 is about $100 per year. How to read and interpret the chart: a desktop computer with RTX 4070 Ti cards owned for 5 years yields about 2x more 8-bit inference performance per dollar compared to a RTX 3090 GPU.</figcaption></figure>



<p></p>


<div class="crp_related   crp_related_block  mobile-only crp-rounded-thumbs"><ul><li><a href="https://timdettmers.com/2026/01/13/use-agents-or-be-left-behind/"     class="crp_link post-1238"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work" title="Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work" data-recalc-dims="1" /></figure><span class="crp_title">Use Agents or Be Left Behind? A Personal Guide to Automating&hellip;</span></a></li><li><a href="https://timdettmers.com/2026/01/23/my-journey-towards-coding-agents-building-sera/"     class="crp_link post-1248"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="My Journey Towards Coding Agents: Building SERA" title="My Journey Towards Coding Agents: Building SERA" data-recalc-dims="1" /></figure><span class="crp_title">My Journey Towards Coding Agents: Building SERA</span></a></li><li><a href="https://timdettmers.com/2025/12/10/why-agi-will-not-happen/"     class="crp_link post-1233"><figure><img loading="lazy"  width="150" height="150"  src="https://i0.wp.com/timdettmers.com/wp-content/plugins/contextual-related-posts/default.png?resize=150%2C150&#038;ssl=1" class="crp_thumb crp_default_thumb" alt="Why AGI Will Not Happen" title="Why AGI Will Not Happen" data-recalc-dims="1" /></figure><span class="crp_title">Why AGI Will Not Happen</span></a></li></ul><div class="crp_clear"></div></div>


<h2 class="eplus-RwDsYV">GPU Recommendations</h2>



<p>I have created a recommendation flow-chart that you can see below (click here for the <a href="https://nanx.me/gpu/">interactive app</a> from Nan Xiao). While this chart will help you in 80% of cases, it might not quite work for you because the options might be too expensive. In that case, try to look at the benchmarks above and pick the most cost-effective GPU that still has enough GPU memory for your use-case. You can estimate the GPU memory needed by running your problem on vast.ai or Lambda Cloud for a while so you know what you need. vast.ai or Lambda Cloud might also work well if you only need a GPU sporadically (every couple of days for a few hours) and you do not need to download and process large datasets to get started. However, cloud GPUs are usually not a good option if you use your GPU for many months with a high usage rate (12 hours each day). You can use the example in the &#8220;When is it better to use the cloud vs a dedicated GPU desktop/server?&#8221; section below to determine if cloud GPUs are good for you.</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?ssl=1"><img data-attachment-id="1173" data-permalink="https://timdettmers.com/gpu_recommendations/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?fit=845%2C686&amp;ssl=1" data-orig-size="845,686" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="gpu_recommendations" data-image-description="" data-image-caption="&lt;p&gt;GPU recommendation chart for Ada/Hopper GPUs. Follow the answers to the Yes/No questions to find the GPU that is most suitable for you. While this chart works well in about 80% of cases, you might end up with a GPU that is too expensive. Use the cost/performance charts above to make a selection instead.&lt;/p&gt;
" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?fit=300%2C244&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?fit=845%2C686&amp;ssl=1" width="845" height="686" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?resize=845%2C686&#038;ssl=1" alt="" class="wp-image-1173" srcset="https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?w=845&amp;ssl=1 845w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?resize=300%2C244&amp;ssl=1 300w, https://i0.wp.com/timdettmers.com/wp-content/uploads/2023/01/gpu_recommendations.png?resize=768%2C623&amp;ssl=1 768w" sizes="(max-width: 845px) 100vw, 845px" data-recalc-dims="1" /></a><figcaption>GPU recommendation chart for Ada/Hopper GPUs. Follow the answers to the Yes/No questions to find the GPU that is most suitable for you. While this chart works well in about 80% of cases, you might end up with a GPU that is too expensive. Use the cost/performance charts above to make a selection instead. [<a href="https://nanx.me/gpu/">interactive app</a>]</figcaption></figure>



<p class="eplus-CQxjtd"></p>



<h3 class="eplus-pb5gym">Is it better to wait for future GPUs for an upgrade? The future of GPUs.</h3>



<p class="eplus-CD5XWA">To understand if it makes sense to skip this generation and buy the next generation of GPUs, it makes sense to talk a bit about what improvements in the future will look like. </p>



<p class="eplus-CD5XWA">In the past it was possible to shrink the size of transistors to improve speed of a processor. This is coming to an end now. For example, while shrinking SRAM increased its speed (smaller distance, faster memory access), this is no longer the case. Current improvements in SRAM do not improve its performance anymore and might even be negative. While logic such as Tensor Cores get smaller, this does not necessarily make GPU faster since the main problem for matrix multiplication is to get memory to the tensor cores which is dictated by SRAM and GPU RAM speed and size. GPU RAM still increases in speed if we stack memory modules into high-bandwidth modules (HBM3+), but these are too expensive to manufacture for consumer applications. The main way to improve raw speed of GPUs is to use more power and more cooling as we have seen in the RTX 30s and 40s series. But this cannot go on for much longer.</p>



<p>Chiplets, such as those used in AMD CPUs, are another straightforward way forward. AMD beat Intel by developing CPU chiplets. Chiplets are small chips that are fused together with a high-speed on-chip network. You can think about them as two GPUs that are so physically close together that you can almost consider them a single big GPU. They are cheaper to manufacture, but more difficult to combine into one big chip, so you need know-how and fast connectivity between chiplets. AMD has a lot of experience with chiplet design. AMD&#8217;s next generation GPUs are going to be chiplet designs, while NVIDIA currently has no public plans for such designs. This may mean that the next generation of AMD GPUs might be better in terms of cost/performance compared to NVIDIA GPUs.</p>



<p>However, the main performance boost for GPUs currently comes from specialized logic. For example, the asynchronous copy hardware units of the Ampere generation (RTX 30 / A100 / RTX 40), or their extension, the Tensor Memory Accelerator (TMA), reduce the overhead of copying memory from the slow global memory to fast shared memory (caches) through specialized hardware, so each thread can do more computation. The TMA also reduces overhead by automatically computing read/write indices, which is particularly important for 8-bit computation, where one has double the elements for the same amount of memory compared to 16-bit computation. So specialized hardware logic can accelerate matrix multiplication further.<br>Low-bit precision is another straightforward way forward for a couple of years. We will see widespread adoption of 8-bit inference and training in the next months. We will see widespread 4-bit inference in the next year. Currently, the technology for 4-bit training does not exist, but research looks promising, and I expect the first high-performance FP4 Large Language Model (LLM) with competitive predictive performance to be trained in 1-2 years.</p>



<p>Going to 2-bit precision for training currently looks pretty impossible, but it is a much easier problem than shrinking transistors further. So progress in hardware mostly depends on software and algorithms that make it possible to use specialized features offered by the hardware.</p>



<p>We will probably still be able to improve the combination of algorithms + hardware until about 2032, but after that we will hit the end of GPU improvements (similar to smartphones). The wave of performance improvements after 2032 will come from better networking algorithms and mass hardware. It is uncertain if consumer GPUs will be relevant at this point. It might be that you need an RTX 9090 to run Super HyperStableDiffusion Ultra Plus 9000 Extra or OpenChatGPT 5.0, but it might also be that some company will offer a high-quality API that is cheaper than the electricity cost of an RTX 9090, and you will want to use a laptop + API for image generation and other tasks.</p>



<p>Overall, I think investing in an 8-bit capable GPU will be a very solid investment for the next 9 years. Improvements at 4-bit and 2-bit are likely small, and other features like Sort Cores would only become relevant once sparse matrix multiplication can be leveraged well. We will probably see some kind of other advancement in 2-3 years which will make it into the next GPU 4 years from now, but we are running out of steam if we keep relying on matrix multiplication. This makes investments into new GPUs last longer.</p>



<h2 class="eplus-rZBwp1">Question &amp; Answers &amp; Misconceptions</h2>



<h3 class="eplus-BKkk3i">Do I need PCIe 4.0 or PCIe 5.0?</h3>



<p class="eplus-9ByauH">Generally, no. PCIe 5.0 or 4.0 is great if you have a GPU cluster. It is okay if you have an 8x GPU machine, but otherwise, it does not yield many benefits. It allows better parallelization and a bit faster data transfer. Data transfers are not a bottleneck in any application. In computer vision, in the data transfer pipeline, the data storage can be a bottleneck, but not the PCIe transfer from CPU to GPU. So there is no real reason to get a PCIe 5.0 or 4.0 setup for most people. The benefits will be maybe 1-7% better parallelization in a 4 GPU setup.</p>



<h3 class="eplus-iAnpcZ">Do I need 8x/16x PCIe lanes?&nbsp;</h3>



<p class="eplus-4e3Zwd">Same as with PCIe 4.0 — generally, no. <a href="https://timdettmers.com/2018/12/16/deep-learning-hardware-guide/">PCIe lanes</a> are needed for parallelization and fast data transfers, which are seldom a bottleneck. Operating GPUs on 4x lanes is fine, especially if you only have 2 GPUs. For a 4 GPU setup, I would prefer 8x lanes per GPU, but running them at 4x lanes will probably only decrease performance by around 5-10% if you parallelize across all 4 GPUs.</p>



<h3 class="eplus-5dLLUU">How do I fit 4x RTX 4090 or 3090 if they take up 3 PCIe slots each?</h3>



<p class="eplus-ZiOIiH">You need to get one of the two-slot variants, or you can try to spread them out with PCIe extenders. Besides space, you should also immediately think about cooling and a suitable PSU.</p>



<p class="eplus-hKh4h6">PCIe extenders might also solve both space and cooling issues, but you need to make sure that you have enough space in your case to spread out the GPUs. Make sure your PCIe extenders are long enough!</p>



<h3 class="eplus-2ceRV0">How do I cool 4x RTX 3090 or 4x RTX 3080?</h3>



<p class="eplus-Fchmll">See the previous section.</p>



<h3 class="eplus-hlYdEC">Can I use multiple GPUs of different GPU types?</h3>



<p class="eplus-cOwKhd">Yes, you can! But you cannot parallelize efficiently across GPUs of different types since you will often go at the speed of the slowest GPU (data and fully sharded parallelism).  So different GPUs work just fine, but parallelization across those GPUs will be inefficient since the fastest GPU will wait for the slowest GPU to catch up to a synchronization point (usually gradient update).</p>



<h3 class="eplus-YbZb25">What is NVLink, and is it useful?</h3>



<p class="eplus-kZZzbt">Generally, NVLink is not useful. NVLink is a high speed interconnect between GPUs. It is useful if you have a GPU cluster with +128 GPUs. Otherwise, it yields almost no benefits over standard PCIe transfers.</p>



<h3 class="eplus-pqooRC">I do not have enough money, even for the cheapest GPUs you recommend. What can I do?</h3>



<p class="eplus-ClLhtz">Definitely buy used GPUs. You can buy a small cheap GPU for prototyping and testing and then roll out for full experiments to the cloud like vast.ai or Lambda Cloud. This can be cheap if you train/fine-tune/inference on large models only every now and then and spent more time protoyping on smaller models.</p>



<h3 class="eplus-OqapSw">What is the carbon footprint of GPUs? How can I use GPUs without polluting the environment?</h3>



<p class="eplus-zai7Ps">I built a <a href="https://github.com/TimDettmers/carbonneutral">carbon calculator</a> for calculating your carbon footprint for academics (carbon from flights to conferences + GPU time). The calculator can also be used to calculate a pure GPU carbon footprint. You will find that GPUs produce much, much more carbon than international flights. As such, you should make sure you have a green source of energy if you do not want to have an astronomical carbon footprint. If no electricity provider in our area provides green energy, the best way is to buy carbon offsets. Many people are skeptical about carbon offsets. Do they work? Are they scams?</p>



<p class="eplus-EWsGx0">I believe skepticism just hurts in this case, because not doing anything would be more harmful than risking the probability of getting scammed. If you worry about scams, just invest in a portfolio of offsets to minimize risk.</p>



<p class="eplus-69USNG">I worked on a project that produced carbon offsets about ten years ago. The carbon offsets were generated by burning leaking methane from mines in China. UN officials tracked the process, and they required clean digital data and physical inspections of the project site. In that case, the carbon offsets that were produced were highly reliable. I believe many other projects have similar quality standards.</p>



<h3 class="eplus-3vCLFD">What do I need to parallelize across two machines?</h3>



<p class="eplus-fTOuun">If you want to be on the safe side, you should get at least +50Gbits/s network cards to gain speedups if you want to parallelize across machines. I recommend having at least an EDR Infiniband setup, meaning a network card with at least 50 GBit/s bandwidth. Two EDR cards with cable are about $500 on eBay.</p>



<p class="eplus-9X2IIc">In some cases, you might be able to get away with 10 Gbit/s Ethernet, but this is usually only the case for special networks (certain convolutional networks) or if you use certain algorithms (Microsoft DeepSpeed).</p>



<h3 class="eplus-PgjlVt">Is the sparse matrix multiplication features suitable for sparse matrices in general?</h3>



<p class="eplus-R0OltN">It does not seem so. Since the granularity of the sparse matrix needs to have 2 zero-valued elements, every 4 elements, the sparse matrices need to be quite structured. It might be possible to adjust the algorithm slightly, which involves that you pool 4 values into a compressed representation of 2 values, but this also means that precise arbitrary sparse matrix multiplication is not possible with Ampere GPUs.</p>



<h3 class="eplus-MYBgIz">Do I need an Intel CPU to power a multi-GPU setup?</h3>



<p class="eplus-ILUHTD">I do not recommend Intel CPUs unless you heavily use CPUs in Kaggle competitions (heavy linear algebra on the CPU). Even for Kaggle competitions AMD CPUs are still great, though. AMD CPUs are cheaper and better than Intel CPUs in general for deep learning. For a 4x GPU built, my go-to CPU would be a Threadripper. We built dozens of systems at our university with Threadrippers, and they all work great — no complaints yet. For 8x GPU systems, I would usually go with CPUs that your vendor has experience with. CPU and PCIe/system reliability is more important in 8x systems than straight performance or straight cost-effectiveness.</p>



<h3 class="eplus-V6H8gS">Does computer case design matter for cooling?</h3>



<p class="eplus-rSzCnL">No. GPUs are usually perfectly cooled if there is at least a small gap between GPUs. Case design will give you 1-3 C better temperatures, space between GPUs will provide you with 10-30 C improvements. The bottom line, if you have space between GPUs, cooling does not matter. If you have no space between GPUs, you need the right cooler design (blower fan) or another solution (water cooling, PCIe extenders), but in either case, case design and case fans do not matter.</p>



<h3 class="eplus-Umve55">Will AMD GPUs + ROCm ever catch up with NVIDIA GPUs + CUDA?</h3>



<p class="eplus-sOjnl6">Not in the next 1-2 years. It is a three-way problem: Tensor Cores, software, and community.&nbsp;</p>



<p class="eplus-NfDW5K">AMD GPUs are great in terms of pure silicon: Great FP16 performance, great memory bandwidth. However, their lack of Tensor Cores or the equivalent makes their deep learning performance poor compared to NVIDIA GPUs. Packed low-precision math does not cut it. Without this hardware feature, AMD GPUs will never be competitive. Rumors show that <a href="https://wccftech.com/amd-cdna-architecture-radeon-instinct-arcturus-gpu-120-cu-7680-cores/">some data center card</a> with Tensor Core equivalent is planned for 2020, but no new data emerged since then. Just having data center cards with a Tensor Core equivalent would also mean that few would be able to afford such AMD GPUs, which would give NVIDIA a competitive advantage.</p>



<p class="eplus-O3Dvhx">Let’s say AMD introduces a Tensor-Core-like-hardware feature in the future. Then many people would say, “But there is no software that works for AMD GPUs! How am I supposed to use them?” This is mostly a misconception. The AMD software via ROCm has come to a long way, and support via PyTorch is excellent. While I have not seen many experience reports for AMD GPUs + PyTorch, all the software features are integrated. It seems, if you pick any network, you will be just fine running it on AMD GPUs. So here AMD has come a long way, and this issue is more or less solved.</p>



<p class="eplus-OTgr4r">However, if you solve software and the lack of Tensor Cores, AMD still has a problem: the lack of community. If you have a problem with NVIDIA GPUs, you can Google the problem and find a solution. That builds a lot of trust in NVIDIA GPUs. You have the infrastructure that makes using NVIDIA GPUs easy (any deep learning framework works, any scientific problem is well supported). You have the hacks and tricks that make usage of NVIDIA GPUs a breeze (e.g., apex). You can find experts on NVIDIA GPUs and programming around every other corner while I knew much less AMD GPU experts.</p>



<p class="eplus-9AWxYr">In the community aspect, AMD is a bit like Julia vs Python. Julia has a lot of potential, and many would say, and rightly so, that it is the superior programming language for scientific computing. Yet, Julia is barely used compared to Python. This is because the Python community is very strong. Numpy, SciPy, Pandas are powerful software packages that a large number of people congregate around. This is very similar to the NVIDIA vs AMD issue.</p>



<p class="eplus-o6eRO8">Thus, it is likely that AMD will not catch up until Tensor Core equivalent is introduced (1/2 to 1 year?) and a strong community is built around ROCm (2 years?). AMD will always snatch a part of the market share in specific subgroups (e.g., cryptocurrency mining, data centers). Still, in deep learning, NVIDIA will likely keep its monopoly for at least a couple more years.</p>



<h3 class="eplus-OkbBy3">When is it better to use the cloud vs a dedicated GPU desktop/server?</h3>



<p class="eplus-eFDtSu">Rule-of-thumb: If you expect to do deep learning for longer than a year, it is cheaper to get a desktop GPU. Otherwise, cloud instances are preferable unless you have extensive cloud computing skills and want the benefits of scaling the number of GPUs up and down at will.</p>



<p>The numbers in the following paragraphs are going to change, but they serve as a scenario that helps you understand the rough costs. You can use similar math to determine if cloud GPUs are the best solution for you.</p>



<p class="eplus-i5Jvrp">For the exact point in time when a cloud GPU is more expensive than a desktop depends highly on the service that you are using, and it is best to do a little math on this yourself. Below I do an example calculation for an AWS V100 spot instance with 1x V100 and compare it to the price of a desktop with a single RTX 3090 (similar performance). The desktop with RTX 3090 costs $2,200 (<a ref="https://pcpartpicker.com/user/tim_dettmers/saved/#view=mZ2rD3">2-GPU barebone</a> + RTX 3090). Additionally, assuming you are in the US, there is an additional $0.12 per kWh for electricity. This compares to $2.14 per hour for the AWS on-demand instance.</p>



<p class="eplus-sF5f8X">At 15% utilization per year, the desktop uses:&nbsp;</p>



<p class="eplus-jfhFj2">(350 W (GPU) + 100 W (CPU))*0.15 (utilization) * 24 hours * 365 days = 591 kWh per year</p>



<p class="eplus-uZPGyz">So 591 kWh of electricity per year, that is an additional $71.</p>



<p class="eplus-EqZNc1">The break-even point for a desktop vs a cloud instance at 15% utilization (you use the cloud instance 15% of time during the day), would be about 300 days ($2,311 vs $2,270):</p>



<p class="eplus-WO1NL4">$2.14/h * 0.15 (utilization) * 24 hours * 300 days = $2,311</p>



<p class="eplus-3brSoQ">So if you expect to run deep learning models after 300 days, it is better to buy a desktop instead of using AWS on-demand instances.</p>



<p class="eplus-CmY2qr">You can do similar calculations for any cloud service to make the decision if you go for a cloud service or a desktop.</p>



<p class="eplus-6m3gGS">Common utilization rates are the following:</p>



<ul class="eplus-sNwMkX"><li>PhD student personal desktop: &lt; 15%</li><li>PhD student slurm GPU cluster: &gt; 35%</li><li>Company-wide slurm research cluster: &gt; 60%</li></ul>



<p class="eplus-deH52E">In general, utilization rates are lower for professions where thinking about cutting edge ideas is more important than developing practical products. Some areas have low utilization rates (interpretability research), while other areas have much higher rates (machine translation, language modeling). In general, the utilization of personal machines is almost always overestimated. Commonly, most personal systems have a utilization rate between 5-10%. This is why I would highly recommend slurm GPU clusters for research groups and companies instead of individual desktop GPU machines.</p>



<h2 class="eplus-bXPSts">Version History</h2>



<ul class="eplus-xnXOsk"><li>2023-01-30: Improved font and recommendation chart. Added 5 years cost of ownership electricity perf/USD chart. Updated Async copy and TMA functionality. Slight update to FP8 training. General improvements.</li><li>2023-01-16: Added Hopper and Ada GPUs. Added GPU recommendation chart. Added information about the TMA unit and L2 cache.</li><li>2020-09-20: Added discussion of using power limiting to run 4x RTX 3090 systems. Added older GPUs to the performance and cost/performance charts. Added figures for sparse matrix multiplication.</li><li>2020-09-07: Added NVIDIA Ampere series GPUs. Included lots of good-to-know GPU details.</li><li>2019-04-03: Added RTX Titan and GTX 1660 Ti. Updated TPU section. Added startup hardware discussion.</li><li>2018-11-26: Added discussion of overheating issues of RTX cards.</li><li>2018-11-05: Added RTX 2070 and updated recommendations. Updated charts with hard performance data. Updated TPU section.</li><li>2018-08-21: Added RTX 2080 and RTX 2080 Ti; reworked performance analysis</li><li>2017-04-09: Added cost-efficiency analysis; updated recommendation with NVIDIA Titan Xp</li><li>2017-03-19: Cleaned up blog post; added GTX 1080 Ti</li><li>2016-07-23: Added Titan X Pascal and GTX 1060; updated recommendations</li><li>2016-06-25: Reworked multi-GPU section; removed simple neural network memory section as no longer relevant; expanded convolutional memory section; truncated AWS section due to not being efficient anymore; added my opinion about the Xeon Phi; added updates for the GTX 1000 series</li><li>2015-08-20: Added section for AWS GPU instances; added GTX 980 Ti to the comparison relation</li><li>2015-04-22: GTX 580 no longer recommended; added performance relationships between cards</li><li>2015-03-16: Updated GPU recommendations: GTX 970 and GTX 580</li><li>2015-02-23: Updated GPU recommendations and memory calculations</li><li>2014-09-28: Added emphasis for memory requirement of CNNs</li></ul>



<h2 class="eplus-qkgeNA">Acknowledgments</h2>



<p>I thank Suhail for making me aware of outdated prices on H100 GPUs, Gjorgji Kjosev for pointing out font issues, Anonymous for pointing out that the TMA unit does not exist on Ada GPUs, Scott Gray for pointing out that FP8 tensor cores have no transposed matrix multiplication, and reddit and HackerNews users for pointing out many other improvements.</p>



<p class="eplus-kKOsc0">For past updates of this blog post, I want to thank Mat Kelcey for helping me to debug and test custom code for the GTX 970; I want to thank Sander Dieleman for making me aware of the shortcomings of my GPU memory advice for convolutional nets; I want to thank Hannes Bretschneider for pointing out software dependency problems for the GTX 580; and I want to thank Oliver Griesel for pointing out notebook solutions for AWS instances. I want to thank Brad Nemire for providing me with an RTX Titan for benchmarking purposes. I want to thank Agrin Hilmkil, Ari Holtzman, Gabriel Ilharco, Nam Pho for their excellent feedback on the previous version of this blog post.</p>
</div></div>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/">Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/feed/</wfw:commentRss>
			<slash:comments>1665</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6</post-id>	</item>
		<item>
		<title>The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near</title>
		<link>https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/</link>
					<comments>https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/#comments</comments>
		
		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Mon, 27 Jul 2015 10:20:05 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Convolution]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<guid isPermaLink="false">https://timdettmers.wordpress.com/?p=312</guid>

					<description><![CDATA[<p>In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step-by-step along the brain's electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain's overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/">The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In this blog post I will delve into the brain and explain its basic information processing machinery and compare it to deep learning. I do this by moving step-by-step along the brain's electrochemical and biological information processing pipeline and relating it directly to the architecture of convolutional nets. Thereby we will see that a neuron and a convolutional net are very similar information processing machines. While performing this comparison, I will also discuss the computational complexity of these processes and thus derive an estimate for the brain's overall computational power. I will use these estimates, along with knowledge from high performance computing, to show that it is unlikely that there will be a technological singularity in this century.</p>
<p><span id="more-312"></span></p>
<p>This blog post is complex as it arcs over multiple topics in order to unify them into a coherent framework of thought. I have tried to make this article as readable as possible, but I might have not succeeded in all places. Thus, if you find yourself in an unclear passage it might become clearer a few paragraphs down the road where I pick up the thought again and integrate it with another discipline.</p>
<p>First I will give a brief overview of the predictions for a technological singularity and topics aligned with it. Then I will start integrating ideas between the brain and deep learning. I finish by discussing high performance computing and how this all relates to predictions about a technological singularity.</p>
<p>The part which compares the brain's information processing steps to deep learning is self-contained, and readers who are not interested in predictions for a technological singularity may skip to this part.</p>
<h2>Part I: Evaluating current predictions of a technological singularity</h2>
<p>There were a lot of headlines recently about predictions that artificial intelligence will reach super-human intelligence as early as 2030 and that this might herald the beginning of human extinction, or at least dramatically alter everyday life. How was this prediction made?</p>
<h3>Factors which help to predict a singularity</h3>
<p>Ray Kurzweil has made many very accurate <a href="https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#2029">predictions</a>, and his methods to reach these predictions are quite simple for computing devices: Look at the exponential growth of computing power, efficiency, and size, and then extrapolate. This way, you could easily predict the emergence of small computers which fit into your hands, and with a bit of creativity, one could imagine that one day there would be tablets and smartphones. The trends were there; you just needed to imagine what could be done with computers that you can hold in your hand.</p>
<p>Similarly, Ray Kurzweil predicted the emergence of strong AI which is as intelligent or more intelligent than humans. For this prediction he also used data for the exponential growth of computing power and compared this to an estimate for the computational power of the brain.</p>
<p>He also acknowledges that the software will be as important as the hardware, and that the software development of strong AI will take longer because such software can only be developed once fast computer systems are available. This can be felt in the area of deep learning, where solid ideas of the 1990s were unfeasible due to the slow computers. Once graphics processing units (GPUs) were used, these computing limitations were quickly removed and rapid progress could be made.</p>
<p>However, Kurzweil also stresses that once the hardware level is reached, the first “simple” strong AI systems will be developed quickly. He sets the date for brain-like computational power to 2020 and the emergence of strong AI (first human-like intelligence or better) to 2030. Why these numbers? With persisting growth in computing power, in 2019 we will reach the computing power which is equivalent to the human brain — or will we?</p>
<p>This estimate is based on two things: (1) the estimate for the complexity of the brain, and (2) the estimate for the growth in computing power. As we will see, both of these estimates are not up to date with current technology and knowledge about neuroscience and high performance computing.</p>
<p>Our knowledge of neuroscience doubles about every year. Using this doubling period, in 2005 we would only have possessed about 0.098% of the neuroscience knowledge that we have today. This number is a bit off, because the doubling time was about 2 years in 2005 while it is less than a year now, but overall it is way below 1%.</p>
<p>The thing is that Ray Kurzweil based his predictions on the neuroscience of 2005 and never updated them. An estimate for the brain's computational power based on 1% of today's neuroscience knowledge does not seem right. Here is a small list of a few important discoveries made in the last two years which increase the computational power of the brain by many orders of magnitude:</p>
<ul>
<li>It was shown that brain connections, rather than being passive cables, can themselves process information and alter the behavior of neurons in meaningful ways, e.g. brain connections help you to see the objects in everyday life. This fact alone increases brain computational complexity by several orders of magnitude</li>
<li>Neurons which do not fire still learn: There is much more going on than electrical spikes in neurons and brain connections: Proteins, which are the little biological machines that make everything in your body work, combined with local electric potentials, do a lot of information processing on their own — no activation of the neuron required</li>
<li>Neurons change their genome dynamically to produce the right proteins to handle everyday information processing tasks. Brain: “Oh you are reading a blog. Wait a second, I just upregulate this reading-gene to help you understand the content of the blog better.” (This is an exaggeration — but it is not too far off)</li>
</ul>
<p>Before we look at the complexity of the brain, let us first look at brain simulations. Brain simulations are often used to predict human-like intelligence. If we can simulate a human brain, then it will not be long until we are able to develop human-like intelligence, right? So the next paragraph looks at this reasoning. Can brain simulations really provide reliable evidence for predicting the emergence of artificial intelligence?</p>
<h3>The problems with brain simulations</h3>
<p>Brain simulations simulate the electrical signals which are emitted by neurons and the size of the connections between neurons. A brain simulation starts with random signals and the whole system stabilizes according to rules which are thought to govern information processing steps in the brain. After running these rules for some time, stable signals may form which can be compared to the signals of the brain. If the signals of the simulation are similar to recordings of the brain, this increases our confidence that our chosen rules are somewhat similar to the rules that the brain uses. Thus we can validate large-scale information processing rules in the brain. However, the big problem with brain simulations is that this is pretty much all we can do.</p>
<p>We do not gain any understanding of what these signals mean or what function they could possess. We cannot test any meaningful hypotheses with this brain model other than the vague “our rules produce similar activity”. The lack of precise hypotheses which make accurate predictions (“If the activity is like this, then the circuit detected an apple instead of an orange”) is one of the loudest <a href="https://www.nature.com/news/neuroscience-where-is-the-brain-in-the-human-brain-project-1.15803">criticisms of the European brain simulation project</a>. The brain project is regarded as rather useless by many neuroscientists and even dangerous, because it sucks away money from useful neuroscience projects which actually shed light on neural information processing.</p>
<p>Another problem is that these brain simulations rely on models which are outdated, incomplete, and which dismiss many biological components of neural information processing. This is mainly because the electrical information processing in the brain is much better understood. Another, more convenient, reason is that current models are already able to reproduce the needed output patterns (which is the main goal after all), and so there is no need to update these models to be more brain-like.</p>
<p>So to summarize, the problems with brain simulations are:</p>
<ul>
<li>Not possible to test specific scientific hypotheses (compare this to the large hadron collider project with its perfectly defined hypotheses)</li>
<li>Does not simulate real brain processing (no firing connections, no biological interactions)</li>
<li>Does not give any insight into the functionality of brain processing (the meaning of the simulated activity is not assessed)</li>
</ul>
<p>The last point is the most important argument against the usefulness of brain simulations for strong-AI estimation. If we could develop a brain simulation of the visual system which would do well on, say, the MNIST and ImageNet datasets, this would be useful to estimate progress in brain-like AI. But without this, or any similar observable function, brain simulations remain rather useless with respect to AI.</p>
<p>With this said, brain simulations are still valuable to test hypothesized general rules of information processing in the brain (we have nothing better for this), but they are quite useless for making sense of what the information processing in the brain means, and thus they constitute unreliable evidence for predicting progress in AI. Anything that relies on brain simulation as evidence for predictions of future strong AI should be looked at with great skepticism.</p>
<h3>Estimating the brains computational complexity</h3>
<p>As mentioned in the introduction, the estimates of the brain&#8217;s complexity are a decade old and many new discoveries have made this old estimate obsolete. I have never come across an up-to-date estimate, so here I derive my own. While doing this, I will focus mostly on the electrochemical information processing and neglect the biological interactions within the neuron, because they are too complex (and this blog post is already very long). Therefore the estimate derived here can be thought of as a lower bound of complexity; it should always be assumed that the brain is more complex than this.</p>
<p>During the construction of this model of complexity, I will also relate every step in the model to its deep learning equivalent. This will give you a better understanding of how closely deep learning is related to the brain, and of how fast deep learning really is compared to the human brain.</p>
<h3>Defining reference numbers for the model</h3>
<p>We know some facts and estimates which help us to start with our model building:</p>
<ul>
<li>The brain uses learning algorithms which are very different from deep learning, but the architecture of neurons is similar to convolutional nets</li>
<li>The adult brain has 86 billion neurons, about 10 trillion synapses, and about 300 billion dendrites (tree-like structures with synapses on them)</li>
<li>The brain of a child has far more than 100 billion neurons, and has synapses and dendrites in excess of 15 trillion and 150 billion, respectively</li>
<li>The brain of a fetus has more than a trillion neurons; neurons which are misplaced die quickly (this is also the reason why adults have fewer neurons than children)</li>
</ul>
<p><figure id="attachment_313" aria-describedby="caption-attachment-313" style="width: 150px" class="wp-caption alignleft"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebellum_animation_small.gif?ssl=1"><img data-attachment-id="313" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/cerebellum_animation_small/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebellum_animation_small.gif?fit=150%2C150&amp;ssl=1" data-orig-size="150,150" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Cerebellum_animation_small" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebellum_animation_small.gif?fit=150%2C150&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebellum_animation_small.gif?fit=150%2C150&amp;ssl=1" class="wp-image-313 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebellum_animation_small.gif?resize=150%2C150&#038;ssl=1" alt="Cerebellum_animation_small" width="150" height="150" data-recalc-dims="1" /></a><figcaption id="caption-attachment-313" class="wp-caption-text">Location of the cerebellum which contains roughly 3/4 of all neurons and connections. Image source: <a href="https://commons.wikimedia.org/wiki/File:Cerebellum_animation_small.gif">1</a></figcaption></figure></p>
<p><figure id="attachment_314" aria-describedby="caption-attachment-314" style="width: 150px" class="wp-caption alignleft"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebrum_animation_small.gif?ssl=1"><img data-attachment-id="314" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/cerebrum_animation_small/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebrum_animation_small.gif?fit=150%2C150&amp;ssl=1" data-orig-size="150,150" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Cerebrum_animation_small" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebrum_animation_small.gif?fit=150%2C150&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebrum_animation_small.gif?fit=150%2C150&amp;ssl=1" class="wp-image-314 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/cerebrum_animation_small.gif?resize=150%2C150&#038;ssl=1" alt="Cerebrum_animation_small" width="150" height="150" data-recalc-dims="1" /></a><figcaption id="caption-attachment-314" class="wp-caption-text">Location of the cerebrum; also referred to as &#8220;the cortex&#8221;. More precisely, the cortex is the outer layer of the brain, which contains most neurons of the cerebrum. Image source: <a href="https://commons.wikimedia.org/wiki/File:Cerebrum_animation_small.gif">1</a></figcaption></figure></p>
<ul>
<li>The cerebellum, the super computer of the brain, contains roughly ¾ of all neurons (this ratio is consistent in most mammal species)</li>
<li>The cerebrum, the main driver of “intelligence”, contains roughly ¼ of all neurons</li>
<li>An average neuron in the cerebellum has about 25000 synapses</li>
<li>An average neuron in the cerebrum has about 5000-15000 synapses</li>
</ul>
<p>The number of neurons is well known; the number of synapses and dendrites is only known within a certain boundary and I chose conservative estimates here.</p>
<p>The average number of synapses per neuron differs wildly between neurons, and the number here is a rough average. It is known that most synapses in the cerebellum are made between the dendrites of Purkinje neurons and two different types of neurons whose connections either “climb up” the Purkinje cell&#8217;s dendritic tree or run across it as parallel fibers. It is known that Purkinje cells have about 100,000 synapses each. Because these cells account for by far the largest share of synapses in the cerebellum, one can best estimate the complexity of the brain by looking at these neurons and at the interactions that they make.</p>
<p><figure id="attachment_316" aria-describedby="caption-attachment-316" style="width: 500px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_types.gif?ssl=1"><img data-attachment-id="316" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/neuron_types/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_types.gif?fit=500%2C302&amp;ssl=1" data-orig-size="500,302" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="neuron_types" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_types.gif?fit=300%2C181&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_types.gif?fit=500%2C302&amp;ssl=1" class="wp-image-316 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_types.gif?resize=500%2C302&#038;ssl=1" alt="neuron_types" width="500" height="302" data-recalc-dims="1" /></a><figcaption id="caption-attachment-316" class="wp-caption-text">There are many hundreds of different types of neurons; here some of the more common neurons. Thanks to Robert Stufflebeam for this image (<a href="http://www.mind.ilstu.edu/curriculum/neurons_intro/neurons_intro.php">source</a>).</figcaption></figure></p>
<p>It is important to differentiate between the complexity of a brain region and its functional importance. While almost all computation is carried out by the cerebellum, almost all important functions are carried out by the cerebrum (or cortex). The cortex uses the cerebellum to generate predictions, corrections and conclusions, but the cortex accumulates these insights and acts upon them.</p>
<p>For the cerebrum it is known that neurons almost never have more than 50000 synapses, and unlike the cerebellum, most neurons have a number of synapses within the range of 5000-15000.</p>
<h3>How do we use these numbers?</h3>
<p>A common approach for estimating the computational complexity of the brain is to assume that all information processing in the brain can be represented by the combination of impulses when a neuron fires (action potentials) and the size (mostly number of receptors) of the synapses that each neuron has. Thus one can multiply the estimates for the number of neurons and their synapses and add everything together. Then one multiplies this by the firing rate of the average neuron, which is about 200 action potentials per second. This model is what Ray Kurzweil uses to create his estimate. While this model was okay a few decades ago, it is not suitable to model the brain from a modern viewpoint, as it leaves out much of the important neurological information processing, which is so much more than mere firing neurons.</p>
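<p>As a rough sketch of what such an estimate looks like in code (the exact figures Kurzweil used differ; the numbers below are illustrative assumptions taken from the reference numbers above):</p>
<pre><code># Naive "neurons x synapses x firing rate" estimate of brain throughput.
# All numbers are illustrative assumptions, not Kurzweil's exact figures.
neurons = 86e9                          # adult brain
total_synapses = 10e12                  # ~10 trillion synapses in total
synapses_per_neuron = total_synapses / neurons
firing_rate = 200                       # action potentials per second (average, as above)

operations_per_second = neurons * synapses_per_neuron * firing_rate
print(f"{operations_per_second:.1e} synaptic events per second")  # 2.0e+15
</code></pre>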
<p>A model which approximates the behavior of neurons more accurately is the extended linear-nonlinear-Poisson cascade model (LNP). The extended LNP model is <a href="https://www.sciencedirect.com/science/article/pii/S0959438814000130">currently viewed as an accurate model of how neurons process information</a>. However, the extended LNP model still leaves out some fine details, which are deemed unimportant to model large scale brain function. Indeed adding these fine details to the model will add almost no additional computational complexity, but makes the model more complex to understand — thus including these details in simulations would violate the scientific method which seeks to find the simplest models for a given theory. However, this extended model is actually very similar to deep learning and thus I will include these details here.</p>
<p>There are other good models that are also suitable for this. The primary reason why I chose the LNP model is that it is very close to deep learning. This makes this model perfect to compare the architecture of a neuron to the architecture of a convolutional net. I will do this in the next section and at the same time I will derive an estimate for the complexity of the brain.</p>
<h2>Part II: The brain vs. deep learning — a comparative analysis</h2>
<p>Now I will explain step by step how the brain processes information. I will mention the steps of information processing which are well understood and which are supported by reliable evidence. On top of these steps, there are many intermediary steps at the biological level (proteins and genes) which are still poorly understood but known to be very important for information processing. I will not go into depth on these biological processes but will provide a short outline, which might help knowledge-hungry readers to delve into these depths themselves. We now begin this journey from the neurotransmitters released by a firing neuron and walk along all its processes until we reach the point where the next neuron releases its neurotransmitters, so that we return to where we started.</p>
<p>The next section introduces a couple of new terms which are necessary to follow the rest of the blog post, so read it carefully if you are not familiar with basic neurobiology.</p>
<p><figure id="attachment_343" aria-describedby="caption-attachment-343" style="width: 1193px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_anatomy1.jpg?ssl=1"><img data-attachment-id="343" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/neuron_anatomy-2/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_anatomy1.jpg?fit=1193%2C685&amp;ssl=1" data-orig-size="1193,685" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="neuron_anatomy" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_anatomy1.jpg?fit=300%2C172&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_anatomy1.jpg?fit=1024%2C588&amp;ssl=1" class="wp-image-343 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/neuron_anatomy1.jpg?resize=1193%2C685&#038;ssl=1" alt="neuron_anatomy" width="1193" height="685" data-recalc-dims="1" /></a><figcaption id="caption-attachment-343" class="wp-caption-text">Image sources: <a href="https://commons.wikimedia.org/wiki/File:Neuron_Hand-tuned.svg">1</a>,<a href="https://commons.wikimedia.org/wiki/File:SynapseSchematic_lines.svg">2</a>,3,4</figcaption></figure></p>
<p>Neurons use the axon, a tube-like structure, to transmit their electric signals over long stretches in the brain. When a neuron fires, it sends an action potential, an electrical signal, down its axon, which branches into a tree of small endings called axon terminals. On the end of each of these axon terminals sit proteins which convert this electrical message back into a chemical one: small balls called synaptic vesicles, each filled with neurotransmitters, are released into an area outside of the neuron called the synaptic cleft. This area separates the axon terminal from the beginning of the next neuron (a synapse) and allows the neurotransmitters to move freely to pursue different tasks.</p>
<p>The synapses are most commonly located at a structure which looks very much like the roots of a tree or plant; this is the dendritic tree composed of dendrites which branch into larger arms (this represents the connections between neurons in a neural network), which finally reach the core of the cell, called the soma. These dendrites hold almost all the synapses which connect one neuron to the next and thus form the principal connections. A synapse may hold hundreds of receptors to which neurotransmitters can bind.</p>
<p>You can imagine this compound of axon terminal and synapses at a dendrite as the (dense) input layer (of an image if you will) into a convolutional net. Each neuron may have fewer than 5 dendrites or as many as a few hundred thousand. Later we will see that the function of the dendritic tree is similar to the combination of a convolutional layer followed by max-pooling in a convolutional network.</p>
<p>Going back to the biological process, the synaptic vesicles merge with the surface of the axon terminal and turn themselves inside-out, spilling their neurotransmitters into the synaptic cleft. There the neurotransmitters drift in a vibrating motion due to the temperature in the environment, until they (1) find a fitting lock (receptor protein) which fits their key (the neurotransmitter), (2) encounter a protein which disintegrates them, or (3) encounter a protein which pulls them back into the axon (reuptake) where they are reused. Antidepressants mostly work by either preventing or enhancing the reuptake of the neurotransmitter serotonin; preventing reuptake yields changes in information processing after some days or weeks, while enhancing reuptake leads to changes within seconds or minutes. So neurotransmitter reuptake mechanisms are integral to minute-to-minute information processing. Reuptake is ignored in the LNP model.</p>
<p>However, the combination of the amount of neurotransmitters released, the number of synapses for a given neurotransmitter, and how many neurotransmitters actually make it into a fitting protein on the synapse can be thought of as the weight parameter in a densely (fully) connected layer of a neural network. In other words, the total input to a neuron is the sum of all axon-terminal-neurotransmitter-synapse interactions. Mathematically, we can model this as a matrix product (A dot B; [amount of neurotransmitters of all inputs] dot [amount of fitting proteins on all synapses]).</p>
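<p>Here is a minimal sketch of this weighting with made-up numbers: each incoming axon terminal contributes an amount of released neurotransmitter, each synapse contributes a count of fitting receptor proteins, and the total drive onto the neuron is their dot product, just as in a dense layer:</p>
<pre><code>import numpy as np

# Hypothetical toy numbers: one neuron receiving input from 4 axon terminals.
neurotransmitter_released = np.array([[0.8, 0.1, 0.5, 0.9]])            # per terminal, arbitrary units
receptors_per_synapse = np.array([[120.0], [300.0], [80.0], [200.0]])   # fitting proteins per synapse

# Total input to the neuron = sum over all terminal-synapse interactions,
# i.e. a matrix product, just like the weights of a dense layer.
total_input = neurotransmitter_released @ receptors_per_synapse
print(total_input)  # [[346.]]
</code></pre>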
<p>After a neurotransmitter has locked onto a fitting protein on a synapse, it can do a lot of different things: most commonly, the neurotransmitter will just (1) open up channels to let charged particles flow (through diffusion) into the dendrites, but it can also cause a rarer effect with huge consequences: the neurotransmitter (2) binds to a G-protein which then produces a protein signaling cascade which (2a) activates (upregulates) a gene which is then used to produce a new protein which is integrated into the surface of the neuron, its dendrites, and/or its synapses; or which (2b) alerts existing proteins to perform a certain function at a specific site (create or remove synapses, unblock some entrances, attach new proteins to the surface of the synapse). This is ignored in the LNP model.</p>
<p>Once the channels are open, negatively or positively charged particles enter the dendritic spine. A dendritic spine is a small mushroom-like structure onto which the synapse is attached. These dendritic spines can store electric potential and have their own dynamics of information processing. This is ignored in the LNP model.</p>
<p><figure id="attachment_337" aria-describedby="caption-attachment-337" style="width: 471px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spine.jpg?ssl=1"><img data-attachment-id="337" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/dendritic_spine/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spine.jpg?fit=471%2C335&amp;ssl=1" data-orig-size="471,335" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="dendritic_spine" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spine.jpg?fit=300%2C213&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spine.jpg?fit=471%2C335&amp;ssl=1" class="wp-image-337 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spine.jpg?resize=471%2C335&#038;ssl=1" alt="dendritic_spine" width="471" height="335" data-recalc-dims="1" /></a><figcaption id="caption-attachment-337" class="wp-caption-text">Dendritic spines have their own internals information processing dynamics which is largely determined by its shape and size. Image source: <a href="https://en.wikipedia.org/wiki/File:Spline_types_3D.png">1</a>,<a href="https://en.wikipedia.org/wiki/File:Dendritic_spines.jpg">2</a></figcaption></figure></p>
<p>The particles that may enter the dendritic spine are either negatively or positively charged: some neurotransmitters only open channels for negative particles, others only for positive ones. There are also channels which let positively charged particles leave the neuron, thus making the electric potential more negative (a neuron “fires” if it becomes too positive). The size and shape of the mushroom-like dendritic spine correspond to its behavior. This is ignored in the LNP model.</p>
<p>Once particles have entered the spine, there are many things they can affect. Most commonly, they will (1) just travel along the dendrites to the cell body of the neuron and then, if the cell gets too positively charged (depolarization), they induce an action potential (the neuron “fires”). But other actions are also common: the charged particles accumulate in the dendritic spine directly and (2) open up voltage-gated channels which may polarize the cell further (this is an example of the dendritic spine information processing mentioned above). Another very important process is (3) the dendritic spike.</p>
<h3>Dendritic spikes</h3>
<p>Dendritic spikes are a phenomenon which has been known to exist for some years, but only in 2013 did the techniques become advanced enough to collect the data to show that these spikes were important for information processing. To measure dendritic spikes, you have to attach some very tiny clamps onto dendrites with the help of a computer which moves the clamp with great precision. To have some sort of idea where your clamp is, you need a special microscope to observe the clamp as you progress onto a dendrite. Even then you mostly attach the clamp in a rather blind manner, because at such a tiny scale every movement made is a rather giant leap. Only a few teams in the world have the equipment and skill to attach such clamps onto dendrites.</p>
<p>However, the direct data gathered by those few teams was enough to establish dendritic spikes as important information processing events. Due to the introduction of dendritic spikes into computational models of neurons, the complexity of a single neuron has become very similar to a convolutional net with two convolutional layers. As we will see later, the LNP model also uses non-linearities very similar to a rectified linear function, and also makes use of a spike generator which is very similar to dropout, so a neuron is very much like an entire convolutional net. But more about that later; back to dendritic spikes and what exactly they are.</p>
<p>Dendritic spikes occur when a critical level of depolarization is reached in a dendrite. The depolarization discharges as an electric potential along the walls of the dendrite and may trigger voltage-gated channels along its way through the dendritic tree and eventually, if strong enough, the electric potential reaches the core of the neuron where it may trigger a true action potential. If the dendritic spike fails to trigger an action potential, the opened voltage-gated channels in neighboring dendrites may do exactly that a split second later. Due to the channels opened by the dendritic spike, more charged particles enter the neuron, which may then either trigger (common) or stifle (rare) a full action potential at the neuron&#8217;s cell body (soma).</p>
<p><figure id="attachment_339" aria-describedby="caption-attachment-339" style="width: 677px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spikes.png?ssl=1"><img data-attachment-id="339" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/dendritic_spikes/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spikes.png?fit=677%2C263&amp;ssl=1" data-orig-size="677,263" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="dendritic_spikes" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spikes.png?fit=300%2C117&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spikes.png?fit=677%2C263&amp;ssl=1" class="wp-image-339 size-full" title="background-color: #fff" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/dendritic_spikes.png?resize=677%2C263&#038;ssl=1" alt="dendritic_spikes" width="677" height="263" data-recalc-dims="1" /></a><figcaption id="caption-attachment-339" class="wp-caption-text">A shows a computer model of a neuron that does not model dendritic spikes; B models simple dynamics of dendritic spikes; C models more complex dynamics of dendritic spikes which takes into account the one dimensional diffusion of particles (which is similar to a convolution operation). Take note that these images are only snapshots in a particular moment of time. A big thanks to <a href="https://groups.oist.jp/onu">Berd Kuhn</a>. Image copyright © 2014 Anwar, Roome, Nedelescu, Chen, Kuhn and De Schutter as published in <em><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4107854/">Frontiers in Cellular Neuroscience</a> <a href="#anwar14">(Anwar et al. 2014)</a></em></figcaption></figure></p>
<p>This process is very similar to max-pooling, where a single large activation “overwrites” other neighboring values. However, after a dendritic spike, neighboring values are not overwritten like during max-pooling used in deep learning, but the opening of voltage-gated channels greatly amplifies the signals in all neighboring branches within the dendritic tree. Thus a dendritic spike may heighten the electrochemical levels in neighboring dendrites to a level which is more similar to the maximum input — this effect is close to max-pooling.</p>
<p>Indeed, it was shown that dendritic spikes in the visual system serve the same purpose as max-pooling in convolutional nets for object recognition: In deep learning, max-pooling is used to achieve (limited) rotation, translation, and scale invariance (meaning that our algorithm can detect an object in an image where the object is rotated, moved, or shrunk/enlarged by a few pixels). One can think of this process as setting all surrounding pixels to the same large activation and making each activation share the weight to the next layer (in software the surrounding values are discarded for computational efficiency; this is mathematically equivalent). Similarly, it was shown that dendritic spikes in the visual system are sensitive to the orientation of an object. So dendritic spikes not only have computational similarity, but also similarity in function.</p>
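<p>For readers who want to see the deep learning side of the analogy, here is a minimal 1D max-pooling sketch in NumPy; it is only meant to illustrate the operation itself, not how dendrites implement it:</p>
<pre><code>import numpy as np

def max_pool_1d(activations, pool_size=3):
    """Non-overlapping 1D max-pooling: each window is reduced to its largest activation."""
    n = len(activations) // pool_size * pool_size
    return activations[:n].reshape(-1, pool_size).max(axis=1)

branch_inputs = np.array([0.2, 1.7, 0.4, 0.1, 0.3, 0.9])  # toy activations along "branches"
print(max_pool_1d(branch_inputs))  # [1.7 0.9] -- the strongest signal in each group dominates
</code></pre>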
<p>The analogy does not end here. During neural back-propagation, that is, when the action potential travels from the cell body back into the dendritic tree, the signal cannot backpropagate into the dendritic branch where the dendritic spike originated, because that branch is “deactivated” due to the recent electrical activity. Thus a clear learning signal is sent to the inactivated branches. At first this may seem like the exact opposite of the backpropagation used for max-pooling, where only the max-pooled activation receives a gradient and all other units receive none. However, the absence of a backpropagation signal in a dendrite is a rare event and represents a learning signal on its own. Thus, dendrites which produce dendritic spikes receive special learning signals, just like the activated units in max-pooling.</p>
<p>To better understand what dendritic spikes are and what they look like, I very much want to encourage you to watch <a href="https://www.hhmi.org/research/how-do-neurons-compute-output-their-inputs">this video</a> (for which I do not have the copyright). The video shows how two dendritic spikes lead to an action potential.</p>
<p>This combination of dendritic spikes and action potentials and the structure of the dendritic tree has been found to be critical for learning and memory in the hippocampus, the main brain region responsible for forming new memories and writing them to our “hard drive” at night.</p>
<p>Dendritic spikes are one of the main drivers of computational complexity which have been left out from past models of the complexity of the brain. Also, these new findings show that neural back-propagation does not have to be neuron-to-neuron in order to learn complex functions; a single neuron already implements a convolutional net and thus has enough computational complexity to model complex phenomena. As such, there is little need for learning rules that span multiple neurons — a single neuron can produce the same outputs we create with our convolutional nets today.</p>
<p>But these findings about dendritic spikes are not the only advance made in our understanding of the information processing steps during this stage of the neural information processing pathway. Genetic manipulation and targeted protein synthesis are sources that increase computational complexity by orders of magnitude, and only recently have we made advances which reveal the true extent of biological information processing.</p>
<h3>Protein signaling cascades</h3>
<p>As I said in the introduction of this part, I will not cover the parts of biological information processing extensively, but I want to give you enough information so that you can start learning more from here.</p>
<p>One thing to understand is that a cell looks much different from how it is displayed in textbooks. Cells are crawling with proteins: There are about 10 billion proteins in any given human cell and these proteins are not idle: They combine with other proteins, work on a task, or jitter around to find new tasks to work on.</p>
<p>All the functions described above are the work of proteins. For example the key-and-lock mechanism and the channels that play the gatekeeper for the charged particles that leave and enter the neuron are all proteins. The proteins I mean in this paragraph are not these common proteins, but proteins with special biological functions.</p>
<p>As an example, the abundant neurotransmitter glutamate may bind to an NMDA receptor which then opens up its channels for many different kinds of charged particles, and after being opened, the channel only closes when the neuron fires. The strength of synapses is highly dependent on this process, where the synapse is adjusted according to the location of the NMDA receptor and the timing of signals which are backpropagated to the synapses. We know this process is critical to learning in the brain, but it is only a small piece in a large puzzle.</p>
<p>The charged particles which enter the neuron may additionally induce protein signaling cascades on their own. For example, the cascade below shows how an activated NMDA receptor (green) lets charged calcium ions (Ca2+) inside, which triggers a cascade that eventually leads to AMPA receptors (violet) being trafficked and installed on the synapse.</p>
<p><figure id="attachment_321" aria-describedby="caption-attachment-321" style="width: 767px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/regulationofampartrafficking.jpg?ssl=1"><img data-attachment-id="321" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/regulationofampartrafficking/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/regulationofampartrafficking.jpg?fit=767%2C599&amp;ssl=1" data-orig-size="767,599" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="RegulationOfAMPARTrafficking" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/regulationofampartrafficking.jpg?fit=300%2C234&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/regulationofampartrafficking.jpg?fit=767%2C599&amp;ssl=1" class="wp-image-321 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/regulationofampartrafficking.jpg?resize=767%2C599&#038;ssl=1" alt="RegulationOfAMPARTrafficking" width="767" height="599" data-recalc-dims="1" /></a><figcaption id="caption-attachment-321" class="wp-caption-text">Image source: <a href="https://commons.wikimedia.org/wiki/File:RegulationOfAMPARTrafficking.jpg">1</a></figcaption></figure></p>
<p>It was shown again and again that these special proteins have a great influence on the information processing in neurons, but it is difficult to pick out a specific type of protein from this seemingly chaotic soup of 10 billion proteins and study its precise function. Findings are often complex with a chain of reactions involving many different proteins until a desired end-product or end-function is reached. Often the start and end functions are known but not the exact path which led from one to the other. Sophisticated technology helped greatly to study proteins in detail, and as technology gets better and better we will further our understanding of biological information processing in neurons.</p>
<h3>Genetic manipulation</h3>
<p>The complexity of biological information processing does not end with protein signaling cascades. The 10 billion proteins are not a random soup of workers that do their tasks; these workers are produced in specific quantities to serve specific functions that are relevant at the moment. All this is controlled by a tight feedback loop involving helper proteins, DNA, and messenger RNA (mRNA).</p>
<p>If we use programming metaphors to describe this whole process, then the DNA represents the whole GitHub website with all its public repositories, and messenger RNA is a big library which features many other smaller libraries with different functions (something like the C++ Boost library).</p>
<p>It all begins with a programming problem you want to solve (a biological problem is detected). You use Google and Stack Overflow to find recommendations for libraries which you can use to solve the problem, and soon you find a post that suggests that you use library X to solve problem Y (problem Y is detected on a local level in a cell with known solution of protein X; the protein that detected this defect then cascades into a chain of protein signals which leads to the upregulation of the gene G which can produce protein X; here upregulation is a &#8220;Hey! Produce more of this, please!&#8221; signal to the nucleus of the cell where the DNA lies). You download the library and compile it (the gene G is copied (transcribed) as a short string of mRNA from the very long string of DNA). You then configure the installation (the mRNA leaves the nucleus) with the respective configuration (the mRNA is translated into a protein; the protein may be adjusted by other proteins after this), and install the library in a global “/lib” directory (the protein folds itself into its correct form after which it is fully functional). After you have installed the library, you import the needed part of the library into your program (the folded protein travels (randomly) to the site where it is needed) and you use certain functions of this library to solve your problem (the protein does some kind of work to solve the problem).</p>
<p>In addition to this, neurons may also dynamically alter their genome; that is, they can dynamically change their GitHub repository to add or remove libraries.</p>
<p>To understand this process further, you may want to watch the following video, which shows how HIV produces its proteins and how the virus can change the host DNA to suit its needs. The process described in this video animation is very similar to what is going on in neurons. To make it more similar to the process in neurons, imagine that HIV is a neurotransmitter and that everything contained in the HIV particle is in the neuron in the first place. What you have then is an accurate representation of how neurons make use of their genes and proteins:</p>
<p><iframe class="youtube-player" width="640" height="360" src="https://www.youtube.com/embed/RO8MP3wMvqg?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;start=59&#038;wmode=transparent" allowfullscreen="true" style="border:0;" sandbox="allow-scripts allow-same-origin allow-popups allow-presentation"></iframe></p>
<p>You may ask, isn&#8217;t it so that every cell in your body has (almost) the same DNA in order to be able to replicate itself? Generally, this is true for most cells, but not true for most neurons. Neurons will typically have a genome that is different from the original genome that you were assigned at birth. Neurons may have additional or fewer chromosomes and have sequences of information removed from or added to certain chromosomes.</p>
<p>It was shown that this behavior is important for information processing and that, if it goes awry, it may contribute to brain disorders like depression or Alzheimer&#8217;s disease. Recently it was also shown that neurons change their genome on a daily basis to meet everyday information processing demands.</p>
<p>So when you sit at your desk for five days, and then on the weekend decide to go on a hike, it makes good sense that the brain adapts its neurons for this new task, because entirely different information processing is needed after this change of environment.</p>
<p>Equally, in an evolutionary sense, it would be beneficial to have different “modes” for hunting/gathering and for social activity within the village, and it seems that this mechanism might serve something like this purpose. In general, the biological information processing apparatus is extremely efficient at responding to slower information processing demands that range from minutes to hours.</p>
<p>With respect to deep learning, an equivalent function would be to alter the function of a trained convolutional net in significant but rule-based ways; for example, to apply a transformation to all parameters when changing from one task to another (recognition of street numbers -&gt; transform parameters -&gt; recognition of pedestrians).</p>
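<p>To make the analogy concrete, here is a purely hypothetical sketch of such a rule-based parameter transformation; nothing like this is standard practice in deep learning, and the transform itself (a simple scale and shift of every weight tensor) is an invented stand-in:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained network": a dict of weight tensors from a street-number task.
street_number_weights = {f"layer_{i}": rng.standard_normal((4, 4)) for i in range(3)}

def transform_for_new_task(weights, scale=0.9, shift=0.01):
    """Invented rule-based transform applied to every parameter tensor
    when switching tasks (street numbers to pedestrians)."""
    return {name: scale * w + shift for name, w in weights.items()}

pedestrian_weights = transform_for_new_task(street_number_weights)
print({name: w.shape for name, w in pedestrian_weights.items()})
</code></pre>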
<p>Nothing of this biological information processing is modeled by the LNP model.</p>
<p>Looking back at all this, it seems rather strange that so many researchers think that they can replicate the brain&#8217;s behavior by concentrating on the electrochemical properties and inter-neuron interactions alone. Imagine that every unit in a convolutional network has its own GitHub, from which it <em>learns</em> to dynamically download, compile and use the best libraries to solve a certain task. From all this you can see that a single neuron is probably more complex than an entire convolutional net, but we continue from here with our focus on electrochemical processes and see where it leads us.</p>
<h3>Back to the LNP model</h3>
<p>After all of the above, there is only one more relevant step in information processing for our model. Once a critical level of depolarization is reached, a neuron will most often fire, but not always. There are mechanisms that prevent a neuron from firing. For example, shortly after a neuron has fired, its electric potential is too positive to produce a fully-fledged action potential, and thus it cannot fire again. This blockage may be present even when a sufficient electric potential is reached, because the blockade is a biological mechanism and not a physical switch.</p>
<p>In the LNP model, this blockage of an action potential is modeled as an inhomogeneous Poisson process, with a Poisson-distributed waiting time (in cycles) until the neuron fires. Under this model, the neuron has a very high probability of firing the first or second time it reaches its threshold potential, but it may also happen (with an exponentially decreasing probability) that the neuron does not fire for many more cycles.</p>
<p><figure id="attachment_325" aria-describedby="caption-attachment-325" style="width: 652px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/poisson.png?ssl=1"><img data-attachment-id="325" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/poisson/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/poisson.png?fit=652%2C347&amp;ssl=1" data-orig-size="652,347" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Poisson" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/poisson.png?fit=300%2C160&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/poisson.png?fit=652%2C347&amp;ssl=1" class="wp-image-325 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/poisson.png?resize=652%2C347&#038;ssl=1" alt="Poisson" width="652" height="347" data-recalc-dims="1" /></a><figcaption id="caption-attachment-325" class="wp-caption-text">A Poisson(0.5) distribution with a randomly drawn sample. Here 0,1,2,3 represents the waiting time until the neuron fires, thus 0 means it fires without delay, while 2 means it will not fire for two cycles even if it could fire physically.</figcaption></figure></p>
<p>There are exceptions to this rule, where neurons disable this mechanism and fire continuously at rates which are governed by the physics alone, but these are special events which I will ignore at this point. Generally, this whole process is very similar to dropout used in deep learning, which uses a Bernoulli distribution (each unit is dropped with a fixed probability) instead of a Poisson distribution; thus this process can be viewed as some kind of regularization method that the brain uses instead of dropout.</p>
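<p>Here is a minimal sketch of that contrast in NumPy, under toy assumptions: dropout keeps each unit with a fixed probability, whereas the LNP-style spike generator draws a Poisson-distributed waiting time and only units whose waiting time is zero fire in the current cycle:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
activations = np.array([1.2, 0.7, 2.1, 0.3, 1.5])

# Dropout: each unit is kept independently with probability 1 - p (a Bernoulli mask).
p_drop = 0.5
dropout_mask = rng.random(activations.shape) > p_drop
print(activations * dropout_mask)

# LNP-style spike generator: each unit draws a Poisson(0.5) waiting time;
# only units whose waiting time is 0 are allowed to fire in this cycle.
waiting_times = rng.poisson(lam=0.5, size=activations.shape)
print(activations * (waiting_times == 0))
</code></pre>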
<p>In the next step, if the neuron fires, it releases an action potential. The action potential has very little difference in its amplitude, meaning the electric potential generated by the neuron almost always has the same magnitude, and thus is a reliable signal. As this signal travels down the axon it gets weaker and weaker. When it flows into the branches of the axon terminal, its final strength will be dependent on the shape and length of these branches; so each axon terminal will receive a different amount of electrical potential. This spatial information, together with the temporal information due to the spiking pattern of action potentials, is then translated into electrochemical information (it was shown that they are translated into spikes of neurotransmitters themselves that last about 2ms). To adjust the output signal, the axon terminal can move, grow or shrink (spatial), or it may alter its protein makeup which is responsible for releasing the synaptic vesicles (temporal).</p>
<p>Now we are back at the beginning: Neurotransmitters are released from the axon terminal (which can be modeled as a dense matrix multiplication) and the steps repeat themselves.</p>
<h3>Learning and memory in the brain</h3>
<p>Now that we went through the whole process back to back, let us put all this into context to see how the brain uses all this in concert.</p>
<p>Most neurons repeat the process of receive-inputs-and-fire about 50 to 1000 times per second; the firing frequency is highly dependent on the type of neuron and on whether the neuron is actively processing tasks. Even if a neuron does not process a task it will fire continuously in a random fashion. Once some meaningful information is processed, this random firing activity makes way for a highly synchronized activity between neighboring neurons in a brain region. This synchronized activity is poorly understood, but is thought to be integral to understanding information processing in the brain and how it learns.</p>
<p>Currently, it is not precisely known how the brain learns. We do know that it adjusts synapses with some sort of reinforcement learning algorithm in order to learn new memories, but the precise details are unclear and the weak and contradicting evidence indicates that we are missing some important pieces of the puzzle. We got the big picture right, but we cannot figure out the brain&#8217;s learning algorithm without the fine detail which we are still lacking.</p>
<p>Concerning memories, we know that some memories are directly stored in the hippocampus, the main learning region of the brain (if you lose your hippocampus in each brain hemisphere, you cannot form new memories). However, most long-term memories are created and integrated with other memories during your REM sleep phase, when so called sleep spindles unwind the information of your hippocampus to all other brain areas. Long-term memories are generally all local: Your visual memories are stored in the visual system; your memories for your tongue (taste, texture) are stored in the brain region responsible for your tongue, etcetera.</p>
<p>It is also known that the hippocampus acts as a memory buffer. Once it is full, you need to sleep to empty its contents to the rest of your brain (through sleep spindles during REM sleep); this might be why babies sleep so much and so irregularly: once their learning buffer is full, they sleep to quickly clear their buffer in order to learn more after they wake. You can still learn when this memory buffer is full, but retention is much worse and new memories might wrangle with other memories in the buffer for space and displace them, so really get your needed amount of sleep. Sleeping less and irregularly is unproductive, especially for students who need to learn.</p>
<p><figure id="attachment_341" aria-describedby="caption-attachment-341" style="width: 200px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hippocampus_small.gif?ssl=1"><img data-attachment-id="341" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/hippocampus_small/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hippocampus_small.gif?fit=200%2C200&amp;ssl=1" data-orig-size="200,200" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Hippocampus_small" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hippocampus_small.gif?fit=200%2C200&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hippocampus_small.gif?fit=200%2C200&amp;ssl=1" class="wp-image-341 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hippocampus_small.gif?resize=200%2C200&#038;ssl=1" alt="Hippocampus_small" width="200" height="200" data-recalc-dims="1" /></a><figcaption id="caption-attachment-341" class="wp-caption-text">The hippocampus in each hemisphere is shown in red. Image source: <a href="https://commons.wikimedia.org/wiki/File:Hippocampus_small.gif">1</a></figcaption></figure></p>
<p>Because memories are integrated with other memories during your “write buffer to hard-drive” stage, sleep is also very important for creativity. The next time you recall a certain memory after you slept, it might be altered with some new information that your brain thought to be fitting to attach to that memory.</p>
<p>I think we all had this: We wake up with some crazy new idea, only to see that it was quite nonsensical in the first place; so our brain is not perfect either and makes mistakes. But other times it just works: One time I tortured myself with a math problem for 7 hours non-stop, only to go to bed disappointed with only about a quarter of the whole problem solved. After I woke, I immediately had two new ideas for how to solve the problem: The first did not work, but the second made things very easy and I could sketch a solution to the math problem within 15 minutes. An ode to sleep!</p>
<p>Now why do I talk about memories when this blog post is about computation? The thing is that memory creation, or in other words a method to store computed results for a long time, is critical for any intelligence. In brain simulations, one is satisfied if the synapses and activations occur in the same distribution as they do in the real brain, but one does not care if these synapses or activations correspond to anything meaningful, like memories or “distributed representations” needed for functions such as object recognition. This is a great flaw. Brain simulations have no memories.</p>
<p>In brain simulations, the diffusion of electrochemical particles is modeled by differential equations. These differential equations are complex, but they can be approximated with simple techniques like Euler&#8217;s method. The result has poor accuracy (meaning high error) but the algorithm is very computationally efficient, and the accuracy is sufficient to reproduce the activities of real neurons along with their size and distribution of synapses. The great disadvantage is that we generally cannot learn parameters from a method like this: we cannot create meaningful memories.</p>
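<p>To show what “simple techniques like Euler&#8217;s method” means in practice, here is a forward-Euler sketch of a toy leaky membrane-potential equation; real simulator equations are far more detailed, and the constants below are made up for illustration:</p>
<pre><code># Forward Euler on a toy leaky membrane equation: dV/dt = (-(V - V_rest) + I) / tau
V_rest, tau, I = -70.0, 20.0, 15.0   # resting potential (mV), time constant (ms), input drive
dt, steps = 0.1, 1000                # 0.1 ms steps for 100 ms in total

V = V_rest
for _ in range(steps):
    dV = (-(V - V_rest) + I) / tau   # the differential equation
    V = V + dt * dV                  # cheap, low-accuracy Euler update

print(round(V, 2))                   # approaches V_rest + I = -55 mV
</code></pre>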
<p>However, as I have shown in <a href="https://timdettmers.com/2015/03/26/convolution-deep-learning/">my blog post about convolution</a>, we can also model diffusion by applying convolution, a very computationally complex operation. The advantage of convolution is that we can use methods like maximum-likelihood estimation with backpropagation to learn parameters which lead to meaningful representations akin to memories (just like we do in convolutional nets). This is exactly akin to the LNP model with its convolution operation.</p>
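<p>And here is the matching convolution view of diffusion, as a sketch: repeatedly convolving a concentration profile with a small averaging kernel spreads it out, and that kernel is exactly the kind of parameter one could learn with backpropagation (the kernel values here are arbitrary):</p>
<pre><code>import numpy as np

concentration = np.zeros(9)
concentration[4] = 1.0                  # all particles start in the middle
kernel = np.array([0.25, 0.5, 0.25])    # simple 1D diffusion kernel

for _ in range(3):                      # each step spreads the particles further
    concentration = np.convolve(concentration, kernel, mode="same")

print(np.round(concentration, 3))
</code></pre>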
<p>So besides its great similarity to deep learning models, the LNP model is also justified in that it is actually possible to learn parameters which yield meaningful memories (where with memories I mean here distributed representations like those we find in deep learning algorithms).</p>
<p>This then also justifies the next point where I estimate the brain&#8217;s complexity by using convolution instead of Euler’s method on differential equations.</p>
<p>Another point to take away for our model is that we currently have no complexity assigned to the creation of memories (we only modeled the forward pass, not the backward pass with backpropagation). As such, we underestimate the complexity of the brain; but because we do not know how the brain learns, we cannot make any accurate estimates for the computational complexity of learning. With that said and kept in the back of our minds, let us move on to bringing the whole model together for a lower bound of computational complexity.</p>
<h3>Bringing it all together for a mathematical estimation of complexity</h3>
<p><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_complexity.png?ssl=1"><img data-attachment-id="333" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/brain_complexity/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_complexity.png?fit=780%2C278&amp;ssl=1" data-orig-size="780,278" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="brain_complexity" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_complexity.png?fit=300%2C107&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_complexity.png?fit=780%2C278&amp;ssl=1" class="aligncenter wp-image-333 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_complexity.png?resize=780%2C278&#038;ssl=1" alt="brain_complexity" width="780" height="278" data-recalc-dims="1" /></a></p>
<p>The next part is a bit tricky: We need to estimate the numbers for N, M, n and m, and these differ widely among neurons.</p>
<p>We know that about 50 billion of the 86 billion neurons in the brain are cerebellar granule neurons, so these neurons and their connections will be quite important in our estimate.</p>
<p>Cerebellar granule neurons are very tiny neurons with about 4 dendrites. Their main input is from the cortex. They integrate these signals and then send them along a T-shaped axon which feeds into the dendrites of Purkinje neurons.</p>
<p>Purkinje neurons are by far the most complex neurons, but there are only about 100 million of them. They may have more than 100000 synapses each and about 1000 dendrites. Multiple Purkinje neurons bundle their outputs in about a dozen deep nuclei (bundles of densely packed neurons), which then send signals back to the cortex.</p>
<p>This process is crucial for non-verbal intelligence, abstract thinking and abstract creativity (creativity: name as many words as you can beginning with the letter A; abstract creativity: What if gravity bends space-time (general relativity)? What if these birds belonged to the same species when they came to this island (evolution)?). A few decades ago it was thought that the cerebellum only computes outputs for movement; for example, while Einstein’s cerebrum was handled and studied carefully, his cerebellum was basically just cut off and put away, because it was regarded as a “primitive” brain part.</p>
<p>But since then it has been shown that the cerebellum forms 1:1 connections with most brain regions of the cortex. Indeed, changes in the front part of the cerebellum between the ages of 23 and 25 may change your non-verbal IQ by up to 30 points, and changes of 10-15 IQ points are common. This is usually beneficial: we lose neurons which perform functions that we do not need in everyday life (calculus, or the foreign language which you learned but never used).</p>
<p>So it is crucial to get the estimation of the cerebellum right not only because it contains most neurons, but also because it is important for intelligence and information processing in general.</p>
<h3>Estimation of cerebellar filter dimensions</h3>
<p>Now if we look at a single dendrite, it branches off into a few branches and thus has a tree-like structure. Along its total length it is usually packed with synapses. Dendritic spikes can originate in any branch of a dendrite (spatial dimension). When we take 3 branches per dendrite, and 4 dendrites in total, we have convolutional filters of size 3 and 4 for cerebellar granule neurons. Since a separable convolution over two dimensions is the same as a convolution over one dimension followed by a convolution over the other dimension, we can also model this as a single 3&#215;4 convolution operation. Also note that this is mathematically identical to a model that describes the diffusion of particles originating from different sources (feature map) which diffuse according to a rule in their neighborhood (kernel) — this is exactly what happens at a physical level. More on this view in <a href="https://timdettmers.com/2015/03/26/convolution-deep-learning/">my blog post about convolution</a>.</p>
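<p>If you want to see this separability in code, here is a minimal numpy sketch (the filter values and the input map are hypothetical; only the sizes 3 and 4 come from the discussion above). It checks that convolving with a separable 3&#215;4 kernel gives the same result as convolving with the two 1D filters one after the other:</p>
<pre><code>import numpy as np
from scipy.signal import convolve, convolve2d

# Hypothetical dendritic filters: one along the 3 branches of a dendrite,
# one across the 4 dendrites of a granule neuron.
branch_filter = np.array([0.2, 0.5, 0.3])         # size 3
dendrite_filter = np.array([0.1, 0.4, 0.4, 0.1])  # size 4

# A separable 3x4 kernel is the outer product of the two 1D filters.
kernel_2d = np.outer(branch_filter, dendrite_filter)

x = np.random.rand(16, 16)  # toy map of synaptic activations

# Convolving with the 2D kernel ...
full_2d = convolve2d(x, kernel_2d, mode="full")

# ... equals convolving along one dimension and then along the other.
step1 = np.apply_along_axis(lambda r: convolve(r, dendrite_filter, mode="full"), 1, x)
step2 = np.apply_along_axis(lambda c: convolve(c, branch_filter, mode="full"), 0, step1)

print(np.allclose(full_2d, step2))  # True
</code></pre>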
<p>Here I have chosen to represent the spatial domain with a single dimension. It was shown that the shape of the dendritic tree is also important in the resulting information processing and thus we would need two dimensions for the spatial domain. However, data is lacking to represent this mathematically in a meaningful way and thus I proceed with the simplification to one spatial dimension.</p>
<p>The temporal dimension is also important here: Charged particles may linger for a while until they are pumped out of the neuron. It is difficult to estimate a meaningful time frame, because the brain uses continuous time while our deep learning algorithms only know discrete time steps.</p>
<p>No single estimate makes sense from a biological perspective, but from a psychological perspective we know that the brain can take up unconscious information that is presented in an image in about 20 milliseconds (this involves only some fast, special parts of the brain). For conscious recognition of an object we need more time — at least 65 milliseconds, and on average about 80-200 milliseconds for reliable conscious recognition. This involves all the usual parts that are active for object recognition.</p>
<p>From these estimates, one can think of this process as “building up the information of the seen image over time within a neuron”. However, a neuron can only process information if it can differentiate meaningful information from random information (remember, neurons fire randomly if they do not actively process information). Once a certain level of “meaningful information” is present, the neuron actively reacts to that information. So in a certain sense, information processing can be thought of as an epidemic of useful information that spreads across the brain: Information can only spread to a neuron if a neighboring neuron is already infected with it. Thinking in this way, such an epidemic of information infects all neurons in the brain within 80-200 milliseconds.</p>
<p>As such we can say that, while the object lacks detail in the first 20 milliseconds, full detail is present at about 80-200 milliseconds. If we translate this into discrete images at a rate of 30 frames per second (normal video playback) — or in other words, time steps — then 20 milliseconds correspond to 0.6 time steps, and 80-200 milliseconds to 2.4-6 time steps. This means that all the visual information a neuron needs for its processing will be present in the neuron within 2.4 to 6 frames.</p>
<p>To make calculations easier, I now choose a fixed time dimension of 5 time steps for neural processes. This means that for the dendrites we have spatio-temporal convolutional filters of size 3x4x5 for cerebellar granule neurons. For Purkinje neurons a similar estimate would be filters of a size of about 10x1000x5. The non-linearity then reduces these inputs to a single number for each dendrite. This number represents an instantaneous firing rate, that is, it represents how often the neuron fires in the respective interval of time, for example at 5 Hz, 100 Hz, 0 Hz, et cetera. If the potential is too negative, no spike will result (0 Hz); if the potential is positive enough, then the magnitude of the spike rate is often proportional to the magnitude of the electric potential — but not always.</p>
<p>It was shown that dendritic summation of this firing rate can be linear (the sum), sub-linear (less than the sum), supra-linear (more than the sum) or bistable (less than the sum, or more than the sum, depending on the respective input); these behaviors of summation often differ from neuron to neuron. It is known that Purkinje neurons use linear summation, and thus their summation to form a spike rate is very similar to the rectified linear function max(0,x) which is commonly used in deep learning. Non-linear sums can be thought of as different activation functions. It is important to add that the activation function is determined by the type of the neuron.</p>
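<p>As a toy illustration of these summation rules (the functional forms below are hypothetical; only the linear-sum-plus-rectification case mirrors the max(0,x) analogy above), one could write:</p>
<pre><code>import numpy as np

def purkinje_like_sum(dendritic_rates):
    # Linear summation followed by rectification, akin to the
    # rectified linear unit max(0, x) used in deep learning.
    return np.maximum(0.0, np.sum(dendritic_rates))

def sublinear_sum(dendritic_rates):
    # Hypothetical sub-linear summation: the output grows more slowly
    # than the plain sum (modeled here with a square root; the exact
    # form differs from neuron to neuron).
    total = np.maximum(0.0, np.sum(dendritic_rates))
    return np.sqrt(total)

rates = np.array([5.0, -2.0, 12.0, 0.5])  # toy per-dendrite firing rates
print(purkinje_like_sum(rates), sublinear_sum(rates))
</code></pre>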
<p>The filters in the soma (or cell body) can be thought of as an additional temporal convolutional filter with a size of 1 in the spatial domain. So this is a filter that reduces the input to a single dimension with a time dimension of 5, that is, a 1x1x5 convolutional filter (this will be the same for all neurons).</p>
<p>Again, the non-linearity then reduces this to an instantaneous firing rate, which is then dropped out by a Poisson process and finally fed into a weight matrix.</p>
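<p>To make this cascade of stages concrete, here is a heavily simplified, hypothetical Python sketch. It is not the full model with its 3x4x5 spatio-temporal filters; it only illustrates the order of the stages: convolution, rectification into a firing rate, a Poisson process that gates the output like dropout, and a weight matrix at the synapses:</p>
<pre><code>import numpy as np
from scipy.signal import convolve

rng = np.random.default_rng(0)

# Toy dimensions for a cerebellar granule neuron as discussed above:
# 15 synaptic inputs over 5 time steps; the dendritic filtering is
# collapsed here to a single 1x1x5 temporal filter for brevity.
inputs = rng.random((15, 5))   # synapse x time
soma_filter = rng.random(5)    # temporal filter of the soma

# Dendritic/somatic stage (simplified): sum over synapses, filter over time.
trace = convolve(inputs.sum(axis=0), soma_filter, mode="valid")

# Non-linearity: rectification into an instantaneous firing rate.
rate = max(0.0, float(trace.mean()))

# Poisson stage: the neuron only releases neurotransmitters if it fires
# in this interval, which acts like dropout on its output.
fires = bool(rng.poisson(rate))

# Output stage: a weight matrix (vesicles released times receptor counts)
# applied to the possibly dropped-out output, here onto ~100 synapses.
weights = rng.random(100)
output = weights * rate if fires else np.zeros_like(weights)
</code></pre>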
<p>At this point I want to emphasize again that it is <u>not</u> correct to view the output of a neuron as binary; the information conveyed by a firing neuron is more like an if-then-else branch: “if(fire == True and dropout == False){ release_neurotransmitters(); }else{ sleep(0.02); }”</p>
<p>The neurotransmitters are the true output of a neuron, but this is often confused. The source of this confusion is that it is very difficult to study neurotransmitter release and its dynamics at a synapse, while it is ridiculously easy to study action potentials. Most models of neurons thus model the output as action potentials because we have a lot of reliable data here; we do not have such data for neurotransmitter interactions at a real-time level. This is why action potentials are often mistaken for the true outputs of neurons when they are not.</p>
<p>When a neuron fires, this impulse can be thought of as being converted to a discrete number at the axon terminal (the number of vesicles which are released), which is multiplied by another discrete number representing the number of receptors at the synapse (this whole process corresponds to a dense or fully connected weight in convolutional nets). In the next step of information processing, charged particles flow into the neuron and build up a real-valued electric potential. This also has some similarities to batch-normalization, because values are normalized into the range [0,threshold] (neuron: relative to the initial potential of the neuron; convolutional net: relative to the mean of activations in batch-normalization). When we look at this whole process, we can model it as a matrix multiplication between two real-valued matrices (doing a scaled normalization before or after this is mathematically equivalent, because matrix multiplication is a linear operation).</p>
<p>Therefore we can think of axon-terminal-synapse interactions between neurons as a matrix multiplication between two real-valued matrices.</p>
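<p>The claim that a scaled normalization commutes with the matrix multiplication is easy to check numerically (the matrices below are random placeholders for vesicle and receptor counts):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)
vesicles = rng.random((4, 3))    # toy "vesicles released" per connection
receptors = rng.random((3, 5))   # toy "receptor counts" per synapse
scale = 0.1                      # a simple scaled normalization

# Scaling before or after the matrix multiplication gives the same
# result, because matrix multiplication is a linear operation.
before = (scale * vesicles) @ receptors
after = scale * (vesicles @ receptors)
print(np.allclose(before, after))  # True
</code></pre>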
<h3>Estimation of cerebellar input/output dimensions</h3>
<p>Cerebellar granule neurons typically receive inputs from about four axons (most often connections from the cortex). Each axon forms about 3-4 synapses with the dendritic claw of the granule neuron (a dendrite ending shaped as if you were holding a tennis ball in your hand), so there are a total of about 15 synaptic inputs to the granule neuron. The granule neuron itself ends in a T-shaped axon which crosses directly through the dendrites of Purkinje neurons, with which it forms about 100 synapses.</p>
<p>Purkinje neurons receive inputs from about 100000 connections made with granule neurons, and they themselves make about 1000 connections in the deep nuclei. There are estimates which are much higher, and no accurate count of the number of synapses exists as far as I know. The figure of 100000 synapses might be a slight overestimate (while 75000 would be too conservative), but I use it anyway to make the math simpler.</p>
<p>All these dimensions are multiplied by the time dimension as discussed above, so that the input for granule neurons, for example, has a dimensionality of 15&#215;5.</p>
<p>So with this we can finally calculate the complexity of a cerebellar granule neuron together with the Purkinje neurons.</p>
<p><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_computational_estimate.png?ssl=1"><img data-attachment-id="334" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/brain_computational_estimate/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_computational_estimate.png?fit=776%2C493&amp;ssl=1" data-orig-size="776,493" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="brain_computational_estimate" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_computational_estimate.png?fit=300%2C191&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_computational_estimate.png?fit=776%2C493&amp;ssl=1" class="aligncenter wp-image-334 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/brain_computational_estimate.png?resize=776%2C493&#038;ssl=1" alt="brain_computational_estimate" width="776" height="493" data-recalc-dims="1" /></a></p>
<p>So my estimate is 1.075&#215;10^21 FLOPS for the brain; the fastest computer on earth, as of July 2013, achieves 0.58&#215;10^15 FLOPS for practical applications (more about this below).</p>
<h3>Part III: Limitations and criticism</h3>
<p>While I discussed how the brain is similar to deep learning, I did not discuss how the brain is different. One great disparity is that dropout in the brain works with respect to all inputs, while dropout in a convolutional network works with respect to each single unit. What the brain is doing makes little sense in deep learning right now; however, if you think about combining millions of convolutional nets with each other, it makes good sense to do as the brain does. The dropout of the brain certainly would work well to decouple the activity of neurons from each other, because no neuron can depend on information from a single other neuron (it might be dropped out), so that it is forced to take into account all the neurons it is connected with, thus eliminating biased computation (which is basically regularization).</p>
<p>Another limitation of the model is that it is a lower bound. This estimate does not take into account:</p>
<ul>
<li>Neural backpropagation, i.e. signals that travel from the soma back into the dendrites; the action potential is also reflected within the axon and travels backwards (these two things may almost double the complexity)</li>
<li>Axon terminal information processing</li>
<li>Multi-neurotransmitter vesicles (can be thought of as multiple output channels or filters, just as an image has multiple colors)</li>
<li>Geometrical shape of the dendritic tree</li>
<li>Dendritic spine information processing</li>
<li>Non-axodendritic synapses (axon-axon and axon-soma connections)</li>
<li>Electrical synapses</li>
<li>Neurotransmitter induced protein activation and signaling</li>
<li>Neurotransmitter induced gene regulation</li>
<li>Voltage induced (dendritic spikes and backpropagating signals) gene regulation</li>
<li>Voltage induced protein activation and signaling</li>
<li>Glia cells (besides having an extremely abnormal brain, about one in a billion, Einstein also had abnormally high levels of glia cells)</li>
</ul>
<p>All these things have been shown to be important for information processing in the brain. I did not include them in my estimate because this would have made everything:</p>
<ul>
<li>Too complex: What I have discussed so far is extremely simple compared to the vastness and complexity of biological information processing</li>
<li>Too special: Non-axodendritic synapses can have unique information processing algorithms completely different from everything listed here, e.g. direct electrical communication between a neighboring bundle of neurons</li>
<li>And/or evidence is lacking to create a reliable mathematical model: Neural backpropagation, geometry of the dendritic trees, and dendritic spines</li>
</ul>
<p>Remember that these estimates are for the whole brain. Local brain regions might have higher computational processing speed than this average when they are actively processing stimuli. Also remember that the cerebellum accounts for almost all of this computational processing. Other brain regions integrate the knowledge of the cerebellum, but the cerebellum acts as a transformation and abstraction module for almost all information in the brain (except vision and hearing).</p>
<h3>But wait, we can do all this with much less computational power! We already have super-human performance in computer vision!</h3>
<p>I would not say that we have super-human performance in computer vision. What we have is a system that beats humans at naming things in images that are taken out of the context of the real world (what happens before we see something in the real world shapes our perception dramatically). We can almost always recognize things in our environment, but most often we just do not know (or care about) the name of what we see.</p>
<p>Humans do not have a visual system built for labeling things. Try to make a list of 1000 common physical objects in the real world — not an easy task.</p>
<p>For us humans, not recognizing an object means that we see the object but cannot make sense of it. If you forgot the name of an old classmate, it does not mean you did not recognize her; it just means you forgot her name. Now imagine you get off at a train stop and you know a good friend is waiting for you somewhere at the stop. You see somebody 300 meters away, looking in your direction and waving their hands — is it your friend? You do not know; you cannot recognize whether it is her. That’s the difference between mere labels and object recognition.</p>
<p>Now if you cannot recognize something in a 30&#215;30 pixel image, but the computer can, this does not necessarily mean that the computer has super-human object recognition performance. First and foremost it means that your visual system does not work well for pixelated information. Our eyes are just not used to that.</p>
<p>Now take a look out of a window and try to label all the things you see. It will be very easy for most things, but for others you will not know the correct label! For example, I do not know the names of a few plants that I see when I look out of my window. However, we are fully aware of what we see and can name many details of the object. For example, just by assessing their appearance, I know a lot about how much water and sunshine the unknown plants need, how fast they grow, in which way they grow, whether they are old or young specimens; I know what they feel like if I touch them — or more generally — I know how these plants grow biologically and how they produce energy, and so on. I can do all this without knowing their names. Current deep learning systems cannot do this and will not be able to for quite some time. Human-level performance in computer vision is far away indeed! We have just reached the very first step (object recognition), and now the task is to make computer vision smart, rather than just good at labeling things.</p>
<p>Evolutionarily speaking, the main functions of our visual system have little to do with naming the things we see: hunting and avoiding being hunted, orienting ourselves in nature during foraging, making sure we pick the right berries and extract roots efficiently — these are all important functions, but probably one of the most important functions of our vision is its social function within a group or relationship.</p>
<p>If you Skype with someone, it is quite a different conversation when they have their camera enabled compared to when they have not. It is also very different to communicate with someone whose image is on a static 2D surface compared to communicating in person. Vision is critical for communication.</p>
<p>Our deep learning systems cannot do any of this efficiently.</p>
<h3>Making sense of a world without labels</h3>
<p>One striking case which also demonstrates the power of vision for a true understanding of the environment without any labels is the case of <a href="https://en.wikipedia.org/wiki/Genie_(feral_child)">Genie</a>. Genie was strapped into place and left alone in a room at the age of 20 months. She was found, severely malnourished, 12 years later. She had almost no social interaction during this time and thus did not acquire any form of verbal language.</p>
<p>Once she got in contact with other human beings, she was taught English (and later also sign language), but she never really mastered it. Instead she quickly mastered non-verbal language and was truly exceptional at that.</p>
<p>With strangers she communicated almost exclusively through non-verbal language. There are instances where these strangers would stop in their tracks, leave everything behind, walk up to her and hand her a toy or another item; that item was always something that was known to be liked and desired.</p>
<p>In one instance a woman got out of her car at a stoplight at an intersection, emptied her purse and handed it to Genie. The woman and Genie did not exchange a word; they understood each other completely non-verbally.</p>
<p>So what Genie did was to pick up cues with her visual system, translate the emotional and cognitive state of that woman into non-verbal cues and actions of her own, and use these to change the mental state of the woman, so that the woman in turn desired to give her purse to Genie (a purse which Genie probably could not even see).</p>
<p>Clearly, Genie was very exceptional at non-verbal communication — but what would happen if you pitted her against a deep learning object recognition system? The deep learning system would be much better than Genie on any data set you picked. Do you think it would be fair to say that the convolutional net is better at object recognition than Genie is? I do not think so.</p>
<p>This shows how primitive and naïve our approach to computer vision is. Object recognition is a part of human vision, but it is not what makes it exceptional.</p>
<h3>Can we do with less computational power?</h3>
<p>“We do not need as much computational power as the brain has, because our algorithms are (will be) better than that of the brain.”</p>
<p>I hope you can see after the descriptions in this blog post that this statement is rather arrogant.</p>
<p>We do not know how the brain really learns. We do not understand information processing in the brain in detail. And yet we dare to say we can do better?</p>
<p>Even if we did know how the brain works in all its details, it would still be rather naïve to think we could create general intelligence with much less. The brain developed over many hundreds of millions of years of evolution. Evolutionarily, it is the most malleable organ there is: The human cortex shrank by about 10% during the last 20000 years, and the human brain adapted rapidly to the many ways we use verbal language — a very recent development in evolutionary terms.</p>
<p>It has also been shown that the number of neurons in each animal’s brain is almost exactly the number it can sustain through feeding (we probably killed off the majority of all mammoths by about 20000 years ago). We humans have such large brains because we invented fire and cooking, with which we could predigest food, which made it possible to sustain more neurons. Without cooking, our calorie intake would not be high enough to sustain our brains and we would helplessly starve (at least a few thousand years ago; today you could survive on a raw vegan diet easily — just walk into a supermarket and buy a lot of calorie-dense foods). Given this, it is very likely that brains are optimized exhaustively to create the best information processing possible for the typical calorie intake of the respective species — the function which is most expensive in an animal will be most ruthlessly optimized to enhance survival and procreation. This is also very much in line with the complexity of the brain; every little function is optimized thoroughly, and only as technology advances can we understand, step by step, what this complexity is for.</p>
<p>There are many hundreds of different types of neurons in the brain, each with their designated function. Indeed, neuroscientists often can differentiate different brain regions and their function by looking at the changing architecture and neuron types in a brain region. Although we do not understand the details of how the circuits perform information processing, we can see that each of these unique circuits is designed carefully to perform a certain kind of function. These circuits are often replicated in evolutionary distinct species which share a common ancestor that branched off into these different species hundreds of millions of years ago, showing that such structures are evolutionarily optimal for the tasks they are processing.</p>
<p>The equivalent in deep learning would be if we had 10000 different architectures of convolutional nets (each with its own set of activation functions and more) which we combined meticulously to improve the overall function of our algorithm. Do you really think we can build something which produces equally complex information processing, but which follows a simple, general architecture?</p>
<p>It is rather naïve to think that we can out-wit this fantastically complex organ when we are not even able to understand its learning algorithms.</p>
<p>On top of this, the statement that we will develop better algorithms than the brain uses is unfalsifiable. We can only prove it once we achieve it; we cannot disprove it. Thus it is a rather nonsensical statement that has little practical value. Useful theories, by contrast, make predictions even before there is enough evidence to show that they are correct.</p>
<p>The standard model of physics is an extremely useful theory, used by physicists and engineers around the world in their daily work to develop the high-tech products we enjoy; and yet this theory is not complete, and it was amended just a few days ago when a new particle was shown to exist in an LHC experiment.</p>
<p>Imagine there were another model which you could only use once the existence of <em>all particles</em> had been proven. Such a model would be rather useless: as long as it makes no predictions about the behavior of the world, we could not use it to develop and manufacture electronics. Similarly, the statement that we can develop more efficient algorithms than the brain does not help; it rather makes it more difficult to make further progress. The brain should really be our main point of orientation.</p>
<p>Another argument, typical of Yann LeCun (he made a similar argument during a panel discussion), would be: Arguably, airplanes are much better at flying than birds are; yet, if you describe the flight of birds, it is extremely complex and every detail counts, while the flight of airplanes is described simply by the fluid flow around an airfoil. Why is it wrong to expect this simplicity from deep learning when compared to the brain?</p>
<p>I think this argument has some truth to it, but essentially it asks the wrong question. It is clear that we need not replicate everything in detail in order to achieve artificial intelligence, but the real question is: Where do we draw the line? If you learn that neurons can be modeled in ways that closely resemble convolutional nets, would you go so far as to say that this model is too complex and we need to make it simpler?</p>
<h2>Part IV: Predicting the growth of practical computational power</h2>
<p>There is one dominant measure of performance in high-performance computing (HPC), and this measure is floating point operations per second (FLOPS) on the High Performance LINPACK (HPL) benchmark, which measures how many computations a system can do per second when performing distributed dense matrix operations on hundreds or thousands of computers. The TOP500 list of supercomputers is a historical ranking based on this benchmark, and it is the main reference point for the performance of a new supercomputer system.</p>
<p>But the LINPACK benchmark comes with a big caveat. <a href="http://www.netlib.org/utk/people/JackDongarra/PAPERS/HPCG-benchmark.pdf">It does not reflect the performance in real, practical applications</a> which run on modern supercomputers on a daily basis, and thus the fastest computers on the TOP500 list are not necessarily the fastest computers for practical applications.</p>
<p>Everybody in the high performance computing community knows this, but the benchmark is so entrenched in the business routine of this area that when you design a new supercomputer system, you basically have to show that your system will get a good spot on the TOP500 in order to get funding for that supercomputer.</p>
<p>Sometimes such systems are practically unusable, like the Tianhe-2 supercomputer, which has held the top spot on the LINPACK benchmark since 2013. The potential of this supercomputer goes largely unused because it is too expensive to run (electricity) and the custom hardware (custom network, Intel Xeon Phi) requires new software, which would need years of development to reach the level of sophistication of standard HPC software. The Tianhe-2 runs at only roughly one third of its capacity, or in other words, it practically stands idle for nearly 2 out of every 3 minutes. The predecessor of the Tianhe-2, the Tianhe-1, the fastest computer in the world in 2010 (according to LINPACK), has not been used since 2013 for bureaucratic reasons.</p>
<p>While other supercomputers of similar design outside of China fare better, they typically do not perform well in practical applications either. This is because the accelerators used, such as graphics processing units (GPUs) or Intel Xeon Phis, can deliver high FLOPS in such a setup, but they are severely limited by network bandwidth bottlenecks.</p>
<p>To correct the growing uselessness of the LINPACK benchmark, a new measure of performance was developed: the high performance conjugate gradient benchmark (HPCG). This benchmark performs conjugate gradient iterations, which require more communication than LINPACK, and as such it comes much closer to performance numbers for real applications. I will use this benchmark to create my estimates for a singularity.</p>
<p><figure id="attachment_327" aria-describedby="caption-attachment-327" style="width: 1000px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/top500_2.jpg?ssl=1"><img data-attachment-id="327" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/top500_2/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/top500_2.jpg?fit=1000%2C800&amp;ssl=1" data-orig-size="1000,800" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="top500_2" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/top500_2.jpg?fit=300%2C240&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/top500_2.jpg?fit=1000%2C800&amp;ssl=1" class="wp-image-327 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/top500_2.jpg?resize=1000%2C800&#038;ssl=1" alt="top500_2" width="1000" height="800" data-recalc-dims="1" /></a><figcaption id="caption-attachment-327" class="wp-caption-text">The TOP500 for the last decade and some data for the HPCG (data collection only began recently). The dashed lines indicate a forecast. The main drivers of computational growth are also shown: Multicore CPU, GPU, and in 2016-2017 3D memory, and some new unknown technology in 2020. Will this growth be sustainable?</figcaption></figure></p>
<p>However, this benchmark still dramatically overestimates the computing power that can be reached for artificial intelligence applications, if we assume that these applications are based on deep learning.</p>
<p>Deep learning is currently the most promising technique for reaching artificial intelligence. It is certain that deep learning — as it is now — will not be enough, but it is safe to say that something similar to deep learning will be involved in reaching strong AI.</p>
<p>Deep learning, unlike other applications, has an unusually high demand for network bandwidth. It is so high that on some supercomputer designs in the TOP500, a deep learning application would run slower than on your desktop computer. Why is this so? Because parallel deep learning involves massive parameter synchronization, which requires extensive network bandwidth: If your network bandwidth is too low, then at some point deep learning gets slower and slower the more computers you add to your system. As such, very large systems which are usually quite fast may be extremely slow for deep learning.</p>
<p>The problem with all this is that the development of new network interconnects which enable high bandwidth is difficult, and advances are made much more slowly than advances in computing modules like CPUs, GPUs and other accelerators. Just recently, Mellanox reached a milestone: switches and InfiniBand cards which operate at 100 Gbit per second. This development is still rather experimental, and it is difficult to manufacture fiber-optic cables which can operate at this speed. As such, no supercomputer implements this new development as of yet. And with this milestone reached, there will not be another one for quite a while. The doubling time for network interconnect bandwidth is about 3 years.</p>
<p>Similarly, there is a memory problem. While the theoretical processing power of CPUs and GPUs keeps increasing, the memory bandwidth of RAM is almost static. This is a great problem, because we are now at a point where it costs more time to move the data to the compute circuits than to actually perform computations with it.</p>
<p>With new developments such as 3D memory, one can be sure that further increases in memory bandwidth will be achieved, but we have nothing after that to increase performance further. We need new ideas and new technology. Memory will not scale by itself by getting smaller and smaller.</p>
<p>However, currently the biggest hurdle of them all is power consumption. The Tianhe-2 uses 24 megawatts of power, which totals $65k-$100k in electricity costs per day, or about $23 million per year. The power consumed by the Tianhe-2 would be sufficient to power about 6000 homes in Germany or 2000 homes in the US (due to air-conditioning usage).</p>
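<p>The electricity figures follow from simple arithmetic on the 24 megawatts; the prices per kWh below are assumptions I picked so that the results land in the quoted range:</p>
<pre><code># Rough electricity cost of the Tianhe-2 at 24 MW, for assumed prices per kWh.
power_mw = 24
kwh_per_day = power_mw * 1000 * 24   # 576000 kWh per day

for price_per_kwh in (0.11, 0.17):   # hypothetical $ per kWh
    per_day = kwh_per_day * price_per_kwh
    per_year_millions = per_day * 365 / 1e6
    print(round(per_day), round(per_year_millions, 1))  # roughly $63k-$98k/day, $23M-$36M/year
</code></pre>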
<p><figure id="attachment_345" aria-describedby="caption-attachment-345" style="width: 753px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hpc_constraints.png?ssl=1"><img data-attachment-id="345" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/hpc_constraints/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hpc_constraints.png?fit=753%2C476&amp;ssl=1" data-orig-size="753,476" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="hpc_constraints" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hpc_constraints.png?fit=300%2C190&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hpc_constraints.png?fit=753%2C476&amp;ssl=1" class="wp-image-345 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/hpc_constraints.png?resize=753%2C476&#038;ssl=1" alt="hpc_constraints" width="753" height="476" data-recalc-dims="1" /></a><figcaption id="caption-attachment-345" class="wp-caption-text">An overview about how the performance constraints changed from old to new supercomputers. Adapted from <a href="https://www2.lbl.gov/Publications/Deputy-Director/bio.html">Horst Simon</a>&#8216;s <a href="https://www.researchgate.net/profile/Horst_Simon/publication/261879110_Why_we_need_Exascale_and_why_we_won't_get_there_by_2020/links/0c960535dbade00bbc000000.pdf">presentation</a></figcaption></figure></p>
<h3>Physical limitations</h3>
<p>Furthermore, there are physical problems around the corner. Soon our circuits will be so small that electrons will start to show quantum effects. One such quantum effect is quantum tunneling, in which an electron sits in two neighboring circuits at once and decides randomly to which of these two locations it will go next.</p>
<p>If this happened at a larger scale, it would be like charging your phone right next to your TV, with the electrons deciding they want to go to your phone cable rather than to your TV; they would jump over to the phone cable, cutting off the power to your TV. Quantum tunneling will become relevant in 2016-2017 and has to be taken into account from then on. New materials and “insulated” circuits will be required to make everything work from there on.</p>
<p>With new materials we need new production techniques, which will be very costly because all computer chips have relied on the same old but reliable production process. We need research and development to make our known processes work with these new materials, and this will cost not only money but also time. This will also fuel a continuing trend where the cost of producing computer chips increases exponentially (and growth may slow due to costs). Currently the tally is at about $9bn for such a semiconductor fabrication plant (fab), with costs increasing at a relatively stable rate of about 7-10% per year over the past decades.</p>
<p>After this, we are at the plain physical limits. A transistor will be composed of not much more than a handful of atoms. We cannot go smaller than this, and this level of manufacturing will require extensive efforts to get such devices working properly. This will start to happen around 2025, and growth may slow from there on due to physical limitations.</p>
<h3>Recent trends in the growth of computational power</h3>
<p>So to summarize the previous section: (1) LINPACK performance does not reflect practical performance because it does not test memory and network bandwidth constraints; (2) memory and network bandwidth are now more important than computational power, but (3) advances in memory and network bandwidth will be sporadic and cannot keep up with the growth in computational power; (4) electricity costs are a severe limitation (try to justify a dedicated power plant for a supercomputer if citizens face sporadic power outages); and (5) computational power will be limited by physical boundaries within the next couple of years.</p>
<p>It may not come as a surprise, then, that the growth in computational power has been slowing down in recent years; this is mainly due to power efficiency, which will only improve gradually, but the other factors also take their toll, such as network interconnects which cannot keep up with accelerators like GPUs.</p>
<p>If one takes the current estimate of practical FLOPS of the fastest supercomputer, the Tianhe-2 with 0.58 petaflops on HPCG, then it would take 21 doubling periods until the lower bound of the brain&#8217;s computational power is reached. If one uses Moore’s Law, we would reach that by 2037; if we take the growth of the last 60 years, which is about 1.8 years per doubling period, we will reach it in the year 2053. If we take a lower estimate of 3 years for the doubling period, due to the problems listed above, we will reach it in 2078. While memory bandwidth is currently the bottleneck for practical supercomputing applications, this may soon change to network bandwidth, which doubles about every 3 years. So the 2078 estimate might be quite accurate.</p>
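<p>The doubling-period arithmetic can be reproduced with a few lines of Python. The 1.8-year and 3-year periods are the ones quoted above; the 1.05-year period is an assumption on my side that roughly reproduces the 2037 figure:</p>
<pre><code>import math

brain_flops = 1.075e21   # lower-bound estimate for the brain derived above
tianhe2_hpcg = 0.58e15   # practical FLOPS of the Tianhe-2 on HPCG

doublings = round(math.log2(brain_flops / tianhe2_hpcg))
print(doublings)  # 21 doubling periods

# Year in which the lower bound would be reached, counted from 2015,
# for different assumed doubling periods (in years).
for period in (1.05, 1.8, 3.0):
    print(period, 2015 + round(doublings * period))  # 2037, 2053, 2078
</code></pre>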
<p><figure id="attachment_328" aria-describedby="caption-attachment-328" style="width: 893px" class="wp-caption aligncenter"><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/growth.jpg?ssl=1"><img data-attachment-id="328" data-permalink="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/growth/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/growth.jpg?fit=893%2C634&amp;ssl=1" data-orig-size="893,634" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="growth" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/growth.jpg?fit=300%2C213&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/growth.jpg?fit=893%2C634&amp;ssl=1" class="wp-image-328 size-full" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2015/07/growth.jpg?resize=893%2C634&#038;ssl=1" alt="growth" width="893" height="634" data-recalc-dims="1" /></a><figcaption id="caption-attachment-328" class="wp-caption-text">Growth in computing performance with respect to the HPCG benchmark. Both computing performance and factory costs are assumed to keep growing steadily at an exponential rate with doubling period of 18 or 36 months, respectively.</figcaption></figure></p>
<p>Now remember (1) that the HPCG benchmark still reports much higher performance than typical deep learning applications can reach, since these rely much more on network and memory bandwidth, and (2) that my estimate for the computational complexity of the brain is a lower bound. One can see that an estimate beyond 2100 might not be too far off. To sustain such a long and merciless increase in computational performance, we will need to develop and implement many new ideas while operating at the border of physical limitations as soon as 2020. Will this be possible?</p>
<p>Where there&#8217;s a will, there&#8217;s a way — the real question is: Are we prepared to pay the costs?</p>
<h1>Conclusion</h1>
<p>Here I have discussed the information processing steps of the brain and their complexity and compared them to those of deep learning algorithms. I focused on a discussion of basic electrochemical information processing and neglected biological information processing.</p>
<p>I used an extended linear-nonlinear-Poisson cascade model as groundwork and related it to convolutional architectures.</p>
<p>With this model, I could show that a single neuron has an information processing architecture which is very similar to current convolutional nets, featuring convolutional stages with rectified non-linearities whose activities are then regularized by a dropout-like method. I also established a connection between max-pooling and voltage-gated channels which are opened by dendritic spikes. Similarities to batch-normalization exist as well.</p>
<p>This straightforward similarity gives strong reason to believe that deep learning is really on the right path. It also indicates that ideas borrowed from neurobiological processes are useful for deep learning (the problem so far has been that progress in deep learning architectures often preceded knowledge of the underlying neurobiological processes).</p>
<p>My model shows that the brain can be estimated to perform at least 10^21 operations per second. With current rates of growth in computational power we could achieve supercomputers with brain-like capabilities by the year 2037, but estimates after the year 2080 seem more realistic when all the evidence is taken into account. This estimate only holds true if we succeed in overcoming limitations like physical barriers (for example quantum tunneling), the capital costs of semiconductor fabrication plants, and growing electricity costs. At the same time we constantly need to innovate to solve the memory bandwidth and network bandwidth problems which are, or will be, the bottlenecks in supercomputing. With these considerations taken into account, it is rather unlikely that we will achieve human-like processing capabilities anytime soon.</p>
<h2>Closing remarks</h2>
<p>My philosophy for this blog post was to present all the information on a single web page rather than scatter it around. I think this design helps to create a sturdier fabric of knowledge which, with its interwoven strands from different fields, paints a more thorough picture of the main ideas involved. However, it has been quite difficult to organize all this information into a coherent picture, and some points might be more confusing than enlightening. Please leave a comment below to let me know if the structure or content needs improvement, so that I can adjust my next blog post accordingly.</p>
<p>I would also love general feedback for this blog post.</p>
<p>Also make sure to share this blog post with your fellow deep learning colleagues. People with pure computer science backgrounds often harbor misconceptions about the brain, its parts and how it works. I think this blog post could be a suitable remedy for that.</p>
<h2>The next blog post</h2>
<p>The second post in this series on neuroscience and psychology will focus on the most important brain regions and their function and connectivity. The last and third part in the series will focus on psychological processes, such as memory and learning, and what we can learn from that with respect to deep learning.</p>
<h4><strong>Acknowledgments</strong></h4>
<p>I would like to thank Alexander Tonn for his useful advice and for proofreading this blog post.</p>
<h4><strong>Important references and sources </strong></h4>
<p><strong>Neuroscience</strong></p>
<p>Brunel, N., Hakim, V., &amp; Richardson, M. J. (2014). Single neuron dynamics and computation. <i>Current opinion in neurobiology</i>, <i>25</i>, 149-155.</p>
<p>Chadderton, P., Margrie, T. W., &amp; Häusser, M. (2004). Integration of quanta in cerebellar granule cells during sensory processing. <i>Nature</i>, <i>428</i>(6985), 856-860.</p>
<p>De Gennaro, L., &amp; Ferrara, M. (2003). Sleep spindles: an overview. <i>Sleep medicine reviews</i>, <i>7</i>(5), 423-440.</p>
<p>Ji, D., &amp; Wilson, M. A. (2007). Coordinated memory replay in the visual cortex and hippocampus during sleep. <i>Nature neuroscience</i>, <i>10</i>(1), 100-107.</p>
<p>Liaw, J. S., &amp; Berger, T. W. (1999). Dynamic synapse: Harnessing the computing power of synaptic dynamics. <i>Neurocomputing</i>, <i>26</i>, 199-206.</p>
<p>Ramsden, S., Richardson, F. M., Josse, G., Thomas, M. S., Ellis, C., Shakeshaft, C., &#8230; &amp; Price, C. J. (2011). Verbal and non-verbal intelligence changes in the teenage brain. <i>Nature</i>, <i>479</i>(7371), 113-116.</p>
<p>Smith, S. L., Smith, I. T., Branco, T., &amp; Häusser, M. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. <i>Nature</i>, <i>503</i>(7474), 115-120.</p>
<p><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3230671/">Stoodley, C. J., &amp; Schmahmann, J. D. (2009). Functional topography in the human cerebellum: a meta-analysis of neuroimaging studies. <i>Neuroimage</i>,<i>44</i>(2), 489-501.</a></p>
<p><strong>High performance computing</strong></p>
<p>Dongarra, J., &amp; Heroux, M. A. (2013). Toward a new metric for ranking high performance computing systems. <i>Sandia Report, SAND2013-4744</i>, <i>312</i>.</p>
<p><a href="https://prod-ng.sandia.gov/techlib-noauth/access-control.cgi/2013/138752.pdf">PDF: HPCG Specification</a></p>
<p><a href="https://top500.org/news/no-exascale-for-you-an-interview-with-berkeley-labs-horst-simon/">Interview: Why there will be no exascale computing before 2020</a></p>
<p><a href="https://www.researchgate.net/profile/Horst_Simon/publication/261879110_Why_we_need_Exascale_and_why_we_won't_get_there_by_2020/links/0c960535dbade00bbc000000.pdf">Slides: Why there will be no exascale computing before 2020</a></p>
<p><a href="http://vrworld.com/2015/03/23/jack-dongarra-on-the-great-exascale-challenge-and-rising-hpc-powers/">Interview: Challenges of exascale computing</a></p>
<p><strong>Image references</strong></p>
<p><a id="anwar14"></a><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4107854/">Anwar, H., Roome, C. J., Nedelescu, H., Chen, W., Kuhn, B., &amp; De Schutter, E. (2014). Dendritic diameters affect the spatial variability of intracellular calcium dynamics in computer models. <i>Frontiers in cellular neuroscience</i>, <i>8</i>.</a></p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/">The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/feed/</wfw:commentRss>
			<slash:comments>183</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">312</post-id>	</item>
		<item>
		<title>How to Parallelize Deep Learning on GPUs Part 2/2: Model Parallelism</title>
		<link>https://timdettmers.com/2014/11/09/model-parallelism-deep-learning/</link>
					<comments>https://timdettmers.com/2014/11/09/model-parallelism-deep-learning/#comments</comments>
		
		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Sun, 09 Nov 2014 19:16:55 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[Parallel Computing]]></category>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=85</guid>

					<description><![CDATA[<p>Model parallelism enables fast training of large neural networks: Read on to understand how it is done and why it is so good for large networks.</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2014/11/09/model-parallelism-deep-learning/">How to Parallelize Deep Learning on GPUs Part 2/2: Model Parallelism</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In my last blog post I explained what model and data parallelism is and analysed <a title="How to Parallelize Deep Learning on GPUs Part 1/2: Data Parallelism" href="http://timdettmers.com/2014/10/09/deep-learning-data-parallelism/" target="_blank" rel="noopener noreferrer">how to use data parallelism effectively in deep learning</a>. In this blog post I will focus on model parallelism.</p>
<p><span id="more-85"></span></p>
<p>To recap, model parallelism is when you split the model among GPUs and use the same data for each model, so that each GPU works on a part of the model rather than a part of the data. In deep learning, one approach is to do this by splitting the weights, e.g. a 1000&#215;1000 weight matrix would be split into four 1000&#215;250 matrices if you use four GPUs.</p>
<figure><img id="featured-image" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/11/modelpara1.png?resize=1025%2C626" alt="" width="1025" height="626 /" data-recalc-dims="1"><figcaption>Model parallelism diagram. Synchronizing communication is needed after each dot product with the weight matrix for both forward and backward pass.</figcaption></figure>
<p>One advantage of this approach is immediately apparent: If we split the weights among the GPUs, we can have very large neural networks whose weights would not fit into the memory of a single GPU. I partly mentioned this in an earlier blog post, where I also said that such large neural networks are largely unnecessary. However, for very big unsupervised learning tasks – which will become quite important in the near future – such large networks will be needed in order to learn fine-grained features that could enable “intelligent” behavior.</p>
<p>How does a forward and backward pass work with such split matrices? This is most obvious when we do the matrix algebra step by step:</p>
<p>We start by looking at <img src="https://s0.wp.com/latex.php?latex=%7B%5Cbf%7BA%7D%5Cbf%7BB%7D+%3D+%5Cbf%7BC%7D%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="{&#92;bf{A}&#92;bf{B} = &#92;bf{C}}" class="latex" />, which is the matrix multiplication of the usual forward pass. The dimensions for using model parallelism with two GPUs, for a batch size of 128 and a 1000&#215;500 weight matrix, would be:</p>
<p>Standard: 128&#215;1000 dot 1000&#215;500 = 128&#215;500</p>
<p>Split by weight matrix first dimension: 128&#215;500 dot 500&#215;500 = 128&#215;500 -&gt; add matrices</p>
<p>Split by weight matrix second dimension: 128&#215;1000 dot 1000&#215;250 = 128&#215;250 -&gt; stack matrices</p>
<p>To calculate the errors in the layer below, we need to pass the current error back through the weights; or more mathematically, we calculate the deltas of layer <img src="https://s0.wp.com/latex.php?latex=%7Bi%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="{i}" class="latex" /> by taking the dot product of the error of the layer above <img src="https://s0.wp.com/latex.php?latex=%7Bj%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="{j}" class="latex" /> and the transposed weights that connect the two layers, i.e. <img src="https://s0.wp.com/latex.php?latex=%7B%5Cbf%7Bdelta_j%7D%5Cbf%7BW%5ET%7D+%3D+%5Cbf%7Bdelta_i%7D%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="{&#92;bf{delta_j}&#92;bf{W^T} = &#92;bf{delta_i}}" class="latex" />:</p>
<p>Standard: 128&#215;500 dot 500&#215;1000 = 128&#215;1000</p>
<p>Split by weight matrix first dimension: 128&#215;500 dot 500&#215;500 = 128&#215;500 -&gt; stack matrices</p>
<p>Split by weight matrix second dimension: 128&#215;250 dot 250&#215;1000 = 128&#215;1000 -&gt; add matrices</p>
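<p>A small numpy sketch (with the dimensions from the example above) shows that both ways of splitting the weight matrix reproduce the result of the unsplit forward pass; the backward pass works analogously with the transposed weights:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 128, 1000, 500
X = rng.random((batch, n_in))   # mini-batch activations
W = rng.random((n_in, n_out))   # 1000x500 weight matrix

# Standard forward pass on a single GPU.
C = X @ W                                                # 128x500

# Split by the second weight dimension across two "GPUs": stack results.
C_stacked = np.hstack([X @ W[:, :250], X @ W[:, 250:]])  # two 128x250 blocks

# Split by the first weight dimension: each device holds half of the
# input columns and half of the weight rows; partial results are added.
C_added = X[:, :500] @ W[:500, :] + X[:, 500:] @ W[500:, :]

print(np.allclose(C, C_stacked), np.allclose(C, C_added))  # True True
</code></pre>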
<p>We see that we need to synchronize (adding or stacking the partial results) after each dot product, and you may think that this is slow compared to data parallelism, where we synchronize only once. But one can quickly see that this is not so for most cases if we do the math: In data parallelism a 1000&#215;500 gradient needs to be transferred once for the 1000&#215;500 layer – that’s 500000 elements; for model parallelism we just need to transfer a small matrix for each forward and backward pass, with a total of 128000 or 160000 elements – that’s roughly 3 to 4 times less data! So the network card bandwidth is still the main bottleneck in the whole application, but much less so than in the data parallelism case.</p>
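<p>One plausible way to arrive at the 500000, 128000 and 160000 element counts quoted above is the following tally (exactly how the partial results are counted is my assumption):</p>
<pre><code>batch, n_in, n_out = 128, 1000, 500

# Data parallelism: the full 1000x500 gradient is exchanged once.
data_parallel = n_in * n_out                             # 500000 elements

# Model parallelism, split by the first weight dimension: a 128x500
# partial result in the forward pass and one in the backward pass.
model_first_dim = batch * n_out + batch * n_out          # 128000 elements

# Split by the second weight dimension: a 128x250 slice in the forward
# pass plus a 128x1000 partial result in the backward pass.
model_second_dim = batch * (n_out // 2) + batch * n_in   # 160000 elements

print(data_parallel, model_first_dim, model_second_dim)
</code></pre>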
<p>This is of course all relative and depends on the network architecture. Data parallelism will be quite fast for small networks and very slow for large networks; the opposite is true for model parallelism. The more parameters we have, the more beneficial model parallelism becomes. Its true strength comes into play when the weights of a neural network do not fit into the memory of a single GPU; here model parallelism might achieve what would otherwise require thousands of CPUs.</p>
<p>However, if you run small networks where the GPUs are not saturated and have some free capacity (not all cores are running), then model parallelism will be slow. Unlike data parallelism, there are no tricks you can use to hide the communication needed for synchronization; this is because we only have partial information for the whole batch. With this partial information we cannot compute the activations in the next layer and thus have to wait for the synchronization to complete before moving forward.</p>
<p>How the advantages and disadvantages can be combined is best shown by Alex Krizhevsky who <a title="One weird trick for parallelizing convolutional neural networks" href="https://arxiv.org/pdf/1404.5997v2.pdf" target="_blank" rel="noopener noreferrer">demonstrates the efficiency</a> of using data parallelism in the convolutional layers and model parallelism in the dense layers of a convolutional neural network.</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2014/11/09/model-parallelism-deep-learning/">How to Parallelize Deep Learning on GPUs Part 2/2: Model Parallelism</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://timdettmers.com/2014/11/09/model-parallelism-deep-learning/feed/</wfw:commentRss>
			<slash:comments>21</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">85</post-id>	</item>
		<item>
		<title>How to Parallelize Deep Learning on GPUs Part 1/2: Data Parallelism</title>
		<link>https://timdettmers.com/2014/10/09/deep-learning-data-parallelism/</link>
					<comments>https://timdettmers.com/2014/10/09/deep-learning-data-parallelism/#comments</comments>
		
		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Thu, 09 Oct 2014 14:59:09 +0000</pubDate>
				<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Hardware]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[Parallel Computing]]></category>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=65</guid>

					<description><![CDATA[<p>Data parallelism is the bread and butter parallelism algorithm for deep learning. Here I explain how it works, and where the bottlenecks lie, which may cripple performance.</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2014/10/09/deep-learning-data-parallelism/">How to Parallelize Deep Learning on GPUs Part 1/2: Data Parallelism</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In my <a title="How To Build and Use a Multi GPU System for Deep Learning" href="http://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/">last blog post</a> I showed what to look out for when you build a GPU cluster. Most importantly, you want a fast network connection between your servers, and using MPI in your programming will make things much easier than using the options available in CUDA itself.</p>
<p><span lang="en-US">In this blog post I explain how to utilize such a cluster to parallelize neural networks in different ways and what the advantages and downfalls are for such algorithms. The two different algorithms are data and model parallelism. In this blog entry I will focus on data parallelism.</span></p>
<p><span id="more-65"></span></p>
<p><span lang="en-US">So what are these two? Data parallelism is when you use the same model for every thread, but feed it with different parts of the data; model parallelism is when you use the same data for every thread, but split the model among threads.</span></p>
<p><span lang="en-US">For neural networks this means that data parallelism uses the same weights and but different mini-batches in each thread; the gradients need to be synchronized, i.e. averaged, after each pass through a mini-batch.</span></p>
<p><span lang="en-US">Model parallelism splits the weights of the net equally among the threads and all threads work on a single mini-batch; here the generated output after each layer needs to be synchronized, i.e. stacked, to provide the input to the next layer.</span></p>
<p><span lang="en-US">Each method has its advantages and disadvantages which change from architecture to architecture. Let us look at data parallelism first and its bottlenecks first and in the next post I will look at model parallelism.<br />
</span></p>
<p><b>Severity of the network bottleneck of data parallelism</b></p>
<p><span lang="en-US">The idea of data parallelism is simple. If you have, say, 4 GPUs you split a mini-batch into parts for each of them, say, you split a mini-batch with 128 examples into 32 examples for each GPU. Then you feed the respective batch through the net and obtain gradients for each split of the mini-batch. You then use MPI to collect all the gradients and update the parameters with the overall average.</span></p>
<figure><img id="featured-image" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/10/datapara1.png?resize=1025%2C626" alt="Featured image" width="1025" height="626"  data-recalc-dims="1"><figcaption>Data parallelism diagram. There is no communication in the forward pass, and during the backward pass you synchronize gradients.</figcaption></figure>
<p><span lang="en-US">The biggest problem with this approach is that during the backward pass you have to pass the whole gradient to the all other GPUs. If you have a 1000&#215;1000 weight matrix then you need to pass 4000000 bytes to each network. If we take a 40Gbit/s network card – which is already quite fast – then you will need <img src="https://s0.wp.com/latex.php?latex=%7B%5Cfrac%7B4000000%7D%7B40%7D%5Cfrac%7B1%7D%7B40%5Ctimes+1024%5Ctimes+1024+%5Ctimes+102%7D%5Cfrac%7B1%7D%7B8%5Ctimes+1000%7D+%3D+0.75%5Cmbox%7Bms%7D%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="{&#92;frac{4000000}{40}&#92;frac{1}{40&#92;times 1024&#92;times 1024 &#92;times 102}&#92;frac{1}{8&#92;times 1000} = 0.75&#92;mbox{ms}}" class="latex" /> &nbsp;to pass the data from one node to another node (however, there is some additional overhead that is neglected here). If you have six GPUs in two nodes you need to pass the data to five other GPUs, three of which need to go through the network card (3x 0.75ms), while two can use PCIe 3.0 to pass the data to the other two GPUs (about three times as fast; 2x 0.25ms). However, the PCIe pass is independent of the network card pass, so the time needed is determined by the network card time alone, i.e. 2.25ms. However, only one GPU can transfer data through the network card at any one time in any one node, so that we have to multiply that time by three, i.e. 7.75ms. Now the bottom line is, that we just need about 0.2ms for a matrix multiply through that layer (100&#215;1000 dot 1000&#215;1000) and about twice as much for the backward pass. We can pass the gradient while we work on the next layer, but in the end the network card speed limits our overall computation by quite a bit. This is more marked the larger you scale your system: A four node system working on the same problem needs about 20.25ms to pass the gradients around to the other GPUs. One can easily see that data parallelism does not scale with size of the cluster. </span></p>
<p><span lang="en-US">To counter this bottleneck is to reduce the parameters of the gradient through max pooling, maxout units or by simply using convolution. Another way is to increase the computational time/network time ratio by other means, e.g. by using is computationally intensive optimization techniques like RMSProp. You need the same time to pass the gradients to each other, but more time is spend on computation, thus increasing the utility of the fast GPUs.</span></p>
<p><span lang="en-US">Another thing you can do when you use computationally intensive optimization techniques is to hide latency of networking under the computation of the gradients. This means while you passing the first gradient to all other nodes, you can already start a big RMSProp computation asynchronously for the next layer. This technique can give a speedup of about 0-20 % depending on network architecture.</span></p>
<p><span lang="en-US">But this is not the only problem with data parallelism. There is a very technical bottleneck hidden in the GPU architecture which took me quite a while to understand. To understand why the GPU architecture is a problem we first need to look at the usage and purpose of mini-batches.</span></p>
<p><span lang="en-US"><b>A divergence: Why do we use mini-batches?</b></span></p>
<p><span lang="en-US">If we start with randomly initialized parameters or even if we start with pretrained parameters, we do not need a pass through all the data to get an accurate gradient update that will head into the direction of a local minimum. If we take MNIST as an example, if we have a gradient which includes 10 common mistakes that the network does for each class (mini-batch size of about 128), then we will go into a direction that reduces the error greatly already as the gradient captures rough and common mistakes. If we choose a greater batch size (say 512) then we not only capture common errors, but also catch errors that are more subtle. However, it is not very sensible to fine-tune a system if you know it still has major errors. So overall we gain little by increasing the batch size. We need more computation to do roughly the same and this is the main argument why we use a mini-batch size as small as possible. However, if we choose a mini-batch size that is too small, then we do not capture all the common errors which are relevant for the data set and thus our gradient might not head near a local optimum, so there is a limit how small you can make mini-batches.</span></p>
<p><span lang="en-US">How does this relate to data parallelism? If we want a mini-batch size of 128 and use data parallelism to divide it among, say, eight GPUs, then each net calculates gradients for 16 samples which is then averages with the data from the other GPUs. And exactly here kicks the hardware bottleneck in.</span></p>
<p><span lang="en-US"><b>Memory tiles: Patches of fast GPU memory for efficient dot product calculations</b></span></p>
<p><span lang="en-US">To calculate dot products on the GPU, you need to copy small patches, called memory tiles, into shared memory, i.e. very fast but very small memory (limited to a few kilobytes). The problem is that the standard cuBLAS uses either a 64&#215;128 memory tiles and when you have a batch size less than 64 you waste a lot of precious shared memory. Also if you use a batch size not equal to a multiple of 32 you equally waste shared memory (threads are only started in blocks of 32 threads), so one should use a batch size which is a multiple of 32 or multiple of 64 if possible. For data parallelism this means that you lose significant processing speed once you go below a batch size of 64 for each GPU. If you have many GPUs this can be quite limiting and this is yet another reason why the data parallelism approach does not scale well beyond a certain point.</span></p>
<p><span lang="en-US">All in all this sounds quite dire for data parallelism, but data parallelism has its uses. If you know the bottlenecks, you can wield data parallelism as a might tool for certain applications. This is demonstrated by Alex Krishevsky in his <a title="One weird trick for parallelizing convolutional neural networks" href="https://arxiv.org/pdf/1404.5997v2.pdf" target="_blank" rel="noopener noreferrer">paper</a> where he uses data parallelism in the convolutional layers of his net, and thus achieves a speedup of 3.74x by using four GPUs and 6.25x using eight GPUs. His system features two CPUs and 8 GPUs in one node, so he can use the full PCIe speed for the two sets of four GPUs and relatively fast PCIe connection between CPUs to distribute the data among all eight GPUs. </span></p>
<p><span lang="en-US">Besides convolutional neural networks, another use of data parallelism might be to use it in recurrent neural networks, which typically have less parameters and highly computationally intensive gradient updates – both are wins for data parallelism.</span></p>
<p><span lang="en-US">In my next blog post I will focus on model parallelism, which is efficient for large networks and scales well to larger clusters.</span></p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2014/10/09/deep-learning-data-parallelism/">How to Parallelize Deep Learning on GPUs Part 1/2: Data Parallelism</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://timdettmers.com/2014/10/09/deep-learning-data-parallelism/feed/</wfw:commentRss>
			<slash:comments>20</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">65</post-id>	</item>
		<item>
		<title>How To Build and Use a Multi GPU System for Deep Learning</title>
		<link>https://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/</link>
					<comments>https://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/#comments</comments>
		
		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Sun, 21 Sep 2014 15:52:40 +0000</pubDate>
				<category><![CDATA[Hardware]]></category>
		<category><![CDATA[GPU]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[Parallel Computing]]></category>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=52</guid>

					<description><![CDATA[<p>You can use a GPU cluster to accelerate deep learning dramatically. Here you learn how to build and use a successful cluster and how to make sure that you avoid the bottlenecks in large deep learning systems.</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/">How To Build and Use a Multi GPU System for Deep Learning</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>When I started using GPUs for deep learning my deep learning skills improved quickly. When you can run experiments on different algorithms and parameters and get rapid feedback, you learn much more quickly. At the beginning, deep learning is a lot of trial and error: You have to get a feel for what parameters need to be adjusted, or what puzzle piece is missing in order to get a good result. A GPU helps you to fail quickly and learn important lessons so that you can keep improving. Soon my deep learning skills were sufficient to take the <a title="How I did it: Crowdflower 2nd place" href="https://www.kaggle.com/c/crowdflower-weather-twitter/discussion/6488#35640" target="_blank" rel="noopener noreferrer">2<sup>nd</sup> place in the Crowdflower competition</a> where the task was to predict weather labels from given tweets (sunny, raining etc.).</p>
<p>After this success I was tempted to use multiple GPUs in order to train deep learning algorithms even faster. I also took an interest in training very large models which do not fit into a single GPU. I thus wanted to build a little GPU cluster and explore the possibilities of speeding up deep learning with multiple nodes that each hold multiple GPUs. At the same time I was offered contract work as a database developer through my old employer. This gave me the opportunity to earn the money for the GPU cluster I had in mind.</p>
<p><span id="more-52"></span></p>
<p><strong>Important components in a GPU cluster</strong></p>
<p>When I did my research on which hardware to buy, I soon realized that the main bottleneck would be the network bandwidth, i.e. how much data can be transferred from computer to computer per second. The network bandwidth of network cards (affordable cards reach about 4GB/s) does not come even close to the PCIe 3.0 bandwidth (15.75 GB/s). So GPU-to-GPU communication within a computer will be fast, but it will be slow between computers. On top of that, most network cards only work with memory that is registered with the CPU, so the GPU-to-GPU transfer between two nodes would be like this: GPU 1 to CPU 1 to Network Card 1 to Network Card 2 to CPU 2 to GPU 2. What this means is that if one chooses a slow network card then there might be no speedups over a single computer. Even with fast network cards, if the cluster is large, one does not even get speedups from GPUs when compared to CPUs, as the GPUs just work too fast for the network cards to keep up with them.</p>
<p>This is the reason why many big companies like Google and Microsoft are using CPU rather than GPU clusters to train their big neural networks. Luckily, Mellanox and Nvidia recently came together to work on that problem and the result is GPUDirect RDMA, a network card driver that can make sense of GPU memory addresses and thus can transfer data directly from GPU to GPU between computers.</p>
<figure><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/09/rdma.png"><img data-attachment-id="10" data-permalink="https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/gpu-pic-copy/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/08/gpu-pic-copy.jpg?fit=791%2C418&amp;ssl=1" data-orig-size="791,418" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;2.4&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;U9200&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1407955348&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;4.13&quot;,&quot;iso&quot;:&quot;64&quot;,&quot;shutter_speed&quot;:&quot;0.033333&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="GPU pic &#8211; Copy" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/08/gpu-pic-copy.jpg?fit=300%2C159&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/08/gpu-pic-copy.jpg?fit=791%2C418&amp;ssl=1" class="alignnone size-full wp-image-10" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/09/rdma.png?resize=660%2C297" alt="GPU pic - Copy" width="660" height="297"  data-recalc-dims="1"></a><figcaption>NVIDIA GPUDirect RDMA can bypass the CPU for inter-node communication – data is directly transfered between two GPUs.</figcaption></figure>
<p>Generally your best bet for cheap network cards is eBay. I won an auction for a set of 40Gbit/s Mellanox network cards that support GPUDirect RDMA, along with the fitting fibre cables. I already had two GTX Titan GPUs with 6GB of memory each, and as I wanted to build huge models that do not fit into a single GPU’s memory, I decided to keep the 6GB cards and buy more of them to build a cluster with 24GB of total GPU memory. In retrospect this was a rather foolish (and expensive) idea, but little did I know about the performance of such large models and how to evaluate the performance of GPUs. All the lessons I learned from this can be found <a title="Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning" href="https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/" target="_blank" rel="noopener noreferrer">here</a>. Besides that the hardware is rather straightforward. PCIe 3.0 is faster than PCIe 2.0 for the transfers that feed the network card, so I got a PCIe 3.0 board. It is also a good idea to have about twice as much RAM as you have GPU memory to be able to work more freely when handling big nets. As deep learning programs use a single thread per GPU most of the time, a CPU with as many cores as you have GPUs is often sufficient.</p>
<p><strong>Hardware: Check. Software: ?</strong></p>
<p>There are basically two options for multi-GPU programming. You do it in CUDA with a single thread that manages the GPUs directly by setting the current device and by declaring and assigning a dedicated memory stream to each GPU, or the other option is to use <a title="CUDA-aware MPI introduction" href="https://developer.nvidia.com/blog/introduction-cuda-aware-mpi/" target="_blank" rel="noopener noreferrer">CUDA-aware MPI</a> where a single process is spawned for each GPU and all communication and synchronization is handled by MPI. The first method is rather complicated, as you need to create efficient abstractions where you loop through the GPUs and handle streaming and computing. Even with efficient abstractions your code can quickly blow up in line count, making it less readable and maintainable.</p>
<figure><a href="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/09/mpi.png"><img data-attachment-id="10" data-permalink="https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/gpu-pic-copy/" data-orig-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/08/gpu-pic-copy.jpg?fit=791%2C418&amp;ssl=1" data-orig-size="791,418" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;2.4&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;U9200&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1407955348&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;4.13&quot;,&quot;iso&quot;:&quot;64&quot;,&quot;shutter_speed&quot;:&quot;0.033333&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="GPU pic &#8211; Copy" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/08/gpu-pic-copy.jpg?fit=300%2C159&amp;ssl=1" data-large-file="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/08/gpu-pic-copy.jpg?fit=791%2C418&amp;ssl=1" class="alignnone size-full wp-image-10" src="https://i0.wp.com/timdettmers.com/wp-content/uploads/2014/09/mpi.png?resize=607%2C113" alt="GPU pic - Copy" width="607" height="113"  data-recalc-dims="1"></a><figcaption>Some sample MPI code. The first action spreads one chuck of data to all others computer in the network; the second action receives one chuck of data from every process. That is all you need to do, it is very easy!</figcaption></figure>
<p>The second option is much more efficient and clean. MPI is the standard in high performance computing, and its standardized library means that you can be sure that an MPI method really does what it is supposed to do. Under the hood MPI uses the same principles as the first method described above, but the abstraction is so good that it is quite easy to adapt single-GPU code to multi-GPU code (at least for data parallelism). The result is clean and maintainable code, and as such I would always recommend using MPI for multi-GPU computing. MPI libraries come in many languages, so you can pair them with the language of your choice. With these two components you are ready to go and can immediately start programming deep learning algorithms for multiple GPUs.</p>
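<p>As a small illustration of how little code this takes (my own mpi4py sketch, not the code shown in the figure above), the scatter-and-gather pattern from the caption looks like this in Python:</p>
<pre><code>from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# The root process splits one mini-batch into equal chunks, one per process (and thus per GPU)
batch = np.arange(size * 4, dtype=np.float32).reshape(size, 4) if rank == 0 else None
chunk = np.empty(4, dtype=np.float32)
comm.Scatter(batch, chunk, root=0)     # spread one chunk of data to every process

# ... each process would now run its forward and backward pass on its chunk ...
result = chunk * 2.0                   # placeholder for the real computation

gathered = np.empty((size, 4), dtype=np.float32) if rank == 0 else None
comm.Gather(result, gathered, root=0)  # receive one chunk of data from every process
</code></pre>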
<p>[Image source: <a title="GPUDirect" href="https://developer.nvidia.com/gpudirect" target="_blank" rel="noopener noreferrer">NVIDIA GPUDirect Key Technologies</a>]</p>
<p>The post <a rel="nofollow" href="https://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/">How To Build and Use a Multi GPU System for Deep Learning</a> appeared first on <a rel="nofollow" href="https://timdettmers.com">Tim Dettmers</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/feed/</wfw:commentRss>
			<slash:comments>139</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">52</post-id>	</item>
	</channel>
</rss>
