<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>
<channel>
	<title>Comments on: Deep Learning Hardware Limbo</title>
	<atom:link href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/feed/" rel="self" type="application/rss+xml" />
	<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/</link>
	<description>Making deep learning accessible.</description>
	<lastBuildDate>Sat, 19 Sep 2020 16:36:19 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.0.11</generator>
	<item>
		<title>By: Tim Dettmers</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41645</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Sun, 26 Aug 2018 12:58:35 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-41645</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41636&quot;&gt;Kremena Gocheva&lt;/a&gt;.

For my thoughts on the RTX cards, you might want to read my newly updated &lt;a href=&quot;http://timdettmers.com/2018/08/21/which-gpu-for-deep-learning/&quot;&gt;GPU recommendation blog post&lt;/a&gt;.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41636">Kremena Gocheva</a>.</p>
<p>For my thoughts on the RTX cards, you might want to read my newly updated <a href="http://timdettmers.com/2018/08/21/which-gpu-for-deep-learning/">GPU recommendation blog post</a>.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Kremena Gocheva</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41636</link>

		<dc:creator><![CDATA[Kremena Gocheva]]></dc:creator>
		<pubDate>Sun, 26 Aug 2018 08:46:21 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-41636</guid>

					<description><![CDATA[Hello Tim!
Great article, with predictions coming true... and the comments have also become a sort of gathering place where knowledgeable readers discuss new developments!

I have been following the comments for several months now and would like to hear your initial thoughts on the RTX series. The fact that AI is mentioned merely as a tool for raytracing, and that there&#039;s no emphasis on CUDA cores and tensor cores in the consumer cards, seems to suggest there&#039;s no big productivity gain to highlight in terms of deep learning performance. All the more so since NVIDIA apparently promotes DGX-2 stations for training and RTX for inference only. If this proves to be the case, it would be a trifle disappointing... Can you tell from the specs shared so far what the RTX series may turn out to mean for low-budget DL?

Also, I wonder if you or any of the readers have experience with deep learning using the single-precision and half-precision modes announced? Will it, for example, mean that models have to be optimized in new ways, and that older ones would need to be rewritten for training in such an environment?]]></description>
			<content:encoded><![CDATA[<p>Hello Tim!<br />
Great article, with predictions coming true&#8230; and the comments have also become a sort of gathering place where knowledgeable readers discuss new developments!</p>
<p>I have been following the comments for several months now and would like to hear your initial thoughts on the RTX series. The fact that AI is mentioned merely as a tool for raytracing, and that there&#8217;s no emphasis on CUDA cores and tensor cores in the consumer cards, seems to suggest there&#8217;s no big productivity gain to highlight in terms of deep learning performance. All the more so since NVIDIA apparently promotes DGX-2 stations for training and RTX for inference only. If this proves to be the case, it would be a trifle disappointing&#8230; Can you tell from the specs shared so far what the RTX series may turn out to mean for low-budget DL?</p>
<p>Also, I wonder if you or any of the readers have experience with deep learning using the single-precision and half-precision modes announced? Will it, for example, mean that models have to be optimized in new ways, and that older ones would need to be rewritten for training in such an environment?</p>
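<p>For concreteness, here is the kind of change I am wondering about: a minimal half-precision sketch (PyTorch assumed; the toy model and sizes are made up). From what I have read, a loss-scaling scheme is also needed in practice, because fp16 gradients underflow easily.</p>
<pre><code>import torch
import torch.nn as nn

# Hypothetical toy model, cast to fp16 (assumes a CUDA device is available).
model = nn.Linear(512, 10).cuda().half()   # fp16 weights
x = torch.randn(32, 512).cuda().half()     # fp16 inputs
y = torch.randint(0, 10, (32,)).cuda()     # class labels

# Compute the loss in fp32 for numerical stability, then backprop.
loss = nn.functional.cross_entropy(model(x).float(), y)
loss.backward()
</code></pre>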
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Tim Dettmers</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41415</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Tue, 21 Aug 2018 16:07:32 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-41415</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41296&quot;&gt;Lemans24&lt;/a&gt;.

I agree, neither AMD nor Intel is currently a threat to NVIDIA.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41296">Lemans24</a>.</p>
<p>I agree, neither AMD nor Intel is currently a threat to NVIDIA.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Lemans24</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-41296</link>

		<dc:creator><![CDATA[Lemans24]]></dc:creator>
		<pubDate>Sun, 19 Aug 2018 04:58:57 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-41296</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-40853&quot;&gt;Tim Dettmers&lt;/a&gt;.

Just been reviewing your comments over the winter and, just as we thought, AMD and Intel are nowhere to be seen advancing deep learning with affordable hardware and, most importantly, with software that is better than CUDA!!! NVIDIA is still the king of deep learning hardware for the foreseeable future...
Can’t wait to order a Titan RTX!!]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-40853">Tim Dettmers</a>.</p>
<p>Just been reviewing your comments over the winter and, just as we thought, AMD and Intel are nowhere to be seen advancing deep learning with affordable hardware and, most importantly, with software that is better than CUDA!!! NVIDIA is still the king of deep learning hardware for the foreseeable future&#8230;<br />
Can’t wait to order a Titan RTX!!</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Tim Dettmers</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-40853</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Mon, 06 Aug 2018 15:41:29 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-40853</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-40576&quot;&gt;Hamir Shekhawat&lt;/a&gt;.

This is not an easy situation. It&#039;s difficult to predict how prices will change when the GTX 1180 is released. There were rumors that NVIDIA produced way too many GPUs (due to cryptocurrency demand) and cannot get rid of them, so GPU prices might fall further. However, as you say, time is of the essence.

I am still running a Maxwell Titan X which is already 3 years old and still running quite okay. I guess you could get 2 years of runtime from a GTX 1080 Ti before it becomes slow. However, you probably could get 4 years of runtime out of a GTX 1180 if you wait for another month.

Another solution might be to rent a Hetzner GPU server (cheaper than AWS if you have it up all the time) for a few months until the Titan X Turing (February 2019 maybe?) or GTX 1080 Ti hits the market (???).

In the end, it is a matter of personal preference. You have to decide if you want a cheap GPU now for 2 years, or an expensive GPU later for 4 years.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-40576">Hamir Shekhawat</a>.</p>
<p>This is not an easy situation. It&#8217;s difficult to predict how prices will change when the GTX 1180 is released. There were rumors that NVIDIA produced way too many GPUs (due to cryptocurrency demand) and cannot get rid of them, so GPU prices might fall further. However, as you say, time is of the essence.</p>
<p>I am still running a Maxwell Titan X which is already 3 years old and still running quite okay. I guess you could get 2 years of runtime from a GTX 1080 Ti before it becomes slow. However, you probably could get 4 years of runtime out of a GTX 1180 if you wait for another month.</p>
<p>Another solution might be to rent a Hetzner GPU server (cheaper than AWS if you have it up all the time) for a few months until the Titan X Turing (February 2019 maybe?) or GTX 1080 Ti hits the market (???).</p>
<p>In the end, it is a matter of personal preference. You have to decide if you want a cheap GPU now for 2 years, or an expensive GPU later for 4 years.</p>
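<p>As a rough illustration, the tradeoff boils down to cost per useful year. All numbers below are made-up placeholders, not actual prices:</p>
<pre><code># Hypothetical cost-per-year comparison: buy now vs. wait for the next generation.
price_1080ti, years_1080ti = 700.0, 2   # assumed price and useful lifetime
price_next, years_next = 1000.0, 4      # assumed next-gen price and lifetime
rental_bridge = 3 * 100.0               # e.g. 3 months of a rented GPU server

print("buy now:", price_1080ti / years_1080ti, "USD/year")
print("wait   :", (price_next + rental_bridge) / years_next, "USD/year")
</code></pre>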
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Hamir Shekhawat</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-40576</link>

		<dc:creator><![CDATA[Hamir Shekhawat]]></dc:creator>
		<pubDate>Tue, 31 Jul 2018 07:42:19 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-40576</guid>

					<description><![CDATA[I just completed my bachelor&#039;s and have dived pretty deep into machine learning. I am tired of using AWS and really want to switch to personal hardware now. How long do you suggest I wait before buying one? GPU prices have come down almost to their MSRP, and my budget currently allows for one 1080 Ti. I want to start learning Reinforcement Learning and to work on DeepMind&#039;s Lab and research papers, but time is of the essence. What do you suggest I do, and is buying a 1080 Ti right now future-proof?
P.S. I live in India, and since all electronics are imported here, everything is ~22% more expensive and it takes time for new hardware to come to India. Shipping from the US is not an option. Life in India is pretty hard when it comes to computer components (EXPENSIVE!)]]></description>
			<content:encoded><![CDATA[<p>I just completed my bachelor&#8217;s and have dived pretty deep into machine learning. I am tired of using AWS and really want to switch to personal hardware now. How long do you suggest I wait before buying one? GPU prices have come down almost to their MSRP, and my budget currently allows for one 1080 Ti. I want to start learning Reinforcement Learning and to work on DeepMind&#8217;s Lab and research papers, but time is of the essence. What do you suggest I do, and is buying a 1080 Ti right now future-proof?<br />
P.S. I live in India, and since all electronics are imported here, everything is ~22% more expensive and it takes time for new hardware to come to India. Shipping from the US is not an option. Life in India is pretty hard when it comes to computer components (EXPENSIVE!)</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Tim Dettmers</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-39373</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Mon, 02 Jul 2018 09:17:04 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-39373</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-38545&quot;&gt;Tfer&lt;/a&gt;.

They released only limited information. It seems they will have a GPU with 32 GB of RAM, 1 TB/s of bandwidth, and something similar to Tensor Cores. But it was also mentioned that this card will be &quot;expensive&quot;, and I am not sure it will be interesting for most deep learning researchers.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-38545">Tfer</a>.</p>
<p>They released only limited information. It seems they will have a GPU with 32 GB of RAM, 1 TB/s of bandwidth, and something similar to Tensor Cores. But it was also mentioned that this card will be &#8220;expensive&#8221;, and I am not sure it will be interesting for most deep learning researchers.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Tfer</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-38545</link>

		<dc:creator><![CDATA[Tfer]]></dc:creator>
		<pubDate>Mon, 11 Jun 2018 18:16:39 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-38545</guid>

					<description><![CDATA[Did AMD announce any new neural network stuff at Computex this year?]]></description>
			<content:encoded><![CDATA[<p>Did AMD announce any new neural network stuff at Computex this year?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Tim Dettmers</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-34120</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Wed, 09 May 2018 08:34:05 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-34120</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-33560&quot;&gt;bthomas&lt;/a&gt;.

Thanks for your feedback. I did not know that the Windows driver behaves in that way. Interesting!]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-33560">bthomas</a>.</p>
<p>Thanks for your feedback. I did not know that the Windows driver behaves in that way. Interesting!</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: bthomas</title>
		<link>https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-33560</link>

		<dc:creator><![CDATA[bthomas]]></dc:creator>
		<pubDate>Thu, 03 May 2018 18:56:25 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.com/?p=627#comment-33560</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-31995&quot;&gt;Tim Dettmers&lt;/a&gt;.

The only reason I am able to get those performance numbers for the Titan Xp over the 1080 Ti is that I can use the full 12GB on the Titan Xp, compared to just under 10GB on the 1080 Ti, because I can run the Titan Xp with the TCC driver. The Windows (WDDM) device driver always allocates twice as much memory as the Titan Xp does in TCC mode, and in TCC mode the Titan Xp also uses bidirectional DMA transfers, which is noticeable when you are reading/writing multiple gigabytes per second from a GPU card. I am able to run twice as many simulations using the 12GB of memory compared to the 11GB on the 1080 Ti, and it definitely makes a difference if you want to do any real-time analysis.

Really hoping that a Titan Xv card is in the works by the end of the year...]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/comment-page-1/#comment-31995">Tim Dettmers</a>.</p>
<p>The only reason I am able to get those performance numbers for the Titan Xp over the 1080 Ti is that I can use the full 12GB on the Titan Xp, compared to just under 10GB on the 1080 Ti, because I can run the Titan Xp with the TCC driver. The Windows (WDDM) device driver always allocates twice as much memory as the Titan Xp does in TCC mode, and in TCC mode the Titan Xp also uses bidirectional DMA transfers, which is noticeable when you are reading/writing multiple gigabytes per second from a GPU card. I am able to run twice as many simulations using the 12GB of memory compared to the 11GB on the 1080 Ti, and it definitely makes a difference if you want to do any real-time analysis.</p>
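<p>For anyone who wants to check this on their own machine, here is a minimal sketch (assuming the pynvml package is installed) that prints total vs. actually free GPU memory; the driver overhead shows up as the gap between the two:</p>
<pre><code>import pynvml

# Query the first GPU's memory through NVML (works under both WDDM and TCC).
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print("total: %.2f GiB" % (info.total / 1024**3))
print("free : %.2f GiB" % (info.free / 1024**3))  # noticeably lower under WDDM
pynvml.nvmlShutdown()
</code></pre>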
<p>Really hoping that a Titan Xv card is in the works by the end of the year&#8230;</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
