<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	
	>
<channel>
	<title>
	Comments on: About Me	</title>
	<atom:link href="https://timdettmers.com/about/feed/" rel="self" type="application/rss+xml" />
	<link>https://timdettmers.com/about/</link>
	<description>Making deep learning accessible.</description>
	<lastBuildDate>Tue, 09 Dec 2025 16:46:09 +0000</lastBuildDate>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.0.11</generator>
	<item>
		<title>
		By: Tim Dettmers		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-71466</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Mon, 27 Apr 2020 04:44:18 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-71466</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/about/comment-page-1/#comment-71333&quot;&gt;Steve Chang&lt;/a&gt;.

I usually use 3 monitors and have no on-board video; I just attach all three monitors to one GPU and it works fine. That GPU usually has about 400-600 MB less memory because the monitors take up a bit of space, but for most deep learning models that is not a problem.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/about/comment-page-1/#comment-71333">Steve Chang</a>.</p>
<p>I usually use 3 monitors and have no on-board video; I just attach all three monitors to one GPU and it works fine. That GPU usually has about 400-600 MB less memory because the monitors take up a bit of space, but for most deep learning models that is not a problem.</p>
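As a quick back-of-envelope sketch of what that overhead leaves you (the card sizes below are typical examples, not measurements):

```python
# Rough estimate of usable VRAM on the GPU that drives the monitors.
# The 600 MB worst-case overhead is the figure from the comment above.

def usable_vram_mb(total_gb: float, monitor_overhead_mb: int = 600) -> int:
    """VRAM (in MB) left for deep learning after display overhead."""
    return int(total_gb * 1024) - monitor_overhead_mb

print(usable_vram_mb(8))   # 8 GB card (e.g. RTX 2070) -> 7592
print(usable_vram_mb(11))  # 11 GB card (e.g. RTX 2080 Ti) -> 10664
```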
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Steve Chang		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-71333</link>

		<dc:creator><![CDATA[Steve Chang]]></dc:creator>
		<pubDate>Thu, 23 Apr 2020 19:14:13 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-71333</guid>

					<description><![CDATA[I am in the process of building a deep learning &quot;desktop&quot; machine. I have a rather silly question. The motherboard I intend to use does not have on-board video to drive 2 monitors. If I have an RTX 2070 SLI setup, do I need to buy another video card? Or will my two RTX 2070s drive 2 monitors in regular VGA/desktop display mode while I use the GPUs for deep learning? Sorry for the naive question, but it's apparently not easily answered through Google searches.]]></description>
			<content:encoded><![CDATA[<p>I am in the process of building a deep learning &#8220;desktop&#8221; machine. I have a rather silly question. The motherboard I intend to use does not have on-board video to drive 2 monitors. If I have an RTX 2070 SLI setup, do I need to buy another video card? Or will my two RTX 2070s drive 2 monitors in regular VGA/desktop display mode while I use the GPUs for deep learning? Sorry for the naive question, but it's apparently not easily answered through Google searches.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Tim Dettmers		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-70462</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Sat, 04 Apr 2020 02:17:38 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-70462</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/about/comment-page-1/#comment-69083&quot;&gt;SAURAV GHOSH&lt;/a&gt;.

It is powerful, but you can run into compatibility issues with libraries. I have no direct experience with AMD cards, and there is not much data out there on how good the experience is, so I cannot give you a full recommendation. If you go ahead with the AMD card, it would be great to hear how your experience goes.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/about/comment-page-1/#comment-69083">SAURAV GHOSH</a>.</p>
<p>It is powerful, but you can run into compatibility issues with libraries. I have no direct experience with AMD cards, and there is not much data out there on how good the experience is, so I cannot give you a full recommendation. If you go ahead with the AMD card, it would be great to hear how your experience goes.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: SAURAV GHOSH		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-69083</link>

		<dc:creator><![CDATA[SAURAV GHOSH]]></dc:creator>
		<pubDate>Sat, 22 Feb 2020 15:27:27 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-69083</guid>

					<description><![CDATA[Hi, I read your post on deep learning hardware. I am planning to buy hardware for deep learning research. I am an NLP researcher and am also planning to venture into Kaggle. I am seeking your advice on the RX 580 GPU: do you think it is powerful enough for NLP + image modelling? Looking forward to your response.]]></description>
			<content:encoded><![CDATA[<p>Hi, I read your post on deep learning hardware. I am planning to buy hardware for deep learning research. I am an NLP researcher and am also planning to venture into Kaggle. I am seeking your advice on the RX 580 GPU: do you think it is powerful enough for NLP + image modelling? Looking forward to your response.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Tim Dettmers		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-64352</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Tue, 22 Oct 2019 01:34:32 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-64352</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/about/comment-page-1/#comment-64205&quot;&gt;Ashis M&lt;/a&gt;.

I like that thinking! I think that could work. A couple of things, though: for BERT I would recommend a GPU with more memory (RTX 2080 Ti). Also, you will live with some caveats: the CPU does not have many cores, which can become a problem when you need to load large amounts of data with each mini-batch; this can cost 10-20% of runtime performance. For NLP this should not be the biggest problem, though. Theoretically, PCIe 3.0 GPUs are compatible with PCIe 2.0 motherboards, but someone once commented here that it did not work for him. So theory does not always work out in practice. I would recommend keeping an eye on the return policy if you buy a GPU, so that you can return it if you find it is not compatible with your motherboard.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/about/comment-page-1/#comment-64205">Ashis M</a>.</p>
<p>I like that thinking! I think that could work. A couple of things, though: for BERT I would recommend a GPU with more memory (RTX 2080 Ti). Also, you will live with some caveats: the CPU does not have many cores, which can become a problem when you need to load large amounts of data with each mini-batch; this can cost 10-20% of runtime performance. For NLP this should not be the biggest problem, though. Theoretically, PCIe 3.0 GPUs are compatible with PCIe 2.0 motherboards, but someone once commented here that it did not work for him. So theory does not always work out in practice. I would recommend keeping an eye on the return policy if you buy a GPU, so that you can return it if you find it is not compatible with your motherboard.</p>
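A minimal stdlib sketch of the worker-based batch loading behind that caveat (PyTorch's DataLoader does the same thing via its num_workers argument; all names and the fake data here are illustrative):

```python
# Minimal sketch of worker-based mini-batch loading, the pattern behind
# DataLoader(num_workers=N). With few CPU cores, fewer workers can keep
# the GPU waiting on data between mini-batches.
from concurrent.futures import ThreadPoolExecutor

def load_batch(batch_idx: int) -> list:
    """Stand-in for reading and preprocessing one mini-batch from disk."""
    return [batch_idx * 10 + i for i in range(4)]  # fake "samples"

num_workers = 2  # kept small, as on a low-core CPU like an i3
with ThreadPoolExecutor(max_workers=num_workers) as pool:
    batches = list(pool.map(load_batch, range(3)))

print(batches)  # [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]
```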
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Ashis M		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-64205</link>

		<dc:creator><![CDATA[Ashis M]]></dc:creator>
		<pubDate>Fri, 18 Oct 2019 04:12:11 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-64205</guid>

					<description><![CDATA[After going through your post on building a deep learning PC, I feel I can revive my old desktop &#038; defer buying an i7-9750H + RTX 2060 based laptop. I am mostly interested in sequence processing, like trying out ELMo, BERT, and BiLSTM/CRF. Would it be wise to upgrade my i3-3220 by adding an RTX 2070 Super GPU? The motherboard has one x16 PCI Express 2.0 slot for GPU support. I already have 2 x 8 GB RAM and a 512 GB SATA SSD.

thanks]]></description>
			<content:encoded><![CDATA[<p>After going through your post on building a deep learning PC, I feel I can revive my old desktop &amp; defer buying an i7-9750H + RTX 2060 based laptop. I am mostly interested in sequence processing, like trying out ELMo, BERT, and BiLSTM/CRF. Would it be wise to upgrade my i3-3220 by adding an RTX 2070 Super GPU? The motherboard has one &#215;16 PCI Express 2.0 slot for GPU support. I already have 2 x 8 GB RAM and a 512 GB SATA SSD.</p>
<p>thanks</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Tim Dettmers		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-60365</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Sun, 04 Aug 2019 20:31:10 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-60365</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/about/comment-page-1/#comment-59279&quot;&gt;ML&lt;/a&gt;.

For NLP research I recommend an RTX 2080 Ti. If you want to run large transformers you need that extra memory; 8 GB of GPU memory can be limiting.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/about/comment-page-1/#comment-59279">ML</a>.</p>
<p>For NLP research I recommend an RTX 2080 Ti. If you want to run large transformers you need that extra memory; 8 GB of GPU memory can be limiting.</p>
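As a rough sketch of why memory gets tight with large transformers (FP32 training with Adam; activations are ignored, so real usage is higher; the parameter counts are the published BERT sizes):

```python
# Rough memory estimate for training with Adam: weights, gradients,
# and two optimizer moment buffers, all in FP32 (4 bytes each).
# Activations are ignored, so actual usage is higher still.

def train_memory_gb(n_params: int, bytes_per_value: int = 4) -> float:
    states = 4  # weights + gradients + Adam first/second moments
    return n_params * bytes_per_value * states / 1024**3

print(round(train_memory_gb(110_000_000), 2))  # BERT-base
print(round(train_memory_gb(340_000_000), 2))  # BERT-large
```

Even before activations, BERT-large already eats most of an 8 GB card, which is why the 11 GB on an RTX 2080 Ti matters.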
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: ML		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-59279</link>

		<dc:creator><![CDATA[ML]]></dc:creator>
		<pubDate>Fri, 19 Jul 2019 08:52:26 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-59279</guid>

					<description><![CDATA[I am planning to set up a deep learning workstation with a colleague. As a motherboard, we want to use an Asus TUF X299 Mark 1. The CPU will be an i9-9820X with 44 PCIe lanes.

So, my question now is what GPU would be the best choice for natural language processing, like text-to-speech, speech-to-text, and other speech recognition models? A Titan RTX is financially impossible, so we thought of the RTX 2080 Super (to be launched soon) or the RTX 2080 Ti, whereas from AMD the Radeon VII seems quite attractive due to its large VRAM.

Do you have any experience with this hardware concerning compatibility with TF, PyTorch drivers for Linux, etc.? Which one would be the best choice? If Nvidia is preferred, is larger VRAM or a higher base clock more important for NLP? There is so much contradictory information on the internet that it is hard to discern who really has experience and who just wants to do some marketing (or simply voices prejudice against the companies).]]></description>
			<content:encoded><![CDATA[<p>I am planning to set up a deep learning workstation with a colleague. As a motherboard, we want to use an Asus TUF X299 Mark 1. The CPU will be an i9-9820X with 44 PCIe lanes.</p>
<p>So, my question now is what GPU would be the best choice for natural language processing, like text-to-speech, speech-to-text, and other speech recognition models? A Titan RTX is financially impossible, so we thought of the RTX 2080 Super (to be launched soon) or the RTX 2080 Ti, whereas from AMD the Radeon VII seems quite attractive due to its large VRAM.</p>
<p>Do you have any experience with this hardware concerning compatibility with TF, PyTorch drivers for Linux, etc.? Which one would be the best choice? If Nvidia is preferred, is larger VRAM or a higher base clock more important for NLP? There is so much contradictory information on the internet that it is hard to discern who really has experience and who just wants to do some marketing (or simply voices prejudice against the companies).</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Tim Dettmers		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-58058</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Fri, 14 Jun 2019 02:50:44 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-58058</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/about/comment-page-1/#comment-57697&quot;&gt;Carlos&lt;/a&gt;.

eGPUs should fare better for deep learning than for gaming since, unlike in gaming, you do not need to transfer as much data back and forth between the CPU and the GPU. Most of the computation stays on the GPU. There are some bottlenecks in certain applications like computer vision, but they are less severe than for gaming. I would expect a decrease of 25-30% in performance, which is not too bad!]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/about/comment-page-1/#comment-57697">Carlos</a>.</p>
<p>eGPUs should fare better for deep learning than for gaming since, unlike in gaming, you do not need to transfer as much data back and forth between the CPU and the GPU. Most of the computation stays on the GPU. There are some bottlenecks in certain applications like computer vision, but they are less severe than for gaming. I would expect a decrease of 25-30% in performance, which is not too bad!</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>
		By: Tim Dettmers		</title>
		<link>https://timdettmers.com/about/comment-page-1/#comment-58056</link>

		<dc:creator><![CDATA[Tim Dettmers]]></dc:creator>
		<pubDate>Fri, 14 Jun 2019 02:46:23 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?page_id=1#comment-58056</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/about/comment-page-1/#comment-57513&quot;&gt;micah&lt;/a&gt;.

ASICs are specialized hardware that serve a single purpose. For example, a TPU computes matrix multiplications. An ASIC for bitcoin mining computes hashes and that is it — it cannot be used for deep learning. If bitcoin tanks, it will be useless in any case.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/about/comment-page-1/#comment-57513">micah</a>.</p>
<p>ASICs are specialized hardware that serve a single purpose. For example, a TPU computes matrix multiplications. An ASIC for bitcoin mining computes hashes and that is it — it cannot be used for deep learning. If bitcoin tanks, it will be useless in any case.</p>
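For illustration, the one operation a TPU's systolic array is built around, written out in plain Python (a toy sketch of what the hardware accelerates, not how you would actually run it):

```python
# The core TPU workload: matrix multiplication, C = A @ B.
# Plain-Python version for illustration only.

def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```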
]]></content:encoded>
		
			</item>
	</channel>
</rss>
