<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>
<channel>
	<title>Comments for Tim Dettmers</title>
	<atom:link href="https://timdettmers.com/comments/feed/" rel="self" type="application/rss+xml" />
	<link>https://timdettmers.com/</link>
	<description>Making deep learning accessible.</description>
	<lastBuildDate>Mon, 15 Dec 2025 19:30:27 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.0.11</generator>
	<item>
		<title>Comment on Why AGI Will Not Happen by Jeff Brower</title>
		<link>https://timdettmers.com/2025/12/10/why-agi-will-not-happen/comment-page-1/#comment-256170</link>

		<dc:creator><![CDATA[Jeff Brower]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 19:30:27 +0000</pubDate>
		<guid isPermaLink="false">https://timdettmers.com/?p=1233#comment-256170</guid>

					<description><![CDATA[In reply to &lt;a href=&quot;https://timdettmers.com/2025/12/10/why-agi-will-not-happen/comment-page-1/#comment-255279&quot;&gt;dirk bruere&lt;/a&gt;.

Agree on both points.]]></description>
			<content:encoded><![CDATA[<p>In reply to <a href="https://timdettmers.com/2025/12/10/why-agi-will-not-happen/comment-page-1/#comment-255279">dirk bruere</a>.</p>
<p>Agree on both points.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Why AGI Will Not Happen by Jeff Brower</title>
		<link>https://timdettmers.com/2025/12/10/why-agi-will-not-happen/comment-page-1/#comment-256169</link>

		<dc:creator><![CDATA[Jeff Brower]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 19:29:11 +0000</pubDate>
		<guid isPermaLink="false">https://timdettmers.com/?p=1233#comment-256169</guid>

					<description><![CDATA[AGI will happen, but as it stands the brain does in 40 W, with msec memory (meters/sec signal propagation), and in 1400 cc what takes our state-of-the-art AI megawatts, thousands of pounds, and nsec memory (not to mention EDAC). The difference is so vast we can&#039;t compare them even on a log-log chart.

So what is the brain doing? Obviously not matrix multiplies, gradient descent, or any other algorithm requiring 100% error-free memory access with a high degree of numerical accuracy. Possibly some form of associative or weight-addressable memory, but if so it must be a hard-to-imagine huge amount, something like petabytes -- and in that case how this memory is organized, and more importantly how content and search paths are organized, is what we need to understand.]]></description>
			<content:encoded><![CDATA[<p>AGI will happen, but as it stands the brain does in 40 W, with msec memory (meters/sec signal propagation), and in 1400 cc what takes our state-of-the-art AI megawatts, thousands of pounds, and nsec memory (not to mention EDAC). The difference is so vast we can&#8217;t compare them even on a log-log chart.</p>
<p>So what is the brain doing? Obviously not matrix multiplies, gradient descent, or any other algorithm requiring 100% error-free memory access with a high degree of numerical accuracy. Possibly some form of associative or weight-addressable memory, but if so it must be a hard-to-imagine huge amount, something like petabytes &#8212; and in that case how this memory is organized, and more importantly how content and search paths are organized, is what we need to understand.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Why AGI Will Not Happen by dirk bruere</title>
		<link>https://timdettmers.com/2025/12/10/why-agi-will-not-happen/comment-page-1/#comment-255279</link>

		<dc:creator><![CDATA[dirk bruere]]></dc:creator>
		<pubDate>Thu, 11 Dec 2025 13:16:02 +0000</pubDate>
		<guid isPermaLink="false">https://timdettmers.com/?p=1233#comment-255279</guid>

					<description><![CDATA[So, GPUs may have reached their limits, but you make no comment on more analog approaches like memristor tech, which promises to be orders of magnitude more efficient.
As for high-value uses of general-purpose humanoid robots, how about the medical field, with dementia and elder care, for example?]]></description>
			<content:encoded><![CDATA[<p>So, GPUs may have reached their limits, but you make no comment on more analog approaches like memristor tech, which promises to be orders of magnitude more efficient.<br />
As for high-value uses of general-purpose humanoid robots, how about the medical field, with dementia and elder care, for example?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Why AGI Will Not Happen by Ulf</title>
		<link>https://timdettmers.com/2025/12/10/why-agi-will-not-happen/comment-page-1/#comment-255152</link>

		<dc:creator><![CDATA[Ulf]]></dc:creator>
		<pubDate>Wed, 10 Dec 2025 18:29:20 +0000</pubDate>
		<guid isPermaLink="false">https://timdettmers.com/?p=1233#comment-255152</guid>

					<description><![CDATA[Thank you for your email!
I must have signed up somewhere at some point! Through you I came across this whole topic; I have built a machine and am learning. Thank you! I have no contacts so far; ChatGPT 5.1 is a great help to me! (I mentioned you once, you are well known.)
I myself come from SAP logistics consulting, so a completely different background!
Kind regards,
Ulf Mayr]]></description>
			<content:encoded><![CDATA[<p>Thank you for your email!<br />
I must have signed up somewhere at some point! Through you I came across this whole topic; I have built a machine and am learning. Thank you! I have no contacts so far; ChatGPT 5.1 is a great help to me! (I mentioned you once, you are well known.)<br />
I myself come from SAP logistics consulting, so a completely different background!<br />
Kind regards,<br />
Ulf Mayr</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by Zoran</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/comment-page-2/#comment-204944</link>

		<dc:creator><![CDATA[Zoran]]></dc:creator>
		<pubDate>Mon, 14 Apr 2025 18:12:28 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4#comment-204944</guid>

					<description><![CDATA[The Threadripper 3945WX runs almost 2x slower than the Intel 12400, as per my test.]]></description>
			<content:encoded><![CDATA[<p>The Threadripper 3945WX runs almost 2x slower than the Intel 12400, as per my test.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by Zoran</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/comment-page-2/#comment-132150</link>

		<dc:creator><![CDATA[Zoran]]></dc:creator>
		<pubDate>Fri, 16 Feb 2024 06:01:42 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4#comment-132150</guid>

					<description><![CDATA[Where can I find the code for the 8-bit inference?]]></description>
			<content:encoded><![CDATA[<p>Where can I find the code for the 8-bit inference?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by Andrea de Luca</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/comment-page-2/#comment-127183</link>

		<dc:creator><![CDATA[Andrea de Luca]]></dc:creator>
		<pubDate>Mon, 18 Dec 2023 18:24:43 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4#comment-127183</guid>

					<description><![CDATA[Hi Tim. I think there is a bit of confusion in the article regarding the RTX A6000. You wrote:
&quot;The best GPUs for academic and startup servers seem to be A6000 Ada GPUs (not to be confused with A6000 Turing).&quot;

The RTX A6000 is a 48 GB Ampere card, not Turing. Its performance (in any domain) is slightly better than a 3090, while in the charts it performs equal to the old Turing workstation cards (which is frankly impossible). Other than that, the Ada 48 GB workstation card is officially called &quot;RTX 6000 Ada&quot; (without the &quot;A&quot;). Thanks.]]></description>
			<content:encoded><![CDATA[<p>Hi Tim. I think there is a bit of confusion in the article regarding the RTX A6000. You wrote:<br />
&#8220;The best GPUs for academic and startup servers seem to be A6000 Ada GPUs (not to be confused with A6000 Turing).&#8221;</p>
<p>The RTX A6000 is a 48 GB Ampere card, not Turing. Its performance (in any domain) is slightly better than a 3090, while in the charts it performs equal to the old Turing workstation cards (which is frankly impossible). Other than that, the Ada 48 GB workstation card is officially called &#8220;RTX 6000 Ada&#8221; (without the &#8220;A&#8221;). Thanks.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by Andrea de Luca</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/comment-page-2/#comment-127182</link>

		<dc:creator><![CDATA[Andrea de Luca]]></dc:creator>
		<pubDate>Mon, 18 Dec 2023 18:24:02 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4#comment-127182</guid>

					<description><![CDATA[Hi Tim. I think there is a bit of confusion in the article regarding the RTX A6000. You wrote:
&#060;&#062;

The RTX A6000 is a 48 GB Ampere card, not Turing. Its performance (in any domain) is slightly better than a 3090, while in the charts it performs equal to the old Turing workstation cards (which is frankly impossible). Other than that, the Ada 48 GB workstation card is officially called &quot;RTX 6000 Ada&quot; (without the &quot;A&quot;). Thanks.]]></description>
			<content:encoded><![CDATA[<p>Hi Tim. I think there is a bit of confusion in the article regarding the RTX A6000. You wrote:<br />
&lt;&gt;</p>
<p>The RTX A6000 is a 48 GB Ampere card, not Turing. Its performance (in any domain) is slightly better than a 3090, while in the charts it performs equal to the old Turing workstation cards (which is frankly impossible). Other than that, the Ada 48 GB workstation card is officially called &#8220;RTX 6000 Ada&#8221; (without the &#8220;A&#8221;). Thanks.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by Zoran</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/comment-page-2/#comment-118648</link>

		<dc:creator><![CDATA[Zoran]]></dc:creator>
		<pubDate>Sun, 30 Apr 2023 15:21:59 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4#comment-118648</guid>

					<description><![CDATA[Hello,

Have you had any chance to test high-end CPU-only inference, for example on an Intel 13900K CPU? It seems like it could be pretty fast, and a dedicated server with one can be around 100 EUR per month. Much cheaper compared to a GPU dedicated server or cloud.]]></description>
			<content:encoded><![CDATA[<p>Hello,</p>
<p>Have you had any chance to test high-end CPU-only inference, for example on an Intel 13900K CPU? It seems like it could be pretty fast, and a dedicated server with one can be around 100 EUR per month. Much cheaper compared to a GPU dedicated server or cloud.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>Comment on Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by David Laxer</title>
		<link>https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/comment-page-2/#comment-115887</link>

		<dc:creator><![CDATA[David Laxer]]></dc:creator>
		<pubDate>Tue, 17 Jan 2023 20:24:30 +0000</pubDate>
		<guid isPermaLink="false">http://timdettmers.wordpress.com/?p=4#comment-115887</guid>

					<description><![CDATA[Hi Tim,

Thanks for your posts.

Do you have any comments on Apple&#039;s M1/M2 chips for Deep Learning research?
Apple&#039;s Metal Performance Shaders API only supports float32 as well as 32-bit complex numbers. Do you see this as a
&#039;show stopper&#039; for Deep Learning research on M1/M2 chips?
The M1/M2 processors do use considerably less power than NVIDIA GPUs;
does this significantly change the trade-off calculus?

Thanks in advance.]]></description>
			<content:encoded><![CDATA[<p>Hi Tim,</p>
<p>Thanks for your posts.</p>
<p>Do you have any comments on Apple&#8217;s M1/M2 chips for Deep Learning research?<br />
Apple&#8217;s Metal Performance Shaders API only supports float32 as well as 32-bit complex numbers. Do you see this as a<br />
&#8216;show stopper&#8217; for Deep Learning research on M1/M2 chips?<br />
The M1/M2 processors do use considerably less power than NVIDIA GPUs;<br />
does this significantly change the trade-off calculus?</p>
<p>Thanks in advance.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
