Klaus-Gerd Giesen<p>"Before you can ask an <a href="https://chaos.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> model to help you [...], the model is born in a <a href="https://chaos.social/tags/data" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>data</span></a> center. Racks of <a href="https://chaos.social/tags/servers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>servers</span></a> hum along for months, ingesting <a href="https://chaos.social/tags/training" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>training</span></a> data, crunching numbers, and performing <a href="https://chaos.social/tags/computations" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>computations</span></a>. [...] It’s only after this <a href="https://chaos.social/tags/training" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>training</span></a>, when <a href="https://chaos.social/tags/consumers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>consumers</span></a> or <a href="https://chaos.social/tags/customers" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>customers</span></a> “inference” the AI models to get answers or generate outputs, that model makers hope to [...] eventually turn a <a href="https://chaos.social/tags/profit" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>profit</span></a>. [...]<br>80–90% of computing power for AI is used for inference."</p><p><a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">technologyreview.com/2025/05/2</span><span class="invisible">0/1116327/ai-energy-usage-climate-footprint-big-tech/</span></a></p>