Subject : Re: Memory Powering the AI Revolution
From : janburse (at) *nospam* fastmail.fm (Mild Shock)
Groups : sci.physics
Date : 16. Jan 2025, 11:07:12
Other headers
Message-ID : <vmalog$1pah$4@solani.org>
References : 1
User-Agent : Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
See also:
The Special Memory Powering the AI Revolution
https://www.youtube.com/watch?v=yAw63F1W_Us
Mild Shock wrote:
I currently believe that one of the fallacies around
LLMs is the assumption that learning produces small,
lightweight NNs (Neural Networks), which would then be
limited to blurred categories and approximative
judgments. I suspect it is quite different: the learning
produces very large, massive NNs, which can afford to
represent ontologies quite precisely and with breadth.
But how is that done? One puzzle piece could be a new
type of memory, so-called High-Bandwidth Memory (HBM),
an architecture in which DRAM dies are vertically
stacked and connected by Through-Silicon Vias (TSVs).
It is found, for example, in NVIDIA GPUs such as the
A100 and H100. Compare that to the DDR3 you might find
in your laptop or PC. Could it give you a license to
trash the L1/L2 caches with your algorithms?
                   HBM3                       DDR3
Bandwidth          1.2 TB/s (per stack)       12.8 GB/s to 25.6 GB/s
Latency            Low, optimized for         Higher latency
                   real-time tasks
Power Efficiency   More efficient             Higher power consumption
                   despite high speeds        than HBM3
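
A back-of-envelope sketch in Python (my own illustration, not from
the video): assuming a hypothetical 70B-parameter model stored in
fp16 and the peak bandwidth figures from the table above, it computes
how long one full pass over the weights takes. During token-by-token
decoding an LLM has to stream essentially all of its weights for each
generated token, so this pass time roughly bounds tokens per second;
real GPUs use several HBM stacks and rarely reach the peak figure.

# Minimal sketch: time for one full read of the model weights at a
# given peak memory bandwidth. All numbers are illustrative
# assumptions, not measurements.

def weight_pass_seconds(num_params, bytes_per_param, bandwidth_bytes_per_s):
    """Seconds to stream all weights once at the given bandwidth."""
    return num_params * bytes_per_param / bandwidth_bytes_per_s

PARAMS = 70e9          # hypothetical 70B-parameter model
BYTES_PER_PARAM = 2    # fp16
HBM3_BW = 1.2e12       # 1.2 TB/s per stack (figure from the table above)
DDR3_BW = 25.6e9       # 25.6 GB/s (upper DDR3 figure from the table above)

print("HBM3: %.2f s per pass" % weight_pass_seconds(PARAMS, BYTES_PER_PARAM, HBM3_BW))
print("DDR3: %.2f s per pass" % weight_pass_seconds(PARAMS, BYTES_PER_PARAM, DDR3_BW))
# -> roughly 0.12 s vs 5.5 s per full pass over the weights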