Memory Powering the AI Revolution

Subject: Memory Powering the AI Revolution
From: janburse (at) *nospam* fastmail.fm (Mild Shock)
Newsgroups: sci.logic
Date: 16 Jan 2025, 11:04:42
Message-ID: <vmaljq$1pah$1@solani.org>
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0 SeaMonkey/2.53.20
I currently believe that one of the fallacies
around LLMs is the assumption that learning
produces small, lightweight NNs (Neural Networks),
which are then subject to blurred categories and
approximative judgments. But I guess it's quite
different: the learning produces very large, massive NNs,
which can afford to represent ontologies quite precisely
and with breadth. But how is it done? One puzzle piece
could be new types of memory, so-called High-Bandwidth
Memory (HBM), an architecture where DRAM dies are
vertically stacked and connected using Through-Silicon
Vias (TSVs). It is found, for example, in NVIDIA GPUs
like the A100 and H100. Compare this to the DDR3 that
might be found in your laptop or PC. Could it give you a
license to trash L1/L2 caches with your algorithms?
                  HBM3                   DDR3
Bandwidth         1.2 TB/s (per stack)   12.8 GB/s to 25.6 GB/s
Latency           Low, optimized for     Higher latency
                  real-time tasks
Power efficiency  More efficient         Higher power consumption
                  despite high speeds    than HBM3
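To get a feel for what that bandwidth gap means, here is a
back-of-the-envelope sketch (in Python) of how long it takes to
stream a model's weights through memory once at the peak rates
quoted above. The 80 GB working-set size is a hypothetical figure
chosen to match an A100's HBM capacity, not something from the
post; real sustained bandwidth is lower than peak.

```python
# Rough estimate: time to stream a weight matrix through memory
# once, at the peak bandwidths from the table above.
# 80 GB is a hypothetical model size (about one A100's HBM),
# not a figure from the original post.

GB = 1e9
weights_bytes = 80 * GB       # hypothetical working set

hbm3_bw = 1.2e12              # 1.2 TB/s per HBM3 stack (peak)
ddr3_bw = 25.6 * GB           # 25.6 GB/s (upper DDR3 figure)

t_hbm3 = weights_bytes / hbm3_bw
t_ddr3 = weights_bytes / ddr3_bw

print(f"HBM3: {t_hbm3 * 1000:.1f} ms per full pass")   # ~66.7 ms
print(f"DDR3: {t_ddr3 * 1000:.1f} ms per full pass")   # ~3125.0 ms
print(f"speedup: {t_ddr3 / t_hbm3:.0f}x")              # ~47x
```

On these peak numbers a single pass over the weights takes tens of
milliseconds on HBM3 versus seconds on DDR3 — which is why
bandwidth-bound workloads like LLM inference lean so heavily on
stacked memory rather than on cache-friendly access patterns.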
