Subject: Re: Neural Networks (MNIST inference) on the “3-cent” Microcontroller
From: gneuner2 (at) *nospam* comcast.net (George Neuner)
Newsgroups: comp.arch.embedded
Date: 22 Oct 2024, 21:39:42
Organization: i2pn2 (i2pn.org)
Message-ID: <metfhjl089r3lrmjju7ivsqm8alcuf6u88@4ax.com>
References: 1
User-Agent: ForteAgent/8.00.32.1272
On Mon, 21 Oct 2024 20:06:28 UTC, D. Ray <d@ray> wrote:

> Buoyed by the surprisingly good performance of neural networks with
> quantization-aware training on the CH32V003, I wondered how far this
> can be pushed. How much can we compress a neural network while still
> achieving good test accuracy on the MNIST dataset? When it comes to
> absolutely low-end microcontrollers, there is hardly a more compelling
> target than the Padauk 8-bit microcontrollers. These are
> microcontrollers optimized for the simplest and lowest-cost
> applications there are. The smallest device of the portfolio, the
> PMS150C, sports 1024 words of 13-bit one-time-programmable memory and
> 64 bytes of RAM, more than an order of magnitude smaller than the
> CH32V003. In addition, it has a proprietary accumulator-based 8-bit
> architecture, as opposed to a much more powerful RISC-V instruction
> set.
>
> Is it possible to implement an MNIST inference engine, which can
> classify handwritten numbers, also on a PMS150C?
>
> <https://cpldcpu.wordpress.com/2024/05/02/machine-learning-mnist-inference-on-the-3-cent-microcontroller/>
>
> <https://archive.md/DzqzL>
Depends on whether you mean implementing /their/ recognizer, or just
implementing a recognizer that could be trained using their data set.
Any 8-bitter can easily handle the computations ... FP is not required
- fixed-point fractions will do fine. The issue is how much memory is
needed and what your target chip brings to the party.