Subject : Re: Beazley's Problem
From : annada (at) *nospam* tilde.green (Annada Behera)
Newsgroups : comp.lang.python
Date : 24. Sep 2024, 09:25:57
Organization : tilde.green
Message-ID : <08bddb548dce214b1d41432e92d431d0ef304929.camel@tilde.green>
References : 1 2 3 4 5 6 7
User-Agent : Evolution 3.52.2
-----Original Message-----
From: Paul Rubin <no.email@nospam.invalid>
Subject: Re: Beazley's Problem
Date: 09/24/2024 05:52:27 AM
Newsgroups: comp.lang.python
> def f_prime(x: float) -> float:
>     return 2*x
>
> You might enjoy implementing that with automatic differentiation (not
> to be confused with symbolic differentiation) instead.
>
> http://blog.sigfpe.com/2005/07/automatic-differentiation.html
Before I learned about automatic differentiation, I thought neural
network backpropagation was magic. Coding up reverse-mode autodiff is
a little trickier than forward-mode autodiff, though.
(a) Forward-mode autodiff takes less space (just a dual component
alongside every variable) but needs more time to compute. For a
function f: R -> R^m, forward mode can compute all the derivatives in
O(m^0) = O(1) passes, but it needs O(m) passes for f: R^m -> R, one
per input (a rough dual-number sketch is at the end of this post).
(b) Reverse-mode autodiff requires you to build a computation graph,
which takes space, but is faster the other way around: for a function
f: R^m -> R it can compute the whole gradient in O(m^0) = O(1)
backward passes, and vice versa (O(m) passes for f: R -> R^m). A
sketch of that is at the end as well.
Almost all neural network training these days uses reverse-mode autodiff.
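
Here is a rough, minimal sketch of forward mode with dual numbers
(the Dual class and names below are just made up for illustration,
not from any particular library; a real implementation covers far
more operations):

class Dual:
    # value plus an infinitesimal part: val + eps, with eps**2 == 0
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule on the infinitesimal part
        return Dual(self.val * other.val,
                    self.eps * other.val + self.val * other.eps)
    __rmul__ = __mul__

def f(x):
    return x * x

x = Dual(3.0, 1.0)    # seed the input's derivative with 1
y = f(x)
print(y.val, y.eps)   # 9.0 6.0, i.e. f(3) = 9 and f'(3) = 6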
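
And a rough reverse-mode counterpart that records a tiny computation
graph and walks it backwards (again made up for illustration; a real
implementation would do a proper topological sweep instead of this
naive recursion):

class Var:
    # graph node; parents holds (parent, local_gradient) pairs
    def __init__(self, val, parents=()):
        self.val, self.grad, self.parents = val, 0.0, parents
    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.val * other.val,
                   [(self, other.val), (other, self.val)])
    __rmul__ = __mul__
    def backward(self, seed=1.0):
        # chain rule: accumulate the adjoint, push it to the parents
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(3.0), Var(4.0)
z = x * y + x          # z = x*y + x
z.backward()           # seed dz/dz = 1
print(x.grad, y.grad)  # 5.0 3.0, i.e. dz/dx = y + 1, dz/dy = x

The graph bookkeeping (and handling shared nodes correctly) is
exactly the part that makes reverse mode a little trickier to get
right than the dual-number version.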