Philosophical bits that I collected over the years for building AI.

Mar. 3, 2024

I’ve been thinking about what kind of rule governs our internal ODE/SDE. Existing works each satisfy some of the above points:

  1. Learning rules from biological inspiration, for example the Hebbian rule and STDP. These describe learning rules, i.e., both the neuron activations and the connection weights are part of the ODE/SDE state variables, and the evolution function is the learning rule. However, they lack a mathematical explanation of what on earth is accomplished by these rules.
  2. Associative memory using Hopfield networks. Attractors are used to store information, and this is also a learning rule, but it lacks the probability part.
  3. BPTT. If we view an RNN as a discretized ODE, BPTT is a pragmatic algorithm for training it. However, it satisfies none of the above philosophical points.
  4. Neural ODEs as normalizing flows, and diffusion models. Both build a probability model out of an ODE/SDE. But they do not make use of critical points (both evolve the system over a fixed time window, from a fixed prior distribution to the target distribution; nothing is said about what happens if the system is evolved further in time), and they are not about learning rules.
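To make point 1 concrete, here is a minimal sketch (my own toy setup, not from any of the works above) in which activations and weights evolve together as one discretized ODE, with a Hebbian weight update:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, dt = 5, 0.01, 0.1
x = rng.normal(size=n)   # neuron activations: part of the state
W = np.zeros((n, n))     # connection weights: also part of the state

for _ in range(100):
    # leaky activation dynamics with recurrent input and small noise
    x = x + dt * (-x + W @ x + rng.normal(scale=0.1, size=n))
    # Hebbian rule: weight change proportional to pre * post activity
    W = W + dt * eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)  # no self-connections

print(W.shape)  # (5, 5)
```

The point of the sketch is only structural: the evolution function couples `x` and `W`, so "the learning rule" and "the dynamics" are one system — yet nothing here explains mathematically what objective, if any, this system optimizes.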
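For point 2, a standard toy Hopfield retrieval illustrates attractors storing information (a textbook sketch with made-up patterns, not tied to any specific work cited above):

```python
import numpy as np

# two patterns to store, with +/-1 entries
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])

# Hebbian storage: W_ij = (1/N) * sum over patterns of xi_i * xi_j
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)

# start from a corrupted version of the first pattern (last bit flipped)
state = np.array([1, -1, 1, -1, 1, 1])
for _ in range(10):
    state = np.sign(W @ state)  # synchronous update toward an attractor
    state[state == 0] = 1

print(state)  # retrieves the stored pattern [1, -1, 1, -1, 1, -1]
```

The dynamics flow into a fixed point that reproduces the stored pattern, which is exactly the "attractors store information" idea; but the dynamics are deterministic, with no probabilistic account of the stored content.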
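For point 4, a 1D continuous normalizing flow sketch shows the "fixed time window" structure: samples and their log-density are integrated for exactly time T, via the instantaneous change-of-variables formula d(log p)/dt = -df/dx (toy drift f(x) = -x and Euler steps are my assumptions for illustration):

```python
import numpy as np

def f(x):     # drift of the ODE dx/dt = f(x); contracts toward 0
    return -x

def dfdx(x):  # its derivative, needed for the density update
    return -np.ones_like(x)

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                     # samples from the base N(0, 1)
logp = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # base log-density at each sample
dt, T = 0.01, 1.0

for _ in range(int(T / dt)):
    logp -= dt * dfdx(x)  # d(log p)/dt = -df/dx along trajectories
    x += dt * f(x)        # Euler step of the ODE

# after exactly time T, x follows roughly N(0, e^{-2T})
print(x.std())
```

Everything is pinned to the fixed horizon T: the model says nothing about critical points, and nothing about what distribution you get if you keep integrating past T — which is the limitation noted above.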