
Turns out you can treat a function mapping parameters to outputs as a product that acts as a *scaling* of continuous inputs to outputs, and that this sits somewhere between neural nets and regression trees.

Well, that's what I did, and the MAE (mean absolute error) works out to about 0.5%, half a percentage point. I did training and a little validation, but the training set is only 2.5k samples, so it may just be overfitting.
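For concreteness, here's a minimal sketch of the kind of holdout check I mean. The shapes (2.5k samples, 4 features) and the plain least-squares fit are stand-ins, not the actual model (that comes below); the point is just comparing training MAE against holdout MAE:

```python
import numpy as np

def mae_percent(y_true, y_pred):
    # MAE as a percentage of the mean target magnitude
    return 100.0 * np.mean(np.abs(y_true - y_pred)) / np.mean(np.abs(y_true))

# Stand-in data: 2.5k samples, 4 features (hypothetical shapes, not my real set)
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(2500, 4))
y = X @ np.array([0.5, 1.0, 2.0, 0.25]) + rng.normal(0.0, 0.1, size=2500)

# Hold out 500 samples; a holdout MAE much worse than the training MAE
# is the overfitting signal I'm worried about
idx = rng.permutation(len(y))
train, hold = idx[:2000], idx[2000:]

w = np.linalg.lstsq(X[train], y[train], rcond=None)[0]  # stand-in fit
print("train MAE %:  ", mae_percent(y[train], X[train] @ w))
print("holdout MAE %:", mae_percent(y[hold], X[hold] @ w))
```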

The idea is you have X, y, and z.
z is your parameters: for every row in y, you have an entry in z. You then try to find a z such that the product z_i * y_i yields the corresponding value at X_i.
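Here's my best guess at a concrete shape for that, as a minimal sketch: z as a scaling vector fit in closed form by least squares, so that y_i scaled by z reconstructs the row X_i. The shapes and the closed-form fit are assumptions for illustration, not necessarily the exact construction:

```python
import numpy as np

def fit_z(X, y):
    # Least-squares multiplicative fit: pick z so that y_i * z_j ~= X_ij.
    # Closed form per column: z_j = sum_i(y_i * X_ij) / sum_i(y_i**2)
    return (y @ X) / (y @ y)

def apply_z(z, y):
    # Scale every y_i by every z_j (outer product) to reconstruct X
    return np.outer(y, z)

# Sanity check on synthetic data: recover a known scaling vector
rng = np.random.default_rng(1)
y = rng.uniform(1.0, 5.0, size=200)
z_true = np.array([0.5, 2.0, 3.0])
X = np.outer(y, z_true) + rng.normal(0.0, 0.01, size=(200, 3))

z = fit_z(X, y)
print(z)  # ~ [0.5, 2.0, 3.0]
```

One caveat on the shapes: the post says z has an entry per row of y, but a strictly per-row z would just be the elementwise ratio X_i / y_i, so this sketch shares one z across rows (one entry per feature) to make it something you actually fit.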

Naturally I gave it the ridiculous name of a 'zcombiner'.

Well, it turns out this beautiful bastard of a paper just dropped into my lap, and it's been around since 2020:

https://mimuw.edu.pl/~bojan/papers/...

which does the exact same goddamn thing.

I mean, they didn't realize it applies to ML, but it's the same fucking math I did.

z forms a monoid: it comes with an identity and an operation that set up an isomorphism between the elements of the rows of y and the elements at the corresponding indices of X.
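To make that framing concrete (my gloss, not the paper's exact construction): take the reals under addition as the monoid. Scaling by a fixed z is then a monoid homomorphism, and when z is nonzero it's invertible, i.e. an isomorphism between y-space and X-space:

```python
# Monoid: (R, +, 0). Scaling by a fixed z preserves the structure:
# h(0) = 0 and h(a + b) = h(a) + h(b)
z = 2.5
h = lambda x: z * x

a, b = 3.0, 4.0
assert h(0.0) == 0.0            # identity maps to identity
assert h(a + b) == h(a) + h(b)  # operation is preserved
assert (h(a) / z) == a          # z != 0 makes h invertible: an isomorphism
```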

And I've just got to say, it feels good.
