I totally agree: if you understand how the weights in an ANN are set during learning, it might not surprise you that inverting an image doesn't return the same result, i.e. the activations of the relevant layers are not invariant under intensity inversion (x ↦ 255 − x for 8-bit pixels, or f(x) = 1 − x after normalizing to [0, 1]). I think the point here is the never-ending surprise that sophisticated and successful algorithms can do some things humans do very well, yet become completely useless when the parameters or setup are changed in a way that seems trivial to us. It's another sign of how far we are from a general-purpose AI.
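
To see the effect concretely, here is a minimal sketch (assuming PyTorch and torchvision are installed; the pretrained ResNet-18 and the local file `cat.jpg` are illustrative choices of mine, not anything from the original discussion): it classifies an image and its intensity-inverted negative and compares the predictions.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier (illustrative choice).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard preprocessing, minus normalization, so pixel values stay in [0, 1].
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

x = preprocess(Image.open("cat.jpg").convert("RGB"))  # hypothetical test image
x_inv = 1.0 - x  # intensity inversion: f(x) = 1 - x on [0, 1]

with torch.no_grad():
    pred = model(normalize(x).unsqueeze(0)).argmax(dim=1).item()
    pred_inv = model(normalize(x_inv).unsqueeze(0)).argmax(dim=1).item()

print(pred, pred_inv)  # the two class indices typically disagree
```

On a typical photo the two predictions usually differ: nothing during training ever pushed the weights to treat an image and its negative the same way.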
