Neural-net hallucination produced by repeatedly feeding Google's image-recognition engine its own output.
If we apply the algorithm iteratively to its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the output is purely a product of the neural network, as seen in the following images:
From "Inceptionism: Going Deeper into Neural Networks" by Alexander Mordvintsev, Christopher Olah, and Mike Tyka, Google Research
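The quoted passage describes a simple feedback loop: run gradient ascent on an image to amplify whatever a chosen layer of the network responds to, zoom in slightly, and repeat. Below is a minimal sketch of that loop, assuming PyTorch and a pretrained GoogLeNet (the Inception architecture the original work used, though the authors worked in Caffe). The layer choice, step counts, and learning rate here are illustrative assumptions, and input normalization is omitted for brevity; `dream_step` and `zoom` are hypothetical helper names, not from the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).to(device).eval()

def dream_step(img, layer, steps=10, lr=0.02):
    """Gradient ascent on the norm of one layer's activations:
    nudge the image toward whatever that layer 'sees' in it."""
    img = img.clone().detach().requires_grad_(True)
    activations = {}
    handle = layer.register_forward_hook(
        lambda module, inputs, output: activations.update(out=output))
    for _ in range(steps):
        model(img)
        loss = activations["out"].norm()
        loss.backward()
        with torch.no_grad():
            # Normalize the gradient so the step size is scale-independent.
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    handle.remove()
    return img.detach()

def zoom(img, factor=1.05):
    """Crop the center and resize back up, simulating a slow zoom-in."""
    _, _, h, w = img.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    return TF.resize(img[:, :, top:top + ch, left:left + cw], [h, w],
                     antialias=True)

# Start from random noise, as the passage suggests, and iterate:
# each frame is the previous dream, amplified and zoomed.
img = torch.rand(1, 3, 224, 224, device=device)
frames = []
for _ in range(50):
    img = dream_step(img, model.inception4c)  # layer choice is arbitrary
    img = zoom(img)
    frames.append(img.clamp(0, 1).cpu())
```

Because the image is re-fed after every zoom, features the network amplifies in one frame become the seed for the next, which is what yields the "endless stream of new impressions" the authors describe.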