Today's Progress:
1. Read the articles 'Using Artificial Intelligence to Augment Human Intelligence' and 'Visualizing Representations: Deep Learning and Human Beings' from the recommended readings section of tensorflow/lucid
Link to code: I'll post the code when the lucid/distill team releases a paper on caricatures
Thoughts:
I found the article 'Using Artificial Intelligence to Augment Human Intelligence' very interesting. The authors discuss how learning math and art (and other things) augments our ability to conceptualize and communicate. So if we learn about neural nets, we open our minds to potentially greater depth. My feeling is that certain topics have a richness that allows for this, while others allow for it to a lesser degree. Take, for example, the 'study' of video games: a person's mind is probably deepened more by playing a complex strategy game than a quick-reaction shooter, though I would expect some improvement in both gamers' cognition.
Future Work:
11. Design a short music video (proof of concept: 30 seconds) using feature visualization as a visual effect
Today's Progress:
2. Reviewed some colab notebooks: 'Negative Neurons', 'Diversity Visualization', 'Neuron Interactions', and 'Regularizing Visualizations'
Link to code: https://colab.research.google.com/drive/1L_CMEVJ3wvKOOHL5xRlycS5IVRPwUg5o
Thoughts:
I found the task of producing perlin noise surprisingly difficult. The code repo I found today helped tremendously, but I still need to understand certain parts of it better.
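For future reference, here's a minimal NumPy sketch of 2D perlin noise (my own condensed version, not the repo's code, so details may differ): random unit gradients on a coarse lattice, dotted with per-pixel offset vectors and blended with Perlin's quintic fade.

```python
import numpy as np

def fade(t):
    # Perlin's quintic smoothstep: zero 1st and 2nd derivatives at t = 0 and 1
    return 6 * t**5 - 15 * t**4 + 10 * t**3

def perlin2d(shape, res, seed=0):
    """2D perlin noise. `shape` is the output (H, W); `res` is the number
    of lattice cells (rows, cols). Simplest when shape divides evenly by res."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0, 2 * np.pi, (res[0] + 1, res[1] + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # unit gradients
    ys = np.linspace(0, res[0], shape[0], endpoint=False)
    xs = np.linspace(0, res[1], shape[1], endpoint=False)
    yy, xx = np.meshgrid(ys, xs, indexing='ij')
    y0, x0 = yy.astype(int), xx.astype(int)
    fy, fx = yy - y0, xx - x0                       # fractional position in cell
    def corner(dy, dx):
        g = grads[y0 + dy, x0 + dx]                 # gradient at that lattice corner
        return g[..., 0] * (fx - dx) + g[..., 1] * (fy - dy)  # dot with offset
    u, v = fade(fx), fade(fy)
    nx0 = corner(0, 0) * (1 - u) + corner(0, 1) * u
    nx1 = corner(1, 0) * (1 - u) + corner(1, 1) * u
    return nx0 * (1 - v) + nx1 * v

noise = perlin2d((128, 128), (8, 8))
```

The value is exactly zero at every lattice point (the dot products all vanish there), which is a handy sanity check when debugging a port of someone else's implementation.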
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1bbSn8p7KjK-2utuhnjgoBMJBwcXvXAES
Thoughts:
Today was an overall success. I still have a long way to go to understand the code fully. I also need to improve my understanding of tensors overall.
Future Work:
1. Create a script in gimp using python-fu (better yet imageio python library) to convert panels to animation
2. Review the following colab notebooks: 'semantic dictionaries', 'activation grids', 'spatial attribution', and 'neuron groups'
Today's Progress:
Link to code: https://colab.research.google.com/drive/18N_Lq-mCI5Ie2pbaW93P7CJv9api7L2y
Thoughts: I feel like I accomplished just a bit better than the minimum today. That said, I learned a good amount about the 'gram matrix', which is produced by multiplying a matrix by its transpose.
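A quick NumPy illustration of that definition (toy data, my own example): each row is a feature channel, and the gram matrix entry (i, j) is the dot product of channels i and j, so it records which channels co-activate.

```python
import numpy as np

# Toy "feature map": C channels, each flattened to N spatial positions
C, N = 4, 9
F = np.random.default_rng(0).normal(size=(C, N))

# Gram matrix: (C, C), symmetric; entry (i, j) = dot(channel i, channel j)
gram = F @ F.T
```

The diagonal holds each channel's squared norm, and the symmetry (gram == gram.T) falls straight out of the definition.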
Future Work:
7. Explore the work of the original authors, create a visualization of the branching threads (google scholar API)
Today's Progress:
1. Reread the section from 'Differential Image Parameterizations' (DIP) on 'Compositional Pattern Producing Networks' (CPPN)
Link to code: https://colab.research.google.com/drive/1FIkCm0AKmyVpgDVyjOM2pANZtfigR2xK
Thoughts: I had glossed over this section before. Now I feel it deserves a full paper of its own. This section also explains the paper's title: "CPPNs are a differentiable image parameterization — a general tool for parameterizing images in any neural art or visualization task"
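To make that concrete for myself, here's a toy CPPN in NumPy: each pixel's coordinates (x, y, r) are pushed through a small MLP to produce that pixel's colour. Note this sketch uses random, untrained weights, so it only shows the parameterization; in the paper the weights are what gets optimized (by backpropagating an objective through the network), which is what makes it a *differentiable* parameterization.

```python
import numpy as np

def cppn_image(h, w, hidden=24, layers=3, seed=0):
    """Render an image by mapping each pixel coordinate through a small MLP."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs / w - 0.5) * 2.0          # coordinates scaled to roughly [-1, 1]
    y = (ys / h - 0.5) * 2.0
    r = np.sqrt(x**2 + y**2)          # radius input gives circular structure
    a = np.stack([x, y, r], axis=-1).reshape(-1, 3)   # one input row per pixel
    for _ in range(layers):
        W = rng.normal(size=(a.shape[1], hidden))
        a = np.tanh(a @ W)            # smooth activations -> smooth images
    W_out = rng.normal(size=(hidden, 3))
    rgb = 1.0 / (1.0 + np.exp(-(a @ W_out)))          # sigmoid -> [0, 1]
    return rgb.reshape(h, w, 3)

img = cppn_image(64, 64)
```

Because the image is a function of coordinates, you can re-render the same weights at any resolution, which seems to be a big part of the appeal.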
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1Qvli2GDBPXA09WzMpQsoET9YCuMD11I6
Thoughts: The images seem very different from the original feature visualizations.
Future Work:
1. Compare the visualizations produced by a given neuron in vanilla feature visualization VS CPPN VS semi-transparent
2. Find a way to visualize the output of the image_sample function directly (seems to produce perlin noise)
Today's Progress:
Link to code: https://colab.research.google.com/drive/1fQvGxE2JFZJ33Ofcm3C20jPM0B8wCtZl
Thoughts: Feeling thrilled that this came together. I was anticipating this section greatly over the last few days. As it turns out, I had to make several attempts in order to get the 3D model to display with a texture at all. The way I have it now is somewhat hacky... but I'm happy with the aesthetic quality of the results.
Future Work:
5. Find some robotic looking layers... design a cool robot model in blender with feature visualization as a part of the workflow {shiny outer shell, complex internals showing through the cracks}
Today's Progress:
3. Created a unique 3d texture using style transfer and the methods discussed in Differentiable Image Parameterizations
Link to code: https://colab.research.google.com/drive/1rX0TePYx07ee8Exd4ZarmrWKFnzWapfI
Thoughts: Very proud of the results from today's experiment.
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1VZu7qhlsFch0mVw-ctsRF0UiPbBX8s0T
Thoughts: Ran out of GPU memory for the colab worksheet today. Tried to run it again on my computer with some success. Also ran into some GPU issues there (needs more power). I'd like to find a way to produce more frames without increasing the image size.
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1eiLy9ciqMFozqkKGwZXUU1J1XyiBMGh5
Thoughts: Felt the weight of failure today. After vigorously studying the code I'd previously copied for the 2D perlin noise... I found myself confused and frustrated. I was able to decipher a good portion of the code, but in the end I was left with a puzzle half-constructed. As a path forward, I found another coder who had been more successful at such efforts.
Future Work:
1. Compare the 3D and 2D example provided by Pierre Vigier (https://github.com/pvigier)
Today's Progress:
Link to code: https://colab.research.google.com/drive/1_skeSqAPfzLzKrwSIMw_ASmaUAYoFL2l
Thoughts: Today's task was challenging. It felt good to succeed.
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1bWqdV-vVCxzf2dRtHnjrlpcKUgSx7sWe
Thoughts: I'm finally starting to 'get' perlin noise.
Future Work:
2. Explore using noise for interpolation of a single image (in a similar way to the ramp technique of Gene Kogan)
4. Explore interpolating visualizations with the peaks (of noise) first... sort of like a mountainous rock rising from the sand.
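Item 4 could work roughly like this (a NumPy sketch with random stand-ins for the two visualizations; `softness` is a parameter I made up to control the width of the transition edge): treat the noise as a height field and let the second image emerge at the peaks first as t rises.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((32, 32, 3))      # stand-in for the starting visualization
b = rng.random((32, 32, 3))      # stand-in for the target visualization
noise = rng.random((32, 32))     # height field; peaks reveal `b` first

def blend(t, softness=0.1):
    """t=0 shows `a`, t=1 shows `b`; in between, `b` appears first
    where the noise is highest, like rock rising out of sand."""
    thresh = (1.0 - t) * (1.0 + softness) - softness   # sweeps above/below [0,1]
    edge = np.clip((noise - thresh) / softness, 0.0, 1.0)[..., None]
    return a * (1.0 - edge) + b * edge

mid = blend(0.5)
```

Swapping in perlin noise for the uniform noise should give large connected "islands" instead of speckle, which is probably closer to the mountainous-rock effect.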
Today's Progress:
1. Explored a variety of functions from tensorflow/lucid that support feature visualization (such as the details of 'render')
Link to code: https://colab.research.google.com/drive/17TatN0y7ZP8YO4C2M6JQDMdve3GWfkib
Thoughts: My main goal today was to mask the gradients of the update function as demonstrated by Gene Kogan https://twitter.com/genekogan/status/1074359156084072448. At this I did not succeed. Tensorflow presents a unique set of challenges when it comes to debugging. I set myself the easier task of creating some cool looking large renders with a CPPN. I also discovered a rather awesome blog that talks about CPPN.
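For the record, the masking idea itself is simple even if wiring it into TensorFlow wasn't. Here's a generic NumPy sketch (a toy quadratic objective, not lucid's render loop and not Gene Kogan's code): multiply each gradient step by a spatial mask so only part of the image gets optimized.

```python
import numpy as np

# Toy objective: push pixel values toward a target of 1.0.
# The point is the update rule: each gradient step is multiplied by a
# spatial mask, so only the unmasked region is optimized. This stands in
# for masking the update inside a feature-visualization loop.
h, w = 8, 8
img = np.zeros((h, w))
mask = np.zeros((h, w))
mask[:, : w // 2] = 1.0           # optimize only the left half

lr = 0.5
for _ in range(20):
    grad = 2.0 * (img - 1.0)      # gradient of (img - 1)^2 w.r.t. img
    img -= lr * grad * mask       # masked update: right half never moves
```

In TensorFlow the same effect should be achievable by multiplying the computed gradient tensor by a mask before it's applied, rather than by masking the image itself.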
Future Work:
1. Explore the work of 'Otoro' (David Ha?) on CPPNs http:https://blog.otoro.net/archive.html (2016)
Today's Progress:
2. Explored a variety of feature visualizations using the 'diversity' method from this article: https://distill.pub/2017/feature-visualization/
Link to code: https://colab.research.google.com/drive/15PDZTFeQioK2SiRCHQJwZW4EE8p7QhJW
Thoughts: I spent some more time today looking at how I might approach masking gradients. From what I can see, Gene Kogan prefers Caffe... Not a library I'm feeling eager to learn. I discovered a number of functions in tensorflow that might do the trick but I'm still just stumbling in the dark without a sense of real direction. https://github.com/genekogan/deepdream
Future Work:
Today's Progress:
Link to code: I'll post the code when the distill/lucid community releases a paper on this.
Thoughts: Sometimes progress is very slow.
Future Work:
Today's Progress:
1. Successfully produced non-square, arbitrary size outputs using feature visualization and caricatures
Link to code: I'll post the code when the distill/lucid community releases a paper on this.
Thoughts: Today's work was surprisingly easy. I had noticed the parameter 'h' before and suspected that it would be useful. 'sd' (standard deviation) is another parameter for initialization which might be interesting to explore.
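As I understand it (worth double-checking against lucid's source), the width and height parameters set the shape of the image buffer and 'sd' scales its Gaussian initialization. A NumPy sketch of just that initialization, with my own hypothetical function name:

```python
import numpy as np

def init_image(w, h=None, sd=0.01, channels=3, seed=0):
    """Random image buffer in NHWC layout: h defaults to w (square output),
    and sd sets the spread of the Gaussian initialization."""
    h = h if h is not None else w
    rng = np.random.default_rng(seed)
    return rng.normal(scale=sd, size=(1, h, w, channels))

buf = init_image(256, h=128, sd=0.5)   # non-square: 256 wide, 128 tall
```

A larger sd should mean a noisier, higher-contrast starting point for the optimization, which could be an interesting knob for the aesthetics.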
Future Work:
Today's Progress:
1. Worked on my coming presentation on Art and AI at Holberton: https://www.meetup.com/san-francisco-school-of-ai/events/257387650/
Link to code: https://colab.research.google.com/drive/1Hq_9fQtA3Fa8wx3Z4giLt4uCQfPJCcE8
Thoughts: Today's work was initially just doing what had to be done. This evening, however, I accomplished something outside of the prescribed box. This feeling of touching hard-to-reach places is surely what makes artists and researchers pursue their respective crafts with passion.
Future Work:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb
https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/14_DeepDream.ipynb
https://github.com/llSourcell/deep_dream_challenge/blob/master/deep_dream.py
Today's Progress:
4. Tried out increasing the number of stages for an interpolation without increasing the number of low res tensors (initial results seem fine)
Link to code: https://colab.research.google.com/drive/1efa7zyz9NwPe7hkR-lmiMMoatlD5MoqY
Thoughts: Had a lot of fun with this today. I still have some further areas to explore. Attempted to combine with interpolation: total failure.
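The stage-count trick from item 4, sketched in NumPy with random stand-ins for the low-res tensors: the number of output stages is decoupled from the number of keyframes, so you can ask for as many in-between frames as you like from the same fixed set.

```python
import numpy as np

rng = np.random.default_rng(0)
keyframes = rng.random((4, 8, 8, 3))   # a few fixed low-res tensors

def interpolate(keys, n_stages):
    """Spread n_stages evenly across the keyframes via linear blending.
    More stages never requires more low-res tensors."""
    ts = np.linspace(0, len(keys) - 1, n_stages)      # position along the path
    idx = np.minimum(ts.astype(int), len(keys) - 2)   # left keyframe index
    frac = (ts - idx)[:, None, None, None]            # blend weight per stage
    return keys[idx] * (1.0 - frac) + keys[idx + 1] * frac

frames = interpolate(keyframes, 16)    # 16 stages from only 4 tensors
```

The first and last stages land exactly on the first and last keyframes, so the endpoints of the animation stay fixed while the density in between grows.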
Future Work:
2. Ask about transformations on the Slack (why didn't my original method of rolling work as a transform? Or did it...)
3. Find a method to improve tiling (ideas: active rotation, smaller adjustments along the gradient, tiled color-noise (initial values))
Today's Progress:
Link to code: https://colab.research.google.com/drive/1Qaj6G06rZGJaC6eYEM03T52bx4gu-9ih
Thoughts: Two ideas failed (interpolating caricatures and zooming), but in the end, I was able to achieve something new and interesting.
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1x1dd4YLwlcMye64B6R3CRiMhxKtHaf1D
Thoughts: It seems I've exhausted my daily resources for colab. We'll see if I am allowed to resume tomorrow.
Future Work:
Today's Progress:
Link to code: https://colab.research.google.com/drive/1S-UGWm-K8jA7g285O59LbHbby8_LRH4w
Thoughts: My assumption yesterday about exhausting resources turned out to be false. I suspect a corrupt image file was at fault. Today I did exhaust the GPU resources afforded by colab.
Future Work: