The Wonderful World of CLIP
Last touched February 18, 2021
I realise that I all too easily fall into the trap of writing things on Twitter and thinking “I’ll clean that up, flesh it out, and make a blog post of it.” But I never do. So better to break it out of the silo a bit first and then work on it later.
I’m excited about @OpenAI’s CLIP, more so than DALL-E, and people are doing some really cool stuff with it (that’s what happens when you release an amazing model!). Here’s a thread to capture some interesting tweets.
Age prediction:
Anime description:
Prompt-based image optimisation using a CPPN (the basic optimisation loop is sketched after this list):
GAN-based image optimisation:
More GAN-based image optimisation:
Image captioning:
Matching photos to poetry:
Finding photos with incorrect orientation
Controlling anime generation with text (in conjunction with the http://thisanimedoesnotexist.ai model)
Dataset cleaning and curation
Generating animated visuals for long texts
Making a text-to-image search for Unsplash photos (which is possibly better than Unsplash’s actual search? The basic text–image matching pattern is sketched after this list)
Judging drawing competitions @paintdotwtf (https://paint.wtf/)
“Towards” (😅) zero-shot object detection:
Editing faces in conjunction with StyleGAN
(sort of) generating Where’s Wally (Waldo) pictures
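Most of the tweets above boil down to the same trick: CLIP embeds images and text into a shared space, and the cosine similarity between the embeddings tells you how well they match. Here’s a minimal sketch of that pattern using the openai/CLIP package (pip install git+https://github.com/openai/CLIP.git); the image paths and the query string are just placeholders I made up, not anything from the tweets.

```python
# Minimal sketch of CLIP text-image matching, assuming the openai/CLIP
# package plus torch and Pillow. The image paths stand in for a real photo
# collection and the query is an arbitrary example.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image files standing in for a photo library.
image_paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]
images = torch.stack([preprocess(Image.open(p)) for p in image_paths]).to(device)

# A free-text query, as in the Unsplash-style search above.
query = clip.tokenize(["a dog playing in the snow"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(images)
    text_features = model.encode_text(query)

# Cosine similarity: normalise both sides, then take the dot product.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).squeeze(1)

# Rank the images by how well they match the text.
for score, path in sorted(zip(similarity.tolist(), image_paths), reverse=True):
    print(f"{score:.3f}  {path}")
```

Swap the single query for a list of class-label prompts and take the argmax over similarities and you get zero-shot classification; run it over a whole library and you get the Unsplash-style search, dataset curation, or competition judging above.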
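The prompt-based image optimisation tweets go one step further: instead of ranking existing images, they parameterise an image (a CPPN, or a GAN latent) and back-propagate through CLIP to make it match a text prompt. The sketch below optimises raw pixels directly just to show the loop, which is my simplification rather than what any of those tweets actually do; a real version swaps the pixel tensor for a CPPN or GAN generator and adds augmentations, and the prompt here is made up.

```python
# Bare-bones sketch of prompt-based image optimisation with CLIP: adjust an
# image parameterisation to maximise similarity with a text prompt. Here raw
# pixels are optimised directly to keep it short; real versions parameterise
# the image with a CPPN or a GAN latent and add augmentations.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
for p in model.parameters():
    p.requires_grad_(False)  # only the image is optimised, not CLIP itself

# CLIP's input normalisation constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

prompt = clip.tokenize(["a watercolour painting of a lighthouse"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(prompt)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# The image being optimised: a single 224x224 RGB tensor in [0, 1].
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    # Clamp to the valid pixel range and normalise the way CLIP expects.
    pixels = image.clamp(0, 1)
    image_features = model.encode_image((pixels - mean) / std)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Maximise cosine similarity between the image and prompt embeddings.
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(step, loss.item())
```

Optimising pixels directly tends to produce noisy, adversarial-looking images; the CPPN or GAN parameterisation is what makes the results in those tweets actually look like something.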
I also had something to say about CLIP in this Wired article: https://www.wired.com/story/ai-go-art-steering-self-driving-car/