A talking mouth chanting algorithmically generated prayers. Given they’re nonsense to begin with, why not?
Bryan Boyer built an ePaper display that shows movies at 24 frames per hour (instead of 24 frames per second). He calls it the Very Slow Movie Player (VSMP): it slows a movie down so that it can be experienced differently, so that its frames can be seen as paintings.
VSMP is an object that contains a Raspberry Pi computer, custom software, and a reflective ePaper display (similar to a Kindle), all housed inside a 3D printed case. Every 2.5 minutes a frame from the film stored on the computer’s memory card is extracted, converted to black and white using a dithering algorithm, and then communicated to the reflective ePaper display. […]
Films are vain creatures that typically demand a dark room, full attention, and eager eyeballs ready to accept light beamed from the screen or projector to your visual cortex. VSMP inverts all of that. It is impossible to “watch” in a traditional way because it’s too slow. In a staring contest with VSMP you will always lose. It can be noticed, glanced-at, or even inspected, but not watched.
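A minimal sketch of the loop described above, assuming ffmpeg and Pillow on the Pi (the VSMP’s actual custom software isn’t published in the excerpt, and the file paths here are placeholders):

```python
# Hypothetical VSMP-style loop; file names are placeholders.
import subprocess
import time

from PIL import Image

MOVIE = "film.mp4"   # film stored on the Pi's memory card (assumed path)
FPS = 24             # source frame rate
INTERVAL = 150       # 2.5 minutes per frame = 24 frames per hour

frame = 0
while True:
    # Seek to the current frame and dump it as a PNG with ffmpeg.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(frame / FPS), "-i", MOVIE,
         "-frames:v", "1", "frame.png"],
        check=True,
    )

    # Convert to 1-bit black and white; Pillow's mode "1" conversion
    # applies Floyd-Steinberg dithering by default.
    Image.open("frame.png").convert("1").save("display.png")
    # A real device would now push display.png to the ePaper driver.

    frame += 1
    time.sleep(INTERVAL)
```

At 24 frames per hour, a two-hour film (172,800 frames at 24 fps) stretches to roughly 300 days.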
Inspired by the project, Jon Bell built Slow In Translation: Lost in Translation stretched out over an entire year as a webpage background.
@mothgenerator is a Twitter bot that generates both moths and their corresponding Latin names:
This bot tweets make-believe moths of all shapes, sizes, textures and iridescent colors. It’s programmed to generate variations in several anatomical structures of real moths, including antennas, wing shapes and wing markings.
Another program, which splices and recombines real Latin and English moth names, generates monikers for the moths. You can also reply to the account with name suggestions, and it will generate a corresponding moth.
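A toy version of that splice-and-recombine idea, assuming a simple cut-and-glue over a few real moth names (the bot’s actual algorithm isn’t shown in the excerpt):

```python
# Toy name splicer in the spirit of @mothgenerator; this scheme is
# an assumption, not the bot's actual code. Seed lists are a few
# real saturniid genus and species names.
import random

GENERA = ["Actias", "Saturnia", "Hyalophora", "Antheraea", "Automeris"]
SPECIES = ["luna", "pavonia", "cecropia", "polyphemus", "atlas"]

def splice(a: str, b: str) -> str:
    """Glue the front of one name onto the back of another."""
    return a[: random.randint(2, len(a) - 1)] + b[random.randint(1, len(b) - 2):]

def moth_name() -> str:
    genus = splice(*random.sample(GENERA, 2)).capitalize()
    species = splice(*random.sample(SPECIES, 2)).lower()
    return f"{genus} {species}"

if __name__ == "__main__":
    print(moth_name())  # e.g. "Saturnaea cecrophemus" (nonsense, by design)
```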
Wipawe Sirikolkarn analyzed four years of messages exchanged during a long-distance relationship; the result is a set of very minimalist visualizations that trace the birth (and the end) of a relationship.
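A minimal sketch of that kind of plot, assuming a CSV export of the chat history with a timestamp column (the excerpt doesn’t document Sirikolkarn’s actual pipeline): one stripped-down line of message volume per day.

```python
# Hypothetical pipeline: messages.csv is an assumed export with one
# row per message and a "timestamp" column.
import matplotlib.pyplot as plt
import pandas as pd

messages = pd.read_csv("messages.csv", parse_dates=["timestamp"])
per_day = messages.set_index("timestamp").resample("D").size()

fig, ax = plt.subplots(figsize=(12, 2))
ax.plot(per_day.index, per_day.values, color="black", linewidth=0.6)
ax.set_axis_off()  # no axes or labels, for a minimalist look
fig.savefig("four_years.png", dpi=200, bbox_inches="tight")
```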
The short film Sunspring features Richard from Silicon Valley (Thomas Middleditch), is set in a vaguely defined future, and is a bit bizarre and confusing; its author is an AI, “Benjamin”.
You can watch it on Ars Technica, which writes:
Benjamin is an LSTM recurrent neural network, a type of AI that is often used for text recognition. To train Benjamin, Goodwin fed the AI with a corpus of dozens of sci-fi screenplays he found online—mostly movies from the 1980s and 90s. Benjamin dissected them down to the letter, learning to predict which letters tended to follow each other and from there which words and phrases tended to occur together.

The advantage of an LSTM algorithm over a Markov chain is that it can sample much longer strings of letters, so it’s better at predicting whole paragraphs rather than just a few words. It’s also good at generating original sentences rather than cutting and pasting sentences together from its corpus. Over time, Benjamin learned to imitate the structure of a screenplay, producing stage directions and well-formatted character lines.

The only thing the AI couldn’t learn were proper names, because they aren’t used like other words and are very unpredictable. So Goodwin changed all character names in Benjamin’s screenplay corpus to single letters. That’s why the characters in Sunspring are named H, H2, and C. In fact, the original screenplay had two separate characters named H, which confused the humans so much that Sharp dubbed one of them H2 just for clarity.
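A minimal sketch of a character-level LSTM of this kind, assuming Keras and a local screenplays.txt corpus (Benjamin’s actual architecture and training settings aren’t detailed in the excerpt):

```python
# Hypothetical character-level LSTM: predict the next character from
# the previous SEQ characters, as the article describes.
import numpy as np
from tensorflow import keras

corpus = open("screenplays.txt").read()   # assumed local corpus file
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

SEQ = 40  # context window: much longer than a Markov chain's few characters
X = [[idx[c] for c in corpus[i:i + SEQ]] for i in range(len(corpus) - SEQ)]
y = [idx[corpus[i + SEQ]] for i in range(len(corpus) - SEQ)]

model = keras.Sequential([
    keras.layers.Embedding(len(chars), 64),   # characters to vectors
    keras.layers.LSTM(256),                   # long-range context
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(np.array(X), np.array(y), batch_size=128, epochs=10)
```

Generation then feeds the model its own output one character at a time, which is why it can produce whole formatted lines rather than fragments pasted from the corpus.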