Anticipating 2020s is a video generated with deep learning methods. It attempts to portray our collective imagination of what life in the 2020s would be like, from the perspective of science fiction films and books made decades earlier. It invites viewers to reinterpret the perspectives shown and to reevaluate them in dialogue with the reality they experience as humans living at the beginning of the 2020s.
The video was generated by rearranging found footage: short excerpts randomly selected from various science fiction movies set in the 2020s. The selected movies were filmed between 1960 and 1999 and illustrate, from different decades, different perspectives on what life in the 2020s would be like. All frames in the selected excerpts were analyzed and categorized with a deep learning algorithm for object detection.
The object detection analysis of the found footage provides a high-level description of what is shown in each movie frame: for example, whether persons, cars, phones, and various other objects are present, and how many of each. This analysis is used to identify frames from different movie excerpts with a similar type and number of objects and persons, in order to generate a new narrative algorithmically.
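The per-frame description above amounts to a set of label counts. A minimal sketch of this step, assuming the detector returns a list of (label, confidence) pairs per frame (the exact output format of the detector used in the work is not specified, so this is only illustrative):

```python
from collections import Counter

def frame_profile(detections, threshold=0.5):
    """Summarize one frame's detections as counts per object label.

    `detections` is assumed to be a list of (label, confidence) pairs,
    as a typical object detector would emit per frame; the format and
    the confidence threshold are illustrative assumptions.
    """
    return Counter(label for label, conf in detections if conf >= threshold)

# Hypothetical detector output for a single frame; the low-confidence
# "phone" detection is filtered out by the threshold.
frame = [("person", 0.91), ("person", 0.88), ("car", 0.76), ("phone", 0.31)]
print(frame_profile(frame))
```

Profiles like this, computed for every frame of every excerpt, form the database that the rearrangement algorithm later searches.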
The algorithmic rearrangement of the found footage performs a serialization of archival discourse according to iconography. It explores the question posed by Lev Manovich in The Language of New Media: “how can our new abilities to store vast amounts of data, to automatically classify, index, link, search and instantly retrieve it lead to new kinds of narratives?” It utilizes found shots and scenes of cinematic narratives that present what the world would be like in the 2020s to create a new narrative toward a database aesthetic.
The rearrangement algorithm randomly sets the target “intensity” of the video scenes to be generated at specified intervals. Here, “intensity” denotes how many persons and objects the selected movie frames should contain. The database of analyzed found footage is then searched for the movie frames with the specified “intensity.” The result is a serialization of found footage from various movies, with a flow directed by the algorithm.
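The selection scheme described above can be sketched as follows. This assumes intensity is simply the total object count of a frame's profile and that the best match is the frame closest to the random target; the work's actual matching criterion and data layout are not specified, so all names here are illustrative:

```python
import random

def intensity(profile):
    """Total number of detected persons and objects in a frame profile."""
    return sum(profile.values())

def arrange(frames, n_scenes, max_intensity, seed=0):
    """Pick one frame per scene whose intensity is closest to a random target.

    `frames` maps a frame id to its object-count profile (label -> count).
    A target "intensity" is drawn at random for each scene, and the
    database is searched for the frame that best matches it.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_scenes):
        target = rng.randint(0, max_intensity)
        best = min(frames, key=lambda fid: abs(intensity(frames[fid]) - target))
        sequence.append(best)
    return sequence

# Toy database of analyzed frames from different movies:
db = {
    "movie_a_f120": {"person": 1},
    "movie_b_f045": {"person": 3, "car": 2},
    "movie_c_f310": {},
}
print(arrange(db, n_scenes=4, max_intensity=6))
```

The seeded random generator makes the sketch reproducible; in the actual work the random targets are what give the serialized footage its algorithm-directed flow.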
The generated video shows fragments of different movies and thus blends the themes of science-fiction-imagined life in the 2020s as presented in the original movies: for example, life in space, time travel, telepathy, totalitarianism, androids, and cyborgs, among many others. However, because of the video's fragmented nature, the themes of the original movies are often not identifiable in Anticipating 2020s. Furthermore, themes of life in the 2020s are shown from different perspectives and with different aesthetics, as they originate from various movies. By reorganizing familiar footage in an unfamiliar way, Anticipating 2020s invites viewers to reexperience fragments of the original movies and drives their impulse to construct a coherent story. Viewers thus undergo a process that prompts them to reinterpret and reevaluate imagined life in the 2020s in dialogue with the reality they currently experience. Within this experience, viewers will also be inclined to reflect on the possibilities for the future.
Anticipating 2020s is subtitled with text generated by the deep learning model GPT-2, fine-tuned on the text of science fiction books. At an interval of 6 seconds, captions are generated with the pre-trained CLIP model for the corresponding frames of the Anticipating 2020s video. These captions are then used to prompt the fine-tuned GPT-2 to generate short passages of text, which become the subtitles of Anticipating 2020s. The subtitles strengthen the viewers’ impulse to construct a coherent story, as viewers also attempt to correlate the subtitles with the video.
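The subtitle pipeline's scheduling can be sketched as below. The captioning and text-generation steps are stubbed out with placeholder callables, since they stand in for the pre-trained CLIP model and the fine-tuned GPT-2; only the 6-second cadence and the caption-to-prompt flow follow the description above:

```python
def make_subtitles(video_duration_s, caption_fn, generate_fn, interval_s=6):
    """Produce (start, end, text) subtitle entries every `interval_s` seconds.

    `caption_fn(t)` stands in for captioning the frame at time t (CLIP in
    the actual work), and `generate_fn(caption)` for prompting the
    fine-tuned GPT-2 with that caption; both are hypothetical stubs here.
    """
    subtitles = []
    t = 0
    while t < video_duration_s:
        caption = caption_fn(t)
        text = generate_fn(caption)
        subtitles.append((t, min(t + interval_s, video_duration_s), text))
        t += interval_s
    return subtitles

# Stub models, for illustration only:
subs = make_subtitles(
    20,
    caption_fn=lambda t: f"a frame at {t}s",
    generate_fn=lambda c: f"Generated passage prompted by: {c}",
)
for start, end, text in subs:
    print(start, end, text)
```

Each generated passage is anchored to a 6-second window, which is what lets viewers attempt to correlate the subtitles with the video.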
The sound of Anticipating 2020s was also algorithmically generated, from the image differences between successive frames.
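One way to read "image differences of successive frames" is as a per-frame motion-energy value that can drive the loudness or pitch of the generated sound. A minimal sketch under that assumption (the actual audio mapping used in the work is not specified; frames are represented here as flat lists of grayscale pixel values in [0, 255]):

```python
def frame_energy_curve(frames):
    """Map successive-frame differences to a normalized energy curve.

    The mean absolute pixel difference between consecutive frames is
    scaled to [0, 1]; such a curve could modulate the amplitude of a
    synthesized tone. Illustrative only: the work's mapping is unknown.
    """
    curve = []
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        curve.append(diff / 255.0)  # normalize to [0, 1]
    return curve

# Three tiny 4-pixel "frames": a static pair, then a large change.
frames = [[10, 10, 10, 10], [10, 10, 10, 10], [200, 200, 200, 200]]
print(frame_energy_curve(frames))  # first value 0.0, second about 0.745
```

Static shots would thus produce near-silence, while rapid cuts and motion would produce louder or denser sound.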
Anticipating 2020s is presented exactly as the various applied algorithms generated it, without any post-processing.