ABOUT SEYHAN LEE


Seyhan Lee is a creative-led A.I. production studio in Boston founded by Pinar 'Seyhan' Demirdag and Gary 'Lee' Koepke. Since its genesis in 2020, the studio has specialized in developing A.I.-based motion pictures and integrating them into the film industry, immersive entertainment, and brand work. The studio offers creative and technological A.I. solutions to elevate and reinvent the storytelling experience, working with corporations, directors, agencies, and artists worldwide who seek innovative solutions in filmmaking, NFTs, branding, and beyond. At the core of the company's mission lies developing, and perfecting, the conscious use of technology for the immersive new world. Every project aims to expand mindfulness and possibilities for the meaningful elevation of humanity through technology.



GENERATIVE A.I.


Since we regularly receive requests from production companies to explain how generative A.I. technology works, we felt this FAQ would be helpful for filmmakers, advertising agencies, producers, and anybody else interested in diving deep into the field of artificial neural networks.

For further reading, our co-founder Pinar Seyhan Demirdag regularly publishes articles about generative A.I. and how humans will remain relevant in the age of A.I., and our Discord is filled with lively conversations about generative A.I. and the future of filmmaking.

FAQ GENERATIVE A.I.


What are generative films?
Generative films, in other words A.I. motion pictures, are storytelling and VFX created and produced with generative A.I. art. In generative A.I., artificial neural networks and machine learning are used to make visuals and picture sequences (films). These generations are made entirely or in part by an autonomous system based on an algorithmic code. Max Bense, a German philosopher, coined the term "generative art" in 1965. With the latest improvements to text-to-image generative models and apps like DALL-E 2 by OpenAI, Midjourney, and Stable Diffusion in 2022, the term has become widely known.

What is the process of creating generative A.I. art?
Firstly, in A.I., without a dataset there is no outcome. In generative A.I. and film, you need to train a model on a dataset containing a large number of images. An example of a renowned dataset would be LAION, with 5.85 billion image-text pairs. However, not all generative A.I. projects require datasets of this enormous size.
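
To make the idea of image-text training data more concrete, here is a minimal, illustrative sketch in Python of how such pairs are typically organized before training; the file names and captions are hypothetical placeholders, and real datasets such as LAION hold billions of pairs rather than two.

    # Illustrative sketch only: a tiny dataset of image-caption pairs,
    # the basic unit a text-to-image model learns from.
    from PIL import Image
    from torch.utils.data import Dataset, DataLoader
    from torchvision import transforms

    class ImageCaptionDataset(Dataset):
        """Pairs each training image with its text caption."""
        def __init__(self, samples):
            # samples: list of (image_path, caption) tuples
            self.samples = samples
            self.to_tensor = transforms.Compose([
                transforms.Resize((256, 256)),
                transforms.ToTensor(),
            ])

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            path, caption = self.samples[idx]
            image = self.to_tensor(Image.open(path).convert("RGB"))
            return image, caption

    # Hypothetical example files; a real training set is vastly larger.
    pairs = [("frames/sunset.jpg", "a sunset over the Bosphorus"),
             ("frames/forest.jpg", "a foggy forest at dawn")]
    loader = DataLoader(ImageCaptionDataset(pairs), batch_size=2)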

Second, well-known universities and private companies develop theoretical models of generative A.I. and publish papers about them, either publicly or privately. These "genesis" papers can be hard to track down, yet they are essential to the development of generative A.I. Examples of such papers would be GAN (Ian Goodfellow), Diffusion (Sohl-Dickstein et al.), and CLIP (OpenAI). Stanford University, U.C. San Diego, Heidelberg University, the Weizmann Institute of Science, and Simon Fraser University are just some of the universities that have contributed to the research.

If research labs make these models open-source, anyone can use them in their own apps and modify the original code, as Midjourney did on Discord. Other applications include plugins, personal use, collaborative notebook creation (mostly on Google Colab), and other variations. These "notebooks" are released by universities, private research labs (Facebook, OpenAI), smaller companies like ours, or individuals with an advanced understanding of computer vision and machine learning. A notebook makes the genesis code accessible to everyone through a more straightforward interface, as sketched below. Sometimes these tweaks are simple artistic touches for a more aesthetic result; in other cases, they serve as a new iteration that improves the open-source code and achieves more consistent results. Pytti, Disco Diffusion, and Deforum are among the well-known generative A.I. notebooks.
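
For a concrete sense of what such a simplified interface looks like, here is a minimal sketch, assuming the open-source diffusers library and the publicly released Stable Diffusion v1.5 weights; a notebook essentially wraps a call like this in a friendlier form with text boxes, sliders, and presets.

    # Minimal sketch: open-source research code exposed through a simple interface.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # publicly released model weights
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")                  # generation is practical only on a GPU

    # One line of text in, one generated picture out.
    image = pipe("a cinematic shot of a recursive dream world").images[0]
    image.save("still.png")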

Artists who know little or nothing about programming can use generative A.I. notebooks with the help of Google Colab and user interfaces like Dream Studio.

Last but not least, everyone working with generative A.I. tools needs either a local GPU in a personal computer or a cloud-based GPU, such as the ones Google Colab provides.
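
As a small illustration, this is roughly the check that most notebooks run before generating anything, using PyTorch; without a GPU, the code still runs, but far too slowly for practical work.

    # Sketch of the usual GPU availability check at the top of a notebook.
    import torch

    if torch.cuda.is_available():
        device = "cuda"                  # local GPU or a cloud GPU (e.g. on Colab)
        print("GPU found:", torch.cuda.get_device_name(0))
    else:
        device = "cpu"                   # works, but generation will be very slow
    print("Running on:", device)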

Can you please simplify?
Sure, analogies have always helped us understand complex, unfamiliar concepts. Let's use the comparison between a guitar and a fuzz pedal to explain A.I. models and A.I. notebooks. If A.I. models are guitars, then notebooks are fuzz pedals tuned to make the same model produce varying results. A professional musician needs to know a lot about guitars and guitar effect pedals, because every song needs a different technical approach to sound good. Following this concept, Seyhan Lee builds its own fuzz pedals and writes and performs its songs on guitars manufactured by research labs.

Who makes the decision on what type of art or film will be created? Does A.I. read the script and make something?
No, A.I. does not make autonomous decisions, and generative tools yield results only when guided by humans. In both instances, a human plays the guitar: when a human makes art or film with A.I., the idea, the spark, and the image belong to the human. We cannot get any result from generative A.I. without human inspiration and creative guidance.

Why do generative films look different?
Eadweard Muybridge showed in the 1800s that a film is just a series of pictures, which is why we call them "motion pictures." The same approach applies to A.I. films. The difference is that for A.I. to produce a series of images that make sense in time, the algorithm must learn every aspect of real life: physics, movement, spatial awareness, and so forth. Several technology corporations (NVIDIA, Google, Facebook, OpenAI) and the open-source A.I. community are working hard to advance generative motion pictures.

Can you please explain the current otherworldly appearance of generative films?
Unconsciously, our minds and our sense of space and time make us think of reality in a linear way: yesterday came before today, and tomorrow will come after it. In the same way, in our effort to make sense of and measure reality, our minds use object classifications: this is a girl, this is a bookshelf, this is a room, this is a table. For A.I., by contrast, unless we explicitly define the background, the foreground, or the objects, all the separate items in a picture are treated as one undivided whole.

At the technology's current stage, when we give motion to still images, the whole picture, in its holistic unity, "becomes" another picture. Even though this process offers far less control than traditional VFX tools, there is an exceptional quality to the magical, recursive worlds that generative A.I. lets us create, worlds we were unable to develop before. These worlds, which lie beyond ordinary human perception, can be poetically compared to the visionary, out-of-body trips we take in dreams, in moments of strong emotion, on DMT, and so on.
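
As an illustration of that recursive quality, here is a minimal sketch, assuming the diffusers library, of the feed-each-frame-back-in approach that notebooks such as Deforum made popular; the prompt, strength value, and seed image are hypothetical.

    # Sketch: each generated frame becomes the starting image for the next one,
    # which is why the whole picture continuously "becomes" another picture.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    frame = Image.open("first_frame.png").convert("RGB")   # hypothetical seed image
    for i in range(24):                                     # roughly one second at 24 fps
        frame = pipe(
            prompt="a dreamlike forest dissolving into light",
            image=frame,        # the previous frame is fed back in
            strength=0.45,      # how far each new frame may drift from the last
        ).images[0]
        frame.save(f"frame_{i:03d}.png")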

What is the historical timeline of generative A.I. art and film?
After Alan Turing pioneered machine learning in 1950, artists worldwide began to include generative or algorithmic art in their work. The best-known piece of A.I. art from the early days of programming was AARON, created by the artist Harold Cohen.

In 1968, Vera Molnar, a Hungarian artist, started using computers as generative tools to help her art research. She is considered one of the founders of early computer and generative art, and her work dramatically influenced geometric abstraction by emphasizing form, transition, and movement.

By the end of the 1960s, disappointment and criticism followed the hype within the field. One can argue that the sinister portrayal of A.I. in Hollywood movies such as 2001: A Space Odyssey (1968) played a crucial role in spreading fear about A.I. This led to funding cuts, which is why so few advances in A.I. art emerged toward the end of the 20th century. This period (1974–1993) is called the "A.I. winter."

The current "A.I. spring" began when improvements were made to systems that translate languages, recognize images, and play games. In the early 2000s, artists were able to get back into the field thanks to the rise of new coding languages for artists and open-source projects that could be found on GitHub.

Generative A.I. art became more popular after a few important models appeared in 2014 and 2015. The first is DeepDream, a computer vision program that produces hallucinogenic results, discovered by Google researcher Alexander Mordvintsev. The second is generative adversarial networks (GANs), invented by Ian Goodfellow and his colleagues. The field quickly grew with tools accessible to artists without a technical background. Researchers started creating big datasets, like ImageNet, that could be used to train algorithms. Meanwhile, tech companies open-sourced their machine learning frameworks, including Google (TensorFlow) and Facebook (PyTorch).

In 2018, Pinar&Viola, the artist duo of Pinar Seyhan Demirdag (our co-founder) and Viola Renate, became the first artists for whom Google developed a personal Google Colab to make art. This process led to Infinite Patterns, an open-source A.I. tool from Google Arts & Culture that is based on Alexander Mordvintsev's DeepDream and feature-visualization research and lets anyone use generative A.I. to make artistic patterns.

The French collective Obvious sold an artwork they created with a GAN model for $432,000 in 2018.

The year 2020 saw the rise of A.I. activism in response to bias in some A.I. models, and tech companies like OpenAI began to put serious thought into the ethical use of their A.I.

In 2021, Seyhan Lee's A.I. film Connections for Beko became the first high-budget A.I. short film, reaching 90 million YouTube views and winning a D&AD Pencil for VFX/Emerging Realities.

OpenAI released the text-to-image DALL-E model and the CLIP model in 2021. This launch was a big step toward making generative A.I. more popular: for the first time, people could use words to make pictures. Since then, other text-to-image methods, such as VQGAN+CLIP (a Vector Quantized Generative Adversarial Network combined with Contrastive Language-Image Pre-training) and diffusion models (Disco Diffusion, Stable Diffusion), have become more popular.
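
To show concretely what CLIP contributes, here is a minimal sketch, assuming the Hugging Face transformers library and OpenAI's public CLIP weights, that scores how well candidate images match a text prompt; text-to-image systems use this kind of text-image alignment signal to steer generation toward the words. The image files here are hypothetical.

    # Sketch: CLIP scores how well each image matches a text description.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical candidate images to compare against the prompt.
    images = [Image.open("candidate_a.png"), Image.open("candidate_b.png")]
    inputs = processor(
        text=["a lighthouse at night in heavy fog"],
        images=images,
        return_tensors="pt",
        padding=True,
    )
    outputs = model(**inputs)
    # A higher score means the image is a closer match to the prompt.
    print(outputs.logits_per_image.softmax(dim=0))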

2022 has become the year of widespread acceptance of generative art. Midjourney's technical team, led by Daniel Russell, released a more artistic-looking version of the CLIP-guided diffusion model on their Discord, and the platform grew into a worldwide creative phenomenon with more than 3.6 million community members (October 2022). Facebook and two then-anonymous research teams released text-to-video and text-to-3D research papers (Phenaki and DreamFusion). Their quality is still in its infancy, but they represent a great leap forward for generated films. As the last major news of October 2022, Stability A.I. received $101 million in funding, officially turning generative art into Silicon Valley's new craze.

None of these advancements in generative A.I. would have been possible without the creative contributions of artists in its early days. Between 2016 and 2020, when generative A.I. was still new and only a few people were interested in it, creative coders and early-adopter artists like Memo Akten, Gene Kogan, Katherine Crowson, Umut Yildiz, Vadim Epstein, Alexander Reben, Nathan Shipley, and Pinar Seyhan Demirdag helped make it more widely accepted.



CONNECT WITH US.


We would love to hear from you.
contact@seyhanlee.com