We shape our tools and thereafter our tools shape us: Interview with RunwayML founder Cristóbal Valenzuela

Interview with Cristóbal Valenzuela, founder of RunwayML, an electrifying new company that brings state-of-the-art machine learning techniques and architectures to media creatives through an intuitive, simple visual interface.


By Olivia Lengyel

There’s no doubt that machine learning applications are proliferating throughout the entire media industry.

In this blog we've covered a wide spectrum of art-focused ML use cases. We've recently interviewed visual artists, game designers, and TikTokers, and we've written guides for topics like pose estimation with PoseNet, semantic image synthesis with GauGAN, and FaceApp-style image filters with CycleGAN.

In short, we think it's abundantly clear that machine intelligence will transform – and is already transforming – the film and media industry, from VFX to AR and VR to animation, asset creation, video analysis, typography design, text generation, and everything in between.

That’s where Cristóbal Valenzuela comes in. Cris, a former researcher at NYU Tisch’s renowned ITP (Interactive Telecommunications Program) – as well as a former Paperspace ATG Research Fellow – is building an electrifying new company called RunwayML to bring state-of-the-art machine learning techniques and architectures to media creatives.

As we’ve become obsessed with the intersection of art and technology, we’re thrilled to see Runway appear over and over again in discussions about how to equip creatives with powerful AI-assisted media creation tools.

We are delighted to get to speak with Cris and get his take on the future of machine intelligence in media and what it will take to build a new stack of tools to empower every creative to make things that were never before possible.


Paperspace: First things first – what's behind the name of RunwayML? It seems to evoke fashion, aircraft, and production lines – do any of those items capture the spirit of what you're trying to do?

Valenzuela: I started working on Runway while I was studying at NYU. Early in the research phase, I wanted to have a short name I could use to discuss the project with my advisors and I didn't want something too long or complex. The initial idea behind the project was to create a platform that makes machine learning models accessible to artists. So I started brainstorming ideas around that and then I realized that "a platform for models" already has a name: a runway.

Paperspace: Ah! Yes, of course. So what got you interested in machine learning? Can you tell us about your research at NYU and how that led to or accelerated your interest in AI?

Valenzuela: I was working in Chile when I came across Gene Kogan's work on neural style transfer. At the time, I had no idea how it was made, but I was fascinated by the idea of computational creativity and what deep learning-based image techniques might enable and mean for artists. I went down a rabbit hole researching neural networks until I became so obsessed with the topic that I eventually quit my job, left Chile, and enrolled in NYU's ITP to study computational creativity full time.

Paperspace: Was there a point where you were like “Ok, I need to build Runway because this tool stack doesn’t exist yet and I want it?” Or was it a different realization about the potential of making traditionally burdensome ML tasks 100x easier to accomplish?

Valenzuela: Working around a creative idea involves a lot of experimentation. Any creative endeavor requires a search and experimentation phase, a spirit to prototype quickly, and a willingness to try new ideas fast. I wanted to create art and explore ideas around neural networks, but every time I tried to build prototypes I was confronted with a wall of technicalities that were irrelevant to my goal.

Imagine if every time a painter wanted to paint a new canvas, she had to manually create the pigments and paint tubes – that's how I felt every time I wanted to use a machine learning model. I was grinding my pigments by hand for weeks before attempting to paint anything. I was so frustrated that I eventually decided to build something that could make the whole process of working with ML in a creative context easier.

Paperspace: How did your work on ml5js translate to Runway? Do both projects share a similar mission – to make advanced machine learning techniques available to visual creatives?

Valenzuela: I had the amazing opportunity to work closely with Dan Shiffman while I was at NYU. Together with Dan and a group of students at ITP, we had the idea of creating a way of making machine learning techniques more accessible and friendly on the web – especially for the creative coding community.

We were super inspired by the mission of p5js and the Processing Foundation to promote software literacy within the visual arts. Runway and ml5js share a common set of values and principles around accessibility, inclusion, and community. Both projects started around the same time and have a similar vision about how to develop technology for the arts.

Paperspace: What are some of the coolest use cases you’ve seen creators pursue using your software?

Valenzuela: We are building Runway to allow other people to create and express themselves. I love when I see creators from different backgrounds using Runway to create art, visuals, videos – or just to learn or experiment.

We've seen so many amazing projects over time that we started grouping them on a dedicated website: runwayml.com/madewith. I think those are just some of the best projects from the community.

We also constantly interview creators to showcase some of their work – like what Claire Evans, Jona Bechtolt, and Rob Kieswetter of YACHT made recently for their Grammy-nominated album.

Paperspace: What about you or one of your team members? Has your team created anything that really inspires you?

Valenzuela: Dan Shiffman has created a long list of amazing projects with Runway. Recently, he's been playing with the text generation feature and creating fascinating Twitter bots. The idea is that you can train your own generative text model based on OpenAI's GPT-2 model. Once the model has been trained inside Runway, you can deploy it as a hosted URL that can be used in all sorts of different ways.

Dan has been creating Discord, Slack, and Twitter bots based on different datasets he's gathered. Check out the Coding Train YouTube channel to learn more, and if you want to create your own GPT-2 bots with Runway, check out this tutorial.
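For readers curious about the mechanics: a model deployed from Runway is exposed as an HTTP endpoint that a bot can query. Below is a minimal Python sketch of that pattern. The endpoint URL, token, and JSON field names (prompt, max_characters, generated_text) are assumptions for illustration only – consult the docs for your own hosted model's actual schema.

```python
import requests

# Hypothetical endpoint for a GPT-2 model hosted from Runway; yours will differ.
HOSTED_MODEL_URL = "https://my-gpt2.hosted-models.runwayml.cloud/v1/query"
API_TOKEN = "your-hosted-model-token"  # issued when you deploy the model


def generate_text(prompt: str, max_characters: int = 280) -> str:
    """Send a prompt to the hosted model and return the generated text."""
    response = requests.post(
        HOSTED_MODEL_URL,
        json={"prompt": prompt, "max_characters": max_characters},  # assumed schema
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["generated_text"]  # assumed response field


if __name__ == "__main__":
    # A Twitter or Discord bot would call this on a schedule or in reply to messages.
    print(generate_text("The Coding Train is"))
```

From there, wiring the output into a bot is a matter of using the relevant platform library (for example, tweepy for Twitter or discord.py for Discord) to post whatever generate_text returns.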

Paperspace: There’s a large element of RunwayML that has to do with being a part of a creative community of technologists. How do you foster an environment for this sort of collaboration? What are the early returns on building this community and what’s the dream scenario for the kind of community you could create around Runway?

Valenzuela: I think what makes a good community is that its members constantly discuss shared ideals, visions, and passions. The Runway community – or the creative tech community at large – is no different.

We listen closely to what the members of the community are saying – and being able to create together with them is crucial. That's why we've helped and collaborated with hundreds of students, artists, and technologists from all over the world.

Runway is also being used to teach at a wide variety of institutions, from architectural programs at MIT to self-organized independent workshops in Peru. But most importantly, we want to foster an environment that promotes creativity, kindness, equality, and respect.

Paperspace: Do you charge for the use of your application? It’s an amazing product and we’re continually amazed by how much we can accomplish so simply and quickly. It’s like magic!

Valenzuela: There are a lot of technical complexities when building a large platform that allows creators to use powerful machine learning models and techniques. Our goal is to make the platform as accessible as possible to everyone. Runway is free to use on the web and free to download. Users can run models locally and also pay to use the models remotely in the cloud. There's also a subscription plan to get access to more advanced features in the platform.

Paperspace: When we're designing software we think a lot about the quote "We shape our tools and thereafter our tools shape us." Do you see Runway as a tool stack with this kind of world-building potential? Or do you see Runway as a kind of creative coauthor? What is the correct role for a platform like Runway, which enables so much media creation, relative to the output its users generate?

Valenzuela: A common metaphor when building interfaces is this idea of translating objects from the physical world into software concepts to help users interact more easily with an application. For example, the Desktop Metaphor suggests a physical desk with documents and folders.

Something very similar has happened with interfaces for media creation and creative software. In traditional image editing software, we have the idea of a pencil, an eraser, a ruler, and scissors. The problem is that we've relied on these metaphors for too long – and they shape how we think about the limits of our tools.

We're still building complex digital tools around decades-old media paradigms and metaphors for creating content. I think it's time we change those principles, because they limit our creative expression, and embrace a new set of metaphors that take advantage of modern computer graphics techniques. Runway is the platform building those new tools and principles.

Paperspace: Do you like the terms generative art or synthetic media? Do you think either is a good description of what's taking place in the world of AI-assisted media creation? Do you think we'll start to see artists achieve fortune and fame on bodies of work that are, for lack of a better word, synthetic?

Valenzuela: I believe artists try to engage with whatever mediums are relevant to their practice. R. Luke DuBois has a great way of putting it: "It's the responsibility of the artist to ask questions about what that technology means and how it reflects our culture."

Generative art is not new. The idea of involving an autonomous system in the art-making process has been around for decades, long before the recent AI boom. What's different is that now we are entering a synthetic age.

The idea of using high volumes of data and deep learning techniques to manipulate and edit media will massively transform not just art but content creation in general, in ways similar to what the CGI revolution did in the ’90s. Everyone will be able to generate professional, Hollywood-quality content very soon. It won’t matter what we call it when we're inspired by what we can create with it.

Paperspace: Are there any creative industries that you believe would benefit from the application of ML but haven't quite applied this technology yet?

Valenzuela: We've been chatting and collaborating with a lot of creatives over the last year. I think most creative industries are going to benefit from ML. For instance, we've done some workshops and experiments with ZHA Code, the technology group at Zaha Hadid Architects. I believe architects are now rapidly incorporating ML techniques into their workflows, and we'll start to see major changes in architecture and design systems in the next few years.

I do think that one of the spaces that will benefit the most from ML is the entertainment industry. It's not just that automation will speed up processes that currently take weeks or months to complete (everything from rotoscoping to editing will be automated) – it's that the impact of synthetic media on the next generation of filmmakers and creatives will be monumental.

The barriers to entry to create professional-level content will be radically lower with ML. Everyone with access to a computer will be able to create content that only professional VFX studios have traditionally been able to create.

Paperspace: What was your experience working with Paperspace as a fellow? Did that experience help you create Runway?

Valenzuela: I had the pleasure of working with the Paperspace team in 2018 and I learned a lot. Having the opportunity to work and collaborate with them gave me great insights into how to build a great product, company, and community. What you are doing, enabling teams to collaborate on ML models, is fantastic and much needed for engineering teams.

Paperspace: What’s next for Runway? What products, features, implementations, or developments are you most excited to share with your community? What achievements are you most excited to unlock with your team looking forward?

Valenzuela: There are a few big updates coming very soon! We've been working on some exciting new features around video and image models that are going to be game-changers. But what always excites me most is the amazing work that is made with Runway. I can't wait to see what creators do!

Paperspace: Is there anything else you’d like to add? What’s the best way for readers of this interview to get started making stuff on Runway? Do you have any advice for creatives who might be exploring how ML can augment their work for the first time?

Valenzuela: We are hiring! If you are interested in helping us imagine and build the creative tools of the future, we'd love to hear from you.

If you are getting started with Runway or machine learning, check out some of the Coding Train video tutorials, join our Slack channel to connect with more creatives, or just DM us on Twitter: @runwayml and @c_valenzuelab.
