We were lucky to have the opportunity to sit down with Dan Jeffries -- a prolific writer, blogger, teacher, consultant, and futurist in addition to his day job as the Chief Technical Evangelist with our friends over at Pachyderm.
If you’re into futurism, machine intelligence, or crypto, there’s a good chance you’ve read something Dan has written -- whether it’s his series Learning AI If You Suck at Math, which has drawn over a million views, or his widely shared essays on cryptocurrency like Why Everyone Missed the Most Important Invention in the Last 500 Years.
But of all the projects Dan is involved in, the one we really wanted to talk with him about is his recent exploration of generative music (that is, music generated with machine intelligence) using the Transformer architecture.
We’ll be discussing not just Dan’s inspiration and creative process but also the exciting work he’s been doing to integrate art and technology in innovative ways. We also get into his views on futurism and what we might learn from his approach to studying a wide range of subjects under this broad umbrella.
We’d like to recommend that as you read through this interview, you throw on Dan’s Laughing Monkey album in the background and see how this take on computer-generated ambient music suits you.
Paperspace: When we started reading about generative music we were surprised to learn that the conversation seems to start with Brian Eno’s work with Koan in the mid-1990s! Eno, whom some consider the first mainstream pioneer of generative music, defines it by two criteria: 1. The music must change continuously and never repeat itself, and 2. It must last forever.
Was Eno an inspiration for you? Do you think your modern exploration satisfies these standards? Do modern ML techniques perhaps even guarantee these standards? In what ways do architectures like Transformers bring Eno’s vision to life some quarter of a century later?
Jeffries: I love Brian Eno’s work and he features extensively in the ambient music playlist I listen to every day as I write. His pioneering work in electronic music inspired thousands of artists after him.
But I didn’t have any particular artist in mind when I started work on my ambient music machine learning project, which I published in short form on the Google Magenta project blog and in more detail on Hackernoon, as part eight of my Learning AI If You Suck at Math series.
I was just in love with ambient music and machine learning as a whole and I wanted to see if I could create more of the style of ambient music that I love most. There are lots of styles, from simple white noise to songs that recreate natural environments like a rain forest, but I love the flowing, ethereal kind that puts me in the perfect mood to write or focus deeply.
Paperspace: We enjoyed your musings on ambient music -- especially the bit about creative flow as a meditative state that can be enhanced with the right auditory inputs.
Can you tell us more about your personal relationship with ambient music? Do you think flow … enhancement could one day be an environmental setting, so to speak? Like a thermostat but dynamically tuned to suit the person and situation?
Jeffries: I’ve always loved the sci-fi idea that music could adapt to a person, adjusting in real time to the user’s mood based on constant feedback from that person.
Once more and more music is generated by machines, there’s near-infinite potential for creating the kind of music you want to hear all the time. You may have collected the perfect playlist of your favorite Motown hits, but Motown isn’t making any more 1960s music, and that means you eventually run out.
But with AI generated music you could have infinite Motown!
Will it ever match the brilliance of those magnificent artists? Probably not. But who knows? Even if it’s not completely original, because it takes brilliant people at a particular point in time and with particular experiences to make a new sound, that’s all right. Sometimes we don’t need something brand new, we need something we already love and more of it.
As for Flow enhancement, I absolutely think you could find entire spaces dedicated to helping people get into Flow faster. We have dedicated, shared workspaces, so why not shared Flow spaces too? Think WeWork crossed with a spa. I was once a member of a fantastic artist co-working space in San Diego called 3rd Space, and I could see a space like that pumping in beautiful ambient music to help people drop into that magnificent state where time seems to disappear and you’re focused completely on what you’re doing.
Paperspace: We can’t help but think that there must be gradations of -- for lack of a better or less philosophically loaded term -- quality inherent to music that determines our cognitive response to it. Did you learn something about quality when you started writing code to generate music? And if you want to go there -- what characteristics make music beautiful? How do you train a model to optimize for these characteristics?
Jeffries: When it comes to machine learning in its current state, the most important thing is the data. I curated a list of incredible music over several decades that I listen to every day as I write. It’s a perfect playlist at this point, with no songs that I hate and no songs that are radically different in feel and temperament and tempo. They flow together as one.
Pulling together a highly curated list of music is the key, rather than just grabbing every old song on the internet and hoping that more equals better. More does not equal better. Less is more if you pick the right songs for training a machine.
As for the algorithms, I leave those to the researchers and the math wizards. I can’t write a cutting-edge algorithm to learn music, so I pull from what’s out there and just try to find the best way to optimize it for what I want to do. In this case, the data set made all the difference in the world.
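For readers who want to try the dataset-first approach, here’s a minimal sketch of turning a hand-curated playlist into training data, assuming the songs are already MIDI files and that Magenta’s open-source note_seq library is installed (pip install note-seq). The folder and output names are hypothetical, and this illustrates the idea rather than reproducing Dan’s actual pipeline.

```python
import os
import note_seq          # Magenta's open-source MIDI/NoteSequence library
import tensorflow as tf

MIDI_DIR = "curated_midi"            # hypothetical folder of hand-picked songs
OUTPUT = "curated_dataset.tfrecord"  # hypothetical output path

# Serialize each curated song as a NoteSequence proto in a TFRecord file,
# the format Magenta's training pipelines typically consume.
with tf.io.TFRecordWriter(OUTPUT) as writer:
    for fname in sorted(os.listdir(MIDI_DIR)):
        if not fname.lower().endswith((".mid", ".midi")):
            continue
        try:
            ns = note_seq.midi_file_to_note_sequence(os.path.join(MIDI_DIR, fname))
        except Exception as err:
            # Curation in practice: drop anything that doesn't parse cleanly.
            print(f"skipping {fname}: {err}")
            continue
        writer.write(ns.SerializeToString())
```

The loop itself is trivial; the hard, valuable work is choosing which songs end up in the folder in the first place.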
As for the music itself, there are deeper patterns that humans perceive and respond to that machines still can’t figure out. I’m not sure we could even articulate what those deeper patterns are, though many people have tried. Artists tap into something at a primordial level as they get better and better at creating music. When they create something highly original people respond to it at an unconscious level in ways we’re only beginning to understand. But even though it’s original it follows a deeper, preternatural pattern that we seem to recognize as humans no matter what external form it takes.
The first time I heard Marvin Gaye’s What’s Going On, I knew I was listening to Motown but I was also listening to something utterly new that had grown out of the Motown sound. Gaye had taken a quantum leap as an artist and he laid down the tracks by himself in multiple layers, singing backgrounds one and two and playing the congas and singing the main vocals too. It was Marvin on top of Marvin. They dramatized that evolution brilliantly in the movie Hitsville about the history of Motown.
In the future, I see AI and artists working together to create the next sounds. They may have the AI create twenty continuations of a riff and then jam with it and the AI will adjust in real time. The artist might tell the AI to go further with riff three and create new variations of that and then take over from there. It will turn into an incredible co-creative process of man and machine working together.
Paperspace: Have you listened to other music being produced with AI? We think the most talked-about album right now is from Taryn Southern, a former American Idol contestant. What’s your opinion?
Jeffries: I listened to a lot of what was out there and I wasn’t in love with much of it. The classical stuff was good, but probably only because I’m not a classical expert, so it all sounds a bit alike to me.
That said, Taryn is doing exactly what I talked about earlier, co-creating with AI. It’s pioneering work and someone had to blaze a trail so why not her! It’s super early stages but almost every artist will have AI co-creation woven into their process in the next decade.
If I’m being honest, most of the AI music I listened to as I was working on the project didn’t move me all that much. Too many of the articles I read ended with failure. The programmers and data scientists said, “here’s all the cool stuff we tried aaaaannnnd it didn’t work out.”
I wanted to be one of the first folks to succeed and I think I am, though of course without the Magenta folks and other researchers I had no chance.
Paperspace: Ok so here’s a wild idea for a follow-up piece to your work so far -- what about creating a kind of musical Turing test? If an ML model can be trained to generate sheet music that a professional musician can’t distinguish from a human-generated chart ... wouldn’t that be something? What can you tell us about the state of the art in machine-generated music that’s indistinguishable from human-generated music?
Jeffries: I think you could already fool people with some of the stuff that research teams are developing, particularly the piano and classical music generators.
The biggest problem currently is maintaining long-term consistency in songs. A lot of generators out there can keep a song consistent for 10 or 15 seconds, but then the generator seems to “lose its mind” and create a completely different kind of song. I focused on the Music Transformer from Magenta because it’s particularly good at learning the long-term structure of music.
But there is still a long way to go.
Most of these systems can’t do anything like what humans do, which is generate an amazing hook and then loop that every 30 seconds as a chorus and then use the rest of the song to support it.
The algorithms are mostly just prediction machines trying to guess the next note from the notes that came before. It’s a snowplow that’s plowing ahead and forgetting a lot of what came before it. The Transformer can make the whole song “feel” the same now, but that doesn’t mean it’s generated anything like great music or a hit song most of the time.
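To make the “prediction machine” idea concrete, here’s an illustrative sketch of autoregressive sampling, where each new musical event is drawn from the model’s guess at what comes next. The model object and its predict_next method are hypothetical stand-ins for any trained next-event predictor; this is not Magenta’s actual API.

```python
import numpy as np

def sample_events(model, primer, steps=512, temperature=1.0):
    """Generate events one at a time, each conditioned on everything so far."""
    events = list(primer)                    # seed with a primer melody
    for _ in range(steps):
        logits = model.predict_next(events)  # hypothetical: scores for each candidate next event
        logits = logits - logits.max()       # numerical stability for softmax
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        # The model only "remembers" what fits in its context window --
        # the snowplow effect described above.
        events.append(int(np.random.choice(len(probs), p=probs)))
    return events
```

Lower temperatures keep the plow close to the most likely notes; higher ones let it wander.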
Paperspace: What was the biggest challenge in creating the Laughing Monkey album? If a reader of this interview wanted to get started on a similar project, how would you recommend they start?
Jeffries: The biggest challenge was just wrangling the code. Most of these algorithms and libraries are coming out of research institutions and universities. They’re not enterprise software libraries that are perfectly maintained, so you end up having to rewrite parts of it or fix bugs with dependencies or you spend two days trying to compile something. When things go wrong, you’re on your own figuring it out because there’s no support.
The stage where we converted songs to MIDI took us a month because we couldn’t find a great library, and the ones we did find weren’t perfect. There’s no money in converting music to MIDI, so there are no great commercial libraries to lean on that really capture the music close to the original.
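As a rough illustration of that stage, here’s a hedged sketch of the kind of defensive batch wrapper research-grade conversion code tends to need. The transcribe_to_midi function is a hypothetical stand-in for whichever audio-to-MIDI library you end up wrestling with.

```python
import pathlib

def transcribe_to_midi(wav_path: pathlib.Path) -> bytes:
    # Hypothetical stand-in: swap in whatever transcription library you
    # manage to get working; none of them are perfect.
    raise NotImplementedError("plug in your audio-to-MIDI library here")

def convert_all(audio_dir: str, midi_dir: str) -> None:
    out = pathlib.Path(midi_dir)
    out.mkdir(parents=True, exist_ok=True)
    failures = []
    for wav in sorted(pathlib.Path(audio_dir).glob("*.wav")):
        try:
            (out / f"{wav.stem}.mid").write_bytes(transcribe_to_midi(wav))
        except Exception as err:   # research code fails in creative ways
            failures.append((wav.name, str(err)))
    print(f"converted {len(list(out.glob('*.mid')))} files, {len(failures)} failures")
    # Then listen to the survivors by hand -- automated conversion is lossy.
```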
Paperspace: What’s your stance on who or what should get credit for creating a piece of music? If someone forks your code to generate a different piece of music using the same model -- would they be a plagiarist or the sole artist or something in between? What should this mean for authorship generally as ML rapidly gets easier to implement?
Jeffries: I don’t have a good answer for this question other than to say that I hope the lawyers and lawmakers do a better job this time around than they did in the past.
We have massive rights-tracking databases trying to catch people who use a few bars of something, just to sue them. I hate that.
I remember when the Beastie Boys’ Paul’s Boutique came out with lots of samples, and that album, along with others from the era, led to the laws getting changed so that even short bars required licenses. You couldn’t make Paul’s Boutique now without it costing $20 million and drawing letters from thirty different rights holders. I hope we do better.
As for the machines, I don’t think the machines or the algorithms should hold copyrights for anything.
Paperspace: We’d like to switch gears and talk for a second about your creative process. To us it looks like you’re involved in just a crazy number of things -- from evangelizing a data science pipeline product to teaching millions of readers how to get started in AI to designing novel crypto architectures to writing science fiction novels, and more -- are you multithreading all this stuff or are you one of those people who just sits down, settles into a state of flow, and knocks things out serially?
Jeffries: There’s an old joke I tweeted the other day:
"Do you wait for the Muse to show up or do you write on a schedule?
I wait for the Muse and she shows up promptly at 10 AM when I write every day."
There is nothing more important for an artist than to do the work, every day. Develop an iron will to work on your craft and you can do wonders. It took me a decade of screwing around to really commit to writing in my heart and focus on it. I’ve written nearly every day since then for the last decade.
You get no credit for yesterday. You have to do it again and again. And it’s the same every time. I’m at war with myself for the first half hour to an hour. I want to quit and do other things, or anxiety and distractions and the world pinch in on me. But I stay with it and soon enough the magic happens. Suddenly two or three or four hours have passed and I’ve completely lost track of time. I’m just there writing and focusing and not thinking about anything except what I am writing.
I live for Flow. I just finished a self-help book based on my article Mastering Depression and Living the Life You Were Meant to Live and there’s a chapter on how I use Flow to bring balance to my life. I’m hoping to have the book published later this year or early next.
Flow helps you process emotions, find time to relax, and feel like you’re contributing something meaningful to the world. I can’t imagine my life without it, and it’s the secret to how I create so much stuff. I actually think I don’t create all that much, but then when I look back I realize I sure have, and that’s all due to Flow, where I lose time altogether and get more done than seems possible.
I’m a firm believer in showing up to do the work. It’s that consistency that lets you get into the Flow easier. Steven Pressfield’s The War of Art talks about it best. Show up every day and do the work with discipline and the rest takes care of itself.
Paperspace: What kind of background or training prepared you to cast such a wide net in your work?
Jeffries: My great mentor in life taught me that critical thinking is the most important skill in the world.
If you don’t accept your limitations, and you assume you can learn anything if you just work hard enough, then you can learn just about anything.
We get too focused on narrow pursuits in life now. We don’t teach kids interdisciplinary thinking or liberal arts. It’s all about what is this human going to do for the hive? How did they score on some useless standardized test that means absolutely nothing except that you can pass a test? It doesn’t have anything to do with true intelligence.
True intelligence is self awareness.
If you have self awareness you don’t identify with your limitations and who you are now. You can always become something more and let go of old ideas about yourself.
Paperspace: One theme that would make sense to apply to a lot of your (public-facing) work is that of futurism. Can you tell us what futurism means to you? Are you exploring the same futurism that writers like Ray Kurzweil or Michio Kaku are exploring? What is your current obsession that you think nobody else is paying enough attention to?
Jeffries: My current obsession has to do with psychology, since I’m finishing up my self-help book.
As I wrote the book, I wanted to see how much of my personality and mind was hardware, how much was firmware, and how much was software -- meaning impossible to rewrite, hard to rewrite, and easy to rewrite, respectively. I found out that a lot more is software than we think. Yuval Harari talks a lot about how biologists and AI researchers will show people that they’re mostly just programs and that much of what we thought of as original thought was nothing but a mental heuristic.
Beyond that, I am currently watching CBDCs, or Central Bank Digital Currencies, very closely. I think very few people are paying attention to CBDCs, and they should pay attention fast, because they could easily crush decentralized cryptocurrencies and the dream of freeing money. Most people don’t understand Bitcoin or privacy-preserving coins like Monero or Zcash. They don’t realize that cash is already privacy preserving, and when it vanishes altogether they will be surprised to find the powers-that-be right in their pocket. Central governments have demonized decentralized cryptocurrencies while at the same time figuring out how to warp the dream of privacy-preserving, decentralized money into a centrally controlled surveillance coin in your pocket 24x7.
If we’re not careful you’ll soon have a panopticon payment system in your pocket that will track you more closely than your smart phone ever did and that’s not good at all.
I also think people aren’t paying enough attention to AI ethics.
I’ve done some consulting on building real-world ethics programs for companies, instead of just a stack of weaselly platitudes like most companies produce. I talked about it on a podcast recently with the 2b Ahead think tank in Berlin. When AI is making life and death decisions and deciding who gets hired or promoted or fired, or who gets into school and who doesn’t, or who goes to jail and who doesn’t, we better have some good answers as to how and why those machines made those decisions.
Right now we don’t have good answers and that’s scarier than any killer robot Hollywood fantasy.
Paperspace: In your take on futurism -- does everything have to happen in a contiguous world? Or does your writing allow the possibility of divergent futures whereby some advances are mutually exclusive?
Jeffries: I think of my view of the future as a Monte Carlo analysis of the future. I’m running through lots of possibilities looking for the most likely branches based on all the variables.
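As a toy illustration of that metaphor, a Monte Carlo view of the future just means sampling many possible branches and counting which ones come up most often. The scenarios and weights below are invented purely for the example.

```python
import random
from collections import Counter

# Invented branches and prior probabilities, for illustration only.
SCENARIOS = ["status quo", "decentralized money wins",
             "centralized surveillance coin", "AI co-creation everywhere"]
WEIGHTS = [0.4, 0.15, 0.3, 0.15]

def run_futures(trials: int = 100_000) -> None:
    """Sample many possible futures and report how often each branch wins."""
    outcomes = Counter(random.choices(SCENARIOS, weights=WEIGHTS, k=trials))
    for branch, count in outcomes.most_common():
        print(f"{branch}: {count / trials:.1%}")

run_futures()
```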
But no futurist can foresee black swan events, like the invention of the Internet. Once you get the Internet it invalidates all predictions before it because it changes everything. Once someone invents the printing press the whole world changes and every prediction has to change with it.
So at best you can say: if we keep going like this, the future will look like that. It’s only as good as what we know now and how good you are at stepping outside of yourself and seeing the world from lots of different perspectives.
Paperspace: One of the challenges out of the box with futurism is the difficulty (or impossibility) of having epistemological certainty -- that is, how do you know you know what you know? Epistemology is always mind-numbing but in the context of the theoretical space of futurism (it hasn’t happened yet) it seems like you need a pretty malleable theory of how the world works to account for all the wonderful new stuff coming out every day.
Is this sort of the appeal of thinking deeply about this stuff -- that you have to relentlessly re-optimize? Do you ever have moments when you’re like -- oh damn! -- and some big new development shifts a load-bearing beam in your logical system? Some of us were just about speechless when the GPT-3 demo was made public -- have you had a similar experience?
Jeffries: I’ve gone pretty deep down the rabbit hole of what we can really know in my Rick and Morty and the Meaning of Life article. That’s an endless hole but it’s also mostly a game people play with themselves. Okay, you can’t know much except “I exist” and a few other things. But once you stop playing that game and you zoom out there are some pretty consistent heuristics and algorithms running at the human and culture and societal level. And there are pretty clear rules if you can get out of your own way long enough to see them.
Someone tweeted the other day that you can’t have absolute truth -- in other words, that everything is relative. I tweeted back, “If you jump out of a fifty story window you’ll find out that gravity doesn’t care what you think.” There are some hard and fast rules to the world, and they’re not hard to see if you open your eyes and look closely enough.
Nothing is hidden from us. We just hide it from ourselves.
I consider myself able to adjust my thinking no matter what new evidence or experience comes along. If something changes the way I think or see things, then all the better. Rigidly holding onto past understandings is the path to stagnation. As long as we’re alive we have the opportunity to grow and we should do it.
Paperspace: There's a lot of nuance and important ideas there. Is that addressed in your upcoming book? What is the next big project you're working on?
Jeffries: The next big thing for me will be my self-help book, and it will be a big shift for me, but I hope it will help a lot of people. I didn’t just want to write something where I passed off some stuff I picked up in other books as legitimate. I can’t stand parlor tricks like fire walking. That’s not changing anything. It’s just a magic trick. I did real, hard work on myself and I changed my life.
As I wrote the book, I tested everything on myself and people around me and I threw out more than I kept. In most cases I had to evolve my own techniques to counter habits of my mind but in the end I did it.
I’m proud of the book. It’s the best damn thing I’ve ever written by far. I hope it helps people live a life that’s amazing instead of just doing what they’re shown and accepting the mediocre life they’ve been handed.
Make sure to stay up to date with all of Dan’s work as an author, engineer, pro-blogger, podcaster, public speaker, and Chief Technical Evangelist at Pachyderm, as well as his musings on Twitter and his work with the Practical AI Ethics Alliance, the AI Infrastructure Alliance, and the canonical stack for machine learning.