
Bull Session

AI and Music

September 7, 2017          

Episode Summary

On the podcast this week, we discuss artificial intelligence and music with special guest Pierre Barreau, CEO of Aiva. Aiva (Artificial Intelligence Virtual Artist) is an AI composer. Aiva has created music used in the soundtracks for films, advertising, and games, and is the first virtual artist to be recognized by an author’s rights society. Join us as we explore how man and machine collaborate to create the future of music.

A New AI Can Write Music as Well as a Human Composer

Welcome to episode 223 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.

Greetings listeners.

For our podcast this week, we’re honored to chat about AI and music with our special guest, Pierre Barreau, CEO of Aiva. Aiva is an AI composer that creates musical pieces used as soundtracks for film, advertising, and even games. Aiva is the first virtual artist to be recognized by an author’s rights society, and in fact, the music that you heard for the show intro today was composed by Aiva. Pierre, welcome to the show.

Hi. Thank you so much for having me.

So, let’s jump right into it. What’s the origin of Aiva? How did you approach creating an AI musical composer? How did Aiva come to be, Pierre?

Right. So, I mean, I saw this science fiction movie, "Her," with Scarlett Johansson. In this movie, there is an AI that composes a beautiful piano piece for a moment that the AI is sharing with her companion. That particular moment really struck a chord with me because I come from a family of artists. My father is a film and music producer, and my mother is a singer.

But at the same time, I studied computer science at university, so I really have a passion for both domains, and seeing that on screen with a story was really powerful. So I then decided to find out what would happen if that was actually real, if an AI could actually compose a beautiful piano piece. Yeah, that's pretty much how I started my journey, and how our team started the journey into AI-generated music.

So are you a coder as well? When you saw the film, did you just rush to the computer and start coding? Or, how did the team come together? How did you create this organization?

It was more of a long process. First, it was asking myself questions like: what are the implications of such a technology if it exists? How can we make sure that we use it to benefit composers and not replace them? Those were my initial thoughts, and then I assembled my team from friends who are interested in the same topics in music. I mean, everybody at Aiva is actually a musician, so that's really how we got together, out of a passion for both technology and music.

Oh, wow. So your entire team are musicians as well? I don't know if I would have guessed that a group of musicians would be creating this next-gen AI tool, which frankly could be a little bit frightening if you're a musician and, like you said earlier, you think you might be replaced. But clearly you're okay with it, and you don't see Aiva as a musician replacement.

I mean, that's really what we're selling to people in general, to clients, but also to the public. Yeah, we're doing this knowing that there are some concerns about AI being used for the creative arts, but at the same time, as musicians, we think that the power of AI can really enhance human abilities to do completely new things that could never have been done before with human effort alone, if that makes sense.

Yeah, that does. Let’s dig into the service a little bit, into Aiva. How does the AI compose music? What’s the interplay between input from human beings and machine learning?

Right. So, basically Aiva works like most deep learning algorithms out there. It learned from 15,000 pieces of classical music, music written by Mozart, Beethoven, Bach. From that music it builds a mathematical model, which is basically a definition of what music is, to this algorithm. Then, using this mathematical model, it can create totally unique content.

The idea in the beginning was to create an AI that can compose themes really quickly, compelling themes like the ones you hear in content such as video games or films. The AI starts by composing fully fledged polyphonic compositions for piano. Then we work with humans, who have an input in transcribing this piano work into a work for orchestra. That's the influence that humans have, and at the same time, we get the benefit of the AI composing themes extremely quickly for our clients.
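[Editor's note: Aiva's actual system is a deep learning model whose internals aren't described here. As a toy illustration of the learn-then-generate loop Pierre sketches, building a statistical model from existing pieces and then sampling new material from it, here is a minimal Markov-chain melody generator. The note names and the tiny "corpus" are invented for the example; a real system learns from thousands of full scores.]

```python
import random
from collections import defaultdict

def train(melodies, order=1):
    """Learn note-transition statistics from a corpus of melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for i in range(len(melody) - order):
            state = tuple(melody[i:i + order])
            table[state].append(melody[i + order])
    return table

def generate(table, seed, length=16, order=1, rng=None):
    """Sample a new melody by walking the learned transition table."""
    rng = rng or random.Random(42)
    melody = list(seed)
    while len(melody) < length:
        state = tuple(melody[-order:])
        nexts = table.get(state)
        if not nexts:  # dead end: this state was never seen mid-phrase
            break
        melody.append(rng.choice(nexts))
    return melody

# Invented two-melody corpus, just enough to show the mechanics.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "D", "C"],
]
table = train(corpus)
new_melody = generate(table, seed=["C"], length=8)
```

The generated melody recombines transitions observed in the corpus, which is the sense in which the output is "unique content" derived from a model of existing music, though a Markov chain is far cruder than the deep networks Pierre describes.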

So you’ve done some Turing tests with Aiva’s music. What were the results of those and how did you stage those tests?

Those were really interesting actually, because we did them at the very beginning, once we had a working prototype. Basically, we showed the piano compositions to professional artists without telling them who composed the music, and they weren't able to say, "This is a weird or machine-made composition." It definitely passed the test. It's really interesting, because if you ask someone to find something wrong with a composition by an AI, and they know an AI composed it, they will probably find something that is wrong, or at least they will think the thing they're looking for is wrong.

But actually, in most cases, it might just be bias, human bias, because it’s hard to relate to something that does not exist as a being, that has emotions. Yeah, the music Turing test was successful when people obviously were professionals and didn’t know that the AI composed the music.

That’s completely fascinating. After the test, did you reveal to them, to the professionals which was which? If you did, what was their reaction?

Yeah, totally. We did reveal it, and their reaction was that they were pretty much blown away. The interesting thing is that from this start, you get a much more positive conversation about what AI can do for music. If you start by showing a composer music composed by an AI, and they know it's composed by an AI, then the conversation might start on a more negative note, and it's harder to exchange feedback and points of view. But here we were able to start very meaningful conversations about how we can make this technology work not just for us as a company, but for everybody eventually, because that's our vision: creating music, or at least creating tools, for everybody to interact with.

So make that concrete. Pretend that I am a composer and you’re selling to me. What is the real language you use when you’re trying to convince me that what you’re offering is something that I should desire and find exciting instead of be afraid of?

Right. So, composer's block is probably one of the most [credible 00:08:38] things when you're a composer, staring at a blank sheet of paper, starting from nothing. Essentially what AI can do is bring some ideas to the table, and you have to know that humans are very good at taking an existing idea and building on it. As a composer, you can use these themes created by the AI to shape your next composition, and that's actually a pretty powerful thing, because if you think about it, human composers are generally stuck in a local minimum. They do what they do best, and that's great, but sometimes they don't go out there and do something completely different, or create music in a style that they're not used to.

AI can enable those things, because it gives you ideas that you would never have thought of by yourself. I think that's really where the power of AI lies for individuals. You have the AI doing all the dirty work of coming up with something, and then you, as a human, use it to build something even better.

That's cool. I mean, AI as an augmentation of human creativity makes a lot of sense, but inevitably, the next step, or an eventual step, would be the AI just taking over as the composer, right? Isn't that inevitable somewhere down the road, or do you think not?

Really interesting question. I think it's not going to happen, because, for example, if you're around a campfire with your friends singing a song and someone is jamming on the guitar, the person playing will most probably not play as well as the original recording of the song, so if you wanted a better listening experience you would go pick up your headphones and hear the actual recording. But actually, you enjoy hearing the person play the music because there is a social aspect. It's not just about creating music for quality; it's also social, fun, and collaborative.

Also, if you think about modern music, we're heading towards music that is not just composed; it's more about sound than composition. With classical music, the [inaudible 00:11:11] is on the composer. If you have two different orchestras playing the same classical piece, they will probably sound different, but not by much. If you listen to a famous rock band playing their tune, you will recognize the singer's voice, the way the guitarist shapes his guitar sound, et cetera. If you have someone other than the same [inaudible 00:11:36] guitarist playing that same guitar tune, for example, you will hear a difference.

The emphasis is more on sound in modern music, and I think sound is a lot harder to crack. The problem is a lot harder for AI because it's so much more computationally expensive, and it's also not as meaningful a problem to solve, I would say.

Your point about interaction makes a lot of sense, but do you have a theory of what that looks like? As I was listening to you talk, I was imagining, "Okay, sure, in social, non-professional contexts, people are composing and sharing and performing music," but the AI has taken over providing music for movies, for ads, for professional paid contexts, because it can do it cheaper, with a wider variety of outputs. Is that conclusion wrong-headed, or could it break out that way, along paid-professional versus amateur-enjoyment lines?

So, there's also the case, and this is a little bit off topic, where you go to a concert. I wouldn't say that music made by humans in the future is just going to be about social interactions with your friends; if you go to a concert, you want to see a performance. It's not just music, it's about connecting with the musician on stage, and I think AI is very, very, very far away from replacing that. I think it will probably never replace that.

But regarding AI composing all entertainment content, it's possible that someone could come in and create all the music for all the content out there, but what we are focusing on as a company is solving use cases that humans cannot. For example, take a video game: these days big video games have hundreds of hours of content, but only two hours of music. If you think about that, it's pretty crazy, because it means you hear the same music 50 times over. If you're a musician, sometimes you actually mute the music and put on your own. I think AI is really about solving those things that cannot be solved by humans, and that's really where the value of what we're trying to create is.

That's great. Tell us about your plans for Aiva's future. You've started with the classical style, and now, I assume, you're going to branch out into other styles. How do you see that happening? Or is it on your roadmap?

Absolutely. Actually, we are right now training algorithms on more modern music, but in terms of technical milestones, I would say we're more focused on getting some feedback from the artificial intelligence. Right now, if we want to evaluate a model that we created, we need to do that by just listening to samples composed by the AI, and that's very tiresome and not very [inaudible 00:14:59], because quality is very subjective. One person might hear something and say it's good; another person will say, "Well, it's bad." Yeah, it's very subjective, and we're trying to essentially create a musical ear, a metric for the quality of the compositions created by the AI.

That will allow us to do a lot more interesting things, like having control over what happens when you get feedback from a human saying, "I like this music, I like where this is going," so that the music can then follow the user's taste more closely. That's pretty much what we're focusing on right now.
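[Editor's note: Aiva's actual "musical ear" metric is not described in the episode. As an invented sketch of what an automatic quality signal could look like, the toy scorer below rates a melody (given as MIDI pitch numbers) on two naive heuristics: how many notes fall in a target key and how stepwise the motion is. The heuristics, scale choice, and weights are all illustrative assumptions, not Aiva's method.]

```python
# Pitch classes of the C major scale (C=0, D=2, E=4, F=5, G=7, A=9, B=11).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def score_melody(pitches):
    """Score a melody in [0, 1]: half for in-key notes, half for stepwise motion."""
    if len(pitches) < 2:
        return 0.0
    in_key = sum(p % 12 in C_MAJOR for p in pitches) / len(pitches)
    steps = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
    smooth = sum(s <= 2 for s in steps) / len(steps)  # intervals of a step or less
    return 0.5 * in_key + 0.5 * smooth

scale = [60, 62, 64, 65, 67]   # C D E F G: in key, stepwise
jumpy = [60, 73, 50, 78, 61]   # wide leaps, mostly out of key
```

A scorer like this could, in principle, rank generated samples so a human only auditions the most promising ones, which is the workflow problem Pierre describes; real systems would learn the metric from data rather than hand-code it.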

That’s very interesting. So, you mentioned this a little bit in some of your earlier answers, but as you see AI tools like Aiva evolving with human musicians, how do you see the future of music shaping up in the next say five years or so? Clearly there’s going to be collaboration between man and machine, and there always has been in music, but given the power of these new AI tools, what do you see happening, Pierre?

Right, so I think that as a first step, you will see people, for example, who are vocalists. They need someone to compose their tune because maybe they don't play an instrument. They just sing. These people will use AI to complement themselves, and it will help them get started, because if you don't have money to hire someone to compose your tune, it's probably tough to get started. Or likewise, if you play the piano but would like to play with an orchestra, and you don't have the money or there's no local orchestra nearby, you can do that from your bedroom.

I think that, in the same way that musical software and [patron 00:17:09] software, digital audio workstations, have opened music up to everybody, AI will take that a step further.

Yeah, that seems to me to be a good next step. I mean, I'm a musician myself and I use a lot of software tools to help create music, and I can definitely see how AI could enhance those tools, so I could either have better accompaniment, as you said, or perhaps better choices in the things I'm composing. I wonder, though: there is some controversy, and perhaps there always will be, over people who use software to create their music versus "learning the acoustic instruments," whether that be piano or guitar or what have you. How do you see that controversy playing out? Or does it even matter anymore? Are they all just tools for creating music, and that's the totality of the story?

I mean, it's a very good question, and I don't think I can give an answer that will satisfy everybody. But think about what virtual instruments have allowed people to do: if you don't have money to hire an orchestra to record your music, nowadays you can be a kid with a laptop, create an orchestral score, and hear what you just did. That's pretty powerful. When those virtual instruments first came about, they were not very good, but they still had some potential. [inaudible 00:19:09] people actually said, "Well, that's just going to [inaudible 00:19:12] off the real thing. It's not good. It's just not solving any problem."

I think it's normal to have resistance when something this big comes along, because even we are still trying to figure some things out. I mean, we have a core vision to move towards the personalization of music, as I said, for video games and other interactive content, or even for individual users, but I think these huge shifts take time for everybody to understand. People also need to understand the technology behind it, and to understand that for the programmers behind those technologies, there's some creativity involved in building the tools as well. Yeah, I hope that answers your question.

Yeah. No, that's certainly a good answer. One more question as we wind up. I was interested in the reaction to Aiva's first album, "Genesis." I noticed on SoundCloud that lots of people are listening to it. What has the reaction been? Do you have another album coming? What are the plans, and how has it been received?

So, I would say the feedback is very much in favor of what we're doing. A lot of people are very positive and supportive, and some are completely against us. Sometimes that leads to very interesting conversations, but as with anything that tries to disrupt something, there are always going to be pros and cons. Overall, the feedback so far has been pretty good, even though there are always people who are against it, and that's fine, because as a musician myself, if I were not doing this, maybe I would be very skeptical about it too. I think people are raising questions which are fair.

Regarding the second album, for now we're focusing more on delivering music to our clients, and we're building up an extensive list of music. But I think eventually we will release a second album. First, though, we want to have this big improvement in the technology. I think that would be really relevant, just to see the contrast between the first album and the second, if that makes sense.

Yeah, that does. So, if our listeners today are interested in learning more about Aiva or getting in touch with you, what’s the best way for them to do that?

So, they can do everything on the website. They can listen to the music, they can watch some of the videos we scored, and they can reach out via the contact form. We reply to most requests, so yeah, they can reach out on the website.

Terrific. So, listeners, remember that while you're listening to the show you can follow along with the things that we're mentioning here in real time. Just head over to, that's just one L in TheDigitaLife, and go to the page for this episode. We've included links to pretty much everything mentioned by everybody, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you want to follow us outside of the show, you can follow me on Twitter, @JonFollett. That's J-o-n-F-o-l-l-e-t-t. And of course the whole show is brought to you by Involution Studios, which you can check out at That's Dirk?

You can follow me on Twitter, @DKnemeyer. That’s @D-K-n-e-m-e-y-e-r, and thanks so much for listening. Pierre, how about you?

Yeah, you can go on the website, I don’t really have a Twitter, but yeah, reach out on the website.

Terrific. That’s it for episode 223 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.


Jon Follett

Jon is Principal of Involution Studios and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer

Dirk is a social futurist and a founder of Involution Studios. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformation. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+ and SXSW, along with keynotes in Europe and the US. He has been published in Business Week and has served on 15 boards spanning industries like healthcare, publishing, and education.

Pierre Barreau


Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Dave Nelson

Technical Support

Eric Benoit @ebenoit

Brian Liston @lliissttoonn

Opening Theme @aivatechnology

Closing Theme

Ian Dorsch @iandorsch

Bull Session

My Trusted Robots

June 8, 2017          

Episode Summary

On The Digital Life this week we take a look at designing trust in human-robot relationships. More so than with other technologies, robots require a certain level of trust. Our comfort level with robots will dictate whether we're willing to ride in driverless cars, work on the assembly line with a collaborative robot, or have a health robot caregiver. Designing human-robot relationships will be key to overcoming barriers in the transition to a robot-filled world. But how do we manage the wide variety of human emotional reactions? And what does this mean for the future of robot services?



Most westerners distrust robots – but what if they free us for a better life?

Bull Session

Storytelling and AI

April 20, 2017          

Episode Summary

On The Digital Life this week we explore storytelling, creativity, and artificial intelligence. Our cultural evolution is reflected in our ability to communicate through stories, creating shared experiences and meaning. Recent research from the University of Vermont and the University of Adelaide used an AI to classify the emotional arcs for 1,327 stories from Project Gutenberg’s fiction collection, identifying six types of narratives. Could these reverse-engineered storytelling components be used to build automated software tools for authors, or even to train machines to generate original works? Online streaming service Netflix already uses data generated from users’ movie and television preferences to help choose its next shows. What might happen when computers not only pick the shows, but also write the scripts for them?

The Six Main Arcs in Storytelling, as Identified by an A.I.
The strange world of computer-generated novels
A Japanese AI program just wrote a short novel, and it almost won a literary prize

Bull Session

AI Goes Mainstream

February 3, 2017          

Episode Summary

On this episode of The Digital Life, we discuss the high-powered Partnership on Artificial Intelligence to Benefit People and Society, an initiative whose founding members include Amazon, Facebook, Google, IBM, Microsoft and Apple. Apple was just recently added as a founding member.

The mission of the group is to educate the public about AI, study its potential impact on the world, and develop standards and ethics around its implementation. Interestingly, the group also includes organizations with expertise in economics, civil rights, and research, who are concerned with the impacts of technology on modern society. These include: the American Civil Liberties Union (ACLU), the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University and University of California, Berkeley.

Will AI build upon our society’s biases and disparities and make them worse? Or does it have the potential to create something more egalitarian? Join us as we discuss all this and more.

Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative
Partnership on AI
A massive AI partnership is tapping civil rights and economic experts to keep AI safe

Bull Session

A Year Talking Tech

December 22, 2016          

Episode Summary

For our final podcast of 2016, we chat about the big themes on the show and our favorite episodes over the past year. We had conversations on design and tech with some wonderful guests, including groundbreaking geneticist George Church and open science advocate and researcher John Wilbanks. From AI to genomics to cybersecurity, we covered a wide range of topics on The Digital Life in 2016. So what did we learn from a year talking tech?

AI is too smart for its own good.
Artificial intelligence is evolving rapidly, with both high profile public failure and success by a number of tech giants this year. For instance, Microsoft had to terminate Tay, its teenage chatbot, after the bot started tweeting neo-Nazi propaganda and other abusive language at people. Meanwhile, Google’s DeepMind created an AI capable of beating some of the very best human players in the world at Go, the Asian strategy board game. And, we were introduced to a brand new “Rembrandt”, which was 3D-printed with eerie accuracy by an artificial intelligence algorithm, trained by analyzing the artist’s paintings.

Episode 149: Artificial Intelligence
Episode 151: AI Goes to Art School
Episode 163: AI Goes to the Ballpark

DNA replaces silicon as the new material for innovation.
The fields of genomics and synthetic biology continue to press forward in astonishing ways. In Seoul, Korea, a controversial lab revealed plans to clone endangered animals in order to save them from extinction. At the Massachusetts Institute of Technology (MIT) and Boston University (BU) synthetic biologists created software that automates the design of DNA circuits for living cells.

Episode 148: On Cloning
Episode 150: Engineering Synthetic Biology
Episode 154: DNA as Data Storage
Episode 158: Writing Human Code
Episode 168: The Microbiome
Episode 169: Genomics and Life Extension
Episode 170: Chimeras and Bioethics
Episode 176: Three Parents and a Baby

Hacking and cybersecurity are front and center as online and offline worlds collide.
In 2016, cybersecurity became a primary issue in a host of critical areas including communication, energy, and politics. Power grids, airports, and other infrastructure were increasingly subject to cyber attacks and an increasing number were successful. The debate over privacy and security was reinvigorated by the hubbub around the FBI request of Apple to unlock an iPhone owned by one of the San Bernardino shooters. And, Wikileaks distributed e-mails obtained by sources who hacked the DNC and individuals associated with the Clinton campaign during the U.S. presidential elections.

Episode 139: Hacking Power
Episode 144: Apple vs. FBI
Episode 166: Hacking the DNC
Episode 179: Internet Takedown

The automation of work is coming.
We got another startling look at what the future of work could become as software, robots, and the IoT continued to automate activities previously completed by humans. According to preliminary findings of a recent McKinsey report, 45 percent of all work activities could be automated today using technology already demonstrated. From fulfilling warehouse orders to suggesting medical treatments for ailments, the coming wave of automation will redefine jobs and business processes for factory workers and CEOs alike.

Episode 140: Automating Work
Episode 141: Future Transportation
Episode 145: Robot World
Episode 153: Smart Cities and Sidewalk Labs
Episode 173: Labor and the Gig Economy

Design and science are intersecting in new and significant ways.
Whether it's in the creation of high tech clothing, embeddables, or materials, design and science are coming together in new and significant ways. Clothing designers are working with multi-disciplinary teams, integrating input from engineers and synthetic biologists into their work. From 3D-printed couture to scarves dyed with bacteria to textiles grown in the lab, emerging tech is creating rapid innovation in the fashion industry. And this year, in the burgeoning world of designing embeddables, the U.S. Patent Office approved Google's patent for electronic lens technology, which is implantable directly in the eye. These mechanical eyes might give you superhuman abilities, letting you see at great distance or view microscopic material, and document it all by capturing photos or video.

Episode 143: Clothing and Technology
Episode 155: Designing Embeddables
Episode 161: The Future of UX
Episode 171: Embeddables
Episode 172: Quantum Computing