Artificial intelligence is making automation faster, broader, and more disruptive than ever before. We’ve long been confident that creativity is the one aspect of human nature that machines could never replicate, but it takes just a quick glance at recent AI advancements to realize that the next Spielbergs and da Vincis might be made of silicon.
Text by Benjamin Piñeros
Image by Amos Mulder
Today, the global economy, local politics, and the very way we interact with each other are all influenced in one way or another by artificial intelligence. Voice assistants are becoming a habitual presence in households, Netflix decides what films we consume, and self-driving cars are just around the corner.
The Cambridge Analytica scandal showed us the extent to which the world’s electorate can be manipulated through social media, and today there’s a new startup developing an AI capable of identifying the ideological bias of a piece of news and replicating it.
Have you heard of Google’s Duplex project? Its AI is so advanced that it can already sustain such natural conversation that it fools humans.
All this new technology is boosting the development of automation at such a pace that academics can’t agree on whether we’re crafting for ourselves a world of sunshine and roses or digging humanity into a hole.
Historically, the results have been sweet and sour. Ever since the explosion of the Industrial Revolution in the late 18th century, automation has been the main driving force of economic growth and technological progress. In a little over a century, the whole world transitioned from agrarian economies to a global industrial economy. Life expectancy has risen to heights never seen in history, and overall standards of living have risen dramatically across the planet.
The past two centuries have also witnessed 1% of the world’s population amass half of the globe’s net wealth, a maldistribution that had never appeared at this scale in human history. The gap between a billionaire and a blue-collar worker today is astronomically greater than the gap between a king and a peasant in the 17th century.
Textbook capitalist economic theory suggests that the higher productivity that results from automation speeds economic growth and thus encourages more consumer spending. The jobs that are lost are quickly compensated for by increased labor demand.
But this notion is as complex as it is controversial. If this process of compensation were so clean-cut, more than two hundred years of continuous economic progress would by now have produced an inclusive society where the prosperity that all of us have created with our labor was broadly shared. The more than two billion people who live in poverty today tell a whole different story.
It turns out that, more often than not, innovation displaces more jobs than it creates, and the accumulated effect of decades of savage automation is becoming unsustainable.
Companies are investing the profits of high-rate production in ever better automation technology, cutting production costs at dizzying speed. Jobs are now being lost faster than they are created, making an increasing share of the labor force redundant.
Karl Marx argued that there was a “trap” in the capitalist model, one in which no true compensation existed, in either the short or the long run. For Marx, constant competition would force companies into a never-ending cycle of investment in labor-saving technology to cut their production costs, hiring fewer and fewer workers for lower and lower wages.
For the German philosopher and economist, there was no way the worker would have a happy ending under this system; unemployment and poverty were technological issues at their core.
In many ways, that is exactly what we’re seeing today. New technologies are generating massive amounts of wealth for very few, while the rest of the world is left with the scraps, as our historic levels of income and wealth inequality suggest. It’s capitalism’s original sin, and the social turmoil currently erupting all over the world is a sign that this pressure cooker is about to explode.
We’re still trying to reach a consensus on the impact of the automation of goods and services on the global economy and its repercussions for society. The means to patch those problems are precisely at the core of the tug of war between left and right. But while we’re locked in this discussion, another boogeyman, maybe an even more perverse one, is starting to peek over the horizon. What will happen to human society when our very culture is also automated?
If it’s hard enough to imagine a world where all production and service processes are performed by machines, now think of a scenario where not only our soap and clothes but also our culture is made on an assembly line. Although we’ve seen glimpses of this before, never have we faced the possibility of having all of our art and entertainment created entirely by machines.
After the first wave of rock’n’roll, we saw the appearance of the Brill Building approach to music production and the rise of Motown Records during the ’60s, where musicians were put into cubicles and paid to churn out hits like workers assembling toys on a conveyor belt.
“Every day we squeezed into our respective cubby holes with just enough room for a piano, a bench, and maybe a chair for the lyricist if you were lucky. You’d sit there and write and you could hear someone in the next cubby hole composing a song exactly like yours,” said legendary songwriter Carole King, describing the workflow at publishing houses of the time in Simon Frith’s 1978 book The Sociology of Rock. “The pressure in the Brill Building was really terrific — because Donny (Kirshner) would play one songwriter against another. He’d say: ‘We need a new smash hit’ — and we’d all go back and write a song and the next day we’d each audition for Bobby Vee’s producer.”
During that same decade, Andy Warhol’s Factory satirized the commercialization of art by turning underground expressions into pop culture and adopting a similar “industrial” approach. And of course, we’ve seen the Hollywood studio system and Japanese animation factories producing content at an industrial rate for decades now.
But in this new AI revolution, we’re facing for the first time a cycle of creation completely devoid of human intervention. It’s a new scenario that sounds as intriguing as it feels sacrilegious.
Our time will be remembered as the era when AI-assisted art came to prominence. What will this computer-inspired, computer-created art look like? And even more importantly… will it prove that art is not the exclusive expression of human experience?
1. We’re one step away from computers composing film scores.
Iamus is a cluster of computers at the Universidad de Málaga that in 2010 composed the first fragment of professional contemporary classical music ever written by an artificial intelligence.
Composers like the Greek-French theorist Iannis Xenakis have been using computers to create music since the ‘60s, and we all know samples, MIDI, synthesizers, and other electronic means have been used in popular music for more than half a century now.
In the 1980s, computer scientist Kemal Ebcioğlu developed a program called CHORAL, which harmonized chorales in the style of Johann Sebastian Bach; another program, GenBebop, developed in the early 1990s by cognitive scientists Lee Spector and Adam Alpern, was trained to improvise jazz solos in the style of Charlie Parker.
Music composition and performance assisted by software and other electronic means is not new, but what makes Iamus’s creations different from previous attempts is that it doesn’t emulate existing styles, nor is it programmed to build on the work of past composers. Iamus composes original music autonomously.
The London Symphony Orchestra recorded Iamus’ compositions in 2012, in what is probably the only album so far to be written entirely by a computer and played by human musicians.
2. There are already piles of produced screenplays written by artificial intelligence.
In 2016, the three-minute short film Do You Love Me became the first film co-written by an AI to go viral.
Cleverbot is an online conversational AI capable of holding a believable, text-based chat with a human. Filmmaker Chris R. Wilson asked the software to come up with the film’s character names, the dialogue, and even the title.
What’s even crazier, Huffington Post journalist Bianca Bosker asked Cleverbot to review the film. “Well, I like movies, I just don’t like watching them,” said the AI in the conversation.
Sci-Fi-London’s 48hr Film Challenge is an annual contest in which filmmakers have only two days to write, shoot, and edit their films. To make things even more interesting, contestants are given a series of prompts they have to include in the film, like certain props or specific lines of dialogue.
For the 2016 edition, BAFTA-nominated filmmaker Oscar Sharp and NYU AI researcher Ross Goodwin created Sunspring, a nine-minute short film entirely written by a bot called Benjamin. The artificial intelligence wrote the screenplay like any human would, including scene directions and dialogue.
Benjamin was developed to be a self-improving AI trained with hundreds of screenplays of sci-fi TV shows and movies mostly from the 1980s and 90s.
The same team did it again a year later, setting Benjamin to pen a new meta-infused script about an AI that steps in during a Hollywood writers’ strike to churn out a screenplay for a film headlined by David Hasselhoff.
When we think of prolific authors, the names of Isaac Asimov, Agatha Christie and Stephen King might come to mind. But did you know that the most published author in history, with more than 200,000 titles under its belt, is an AI?
Professor of management science Philip Parker has developed software that collects publicly available information from the internet and, with the aid of dozens of computers and a small team of six or seven programmers, digests it and turns it into a book. The software is capable of writing anything from textbooks to crossword puzzles, from poetry to scripts.
Say Mr. Parker sells only one copy of each title for $1. It doesn’t take a mathematical genius to figure out that such a strategy is a profitable one, a path many publishing houses, authors, and screenwriters will be tempted to follow in the near future.
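If you want the arithmetic spelled out, here’s a back-of-the-envelope sketch in Python. The 200,000 titles come from the article, the $1 price is the thought experiment above, and the per-title cost is my own assumption:

```python
# Back-of-the-envelope math on Parker's publishing model.
titles = 200_000                # from the article
price_per_copy = 1.00           # dollars; one copy each (the thought experiment)
marginal_cost_per_title = 0.05  # assumed: compute and storage per generated book
profit = titles * (price_per_copy - marginal_cost_per_title)
print(f"${profit:,.0f}")        # -> $190,000 from a single sale per title
```

Because the books write themselves, nearly every dollar of that single sale per title is profit, and every additional copy sold is almost pure margin.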
A screenwriter that’s cheap, non-unionized, does exactly what it’s told, and always delivers on time is practically a Hollywood producer’s wet dream come true. Are we on the verge of artificial scribes ruling the market?
While artificial intelligence takes over the screenwriting world, Hollywood is already adopting the technology to help them evaluate scripts and determine their worth.
ScriptBook, for example, is a company whose cloud-based artificial intelligence analyzes movie scripts and delivers a detailed report that includes possible flaws in the story, the likely MPAA rating, territory-by-territory forecasts, target demographics, and box office predictions.
“The AI-powered system has learned what storyline characteristics affect the box office (positively or negatively) and to what degree,” they boast on their official website.
Disney has also jumped on the AI bandwagon, partnering with Thelma & Louise actress Geena Davis and her Institute on Gender in Media to develop GD-IQ: Spellcheck for Bias (Geena Davis Inclusion Quotient), software that analyzes scripts to find underrepresentation and social bias.
Co-developed by the University of Southern California Viterbi School of Engineering, the AI examines a screenplay and tracks how many characters are part of the LGBTQIA community, how many people of color are featured, and how many disabled people are included in the story.
3. Netflix’s Big Data approach with House of Cards.
Netflix has put into place probably the biggest and most sophisticated data analysis apparatus ever used in the entertainment industry.
They save every search you make; they know how many times you paused or fast-forwarded a show, and the precise moments when you did it. They log the exact location from which you play your shows and the device you use. They know what genre you prefer on weekends, and which you fancy on Mondays.
What’s even more fascinating (and creepy), they analyze the performance of the content itself. Netflix captures screenshots of the show you’re watching to note which colors, scenery, and other traits you prefer. They also consider the volume at which shows are played, the length of the credits, and other variables to determine what works with the audience and what doesn’t.
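To get a feel for what this kind of telemetry looks like, here’s a hypothetical sketch of a single playback event. The field names are invented for illustration; Netflix’s real schema isn’t public:

```python
# A hypothetical playback event, covering the signals described above.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlaybackEvent:
    user_id: str
    title_id: str
    event: str             # "play", "pause", "rewind", "fast_forward", "search"
    position_seconds: int  # where in the show the event happened
    device: str            # "smart_tv", "phone", "browser", ...
    location: str          # coarse region the stream was played from
    timestamp: datetime    # lets analysts split weekday vs. weekend taste

event = PlaybackEvent(
    user_id="u123", title_id="house_of_cards_s01e01",
    event="pause", position_seconds=1742,
    device="smart_tv", location="US-CA",
    timestamp=datetime(2013, 2, 1, 21, 30),
)
```

Multiply one record like this by millions of subscribers and years of viewing, and you get the dataset the next paragraphs describe.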
Back in 2011, Netflix analyzed all its data and, on that basis, decided to embark on a $100 million remake of the BBC miniseries House of Cards.
They had found a strong correlation between fans of the original show and devotees of David Fincher’s and Kevin Spacey’s work. The interest, at least on cold data, suggested the material had a promising audience. And it worked.
House of Cards went on to become the most popular show in Netflix’s roster of original content, earning positive reviews and a slew of awards and nominations, including 33 Primetime Emmy Award nominations and eight Golden Globe Award nominations.
“We don’t have to spend millions to get people to tune into this,” Steve Swasey, Netflix’s V.P. of corporate communications, told GigaOm at the time. “Through our algorithms we can determine who might be interested in Kevin Spacey or political drama and say to them, ‘You might want to watch this.’”
Without the commercial pressure for short-term results or the need to appeal to a board of stakeholders or any market in particular, the independent and auteur scene has always been the arena where artists experiment with new styles and develop the trends that later feed popular culture. It’s precisely in that realm of unfettered expression that the boundaries of the art form are pushed and the “next big thing” is born.
Without films that performed poorly commercially, say 2001: A Space Odyssey or Blade Runner, we wouldn’t have Star Wars or the gazillion other successful sci-fi flicks that those groundbreaking films inspired. Nobody would have known at the time that they “needed” poetic explorations of space and time, or a gritty and pessimistic vision of the future. These just didn’t exist before and there was no reference for such a product.
To me, chewing on what people want and regurgitating it back at them sounds like a vicious cycle that can stifle the advancement of the medium. It reminds me of that famous (and maybe misattributed) Henry Ford quote: “If I had asked people what they wanted, they would have said faster horses.”
More often than not, that film, song, or painting that speaks to you more deeply than any friend or relative you know in real life, that work of art that is so good it saves you, comes from the place you least expect. It’s born out of novelty and genuine expression. The danger of adopting a cycle of creation where we only get “what we want” is that before long we might end up with a culture based solely on pornography and cat videos.
4. IBM’s Watson can edit a film faster than you.
In 2016, 20th Century Fox and IBM partnered to create the first movie trailer ever devised by an artificial intelligence.
Morgan is a Frankenstein tale adapted to our times, about an artificial organism that turns against its creators. To fit the premise of the movie, Fox contacted IBM about using its “celebrity” AI Watson (which famously won $1 million on the game show Jeopardy! in 2011) to create the trailer for the movie.
They first fed the AI the trailers of a hundred horror movies so it could learn to classify which types of shots, sound design, objects, and scenery have the most “scare factor” on us puny humans.
Then they gave the whole movie to Watson so it could identify the ten scariest moments that could make up the most effective trailer possible.
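Conceptually, the pipeline looks something like the sketch below: train a classifier on labeled trailer shots, then score every scene of the full film and keep the top ten. The feature set and model here are assumptions for illustration, not IBM’s actual Watson system:

```python
# A conceptual stand-in for the two-stage "scariness" pipeline.
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# Stage 1: learn "scare factor" from shots taken out of 100 horror trailers.
# Each row describes a shot with features this kind of analysis might
# extract: audio level, color tone, faces, cut rate, etc. (all assumed).
trailer_shots = np.random.rand(5000, 16)   # stand-in feature vectors
is_scary = np.random.randint(0, 2, 5000)   # stand-in labels
model = GradientBoostingClassifier().fit(trailer_shots, is_scary)

# Stage 2: score every scene of the full movie and keep the ten scariest.
movie_scenes = np.random.rand(300, 16)     # one row per scene of the film
scare_scores = model.predict_proba(movie_scenes)[:, 1]
top_ten = np.argsort(scare_scores)[::-1][:10]  # indices of the scariest scenes
```

The point of the sketch is the division of labor: the machine ranks candidate moments, and a human editor still assembles them into a trailer.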
The real “scary” part of this process is that while a human editorial team spends anywhere from a few weeks to a couple of months combing a film for the most appropriate bits for a trailer, it took just a day from the moment Watson “watched” Morgan and selected the shots to the moment IBM’s resident filmmaker cut the trailer and delivered the final product.
Even more impressively, the scenes Watson picked as the scariest were different from the ones the human editors chose to use in the official trailers for the film.
5. Deepfakes and blurring the line between what’s “real” and what’s not.
We’re going through an era scholars call “post-truth,” in which fundamental notions of objective reality are questioned and often disregarded. Ironically, despite this being a time of unprecedented technological and industrial advancement, it is also an age of “anti-enlightenment,” where opinions and beliefs are posed as truth above scientific fact. Once-conquered diseases like measles are reappearing because people refuse to vaccinate their children, and millions of people today believe the Earth is flat.
The 2016 United States presidential election, the defeat of the Colombian peace referendum, and Brexit are key moments that illustrate a time when the concept of truth is no longer universal, and science and fact are losing their power to shape public opinion.
And amid this chaotic landscape, we’re witnessing the appearance of a technology capable of completely destroying the boundary between reality and fiction.
“Deepfake” (a portmanteau of “deep learning” and “fake”) has become the term for an image or video in which a person is convincingly replaced by another via artificial neural networks.
The technology is based on machine learning techniques known as autoencoders and generative adversarial networks (GANs). The tools to create a deepfake are built on open-source libraries widely available to the public, and you only need a couple dozen pictures to make one. With a decent consumer-grade graphics card, anyone can process a deepfake in just hours. The technique can also be used to fake audio, which gives you the power to make anybody say practically anything.
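To make the idea concrete, here’s a minimal sketch of the classic deepfake autoencoder architecture: one shared encoder learns a generic “face code,” and one decoder per identity learns to reconstruct that person’s face. Swapping means encoding person A and decoding with person B’s decoder. The layer sizes and training loop are toy assumptions, not any real tool’s implementation:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 face crop into a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent "face code"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: rebuilds a face from the shared latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): reconstruct each person's faces through the shared
# encoder and their own decoder, minimizing pixel-wise error.
faces_a = torch.rand(8, 3, 64, 64)           # stand-in for aligned face crops
recon_a = decoder_a(encoder(faces_a))
loss = nn.functional.mse_loss(recon_a, faces_a)

# The swap: encode person A's face, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns pose and expression rather than identity, which is what lets B’s decoder paint B’s face onto A’s performance.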
As expected, among the first uses people found for this powerful new technology were inserting celebrities into porn videos and making Nicolas Cage star in practically every movie ever made.
Raunchy experiments and jokes aside, this technology is evolving so rapidly that governments are starting to take notice, as its application in media, courts of justice, and politics could be cataclysmic.
Similar technology is on the verge of being used in animation at scale. Japanese researcher and programmer Yuichi Yagi is developing software that uses artificial intelligence to create in-between frames. In 2017, Yagi joined forces with telecommunications company Dwango and production company Mages to create sequences for the anime series Idol Incidents, and the results are mind-blowing. If this technology keeps developing at the current pace, in-between animators could be made redundant in less than a decade.
(The original versions on the left, the AI-enhanced versions on the right)
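To see what “in-betweening” actually means as a computational task, here’s the naive baseline: a pixel-wise cross-fade between two keyframes. It ghosts badly on any motion, which is exactly why learned systems like Yagi’s are needed; this sketch only illustrates the problem, not his method:

```python
import numpy as np
from PIL import Image

def naive_inbetween(frame_a, frame_b, t=0.5):
    """Blend two same-size keyframes; t=0.5 gives the halfway 'in-between'."""
    a = np.asarray(frame_a, dtype=np.float32)
    b = np.asarray(frame_b, dtype=np.float32)
    mixed = (1.0 - t) * a + t * b   # pixel-wise cross-fade: ghosts on motion
    return Image.fromarray(mixed.astype(np.uint8))

# Toy demo with two flat-color "keyframes"; a real in-betweener has to
# understand how the drawing moves, which is where the AI comes in.
key_a = Image.new("RGB", (64, 64), (255, 0, 0))
key_b = Image.new("RGB", (64, 64), (0, 0, 255))
middle = naive_inbetween(key_a, key_b)
```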
The technique can also be used to autonomously create character concept designs. Crypko is an incredibly advanced character generator powered by a GAN that produces unique anime faces almost indistinguishable from the work of a professional illustrator.
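For the curious, the core GAN idea fits in a few lines: a generator maps random noise to an image while a discriminator learns to tell generated images from real drawings, and training pits them against each other. The dimensions below are toy assumptions, nothing like Crypko’s production model:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),   # noise -> flattened 64x64 image
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # image -> probability it's real
)

noise = torch.randn(16, 100)
fake_images = generator(noise)
realism = discriminator(fake_images)  # the generator trains to push this
                                      # toward 1, the discriminator toward 0
```

Each random noise vector yields a different face, which is how a system like Crypko can produce an endless stream of unique characters.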
This technology will not only disrupt the economic and artistic landscape of the film industry; it also poses ethical questions.
Osamu Tezuka, probably the most influential manga artist of all time, passed away in 1989, leaving an extensive legacy that includes Astro Boy, Buddha, and Kimba the White Lion (which Disney blatantly ripped off to make The Lion King). Thirty years after his death, Japanese tech firm Kioxia, with the blessing of Tezuka Productions, is working on a project to create new art by the departed master.
Kioxia, formerly known as Toshiba Memory, will feed its software Tezuka’s body of work so the AI can analyze his style and reproduce it. The project will launch in 2020 under the, let’s say, uncomfortable title “The Resurrection of Osamu Tezuka.”
Movie studios are already pushing the boundaries of ethics, scanning actors’ performances to use in as many projects as they like, even after the actor’s death. The first deceased performer to be recreated for a film was Brandon Lee in The Crow back in 1994, and since then we’ve seen Carrie Fisher return to the Star Wars universe, Audrey Hepburn star in an advert for Galaxy chocolate, and Bruce Lee reanimated for a Johnnie Walker Blue Label whisky commercial.
Paul Walker, who was recreated via VFX wizardry after his death for Furious 7, is rumored to return once more in the upcoming installment of the series. Pandora’s box has been opened, and it seems this is a train nobody can stop.
Is our culture on the verge of becoming one endless circle jerk? And even more importantly, what will happen to human society when artists become extinct?