Black-and-white photo collage showing Ilan Manouach (wearing a beard and a baseball hat) on the left, and somewhat undefined comic characters in a surreal setting on the right.
AI & Automation

Things Lurking on the Fringes

A neat side effect of this little blog is that it continues to serve as a conversation starter with interesting people–like Ilan Manouach, who originally got in touch to discuss synthetic media. After a fruitful first exchange, we decided a proper interview would be in order. In case you haven't heard of Ilan yet: He's a very prolific Greek-Belgian digital artist, musician, writer, researcher, and scholar with a strong interest in AI, avant-garde publishing, post-modern disruption, and–possibly first and foremost–comics in all shapes and sizes. Ilan's current roles include visiting scholar at Harvard's metaLAB and consultant for Onassis Publications. In short, he's a great conversation partner when you're into innovation and next-level ideas.

Ilan, you seem to be a man of many interests and skills. However, this is a media tech blog, so let's talk media tech. A couple of years ago, you started The Neural Yorker–essentially a Twitter bot that posts New Yorker-style cartoons generated by an AI. What was your inspiration to build this–and what are you trying to achieve with it?

First of all: Thank you. And to answer your question: Throughout the years, I've developed an extensive artistic practice exploring how to do comics "differently", using tools and methods from conceptual art, uncreative writing, post-colonial critique and situationist appropriation. Most of my published projects can be described in just a few words: Shapereader is comics for blind people, Tintin Akei Kongo is Tintin translated into Lingala, and so on. My comics are generally produced through sets of basic instructions, not unlike the programmatic operations and algorithmic processes of deep learning. Now The Neural Yorker is a project that I've developed in collaboration with computer scientist Yannis Siglidis. It explores comics from the perspective of automation. It intensifies and amplifies what is called technological mediation by using algorithms to generate new content. Speculations about and comments on the growing role of automation in artistic production have actually been a trope in art debates for more than a century. And the comic industry has always expanded symbiotically alongside the development of printing, distribution, and communication technologies. The Neural Yorker is a continuation of this tradition, if you will.

You've also started offering a service that creates custom AI comics for media organizations–with their domain and location taken into account (if desired). Now I understand automated finance or sports journalism, but how does this thing work exactly?

This service–currently in beta–is open to media outlets that are receptive to The Neural Yorker's quirky, non-sequitur humor. The whole thing works like this: First, the service crawls a list of thousands of online newspapers, periodicals, and blogs, sorted by sector (social news, financial press, sports, etc.), region (local, national, international), and political bearing (progressive, conservative, etc.). Next, our algorithm performs information extraction on the headlines and classifies named entities into predefined categories such as person names, organizations, etc. Using these newly acquired tokens (the Tories, cricket, Boris Johnson, or whatever happens to be in the news that day) and our signature conditional models, the software then generates 10,000 cartoons. When this is done, the service uses a heuristics-informed humor-detection algorithm to assign scores–and eventually picks the funniest AI comics to upload to our deployment infrastructure. The whole operation might seem a little unconventional and inflated, but to paraphrase Raymond Devos, "humor is a very serious matter and should never be entrusted to buffoons!".
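For readers who like to see such a pipeline spelled out, here is a deliberately toy sketch of the steps described above (crawl headlines, extract entities, generate captions, score for humor, pick the winner). Every function, headline, template, and heuristic below is an invented stand-in, not the actual Neural Yorker code:

```python
import random

def extract_entities(headlines):
    # Toy stand-in for the named-entity extraction step: treat
    # capitalized tokens as candidate entities.
    entities = []
    for headline in headlines:
        for token in headline.split():
            word = token.strip(",.!?")
            if word and word[0].isupper() and word.lower() not in {"the", "a"}:
                entities.append(word)
    return entities

def generate_cartoons(entities, n=10):
    # Toy stand-in for the conditional generative model: splice random
    # entities into fixed caption templates.
    templates = [
        "Why is {a} whispering to {b}?",
        "{a} walks into a bar carrying {b}.",
    ]
    return [
        random.choice(templates).format(
            a=random.choice(entities), b=random.choice(entities)
        )
        for _ in range(n)
    ]

def humor_score(caption):
    # Toy stand-in for the humor-detection heuristic: reward lexical
    # variety in the caption.
    return len(set(caption.lower().split()))

random.seed(0)  # reproducible output for this sketch
headlines = [
    "Boris Johnson Defends Cricket Budget",
    "The Tories Debate New Transport Bill",
]
entities = extract_entities(headlines)
cartoons = generate_cartoons(entities, n=10)
best = max(cartoons, key=humor_score)
print(best)
```

The real system presumably swaps each stub for a trained model; the shape of the pipeline, though, is exactly this: extract, generate many, score, keep the best.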

Well, that goes without saying. However, from what I've seen so far, The Neural Yorker's output is intriguing and quite funny, but more in a post-modern way. Do you think you can eventually deliver something as good as the legendary New Yorker comics?

A while back, I listened to a podcast interview with a computer scientist who specializes in NLP (natural language processing) and humor. After months of training neural networks on large databases of punchlines and jokes, she retrieved the following synthetic joke: "Why did the chicken cross the road? To screw in a light bulb!" 

That was Janelle Shane! I actually discussed AI and synthetic media with her last year. But please, carry on...

Well, there was something very alien about the machine's cross-pollinating approach to humor, about its capacity to apply a probability distribution function to a database of jokes in order to infer the fittest candidate according to the following logic: Both 'chicken' and 'lightbulb' seem to be defining features of a joke. So in order to maximize the chances of producing something funny, the two terms should be brought together in the same expression. And it works! Now I just wanted to take this a step further and apply it to cartoons. I believe they represent an art form that is deeply rooted in the 20th century and that caters to readers with a limited attention span–and the urge for a quick visual fix. Cartoons are designed to divert readers from an article's textual density, to captivate their perception, and to leave them hanging in disarray. My goal certainly isn't to level up with human cartoonists or to make them redundant by a generalized global deployment of non-sequitur (when they're already precarious). I am interested in how machines understand humor, and by harnessing the power of our algorithms, I believe I'll be able to pick up patterns, traits and outliers. Things that have been overlooked or lurking on the fringes of the reader's cognition. In the near future, through the advancement of syntax and semantic analyses, we'll probably be able to emulate the whole complexity of natural, spoken, conversational languages. And at some point, we could actually overcome the challenges of language functions that may hitherto have constituted unique human traits such as humor, irony, or ambiguity.


This is getting very philosophical and academic, and I like it, but let's be down-to-earth for a moment: State-of-the-art AI media synthesis is about recognizing, recreating and remixing patterns. The software doesn't really understand what's going on in the world, and it doesn't have a sense of humor (which doesn't mean it's incapable of producing funny results). With that in mind, do you think there's still a way to have a neural network and a couple of deep learning experts distill a universal language–or grammar–of humor? Especially given the fact that puns and punchlines need to reflect what's depicted in a frame, and that they're notoriously hard to translate?

Here, I am reminded of Daniel Dennett's thesis of "competence without comprehension". If machines achieve a level of performance that in human contexts would be ascribed to comprehension, the difference between these terms is a purely epistemic matter. Humor has a social function, entirely dependent on context, language, timing, and mode of delivery. In France and Belgium for instance, the longstanding traditions of irreverent satirical cartoons, such as in the infamous Charlie Hebdo, are a world away from US publications like MAD, National Lampoon or The New Yorker. I've never been convinced by the structuralist mission to set cognitive universal standards for a Rosetta stone of "holy funny". Translation or transcreation might not be the necessary operation here. I'd rather argue that there's now a lack of independence between text and image, or better: there's a subordination of the drawing to the text–which is a sign of a medium in decline. Think of the poetic qualities of Peter Arno, Gary Larson, or Willem–and compare them to the bulk of contemporary cartoons that feature a random character, usually male, at the center of the image, hands in pockets, who delivers some sort of acerbic commentary related to the news. In order to better understand the relevance of cartooning, I commissioned an online survey. And do you know what I discovered? The twenty most popular tags in hundreds of thousands of cartoons are: pets, kids, pet, wife, dogs, husband, kid, wives, husbands, doctor, doctors, boss, manager, job, crime, parents, employee, jobs, parents, manager. Not exactly a foray into non-ordinary states of consciousness. Nevertheless, research on the vibrant traditions of press cartoons at the beginning of the last century reveals the deeply hallucinatory, mind-bending legacy of daily funnies (in that context, check out Dan Nadel's excellent book Art out of Time).
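The tag tally behind such a survey boils down to a frequency count over per-cartoon tag lists. A minimal sketch, with sample data invented purely for illustration:

```python
from collections import Counter

# Invented sample data: each inner list holds the tags attached to one
# cartoon in a hypothetical survey corpus.
cartoon_tags = [
    ["pets", "kids", "wife"],
    ["boss", "job", "pets"],
    ["doctor", "pets", "kids"],
    ["crime", "boss", "manager"],
]

# Flatten all tag lists and count occurrences of each tag.
tag_counts = Counter(tag for tags in cartoon_tags for tag in tags)
top_tags = tag_counts.most_common(3)
print(top_tags)  # → [('pets', 3), ('kids', 2), ('boss', 2)]
```

Run over hundreds of thousands of cartoons, the same counting logic yields the kind of top-20 list quoted above.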

Let's discuss a different, but related topic. Your AI comic service is probably based on a vast number of digitized cartoons, drawings, punchlines etc., many of them published by rather famous people. Isn't that dangerous terrain, legally speaking?

Actually it isn't. For training processes, we mainly used a very large private collection of old satirical magazines that we painstakingly digitized and annotated, as well as datasets of digital material under Creative Commons licenses. Most (copyrighted) material that can be found on the web has a resolution that's far too low to be of any real use for generative models. And as for the project's title–The Neural Yorker–I believe there shouldn't be any kind of trademark confusion with the famous American periodical.

From your point of view, what will sophisticated AI and synthetic media have done to the media and publishing industry ten years from now? Of course, I'm not only thinking about comics, but about all kinds of visual storytelling and reporting.

Well, several players in the media industry are heavily investing in content synthesis as we speak. Global spending on deep learning technologies in the media domain is estimated to reach almost $120 billion by 2025. Media consortia are working towards a fuller integration of synthetic text into more and more areas of content production. Their hope is to significantly boost productivity. Some comic publishers are more hesitant, but AI and a number of other digital technologies are bound to significantly transform their industry as well. There will be all sorts of effects: Economic ones like the precarization of craftsmanship traditions. Social ones like the rise of entrepreneurial fan culture, and the consolidation of increasingly diversified communities with novel forms of amateur and semi-professional activity. Or aesthetic ones, as AI and synthetic media become part of the regular production pipeline. There is no reason to expect that other parts of the arts, entertainment, and news industry will absorb synthetic media trends in a different way.

Let's drop the AI thing and talk about digital or post-digital comics in general. What can the medium bring to the domain of news, journalism, or media literacy?

Well, the multimodal expressive communication of graphic narratives is now a primary modality when it comes to sharing and shaping representation of our worlds on the internet. From data-driven journalism to graphic journalism to digital community building, there is always a visual story to be told. So we'll surely find many fields of application for post-digital comics.

Thanks for the interview, Ilan.

Links and resources:

Ilan's Website
The Neural Yorker on Twitter
Chimeras–Inventory of synthetic cognition (book)

The interview was conducted by Alexander Plaum and slightly edited for better readability.
