GEN Summit: AI’s breakthrough year in publishing

Monday 4 June 2018

This week’s GEN Summit marked a breakthrough moment for artificial intelligence (AI) in the media industry. The topic dominated the agenda of the first two days of the conference, from the opening keynote by Facebook’s Antoine Bordes to sessions on voice AI, bots, monetisation and verification – and it dominated my timeline too.

At times it felt like being at a conference in the 1980s discussing how ‘computers’ could be used in the newsroom, or listening to people talking about the use of mobile phones for journalism in the noughties — in other words, it felt very much like early days. But important days nonetheless.

Ludovic Blecher’s slide on the AI-related projects that received Google Digital News Initiative funding illustrated the problem best, with proposals counted in categories as specific as ‘personalisation’ and as vague as ‘hyperlocal’.

Digging deeper, then, here are some of the most concrete points I took away from Lisbon — and what journalists and publishers can take from those.

The difference between ‘intelligence’ and robots that you can teach

Benedict Evans’s Thursday keynote was certainly one of the most articulate presentations at the summit, moving quickly from orientation (‘what is machine learning?’) through potential applications to problematisation.

The talk echoed other discussions I’ve been involved in recently around the need to move from talking about AI to more specific terms like machine learning: recognising patterns, using those patterns to find needles in haystacks, automating repetitive behaviour, predicting possibilities and likelihoods, and so on.

#GENSummit "What can you do with a million ten-year-olds?" asks @BenedictEvans by way of demonstrating machine learning. For journalists it might be… pic.twitter.com/o8zlhhWCa1

— Paul Bradshaw (@paulbradshaw) May 31, 2018

Where ‘artificial intelligence’ threatens robot overlords and robot journalism, machine learning more clearly relates to the process of teaching a relatively ‘dumb’ (but also, in relative terms, smart) robot how to do parts of our job we don’t have the time or consistency to do.
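To make that concrete: much of what gets called AI in the newsroom is pattern recognition of this kind. Here is a minimal sketch of the needle-in-a-haystack idea in Python: train a classifier on a handful of hand-labelled documents, then use it to rank the rest of the pile. The documents and labels are invented for illustration, and no newsroom’s actual system looks quite like this.

```python
# A minimal sketch of "finding needles in haystacks" with machine learning:
# a classifier trained on a few hand-labelled documents scores new ones.
# All documents and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-labelled training examples: 1 = worth a reporter's attention, 0 = routine
docs = [
    "council awards contract to firm owned by councillor's spouse",
    "routine minutes of the parks committee meeting",
    "payments to supplier doubled with no tender process",
    "schedule of upcoming road maintenance works",
]
labels = [1, 0, 1, 0]

# Turn text into numeric features, then fit a simple model
vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(docs), labels)

# Score an unseen document: the probability it belongs to the 'newsworthy' class
new_doc = ["invoice approved without the required second signature"]
score = model.predict_proba(vectoriser.transform(new_doc))[0][1]
print(f"Newsworthiness score: {score:.2f}")
```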

But with great power comes great responsibility — and Evans made a useful distinction between our responsibility for the accuracy of the algorithms we create as part of that (shaped particularly by their inputs) and the potential social impact of the results. Even an accurate algorithm can have undesirable outcomes.

Discussing AI ethics #GENSummit pic.twitter.com/244ujRloXI

— Sheila Coronel (@SheilaCoronel) May 31, 2018

Algorithmic accountability and the route to intelligibility

A similar point was made by Jonathan Albright, who talked more broadly about algorithmic accountability: journalists’ watchdog role when it comes to algorithms’ effects on public and private life.

Albright’s work on ‘crisis actor’ YouTube videos and Google search suggestions — as well as ProPublica’s Machine Bias series — is well worth exploring as an introduction to the field. What may have started as a feature of technology and politics beats will eventually play an important role in every area: from reporting on crime and housing to education and health — and even music and film.

It helps that media companies have been among the earliest, and most regular, victims of algorithms. We are acutely aware of how a small change in those recipes can shape whether, and how, our audiences receive our journalism, as Emily Bell’s talk showed.

Today I talked about some new research from @TowCenter at #GENSummit …one of the most unsurprising yet potentially complex findings was this pic.twitter.com/2voagR3CZn

— emily bell (@emilybell) June 1, 2018

What is needed now is not only an awareness of how our communities can be affected by algorithms (and a desire to report on that), but also a reflective attitude to our own reliance on algorithms — and the need for transparency around that.

"Transparency is not just an
end in itself, but an interim step on the road to intelligibility" – Frank Pasquale via @d1gi on algorithms – here's the full passage from his book The Black Box Society #GENSummit pic.twitter.com/7ZBNBAHGOy

— Paul Bradshaw (@paulbradshaw) May 30, 2018

The work of Nick Diakopoulos has laid important groundwork on the subject, including the limitations of algorithmic transparency and potential disclosure mechanisms, while AP’s Stuart Myles has spoken about the same subject with regard to algorithmic news, identifying four levels of algorithmic transparency.

The commodification of (some) data journalism — and the move beyond automation to augmentation

Pretty cool platform – aggregating data for local newsrooms @le_Parisien #GENsummit #servicestrategy #suisseknive pic.twitter.com/FwgExTVNGc

— Marc Springer (@SpringerMa) May 31, 2018

Frames, Grafiti, RADAR and Le Parisien’s LiveCity project showed just how far data journalism has shifted from the exceptional to the commodified.

Frames provides a service supplying ready-made charts for news organisations to embed alongside their articles, complete with a revenue-sharing business model whereby charts are also sponsored. The company has been working with a Portuguese news organisation and claims that including charts leads to higher sharing and recirculation.

Grafiti, meanwhile, is aiming to make social creation of charts easier while building a charts-as-data search engine. RADAR has built one of the UK’s biggest data journalism teams to supply copy newswire-style to local publishers; and Le Parisien is pulling city data into articles in the form of personalised widgets.

In other words, data journalism is starting to scale. For data journalists this means one of two things: moving from the low-hanging fruit to more complex or investigative stories, or moving into the scaling of data journalism itself (i.e. coding).

…Or, of course, doing both.
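What does that scaling look like in code? RADAR’s actual pipeline isn’t public, but the templated approach behind newswire-style data journalism can be sketched in a few lines of Python: one national dataset feeds a sentence template that is filled in once per local area. The figures and wording below are invented.

```python
# A minimal sketch of templated, RADAR-style data journalism (the real
# pipeline is not public): one dataset, one template, one story per area.
# The figures below are invented for illustration.
import csv
import io

# Stand-in for a national statistics release
data = """area,rate_2017,rate_2018
Birmingham,4.1,4.6
Leeds,3.8,3.5
"""

TEMPLATE = ("The rate in {area} {direction} from {rate_2017} to {rate_2018} "
            "over the past year.")

for row in csv.DictReader(io.StringIO(data)):
    direction = "rose" if float(row["rate_2018"]) > float(row["rate_2017"]) else "fell"
    print(TEMPLATE.format(direction=direction, **row))
```

In practice a journalist still writes and edits the templates; the machine only does the filling-in at scale.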

Chatbots and narrative, bots and newsrooms

Chatbots, meanwhile, are moving in the opposite direction. Although many news industry chatbots until now have been little more than glorified RSS alerts, Quartz’s John Keefe and the BBC’s Paul Sargeant showed that we are increasingly seeing a more sophisticated application of editorial skill to the form.

“The success to a good bot,” as Keefe explained, “is really good humans.”

In other words, chatbots represent not only a technical challenge, but a narrative one too. Sargeant spoke about the BBC’s experimentation with chatbots as a storytelling device in articles such as a piece on the unspoken alcohol problem among UK Punjabis, while Keefe highlighted the “very talented humans writing the storylines for the bots” and the demand for those skills from advertisers, which has seen the bot team split into editorial and commercial arms.

“We’re finding there’s a market for our talents and our storytelling with some of our clients.”

The approach is certainly effective: BBC articles on the royal wedding containing in-story chatbots saw 20% of users engaging with the widgets, “usually [asking] up to five or six questions”. Chatbots may not increase reach, he said, “but will increase engagement.”

At Quartz Keefe similarly reported that “90% of people who start go all the way through.

“The real value is not in reaching more people, but rather in deepening the relationship with the people you reach.”

The use of narrative techniques, however, seems to introduce a tension between user expectations of interactivity and the journalistic pressure to communicate particular facts.

In a separate session BBC Visual Journalism’s Bella Hurrell would note that testers of their in-story bots missed the option “to write text directly, to ask their own questions of the bot.”

Really terrific talk from @BellaHurrell from @BBCNewsGraphics team on how they implemented an in-story chat bot idea (2017 @GENinnovate winner) #GENSummit pic.twitter.com/nmzHX6ICI7

— Dmitry Shishkin (@dmitryshishkin) June 1, 2018

Perhaps the increasing use of AI in the industry will help to partly resolve that tension — indeed, Sargeant said that he sees chatbots as “a transitional format. We know we are going to integrate AI and chat together.

“Doing these [chatbots] you’re learning a lot about tone, and how you structure those conversations and storytelling [gets you] in a position for when AI is on the way.”
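It’s worth being clear what these in-story bots are under the hood: not AI at all, but a hand-written script, structured as a tree of pre-scripted answers and the follow-up questions a reader can tap. Neither the BBC’s nor Quartz’s implementation is public; the Python sketch below, with invented content, simply shows the shape of the format.

```python
# A minimal sketch of an in-story chatbot: a hand-scripted tree of answers
# and tappable follow-up questions, not AI. All content is invented.
story_bot = {
    "start": {
        "answer": "Here's what we know so far about the story.",
        "questions": {
            "Why does this matter?": "context",
            "What happens next?": "next_steps",
        },
    },
    "context": {
        "answer": "It matters because the decision affects thousands of tenants.",
        "questions": {"What happens next?": "next_steps"},
    },
    "next_steps": {
        "answer": "A final ruling is expected later this month.",
        "questions": {},  # end of this branch
    },
}

def run(node_id="start"):
    """Walk the scripted tree, always taking the first offered question."""
    node = story_bot[node_id]
    print("BOT:", node["answer"])
    for question, next_id in node["questions"].items():
        print("READER TAPS:", question)
        return run(next_id)

run()
```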

Meanwhile, an increasing number of bots are being developed for internal use.

Quartz’s Quackbot (code on GitHub) helps journalists cache copies of webpages and suggests sources of data; the BBC used bots to automatically generate election graphics and tweet them from the @bbcelection account; and Dagens Nyheter’s Martin Jönsson has created a ‘gender bot’ (“Genusroboten”) to help reporters see how representative their coverage is.

Today we are launching a new internal tool at DN: the gender bot. Every month, each writer gets an email on how… twitter.com/i/web/status/9…

— Martin Jönsson (@DagensNyheterMJ) April 4, 2018
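Dagens Nyheter hasn’t published the Genusroboten code, but the core idea can be sketched quickly: count gendered pronouns (or names) in a reporter’s copy as a rough proxy for who is being quoted and covered, then report the balance back. The Python example below is a deliberate simplification using English pronouns.

```python
# A rough sketch of a 'gender bot': count gendered pronouns in an article
# as a crude proxy for representation. Dagens Nyheter's actual tool is
# internal; this simplification uses English pronouns for illustration.
import re
from collections import Counter

def pronoun_balance(text):
    """Return counts of masculine vs feminine pronouns in a text."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        "masculine": words["he"] + words["him"] + words["his"],
        "feminine": words["she"] + words["her"] + words["hers"],
    }

article = ("The minister said he would resign. His deputy confirmed that "
           "she had been informed, and her office released a statement.")
print(pronoun_balance(article))  # {'masculine': 2, 'feminine': 2}
```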

The Associated Press has been exploring the potential of AI technology for verification, transcription, personalisation, image recognition and bots. Lisa Gibbs, who leads its artificial intelligence strategy group, suggested: “I can draw a direct line between AI [for automating parts of journalism] and the increase in investigative journalism our business reporters are able to do.”

If there is a field where the automation-versus-augmentation battle is being fought, where algorithmic accountability can be put into practice, and where the possibilities of AI and narrative can be explored, there are probably few better places to look than bots.



Source: https://onlinejournalismblog.com/2018/06/04/gen-su...

Paul Bradshaw