Over the past year I have given this talk in Venice, Bristol, London, Paris, and Lisbon – here is the Paris version.
This interview aired a few weeks ago on Italian national TV. It is about my current work on ethical issues arising from Big Data – mostly within my ERC grant ThinkBIG.
Over the past few weeks I have been interviewed by the New Scientist and the BBC on the nature of modern Artificial Intelligence. The interviews cover the data-driven nature of current intelligent systems, how science is adopting a similar approach, and what implications this could have for society. Links follow…
Big Data Analysis of News and Social Media Content
Ilias Flaounas, Saatviga Sudhahar, Thomas Lansdall-Welfare, Elena Hensiger, Nello Cristianini (*)
Intelligent Systems Laboratory, University of Bristol
(*) corresponding author
The analysis of media content has long been central to the social sciences, due to the key role that the media play in shaping public opinion. This kind of analysis typically relies on the preliminary coding of the text being examined, a step that involves reading and annotating it, and that limits the size of the corpora that can be analysed. Modern Artificial Intelligence technologies allow researchers to automate the process of applying different codes to the same text. Computational technologies also enable the automation of data collection, preparation, management and visualisation. This creates opportunities for massive-scale investigations, real-time monitoring, and system-level modelling of the global media system. The present article reviews the work performed in this direction by the Intelligent Systems Laboratory at the University of Bristol. We describe how the analysis of Twitter content can reveal mood changes in entire populations, how the political relations among US leaders can be extracted from large corpora, how we can determine what news people really want to read, how gender bias and writing style vary across different outlets, and what EU news outlets can tell us about cultural similarities in Europe. Most importantly, this survey aims to demonstrate some of the steps that can be automated, giving researchers access to macroscopic patterns that would otherwise be out of reach.
Nello Cristianini – (Draft of article prepared for AIComm issue on History of AI)
The field of Artificial Intelligence (AI) has undergone many transformations, most recently the emergence of data-driven approaches centred on machine-learning technology. The present article examines that paradigm shift using the conceptual tools developed by Thomas Kuhn, and by analysing the contents of the longest-running conference series in the field. A paradigm shift occurs when a new set of assumptions and values replaces the previous one within a given scientific community. These are often conveyed implicitly, through the choice of success stories that exemplify and define what a given field of research is about, demonstrating what kinds of questions and answers are appropriate. The replacement of these exemplar stories corresponds to a shift in goals, methods, and expectations. We discuss the most recent such transition in the field of Artificial Intelligence, and comment on some earlier ones.