
The Jean-Baptiste Say Institute had the pleasure of welcoming Scott Hartley, author of the best-seller “The Fuzzy And The Techie”, at the Centre Pompidou, where he shared his vision of the future of human skills in a world increasingly surrounded by artificial intelligence (AI).

Sixty years ago, in a famous lecture at Cambridge University, Charles Percy Snow described the divide between the sciences and the humanities: two mutually exclusive cultures whose defenders did not even talk to each other. Today might be the time to realise that they are – and should be – intertwined. The debate has not changed much, but instead of science versus humanities, we now talk about AI and ethics. AI is still built by people who carry many biases, which raises the ethical questions inherent to it. Mastering code does not, by itself, allow us to change the world: we must first understand context, the essential first step in answering the great challenges of our times. AI and ethics are two sides of the same coin.

For instance, Facebook today faces questions about its gathering of massive amounts of data, which raises further questions about individuals' right to privacy, about constitutional rights and, more generally, about the whole social contract.

As F. Scott Fitzgerald observed, intelligence can be recognised as the ability to hold two contradictory ideas in mind at the same time. It follows that the opposition between fuzzies and techies can be overcome. The best proof lies in the academic backgrounds of the CEOs and founders of many tech companies: they graduated in history, literature, economics, art, design, sociology… Indeed, philosophers, artists and entrepreneurs differ from machines in their ability to hesitate, to follow an unknown path.

Of course, technology is necessary, but not sufficient. Flickr and Slack founder Stewart Butterfield reminds us that philosophy taught him how to think. And this pattern repeats even in the most technical companies: Nissan's autonomous cars required anthropological research by Melissa Cefkin into human means of communication. More surprising still: since healthcare is one of the big use cases for robotics, ballet dancer and choreographer Catie Cuan helped teach robots how to behave and move their bodies, in order to inspire greater confidence among patients.

Such examples show us that an algorithm is not an objective artefact, but an extrapolation of its creator's context. Algorithms rely on imperfect inputs and on people's logic. For instance, mapping criminality in a city according to declared crimes is highly biased: it shows only the crimes that were reported, not all the crimes that occurred. All models are built on fundamentally moral choices.

We should therefore think of AI as intelligence augmentation rather than artificial intelligence. Fei-Fei Li, of Google's AI department, reminds her students not to be misled by the term “artificial”: nothing about it is artificial. In a world where manual and routine jobs will be automated over the coming decades, we should focus on non-routine and creative tasks. In other words: soft skills, our advantage over machines. As Tim Cook said, the greater danger is not computers acting like humans, but humans behaving like computers.