Just before the summer holiday I had the honor of inaugurating #bbhtalk with a seminar talk, hosted by us here at BBH Stockholm. (Hopefully the first in a long series of seminars on a range of topics!)

This particular morning the subject at hand was Artificial Intelligence, including the big questions of its “What, Why and How”. I took the approach of casting a wide net, drawing inspiration from many viewpoints on AI and approaching the subject not just from a technical angle but also discussing its philosophical and sociopolitical implications.

We dove in at the deep end, taking a look at the building blocks of modern AI: the virtual neurons that make up a digital mind. Using Google’s TensorFlow Playground, we illustrated how such a system can be self-learning, merely by iteratively adjusting the weights of the connections between these neurons so that the outputs approximate the expected values described by a training dataset.
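
To make that weight-adjustment loop concrete, here is a minimal sketch in plain NumPy (our own illustration, not the Playground’s code): a tiny network that learns the XOR function purely by repeatedly nudging its connection weights in whatever direction shrinks the error against the training data.

```python
import numpy as np

# Toy training dataset: XOR, a function a single neuron cannot
# represent but a small layered network can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: each neuron just weighs its inputs and squashes the sum.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight slightly in the direction
    # that reduces the gap between output and expected value.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

No single line here is clever; the “learning” is nothing more than that loop, run many thousands of times.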

With Deep Learning these neural networks can become vast - XKCD described them as big piles of linear algebra - yet the principle and the individual components remain simple. In a prime example of the concept of emergence, complex behaviors arise from relatively trivial processes. A consequence of this, however, is that when we zoom out and look at the AIs we have trained, understanding how it is they do what they do becomes quite elusive. This difficulty of learning how an AI actually does things often frustrates experts in a given field once the machine has bested them in their particular domain; described in this Wired article as simultaneously saddening and beautiful.
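
The “pile of linear algebra” quip is quite literal. Stripped of everything else, a deep network’s forward pass is just matrix multiplications stacked with a simple nonlinearity in between, as in this purely illustrative sketch of our own:

```python
import numpy as np

def forward(x, layers):
    # A deep network's forward pass: nothing but repeated matrix
    # multiplication with a simple nonlinearity (here ReLU) in between.
    for W, b in layers:
        x = np.maximum(0, x @ W + b)
    return x

# Ten layers of 64x64 weights: trivial components, piled high.
rng = np.random.default_rng(42)
layers = [(rng.normal(size=(64, 64)), np.zeros(64)) for _ in range(10)]
print(forward(rng.normal(size=(1, 64)), layers).shape)  # (1, 64)
```

Each layer is perfectly transparent on its own; the opacity comes from what millions of such weights collectively encode.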

This phenomenon can be more than an annoyance, though. For instance, Will Knight, writing for the MIT Technology Review, describes it as The Dark Secret at the Heart of AI. When an AI has become better than doctors at predicting disease, we would very much like to know how it does it (so that we could devise better treatments). And as we begin to integrate AI into more and more systems that could have a profound impact on our lives (including leaving our emotional well-being in the hands of the likes of Facebook), we would sleep more easily at night if we knew these systems had not learned to make their choices based on inappropriate biases (because who among us would want Microsoft’s disastrously racist Tay in charge of those decisions?).
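
We are not entirely without flashlights for peering into the black box, though. One simple, model-agnostic probe (a generic technique, not something from Knight’s article) is permutation importance: shuffle one input feature at a time and watch how much the model’s accuracy drops. A big drop on a feature that proxies for, say, ethnicity would be a red flag.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Black-box probe: shuffle one feature at a time and measure how
    much the model's score drops. A large drop means the model leans
    heavily on that feature when making its decisions."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
            drops.append(baseline - metric(y, model.predict(Xp)))
        importances.append(np.mean(drops))
    return np.array(importances)

# Works with any scikit-learn style model, e.g.:
#   permutation_importance(clf, X_test, y_test, accuracy_score)
```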

Emphasizing the importance of data, we looked at a few examples of how Google gathers data from users playing games (Quick Draw) or providing a service (reCAPTCHA) - and how it has since been able to leverage such data to train and improve its neural network models.
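
Google has since made the Quick Draw drawings publicly available, so anyone can train on that crowd-sourced data. Here is a hedged sketch of what that might look like (the file paths and category choices are assumptions for illustration; the dataset is published, among other formats, as flattened 28x28 bitmaps per category):

```python
import numpy as np
import tensorflow as tf

# Hypothetical local copies of two Quick Draw categories, downloaded
# from Google's public dataset (paths and file names assumed here).
cats = np.load("quickdraw/cat.npy")  # shape (N, 784): flattened 28x28 bitmaps
dogs = np.load("quickdraw/dog.npy")

X = np.concatenate([cats, dogs]).astype("float32") / 255.0
y = np.concatenate([np.zeros(len(cats)), np.ones(len(dogs))])

# A small classifier trained on millions of crowd-sourced doodles.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.1)
```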

Touching briefly on the concept of existential risk and the singularity (book recommendation: Superintelligence by Nick Bostrom), we also discussed some dangers that are closer at hand. Two examples: the great power in the hands of those with the most data (e.g. the ease with which Facebook could decide an election should they - or their AI! - choose), and the gradual devaluation and eventual obsolescence of human labor in many industries (Kurzgesagt has a great video on this).

Finally, we discussed some of the ways we at BBH Stockholm have used the plethora of available machine learning tools: from chatbots with Wit.ai and Api.ai to image recognition with TensorFlow and sentiment analysis with Watson. Our main recommendation, however, is not to get too stuck on what has already been done with AI; the biggest opportunities come not from developing better algorithms, but from finding hitherto untapped use cases for machine learning. And with the right data, we can do that together!
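
As a taste of how low the barrier to entry has become, here is a sketch of the image-recognition piece using a pretrained ImageNet model through TensorFlow’s Keras API (the model choice and the file name are ours, for illustration, not necessarily what we used in any given project):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

# A pretrained ImageNet classifier; no training required to get started.
model = MobileNetV2(weights="imagenet")

# "photo.jpg" is a placeholder for any local image.
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(
    np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0))

# Top three guesses with confidence scores.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```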