Artificial intelligence and machine learning for dummies

Our Management Trainee Steinar, who holds a master's degree in Industrial Economics and Technology Management, provides a useful introduction to AI and Machine Learning.

Management trainee Steinar

“Hey! My name is Steinar and I’m one of the 10 management trainees at Visma this year.”

Today, Artificial Intelligence (AI) and Machine Learning are at the peak of the hype cycle. These topics appear frequently in everything from doomsday scenarios where AI takes over, to conversations where people fear their jobs will be lost to AI, to scientific articles where Machine Learning has been used to predict natural disasters and global epidemics.

If you, like most others, feel a bit embarrassed by your lack of knowledge and go quiet whenever AI or Machine Learning comes up in conversation, you should finish this short introduction for dummies!

What is Machine Learning?

Let us start by clearing up the confusion about the relationship between Machine Learning and AI. The two concepts are often used interchangeably and given the same meaning, but they are in fact not equivalent.

AI can be defined as the capability of a machine to imitate intelligent human behaviour, and includes programming for traits such as interpreting and signalling emotions, language processing and complex reasoning. Machine Learning gives a machine the capability of learning, i.e. it imitates the intelligent human behaviour that is learning. Hence, Machine Learning is a subset of the methods used to create an AI, not its equivalent. The remainder of this guide will focus on Machine Learning, as this is what people most commonly refer to when discussing AI.

Okay, so now we know that Machine Learning is about imitating the human learning process. But what is human learning? Thinking about it for a few seconds, you will probably agree that human learning is what happens when we practice and get better at some task through the experience we accumulate.

Consider for instance Cristiano Ronaldo, who has become exceptional at scoring from free kicks after taking a ridiculous number of practice shots throughout his life. This experience has given him the ability to predict how he should hit the ball in terms of power, angle, part of foot and part of ball to maximise the probability of scoring. The learning process for machines is similar: a machine becomes better at some task through experience, and is able to make good predictions in similar situations in the future.

How does it work in practice?

Does this imply that we can expect to see robots hanging out at football fields, practicing their free kicks? Well, probably not, although there are ongoing projects to create robot football teams.

In fact, Machine Learning is more commonly used to automate tasks that do not require physical limbs the way football does, such as recognising objects in pictures, interpreting emotions and meaning in texts, dispatching customer support requests and making optimal production decisions. Although object recognition and text interpretation are easy for humans as well, machines have the ability to do this on a completely different scale.

If you want to read about an interesting use case, have a look at the heavily debated Chinese Machine Learning-backed surveillance system used to recognise people in street cameras. Although it has the benefit of identifying escaped prisoners, murderers and other scary people, it also means the government always knows what you do and where you are.

If you really want to brag to your friends about your new-found knowledge of Machine Learning, you need some more details on how machines learn.

The process can most easily be summarised in four steps: (1) reading historical data, (2) finding a way to model patterns and relationships in the data, (3) training the selected model on the data, and (4) using the trained model to make predictions on new data.

Consider for instance a machine learning model for identifying whether a picture contains a cat or not. First, we need a large dataset of labelled pictures of cats and other objects to feed the model. Then a model needs to be selected – a Neural Network could for instance be suitable in this case. Traditionally, this has been done by the data scientist, but the trend is moving towards using AI to choose the appropriate model as well!

Third, the model needs to find the patterns distinctive to cat pictures, like colour, size and ratios. Finally, the trained model is used to make predictions on a new incoming picture and determine whether it is a cat or not. Does identifying cat pictures sound like a waste of resources? Well, I agree. Try replacing cat with malignant tumour and connecting the model to a scanner at your regional hospital, and things suddenly get more interesting.
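If you are curious what those four steps look like in practice, here is a minimal sketch in Python using the Scikit Learn library (mentioned again at the end of this post). Training a real cat detector would require a large labelled image dataset, so as a stand-in this sketch uses scikit-learn's small built-in collection of handwritten digit pictures, with a small Neural Network as the chosen model:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Step 1: read historical, labelled data (digit pictures as a stand-in for cats).
digits = load_digits()
X_train, X_new, y_train, y_new = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42
)

# Step 2: choose a model to capture the patterns (here a small Neural Network).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=42)

# Step 3: train the selected model on the historical data.
model.fit(X_train, y_train)

# Step 4: use the trained model to make predictions on new, unseen pictures.
predictions = model.predict(X_new)
print("Predicted label for the first new picture:", predictions[0])

The specific model and parameters here are just illustrative choices; in practice, a data scientist would try several and keep whichever predicts best.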

If you have suffered through this whole thing and wonder why you spent 10 minutes reading about cat pictures, I am truly sorry. If you did find it interesting, I will first of all claim that you are no longer a dummy on this subject, but I will also urge you to have a look at how we are working with Machine Learning in Visma.

If you want to know more about Machine Learning in general, have a look at this article on Medium presenting the different types of machine learning (we only really discussed Supervised Learning). Finally, for the super motivated ones wanting to try some practical Machine Learning programming, I suggest looking into the Scikit Learn library developed for Python.
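And since we only really discussed Supervised Learning above, here is an equally small sketch of Unsupervised Learning for contrast: the model gets no labels at all and has to discover structure in the data on its own (the data here is generated purely for illustration):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate some unlabelled 2D points that happen to form three groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# k-means discovers the three groups by itself, with no correct answers provided.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
clusters = kmeans.fit_predict(X)
print("Cluster assigned to the first point:", clusters[0])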
