You can read and download Stanford University’s AI Index here.
Interview with the Data Science Director at Visma about the AI Index 2021
The report highlights the effects of Covid-19 on AI development, and how AI systems are now able to generate content of such high quality that humans have a hard time telling the difference between what is man-made and what is created by artificial intelligence.
This brings with it plenty of opportunities, but also challenges around diversity and ethics. To look deeper into the findings of the 2021 AI Index, we talked to one of Visma's leaders within AI and Machine Learning, Michael Håkansson.
What do you consider the most interesting takeaway from Stanford’s 2021 AI index?
There are many interesting topics in this year’s AI Index but the AI ethics domain is one that I think is especially interesting and relevant. How do we regulate biases and discrimination? What kind of surveillance will we allow? How and where can data be collected and processed?
It is today, in the relatively early days of implemented AI, that we're setting the standards and regulations for what will be acceptable in the future.
It looks like AI investment increased in 2020, despite (or maybe due to?) the Covid pandemic. Are you surprised?
Not at all. We have seen so many big leaps in the tech industry during the pandemic. Some even talk about changes and transitions that in normal times would have taken years, or even a decade, and that have now taken only a few months "thanks to" the pandemic.
The transition to cloud software is an obvious example. In general, the pandemic has been a boost for the tech industry, and with that boost comes greater potential for investment in new ventures, where AI is often high on the agenda.
How does global AI investment affect Visma’s AI work, if at all?
Three things come to mind. First, the expectations of intelligent services. As AI enters our everyday lives through voice assistants and recommendations in music and video services, expectations rise in other areas as well, as they should. I mean, why should you get good movie recommendations on the sofa at home but not good recommendations on bookkeeping at work?
The second thing that comes to mind is the general advancement of the AI field. More investment often leads to more development, and nowadays much of the technology is made available to the whole world as building blocks, for example through public cloud platforms.
Thirdly, I would also like to highlight that Visma is of course also part of the global investment numbers. We have state-of-the-art AI technology in optimisation and machine learning solutions such as transaction prediction and natural language processing.
A higher percentage of AI PhD graduates are going into the private sector than ever before. Why do you think that is, and why is it important?
I think it has to do with AI being a field with a strong research focus in the private sector as well. You don't have to be in an advanced university lab to do research and make breakthroughs; all you need is a computer.
We see some significant breakthroughs and advancements stemming from the private sector nowadays. It can also be that if you work in a company, the research you do can have an immediate impact on people’s lives. I think that is highly motivating.
Diversity in AI is low. What are the implications of this and how can people of more diverse backgrounds get into this field?
It has been shown that a diverse workforce generally produces better results, and better results are interesting in any field, not just AI. When it comes to AI specifically, it can be especially important.
In this field, bias is heavily discussed. For example, face recognition services perform worse at recognising people of colour. I think these biases are seldom introduced on purpose, but rather through ignorance. With higher diversity in the workforce, I think we have a better chance of anticipating, identifying, and avoiding potential biases.
When it comes to getting people of more diverse backgrounds into the field, it is a company responsibility, but even more a societal one. Companies must make sure to evaluate talent on the same grounds when recruiting.
From a societal perspective, the people entering tech studies must themselves be diverse. If the diversity of those choosing to study in the tech field is low, so will the diversity of the workforce be.
The report says that AI “can now compose text, audio, and images to a sufficiently high standard that humans have a hard time telling the difference”. Can you elaborate on the upsides and downsides of this?
The upsides are so many I almost don’t know where to start. Having AI technology that can generate content at scale means that people can have customised experiences like never before, both in terms of speed and content.
For support, for example, chatbots are being used to answer support cases instantly. Since the machines never sleep, they can answer questions all day, all year round. It doesn’t stop with practical things like support, of course. With text, audio, and video being custom generated, new music and books can be created based on what you like, and you and your friends can be the main characters in your favourite movie.
As with most great innovations, there are also downsides that we will need to address. With cars, it is crashes and pollution; to mitigate those, we invented seat belts, airbags, and electric engines.
For the specific field of AI we're discussing now, the most debated downside is the creation of fake content. With AI technology, it is possible to create fake text, audio, and video content on a large scale. It can, for example, be fake news and fake speeches, even on video. If applied in politics, the implications could be severe.
The good thing is that the same technology that is being used to create fake content can also be used to identify and reveal it. Just because new technology can be (mis)used for malicious purposes doesn't mean we should stop innovating, because it can be used for far more good than bad.
To what extent is surveillance relevant to how Visma works with AI? Is it a topic we need to think about?
The kind of surveillance referred to in Stanford's research, meaning surveilling people through image classification, face recognition, video analysis, and voice identification, is not relevant for Visma. We do not operate in that area.
The advancements in the underlying technologies, however, can be used in many other ways. We are using machine learning technology to read and understand information from images of receipts and invoices. With improvements in the underlying technology, our customers can experience a higher degree of automation and efficiency.
So, the advancements in surveillance technologies aren't really advancements in surveillance; they are advancements in technology that are being applied to surveillance.
The future of AI seems to be extremely promising. What do you see as Visma’s contribution to AI in the next 2 years? 5 years? 10 years?
It is very promising, and has been for many years now. Looking back, it was for a long time just that, a promise, without very impressive results. But today, thanks to many technological advancements, the results are slowly starting to catch up with expectations.
For Visma, efficiency and automation in administrative processes are in our DNA, and this is where I see our contributions coming. Where we are now, we are mostly using AI to augment people in their work by giving suggestions and guiding in processes.
When we can be certain enough of the result, the suggestions can become decisions, meaning that processes can be automated.
I also think that some processes will be completely changed and revolutionised by technological advancements. We already see it in, for example, the planning and allocation of tasks, where we can use algorithms to solve in seconds problems that used to take months.
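To give a feel for the kind of allocation problem algorithms can solve here, the sketch below brute-forces a tiny assignment problem in plain Python. This is a toy illustration of the general idea, not Visma's actual planner; the cost matrix and function names are hypothetical, and real systems use dedicated optimisation algorithms that scale to thousands of tasks.

```python
from itertools import permutations

def assign_tasks(cost):
    """Find the assignment of workers to tasks that minimises total cost.

    Brute force is fine for toy sizes (n! combinations); production
    planners use dedicated solvers for the same underlying problem.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):  # perm[w] = task given to worker w
        total = sum(cost[w][perm[w]] for w in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical cost matrix: hours[w][t] = hours worker w needs for task t
hours = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
plan, total = assign_tasks(hours)
print(plan, total)  # prints (1, 0, 2) 5: each worker's task, and total hours
```

The point is that once a planning problem is written down as costs and constraints, a machine can search the space of assignments far faster than any manual scheduling process.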
Was there anything else you would have liked to see the report cover?
I would have liked to have seen some more on the democratisation of computing power, AI technology, and data.
A big reason for recent advancements in AI technology is innovation in hardware built for performing very specific calculations. This hardware and compute power is being made available to more or less anyone in the world through public clouds, at a fraction of the cost of a couple of years ago, and at a cost many magnitudes lower than 10 years ago. This democratisation of computing power has opened the door to new actors and new areas of application.
AI technology is also being packaged into building blocks that make it available to non-experts like never before. There are automated machine learning ("AutoML") solutions that let anyone simply plug their data in and get results, without having to do any of the advanced machine learning themselves.
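The core idea behind those "plug the data in" solutions can be sketched in a few lines: try several candidate models on your data and automatically keep the one that validates best. This is a minimal, hypothetical illustration of the AutoML concept, not any particular product's API; the function names and the two toy models are invented for the example.

```python
import random

def automl_fit(X, y, models, val_frac=0.25, seed=0):
    """Minimal AutoML-style loop: hold out a validation set, fit every
    candidate model on the rest, and return the best-validating one."""
    data = list(zip(X, y))
    random.Random(seed).shuffle(data)
    n_val = max(1, int(len(data) * val_frac))
    val, train = data[:n_val], data[n_val:]
    best_model, best_err = None, float("inf")
    for make_model in models:
        model = make_model([x for x, _ in train], [t for _, t in train])
        err = sum((model(x) - t) ** 2 for x, t in val) / len(val)
        if err < best_err:
            best_model, best_err = model, err
    return best_model

# Two toy candidate "models": predict the mean, or fit a least-squares line.
def mean_model(X, y):
    m = sum(y) / len(y)
    return lambda x: m

def line_model(X, y):
    n = len(X)
    mx, my = sum(X) / n, sum(y) / n
    slope = sum((x - mx) * (t - my) for x, t in zip(X, y)) \
        / sum((x - mx) ** 2 for x in X)
    return lambda x: my + slope * (x - mx)

X = list(range(10))
y = [2 * x + 1 for x in X]  # perfectly linear toy data
model = automl_fit(X, y, [mean_model, line_model])
print(model(20))  # the line model wins and predicts 41.0
```

Real AutoML platforms search over far richer model families and preprocessing steps, but the loop is the same, which is exactly why some basic understanding still matters: the tool will happily return the "best" of a bad set of candidates.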
Of course, this can be dangerous if you have no idea what you’re doing. With some basic understanding that wouldn’t have taken you anywhere a few years ago, you can get very far today.
Lastly, I would also have liked to see some results about the availability of data, both in terms of publicly available data, and data accessibility initiatives like the Payment Services Directive Two (PSD2).
The reason this is interesting is that as data becomes more accessible, the velocity of innovation increases. By knowing which data people and companies collect and can access, we can foresee which types of offerings to expect in the future.