
Garbage in, Gospel Out – The Unrealistic Expectations of AI

A diagram showing a closed loop for AI evaluation.

AI has been a buzzword for quite some time. As a tech company, you want to stand out, and today the way to do that is to promote all the cutting-edge technologies you are using. Just like every other tech company is doing. Ironically, we are all trying to stand out by fitting in. As a professional in the field, you have probably noticed the unrealistic expectations, set mainly by management, to use AI wherever possible, even when you know it might not be the best solution.

Perhaps you’ve gotten the question: Can’t we just use some AI? 

You are basically asked to create gospel out of thin air.

In this article, you will get the needed support to explain why AI is or is not a good solution, through a framework to anchor your arguments. We call it the AI Evaluation Loop.

The AI Evaluation Loop

The AI Evaluation Loop, presented below, is meant to support the decision of whether or not to use AI to solve your problem. It consists of five steps and is called a loop to encourage you to evaluate repeatedly throughout the project. If your solution doesn't live up to your expectations, chances are that one or more of these five areas is where things went wrong.

A diagram showing the 5 steps of the AI Evaluation loop.

1. Pre-project phase: Is the use case suitable for AI?

Consider the complexity of your use case. Is it simple enough to be solved by predefined rules? If you can write a recipe for the task, you might as well turn that recipe into code and be done with it. In these cases AI is overkill and will only make the solution unnecessarily complex and expensive.
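As a purely hypothetical illustration (the rules and thresholds below are invented, not taken from any real use case), such a recipe can often be turned into a handful of lines of ordinary code, with no model involved:

```python
# A hypothetical "recipe" turned directly into code: flag orders for manual
# review based on two made-up rules. No training data or model is needed.
def needs_manual_review(order_amount: float, customer_age_days: int) -> bool:
    # Rule 1: large orders are always reviewed.
    if order_amount > 10_000:
        return True
    # Rule 2: brand-new customers placing medium-sized orders are reviewed.
    if customer_age_days < 7 and order_amount > 1_000:
        return True
    return False

print(needs_manual_review(order_amount=12_500, customer_age_days=400))  # True
print(needs_manual_review(order_amount=500, customer_age_days=2))       # False
```

If the recipe genuinely covers the task, this is cheaper to build, test and maintain than any model.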

If, on the other hand, the pattern is too complex and determined by thousands of parameters, you will need to break the problem down; perhaps parts of it are specific enough to be solved by AI. Break it down, and you might hit the sweet spot.

The sweet spot is found where you have a clear pattern, specific task, and a limited number of output options. For these use cases, AI might give you great results!

2. Data Collection: Is there enough good data available?

Consider the QQBB: Quantity, Quality, Balance, and Bias.

Quantity: The amount of data available was long a limiting factor in building good AI models. Think of data as experience for a human being: the more you have seen and experienced, the more knowledge you can draw on to make informed decisions. It is just the same with AI. The more data the model has been trained on, the more information it has to make a good prediction.

One great example of this is AI diagnosing breast cancer. The AI model was trained on over 5.4M images, a volume that would take a human several years to analyse, while the model can work through it in just a couple of hours.

Quality: Great, you have managed to gather a huge dataset and are ready to dig deeper into it. The first thing to check is the quality. Do you have outliers? Is information missing for some features and rows? Are you missing labels? How bad is the quality? If it turns out to be poor, there are methods to improve it, yet not all datasets are salvageable.

Balance: Make sure your dataset has a roughly equal amount of data for each possible outcome. Again, think of data as experience: if you have only ever looked at cats, how will you know what a dog looks like?
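A minimal sketch of the kind of quality and balance checks described above, assuming pandas is available; the column names and values are hypothetical stand-ins for your own dataset:

```python
import pandas as pd

# Hypothetical dataset: in practice, load your own data here.
df = pd.DataFrame({
    "feature_a": [1.0, 2.0, None, 4.0, 120.0, 3.0],
    "feature_b": [10, 12, 11, None, 9, 13],
    "label":     ["cat", "cat", "cat", "dog", None, "cat"],
})

# Quality: missing values per column, and rows without a label.
print(df.isna().sum())
print("Unlabelled rows:", df["label"].isna().sum())

# Quality: a simple outlier check using the interquartile range (IQR).
q1, q3 = df["feature_a"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["feature_a"] < q1 - 1.5 * iqr) | (df["feature_a"] > q3 + 1.5 * iqr)]
print("Potential outliers:\n", outliers)

# Balance: how many examples do we have per possible outcome?
print(df["label"].value_counts(normalize=True))
```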

Bias: A hot topic, and rightly so, given its importance. Bias touches on both data and use cases: the data can introduce biases, while the use case determines the ethical perspective.

One example is using AI to recruit the next CEO for your company. Let's say you aim to create an unbiased model based on historical data. Since most CEOs in the past have been men, the AI may learn typically male traits as the traits of a good CEO, even if you try to exclude gender from the data. Because the nature of AI is to learn from the past, you always risk carrying past biases into the future. You therefore need to assess whether the intended (or unintended) usage of the model makes it ethically unjustifiable.

And remember! We always use historical data to predict the future, so the prediction will always be influenced by whatever history looks like.
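As a hedged illustration of how such bias can be made visible (the column names and numbers below are invented), one simple check is to compare how often the model proposes a positive outcome for each group:

```python
import pandas as pd

# Hypothetical model output: one row per candidate with the model's decision.
results = pd.DataFrame({
    "gender":      ["male", "male", "male", "female", "female", "female"],
    "shortlisted": [1, 1, 0, 0, 1, 0],
})

# Share of positive outcomes per group.
rates = results.groupby("gender")["shortlisted"].mean()
print(rates)
print("Ratio (min/max):", rates.min() / rates.max())
```

A large gap between the groups does not by itself prove the model is unusable, but it is a clear signal to revisit the data and the features before putting it to use.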


3. Design and training of Models: Do you have the right skills and tools?

For designing and training models, make sure that you, or someone you bring in to help, knows how to (1) pose the questions based on your use case, (2) choose an algorithm that fits those questions, and (3) apply the algorithm correctly to the data to achieve the expected results. Each of these three steps has pitfalls that can make or break the AI model.
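As one concrete instance of pitfall (3), applying the algorithm correctly, a classic mistake is to judge a model on the same data it was trained on. A minimal sketch, assuming scikit-learn and using one of its built-in toy datasets so the example is self-contained:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Built-in toy dataset, used here only to keep the example runnable.
X, y = load_breast_cancer(return_X_y=True)

model = RandomForestClassifier(random_state=42)

# Cross-validation gives a more honest estimate of performance than
# scoring the model on the data it was trained on.
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean accuracy across 5 folds: {scores.mean():.2f}")
```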

Secondly, consider the tools you will use. They should be (1) up to date, to avoid unnecessary bugs, (2) able to handle the volume of data and the computational demands of the algorithm you want to apply, and (3) suitable for scaling up. If the project is a success, how do you plan to scale it with the tools available to you?

4. Production and Scaling: Is the case financially viable?

Remember that the cost of change grows exponentially as you move into the delivery phase of a project. You can iterate more than ten times in the discovery phase at the same cost as one iteration in the delivery phase. We can of course explore, play and implement AI in our projects for the sake of learning; to foster innovation in a company, it is key to make room for trying new things without the pressure to perform.

Here, however, we are talking about AI projects where the expectation is a positive impact on the business. It is therefore essential to have a business case that covers not only the technical aspects but also the strategic and business perspective.
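What such a business case can look like in numbers is sketched below; every figure is invented purely for illustration and should be replaced by your own estimates:

```python
# Hypothetical monthly figures: replace with your own estimates.
monthly_value = 40_000          # e.g. hours saved or extra revenue
monthly_running_cost = 15_000   # hosting, monitoring, maintenance
one_off_build_cost = 300_000    # discovery, data work, development

monthly_margin = monthly_value - monthly_running_cost
months_to_break_even = one_off_build_cost / monthly_margin
print(f"Break-even after roughly {months_to_break_even:.0f} months")
```

If the break-even point lands years into the future, that is worth knowing before the delivery phase, not after.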

5. Retrospective: Was AI the best solution for the use case?

In the early stages, consider what success means for the project. What is your minimum performance measure? Consider the AI model as well as the business case.

Plan a follow-up after implementation, after approximately three months. Is your solution actually being used? Perhaps the cost of keeping it alive is higher than the value it brings in. It is unfortunate, but it happens. And since we are working with AI, check that the model is still performing as expected. Remember Microsoft's Twitter bot? It took only 24 hours before it turned racist! Read more about the bot here.
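A minimal sketch of that kind of follow-up check, assuming you log the model's predictions together with the outcomes you later observe; the numbers and the threshold are hypothetical:

```python
# Hypothetical log of recent predictions and the outcomes observed later.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
actuals     = [1, 0, 0, 1, 0, 1, 1, 0]

MINIMUM_ACCURACY = 0.80  # the "minimum performance measure" agreed up front

accuracy = sum(p == a for p, a in zip(predictions, actuals)) / len(actuals)
if accuracy < MINIMUM_ACCURACY:
    print(f"Warning: live accuracy {accuracy:.2f} is below the agreed minimum.")
else:
    print(f"Live accuracy {accuracy:.2f} is within expectations.")
```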

Further down the line, say six months after implementation, is the AI model being developed and improved? Spending time on this question at the start of the project will help you if and when you get here. Can you ensure that the AI continues to serve its purpose? It takes resources to monitor and maintain any product, AI-powered products included.

To summarise: Make sure to evaluate the potential of your upcoming AI projects. It can save you a lot of both time and money.

If you want to see the full presentation of this topic, make sure not to miss this TechTalk. Best of luck with your AI endeavours!

Watch the TechTalk
