Posted on: February 4, 2021 at 1:04 PM    

As human beings, we are constantly using our past experiences to learn and do things that we haven’t seen before. When you train a model from scratch, it’s like learning the road rules without ever having been in a vehicle. It is achievable but takes much longer.

In our previous Computer Vision blog, we considered how time-consuming and costly preparing data can be. There is a lot of manual work involved in formatting and labelling data before you can even begin to train a model. The training process for a complex machine learning model, like the neural networks often used for computer vision, is iterative and may take hours, days, or even weeks to run. It's no wonder that data scientists want to save time! The purpose of producing a machine learning model is to then apply it to new data to make predictions or detect information based on patterns identified during training.


Many data science professionals implement a technique called transfer learning. The concept of transfer learning is to take the knowledge learnt from a previously trained model and use it in a new application. Not only is this useful on a project level, but the concept has also greatly accelerated machine learning research over the past decade, with each new discovery building on the knowledge from the last. Using this approach, training a model can take less time, and it can also give more accurate results while requiring less data. For example, we could take an existing model that detects human faces in an image and build on it to produce a model that identifies the expression on each face.
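To make the idea concrete, here is a minimal sketch of what transfer learning can look like in code. The blog does not prescribe a framework, so this example assumes a TensorFlow/Keras workflow, and the expression-recognition task, class count, and dataset names are purely illustrative: a backbone pretrained on ImageNet is frozen, and only a small new classification head is trained for the new task.

```python
# Illustrative transfer-learning sketch (assumptions: TensorFlow/Keras,
# an ImageNet-pretrained MobileNetV2 backbone, and a hypothetical
# facial-expression dataset with 7 classes).
import tensorflow as tf

NUM_EXPRESSION_CLASSES = 7  # assumed number of expression categories

# Load the pretrained backbone without its original classification layer.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze the learned features; only the new head is trained

# Attach a small new head for the target task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_EXPRESSION_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training would then use the (hypothetical) new dataset, e.g.:
# model.fit(train_dataset, validation_data=val_dataset, epochs=5)
```

Because the frozen backbone already knows how to extract general visual features, only the small new head needs to be trained, which is why this approach typically needs far less data and compute than training from scratch.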

The benefits of transfer learning are undeniable, but it is equally important to understand its limitations. Not every application can use transfer learning. The pretrained model needs to come from a similar application: both the data and the task should be comparable. There is no widely accepted definition of how similar the applications need to be; instead, it is judged on a case-by-case basis.

Transfer learning plays a crucial role in the current and future practice of machine learning. Models grow more complex and accurate over time, covering more applications every day, and without transfer learning these advancements would take much longer. Transfer learning is an important concept for anyone working in, or alongside, the machine learning field to understand.

Perhaps there's an application of artificial intelligence you've seen that could be applied to a problem you want to solve?

Computer vision blog series:

Blog 1 - Data Science Researcher joins Abley

Blog 2 - Using computer vision to detect traffic signs

Blog 3 - Data labelling

Blog 4 - The realities of "big data" in computer vision 

Blog 5 - Selecting the right data for computer vision

Blog 6 - Non-existent cats and other data augmentation magic

Blog 7 - Elevating your business with machine learning

Blog written by Joe Duncan, Data Science Researcher 

Machines teach machines