The IDL Machine Learning framework provides an easy-to-use way to extract information from data. These practical examples show how the technology can be used in many different applications.
Well, I could not resist writing this blog. There is so much information out there about Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI) that it is often hard to differentiate facts, fiction, goals, reality and aspirations that Geospatial professionals may or may not hold.
More importantly, for an average user of geospatial information like you and me, it would be nice to see what part of this makes sense and what is of practical use. In this blog I will restrict myself to Machine Learning, which is really one technique under the broader umbrella of AI, an umbrella that also includes DL.
I always find that when new technology arrives, the best way to understand it is to look at it through the lens of your own data. Often, we do not have time and resources to write everything from scratch – but if there is an easy-to-use framework to try in a familiar environment without investing too much time and effort, then that’s ideal.
This is exactly what you can do with the recently released ENVI 5.5/IDL 8.7, which provides an easy-to-use Machine Learning framework. Of course, you need to be able to write a few lines of code in the Interactive Data Language (IDL) for this, but believe me, it is not that onerous. All the necessary ingredients are already there. All you need to know is how to identify the type of ML algorithm you would like to use, and this in turn is often dictated by the nature of the data and problem you have!
IDL Machine Learning framework
The IDL Machine Learning framework provides a powerful and flexible way to run Machine Learning applications on numerical data. Before I get into specifics, let me briefly dwell on a few important considerations to keep in mind before using ML techniques.
The input data often dictates the type of algorithm one would typically use in Machine Learning. It is therefore imperative to understand the importance of the data preparation step, i.e. getting the numerical data ready for use in Machine Learning; more on data preparation later.
Once input data is correctly prepared, the IDL ML framework allows you to create and train models and apply them in classification, clustering, or regression applications.
Techniques and methods
Let me briefly explain what these terms mean and the conditions that dictate the choice of one or the other of these models.
The techniques of classification and regression belong to the supervised learning family, whereas clustering is an unsupervised learning technique.
To get a little more technical here, IDL supports Support Vector Machine (SVM), SoftMax and Feed Forward Neural Network methods for classification and regression. Autoencoder and KMeans are the models supported for unsupervised learning.
As you may have guessed, the technique of ML that you choose (classification, clustering or regression) depends upon both the nature of the input data and the nature of the output (result or prediction) intended.
Let’s look at the situations where these specific techniques may be used.
Classification: Uses examples to train a model that will predict a discrete output class (only a finite number of outputs is possible). In simple terms it means choosing one decision among many discrete possible decisions based on previous known experiences or behaviours.
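To make this concrete, here is a deliberately tiny sketch in Python rather than IDL (the feature values and the nearest-centroid rule are invented purely for illustration): whatever input it is given, a trained classifier can only ever answer with one of a fixed set of classes.

```python
import numpy as np

# Toy training data: one feature (say, petal length) and a discrete class label.
x_train = np.array([1.0, 1.2, 0.9, 4.8, 5.1, 5.3])
y_train = np.array([0, 0, 0, 1, 1, 1])  # only two possible output classes

# "Train" by computing the mean feature value (centroid) of each class.
centroids = {c: x_train[y_train == c].mean() for c in np.unique(y_train)}

def classify(x):
    # Predict the class whose centroid is nearest: the output is discrete.
    return min(centroids, key=lambda c: abs(x - centroids[c]))

print(classify(1.1))  # near the first group  -> 0
print(classify(5.0))  # near the second group -> 1
```

Real classifiers such as SVM or a neural network learn far richer decision rules, but the shape of the answer is the same: one label out of a finite set.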
Clustering: Uses examples to train a model that will cluster a dataset into a given number of groups or clusters. This method will break the data into groups that exhibit similar patterns. The flower example below is an illustration of this method.
Regression: Uses examples to train a model that will predict a continuous output value (an infinite number of output values is possible). In simple terms, this model will predict an outcome based on previously known experience when the output can potentially take any of the many potential continuous values. The house price prediction model below is based on this method.
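By contrast, a regression model outputs a number on a continuous scale. A minimal Python sketch (made-up floor areas and prices, with ordinary least squares standing in for the fancier methods above):

```python
import numpy as np

# Toy training data: floor area (m^2) vs. selling price (in $1000s).
area  = np.array([80., 100., 120., 150., 200.])
price = np.array([320., 400., 470., 600., 790.])

# Fit a straight line, price = a*area + b, by ordinary least squares.
a, b = np.polyfit(area, price, deg=1)

def predict(x):
    # The output can take any value on a continuum, unlike classification.
    return a * x + b

print(round(predict(130.)))  # -> 516
```

The house price prediction that follows is exactly this idea scaled up to many attributes instead of one.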
Now that we have briefly touched on data and techniques, let us get into some important considerations and try to enhance our understanding a little more using a couple of examples: first, using ML to predict the price of a house based on historical data, and second, clustering an image into classes of interest.
House valuation model: A simple illustration can explain this. Say you would like your ML algorithm to predict the price for which a house will sell. Typically, you will need a large collection of historical data that provides the actual selling prices of comparable properties.
The objective is to predict the price of a new house accurately based on historical sales records. The next logical step is to identify and weigh the importance of the parameters that will determine the selling price.
There could be hundreds of these parameters, but the beauty of ML algorithms is that we don’t need to worry about the importance and weight of each – the model will adjust appropriately to determine the importance of the parameters without any user intervention. This helps remove human bias from the weighting, although outliers in the data still need attention during the data preparation step.
Getting back to our house price problem, the attributes that are likely to determine the house prices can be (but are not limited to): number of bedrooms, number of bathrooms, land area, plinth area of the house, number of storeys, distance to the nearest train station, hospital or school. One could add many more attributes.
There could be thousands of houses with sold-price history that could be used for this model. For example, if there were 1000 houses, we could use 800 of them to train and build the ML-based prediction model, and the remaining 200 to test the accuracy of the model. Model parameters would then be tuned to ensure that predictions fall within a reasonable accuracy. Once the model is fully trained, it can be used to predict the selling price of any house to be put on the market.
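The 800/200 split described above can be sketched in a few lines of Python on synthetic data (the IDL framework provides its own shuffling and partitioning helpers, but the logic is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sold-house records: price driven mostly by floor area, plus noise.
n = 1000
area = rng.uniform(60, 250, n)
price = 4.0 * area + 20 + rng.normal(0, 15, n)

# Shuffle the records, then hold 200 houses back for testing.
idx = rng.permutation(n)
train, test = idx[:800], idx[800:]

# Train on 800 houses (here: a simple least-squares fit).
a, b = np.polyfit(area[train], price[train], 1)

# Measure accuracy on the 200 unseen houses.
mae = np.abs(a * area[test] + b - price[test]).mean()
print(f"mean absolute error on the test set: {mae:.1f}")
```

Because the test houses were never seen during training, the error on them is an honest estimate of how the model will do on a new listing.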
The house price example above uses the regression technique of Machine Learning, which can be implemented using the IDL ML framework.
Moving on to my flower example. Using my iPhone, I took a photo of “Crown of Thorns” (Euphorbia milii) – a beautiful flower (surrounded by thorns, of course) that grows in my garden. In this example I made use of an autoencoder to identify the patterns in this image – flower, leaves, buds, background, etc. An autoencoder is a type of neural network that specialises in learning a representation of the data, which can be used to group or cluster a dataset into a small set of categories; it too can be implemented using the IDL ML framework.
For the technically minded, this flower image is a JPEG file, and I clustered it into five different categories based on the RGB pixel values. (To get a bit more technical: to fine-tune my autoencoder, I specified what is called an activation function for each layer.)
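I cannot reproduce the IDL code here, so as a language-neutral illustration of the same grouping-by-RGB-similarity idea, here is a basic k-means loop in Python on stand-in pixel data (note: k-means, not the autoencoder I actually used, and random numbers in place of the photo):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the photo: an array of RGB pixel values in [0, 255].
pixels = rng.integers(0, 256, size=(5000, 3)).astype(float)

k = 5  # five categories, as in the flower example
centers = pixels[rng.choice(len(pixels), k, replace=False)]

for _ in range(20):
    # Assign each pixel to the nearest cluster centre in RGB space.
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # Move each centre to the mean of the pixels assigned to it.
    for c in range(k):
        if np.any(labels == c):
            centers[c] = pixels[labels == c].mean(axis=0)

# Every pixel now carries one of (at most) five category labels.
print(labels[:10])
```

An autoencoder arrives at its groups differently – by learning a compressed representation of the pixels – but the end product is the same kind of per-pixel category map.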
Training a neural network requires the use of an optimiser. An optimiser controls how the network's weights are updated at each training step, and hence how quickly the model converges to a solution. In this instance I used an optimiser available in the IDL framework called Gradient Descent.
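Gradient descent itself is simple at heart: nudge the weights a small step against the gradient of the error, over and over. A minimal Python sketch with a single weight and a fixed learning rate (toy numbers, nothing IDL-specific):

```python
import numpy as np

# Fit a single weight w so that w*x approximates y.
x = np.array([1., 2., 3., 4.])
y = 2.5 * x  # the "right answer" is w = 2.5

w, lr = 0.0, 0.01  # initial weight and a fixed learning rate
for _ in range(200):
    grad = 2 * np.mean((w * x - y) * x)  # gradient of the mean squared error
    w -= lr * grad                       # step downhill against the gradient
print(round(w, 3))  # -> 2.5
```

A real network repeats exactly this update across thousands of weights at once, with the gradients supplied by backpropagation.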
The examples above are relatively simple applications of the power of ML and are only provided to explain the basic capabilities of the IDL ML framework.
In practice we should be able to develop deeper and more complex frameworks based on DL (such as Convolutional Neural Networks) that can be applied to solve more challenging problems – for example, identifying all the solar panels in an urban area from an aerial photo, or a specific crop (such as a banana plantation) from satellite imagery.
Stay tuned to see the next big ML updates coming from Harris Geospatial in the near future!
About the author
Dr Dipak Paudyal
Principal Consultant for Remote Sensing and Imagery