
Writing GPS coordinates on an image from OneDrive using Azure Functions

Let's take a look at Azure Functions from a developer's perspective. In this post we will write an Azure Function that adds the GPS coordinates of a picture as a watermark.

If you want to jump straight to the code, without any other explanations, feel free to check the GitHub repository - https://github.com/vunvulear/Stuff/tree/master/Azure/azure-function-extractgpslocation-onedrive-drawwatermark

What is Azure Functions?
The best explanation that I can give is that Azure Functions is the AWS Lambda of the Azure world. It allows us to execute code in a serverless way, without thinking about where the code is hosted: you just write the code and run it as an Azure Function.
There are many things that we can say about Azure Functions; the most important ones for me are:
  • You can write code in different languages (C#, Node.js, PHP, Python, Bash, Batch, PowerShell)
  • Multiple connectors and triggers are supported, from queues and Blob Storage to Google Drive
  • Flexible and transparent pricing model; you pay only for what you use
  • Real-time processing
  • References to external libraries (e.g. NuGet packages) are supported

Our mission
The mission is to write an Azure Function that:
  • Is triggered each time a new image is added to a OneDrive folder
  • Extracts the GPS coordinates from the image attributes (if available)
  • Draws the GPS coordinates on the image
  • Saves the image in another folder in OneDrive
First step - Create the Azure Function
The first step is to create an Azure Function. This can be done from the Azure Portal; a good walkthrough of the process can be found here. We can create a 'GenericWebhookCSharp' function initially and delete the parts that we don't need.
Once we have done this, we need to go to the Integrate tab and specify our trigger. Delete all the Triggers and Outputs that are already defined; we will define them again.

Triggers and binding
In our case we need to use External File, and in the connection field we create a new connection that points to OneDrive. It is important to know that even though your access credentials are requested, they are not stored in Azure Functions. They are stored as API Connections, and the same connection can be reused by multiple functions if needed. The trigger fires the moment a new file is copied into the given path. As output, we use External File as well; here we can reuse the connection that we created for the trigger - onedrive_ONEDRIVE.
Now, let's take a look at the binding.
{
  "bindings": [
    {
      "type": "apiHubFileTrigger",
      "name": "inputFile",
      "path": "Imagini/Peliculă/{filename}",
      "connection": "onedrive_ONEDRIVE",
      "direction": "in"
    },
    {
      "type": "apiHubFile",
      "name": "outputFile",
      "path": "Imagini/Peliculă/pictureswithgpswatermark/{rand-guid}.jpg",
      "connection": "onedrive_ONEDRIVE",
      "direction": "out"
    }
  ],
  "disabled": false
}

The first binding specifies the trigger. As we can see, the direction is 'in' and the connection points to OneDrive. The 'path' is relative to our OneDrive root; in my case, the monitored folder is 'Imagini/Peliculă'. {filename} is the file name parameter. In the Azure Function, we will refer to the input file using the 'name' attribute - inputFile.
Similarly, we have the output, which is written to the 'Imagini/Peliculă/pictureswithgpswatermark' folder. '{rand-guid}' is used to generate a random name for each image.

Writing an empty function
As we can see below, we have inputFile and outputFile as parameters of the Run method. This method is the entry point each time the trigger fires. If you need to write something to the logs, you can use the TraceWriter, which also needs to be specified as a parameter.
public static void Run(Stream inputFile, Stream outputFile, TraceWriter log)
{
     log.Info("Image Process Starts"); 

     log.Info("Image Process Ends");
}

We can define our own classes and reference existing libraries or NuGet packages. To be able to work with files and this type of binding we'll need to add a reference to ApiHub; otherwise a cryptic error will be thrown:
Exception while executing function: Functions.SaasFileTriggerCSharp1. Microsoft.Azure.WebJobs.Host: One or more errors occurred. Exception binding parameter 'input'. Microsoft.Azure.ApiHub.Sdk

The reference is added like a normal using statement, but the #r directive tells Azure Functions to load the assembly from its shared repository.
#r "Microsoft.Azure.WebJobs.Extensions.ApiHub"

Save and Run
Before hitting the Save button, make sure that the Logs window is visible. This is useful because each time you hit the Save button, the function is compiled, and any errors during the build are displayed in the Logs window.

From now on, each time you copy or upload a new file to your OneDrive folder, your function will be called automatically, and you should be able to see the output in the logs.

Reading the GPS location
To read the GPS location from images, we will use ExifLib. This NuGet package allows us to read GPS information easily. To reference a NuGet package, we need to open project.json and add a dependency on it. Below you can find how the JSON should look. I also added the NuGet package that will be used later on to draw the coordinates on the image.
{
  "frameworks": {
    "net46":{
      "dependencies": {
        "ExifLib": "1.7.0.0",
        "System.Drawing.Primitives": "4.3.0"
      }
    }
  }
}

The moment you click the Save button, you will see that the function is compiled and that the NuGet packages, together with all their dependencies, are downloaded.
The code that extracts the GPS location has to take into account the case when an image doesn't have this information.
private static string GetCoordinate(Stream image, TraceWriter log)
{
    log.Info("Extract location information");
    ExifReader exifReader = new ExifReader(image);
    // GPS coordinates are stored in EXIF as {degrees, minutes, seconds} arrays
    double[] latitudeComponents;
    exifReader.GetTagValue(ExifTags.GPSLatitude, out latitudeComponents);
    double[] longitudeComponents;
    exifReader.GetTagValue(ExifTags.GPSLongitude, out longitudeComponents);

    log.Info("Prepare string content");
    string location = string.Empty;
    if (latitudeComponents == null ||
        longitudeComponents == null)
    {
        location = "No GPS location";
    }
    else
    {
        // Convert degrees/minutes/seconds to decimal degrees
        double latitude = latitudeComponents[0] + latitudeComponents[1] / 60 + latitudeComponents[2] / 3600;
        double longitude = longitudeComponents[0] + longitudeComponents[1] / 60 + longitudeComponents[2] / 3600;

        location = $"Latitude: '{latitude}' | Longitude: '{longitude}'";
    }

    return location;
}
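
One detail worth mentioning: the components returned by ExifLib are degrees, minutes and seconds, and the formula above converts them to decimal degrees (for example, 46° 45' 49.04" becomes approximately 46.7636). The conversion also assumes northern latitude and eastern longitude. If your pictures may come from the southern or western hemisphere, the reference tags have to be read as well; a minimal sketch of what could be added to the else branch (not part of the original function):

    string latitudeRef;
    exifReader.GetTagValue(ExifTags.GPSLatitudeRef, out latitudeRef);
    if (latitudeRef == "S")
    {
        latitude = -latitude; // south of the equator
    }

    string longitudeRef;
    exifReader.GetTagValue(ExifTags.GPSLongitudeRef, out longitudeRef);
    if (longitudeRef == "W")
    {
        longitude = -longitude; // west of the Greenwich meridian
    }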

The next step is to call our method from the Run method (string locationText = GetCoordinate(inputFile, log);). Once we save, the GPS location for each image can be found in the log window.

string locationText = GetCoordinate(inputFile, log);
log.Info($"Text to be written: '{locationText}'");
---- log window ----
2016-12-06T00:06:35.680 Image Process Starts
2016-12-06T00:06:35.680 Extract location information
2016-12-06T00:06:35.680 Prepare string content
2016-12-06T00:06:35.680 Text to be written: 'Latitude: '46.7636219722222' | Longitude: '23.5550620833333''


Write the watermark (coordinates)
The last step is to write the text on the image and copy the image stream to the output. The code is the same code that would be required in a console application for the same task.

private static void WriteWatermark(string watermarkContent, Stream originalImage, Stream newImage, TraceWriter log)
{
    log.Info("Write text to picture");
    using (Image inputImage = Image.FromStream(originalImage, true))
    {
        using (Graphics graphic = Graphics.FromImage(inputImage))
        {
            graphic.SmoothingMode = SmoothingMode.HighQuality;
            graphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphic.PixelOffsetMode = PixelOffsetMode.HighQuality;
            // Dispose the Font after drawing instead of leaking it
            using (Font font = new Font("Tahoma", 100, FontStyle.Bold))
            {
                graphic.DrawString(watermarkContent, font, Brushes.Red, 200, 200);
            }
            graphic.Flush();

            log.Info("Write to the output stream");
            inputImage.Save(newImage, ImageFormat.Jpeg);
        }
    }
}
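
A hard-coded 100pt font works well for large photos, but on small images the text can overflow the picture. A possible refinement, sketched below, is to scale the font with the image width (the divisor of 30 is just an assumption to tune):

    // Sketch: derive the font size from the image width instead of hard-coding it
    float fontSize = Math.Max(12f, inputImage.Width / 30f);
    using (Font font = new Font("Tahoma", fontSize, FontStyle.Bold))
    {
        graphic.DrawString(watermarkContent, font, Brushes.Red, 200, 200);
    }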

Don't forget to reset the position of the inputFile stream before calling WriteWatermark. This is necessary because reading the coordinates moves the cursor away from position 0.
In the end, the Run method should look like this:
public static void Run(Stream inputFile, Stream outputFile, TraceWriter log)
{
    
    log.Info("Image Process Starts"); 

    string locationText = GetCoordinate(inputFile, log);
    log.Info($"Text to be written: '{locationText}'");

    // Reset position. After Exif operations the cursor location is not on position 0 anymore;
    inputFile.Position = 0;

    WriteWatermark(locationText, inputFile, outputFile, log);

    log.Info("Image Process Ends");
}

Final output
The final output of our function is an image with the coordinates written in red, see below.

Conclusion
In this post we discovered how we can write an Azure Function that adds the GPS coordinates of the location where a picture was taken as a watermark. In the next post we will discover how we can do the same thing directly from Visual Studio and how we can integrate CI with Azure Functions.

Full code can be found on GitHub - https://github.com/vunvulear/Stuff/tree/master/Azure/azure-function-extractgpslocation-onedrive-drawwatermark 

Next post about Azure Functions - Azure Functions integration with Visual Studio and CI
