Serverless Azure Function – Run a cron job using a timer trigger

This post is about using a serverless Azure Function to run a batch or cron job. There are many scenarios where a cron job can be executed with an Azure Function using a timer trigger.

A serverless Azure Function lets us schedule a custom block of code to run at the times we want and gives us immediate visibility of its logs in the Azure portal. Since it is serverless, we do not have to set up any infrastructure: there are no virtual machines to provision and no IIS to configure in order to host the service.

We will create a demo project that runs as a batch job and fetches the BBC News RSS feed for the Asia region, then sends an email using Gmail's SMTP settings. Every 6 hours this cron job will sniff the BBC News RSS feed and email the top headlines to my Gmail inbox.

Before we create the new project, please ensure you have installed the Azure development workload for Visual Studio 2017.

Azure Functions offers multiple trigger types, each available as a project template:

1. HTTPTrigger – Trigger the execution of your code by using an HTTP request.
2. TimerTrigger – Execute cleanup or other batch tasks on a predefined schedule.
3. CosmosDBTrigger – Process Azure Cosmos DB documents when they are added or updated in collections in a NoSQL database.
4. BlobTrigger – Process Azure Storage blobs when they are added to containers. You might use this function for image resizing.
5. QueueTrigger – Respond to messages as they arrive in an Azure Storage queue.

We need a reference to System.ServiceModel (or the System.ServiceModel.Syndication NuGet package) so we can use SyndicationFeed to fetch and read RSS feeds. Install it from the Package Manager Console:

PM> Install-Package System.ServiceModel.Syndication
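With the package in place, reading the feed takes only a few lines. Here is a minimal sketch of a helper that pulls the top headline titles with SyndicationFeed; the BBC Asia feed URL and the helper name are my own assumptions for illustration:

using System.ServiceModel.Syndication;
using System.Text;
using System.Xml;

// Helper that reads the feed and returns the first few headline titles.
// The feed URL is an assumption; point it at whichever feed you want to sniff.
public static class FeedReader
{
    public static string GetTopHeadlines(int count = 10)
    {
        const string feedUrl = "http://feeds.bbci.co.uk/news/world/asia/rss.xml";
        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            var headlines = new StringBuilder();
            int taken = 0;
            foreach (SyndicationItem item in feed.Items)
            {
                if (taken++ >= count) break;
                headlines.AppendLine(item.Title.Text);
            }
            return headlines.ToString();
        }
    }
}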

We would like to run this Azure Function every 6 hours, i.e. 4 times a day, so we select the timer trigger as the function type.

For testing we will change the timer trigger to 20 seconds; in the image below we can see the cron job being executed every 20 seconds.
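Putting it together, here is a hedged sketch of what the timer-triggered function could look like (assuming the v2 Functions runtime with ILogger; on v1 the log parameter is a TraceWriter). The NCRONTAB expression "0 0 */6 * * *" fires every 6 hours, and "*/20 * * * * *" is the 20-second schedule used for testing. The Gmail address and app password are placeholders, and FeedReader is the helper sketched above:

using System;
using System.Net;
using System.Net.Mail;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class RSSFeedSnifferFunction
{
    // "0 0 */6 * * *" = at minute 0 of every 6th hour (4 runs a day).
    // While testing, swap it for "*/20 * * * * *" to fire every 20 seconds.
    [FunctionName("RSSFeedSniffer")]
    public static void Run([TimerTrigger("0 0 */6 * * *")] TimerInfo myTimer, ILogger log)
    {
        string headlines = FeedReader.GetTopHeadlines();

        // Gmail SMTP settings; the address and app password below are placeholders.
        using (var smtp = new SmtpClient("smtp.gmail.com", 587))
        {
            smtp.EnableSsl = true;
            smtp.Credentials = new NetworkCredential("yourname@gmail.com", "your-app-password");
            smtp.Send("yourname@gmail.com", "yourname@gmail.com", "BBC Asia headlines", headlines);
        }

        log.LogInformation($"Headlines emailed at {DateTime.UtcNow:u}");
    }
}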

Publish Azure Function to Azure Cloud

We will publish the Azure Function to the cloud. We need to create a new publish profile in Azure in order to publish this timer-triggered function.

We will name our Azure Function RSSFeedSniffer.

After it is successfully published, the browser is redirected to
https://rssfeedsniffer.azurewebsites.net/

It can also be seen on the Azure portal dashboard.


Azure Cosmos DB – Angular with WebAPI

This article is about using Azure Cosmos DB with Angular and WebAPI. Why Azure Cosmos DB? Any web, mobile, gaming, or IoT application that needs to handle extensive amounts of data, reads, and writes is a great use case.

A few differences between a document DB and a relational database (document DB first in each pair):

  • De-normalized data (think JSON, key-value pairs) vs. normalized data (plain SQL queries)
  • Referential integrity NOT enforced vs. referential integrity enforced through normalization and relationships
  • Mixed data in a collection vs. uniform data in tables
  • Flexible schema vs. a less flexible schema
  • SQL-like query language, plus JavaScript, vs. pure T-SQL

More info here – https://azure.microsoft.com/en-ca/services/cosmos-db/

We will create a project, a beer tracker, using Angular, WebAPI and Cosmos DB. We will use the Azure Cosmos DB Emulator to avoid configuring the Azure Cosmos DB service itself, which would make this post really long.

Download Azure CosmosDB Emulator here – https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator

This is how the final demo will look:

Please note that in this application we are using the local Cosmos DB Emulator instead of the real Azure Cosmos DB service. Once installed properly, it should appear as below in your browser; I have it installed and working in Chrome.

We will start with WebAPI: let's create a new WebAPI project in Visual Studio 2017. After that we will hook Azure Cosmos DB into the WebAPI, and the last part will be the frontend UI in Angular.

In the next step, we install the Microsoft.Azure.DocumentDB NuGet package. DocumentDB is a schema-free NoSQL document database service designed for modern mobile and web applications.

Install-Package Microsoft.Azure.DocumentDB -Version 2.2.3

We need to enable CORS in our Web API to allow requests from the frontend Angular application. For that, install Microsoft.AspNet.WebApi.Cors using NuGet:

Install-Package Microsoft.AspNet.WebApi.Cors -Version 5.2.7

Let's go to App_Start and update WebApiConfig.cs:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services

        // Web API routes
        config.MapHttpAttributeRoutes();

        EnableCorsAttribute cors = new EnableCorsAttribute("*", "*", "*");
        config.EnableCors(cors);

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

We will also update the web.config of our WebAPI project to use the Cosmos DB Emulator. Later on the values can be swapped for Azure Cosmos DB keys with no impact on the rest of this demo.

<!-- config keys for Cosmos DB -->
<add key="endpoint" value="https://localhost:8081" />
<add key="authKey" value="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" />
<add key="database" value="AngularBeerMeetup" />
<add key="collection" value="BeerMeetupCollection" />
<!-- end config keys for Cosmos DB -->

Now let's add the model and the API controller.

namespace BeerMeetupSolution.Models
{
    using Newtonsoft.Json;

    public class BeerMeetup
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }

        [JsonProperty(PropertyName = "uid")]
        public string UId { get; set; }

        [JsonProperty(PropertyName = "location")]
        public string Location { get; set; }

        [JsonProperty(PropertyName = "brand")]
        public string Brand { get; set; }

        [JsonProperty(PropertyName = "cheers")]
        public string Cheers { get; set; }
    }
}

We also create a DocumentDBRepository class to hold the Cosmos DB CRUD operations. These CRUD methods will be called by the API controllers.

public static class DocumentDBRepository<T> where T : class
{
    private static readonly string DatabaseId = ConfigurationManager.AppSettings["database"];
    private static readonly string CollectionId = ConfigurationManager.AppSettings["collection"];
    private static DocumentClient client;

    public static async Task<T> GetItemAsync(string id)
    {
        try
        {
            Document document = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
            return (T)(dynamic)document;
        }
        catch (DocumentClientException e)
        {
            if (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                return null;
            }
            else
            {
                throw;
            }
        }
    }

    public static async Task<IEnumerable<T>> GetItemsAsync()
    {
        IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1 })
            .AsDocumentQuery();

        List<T> results = new List<T>();
        while (query.HasMoreResults)
        {
            results.AddRange(await query.ExecuteNextAsync<T>());
        }

        return results;
    }

    public static async Task<IEnumerable<T>> GetItemsAsync(Expression<Func<T, bool>> predicate)
    {
        IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1 })
            .Where(predicate)
            .AsDocumentQuery();

        List<T> results = new List<T>();
        while (query.HasMoreResults)
        {
            results.AddRange(await query.ExecuteNextAsync<T>());
        }

        return results;
    }

    public static async Task<T> GetSingleItemAsync(Expression<Func<T, bool>> predicate)
    {
        IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1 })
            .Where(predicate)
            .AsDocumentQuery();
        List<T> results = new List<T>();
        results.AddRange(await query.ExecuteNextAsync<T>());
        return results.SingleOrDefault();
    }

    public static async Task<Document> CreateItemAsync(T item)
    {
        return await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), item);
    }

    public static async Task<Document> UpdateItemAsync(string id, T item)
    {
        return await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id), item);
    }

    public static async Task DeleteItemAsync(string id)
    {
        await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
    }

    public static void Initialize()
    {
        client = new DocumentClient(new Uri(ConfigurationManager.AppSettings["endpoint"]), ConfigurationManager.AppSettings["authKey"]);
        CreateDatabaseIfNotExistsAsync().Wait();
        CreateCollectionIfNotExistsAsync().Wait();
    }

    private static async Task CreateDatabaseIfNotExistsAsync()
    {
        try
        {
            await client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri(DatabaseId));
        }
        catch (DocumentClientException e)
        {
            if (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                await client.CreateDatabaseAsync(new Database { Id = DatabaseId });
            }
            else
            {
                throw;
            }
        }
    }

    private static async Task CreateCollectionIfNotExistsAsync()
    {
        try
        {
            await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId));
        }
        catch (DocumentClientException e)
        {
            if (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                await client.CreateDocumentCollectionAsync(
                    UriFactory.CreateDatabaseUri(DatabaseId),
                    new DocumentCollection { Id = CollectionId },
                    new RequestOptions { OfferThroughput = 1000 });
            }
            else
            {
                throw;
            }
        }
    }
}
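
One thing to note: the repository keeps a static DocumentClient that is only created inside Initialize(), so that method has to run once at startup before any controller uses the repository. A minimal sketch of wiring it up in Global.asax (assuming a standard ASP.NET Web API project) could be:

using System.Web;
using System.Web.Http;

public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Create the DocumentClient and make sure the database and collection exist
        // before any controller calls into the repository.
        DocumentDBRepository<BeerMeetupSolution.Models.BeerMeetup>.Initialize();

        GlobalConfiguration.Configure(WebApiConfig.Register);
    }
}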

Here is the API controller; its endpoints can be tested with Fiddler or Postman. All of these methods are exposed as HTTP endpoints. We have also used a WebAPI route prefix, which helps the Angular code resolve the URLs and access them.

namespace BeerMeetupSolution.Controllers
{
    [RoutePrefix("api/beermeetup")]
    public class BeerMeetupController : ApiController
    {
        [HttpGet]
        public async Task<IEnumerable<Models.BeerMeetup>> GetAsync()
        {
            IEnumerable<Models.BeerMeetup> value = await DocumentDBRepository<Models.BeerMeetup>.GetItemsAsync();
            return value;
        }

        [HttpPost]
        public async Task<Models.BeerMeetup> CreateAsync([FromBody] Models.BeerMeetup objbm)
        {
            if (ModelState.IsValid)
            {
                await DocumentDBRepository<Models.BeerMeetup>.CreateItemAsync(objbm);
                return objbm;
            }
            return null;
        }

        public async Task<string> Delete(string uid)
        {
            try
            {
                Models.BeerMeetup item = await DocumentDBRepository<Models.BeerMeetup>.GetSingleItemAsync(d => d.UId == uid);
                if (item == null)
                {
                    return "Failed";
                }
                await DocumentDBRepository<Models.BeerMeetup>.DeleteItemAsync(item.Id);
                return "Success";
            }
            catch (Exception ex)
            {
                return ex.ToString();
            }
        }

        public async Task<Models.BeerMeetup> Put(string uid, [FromBody] Models.BeerMeetup o)
        {
            try
            {
                if (ModelState.IsValid)
                {
                    Models.BeerMeetup item = await DocumentDBRepository<Models.BeerMeetup>.GetSingleItemAsync(d => d.UId == uid);
                    if (item == null)
                    {
                        return null;
                    }
                    o.Id = item.Id;
                    await DocumentDBRepository<Models.BeerMeetup>.UpdateItemAsync(item.Id, o);
                    return o;
                }
                return null;
            }
            catch (Exception)
            {
                return null;
            }
        }
    }
}

Now we will switch to our frontend, the Angular part. Ensure you have npm installed and configured on your system.

Type the following into the command prompt:

ng new AngularUI 

It will take some time for the Angular CLI to create the new project; once it completes, we will switch to Visual Studio Code.

The boilerplate generated by the Angular CLI is now ready, and we can start building our frontend code.

Add a model for BeerMeetup:

export class BeerMeetup {
  uid: string;
  location: string;
  brand: string;
  cheers: string;
}

After the model, we add a service which will contain the CRUD methods.

  
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

import { BeerMeetup } from './beermeetup';

const api = 'http://localhost:53090/api';

@Injectable()
export class BeerMeetupService {
  constructor(private http: HttpClient) { }

  getBM() {
    return this.http.get<Array<BeerMeetup>>(`${api}/beermeetup`);
  }

  deleteBM(beermeetup: BeerMeetup) {
    return this.http.delete(`${api}/beermeetup?uid=${beermeetup.uid}`);
  }

  addBM(beermeetup: BeerMeetup) {
    return this.http.post<BeerMeetup>(`${api}/beermeetup/`, beermeetup);
  }

  updateBM(beermeetup: BeerMeetup) {
    return this.http.put<BeerMeetup>(`${api}/beermeetup?uid=${beermeetup.uid}`, beermeetup);
  }
}

Lastly, we will add the component:

  
import { Component, OnInit } from '@angular/core';

import { BeerMeetup } from './beermeetup';
import { BeerMeetupService } from './beermeetup.service';

@Component({
  selector: 'app-ohs',
  templateUrl: './beermeetup.component.html'
})
export class BeerMeetupComponent implements OnInit {
  addingBM = false;
  deleteButtonSelected = false;
  meetups: BeerMeetup[] = [];
  selectedBM: BeerMeetup;

  constructor(private beermeetupService: BeerMeetupService) { }

  ngOnInit() {
    this.getBM();
  }

  cancel() {
    this.addingBM = false;
    this.selectedBM = null;
  }

  deleteBM(meetup: BeerMeetup) {
    this.deleteButtonSelected = true;
    if (!confirm('Are you sure want to delete this meetup?')) {
      return;
    }
    this.beermeetupService.deleteBM(meetup).subscribe(res => {
      this.meetups = this.meetups.filter(m => m !== meetup);
      if (this.selectedBM === meetup) {
        this.selectedBM = null;
      }
    });
  }

  getBM() {
    return this.beermeetupService.getBM().subscribe(meetups => {
      this.meetups = meetups;
    });
  }

  enableAddMode() {
    this.addingBM = true;
    this.selectedBM = new BeerMeetup();
  }

  onSelect(meetup: BeerMeetup) {
    if (!this.deleteButtonSelected) {
      this.addingBM = false;
      this.selectedBM = meetup;
    }
    this.deleteButtonSelected = false;
  }

  save() {
    if (this.addingBM) {
      this.beermeetupService.addBM(this.selectedBM).subscribe(meetup => {
        this.addingBM = false;
        this.selectedBM = null;
        this.meetups.push(meetup);
      });
    } else {
      this.beermeetupService.updateBM(this.selectedBM).subscribe(obj => {
        this.addingBM = false;
        this.selectedBM = null;
      });
    }
  }
}

Once this is done, let's switch over to app.module.ts and make sure our component and service are registered here:

  
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpClientModule } from '@angular/common/http';

import { AppComponent } from './app.component';
import { BeerMeetupService } from './beermeetup.service';
import { BeerMeetupComponent } from './beermeetup.component';

@NgModule({
  declarations: [
    AppComponent,
    BeerMeetupComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpClientModule
  ],
  providers: [BeerMeetupService],
  bootstrap: [AppComponent]
})
export class AppModule { }


Now we will move to the Visual Studio Code terminal and run:

ng serve -o

Azure DevOps – Angular 5 with .Net WebApi

This article shows how to set up Azure DevOps CI/CD pipelines for a full-stack application. Here I use a multilayered solution, a tracker that shows tasks per user. Below is the technology stack used.

  • Frontend UI – Angular 5
  • Middleware APIs – ASP.NET WebAPI
  • RDBMS – SQL Server 2012
  • Dependency Injection – Unity
  • ORM – Entity Framework
  • Framework – .NET Core

I have the code hosted on GitHub as a public repo.

The projects in the solution look like this:


First, create an Azure DevOps account at https://dev.azure.com.

As for pricing, Azure DevOps is free for open source projects and small projects (up to five users).

I have created Tracker as a private project, which I will be using for my Azure CI/CD pipeline.

Once you create a project, there is an option on the left-hand side with a submenu that shows the files and their metadata.

When I click Files, I do not see any files yet, since I am using Git version control and the source is hosted on GitHub. I will run these two commands at the command prompt to push my latest changes:

git remote add origin https://github.com/varunmaggo/Tracker.git
git push -u origin --all

Now the code is pushed, and the next step is to create the Azure DevOps pipeline from the option on the left-hand side. We need to create the Build and Release pipelines separately.

Go back to the Build option, where we need to create a build agent with multiple tasks:

  • Install the node package manager (npm)
  • Install the Angular CLI
  • Build the Angular packages
  • NuGet restore to install packages for the WebAPI solution
  • Run the unit tests
  • And finally, publish the artifacts

Once we are set with the build pipeline, we proceed to the Release pipeline.

To keep it simple, I will use only a staging environment. In real-world scenarios, we can have dev, stage and prod environments separately.

In the image below, we select the build source type, which is our build pipeline, and deploy its published artifacts to the staging environment.

JWT Authentication for WebAPI

This post is about securing a Web API using JWT token-based authentication. JWT stands for JSON Web Token. JSON Web Tokens are an open, industry-standard method for representing claims securely between two parties. In token-based authentication, the user sends a username and password and in exchange gets a token that can be used to authenticate subsequent requests.

A JWT token looks like:

Header.Payload.Signature

HEADER PAYLOAD SIGNATURE
AAAAAAAAAAAAA. BBBBBBBBBBBBBBBBB. CCCCCCCCCCCCC
<base64-encoded header>.<base64-encoded claims>.<base64-encoded signature>

.NET has built-in support for JWT tokens in the namespace below.

using System.IdentityModel.Tokens.Jwt;

A JWT token has three sections:

  • Header: JSON, base64-encoded.
  • Claims (payload): JSON, base64-encoded.
  • Signature: created by signing the header and claims, base64-encoded.

In the project below, we will see how JWT token authentication is implemented.

Step 1 – The browser client sends an HTTP request with a username and password. This is validated using a WebAPI filter attribute:

AuthorizationFilterAttribute

Step 2 – The server validates the username and password and completes a handshake. After the handshake, the server generates the token and sends it to the client.

The code below generates the token for the user (client).


We need to add the two NuGet packages below from the NuGet Package Manager:

Install-Package Microsoft.IdentityModel.Tokens -Version 5.4.0   
Install-Package System.IdentityModel.Tokens.Jwt -Version 5.4.0
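
The original token-generation snippet is not reproduced here, so below is a minimal sketch of issuing a JWT with these packages; the signing key, issuer, audience and one-hour lifetime are assumptions for illustration, not values from the original project:

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class JwtTokenGenerator
{
    // Placeholder signing key; in a real application keep this in configuration, not in code.
    private const string Secret = "this-is-a-demo-signing-key-of-at-least-32-bytes!";

    public static string GenerateToken(string username)
    {
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Secret));
        var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: "demo-issuer",
            audience: "demo-audience",
            claims: new[] { new Claim(ClaimTypes.Name, username) },
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        // Serialize the token to its compact Header.Payload.Signature form.
        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}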

Step 3 – Validate the token

We use the System.IdentityModel.Tokens.Jwt library for generating and validating tokens. To implement JWT in Web API, we create an authentication filter which executes before every request. It verifies the token contained in the request header and allows or denies access to the resource based on that token.
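
As a rough idea of what such a filter can look like, here is a hedged sketch of an AuthorizationFilterAttribute that reads the bearer token from the Authorization header and validates it. The class name and validation parameters are assumptions and must match whatever values were used when the token was issued:

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;
using Microsoft.IdentityModel.Tokens;

public class JwtAuthenticationAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        var authHeader = actionContext.Request.Headers.Authorization;
        if (authHeader == null || authHeader.Scheme != "Bearer")
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
            return;
        }

        try
        {
            // These values must match the ones used when the token was generated.
            var parameters = new TokenValidationParameters
            {
                ValidIssuer = "demo-issuer",
                ValidAudience = "demo-audience",
                IssuerSigningKey = new SymmetricSecurityKey(
                    Encoding.UTF8.GetBytes("this-is-a-demo-signing-key-of-at-least-32-bytes!"))
            };
            new JwtSecurityTokenHandler().ValidateToken(authHeader.Parameter, parameters, out _);
        }
        catch (Exception)
        {
            // Invalid or expired token: deny access to the resource.
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        }
    }
}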

Call FitBit API using C#

FitBit integration is pretty easy, since the FitBit documentation is very detailed and easy to pick up. In order to call the FitBit APIs, you need to register on their development platform and do a handshake with token-based authentication. FitBit uses token-based authentication for web-based applications, so I am using an ASP.NET MVC application as the web client.

First, register yourself and get a Client ID and Client Secret:

https://dev.fitbit.com

Once you have registered and created the application in the FitBit dev environment, you will see your application as below:

As a developer, we need the client ID, client secret and callback URL; we will save this information in the config files. We then redirect the user to Fitbit.com to prompt them to authorize this app, and on successful authentication we call the required FitBit APIs and get the data.
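
As a rough sketch of the two OAuth 2.0 steps involved (the authorization redirect and the code-for-token exchange), the code below shows one way to do it with HttpClient. The authorize and token endpoint URLs are the ones documented on dev.fitbit.com, while the client ID, client secret and callback URL are placeholders:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class FitbitAuth
{
    // Placeholders; use the values from your registered FitBit application.
    private const string ClientId = "YOUR_CLIENT_ID";
    private const string ClientSecret = "YOUR_CLIENT_SECRET";
    private const string CallbackUrl = "http://localhost:5000/callback";

    // Step 1: build the URL that sends the user to FitBit's authorization page.
    public static string BuildAuthorizationUrl()
    {
        return "https://www.fitbit.com/oauth2/authorize" +
               "?response_type=code" +
               $"&client_id={ClientId}" +
               $"&redirect_uri={Uri.EscapeDataString(CallbackUrl)}" +
               $"&scope={Uri.EscapeDataString("profile activity")}";
    }

    // Step 2: exchange the authorization code returned to the callback for an access token.
    public static async Task<string> ExchangeCodeForTokenAsync(string code)
    {
        using (var http = new HttpClient())
        {
            var basic = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{ClientId}:{ClientSecret}"));
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basic);

            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["client_id"] = ClientId,
                ["redirect_uri"] = CallbackUrl,
                ["code"] = code
            });

            HttpResponseMessage response = await http.PostAsync("https://api.fitbit.com/oauth2/token", body);
            // JSON containing access_token and refresh_token.
            return await response.Content.ReadAsStringAsync();
        }
    }
}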

More info on the Web API calls, with request and response details, is on the FitBit website: https://dev.fitbit.com/build/reference/web-api/basics/

Once we run our web app, we go to its login page. This login page calls the FitBit API and performs the handshake.

We pass the same client ID and client secret for authentication.

On a successful handshake, we see our profile data from FitBit. FitBit also provides user activity such as daily steps, weekly steps and heart rate data, but that will be for another post.

5 considerations to save your SQL database from bottlenecks

We all write SQL queries to fetch data from the database. Many times an inefficient query can become a bottleneck on the production database's resources and cause slow performance for other users. Often we write blind queries and pull back all possible data, even when it is not required or shown to the end user on the form or grid. In most scenarios there are a few tweaks you can make to your SQL queries to optimize them.

1. Indexes

Database novices often find indexes mysterious or difficult, so they either index nothing or try to index everything. A practical middle ground is to index the columns that actually appear in WHERE clauses and JOIN conditions, and to check the query plan before adding more.

2. Less is more, so use SELECT * carefully

A common way of retrieving the desired columns is to use the * symbol even though not all columns are really needed; instead, name only the columns you need. If you only need a limited number of rows, also use the LIMIT clause (or your database's equivalent, such as TOP in SQL Server). Take a look at the following code:

SELECT name, price FROM products;

SELECT name, price FROM products LIMIT 10;

3. Say no to correlated subqueries

A correlated subquery is a subquery that depends on the outer query: it uses data from the outer query in its WHERE clause, so it may be re-evaluated for every outer row. Suppose you want to list all users who have made a donation. The first query below uses a correlated subquery; the second rewrites it as a join, which the optimizer can usually handle more efficiently:

SELECT user_id, last_name FROM users WHERE EXISTS (SELECT * FROM donationuser WHERE donationuser.user_id = users.user_id);

SELECT DISTINCT users.user_id FROM users INNER JOIN donationuser ON users.user_id = donationuser.user_id;

4. Avoid Wildcards

In SQL, the wildcard is provided by the '%' symbol. Be careful with wildcards: a leading wildcard prevents index use and will slow down your query, especially on really large tables. We can optimize a wildcard query by using a postfix wildcard instead of a prefix or full wildcard. Below are a few examples.

#Full wildcard
SELECT * FROM TABLE WHERE COLUMN LIKE '%hello%';
#Postfix wildcard
SELECT * FROM TABLE WHERE COLUMN LIKE 'hello%';
#Prefix wildcard
SELECT * FROM TABLE WHERE COLUMN LIKE '%hello';
5. COUNT vs EXISTS, you decide

Some of us use the COUNT operator to determine whether particular data exists, but in many cases EXISTS is the better choice. Compare the two approaches below:

SELECT COUNT(COLUMN) FROM TABLE WHERE COLUMN = 'value';

IF EXISTS (SELECT 1 FROM TABLE WHERE COLUMN = 'value') ...

The COUNT version is wasteful for an existence check, since the engine keeps scanning to count every matching record just to produce a number, whereas EXISTS can stop at the first match.