Event Sourcing and Command Query Responsibility Segregation (CQRS)

Event sourcing is an architectural pattern that captures and stores every state change within an application as a series of immutable events. Rather than only maintaining the current state of an entity, event sourcing uses an append-only store to record the full series of actions that led to the current state, allowing that state to be reconstructed at any given point.

Conventional applications typically persist the end state without tracking the path that led to it. For instance, consider a service ticket entity with an evolving status:

{Ticketid:1, comments:"VPN not working", status:'open'} - Initial State

After two days, the ticket is assigned to someone:

{Ticketid:1, comments:"VPN not working", status:'assigned'}

It then takes one day to resolve and one more day to close (final state):


{Ticketid:1, comments:"VPN not working", status:'in-progress'}
{Ticketid:1, comments:"VPN not working", status:'closed'} - end state only retained

This method loses the history of entity state changes, retaining only the final state. As a result, valuable information like "the ticket was open for two days", or analytics to identify long-running tickets, becomes inaccessible.

By contrast, event sourcing retains every entity-changing event, building a comprehensive history of the service ticket’s evolution. This approach not only supports thorough analytics over all available data but also makes it possible to create summarized snapshots by replaying events sequentially, which is instrumental in reconstructing state and building projections.

TicketOpenedEvent - {Ticketid:1, comments:"VPN not working", status:'opened'}
TicketAssignedEvent - {Ticketid:1, comments:"VPN not working", status:'assigned'}
TicketInProgressEvent - {Ticketid:1, comments:"VPN not working", status:'in-progress'}
TicketClosedEvent - {Ticketid:1, comments:"VPN not working", status:'closed'}
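
As a rough sketch of the idea (the TicketEvent and TicketState types below are illustrative and not tied to any particular event-store library), the current state is rebuilt by replaying the stream in order, and because the full history is retained, questions like how long the ticket sat before it was assigned stay answerable:

using System;
using System.Collections.Generic;
using System.Linq;

public record TicketEvent(int TicketId, string Comments, string Status, DateTime OccurredAt);

public class TicketState
{
    public int TicketId { get; private set; }
    public string Comments { get; private set; } = "";
    public string Status { get; private set; } = "";

    private void Apply(TicketEvent e)
    {
        TicketId = e.TicketId;
        Comments = e.Comments;
        Status = e.Status;
    }

    // Current state (or a snapshot at any earlier point) is rebuilt by replaying events in order.
    public static TicketState Replay(IEnumerable<TicketEvent> stream)
    {
        var state = new TicketState();
        foreach (var e in stream) state.Apply(e);
        return state;
    }

    // The retained history also supports analytics, e.g. how long until the ticket was assigned.
    public static TimeSpan TimeUntilAssigned(IReadOnlyList<TicketEvent> stream) =>
        stream.First(e => e.Status == "assigned").OccurredAt -
        stream.First(e => e.Status == "opened").OccurredAt;
}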

A familiar example of event sourcing is the Azure Activity Log, which provides insight into the operations performed on each Azure resource; the state is persisted as a series of immutable events.

Event stores serve as pivotal components in Event Sourcing architectures, providing reliable and durable storage solutions for recording and managing events. They empower applications to maintain a complete historical record, perform analytics, support recovery processes, and establish a robust foundation for building scalable and resilient systems. Notably, specialized databases like EventStoreDB cater specifically to Event Sourcing needs.

CQRS (Command Query Responsibility Segregation)

CQRS is a pattern that segregates the responsibilities of handling read and write operations within an application. Unlike the traditional CRUD (Create, Read, Update, Delete) approach, CQRS advocates for the separation of concerns between:

  • Commands: Represent actions that modify the system’s state (e.g., creating, updating, or deleting data).
  • Queries: Retrieve data without affecting the system’s state, offering a view of the data to the user.

The combination of Event Sourcing and CQRS allows services to operate independently, consuming events asynchronously to update read models without disrupting write-side operations.
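
A minimal sketch of that separation, reusing the TicketEvent record from the earlier snippet (the interfaces and class names here are illustrative assumptions, not a specific framework's API): commands append events on the write side, while queries read from a separate read model that a projector keeps up to date asynchronously.

using System;
using System.Collections.Generic;

// Write side: the append-only store and a command handler that records a state change as an event.
public interface IEventStore { void Append(TicketEvent e); }

public record CloseTicketCommand(int TicketId);

public class CloseTicketHandler
{
    private readonly IEventStore _events;
    public CloseTicketHandler(IEventStore events) => _events = events;

    public void Handle(CloseTicketCommand cmd) =>
        _events.Append(new TicketEvent(cmd.TicketId, "", "closed", DateTime.UtcNow));
}

// Read side: queries serve a denormalized read model that a projector updates asynchronously from the event stream.
public record TicketSummary(int TicketId, string Status, TimeSpan OpenDuration);

public class TicketQueries
{
    private readonly IReadOnlyDictionary<int, TicketSummary> _readModel;
    public TicketQueries(IReadOnlyDictionary<int, TicketSummary> readModel) => _readModel = readModel;

    public TicketSummary? GetSummary(int ticketId) =>
        _readModel.TryGetValue(ticketId, out var summary) ? summary : null;
}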

Overcoming Challenges and Considerations

Implementing Event Sourcing and CQRS requires careful consideration, especially regarding event consistency across services, managing communication, and synchronizing events effectively.

Conclusion: Empowering Modern Architectures

The combination of Event Sourcing and CQRS fundamentally reshapes software architecture, offering resilience, scalability, and flexibility, and redefines how systems handle data in today’s dynamic software landscape.

Domain-Driven Design and Microservices Architecture

I have been working with Domain-Driven Design and microservices architecture a lot lately, so I thought I would do a series of posts covering architecture and patterns. Stay tuned 🙂

Domain-Driven Design (DDD) was coined by Eric Evans in his 2003 book of the same name, famously referred to as the Blue Book. Twenty years later, its concepts are still very relevant, especially when building microservices. In today’s rapidly evolving software landscape, DDD stands as a guiding path to crafting robust and sustainable software systems. At its core, DDD encapsulates a philosophy that prioritizes understanding and modeling the intricacies of the domain within which a software system operates, and aligning the technical implementation with those intricacies.

Understanding the Domain
The crux of Domain-Driven Design lies in comprehending the domain itself—the problem space for which the software is being developed. This understanding involves close collaboration between domain experts and software developers, fostering a shared language and knowledge exchange. This collaboration allows for the extraction of domain models that mirror the real-world complexities, rules, and relationships, paving the way for accurate representation within the software.

The Microservices Revolution

Microservices architecture, on the other hand, champions the decomposition of monolithic applications into smaller, independently deployable services. Each service, focused on specific business capabilities, communicates through APIs, promoting flexibility, scalability, and resilience. This architectural style has redefined how we build and scale modern applications.

Ubiquitous Language: Bridging the Gap
One of the foundational concepts in DDD is the development of a ubiquitous language. This shared vocabulary acts as a bridge between technical and non-technical stakeholders, ensuring that everyone involved in the project speaks the same language. By using terms and concepts from the domain in both code and conversations, the ubiquitous language facilitates clearer communication and a deeper understanding of the software’s purpose.
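
As a small illustrative sketch (the ServiceTicket class below is hypothetical, echoing the ticket example from the event-sourcing section), the ubiquitous language shows up directly in code when method names mirror the experts' vocabulary instead of generic CRUD verbs:

public class ServiceTicket
{
    public string Status { get; private set; } = "opened";

    // Method names mirror how domain experts talk about tickets, not generic setters.
    public void Assign(string team) => Status = "assigned";
    public void StartWork() => Status = "in-progress";
    public void Close() => Status = "closed";
}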

Bounded Contexts: Focused and Cohesive Components – Microservices
In complex domains, creating a single, all-encompassing model can be overwhelming and counterproductive. DDD introduces the concept of bounded contexts—explicitly defined boundaries within which a particular model and its language hold significance. Bounded contexts enable teams to manage complexity by breaking a large domain down into smaller, more manageable pieces, with a microservice developed within each context.

Strategic Design: Aligning Business Goals with Technical Implementation
Aligning the software architecture with business goals is crucial for success. Strategic design in DDD emphasizes mapping out the core domain and supporting subdomains while identifying how they interact and integrating them effectively. This alignment ensures that the software solution not only meets immediate needs but also remains flexible enough to adapt to future changes and expansions.

Tactical Patterns: Implementing the Domain Model
Once the domain model is defined, DDD offers tactical patterns to implement it effectively. Patterns like Aggregates, Entities, Value Objects, and Repositories guide developers in structuring code that reflects the domain’s intricacies, maintaining consistency and enforcing domain-specific rules within the software.

Entities

Entities are objects within a bounded context that have a persistent identity. They are distinct individuals that carry both data and behavior.

Value objects

Value objects are defined by their attributes and do not exist independently; an address is a typical example. Large and complex systems contain a great many entities and value objects.

Aggregates

The domain model requires a certain level of coherence, so related entities and value objects are grouped logically into manageable clusters. These groupings are called aggregates, and each aggregate is accessed through a single root entity.

Domain Service

A domain service is a stateless service used to implement business logic that doesn’t naturally belong to a single entity or value object. The application service is a further layer, devoid of business logic, that sits above the domain model and serves to orchestrate application activities.

Domain events

Domain events let other services know when something significant happens in the domain, typically by pushing updates to a message broker.

Repositories

Repositories provide persistent storage for aggregates, abstracting how they are loaded and saved.
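
To make these building blocks concrete, here is a small C# sketch (the Order and Address names are invented for illustration, not taken from any particular system): a value object, a domain event, an entity, an aggregate root that enforces an invariant, and a repository interface.

using System;
using System.Collections.Generic;

// Value object: defined entirely by its attributes; two addresses with the same fields are interchangeable.
public record Address(string Street, string City, string PostalCode);

// Domain event: announces that something significant happened inside the aggregate.
public record OrderPlaced(Guid OrderId, DateTime PlacedAt);

// Entity: has a persistent identity that outlives changes to its attributes.
public class OrderLine
{
    public Guid Id { get; } = Guid.NewGuid();
    public string Product { get; }
    public int Quantity { get; private set; }
    public OrderLine(string product, int quantity) { Product = product; Quantity = quantity; }
}

// Aggregate root: the single entry point that keeps the whole cluster consistent.
public class Order
{
    private readonly List<OrderLine> _lines = new();
    public Guid Id { get; } = Guid.NewGuid();
    public Address ShippingAddress { get; private set; }
    public IReadOnlyList<OrderLine> Lines => _lines;

    public Order(Address shippingAddress) => ShippingAddress = shippingAddress;

    public void AddLine(string product, int quantity)
    {
        if (quantity <= 0) throw new ArgumentException("Quantity must be positive."); // domain rule enforced by the root
        _lines.Add(new OrderLine(product, quantity));
    }
}

// Repository: abstracts persistent storage for the aggregate.
public interface IOrderRepository
{
    Order? GetById(Guid id);
    void Save(Order order);
}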

Continuous Refinement and Evolution
Domain-Driven Design isn’t a one-time process; it’s an ongoing journey of refinement and evolution. As the understanding of the domain deepens or business requirements change, the models and code evolve accordingly. This iterative approach allows software systems to stay aligned with the evolving needs of the domain, ensuring long-term relevance and value.

Embracing Domain-Driven Design: A Paradigm Shift
Embracing Domain-Driven Design requires a paradigm shift in how software is approached. It’s not just about writing code but about understanding the problem space deeply and reflecting that understanding in the software’s architecture and design. By fostering collaboration, embracing complexity, and aligning technical solutions with real-world needs, DDD empowers teams to build software that truly resonates with its intended purpose.

In conclusion, Domain-Driven Design isn’t merely a methodology; it’s a mindset—a way of thinking that places the domain at the heart of software development. By embracing DDD principles, software teams can go beyond building applications and instead craft purposeful solutions that truly make an impact.

Helm charts for Kubernetes

Helm is a package manager for Kubernetes. Helm deploys charts, which are packages of pre-configured Kubernetes resources; think of Helm as apt/yum/Homebrew for Kubernetes.

Benefits of Helm
  • Find and use popular software packaged as Kubernetes charts.
  • Share applications as Kubernetes charts.
  • Create reproducible builds for Kubernetes applications.
  • Intelligently manage Kubernetes object definitions.
  • Manage releases of Helm packages.
Installing Helm

Windows

choco install kubernetes-helm

macOS

brew install helm

Debian/Ubuntu

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Please refer to the official Helm documentation for other installation methods.

Generate sample chart

The best way to create a Helm chart is by running the helm create command; run it in the directory of your choice.

       helm create azsrini (azsrini is name of my chart, use name of your choice)

Helm will create a new directory "azsrini" with the file hierarchy shown below.

There are four main components in a Helm chart:

  • Chart.yaml
  • Values.yaml
  • templates directory
  • charts directory for other charts

Chart.yaml contains required information describing what the chart is about; a name and version are the minimum, but more information can be included.


The Values.yaml file is where you define values to be injected into and interpreted by the templates. Values are represented as key-value pairs and can be nested. There can be multiple values files for different environments or namespaces (for example, customvalues.yaml), and such a file can be passed during install like helm install -f customvalues.yaml ./azsrini


The templates directory is where Helm finds the YAML definitions for your Services, Deployments, and other Kubernetes objects. Helm leverages Go templating when conditional logic needs to be included in the templates.


The charts directory for other charts is where dependencies, or subcharts, are placed.

Chart Dependencies

In Helm, one chart can depend on other charts. These dependencies can be dynamically linked using the dependencies field in Chart.yaml, or brought into the charts/ directory and managed manually.

I have added a sample dependency from the Azure Helm samples; running helm dependency update will add the dependency to the charts folder, as shown below.

You can also add subcharts by going into the charts folder and running helm create sub-chart; this creates an entire chart hierarchy inside the charts folder.



Deploying Charts

The sample chart created above has two pods defined (including the dependency) and can be deployed to Kubernetes by running the helm install command. I’m installing the sample chart on an Azure Kubernetes cluster; this installs the templates and also the dependencies.

helm install {name of deployment} {chartlocation}
ex: helm install azsrini-helm ./azsrini

Pods and Services deployed using Helm

Debugging Helm Templates

Debugging templates can be tricky because the rendered templates are sent to the Kubernetes API server, which may reject the YAML files for reasons other than formatting. A few commands can help with debugging.

  • helm lint to verify the chart follows best practices
  • helm install --dry-run --debug or helm template --debug to view the rendered YAML templates before they are deployed.
  • helm get manifest to view templates that are installed on the server.
Upgrade Helm package

When you have a newer version of a chart, the Helm deployment can be upgraded by running the helm upgrade command. Helm upgrade takes the existing release and upgrades it to the information provided. I updated version to 0.1.1 and appVersion to 1.16.1 in Chart.yaml and upgraded my deployment; helm list now shows the newer chart version and app version.

Rollback Helm deployment

A Helm deployment can easily be rolled back to a previous version by running helm rollback. You can use the helm history command to view all previous revisions and roll back to the desired one.

Helm package

We have been working with unpacked charts so far. If you want to make a chart public or deploy it to a repository, it needs to be packaged into a tar file; all publicly available charts are tar files. Charts can be packaged using the helm package command, which creates a tar file in the working directory, named from the metadata in Chart.yaml as {name}-{version}.tgz. Helm packages can be installed using the same helm install command.

Helm Repository

A Helm repository is a place where Helm packages are stored and shared; it can be any HTTP server that can serve YAML and tar files. Check out Artifact Hub for community-distributed charts. There are several options for hosting chart repositories, such as ChartMuseum (open source), GitHub Pages, JFrog Artifactory, a GCS bucket, an S3 bucket, or Azure storage.

Conclusion

I haven’t discussed injecting values, template functions, control flow, or scopes in depth; I highly recommend referring to the official Helm documentation and the Go template functions at https://godoc.org/text/template.

Useful links

I found starter tutorials at https://jfrog.com/blog/10-helm-tutorials-to-start-your-kubernetes-journey/ very helpful.
Helm official documentation: https://helm.sh/
Helm commands: https://helm.sh/docs/helm/helm/


Azure Kubernetes Service (AKS) Network plugins

Kubernetes networking enables you to configure communication within your k8s network. It is based on a flat network structure, which eliminates the need to map ports between hosts and containers. Let’s discuss the different network plugins available in AKS.

An AKS cluster can be deployed using one of the following network plugins:

  • Kubenet (Basic) networking
  • Azure Container Networking Interface (CNI) networking

Kubenet (basic) networking

Kubenet is a very simple network plugin; it does not implement advanced features like cross-node networking or network policy. The kubenet option is the default network plugin for AKS. Only the nodes receive an IP address in the virtual network subnet; pods can’t communicate directly with each other, so User Defined Routing (UDR) and IP forwarding are used for connectivity between pods across nodes. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but the pods do not:


The Azure portal does not provide an option to select a VNet when using kubenet, but the cluster can be deployed into an existing subnet using Terraform or ARM templates.

Terraform sample

# import existing subnet
data "azurerm_subnet" "subnet-aks" {
    name = "azsrini-AKS-SN"
    virtual_network_name = "azsrini-aks-vnet"
    resource_group_name = "azsrini-aks"
}

# create log analytics workspace
resource "azurerm_log_analytics_workspace" "azsrini-law" {
    name                = "azsrini-law"
    location            = "eastus"
    resource_group_name = "azsrini-aks"
    sku                 = "Standard"
}


# create cluster.
resource "azurerm_kubernetes_cluster" "azsrini-k8s" {
    name                = "azsrini-AKSCL"
    location            = "eastus"
    resource_group_name = "azsrini-aks"
    kubernetes_version  = "1.20.9"
    tags                = { sample = "Test" }
    sku_tier            = "Free"
    dns_prefix          = "azsrini-aks"
    private_cluster_enabled = true

    linux_profile {
        admin_username = "azsriniaksuser"

        ssh_key {
            key_data = "ssh-rsa AAAAB………………………………"
        }
    }

    identity {
         type = "SystemAssigned"
    }

    role_based_access_control {
        enabled = true
    } 

      default_node_pool {
        name            = "default"
        node_count      = 1
        vm_size         = "Standard_D4s_v3"
        os_disk_size_gb = 128
        vnet_subnet_id  = data.azurerm_subnet.subnet-aks.id
        node_labels      = {}
        availability_zones = ["1", "2", "3"]
        max_pods = 150
        enable_auto_scaling  = true
        max_count = 5
        min_count = 1
    }

    addon_profile {
        oms_agent {
        enabled                    = true
        log_analytics_workspace_id = azurerm_log_analytics_workspace.azsrini-law.id
        }
    }

    network_profile {
        load_balancer_sku = "Standard"
        network_plugin = "kubenet"
        outbound_type = "loadBalancer"
    }  
}

Azure supports a maximum of 400 routes in a UDR, so you can’t have a kubenet AKS cluster larger than 400 nodes.

Azure CNI (advanced) networking

When using Azure CNI every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front.

IP Exhaustion and planning with Azure CNI

We are facing an IP exhaustion issue in our non-prod environment, so plan carefully ahead of time for Azure CNI networking. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster.

IP addresses for the pods and the cluster’s nodes are assigned from the specified subnet within the virtual network. Each node is configured with a primary IP address. By default, 30 additional IP addresses are pre-configured by Azure CNI that are assigned to pods scheduled on the node. When you scale out your cluster, each node is similarly configured with IP addresses from the subnet.
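
As a rough sizing example based on that guidance: a cluster planned to scale to 50 nodes with the default 30 pods per node needs roughly (50 + 1) + ((50 + 1) × 30) = 1,581 IP addresses (the extra node leaves headroom for upgrade operations), so a subnet of at least /21 (2,048 addresses) would be a reasonable starting point.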

Compare network models

Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:

  • kubenet
    • Conserves IP address space.
    • Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
    • You manually manage and maintain user-defined routes (UDRs).
    • Maximum of 400 nodes per cluster.
  • Azure CNI
    • Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
    • Requires more IP address space.

Conclusion

Both network models have advantages and disadvantages. Kubenet is very basic, whereas CNI is advanced and forces you to plan networking ahead of time, which might save a lot of trouble later. If you do not have a shortage of IP addresses, I would recommend going with Azure CNI.

Helpful links: https://docs.microsoft.com/en-us/azure/aks/configure-kubenet,
https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni
https://mehighlow.medium.com/aks-kubenet-vs-azure-cni-363298dd53bf


Azure Kubernetes cluster across Availability Zones

Availability Zones are a high-availability offering that protects applications and data from datacenter failures. Availability Zones are unique physical locations within an Azure region; each zone is equipped with independent power, cooling, and networking, and there is a minimum of three separate zones in all enabled regions.

AKS cluster nodes can be deployed across multiple zones within an Azure region for high availability and business continuity.

Limitations

  • Availability zones can be defined only when creating the cluster or node pool.
  • Availability zone settings can’t be updated after the cluster is created.
  • The node size (SKU) selected must be available across all selected availability zones.
  • The Azure Standard Load Balancer is required for clusters with availability zones enabled.
Create an AKS cluster across availability zones

When creating a cluster using the az aks create command, the --zones parameter defines which zones the agent nodes are deployed into; etcd and the cluster APIs are spread across the available zones in the region during creation. Run the command below in the Azure CLI to create 3 nodes in 3 different zones in the eastus2 region, since my resource group (aks-az-rg) is in eastus2.

az aks create --resource-group aks-az-rg --name AKS-AZ-Cluster --generate-ssh-keys --vm-set-type VirtualMachineScaleSets --load-balancer-sku standard --node-count 3 --zones 1 2 3

Verify Node Distribution

Run az aks get-credentials to set up kubeconfig, then verify the node details using the kubectl describe command in Bash; you can also verify this in the Azure portal.

az aks get-credentials --resource-group aks-az-rg --name AKS-AZ-Cluster

kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"

The three nodes are distributed across three zones: eastus2-1, eastus2-2, and eastus2-3. If you scale the node pool, the Azure platform automatically distributes the new nodes across zones.

az aks scale --resource-group aks-az-rg --name AKS-AZ-Cluster --node-count 5
Verify pod distribution

Let’s deploy an image with three replicas and verify how the pods are distributed. Run the commands below in the Azure CLI to deploy NGINX with three replicas.

kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine

kubectl scale deployment nginx --replicas=3

By viewing the nodes where the pods are running, you can see that the pods are spread across nodes in three different availability zones. Run the command below in Bash to verify which node each pod is running on.

kubectl describe pod | grep -e "^Name:" -e "^Node:"

The first pod is running on node 3, which is located in the availability zone eastus2-1. The second pod is running on node 1 which corresponds to eastus2-2, and the third one in node 2, in eastus2-3. Without any additional configuration, Kubernetes is spreading the pods correctly across all three availability zones.

Conclusion

If you are running applications with a higher SLA on AKS, take advantage of high availability with Azure Availability Zones as part of your comprehensive business continuity and disaster recovery strategy, with built-in security and a flexible, high-performance architecture.

Helpful Links: https://docs.microsoft.com/en-us/azure/aks/availability-zones


Connect to Azure SQL Managed instance By Using Azure AD Authentication

Azure Active Directory authentication is a mechanism for connecting to Microsoft Azure SQL Database and other resources by using identities in Azure Active Directory (Azure AD). SQL password authentication is not very secure; Azure AD login is more secure and provides password rotation and MFA.

Azure Active Directory authentication uses contained database users to authenticate identities at the database level; that is, the user does not need access to the master database, and access is assigned at the individual database level. The Azure AD identity can be either an individual user account or a group. In this post, I will walk you through granting contained database access to an AD group.

Create an Active Directory group in Azure AD

  1. Create an Active Directory group for DB access in Azure AD; multiple groups can be created to isolate database access or to separate read and write permissions.
  2. Add users to AD group.

Create Contained database users

  1. Connect to the Azure database using SQL Server Management Studio. To create contained users/groups you need to log in with the AD credential/group that is set up as admin on the database server.
  2. In my case, the admin is an Active Directory user who is also the SQL admin.


  3. Run the following command to create contained DB users
CREATE USER Azure_AD_principal_name FROM EXTERNAL PROVIDER;


Example: CREATE USER [azsrini-DB-RW] FROM EXTERNAL PROVIDER;

Azure_AD_principal_name can be the user principal name of an Azure AD user or the display name for an Azure AD group

When you create a database user, that user receives the CONNECT permission and can connect to that database as a member of the PUBLIC role. Once you provision an Azure AD-based contained database user, you can grant the user additional permissions; here, azsrini-DB-RW is granted CONTROL permission.

Connect to Database

Active Directory Integrated authentication

To connect to a database using integrated authentication and an Azure AD identity, the Authentication keyword in the database connection string must be set to Active Directory Integrated, and the client application must be running on a domain-joined machine.

string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory Integrated;";
SqlConnection conn = new SqlConnection(ConnectionString);
conn.Open();

Active Directory principal name and password

To connect to a database using an AD principal name and password, the Authentication keyword must be set to Active Directory Password, and the connection string must contain the User ID/UID and Password/PWD keywords and values.

string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory Password; UID=azsrini@azsrini.onmicrosoft.com; PWD=password123";

Connect using SQL Management Studio

To connect from SQL Server Management Studio, go to Options, enter the database name, and then connect using Active Directory username/password authentication.

Useful Links:

https://github.com/toddkitta/azure-content/blob/master/articles/sql-database/sql-database-aad-authentication.md

https://docs.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage

Conclusion:

AD authentication is a secure way to connect to the database. AD takes care of password rotation and MFA, and it also eliminates storing passwords in the client applications that connect.

Thank you
Srinivasa Avanigadda


Azure SQL Server auto-failover groups

Auto-failover groups can be used for business continuity to switch databases to a secondary location automatically in case of a primary failure. A failover group lets you manage the replication and failover of a group of databases on a server, or of all databases in a managed instance, to another region.

Failover groups are built on top of geo-replication and provide automatic failover for your applications; this is achieved through listeners that Azure creates.

In addition, auto-failover groups provide read-write and read-only listener endpoints that remain unchanged during failovers. Whether you use manual or automatic failover activation, failover switches all secondary databases in the group to primary. After the database failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region.
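
For example, here is a minimal sketch for a SQL Database failover group (assuming an illustrative group name azsrini-fog and database appdb; managed instance listeners follow a slightly different DNS format): the application keeps pointing at the listener endpoints rather than the underlying servers, so its connection strings survive a failover unchanged.

// Read-write listener: always resolves to the current primary, even after a failover.
string readWriteConnection = "Server=tcp:azsrini-fog.database.windows.net,1433;Initial Catalog=appdb;";

// Read-only listener: routes to the secondary replica when ApplicationIntent is ReadOnly.
string readOnlyConnection = "Server=tcp:azsrini-fog.secondary.database.windows.net,1433;Initial Catalog=appdb;ApplicationIntent=ReadOnly;";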

Best practices for SQL Database

The auto-failover group must be configured on the primary server, and it connects the primary to the secondary server in a different Azure region. The group can include all or some of the databases on these servers. The following diagram illustrates a typical configuration of a geo-redundant cloud application using multiple databases and an auto-failover group.

Let’s Create Failover Group

  1. Go to an existing managed SQL Server instance or create a new one on Azure.
  2. Click Failover groups under “Data management”


  3. Click on ‘Add Group’


  4. Enter a failover group name, then select the secondary server or create a new one if a secondary doesn’t exist.
  5. For the failover policy, select "Automatic"; this can be changed later if needed.
  6. The read/write grace period tells the system to wait for a specific period of time before initiating a failover; the default is 1 hour.
  7. Click “create” to Add Failover group.

Test Failover (Manual)

To test a manual failover, go to the failover group and click ‘Failover’; this will initiate the switch of roles between the primary and secondary. There is also "Forced Failover", which initiates the failover immediately without waiting for the grace period.

Useful Links:

https://docs.microsoft.com/en-us/azure/azure-sql/database/auto-failover-group-overview

Conclusion:

To achieve business continuity, adding database redundancy is only part of the solution; recovering an application end-to-end after a catastrophic failure requires recovery of all the components that constitute the service.

Thank you
Srinivasa Avanigadda



Inject Azure Key vault secrets into Azure DevOps Pipeline.

Azure Key Vault helps you securely store and manage sensitive information such as keys, passwords, and certificates, which prevents exposing confidential information in source code. When working with Azure DevOps, you might need to use sensitive information like service principals or API keys; you can integrate a pipeline with Key Vault in a few steps and read secrets securely without configuring them in the build pipeline.

There are two ways to retrieve secrets from Azure Key Vault into pipelines:

  1. Pipeline task – secrets are available within the pipeline only.
  2. Variable groups – secrets are available across all the pipelines.

In this post, we will look at retrieving secrets using the pipeline task; we will cover variable groups in another post.

Let’s get started.

I already have a Key Vault created and have added a couple of sample keys to use from within the pipeline; please refer to the Microsoft documentation for Key Vault creation.

Create Service Connection in Azure DevOps organization.

Service connections enable you to connect to external and remote services to execute tasks in a job: for example, to a Microsoft Azure subscription, to a different build server or file server, to an online continuous integration environment, or to services you install on remote computers.

  1. Go to Project Settings in Azure DevOps Organization.
  2. Click Service Connections and add a new service connection. Fill in the parameters for the service connection; the list of parameters differs for each type of service connection.


  3. Check the Grant access permission to all pipelines option to allow YAML pipelines to use this service connection.
  4. Choose Save to create the connection.

Add Key Vault task to pipeline
  1. Go to your project in Azure DevOps and create a new YAML pipeline or select an existing one.
  2. Select Show assistant to expand the assistant panel. This panel provides a convenient and searchable list of pipeline tasks.


  3. Search for vault and select the Azure Key Vault task.
  4. Select and authorize the Azure subscription you used to create your Azure key vault earlier. Select the key vault and select Add to insert the task at the end of the pipeline. This task allows the pipeline to connect to your Azure Key Vault and retrieve secrets to use as pipeline variables.

  5. SecretsFilter ‘*’ retrieves all secrets; you can also provide a comma-separated list to get specific secrets.
  6. Key Vault values can be referenced in the pipeline by using the syntax $(secretname).
  7. I’m writing the value to a text file and publishing it for testing.
  8. Save the pipeline (don’t run it yet).
Set up Azure Key Vault access policies
  1. Go to Key Vault you want to integrate with Pipeline.
  2. Under Settings Select Access policies.
  3. Select Add Access Policy to add a new policy.
  4. For Secret permissions, select Get and List.
  5. Select the option to select a principal and search for yours.
    A security principal is an object that represents a user, group, service, or application that’s requesting access to Azure resources. Azure assigns a unique object ID to every security principal. The default naming convention is [Azure DevOps account name]-[Azure DevOps project name]-[subscription ID] 

  6. Select Add to create the access policy.
  7. Select Save.
Run and test Secrets retrieval in Pipeline.
  1. Run the pipeline that was created earlier.


  2. Return to pipeline summary and select the published artifact.
  3. Under Job select the secret.txt file to view it.


  4. The text file contains our secret: ClientId

Conclusion:
Secrets and passwords should never be exposed in source code or pipelines. Key Vault can be directly integrated with App Services, Function Apps, and pipelines to retrieve secrets securely.


Festive Tech Calendar 2020: Azure Well Architected Framework

Festive Tech Calendar is a great community event, hosted by Gregor Suttie (Azure Greg) and Richard Hooper (Pixel Robots) every year. It’s all about sharing knowledge and helping people to learn new things. There were multiple sessions throughout the month of December 2020, and I’m very happy I was able to contribute as well.

This post is a little delayed. I was very happy that my session about the Azure Well-Architected Framework was accepted, and delighted to see myself among many MVPs; this was my first ever talk at a global event and it was so exciting 🙂

You can watch my session here.

Check out all contributions at https://festivetechcalendar.com/ or Festive Tech YouTube Channel

Useful links
https://docs.microsoft.com/en-us/azure/architecture/framework/


Logic Apps SQL Connector to detect record insertion and trigger workflow

Azure Logic Apps offer hundreds of connectors to provide quick access from Logic apps to events, data and actions across other apps, services, systems, protocols, and platforms.

A Logic App workflow can be triggered when a record is added or modified, using the SQL connector (this is not an old-school SQL trigger). Let’s see how this can be implemented; I’m copying rows to another table in the same database, but you get the idea.

Let’s get Started...

1) Go to the Azure portal, create a new Azure Logic App, and open the Logic App Designer.

2) In the search box, enter "SQL" as your filter and pick the When an item is created (V2) trigger.

3) Enter the SQL connection details, then select the database and table to watch for new record insertions.

4) The Logic App provides an option to select how frequently to check for changes; I chose the default.


5) Add the next step to the Logic App flow: select SQL Server and choose the Insert Row (V2) action.

6) In the insert row step, add the SQL connection, then enter the database and the destination table where data will be inserted.

7) Click ‘add new parameter’ and select the columns you want to insert into the destination table.


8) Go through each column and select the corresponding column value from dynamic content (dynamic content populates all the values from the trigger output; in our case, the inserted row data).

9) Save the Logic App workflow.

Our Logic App is ready: when a new record is inserted into the Users table, the same record gets inserted into the Users2 table as well via the Logic App workflow. Let’s test the Logic App trigger and action.

Insert a sample record into the Users table that the Logic App is monitoring.

Since my Logic App is checking the Users table for changes every 3 minutes, the workflow gets triggered within 3 minutes.

The record is inserted into the Users2 table.

Conclusion:
Logic Apps provides a number of connectors out of the box for different types of triggers and actions; check the Microsoft documentation for more details about connectors.
