Articles | July 26, 2024

Is a serverless Azure SQL Database cost-effective? Auto-scaling Azure SQL DB

How can you easily scale up the Azure SQL Database and reduce costs? Using the portal, you can meet all the performance requirements and save money. Learn how!


In the world of data management, the cost of maintaining a database can be a significant concern for many businesses. The Serverless Azure SQL Database has emerged as a potential solution, promising a cost-effective approach to database management without compromising on performance or scalability. 

How can the Azure SQL Database be used to its full potential? How can you easily scale it up and reduce costs? By using the Azure portal, you can meet all your performance requirements and save money at the same time. Read on to learn how!

The Serverless Azure SQL Database: key features

Serverless Azure SQL Database offers a multitude of features and benefits, including automatic pause and resume capabilities, built-in high availability, and robust security measures. It is designed to simplify performance management and reduce costs, as you only pay for the computing resources you use. 

However, there are also potential drawbacks to consider. The auto-pause feature, while cost-effective, can lead to a delay in database availability, which might not be suitable for applications requiring instant access. Additionally, while the serverless model can handle fluctuating workloads, it might not be the best fit for workloads with consistent high-performance needs, as the scaling process can introduce a slight latency. 
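If you want to try the serverless tier, the database can be provisioned directly from PowerShell. Below is a minimal sketch using the Az.Sql module; the resource names are placeholders and the auto-pause delay is just an example value.

# Create a serverless General Purpose database that pauses after 60 minutes of inactivity
# (all names below are placeholders; -AutoPauseDelayInMinutes -1 would disable auto-pause)
New-AzSqlDatabase `
    -ResourceGroupName "rg-demo" `
    -ServerName "sql-demo-server" `
    -DatabaseName "db-demo" `
    -Edition "GeneralPurpose" `
    -ComputeModel "Serverless" `
    -ComputeGeneration "Gen5" `
    -VCore 2 `
    -MinimumCapacity 0.5 `
    -AutoPauseDelayInMinutes 60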

Choosing the right purchasing model for the Azure SQL Database 

The Azure SQL Database offers a fully managed platform as a service (PaaS) database engine that aligns with your specific performance and budgetary requirements. When selecting a purchasing model for the Azure SQL Database, based on your chosen deployment model, you have the following options: 

  • Virtual core (vCore)-based purchasing model: This model allows you to choose between provisioned or serverless compute tiers: 
    • In the provisioned compute tier, you have the ability to allocate a fixed amount of compute resources dedicated to your workload. 
    • The serverless compute tier offers flexibility in terms of autoscaling compute resources within a set range and includes cost-saving measures by pausing databases during inactivity (billing only for storage) and resuming them when activity picks up. The cost per vCore unit is more economical in the provisioned compute tier compared to the serverless tier. 
  • Database transaction unit (DTU)-based purchasing model: This model offers combined compute and storage packages, which are optimized for typical workloads. 

Find out more about them on the official Microsoft Learn website.
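To make the purchasing models more concrete, here is a hedged PowerShell sketch (Az.Sql module, placeholder names) that moves an existing database between them simply by requesting a different service objective:

# DTU-based model: scale the database to Standard S3 (100 DTUs)
Set-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sql-demo-server" `
    -DatabaseName "db-demo" -Edition "Standard" -RequestedServiceObjectiveName "S3"

# vCore-based model: scale the same database to General Purpose, Gen5, 4 provisioned vCores
Set-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sql-demo-server" `
    -DatabaseName "db-demo" -Edition "GeneralPurpose" -RequestedServiceObjectiveName "GP_Gen5_4"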

Serverless vs provisioned – key differences

When you compare serverless and provisioned compute tiers in the Azure SQL Database, you can find a couple of key differences: 

Serverless compute tier:

  • Autoscaling: Compute resources scale automatically based on workload demands.
  • Automatic pausing and resuming: Databases automatically pause during inactivity, reducing costs as you are billed only for storage, and resume when activity returns.
  • Cost-effective: You pay only for the compute resources you use, which can lead to cost savings during periods of low or no activity, so it is ideal for workloads with intermittent and unpredictable usage patterns, such as development and testing environments.

Provisioned compute tier: 

  • Fixed resources: Compute resources are pre-allocated and dedicated to your workload. 
  • Consistent performance: Ideal for workloads with predictable usage patterns that require consistent performance. 

Reducing the cost of the Azure SQL Database: a real-world example

Let’s take a look together at the following real case from one of my Azure environments. 


You can observe that during business hours, when users are actively using the application, the system requires much more computing power. Another peak can be observed after midnight, when the system calculates aggregates (derived data) based on the transactional data from the previous day. 

First, I tried the serverless compute tier, which (at first glance) fits my application usage perfectly and costs me $568 monthly. 


Then, I started to think about how to reduce the cost of the Azure SQL Database. 

My first idea was to use the provisioned compute tier (2 vCores => $385 monthly), but it does not handle peak business hours… 


The provisioned compute tier does not support auto-scaling, but if it did, what would be the cost impact? 


Based on this model, it would cost me $517 monthly, which comes with $51 in monthly savings (almost a 9% cost reduction). 

I estimate that introducing a pausing mechanism would save an extra $50-60 per month. 

Another observation is that at a lower price you get more available computing power: an average of 1.4 vCPU (serverless) vs 2.7 vCPU (provisioned). In other words, you can do more while paying less. To summarize, the provisioned compute tier or a DTU-based purchasing model may be cost-effective, but the service doesn't support auto-scaling as some of us would expect. However, using Microsoft Azure, we can set up a workflow that auto-scales an Azure SQL Database instance to the next immediate tier when a specific condition is met. 

For example: scale up the database as soon as it goes over 85% CPU usage for a sustained period of 5 minutes, or scale it down when CPU usage stays below 40%. 

By following this tutorial, you will achieve exactly that. 



Step-by-step tutorial: automating Azure SQL Database scaling based on CPU usage

Step #1: Deploy an Azure Automation Account 

The scaling operation will be carried out through a PowerShell runbook within an Azure Automation account. Navigate to the Azure Portal, use the search bar to find Automation, and proceed to set up a new Automation Account. 
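Because the runbooks shown later sign in with Connect-AzAccount -Identity, enable the system-assigned managed identity on the Automation Account and give it permission to scale the target database. A minimal sketch (the role, scope, and IDs below are assumptions/placeholders; any role that allows updating the database will work):

# Grant the Automation Account's system-assigned managed identity the right to scale databases.
# The ObjectId can be found on the account's Identity blade; scope values are placeholders.
New-AzRoleAssignment `
    -ObjectId "<managed-identity-object-id>" `
    -RoleDefinitionName "SQL DB Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"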


Step #2: Create the PowerShell runbook(s) 

With our Automation Account now in place, we are ready to proceed with the scripting process. Create a new runbook and paste the following code into it: 


The script provided utilizes the Webhook data transmitted from the alert. This data is packed with valuable insights regarding the resource that triggers the alert, enabling the script to autonomously scale any database without requiring parameters. It simply needs to be activated by an alert that employs the Common Alert Schema on an Azure SQL database. 
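Before the full runbook below, here is a quick illustration of that parsing. The first entry in alertTargetIds is a standard Azure resource ID, and splitting it on "/" yields the segments the script reads by index (the sample ID is a placeholder):

# Sample (placeholder) target ID as delivered in the Common Alert Schema payload
$sampleId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-demo/providers/Microsoft.Sql/servers/sql-demo-server/databases/db-demo"
$parts = $sampleId.Split("/")
$parts[2]    # subscription ID
$parts[4]    # resource group name  -> rg-demo
$parts[6] + "/" + $parts[7]         # resource type -> Microsoft.Sql/servers
$parts[8]    # logical server name  -> sql-demo-server
$parts[-1]   # database name        -> db-demo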

param
(
    [Parameter (Mandatory = $false)]
    [object] $WebhookData
)

# If webhook data is received from an Azure Alert, proceed to execute the workflow.
if ($WebhookData) {
    # Get the data object from WebhookData
    $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)

    # Get the info needed to identify the SQL database (depends on the payload schema)
    $schemaId = $WebhookBody.schemaId
    Write-Verbose "schemaId: $schemaId" -Verbose

    if ($schemaId -eq "azureMonitorCommonAlertSchema") {
        # Common Metric Alert schema
        $essentials = [object] ($WebhookBody.data).essentials
        Write-Output $essentials

        # Get the first target only as this script doesn't support multiple targets
        $alertTargetIdArray = (($essentials.alertTargetIds)[0]).Split("/")
        $subscriptionId = ($alertTargetIdArray)[2]
        $resourceGroupName = ($alertTargetIdArray)[4]
        $resourceType = ($alertTargetIdArray)[6] + "/" + ($alertTargetIdArray)[7]
        $serverName = ($alertTargetIdArray)[8]
        $databaseName = ($alertTargetIdArray)[-1]
        $status = $essentials.monitorCondition

        # If the alert that triggered the runbook is Activated or Fired, we want to autoscale the database.
        # When the alert gets resolved, the runbook will be triggered again, but because the status will be Resolved, no autoscaling will happen.
        if (($status -eq "Activated") -or ($status -eq "Fired")) {
            try {
                "Logging in to Azure using managed identity for automation account..."
                Connect-AzAccount -Identity
            }
            catch {
                Write-Error -Message $_.Exception
                throw $_.Exception
            }

            # Get the current database details, from which we capture the edition and the current service objective.
            # With this information, the if/else below determines the next tier the database should be scaled to.
            # Example: if a DTU database is S6, this script will scale it to S7. This ensures the script keeps scaling up the DB in case CPU stays pegged at 100%.
            $currentDatabaseDetails = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName
            $edition = $currentDatabaseDetails.Edition

            if ($edition -in @("Basic", "Standard", "Premium")) {
                Write-Output "Database is DTU model."

                $dtuTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "DTU" } | Select-Object -ExpandProperty ServiceObjectiveName
                $maxDtuTier = ($dtuTiers | Select-Object -Last 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $maxDtuTier) {
                    Write-Output "DTU database is already at the highest tier ($maxDtuTier). Suggestion is to move to the Business Critical vCore model with 32+ vCores."
                }
                else {
                    for ($i = 0; $i -lt $dtuTiers.Length; $i++) {
                        if ($dtuTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $dtuTiers[$i + 1]

                            Write-Output "Scaling up database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            elseif ($edition -in @("GeneralPurpose", "BusinessCritical", "Hyperscale")) {
                Write-Output "Database is vCore model."

                $vCoreTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "VCores" -and $_.Family -eq $currentDatabaseDetails.Family -and $_.SkuName -eq $currentDatabaseDetails.SkuName } | Select-Object -ExpandProperty ServiceObjectiveName
                $maxvCoreTier = ($vCoreTiers | Select-Object -Last 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $maxvCoreTier) {
                    Write-Output "vCore database is already at the highest tier ($maxvCoreTier)."
                }
                else {
                    for ($i = 0; $i -lt $vCoreTiers.Length; $i++) {
                        if ($vCoreTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $vCoreTiers[$i + 1]

                            Write-Output "Scaling up database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            else {
                Write-Error "The database edition '$edition' is not supported."
            }

            # All done, closing the alert automatically
            $alert = [object] ($WebhookBody.data).essentials.alertId
            $pos = $alert.lastIndexOf("/")
            $alertId = $alert.Substring($pos + 1, 36)
            Write-Output "Closing alert $alertId"
            Update-AzAlertState -AlertId $alertId -State "Closed" -Comment "Required action was executed automatically by the autoscaleupsqldb-rb runbook. No further action required, hence closing this alert."
        }
        else {
            Write-Error "The alert status - $status - is not in an expected state."
        }
    }
    else {
        Write-Error "The alert data schema '$schemaId' is not supported."
    }
}
else {
    Write-Error -Message "Webhook data - $WebhookData - is not in the expected format."
}

 

And one more for scaling down… 

param
(
    [Parameter (Mandatory = $false)]
    [object] $WebhookData
)

# If webhook data is received from an Azure Alert, proceed to execute the workflow.
if ($WebhookData) {
    # Get the data object from WebhookData
    $WebhookBody = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)

    # Get the info needed to identify the SQL database (depends on the payload schema)
    $schemaId = $WebhookBody.schemaId
    Write-Verbose "schemaId: $schemaId" -Verbose

    if ($schemaId -eq "azureMonitorCommonAlertSchema") {
        # Common Metric Alert schema
        $essentials = [object] ($WebhookBody.data).essentials
        Write-Output $essentials

        # Get the first target only as this script doesn't support multiple targets
        $alertTargetIdArray = (($essentials.alertTargetIds)[0]).Split("/")
        $resourceGroupName = ($alertTargetIdArray)[4]
        $serverName = ($alertTargetIdArray)[8]
        $databaseName = ($alertTargetIdArray)[-1]
        $status = $essentials.monitorCondition

        # If the alert that triggered the runbook is Activated or Fired, we want to scale the database down.
        # When the alert gets resolved, the runbook will be triggered again, but because the status will be Resolved, no scaling will happen.
        if (($status -eq "Activated") -or ($status -eq "Fired")) {
            try {
                "Logging in to Azure using managed identity for automation account..."
                Connect-AzAccount -Identity
            }
            catch {
                Write-Error -Message $_.Exception
                throw $_.Exception
            }

            # Get the current database details, from which we capture the edition and the current service objective.
            # With this information, the if/else below determines the previous tier the database should be scaled down to.
            # Example: if a DTU database is S7, this script will scale it down to S6. This ensures the script keeps scaling down the DB while CPU stays low.
            $currentDatabaseDetails = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName
            $edition = $currentDatabaseDetails.Edition

            if ($edition -in @("Basic", "Standard", "Premium")) {
                Write-Output "Database is DTU model."

                $dtuTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "DTU" } | Select-Object -ExpandProperty ServiceObjectiveName
                $minDtuTier = ($dtuTiers | Select-Object -First 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $minDtuTier) {
                    Write-Output "DTU database is already at the lowest tier ($minDtuTier)."
                }
                else {
                    for ($i = ($dtuTiers.Length - 1); $i -gt 0; $i--) {
                        if ($dtuTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $dtuTiers[$i - 1]

                            Write-Output "Scaling down database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            elseif ($edition -in @("GeneralPurpose", "BusinessCritical", "Hyperscale")) {
                Write-Output "Database is vCore model."

                $vCoreTiers = (Get-AzSqlServerServiceObjective -Location $currentDatabaseDetails.Location) | Where-Object { $_.Enabled -eq $true -and $_.Edition -eq $edition -and $_.CapacityUnit -eq "VCores" -and $_.Family -eq $currentDatabaseDetails.Family -and $_.SkuName -eq $currentDatabaseDetails.SkuName } | Select-Object -ExpandProperty ServiceObjectiveName
                $minvCoreTier = ($vCoreTiers | Select-Object -First 1)

                if ($currentDatabaseDetails.CurrentServiceObjectiveName -eq $minvCoreTier) {
                    Write-Output "vCore database is already at the lowest tier ($minvCoreTier)."
                }
                else {
                    for ($i = ($vCoreTiers.Length - 1); $i -gt 0; $i--) {
                        if ($vCoreTiers[$i] -eq $currentDatabaseDetails.CurrentServiceObjectiveName) {
                            $targetServiceObjectiveName = $vCoreTiers[$i - 1]

                            Write-Output "Scaling down database $databaseName to $targetServiceObjectiveName"
                            Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -DatabaseName $databaseName -ServerName $serverName -RequestedServiceObjectiveName $targetServiceObjectiveName
                            break
                        }
                    }
                }
            }
            else {
                Write-Error "The database edition '$edition' is not supported."
            }

            # All done, closing the alert automatically
            $alert = [object] ($WebhookBody.data).essentials.alertId
            $pos = $alert.lastIndexOf("/")
            $alertId = $alert.Substring($pos + 1, 36)
            Write-Output "Closing alert $alertId"
            Update-AzAlertState -AlertId $alertId -State "Closed" -Comment "Required action was executed automatically by the scale-down runbook. No further action required, hence closing this alert."
        }
        else {
            Write-Error "The alert status - $status - is not in an expected state."
        }
    }
    else {
        Write-Error "The alert data schema '$schemaId' is not supported."
    }
}
else {
    Write-Error -Message "Webhook data - $WebhookData - is not in the expected format."
}
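If you prefer scripting over the portal, both runbooks can be imported and published with the Az.Automation module. A hedged sketch (account, runbook, and file names are placeholders; also make sure the Az.Accounts, Az.Sql, and Az.AlertsManagement modules are available to the Automation Account, since the scripts rely on them):

# Import and publish the scale-up runbook; repeat with the scale-down script and its own name
Import-AzAutomationRunbook `
    -ResourceGroupName "rg-demo" `
    -AutomationAccountName "aa-autoscale" `
    -Name "autoscaleupsqldb-rb" `
    -Type PowerShell `
    -Path ".\autoscaleupsqldb-rb.ps1" `
    -Published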

 

Step #3: Trigger Automation runbook(s) via the Azure Monitor Alert 

Create a new alert rule on your Azure SQL Database: 


The next step will require several different setups: 

  1. Scope of the alert: this will be auto-populated if the New Alert Rule is clicked from within the database itself. 
  2. Condition: when the alert should be triggered, defined by selecting a signal and configuring its logic.
  3. Actions: when the alert gets triggered, what will happen? 
  4. Details: location, name, and other configuration values.

Condition 

For this example, the alert will evaluate CPU consumption every minute, looking back over the last 5 minutes. When the average goes over 85%, the alert will be triggered: 
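For reference, the same condition can also be created from PowerShell with the Az.Monitor module. A hedged sketch (the resource and action group IDs are placeholders; the portal route described here works just as well):

# CPU > 85% on average, evaluated every minute over a 5-minute window
$criteria = New-AzMetricAlertRuleV2Criteria `
    -MetricName "cpu_percent" `
    -TimeAggregation Average `
    -Operator GreaterThan `
    -Threshold 85

Add-AzMetricAlertRuleV2 `
    -Name "sqldb-cpu-high" `
    -ResourceGroupName "rg-demo" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Sql/servers/sql-demo-server/databases/db-demo" `
    -WindowSize (New-TimeSpan -Minutes 5) `
    -Frequency (New-TimeSpan -Minutes 1) `
    -Severity 3 `
    -Condition $criteria `
    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/microsoft.insights/actionGroups/ag-scale-up"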


Actions 

After the signal logic is created, we need to indicate what should happen when the alert fires. We will do this with an action group. When creating a new action group, two tabs will help you configure how the runbook is triggered. 

Basics: 


Actions:


After saving the action group, add the remaining details to the alert.


Create another alert and action group for scaling down, with the condition: CPU percentage is less than 40%.

That’s it! The alerts are now enabled and will auto-scale the database when fired. The runbook will be executed twice per alert: once when the alert fires and again when it resolves, but it will only perform a scale operation when fired.
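Once an alert has fired, you can confirm the result from PowerShell (placeholder names):

# Check the current service objective after a scale event
(Get-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sql-demo-server" -DatabaseName "db-demo").CurrentServiceObjectiveName

# Review the most recent jobs of the scale-up runbook in the Automation Account
Get-AzAutomationJob -ResourceGroupName "rg-demo" -AutomationAccountName "aa-autoscale" -RunbookName "autoscaleupsqldb-rb" |
    Sort-Object StartTime -Descending | Select-Object -First 5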

Summary 

The serverless Azure SQL Database offers a potentially cost-effective solution for database management, balancing performance and scalability with features like automatic pause and resume, high availability, and robust security. Key benefits include cost savings of $50-$60 per month from automatic pausing and flexibility in resource usage. However, it may not be ideal for applications needing instant access or consistent high performance due to potential delays and slight latencies during scaling.

Architect, developer, designer, and manager of a division responsible for delivering projects in the Microsoft technology stack. For almost ten years I have been involved in creating software for the .NET platform for the world's largest companies from different sectors (food and beverage, banking, aerospace). During this time, I participated in the development of desktop, mobile, and web applications, performing various roles. Since 2015 I have specialized in Microsoft Azure solutions, in particular those related to cloud migration and building innovative Azure-native solutions.
