Thursday, September 15, 2016

Azure Deployment Experience - Series 4 - Getting SQL Stats for Azure SQL DB calculation

At my workplace, hosting our web application and database in a VM (with Windows, or Windows + SQL) has been the usual practice. Though we want to move to Azure SQL DB, there are quite a few factors that make us step back, the main one being the simple fear of "which DTU do we actually need?".

After some searching, I came across this blog (Choosing Right Tier for Azure SQL DB), which explained the basics of service tiers, business continuity options and, importantly, the DTU calculator (Azure SQL DB DTU Calculator). It put all the unknown or little-known pieces together.

I am not going into the how-to, as it is clearly explained in the blog. However, I struggled while executing the PowerShell script that collects the performance data on the VM with SQL Server.
I am listing the issues here, so that they are helpful for someone else, or even for me in the future!
  • Install the Windows PowerShell ISE, run it as administrator, and open the .ps1 file (downloaded from the blog).
  • Check if the file needs to be 'Unblocked' to allow it to run, since it is a file downloaded from an external source (a small sketch follows this list).
  • The SQL Server name is the catch in the parameter list for Get-Counter. For a long time, I struggled and could not figure out how to specify the name.
  • If the SQL Server is installed as the "default" instance, then provide the counter as "\SQLServer:Databases(*)\Log Bytes Flushed/sec".
  • If you have SQL installed as a named instance, find out the service name under which the instance is running. This is NOT the SQL instance name. In my case, I had given the instance name as 'Sql2016', but it was running as 'MSSQL$SQL2016'. This can be found easily via either Services.msc or SQL Server Configuration Manager. The counter then has to be specified as "\MSSQL`$SQL2016:Databases(KIDB)\Log Bytes Flushed/sec". Note the backtick (`) before the $, which escapes it inside a double-quoted PowerShell string (see the sketch after this list).
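
For the unblock step, a minimal PowerShell sketch (the script path and file name below are hypothetical; the file downloaded from the blog will have its own name):

    # Remove the "downloaded from the internet" block on the script
    Unblock-File -Path "C:\Scripts\sql-perfmon.ps1"

    # If execution is still blocked by policy, relax it for the current session only
    Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned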
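
And here is a rough sketch of how the counter path differs between a default and a named instance in a Get-Counter call. The database name (KIDB), sample interval, sample count and output path are only illustrative; the script from the blog defines its own counter list and collection logic:

    # Default instance: the performance counter object is simply "SQLServer"
    $counterDefault = "\SQLServer:Databases(*)\Log Bytes Flushed/sec"

    # Named instance 'SQL2016': the object is "MSSQL$SQL2016"; the backtick (`)
    # escapes the $ inside a double-quoted string (a single-quoted string would not need it)
    $counterNamed = "\MSSQL`$SQL2016:Databases(KIDB)\Log Bytes Flushed/sec"

    # Collect one sample per second for an hour and export to CSV for the DTU calculator
    Get-Counter -Counter $counterNamed -SampleInterval 1 -MaxSamples 3600 |
        Export-Counter -Path "C:\PerfLogs\sql-perf.csv" -FileFormat CSV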



Azure Deployment Experience - Series 3 - Bizarre behaviour in ReFS drive

In my previous post, I described a failure in a ReFS drive. In spite of its claim to be resilient ("Re" stands for Resilient), we could not recover from bad or corrupted sectors and had to abandon that drive.

I wonder why the C and D drives of Azure VMs are still NTFS-based and not ReFS. Does it mean Microsoft itself is skeptical about rolling out ReFS for the C and D drives?

Here is yet another problem we faced with a ReFS drive in one of the VMs:
  • Took a Windows OS based VM (D2 series), allocated disks and created two logical drives, F and G, both formatted with ReFS.
  • Hosted the application on the F drive (a web application running on IIS).
  • Modifications to files in the web application folder do not take effect in the application. E.g. an .aspx file is modified with some changes; its date/time stamp gets updated, but the change does not reflect in the running application.
  • Do an app pool recycle (of that web app) and then the change reflects (see the sketch after this list).
  • Hosted the application on the C and D drives (NTFS) and the changes take effect immediately (which is the expected behaviour).
  • Hosted the application on the G drive (ReFS) and hit the same problem as on the F drive.
  • Tried a few other VMs which have a ReFS drive, and there it worked.
  • Created a new logical NTFS drive, repeated the test, and it works (without an app pool recycle).
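
As a quick sanity check and workaround, a minimal PowerShell sketch (the app pool name below is hypothetical; use the pool that hosts the affected web app):

    # Confirm which volumes are ReFS and which are NTFS
    Get-Volume | Select-Object DriveLetter, FileSystem, FileSystemLabel

    # Workaround: recycle the app pool so IIS picks up the changed files
    Import-Module WebAdministration
    Restart-WebAppPool -Name "MyAppPool"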
I guess ReFS is not really resilient or fool-proof, especially in production or other sensitive environments. This "one-off" case has occurred at least twice for us. In the above case, at least we were able to find a workaround.