Change SQL Collation without reinstalling SQL

I ran into an issue where SQL Server was installed with the wrong collation and a lot of user databases were already attached. I could back up the databases, uninstall, reinstall, and restore everything, but that could take all day. There is a better and much faster way to make this change, and this post will walk through it.

First, back up all databases (duh).

Next, we’ll verify the current collation. On this server it’s set to Latin1_General_CI_AS and I want to change it to SQL_Latin1_General_CP1_CI_AS.
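If you'd rather check this with a query than the Server Properties dialog, SERVERPROPERTY exposes it. A quick sketch:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;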

Next, we’ll double-check and make sure we have backups of all databases.

Open SQL Server Configuration Manager and stop all SQL Server services:

Open Command Prompt (as administrator), browse to the SQL Server Binn directory, and run the following command:

sqlservr -m -T4022 -T3659 -q"SQL_Latin1_General_CP1_CI_AS"

  • -m – starts SQL Server in single-user admin mode
  • -T4022 – trace flag that bypasses all automatically executed (startup) stored procedures
  • -T3659 – undocumented trace flag; enables logging of all errors to the error log during startup
  • -q – the new collation to apply

Before hitting Enter, let’s triple-check and make sure those backups exist.

Hit Enter.

SQL will run through its startup routine:

Voila. Recovery is complete.

Close Command Prompt and start the SQL services back up.

Back in SQL Server Management Studio, verify that the collation has changed.
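A quick way to confirm from a query window (a sketch; the second query simply lists each database's collation so you can review those as well):

SELECT SERVERPROPERTY('Collation') AS ServerCollation;

SELECT name, collation_name
FROM sys.databases;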

Create Azure Windows VM

Creating an Azure Windows VM seems pretty easy, but there are a lot of settings to be careful of before hitting the Create button. In this post, we’ll go over some of those so the next (or first) time you create an Azure VM, you’ll be ready!

Here we go….

Log into the Azure portal and click Virtual Machines and Create Virtual Machine:

Azure VM 1

Note: I’m using a free trial.

Under the Basics tab, there are a few options. Let’s start at the top.

Subscription: An Azure subscription is the agreement with Microsoft to use Azure, for which charges accrue. You can create multiple subscriptions to separate billing or management.

Resource Group: A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide how to allocate resources to resource groups based on what makes the most sense for your organization.

Virtual machine Name: The identifier and host name for the VM.

Region: The region in which your VM will reside. To see more go here: https://azure.microsoft.com/en-us/global-infrastructure/regions/

Availability Options:

  • No infrastructure redundancy required: No high availability.
  • Availability Zone: VMs are placed in physically separate locations (zones) within an Azure region. This offers a 99.99% SLA.
  • Availability Set: A group of two or more virtual machines in the same data center. This ensures that at least one of the VMs hosted on Azure will be available if something happens, and it offers a 99.95% SLA.

Image: The base Operating System for the VM. The following Windows versions are supported. https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-supported-os

Azure VM 2

Size: This is the size of the VM. Here’s a good chart to use when sizing. Remember: pricing! Another cool thing with Azure is that you can resize pretty easily by shutting down the VM, resizing it, and restarting it. Of course, this depends on your region and availability options. https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes

Azure VM 3

There’s also a free tool on the web that will help you size your VM based on certain resources. https://www.vmchooser.com/vmchooser

Username: The administrator username for the VM

Password: The administrator password for the VM

Public inbound ports: Ports that are accessible from the public internet.

Already have a Windows license?: You can save money if you already have a Software Assurance license or an active Windows Server subscription.


In my example, I’m using a Free Trial, so that’s the only option for Subscription. For Resource Group, let’s create a new group (by clicking Create New), named SQLFreelancerRG.

I’ll name my VM SQLVM, put it in the East region with no redundancy, and use a fresh copy of Windows Server 2016 Datacenter.

Azure VM 4

Next, I’ll size this VM with Standard DS1 v2, which gives me 1 vCPU and 3.5GB of RAM. I’ll create an administrator account, allow no ports from the public internet, and choose No for Azure Hybrid Benefit.

Azure VM 5

The next page is all about the Disks. https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds

OS disk type: This is the type of disk used for the VM. Let’s look at Standard vs. Premium:

Azure VM 6

You can also create and attach a new disk or attach an existing disk (from a snapshot of another disk or from blob storage).

In my example, I’ll use a Premium SSD disk and I’ll create a second disk called SQLVM_DataDisk_1.

Azure VM 7

The next page gets into a little Networking. https://docs.microsoft.com/en-us/azure/virtual-machines/windows/network-overview

Virtual Network: A virtual network enables VMs to communicate privately with each other and with the internet.

Subnet: The subnet is the range of IP addresses in the Virtual Network.

Public IP: Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses also enable Azure resources to communicate outbound to Internet and public-facing Azure services with an IP address assigned to the resource. The address is dedicated to the resource, until it is unassigned by you. If a public IP address is not assigned to a resource, the resource can still communicate outbound to the Internet, but Azure dynamically assigns an available IP address that is not dedicated to the resource.

NIC network security group: Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces.

Public inbound ports: Ports that are accessible from the public internet.

Accelerated Networking: Enables low latency and high throughput on the network interface.

Load Balancing: Azure Load Balancer delivers high availability and network performance to your applications. A load balancer can be configured to balance incoming Internet traffic to VMs or balance traffic between VMs in a VNet. A load balancer can also balance traffic between on-premises computers and VMs in a cross-premises network, or forward external traffic to a specific VM.

For this post, I’m just going to use the defaults and move forward.

Azure VM 8

Next up is Monitoring. https://docs.microsoft.com/en-us/azure/virtual-machines/windows/monitor

Boot diagnostics: As VMs boot, the boot diagnostic agent captures boot output and stores it in Azure storage. This data can be used to troubleshoot VM boot issues.

OS guest diagnostics: Allows you to get metrics every minute for your VM. You can use them to create alerts and stay informed.

Diagnostics storage account: The storage account in which your diagnostics are stored.

System assigned managed identity: Managed identities for Azure resources provides Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code. https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm

Enable auto-shutdown: Configures your VM to automatically shut down each day.

Enable backup: Auto backup for your VM.

I’m going to keep defaults and click Next.

Azure VM 9

The next page is Guest Config where you can add additional configuration, scripts, agents, or applications via VM extensions. Click Next.

Tags are name/value pairs that enable you to categorize resources and view consolidated billing. Think of them as Twitter hashtags.

The last page is Review and Create. This page displays a summary of our selections, including the price per hour of uptime. You’ll also notice at the bottom there is a link that will allow you to download a template for automation. This lets you reuse the same specs you chose for this VM on other VMs. Click Create to build your new VM.

Azure VM 10

If you click on the Notification Bell at the top of the portal, you’ll see that Deployment is in progress.

Azure VM 11

Once the VM is created, you can click Virtual Machines from the Favorite Bar and view your newly created VM:

Azure VM 12

If you click on the new VM it will bring up the Overview page (along with tons of other pages to choose from).

Click on the Networking tab and Add inbound port rule. Under Destination Port Range type 3389 and under Name type RDP inbound. This will allow RDP access.

Azure VM 13

Click Connect and Download RDP file to remote into the server.

Azure VM 14

TADA!


What’s new in SQL Server 2019

SQL Server 2019 preview builds on previous releases to grow SQL Server as a platform that gives you choices of development languages, data types, on-premises or cloud, and operating systems. This article summarizes what is new for SQL Server 2019. For more information and known issues, see the SQL Server 2019 Release Notes.

Try SQL Server 2019!

CTP 2.0

Community technology preview (CTP) 2.0 is the first public release of SQL Server 2019 preview. The following features are added or enhanced for SQL Server 2019 preview CTP 2.0.

Click here to view the rest of this post.

Create Always Encrypted Certificate

In SSMS, browse to Database, Security, Always Encrypted Keys, right click Column Master Keys and click New Column Master Key:

In the New Column Master Key box you can choose from several key stores. The most popular stores are Current User and Local Machine. See below for descriptions.

Local machine certificate store
This type of certificate store is local to the computer and is global to all users on the computer. This certificate store is located in the registry under the HKEY_LOCAL_MACHINE root.

Current user certificate store
This type of certificate store is local to a user account on the computer. This certificate store is located in the registry under the HKEY_CURRENT_USER root.

Be aware that all current user certificate stores inherit the contents of the local machine certificate stores. For example, if a certificate is added to the local machine Trusted Root Certification Authorities certificate store, all current user Trusted Root Certification Authorities certificate stores also contain the certificate.

In this example, we’ll use Local Machine. Type a name for the Master Key, choose Key Store and click Generate Certificate.

Click OK and a new Column Master Key will appear:
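For reference, SSMS registers the key metadata with T-SQL along these lines. This is only a sketch: the key name and thumbprint below are placeholders, and KEY_PATH must point at your actual store location, store name, and certificate thumbprint.

-- Sketch: register a certificate from the local machine store as a column master key.
-- The name and thumbprint are placeholders; substitute your own values.
CREATE COLUMN MASTER KEY [AlwaysEncryptedCMK]
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'LocalMachine/My/0123456789ABCDEF0123456789ABCDEF01234567'
);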

Do the same for any other master keys you would like to create. For example, Developer:

To create a Column Encryption Key, right click on Column Encryption Key and choose New Column Encryption key:

Give the key a name and select the Column Master Key that you would like to associate it with:

Do the same for the Developer keys. You should have two column master keys and two column encryption keys:
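You can confirm the key metadata landed in the database by querying the catalog views (a quick sketch):

-- Column master keys and the certificate paths they point to
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

-- Column encryption keys
SELECT name
FROM sys.column_encryption_keys;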

To view Local Machine certificates:

Open Certlm.msc

Browse to Personal, Certificates:

Double click a certificate and go to the Detail tab where you will see the thumbprint:

Match this thumbprint with the Column Master Key in SSMS:

Right click the certificate in Certlm.msc and choose Properties. Enter a friendly name to identify which certificate belongs with the Column Master Key.

To export a certificate, right click it and choose All Tasks, Export:

Always On Availability Group Latency Reports

With SQL Server 2012 we introduced Always On Availability Groups, and the Always On Availability Group Dashboard in SQL Server Management Studio (SSMS). This dashboard can be utilized by database administrators to view the current health of an Availability Group and its availability replicas and databases. While the dashboard can be configured to provide information regarding the latency between primary and secondary replicas (which can be calculated using the Commit LSN, Sent LSN, and Hardened LSN values), it does not provide insight into the reason for the latency. Understanding the reason for latency requires capturing and analyzing Extended Events and Performance Monitor counters, an activity that can be time consuming and requires extensive knowledge of the Extended Events associated with Always On.
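If you want a rough look at those raw values outside the dashboard, the DMVs expose them directly. The query below is only a sketch (run it on the primary to see every replica), not the dashboard's own data collection:

-- Sketch: send/redo queue sizes (KB) and LSN progress per availability database
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.log_send_queue_size,   -- log not yet sent to the replica (KB)
       drs.redo_queue_size,       -- log not yet redone on the replica (KB)
       drs.last_sent_lsn,
       drs.last_hardened_lsn,
       drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;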

With the new SSMS 17.4 release, we are introducing Availability Group latency data collection and reporting built into the Availability Group dashboard. This feature hides the capture and analysis of the Extended Events from the end user and provides an easy-to-understand report detailing the time spent during the various phases of the log transport process.

What to use it for?

The latency data collection functionality and the associated reports allow a database administrator to quickly discern the bottleneck in the log transport flow between the primary and secondary replicas of an Availability Group. This feature does NOT answer the question “Is there latency in the Availability Group deployment?” but rather provides a way to understand why there is latency in the deployment, and to narrow down its potential cause.

How does it Work?

As seen below, the Availability Group Latency Data Collection functionality can be accessed from the Availability Group Dashboard.

Click here to view the rest of this post.

SSMS SQL Server Diagnostics

Microsoft is releasing the SQL Server Diagnostics (Preview) extension within SQL Server Management Studio and Developer APIs to empower SQL Server customers to achieve more through a variety of offerings to self-resolve SQL Server issues.

What does this offer to our customers?

Analyze Dumps – Customers using this extension will be able to debug and self-resolve memory dump issues from their SQL Server instances and receive recommended Knowledge Base (KB) article(s) from Microsoft, which may be applicable for the fix. The memory dumps are stored in a secured and compliant manner as governed by the Microsoft Privacy Policy.

Click here to view the rest of this post.

Community driven Enhancements in SQL Server 2017

While SQL Server 2016 runs faster, SQL Server 2017 promises to run even faster and empower customers to run smarter with intelligent database features like the ability to run advanced analytics using Python in a parallelized and highly scalable way, the ability to store and analyze graph data, adaptive query processing, and resumable online indexing, while letting customers deploy it on the platform of their choice (Windows or Linux). SQL Server is one of the most popular DBMSs in the SQL community and is a preferred RDBMS among customers and ISVs owing to its strong community support. In SQL Server 2017 CTP 2.0, we have released several customer delighters and community-driven enhancements based on the learnings and feedback from customers and the community on in-market releases of SQL Server.

Smart Differential Backup – A new column, modified_extent_page_count, is introduced in sys.dm_db_file_space_usage to track differential changes in each database file. This column allows DBAs, the SQL community, and backup ISVs to build smart backup solutions that perform a differential backup if the percentage of changed pages in the database is below a threshold (say 70-80%) and a full database backup otherwise. With a large number of changes in the database, the cost and time to complete a differential backup is similar to that of a full database backup, so there is no real benefit to taking a differential backup in that case; it can actually increase the restore time of the database. By adding this intelligence to their backup solutions, customers can save on restore and recovery time while using differential backups.

Consider a scenario where you previously had a backup plan that took a full database backup on weekends and a differential backup daily. In this case, if the database goes down on Friday, you will need to restore the full backup from Sunday, the differential backup from Thursday, and then the T-log backups from Friday. By leveraging modified_extent_page_count in your backup solution, you can instead take a full database backup on Sunday and, let’s say by Wednesday, if 90% of pages have changed, the backup solution takes a full database backup rather than a differential. Now, if the database goes down on Friday, you restore the full backup from Wednesday, a small differential backup from Thursday, and the T-log backups from Friday, recovering the database more quickly than in the previous scenario. This feature was requested by customers and the community in connect item 511305.
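As a rough illustration of the kind of check a backup job could run (a sketch; the 80 percent threshold is arbitrary, and the DMV must be queried in the context of the database being backed up):

-- Sketch: percentage of allocated extents modified since the last full backup
SELECT CAST(SUM(modified_extent_page_count) * 100.0 /
            SUM(allocated_extent_page_count) AS DECIMAL(5, 2)) AS pct_changed
FROM sys.dm_db_file_space_usage;
-- If pct_changed is above your threshold (say 80), take a full backup instead of a differential.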

Click here to view the rest of this post.

SQL Server Performance Dashboard Reports

SQL Server 2012 Performance Dashboard Reports is one of the most popular SQL Server monitoring solutions for customers and the SQL community, leveraging dynamic management views (DMVs) for monitoring and reporting, and is available at no cost to consumers. The Performance Dashboard Reports are available as a set of custom reports in SQL Server Management Studio (SSMS) that run against the instance connected in Object Explorer. When monitoring large enterprise deployments of SQL Server, hosting the Performance Dashboard Reports on a central reporting server can provide additional benefits, making life easier for enterprise DBAs monitoring and troubleshooting SQL Server issues. To support hosting the reports on a central SQL Server Reporting Services instance, we have customized the SQL Server 2012 Performance Dashboard Reports, added new reports, and uploaded them to the Tiger Toolbox GitHub repository for customers and the SQL community. The reports are tested to run against SQL Server 2012, SQL Server 2014, and SQL Server 2016 target instances and can be deployed against a SQL Server 2012, SQL Server 2014, or SQL Server 2016 Reporting Services instance.

The following are some of the benefits of hosting the SQL Performance Dashboard Reports on a central SSRS reporting server.

  • Monitoring reports accessible anytime, anywhere using a browser – This removes the dependency on a thick client like SQL Server Management Studio (SSMS) being present on the workstation, allowing DBAs and the DevOps audience to check the health of SQL Server and its resource consumption using a web browser from any machine with access to the server.
  • Scheduling automatic report delivery – SSRS allows scheduled email or file share delivery of reports. This lets DBAs, application owners, and database stakeholders choose a push model whereby performance health reports can be scheduled to run against specified SQL Server instances at a specified time and be delivered to their mailbox, to proactively monitor the overall health of a SQL Server instance and detect any anomaly.
  • Performance baselining using report snapshots – SSRS allows you to capture scheduled point-in-time report snapshots at a specified interval, allowing DBAs to establish performance baselines using historical snapshots for the target SQL Server instances.
  • Linked reports for application owners and other stakeholders – In an enterprise environment, most application teams and stakeholders are interested in seeing the performance, resource consumption, blocking information, and overall health of their SQL Server instance on demand. In such scenarios, DBAs can create linked reports for the target SQL Server instances on the central SSRS server and delegate permissions to view the reports for the instances of interest. This allows application teams and developers to be self-sufficient in checking the overall health of their SQL Server instances, creating some bandwidth for DBAs, who need to be contacted only if an anomaly or problem is detected.

Click here to view the rest of this post.

SQL Server DBCC CHECKDB Overview

SQL Server database corruption can be a problem and can cause serious damage to a database. If you’re an experienced DBA then you probably have safeguards in place to detect this, but over the years I’ve seen hundreds of SQL Servers with no detection methods at all and this is a problem. There are a few ways to detect database corruption, but this tip will focus more on DBCC CHECKDB.

You may or may not have heard of DBCC (database console commands) statements. These statements are used to perform different operations in your database and can be broken down into four categories: Maintenance, Miscellaneous, Informational, and Validation. I use some of the DBCC statements on a daily basis, but none more than DBCC CHECKDB.

What is SQL Server DBCC CHECKDB

DBCC CHECKDB, from the Microsoft MSDN Library, checks the logical and physical integrity of all the objects in the specified database by performing the following operations (each of which can also be run on its own, as sketched after the list):

  • Runs DBCC CHECKALLOC on the database – Checks consistency of disk space allocation structures for a specified database.
  • Runs DBCC CHECKTABLE on every table and view in the database – Checks the integrity of all the pages and structures that make up the table or indexed view.
  • Runs DBCC CHECKCATALOG on the database – Checks for catalog consistency within the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.
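If you ever want to run the component checks individually (for example, to spread the work across maintenance windows), the commands look something like this sketch; the database and table names are placeholders:

USE DatabaseName;
DBCC CHECKALLOC ('DatabaseName');   -- allocation consistency
DBCC CHECKCATALOG ('DatabaseName'); -- catalog consistency
DBCC CHECKTABLE ('dbo.YourTable');  -- a single table or indexed view (hypothetical name)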

If you’ve ever run DBCC CHECKDB, you know it takes some time for large databases. Now that you know all of the steps that are performed, you can see why it takes time to complete.

How can SQL Server DBCC CHECKDB help me?

Data corruption is bad. It can cause all sorts of issues within the database that may include incorrect data results, failed SQL statements, and in some cases can take down the entire SQL instance. DBCC CHECKDB warns you of corruption so that you can fix it before (hopefully) it gets too bad.

How do I use SQL Server DBCC CHECKDB?

DBCC CHECKDB is pretty straightforward. There are a few options you can use with the statement and we’ll go over some of those in the next section, but the basic syntax looks like this:

DBCC CHECKDB ('DatabaseName')

Pretty simple.
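For reference, a couple of commonly used options look like this (just a sketch, not the full list):

-- Suppress informational messages and return every error found
DBCC CHECKDB ('DatabaseName') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Physical-only checks, a lighter-weight (faster) pass for very large databases
DBCC CHECKDB ('DatabaseName') WITH PHYSICAL_ONLY;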

Automate SQL Server DBCC CHECKDB

Obviously, you don’t want to log in every morning and run this statement on each database, so you can automate this process using a few different methods:

  • SQL Server Maintenance plans – Maintenance plans are part of SQL Server out of the box (unless you’re running Express Edition). I don’t like using maintenance plans for the most part, but I don’t mind using them for this type of task. In the Maintenance Plan toolbox you’ll need to use the Check Database Integrity task. The only configurable option is to include indexes so it’s not really user friendly, but in some cases this is all you need. Again, we’ll talk about other options in the next section.

  • Custom scripts – Custom scripts are usually what I use, and they offer the best flexibility as far as adding the options you want. My go-to scripts are already created and free to use from Ola Hallengren (see the sketch below). He’s done a wonderful job of creating these and sharing them with the world. Thanks Ola!
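For example, once Ola's scripts are installed, a nightly Agent job step might call his integrity check procedure like this (a sketch; parameter names as documented in his Maintenance Solution and subject to change):

-- Sketch: assumes Ola Hallengren's DatabaseIntegrityCheck procedure is installed (typically in master)
EXECUTE dbo.DatabaseIntegrityCheck
    @Databases = 'USER_DATABASES',
    @CheckCommands = 'CHECKDB';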

Click here to view the rest of this post.