Multiple Database Projects in Azure Data Studio

In many companies, you’re often working with more than one database on the same server. When I use Visual Studio, I have all the databases that exist on the same server in the same solution. You can do this same thing when you’re working with Azure Data Studio.

In this week’s YouTube series, I show you one trick that will let you have multiple database projects in the same workspace. This becomes especially important if you have database code that references objects in a separate database.

Creating a Publish Profile in Visual Studio

One of the factors that allowed my company to get comfortable automating database deployments was SQL Server Data Tools (SSDT) and publish profiles. We started using SSDT with database projects before Azure Data Studio (ADS) existed.

One of our fears was always losing data or critical database code. Publish profiles came to our rescue. We also found that some of our database code had values that varied by environment or contained references to other databases. Once again, publish profiles could solve these problems!

While I’d love to say that you could use ADS to create and manage publish profiles, that just isn’t true right now. However, we have a way to help you get a publish profile created. If you don’t want to use Visual Studio yourself, you might want to ask your developer friends really nicely and see if they’d be willing to help you out.

I’ve created a nice YouTube video to show you how you can create your own publish profile. Check it out!

And if you’re curious about what a publish profile might look like, here’s an example of the one created in the video.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <IncludeCompositeObjects>True</IncludeCompositeObjects>
    <TargetDatabaseName>BookCatalog</TargetDatabaseName>
    <DeployScriptFileName>BookCatalog.sql</DeployScriptFileName>
    <TargetConnectionString>Data Source=localhost;Integrated Security=True;Persist Security Info=False;Pooling=False;MultipleActiveResultSets=False;Connect Timeout=60;Encrypt=False;TrustServerCertificate=False</TargetConnectionString>
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
    <ProfileVersionNumber>1</ProfileVersionNumber>
  </PropertyGroup>
</Project>

But I’d highly recommend using the GUI in Visual Studio instead of trying to make a file on your own.
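Once you have a profile, though, it can do more than drive deployments from inside Visual Studio. As a minimal sketch (assuming SqlPackage is installed and on your PATH, and using placeholder paths for the build output), the profile above could be used to publish a built DACPAC from PowerShell:

# Deploy a built DACPAC using the publish profile above.
# The paths are placeholders -- point them at your own files.
SqlPackage.exe /Action:Publish `
    /SourceFile:"C:\Builds\BookCatalog\BookCatalog.dacpac" `
    /Profile:"C:\Builds\BookCatalog\BookCatalog.publish.xml"

The profile supplies the target connection string and safety options like BlockOnPossibleDataLoss, so the command line stays the same across environments.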

Update the Default Branch Name in Azure Data Studio

While recording my videos, I realized that I hadn’t updated my default branch name from the previous value. This week’s video walks you through the process of renaming your default branch from master to main. It covers not only how to update GitHub but also how to update your local Git repository. Once again, we’re using Azure Data Studio to work with our Database Projects. You can also use Visual Studio, if you prefer.
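If you’d rather see the commands than the clicks, the local side of the rename boils down to a few git commands (shown here from a PowerShell prompt in your repository folder; the default branch itself is changed in the repository settings on the GitHub website):

# Rename the local branch from master to main.
git branch -m master main

# Push the renamed branch and set it as the tracked upstream branch.
git push -u origin main

# After switching the default branch to main in your GitHub settings,
# delete the old remote branch.
git push origin --delete master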

Deploy Database Projects Manually

Earlier in this series, I showed you how to create a Database Project using Azure Data Studio (ADS) and how to add a Database Project to GitHub. Getting a database project set up and into source control is only the beginning. I’ve found one of the hardest things to do is to keep your database in source control, often because it’s hard to keep using source control when you don’t have a way to deploy your changes.

This week, we’ll talk about one of the easier ways to deploy your database changes. One of the benefits of database projects is that they can generate data-tier applications (DACs). A data-tier application can be bundled into what is called a DACPAC, a package of files that can be used to deploy your database.

DACPACs are definitely not the only way to deploy your database. However, they are one of the main ways to deploy database projects. If you’re used to running a series of script files for your deployments, this is going to be a bit different. When deploying database projects, the database project becomes the source of truth. Generally speaking, whatever is in the database project will be deployed to the SQL Server instance.
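To make that concrete, here’s a rough sketch of a manual, review-first deployment from PowerShell once the project has been built into a DACPAC. The video below walks through the GUI approach; the server, database, and paths here are placeholders:

# Generate a deployment script from the DACPAC so you can review
# the changes before running them against the target database.
# Server, database, and paths are placeholders.
SqlPackage.exe /Action:Script `
    /SourceFile:"C:\Builds\BookCatalog\BookCatalog.dacpac" `
    /TargetServerName:"localhost" `
    /TargetDatabaseName:"BookCatalog" `
    /OutputPath:"C:\Builds\BookCatalog\Deploy_BookCatalog.sql"

Once you’re happy with the generated script, you can run it in ADS, or rerun SqlPackage with /Action:Publish to deploy directly.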

If you’d like to figure out how you can manually deploy your database projects, you can check out the video below. If you’re looking for a more automated method to deploy database code, check back over the next couple of weeks, where I’ll start showing you how you can use Azure DevOps to build and deploy your database code.

Add Database Project to GitHub

Two weeks ago, I showed you how to create a Database Project using Azure Data Studio (ADS). If you’re starting your database journey with source control and/or Continuous Integration/Continuous Delivery (CI/CD), that’s the first step you’ll need to complete. However, getting a database project created is just the beginning!

Once you have the database project created, you’ll want to get it added to source control so that you (and others) can modify and manage your database code. This next step is the beginning of allowing you and others to work on the same databases while minimizing the risk of overwriting someone else’s work or deploying the wrong code to Production.

This week’s YouTube video will show you how you can get your database project to GitHub. If you’re using Azure DevOps, the process should be similar.

YouTube video showing how to add a database project to GitHub
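If you’re curious what’s happening underneath the GUI, publishing a brand-new project to GitHub comes down to a handful of git commands. This is a rough sketch, and the remote URL is a placeholder for your own empty GitHub repository:

# From your database project folder: initialize a repository
# and commit the project files.
git init
git add .
git commit -m "Initial commit of database project"

# Make sure the branch is named main, then connect the placeholder
# remote and push.
git branch -M main
git remote add origin https://github.com/<your-account>/<your-repo>.git
git push -u origin main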

Keep following this blog, and I’ll walk through how to set up a process to automatically build and deploy your database code.

Use Azure Data Studio to Clone a Repo from GitHub

One of the more common questions I’ve gotten is how Database Administrators and Database Engineers can embrace database source control without using Visual Studio. I have to admit, a good part of DevOps, automation, and database source control involves using tools that people will actually embrace.

In this week’s YouTube video, I show how you can clone (import) a repository (database code) from GitHub all within Azure Data Studio. This is a great feature that helps make database source control more accessible to individuals who may not have access to, or be comfortable using, Visual Studio or VS Code.

You will need to be on at least Azure Data Studio (ADS) 1.25.1. In addition, you’ll want to download Git on your machine as well as install the Database Project extension in ADS. This guide is designed for those individuals using SQL Server Data Tools (SSDT).
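Since ADS relies on the Git you installed, cloning through the GUI is equivalent to running a plain git clone yourself; the URL below is a placeholder for your own repository:

# Clone a repository (database code) from GitHub (placeholder URL),
# then open the resulting folder in Azure Data Studio.
git clone https://github.com/<your-account>/<your-database-repo>.git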

The YouTube video can be found here or by clicking on the image below.

Thumbnail for YouTube video: Database Projects with GitHub

Creating a Database Project in Azure Data Studio using an existing database

Check out this video on how to quickly get your databases into Database Projects. If you want to save a copy of your SQL Server database in a version control system, this is one of the first things you’ll need to do.

Database Projects can also help organize the T-SQL code that is used to create your database objects. In this video, I show you how to create a database project that groups T-SQL scripts by schema and then by database object type. No more wondering whether that object is a view or a table!
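To give you an idea of the resulting layout, a project grouped by schema and then by object type might look roughly like this (the database and object names are just examples):

BookCatalog/
  dbo/
    Tables/
      Book.sql
    Views/
      BooksByAuthor.sql
  Sales/
    StoredProcedures/
      GetBookOrders.sql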

It’s the end of the year, and I’ve decided the best way to prepare for the New Year is to start my goals early. One of the things I’ve wanted to do all year is start a video series on how you can improve your database deployment process.

There are all sorts of fancy names that get used, including source control, version control, continuous integration, continuous delivery, DevOps, etc. My goal here is to make this process easy for you.

When I first started talking about source control and automated deployments, I lived in a world of Visual Studio. As a data professional, the only real language I knew was T-SQL. But I soon learned there were a lot of challenges in getting my data peers to embrace using Visual Studio.

Thankfully, Azure Data Studio has a Database Projects extension to solve this problem. Instead of giving you pages of screenshots, I have a video that will walk you through creating your first Database Project using Azure Data Studio and one of your existing databases.

I’ve created a quick YouTube video to walk you through the process. Check it out and let me know what you think! https://www.youtube.com/watch?v=DgfMycbBQUU

T-SQL Tuesday #133 – Filling in the Gaps

T-SQL Tuesday Logo - A blue database with a calendar showing days of the week wrapped artfully around the database.

I’m so glad Lisa Bohm (b|t) is hosting this month’s T-SQL Tuesday. I’m also thankful to Steve Jones for hosting T-SQL Tuesday and helping hosts share their ideas with the SQL community. I also want to thank Adam Machanic for starting T-SQL Tuesday as a blog party. I know I often have a difficult time coming up with topics, and it helps to be provided a topic.

Let’s get back to this month. Lisa says, “I’d like those of you who have presented, or written a presentation, to share something technical THAT DID NOT RELATE to the topic of the presentation”. I’m glad to respond to this month’s topic, mainly because, like Lisa, I want to encourage people to present and share their experiences. One of the best things that ever happened to me came as a result of presenting, and it was something that I needed.

A couple of years ago, the company I was at was planning a data center migration using distributed Availability Groups. My boss at the time, Rie Irish (b|t), encouraged me to get a SQL Saturday presentation together. At the time, one of my largest concerns as a DBA was that I was not familiar with Server Manager. I often felt like I didn’t belong as a DBA because I came from what I perceived as a non-conventional route.

For those of you data professionals who came to this profession from outside of IT, I understand you. I have a B.B.A. in Accounting, and when I started my career, I was in business operations. I was introduced to MS Access, and I was mesmerized by databases. I eventually found that if I wanted real database action, I needed to work with Microsoft SQL Server.

Fast forward to this presentation on distributed Availability Groups. Prior to this presentation, I had barely used PowerShell, and I had no idea how to create a VM. But thanks to the encouragement I received, I figured out how to build multiple VMs using PowerShell. I made Domain Controllers and all that jazz. Honestly, I think all the things I had to do to create a distributed Availability Group really helped me build confidence I didn’t realize I was missing.

Now I accept that I don’t need to know everything about infrastructure and networking. I still want to learn. But I am comfortable with Server Manager and building VMs, and I have a passing understanding of Domain Controllers. But really, the best thing I learned is that I don’t have to know all the answers. What I love is automating database deployments and understanding how applications interact with databases. And that’s OK!

T-SQL Tuesday #130 – Recap: Automate Your Stress Away

T-SQL Tuesday Logo - A blue database with a calendar showing days of the week wrapped artfully around the database.

This month, I hosted #TSql2sDay for September 2020. If you’re unfamiliar with T-SQL Tuesday, it’s a blogging party. The idea is that a host comes up with a topic and encourages other members of #SqlFamily to join in and write a blog post on the topic. T-SQL Tuesday was started by Adam Machanic, and Steve Jones has carried on this fun tradition!

I’m really grateful for all the bloggers that took part this month, especially since automation is a topic that has been discussed before. It’s hard for me to get much of my day-to-day work automated, and I was really looking forward to these posts so that I could learn new tasks to automate myself. With that said, let’s see all the wonderful ideas people contributed this month. And if you’re like me, you’re going to want to put some of this automation in place as soon as possible.

I’m right there with Greg Dodd regarding automation. Greg blogs about how automation can be a great way to minimize making mistakes. I also like how Greg points out that automation can mean we can all take a vacation without worrying that someone will call us with an emergency. Greg’s done some pretty interesting stuff including automating the creation of test servers using Chef. Greg keeps things realistic by stating that often automation is not a time saver at first, but over time you can really see the benefit.

Eitan Blumin is very familiar with automation, and Eitan has blogged about the topic extensively. As such, Eitan approached this month’s topic with a different perspective. Instead of discussing a project using automation, Eitan walks us through what steps we can use to start automating tasks around us.

This was Magda‘s first blog post ever (yay, Magda!), and I’m so glad that Magda joined in the fun to write about how we can all use automation. Magda might describe themselves as a non-technical worker, but I think Magda’s approach to automation is the same as the rest of ours. Magda found that some manual tasks in Excel could be improved with automation. I’m also very glad that Magda reminds us all to celebrate the victories… and even enjoy a cup of tea with the time you’ve saved automating your work.

Kevin Chant lets us know what benefits he’s found with automation, which include making our lives easier. Kevin shares information about the first steps you can take in creating your first Azure DevOps pipelines. I really like how Kevin reassures us all not to get stuck in perfectionism paralysis. Kevin lets us all know that sometimes it’s OK to have an idea and implement a proof of concept quickly.

We each have a preference when using SQL Server Management Studio (SSMS). Some of us prefer to write the code outright, and others prefer the GUI. Rob Farley reminds us of one of the options available in SSMS: we can have the GUI generate a T-SQL script for us. You also have this option in Azure Data Studio (ADS) for many (but not all) actions.

When things go wrong, one of the more tedious tasks we must undertake is reviewing error logs. Aaron Bertrand shares his solution on how to aggregate various error logs using PowerShell. I think the idea of combining various error logs can help us all. There is even a copy of the current version of code available on the blog post as well as a link to the source control so you can keep up with any changes as Aaron improves the code over time.

Another common issue in our day-to-day life is keeping things as they are because they’ve always been that way. I know I am all too guilty of this at times. In this month’s post, Mikey Bronowski gives examples of manual tasks that need some rework. Mikey covers how to get rid of paperwork and standardize SQL Server installations. Mikey found a process that was outdated due to old technology and, after some internal conversations, updated it to use newer technology and saved time.

One of the common tasks we often must deal with is patching our SQL Servers. It is one of those tasks that is monotonous but also requires additional analysis. Volker Bachman shares how to use PowerShell to control when and what updates are being deployed to your servers. I love that Volker has also included the functionality to automatically send an email if the server is restarted. That can really save time troubleshooting a server restart the next business day.

Richard Swinbank found that sometimes you may want to generate database objects based on tables already in the database. Using metadata and dynamic SQL, Richard shows us how to generate scripts that will create multiple database objects. I like how Richard took a problem involving multiple different systems and found a way to automatically create consistent and reliable code.

One of the areas I haven’t spent much time with in SQL Server is trace flags. However, I completely understand wanting to ensure that trace flags are applied consistently across SQL Server instances. Taiob Ali shows us how we can keep trace flags consistently applied. With startup procedures and configuration tables, we can decrease our stress and know that our SQL Servers are configured as expected.

Every environment is different, and each has its own challenges. For Deepthi Goguri, those challenges include managing hundreds of servers with multiple versions of SQL Server. Deepthi has found that PowerShell is the right tool for the job. With PowerShell, Deepthi has saved hours making job owners, SQL Server editions, and recovery models consistent. Thank you, Deepthi, for writing your first blog post for #tsql2sday.

Another first-time T-SQL Tuesday blogger is Travis Page. Travis is also using PowerShell to help automate stress away. I’m not new to database source control, and I want all the things in source control. Travis is tackling one aspect of source control that can be quite tricky. Using dbatools, Travis has found a way to put SQL Server Agent jobs into source control.

I started using Azure this year, and Drew Skwiers-Koballa‘s blog post addresses how to save money in Azure. One common pitfall is creating resources and forgetting to clean them up. Drew shows us how to use PowerShell Workflow runbooks to automatically delete resources. I also like how he not only showed us how to deallocate resources but also introduced me to Azure Automation.

One of the things Database Administrators are tasked with is ensuring that the data for our company is protected. In this month’s T-SQL Tuesday, Gregg Barsley shows us how to use dbatools and PowerShell to generate an audit file of user permissions to SQL Server. This is one of those topics that I never knew I needed; now that I’ve seen it, I know I have to have it.

Among many other things, knowing that our SQL Server instances are properly configured and running at peak efficiency can really help reduce stress. Glenn Berry shows various options for how we can automate our health checks. You have several ways to run Glenn’s diagnostic queries, including PowerShell and Jupyter Notebooks. As a bonus, Glenn also shared how you can set up SQL Server Agent Alerts for various errors.

Steve Jones mentions that he uses automation to do tedious work so that he can focus on problem solving, and Steve is busy solving various problems. He has automated projects ranging from collecting data for SQL Saturday events to deploying development databases. This kind of variety shows the power available with automation.

We database people love our backups. And aside from testing our backups, many of us have found that developers often ask for fresh data from Production. Hugo Kornelis found himself in exactly that situation. Using T-SQL, Hugo was able to create a script to back up a database from one server and restore it to a location and name of his choosing.

I’ve been looking forward to this month’s T-SQL Tuesday for quite a while because I was so eager to see what you all were automating. One thing I did not expect to see was Rob Sewell‘s post. Rob has a creative solution for easily getting the details about SSIS failures. I’m not surprised that Rob is using PowerShell, but what is really cool is that Rob is sending the detailed messages to Teams!

I’m glad that Josh Handler could also join us as a first-time #tsql2sday blogger. As data professionals, there are some tasks where we can take our time and others where we need to get things done quickly. Josh writes about one such task: installing SQL Server fast. By using the SQL Server configuration ini file and some additional PowerShell scripts, Josh has been able to streamline SQL Server installations.

Mike Donnelly shares his experiences creating a deployment pipeline for Redshift. Mike discusses some common pitfalls that we can all fall into. Sometimes we start out with what seems to be a simple process but as time goes on, we find that a little manual work can become overwhelming. Mike identified the challenges and came up with the solution of using Python to improve the deployment pipeline.

One of the more dreaded things that can happen as a data professional is getting reports that a system is slow, especially hours after the slowness occurred. John McCormack has a solution that can help you find what was running. John shows us how he automated recording query activity using sp_WhoIsActive and SQL Server Agent.

Many of us may use T-SQL as our most frequent programming language. However, Todd Kleinhans has found a new interest in Python as it pertains to his passion for UE4, Epic Games’ Unreal Engine. Knowing the power of Python, Todd looks into the usefulness of Python with SSIS for ETL and other projects.

When working with PCI data, one of the most frequent and important tasks involves finding where credit card data may be stored in the database. In this month’s blog post, Jason Brimhall shows how to automate detecting credit card data in your database. With just some T-SQL and a SQL Server Agent job, you will be well on your way.

Jess Pomfret discusses a frequent task that many database professionals perform to ensure the health of their database environments. Jess shows how she has automated daily health checks using dbachecks. With dbachecks 2.0, you are now able to easily save your results to a database. It looks like I’m going to need to get the newest version of dbachecks!

This was my first time hosting T-SQL Tuesday, and I want to thank you all for joining me. I hope you get a chance to check out all the blogs on this recap post and put some of these ideas into practice.