There has been a lot of attention given to ChatGPT and AI over the last month or two. I’ve tried a few things with the public interface at OpenAI. Some worked well, like this one:
Others not so well:
This post looks at a few things I tried with VS Code and GitHub Copilot.
Getting Access
I saw a note in our internal Redgate Slack that all developers were given access to GitHub Copilot. This is something you can subscribe to for about US$10/month, or get for free if you are a student. In my case, I filed a ticket:
It took a week or so as someone was out on holiday and this was a low priority ticket. In any case, I got a note from our ticket system, as well as GitHub, that I had access:
So I added the extension to VS Code.
Once installed, I got a note to sign in to GitHub, and when I had completed that, I saw the Copilot icon in the lower right corner of my IDE.
Getting Started
My first experiment was to open my ZeroDowntime client code and see what happened. This is a VS 2019 project, but I opened it in VS Code, specifically the form1.cs code. I highlighted some code and …
Nothing.
Then I tried something I’d seen. I added a comment above some code. Still nothing, but when I opened the GitHub Copilot completions panel, I saw this:
Not helpful, nor what I asked for. Both solutions were similar.
Starting from Scratch
I decided to then start from scratch. I created a new file, set this to C# and wrote this:
Initially I got nothing, but when I opened the Copilot panel I saw this:
That is more interesting. Clearly poor specifications on my part.
Let’s try something else. I added more detail, and as I did, Copilot even suggested a few additions to help me specify what I needed.
I finished these lines, hit Enter, and got something, though not what I wanted.
Let’s try something else:
That’s better. Not great, but better. This doesn’t quite match what I asked for. I copied this to a template project and then added another comment. I wanted to check for the existence of the parameter, and I got this code, which is better than something I’d write.
Not quite right, and I have some lines to delete, but this gives me something, and as a middling C# dev, this is helpful. Once cleaned, I compiled and ran the code, and it seemed to work, at least for a very basic console app.
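The screenshots of my actual project aren’t reproduced here, but a minimal sketch of the kind of parameter-existence check I was after looks something like this. The class, method, and switch names are my own for illustration, not Copilot’s exact output:

```csharp
using System;
using System.Linq;

public static class ArgCheck
{
    // Returns true if the named switch appears anywhere in args,
    // ignoring case (so "/s" and "/S" both match).
    public static bool HasParameter(string[] args, string name) =>
        args.Any(a => a.Equals(name, StringComparison.OrdinalIgnoreCase));

    public static void Main(string[] args)
    {
        if (!HasParameter(args, "/s"))
        {
            Console.WriteLine("Missing required parameter: /s");
            return;
        }
        Console.WriteLine("Parameter found, continuing.");
    }
}
```

Nothing fancy, but it is the sort of boilerplate a middling C# dev like me is happy to have generated rather than typed.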
Let’s try SQL
I opened a new file, set the type to SQL and wrote a comment, using the appropriate comment style.
I guess that’s OK. Not great, but I didn’t provide much detail. It’s interesting that it chose to use a created_at column with a timestamp. I never use timestamp, which is deprecated. I’m not sure how the AI learned about this type; perhaps the training corpus contains a lot of old code that uses it. Who knows.
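For comparison, the table I’d write by hand uses datetime2 with a default rather than the deprecated timestamp (rowversion) type. This is a sketch in the same spirit as what Copilot generated; the table and column names are illustrative, not taken from the screenshot:

```sql
-- Hand-written version: datetime2 with a UTC default for the audit
-- column, instead of the deprecated timestamp (rowversion) type.
CREATE TABLE dbo.Customer
(
    CustomerID INT IDENTITY(1, 1) NOT NULL
        CONSTRAINT PK_Customer PRIMARY KEY,
    FirstName  NVARCHAR(50) NOT NULL,
    LastName   NVARCHAR(50) NOT NULL,
    created_at DATETIME2(3) NOT NULL
        CONSTRAINT DF_Customer_created_at DEFAULT SYSUTCDATETIME()
);
```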
Let’s try something else. I’ll ask for a query using a known schema.
A start, but not quite right. I do have to keep hitting Enter to get the query written. A couple more Enters and Tabs to accept code get me this:
If I keep going, I get a long query, but it doesn’t work. At least not on my version of AdventureWorks.
Not great.
What if I add a comment? I’ll ask for window functions.
I get something else, but again, the final query doesn’t work. This time there are fewer errors, but the join seems to assume the Customer table has first name and last name columns, which it doesn’t.
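For reference, in AdventureWorks the names live in Person.Person, reached through Sales.Customer.PersonID, rather than on the Customer table itself. A window-function query along the lines I was asking for might look like this, a sketch against the standard AdventureWorks schema rather than Copilot’s output:

```sql
-- Rank each customer's orders by value. The joins route through
-- Sales.Customer.PersonID to get names from Person.Person, since
-- Sales.Customer has no FirstName/LastName columns.
SELECT p.FirstName,
       p.LastName,
       soh.SalesOrderID,
       soh.TotalDue,
       ROW_NUMBER() OVER (PARTITION BY c.CustomerID
                          ORDER BY soh.TotalDue DESC) AS OrderRank
FROM Sales.SalesOrderHeader AS soh
    INNER JOIN Sales.Customer AS c
        ON c.CustomerID = soh.CustomerID
    INNER JOIN Person.Person AS p
        ON p.BusinessEntityID = c.PersonID;
```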
Initial Thoughts
I’m not sure how useful this is. I think this is going to be one of those tools that I’ll have to practice with and understand how it works. My basic tests are mostly because I’m not sure what to do with it, or how it can be helpful.
I’ve seen a few demos in passing, but I realize that I need to watch a few more and also experiment with the features. I was hoping it would clean up some of my C# code, which is fairly basic, but it didn’t, at least not with my prompts.
We’ll see how this goes, and I’ll see if I can use it in ADS (Azure Data Studio) and whether it can actually recognize and use my database schemas to write queries or tests.