Knowledge Sharing

One of the biggest problems most companies face is the loss of institutional knowledge when an employee leaves. As a consultant, I see this at every company I go to. If it isn’t from an employee leaving while I’m there, it certainly happens when I leave. I leave behind as much documentation as I can about whatever I’ve done, but invariably there are little tasks that don’t get written up or get lost in the larger documentation.

When most people leave a position they have a week or two at most to do “knowledge transfer”, but that only occurs if the departure is amicable, and it is never very thorough. Most companies, if they consider knowledge loss at all, frame it as the “hit by a bus” scenario, since they never want to acknowledge any other reason a person might leave a company. This is, of course, just denial. People leave, and they take their knowledge with them. It doesn’t matter whether the person has been there for two weeks or twenty years; they likely have knowledge the company needs.

On many of my recent contracts, I’ve tried to help combat this. There are technological solutions that help ameliorate the loss of knowledge that comes with the loss of personnel. The primary tool I’ve been suggesting is the wiki. If you set up an internal wiki for your company and encourage its use, you have easily accessible documentation for your IT department that comes in handy whether a person leaves, a new person needs to get up to speed, or an old project needs revision. It doesn’t have to be just for the IT department, either. Most departments in a company would benefit from a wiki that lets employees contribute to the institutional memory of the company.

Even more important than making the software available, I’ve found, is getting management buy-in. If you can get management to encourage the use of the wiki, it’ll work well. If you can’t, it is unlikely to. The culture of the company needs to encourage everyone, at all levels, to update the wiki with relevant information, and management has to understand that making those updates takes time. Convince them that it’ll save time and effort in the long run and you have a shot.

Ideally, especially in larger companies, you’ll be able to hire or assign someone to manage your wikis: someone who can help people when they have problems and who can maintain the software. If used well, there can even be sections of the wiki set aside for non-company business that foster a sense of togetherness, whether it is a section for company announcements or a place for the birth and wedding announcements that are currently sent out through email.

Another system that works well for knowledge sharing is shared bookmarks. I like the site Delicious. It has an easy-to-remember address, del.icio.us, and is easy to use. Just set up an account for your company or for the departments in your company and provide the username and password to everyone. You’ll likely want an administrator for this so your pages don’t get clogged up with lolcats links and other non-business information, but this too can provide good resources for your employees.

If this goes well, blogs or discussion rooms can be set up to encourage collaboration and further knowledge sharing while cutting down on time spent in meetings. Weekly team meetings can be supplanted with blog posts and comments. OK, that’s likely more of a dream since I know how managers love their weekly meetings, but it could at least supplement them!

It is likely best to pilot these programs within the IT department as they’re the staff most likely to already be comfortable with the tools involved. Once it has been shown to work there, it can be spread through the company to other departments.

If used well and used by all, you won’t have to worry as much when someone gets a job offer closer to home or for more money, or leaves for any reason at all. It isn’t a complete replacement, but it can help ease the transition.

Posted in Uncategorized

Clarity in Code

Commenting code is always a good idea, and there is more than one way to comment your code. When running a long series of SQL queries, it can be useful to know what has run. If you have conditional statements, you’ll want to know which branch ran. If you are gathering metrics and have more than a few statements, it helps if you can match the results to the query.

By inserting the command PRINT followed by an identifying phrase in single quotes, you can get this kind of identification on the Messages tab.

Let’s say you don’t use this and have SET STATISTICS IO and SET STATISTICS TIME turned on. Let’s also say you have 15 queries running in sequence. When you look at the Messages tab, you’re going to have a hard time figuring out which results go with which query, especially since some commands can return multiple sets of statistics. If you do use it and have, say, PRINT ‘Employee Stats’ before the first query, PRINT ‘Sales figures’ before the second, PRINT ‘Sales by quarter’ before the third and so on, then before the first set of statistics on the Messages tab you’ll see

Employee Stats

and before the second set of statistics

Sales figures

and so on. This will not only let you know which stats go with which query, it’ll also let you know if your query is running more processes than you expected.
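Here’s a sketch of a batch put together that way (the table and column names are made up for illustration):

SET STATISTICS IO ON
SET STATISTICS TIME ON

PRINT 'Employee Stats'
SELECT E.JobTitle, COUNT(*) AS EmployeeCount
FROM dbo.Employee E
GROUP BY E.JobTitle

PRINT 'Sales figures'
SELECT S.SalesPersonID, SUM(S.TotalDue) AS TotalSales
FROM dbo.SalesOrder S
GROUP BY S.SalesPersonID

SET STATISTICS TIME OFF
SET STATISTICS IO OFF

Each PRINT string shows up on the Messages tab immediately before that query’s statistics, so you always know which numbers belong to which query.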

Posted in 2008, SQL Server

Lecture

I went to a SQL Server lecture tonight on Virtual Machines and SQL Server by Brent Ozar. It wasn’t what I expected. The lecture was great, as have been all the ones in this lecture series, but the advice was decidedly more mixed than I thought it’d be.

The takeaway message was that virtual machines don’t make SQL Server better, they make it cheaper. That’s close to a direct quote, as the concept was repeated many times throughout the lecture. He highlighted some of the benefits:

  • Easier to move resources
  • Easier to target databases to the appropriate environments
  • Easier to group databases for maintenance
  • Virtual machines have their own version of clustering/replication that allows them to be up more reliably
  • You can have more environments without having to worry about cross-effects
and there were many more.

However, he made it very clear that if you have SQL Server on virtual machines and your shop is large enough that someone else is in charge of the virtual machines, you’re going to constantly be fighting for resources or at the very least are going to have to monitor them vigilantly to ensure SQL Server isn’t being starved of what it needs.

  • Use Perfmon to monitor various aspects of your memory allocation
  • Also use it to monitor various aspects of your CPU allocation
  • Ensure you have a reasonable floor established for memory and CPU
  • Try to make sure you’re going to be using the same CPU set for your processes
  • Have the settings for the “balloon driver” set in such a way that it won’t steal resources from SQL Server
  • Monitor manually where your files are sitting
and there are more there too.

The balloon driver is an odd little feature of virtual machine management. All it does is constantly pretend it needs memory and ask for it from all available resources. This is to make sure no program is hogging all the RAM, but SQL Server needs a lot of memory allocated to it and needs that allocation to stay at a fairly constant, high level. If the balloon driver is taking those resources away every 30 seconds, SQL Server performance will suffer.

I also found out the reason for something I’d previously considered inexplicable: Why newer computers have CPUs that cycle down to lower speeds. It is to save power and therefore money. This too can have a bad effect on SQL Server performance. In fact, it can have a horrible effect on any program’s performance. This is made worse in a virtual machine environment because you have less of an idea of when the machine will be cycled and the CPUs will be restored to high performance.

We were shown a manufacturer’s description of one of the known problems. The CPUs can cycle down to 800 MHz and, when they do, they stay there. Forever. Nothing will bring them back up except cycling the machine. I’m not a hardware guy, so I don’t know exactly what “cycling the machine” entails; if it means a simple reboot, that’s bad enough on a production machine, and if it is more involved than rebooting, it can’t be good.

He showed us some tools to monitor performance, and apparently you really need them, because otherwise the virtual machine admin can hide the performance metrics from normal monitors like Task Manager. The tool he suggested was CPU-Z from CPUID.com. I’ll likely check it out, but I don’t know that I’ll get to try it, as we don’t yet have any virtual machines.

One other piece of advice he gave makes me question whether virtual machines will actually be useful to us. He said that for databases over 100 GB they’re of limited utility, and that we should keep those on physical hardware unless we absolutely have to move them.

Posted in 2008, SQL Server

Resources

I’d like to recommend a web site for learning SQL Server and for all your questions as you get stuck. And you will get stuck, believe me, at least if you do anything at all beyond the basics and probably even then. It isn’t because SQL Server isn’t good or is especially difficult, it is simply the nature of complex systems. You’ll get stuck in any programming language you use, but at least these days we have the internet to help us figure our way out of our problems.

SQL Server Central is a great resource for learning new tricks, improving your coding and finding help for figuring out problems. It is mostly populated by knowledgeable, helpful people. Any on-line community will have its jerks, but I can honestly say they are few and far between on this site. Just ignore the ones that show up because there will be a ton of other people willing to pitch in and help.

In addition to the forums, and I encourage you to jump in there, they also publish articles daily. Make sure you read the comments that accompany the articles; I’ve learned some amazing things there and found links to other great articles.

I am fortunate enough to have had some articles published there. I’ll repost them here eventually, but when you publish with them, they retain exclusivity for an extremely reasonable three months.

Posted in 2008, SQL Server

Data Transformation Quirk

In SSIS, when creating a Data Flow Task, you’ll usually want a destination for your data. From everything I’ve read, if your package runs on the same server as your destination, you’ll want to use SQL Server Destination as your destination type. If you’re on a different server, you’ll want to use OLE DB, or occasionally ADO.NET, though ADO.NET is slower since it has more layers of abstraction.

I developed my entire ETL using SQL Server Destinations, as I knew I’d be running the package on the destination server. I tested it all on SQL Server 2008 SP2 and Windows Server 2003. When it came time for deployment, the production machine ran SQL Server 2008 R2 on Windows Server 2008, and it simply didn’t work. I switched over to OLE DB as my destination and it works fine. I’m looking forward to installing SP2 for SQL Server 2008 R2, as it may fix the various problems I’ve mentioned so far, but we’re waiting a few weeks to install the patch. When that’s done, I hope to retest the various bugs I’ve found and see if everything works. I’ll report the results here.

In the meantime, if you’re using SQL Server 2008 R2 on Windows Server 2008 and no patch for R2, use OLE DB as your destination. I didn’t notice any appreciable speed difference and it works quite well.

Posted in 2008, SQL Server, SSIS

File System Task

I’ve used the File System task before, but it was to do a daily copy of files in a directory. I used copy directory so I didn’t have to specify or loop through any files. It was the same process every time and the destination files could be replaced, so I used static variables for Source and Destination and everything worked fine.

This time I’m moving one file and renaming it so it’ll be unique, and I just couldn’t get it to work. I was trying to use expressions to concatenate the standard file name and an integer. It worked just fine when I’d click “Evaluate Expression”, but it would fail every time I tried to run the step. I’d get a typical, always unhelpful error from SSIS:

Failed to Lock Variable

The more verbose explanation in the message held the clue though. It told me that the variable “C:\OUT\Access\filename.accdb” could not be found. What? I didn’t have a variable by that name, that should be the contents of the variable. What the hell was going on?

Another clue was in the various instructions I found on using File System Task on-line and in books.

Set the variable’s EvaluateAsExpression property to true.

That inspired me to go through the various properties variables have in SSIS. Right under EvaluateAsExpression it says Expression.

At this point, I decided to start again from scratch. I didn’t want to take a chance that anything I’d already done wouldn’t change properly. SSIS has burned me that way in the past and that’ll be a future post.

Here’s how I’d been setting the expression so far. I created the variables, set IsSourcePathVariable and IsDestinationPathVariable to TRUE, and selected the variable for the source and the variable I thought would hold the evaluated expression as the destination. Then I went to Expressions in the File System Task and created the appropriate expression for the Destination property. When I ran it, it replaced the variable in Destination with the string I wanted to be the destination.

This time I didn’t touch Expressions in the File System Task. When I created the Destination variable, I went to the variable’s properties, set EvaluateAsExpression to TRUE and then created the concatenation expression right there in the variable’s Expression property.
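As a sketch, the variable’s Expression property might hold something like this (the variable names here are hypothetical; the (DT_WSTR, 10) cast converts the integer to a string so it can be concatenated):

@[User::OutputFolder] + @[User::BaseFileName] + (DT_WSTR, 10) @[User::FileCounter] + ".accdb"

With EvaluateAsExpression set to TRUE, the variable re-evaluates this expression each time it is read, so the File System Task always sees the current value.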

After I had the variables set up I went into the File System Task and set the variables as before, but didn’t touch Expressions within the File System Task. I ran the task and it worked beautifully. I initially tried it as a rename process, but that moved the file and didn’t leave the original. I wanted to leave the original so I changed it to copy and that worked too. Apparently it was all in where I create the expression.

I thought that perhaps I was just assigning the variable incorrectly. Up to this point I’d been doing everything with User:: variables. When I’d created the expression within File System Task, it asked for a Property name. I figured I should look for a System:: variable, but there isn’t one in the list. I think there must be some way to access that property for the Destination field, but I haven’t been able to find it as yet.

That aside, I’m satisfied with how it is working now. Next step is to set one of the variables going into the expression as the result of a query. We’ll see how that goes.

Posted in 2008, SQL Server, SSIS

Intellisense

Adding Intellisense to SQL Server Management Studio is a wonderful improvement. If you’ve used Visual Studio, you know what Intellisense is. For everyone else, Intellisense checks what you’re typing and gives you suggestions of valid options based on context. It will list the functions you might be trying to type, and it lists the parameters you need within those functions too. I find it greatly speeds up writing queries, especially inserts: once you’ve typed the table name, it’ll give you a list of the columns in that table and you can simply pick from that list with either the mouse or keyboard.

Intellisense recognizes when a table is in the database and will warn you if you’re trying to create a new one with the same name. It even recognizes any aliases you’re using in your queries. If you have variables declared and type @ it’ll give you a list of your variables and it recognizes temp tables you have declared as well.

There is one problem with Intellisense in SQL Server 2008 and SQL Server 2008 R2, though. If you use a CREATE TABLE statement, Intellisense doesn’t know the new table is there right away. To let it know, you have to click Edit, then Intellisense, then Refresh Local Cache, or use the keyboard shortcut Ctrl+Shift+R. If you’re up to date with your service packs on SQL Server 2008, you don’t need to do this; they’ve fixed the problem and Intellisense will know your table and columns are there as soon as the table is created. Unfortunately, for some reason SQL Server 2008 R2 didn’t have that fix when it was rolled out, and I don’t think it has been patched yet either. I’m sure it will be soon, but for now it is a little annoying to be working on the latest version with a bug that was fixed in the previous one.
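For example (dbo.NewTable is a made-up table for illustration):

CREATE TABLE dbo.NewTable (NewTableID int, NewTableName varchar(50))

SELECT NewTableID, NewTableName
FROM dbo.NewTable

On an unpatched instance, Intellisense will flag NewTable in the SELECT as unrecognized until you refresh the local cache with Ctrl+Shift+R, even though the batch runs fine.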

Posted in SQL Server 2008

Aliases

Aliases seem to be something people don’t think much about. They learn that you can call a table or a column something else for reference elsewhere in your query, and they go about using it. Sure, some people can’t understand why anyone would use aliases; to each their own. I like aliases: I think they make the query easier to read, easier to write and easier to understand. Let’s go to the AdventureWorks2008R2 database for an example.

If you’re referencing the table EmployeeDepartmentHistory, do you really want to type that out every time you reference it? Worse yet, if you’re using schemas and want to be specific, now you’re typing out HumanResources.EmployeeDepartmentHistory whenever you want to explicitly state a column. If you’re writing the query against a database on another server, you’ll end up with ServerName.AdventureWorks2008R2.HumanResources.EmployeeDepartmentHistory. If you’re linking that to the local table of the same name to check that the data matches, you’ll need to type that out in full for every column you reference.

Or you could type RmtEDH for the remote server table and EDH for the local table.

To use a slightly different example that’s all on the same server and in the same database, let’s use AdventureWorks2008R2 again.
We can either have the table names typed out

SELECT HumanResources.Employee.BusinessEntityID, HumanResources.Employee.CurrentFlag, HumanResources.Employee.BirthDate,
HumanResources.Employee.Gender, HumanResources.Employee.HireDate, HumanResources.Employee.JobTitle,
HumanResources.Employee.LoginID, HumanResources.Employee.MaritalStatus, HumanResources.Employee.ModifiedDate,
HumanResources.Employee.NationalIDNumber, HumanResources.Employee.OrganizationLevel,
HumanResources.Employee.OrganizationNode, HumanResources.Employee.rowguid, HumanResources.Employee.SalariedFlag,
HumanResources.Employee.SickLeaveHours, HumanResources.Employee.VacationHours,
Person.Person.FirstName, Person.Person.MiddleName, Person.Person.LastName, Person.Person.rowguid,
Person.Person.Title, Person.Person.Suffix, Person.Person.PersonType
FROM HumanResources.Employee
INNER JOIN Person.Person
ON HumanResources.Employee.BusinessEntityID = Person.Person.BusinessEntityID

Or we can use an alias

SELECT E.BusinessEntityID, E.CurrentFlag, E.BirthDate, E.Gender, E.HireDate, E.JobTitle,
E.LoginID, E.MaritalStatus, E.ModifiedDate, E.NationalIDNumber, E.OrganizationLevel,
E.OrganizationNode, E.rowguid, E.SalariedFlag, E.SickLeaveHours, E.VacationHours,
P.FirstName, P.MiddleName, P.LastName, P.rowguid, P.Title, P.Suffix, P.PersonType
FROM HumanResources.Employee E
INNER JOIN Person.Person P
ON E.BusinessEntityID = P.BusinessEntityID

Which scans faster? Which is really easier to read? Sure, you could qualify only the columns that are in both tables (BusinessEntityID and rowguid), but then anyone else reading this query won’t know which column belongs to which table unless they go look. That’s not the result you want either; you want the query to be easily readable by anyone who comes along and looks at your code. Now imagine what the query would look like if the tables were on another server.

That’s the basics of table aliasing, but I’ve come across a strange execution of this in queries I’ve inherited. If we have a query with 4 joined tables, like

SELECT E.BusinessEntityID, E.BirthDate, E.Gender, E.JobTitle, E.MaritalStatus,
P.FirstName, P.MiddleName, P.LastName, P.Title, P.Suffix, A.AddressLine1, A.City
FROM HumanResources.Employee E
INNER JOIN Person.Person P
ON E.BusinessEntityID = P.BusinessEntityID
INNER JOIN Person.BusinessEntityAddress BA
ON P.BusinessEntityID = BA.BusinessEntityID
INNER JOIN Person.[Address] A
ON A.AddressID = BA.AddressID

What I find in the code instead is

SELECT A.BusinessEntityID, A.BirthDate, A.Gender, A.JobTitle, A.MaritalStatus,
B.FirstName, B.MiddleName, B.LastName, B.Title, B.Suffix, D.AddressLine1, D.City
FROM HumanResources.Employee A
INNER JOIN Person.Person B
ON A.BusinessEntityID = B.BusinessEntityID
INNER JOIN Person.BusinessEntityAddress C
ON B.BusinessEntityID = C.BusinessEntityID
INNER JOIN Person.[Address] D
ON D.AddressID = C.AddressID

There’s no logical connection between the alias given and the table it references. If the tables are listed in a different order later in another query in the same procedure, they’d have different aliases, making it much more confusing for anyone trying to understand the whole list of queries. I’ve been told it is common practice to alias tables this way in Oracle, but I have no idea if that is true or not. You can make an alias just about anything you want, so make it something that’ll make it easy to understand; it should have some connection to the original table name.

I like one- or two-letter abbreviations, so I tend to go with unique letters from the beginning of the table name. Of course, that can run into the same problem as the method I mentioned above, where the table Person is P in one query and the table Preferences is P in another. It certainly isn’t a bad idea to have a list of standard aliases for all the tables in your database. It is a little more work, but in the long run it can make your job easier. Per and Pref are probably good aliases in that they’re not too long and they’re distinct. If the table is coming from another server, database or schema, just preface the standard abbreviation with an abbreviation of the server, database or schema.
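Here’s a sketch of standing aliases in action (the Person and Preferences tables and their columns are hypothetical):

SELECT Per.FirstName, Per.LastName, Pref.ContactByEmail
FROM dbo.Person Per
INNER JOIN dbo.Preferences Pref
ON Per.PersonID = Pref.PersonID

Anyone reading this later can tell at a glance which table each column came from, no matter where in the procedure the query appears.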

Intellisense will recognize whatever alias you create, so don’t worry about losing that functionality.

There’s another time when table aliases aren’t just a good idea, they’re required. If you’ve created a subquery and joined it like it is a table, you have to have an alias at the end. For example

SELECT E.BusinessEntityID, E.BirthDate, E.Gender, E.JobTitle, E.MaritalStatus,
P.FirstName, P.MiddleName, P.LastName, P.Title, P.Suffix, A.AddressLine1, A.City
FROM HumanResources.Employee E
INNER JOIN Person.Person P
ON E.BusinessEntityID = P.BusinessEntityID
INNER JOIN Person.BusinessEntityAddress BA
ON P.BusinessEntityID = BA.BusinessEntityID
INNER JOIN (SELECT AddressID, AddressLine1, City
FROM Person.[Address]
WHERE StateProvinceID = 47) A
ON A.AddressID = BA.AddressID

This is just like the query above, but we’re using a subquery to limit our search to New Jersey. Without that alias A at the end, we’d be unable to join it to the rest of the query. If you’re going to use aliases in your subquery, try not to reuse the same alias at different levels of your query for readability. SQL Server won’t have a problem with it, but your eyes might.

SELECT E.BusinessEntityID, E.BirthDate, E.Gender, E.JobTitle, E.MaritalStatus,
P.FirstName, P.MiddleName, P.LastName, P.Title, P.Suffix, NJ.AddressLine1, NJ.City
FROM HumanResources.Employee E
INNER JOIN Person.Person P
ON E.BusinessEntityID = P.BusinessEntityID
INNER JOIN Person.BusinessEntityAddress BA
ON P.BusinessEntityID = BA.BusinessEntityID
INNER JOIN (SELECT A.AddressID, A.AddressLine1, A.City
FROM Person.[Address] A
WHERE A.StateProvinceID = 47) NJ
ON NJ.AddressID = BA.AddressID

In addition to aliasing tables, you can alias columns. If you have a long column name, you can certainly shorten it; for example, in the Person.Address table referenced above, there’s a column called StateProvinceID. To shorten it you could write SELECT StateProvinceID AS SPID. I don’t normally use column aliases just to shorten names, though; there are more important times to use them. If you’re doing calculations, you’ll frequently want to alias the calculated column, for example

SELECT P.FirstName, P.LastName, P.BusinessEntityID, (SP.SalesYTD * SP.CommissionPct) as Commission
FROM Sales.SalesPerson SP
INNER JOIN Person.Person P
ON SP.BusinessEntityID = P.BusinessEntityID

Again, SQL Server won’t need that alias unless you use it as a subquery, but it does make it easier to read both the query and the result window.

The other main reason you’ll want to use aliases is if you have subqueries that pull back the same column, you’ll want to name them different things for readability. You could just use the aliases to tell them apart, but if they’re bubbling up through more than one subquery, that won’t work.

SELECT CurrYear.BusinessEntityID, CurrYear.FirstName, CurrYear.LastName, CurrYear.Commission as CurrComm, LastYear.Commission as LYComm
FROM
(SELECT P.FirstName, P.LastName, P.BusinessEntityID, (SP.SalesYTD * SP.CommissionPct) as Commission
FROM Sales.SalesPerson SP
INNER JOIN Person.Person P
ON SP.BusinessEntityID = P.BusinessEntityID) CurrYear
LEFT JOIN
(SELECT P.FirstName, P.LastName, P.BusinessEntityID, (SP.SalesLastYear * SP.CommissionPct) as Commission
FROM Sales.SalesPerson SP
INNER JOIN Person.Person P
ON SP.BusinessEntityID = P.BusinessEntityID) LastYear
ON CurrYear.BusinessEntityID = LastYear.BusinessEntityID

If that gets used as a subquery, you’ll need those aliases CurrComm and LYComm because you can’t reference anything more than 1 level below.

Think about aliases and use them, you’ll save yourself or someone else a lot of work later. Heck you’ll probably save yourself some time the first time through too.

Posted in 2008, SQL Server

How much data is there?

We’re all familiar with trying to figure out just how much data is in our database. To get the approximate size of the whole database on disk, you can open SQL Server Management Studio, right-click the database you’re investigating, click Properties and then click Files. To figure out how many rows a table has, you can use the query SELECT COUNT(*) FROM YourTable and, if it is a large table and/or a busy database, suffer through the performance hit.

There’s a better way. Not only that, it gives you more information, doesn’t cause performance problems and does it all in one step. What am I talking about? The built-in stored procedure sp_spaceused. Most of the time I’m using it, I’m looking for information about a specific table. Let’s use the AdventureWorks database for an example and find out how big the Person.Address table is.

All you need is the name of the stored procedure followed by the name of the table in single quotes
sp_spaceused 'Person.Address'
And here’s what it returns:

name      rows    reserved   data      index_size   unused
Address   19614   5000 KB    2240 KB   2504 KB      256 KB

This is great! It tells us how many rows are in the table, how much space is reserved, how much is actually used and how much space the index is taking. The best part is, it didn’t even touch your table to do it. It queries the metadata stored in your database, basically the information SQL Server keeps about how your database is put together.

If you want to get space information about the whole database, just run sp_spaceused and don’t put anything after it.
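Both forms are easy to remember, and there’s also an optional @updateusage parameter that makes SQL Server recalculate the usage numbers before reporting them:

sp_spaceused 'Person.Address'

sp_spaceused

sp_spaceused @updateusage = N'TRUE'

The @updateusage form takes a little longer but is useful if you suspect the metadata counts have drifted from reality.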

This is a tool I love, so I’ll likely do more research on exactly what it is doing and follow this post up later.

Posted in 2008, SQL Server

Set Statistics

Any time you’re developing queries in SQL Server, whether they’re single use or intended for inclusion in a stored procedure that’ll be run frequently, you want to know how well your query is running. If it is intended for production, you’ll want to have an efficient query that runs quickly and doesn’t take up more resources than it needs to. If it is a run once, you still want those same qualities, especially since you’ll likely run it a few times to test it. There are plenty of tools with SQL Server that can help you monitor your query’s performance: Activity Monitor, SQL Server Profiler, Estimated and Actual execution plans and more.

I think my favorite though is 2 simple statements that you place before and after your query.
SET STATISTICS IO ON
SET STATISTICS TIME ON

Your query here

SET STATISTICS TIME OFF
SET STATISTICS IO OFF

SET STATISTICS TIME tells you how long your code takes to run. Sure, the little timer at the bottom of the screen also tells you this, but this is far more accurate. If you click on the Messages tab after the query has run, you’ll see how long each statement took to run in milliseconds. Not only that, it tells you both how much CPU time was used and how long the elapsed time was. If you have two INSERTs and three SELECTs that return results, you’ll get five sets of CPU and elapsed times.

SET STATISTICS IO is more complicated. It details the disk activity for each statement in your code. It tells you the tables referenced, how many index/table scans are performed on each table, how many logical and physical reads there are, how many read-ahead reads are done and more. If you study what each of those stats means, you can get a really good, immediate idea of how well your query is running and what kind of steps you might want to try to get your code running more efficiently.
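For illustration, the Messages tab output for a single SELECT looks something like this (the numbers here are made up; yours will vary):

Table 'Address'. Scan count 1, logical reads 216, physical reads 0, read-ahead reads 0.

SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 45 ms.

Logical reads are pages served from memory, while physical and read-ahead reads had to come from disk; watching those numbers drop is a quick way to confirm a refinement actually helped.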

These are usually my first resort for checking my queries. Write down the numbers most important to you when the results are displayed so you have something to compare later refinements to.

Posted in 2008, SQL Server