Pages

Sunday, October 17, 2010

Tools of the Trade: Part III – Kerberos (I)


In this series of posts, we're going to take a look at some of the tools we at Microsoft Developer Support - Database commonly use, how to use them, and the types of issues they can be used to resolve. In this article, Adam Saxton starts down the long road of double-hop troubleshooting by providing a gentle introduction to Kerberos and how to recognize Kerberos-related issues with SQL Server. Future articles will move beyond identification and discuss the tools and processes we use to troubleshoot such issues.

Kerberos

We receive numerous calls where the problem falls under Kerberos. It presents itself when trying to establish a connection to SQL Server (or a linked server). I'm going to discuss what Kerberos is, how to recognize when there is a problem with it, and some of the items that need to be examined in order to resolve a Kerberos issue with SQL Server. As there is a lot of information on this topic, I'll be splitting it up over a few blog posts.

What is Kerberos?

Kerberos is a secure method for authenticating a request for a service in a computer network, and is used to verify a user or host identity. Kerberos lets a user request an encrypted "ticket" from an authentication process that can then be used to request a particular service from a server. The user's password does not have to pass through the network.

Put simply, it is a more secure form of Windows Authentication than NTLM. Also, understand that Kerberos is not a Windows-specific technology. Windows will always try Kerberos first over TCP; if that doesn't work, it will drop down to NTLM. The exception is when you are accessing a local resource: NTLM will always be used for local resources. Also note that Kerberos cannot currently be used over Named Pipes or Shared Memory.

How does Kerberos work?

There are a lot of details to how Kerberos works. I'm not going to get that deep into it within this blog series.
I'm going to relate this to how it works with SQL Server. To do that, I'm going to use the classic example of a double-hop with a web server. We use the word "double-hop" to describe the situation where the IIS server is using a resource that is located on a different server. In this case, the first "hop" is from the web browser client to the IIS server (the ASP or ASPX page); the second hop is to the SQL Server on a different machine. The server on the second hop is also referred to as the backend server.

In order to successfully establish a connection using Kerberos, a SQL Server ServicePrincipalName (SPN) needs to exist within Active Directory. I will talk more about SPNs and where they are located in a later post, as the topic is complex enough to deserve its own. I would say about 80-90% of the Kerberos-related calls we receive involve either a missing SPN or duplicate SPNs.

When you log on to your machine, you get what we call a Kerberos ticket. To be more specific, you get a Ticket-Granting Ticket (TGT). You use the TGT as a master ticket to access all Kerberos services on a network. A Kerberos ticket includes all the user credentials or computer credentials in an encrypted format. These credentials are used to identify a specific user or a specific computer on a network for access to Kerberos services. When you access a Kerberos service, that service uses the ticket that you received to authenticate you. After the service verifies your identity and authenticates you, it issues a service ticket. This is where SPNs come into play. Think of the SPN as a pointer for Kerberos, so it knows where to go. That's why, if the SPN is missing or duplicated, Kerberos doesn't know what to do.

How to recognize a Kerberos error with SQL Server:

There are a few errors from which we can make an educated guess that a connectivity issue is Kerberos related.
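Since missing or duplicate SPNs account for the bulk of these calls, a quick first check is to list what is registered for the SQL Server service account with the setspn utility. The commands below are a sketch: the domain, account name, host name, and port are hypothetical, and setspn -S (which checks for duplicates before registering) is only available on Windows Server 2008 and later; on older systems setspn -A is used instead.

```
:: List the SPNs registered for the SQL Server service account
setspn -L CONTOSO\sqlsvc

:: Register an SPN for a default instance listening on TCP port 1433
setspn -S MSSQLSvc/sqlhost.contoso.com:1433 CONTOSO\sqlsvc
```

If setspn -L shows no MSSQLSvc entry, or the same MSSQLSvc entry appears under two different accounts, that matches the missing/duplicate SPN scenarios described above.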
Cannot generate SSPI context

Login failed for user '(null)'

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

Those are the three biggest errors we see in relation to Kerberos with SQL Server. These errors occur when the credentials from the first hop cannot be passed to the second hop. Usually this is because we are falling back to NTLM or our SPN configuration is not correct. There are also other settings within Active Directory that come into play, as well as certain machine configuration depending on how services are set up. For now, let's focus on the basics; I'll get more into those settings and machine configurations in a later post.

One of the last things I'd like to mention is that I've seen customers confuse the following error with a Kerberos issue:

Login failed for user 'SomeUser'

SomeUser could be whatever user (SQL or Windows) you are trying to connect with. If you see the actual username in the error, the credentials actually reached the SQL Server. The issue at that point lies in the SQL Server security configuration itself (either the server as a whole, or the database you are trying to connect to). This is not a Kerberos-related issue, but instead a much simpler login failure.

Resources:

Kerberos in Windows 2000
Kerberos in Windows 2003
How to use Kerberos authentication in SQL Server

Posted By: Adam Saxton
Source: blogs.msdn.com

Commonly used 32-bit CPU registers and their purpose
While debugging a dump, we commonly see various CPU registers. Each register has a different purpose. I am trying to put them together in one place for your easy reference. You can list all the registers in WinDbg with the "r" command. Please note that a given register may be used for general purposes at any time.

EAX: Arithmetic operations, I/O port access, and interrupt calls. Also conventionally holds function return values.
EDX: Arithmetic operations, I/O port access, and interrupt calls. If a multiplication produces a result larger than a single register can hold, the most significant 32 bits are stored in EDX and the least significant 32 bits in EAX.
EBX: General-purpose register; often used as a base pointer for memory access.
ECX: Used for loop counters. Also holds the "this" pointer for a class method (in the Microsoft __thiscall convention).
EIP: Instruction pointer. Points to the next instruction to execute.
ESP: Stack pointer. Points to the top of the stack.
EBP: Base/frame pointer. Marks the base of the current stack frame.

Author : Enamul (MSFT), SQL Developer Technical Lead
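The EDX:EAX pairing for wide multiplication can be illustrated with a small sketch. Python is used here purely for the arithmetic; the variable names edx and eax are just labels for the high and low 32-bit halves of the product, mirroring what the x86 MUL instruction produces.

```python
# 32-bit MUL: the 64-bit product is split across EDX (high 32 bits)
# and EAX (low 32 bits).
a = 0xFFFFFFFF          # largest 32-bit operand
b = 0xFFFFFFFF
product = a * b         # full 64-bit result

edx = (product >> 32) & 0xFFFFFFFF   # most significant half
eax = product & 0xFFFFFFFF           # least significant half

print(hex(edx), hex(eax))  # 0xfffffffe 0x1
```

Recombining the two halves as (EDX << 32) | EAX recovers the full product, which is exactly what debuggers do when displaying a 64-bit result from 32-bit code.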
Source: blogs.msdn.com

How to install/enable .Net 3.5 SP1 on Windows Server 2008 R2 for SQL Server 2008 and SQL Server 2008 R2
The .NET Framework 3.5 SP1 (also referred to as .NET Framework 3.5.1) is a prerequisite for SQL Server 2008. SQL Server 2008 Setup for a standalone instance will install the .NET Framework 3.5 SP1 if it is not already installed. In Windows Server 2008 R2, the .NET Framework is a feature, and installing it is different compared to older versions of the Windows operating system. In previous versions, one could either download the .NET Framework 3.5.1 install binaries from the Microsoft download site or use the install binaries from the redist folder of the SQL Server 2008 installation media. Starting with Windows Server 2008 R2, the method of installing .NET Framework 3.5.1 has changed. This document explains how to verify that .NET Framework 3.5.1 is installed and, if it is not installed, how you can add it.

How to verify if .NET Framework 3.5 SP1 is installed:

Here are the steps to verify that .NET Framework 3.5.1 is installed on Windows Server 2008 R2.

1. Click the Start button in the lower left-hand corner of the display.
2. Highlight Administrative Tools and select Server Manager.
3. In the Server Manager interface, click Features to display all the installed features in the right-hand pane.
4. Verify that .NET Framework 3.5.1 is listed.

If the .NET Framework 3.5.1 feature is not listed, you can use either of the following methods to install it:

Method 1: Using the Server Manager interface

1. In the Server Manager interface, select Add Features to display a list of possible features.
2. In the Select Features interface, expand .NET Framework 3.5.1 Features.
3. Once you expand .NET Framework 3.5.1 Features, you will see two check boxes: one for .NET Framework 3.5.1 and the other for WCF Activation. Check the box next to .NET Framework 3.5.1 and click Next.

Note: If you check .NET Framework 3.5.1 Features without expanding it, you will get a pop-up titled Add Features Wizard. Click Cancel, expand .NET Framework 3.5.1 Features, and then check the .NET Framework 3.5.1 check box below it.
You cannot install .NET Framework 3.5.1 Features unless the required role services and features are also installed. In the Confirm Installation Selections interface, review the selections and then click Install. Allow the installation process to complete and then click Close.

Method 2: Using PowerShell

1. Click the Start button in the lower left-hand corner of the display.
2. Highlight All Programs and select Accessories.
3. Expand Windows PowerShell, right-click Windows PowerShell, and select Run as administrator.
4. Click Yes on the User Account Control box.
5. At the PowerShell command prompt, type the following commands, pressing ENTER after each command:

Import-Module ServerManager
Add-WindowsFeature as-net-framework
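After the installation completes, you can confirm the feature state from the same elevated PowerShell session. This is a sketch using Get-WindowsFeature, which ships in the same ServerManager module imported above; the wildcard pattern simply narrows the output to the .NET Framework entries.

```powershell
Import-Module ServerManager

# The "Installed" column shows whether the feature is present
Get-WindowsFeature | Where-Object { $_.Name -like "*NET-Framework*" }
```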
Source: blogs.msdn.com

SQL Null: a set of posts on the null
Null is a special marker used in Structured Query Language (SQL) to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL Null serves to fulfill the requirement that all true relational database management systems (RDBMS) support a representation of "missing information and inapplicable information". Codd also introduced the use of the lowercase Greek omega (ω) symbol to represent Null in database theory. NULL is also an SQL reserved keyword used to identify the Null special marker.

Null has been the focus of controversy and a source of debate because of its associated three-valued logic (3VL), special requirements for its use in SQL joins, and the special handling required by aggregate functions and SQL grouping operators. Although special functions and predicates are provided to properly handle Nulls, opponents feel that resolving these issues introduces unnecessary complexity and inconsistency into the relational model of databases.

Since Null is not a member of any data domain, it is not considered a "value", but rather a marker (or placeholder) indicating the absence of value. Because of this, comparisons with Null can never result in either True or False, but always in a third logical result, Unknown.

In my next few posts, I will explore the null.
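The three-valued logic mentioned above is easy to sketch. The following is an illustrative Python model (my own, not from the post), using None to stand in for SQL's Unknown, following the Kleene 3VL truth tables that SQL adopts:

```python
# Three-valued logic: True, False, and None (standing in for Unknown).
def and3(a, b):
    # AND is False whenever either side is False, even if the other is Unknown.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def or3(a, b):
    # OR is True whenever either side is True, even if the other is Unknown.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def not3(a):
    # NOT Unknown is still Unknown.
    return None if a is None else (not a)

# Comparing anything with Null yields Unknown, never True or False;
# this is why "WHERE col = NULL" matches no rows and IS NULL exists.
print(and3(True, None))   # None
print(or3(True, None))    # True
print(not3(None))         # None
```

Note how Unknown is "absorbed" by a dominant operand (False for AND, True for OR) but propagates otherwise; this asymmetry is the source of much of the surprise around Null.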
Source: brian.chipsofttech.com

WHITEPAPER - Online Indexing Operations in SQL Server 2005
I wanted to post a quick blog about a new whitepaper from Microsoft. Introduced in Microsoft SQL Server 2005 Enterprise Edition, the online index feature provides a powerful way to perform maintenance operations such as rebuilding or creating indexes in a production system without sacrificing DML concurrency. Users are not blocked from querying and updating the underlying table during the index operation. This white paper is targeted at database administrators who want to use the online index feature of SQL Server 2005 Enterprise Edition to increase server availability in their production environment and need to understand the advantages and disadvantages of using this feature. You can read about it here.

Posted By: Sanchan Saxena
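As a quick illustration of what the paper covers, an online index operation is requested through the ONLINE option in T-SQL. The table and index names below are hypothetical, and ONLINE = ON requires Enterprise Edition:

```sql
-- Rebuild an existing index while queries and updates continue
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- A new index can also be created online
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (ONLINE = ON);
```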
Source: blogs.msdn.com

Class not registered error after restoring an Analysis Services database that uses the SQL Server 2005 SQL Native Client.
After restoring an Analysis Services database that uses the SQLNCLI.1 SQL Native Client (SNAC) on a server which has never had SQL Server 2005 installed, an error similar to the following occurs during processing operations:

Internal error: The operation terminated unsuccessfully.
Internal error: The operation terminated unsuccessfully.
OLE DB error: OLE DB or ODBC error: Class not registered.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'.
Errors in the OLAP storage engine: An error occurred while processing the 'Internet_Sales_2001' partition of the 'Internet Sales' measure group for the 'Adventure Works' cube from the Adventure Works DW database.
Server: The operation has been cancelled.

During execution of queries that access data contained in ROLAP storage, an error message similar to the following is returned:

Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Adventure Works DW', Name of 'Internet Sales Order Details' was being processed.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'.
OLE DB error: OLE DB or ODBC error: Class not registered.
Server: The operation has been cancelled.

Opening the property sheet for the data source from SQL Server Management Studio (SSMS) or the Data Source Designer in Business Intelligence Development Studio (BIDS) and attempting to edit the connection string returns the following error from Connection Manager:

The specified provider is not supported.
Please choose different provider in connection manager

Checking the connection string from either SSMS or BIDS, it is similar to the following:

Provider=SQLNCLI.1;Data Source=MyServer;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW

Changing the connection string to use the native OLE DB provider avoids the error, similar to the following:

Provider=SQLOLEDB.1;Password=Olapolap1;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer

Likewise, modifying the connection string to use the SQL Server 2008 SNAC also works, similar to the following:

Provider=SQLNCLI10.1;Integrated Security="";Persist Security Info=False;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer;Initial File Name="";Server SPN=""

The issue occurs because the SQLNCLI.1 provider does not ship with SQL Server 2008, and the provider name for SNAC connections changed in SQL Server 2008. The SNAC provider names for SQL Server 2005 and SQL Server 2008 are:

SQL Server 2005: SQLNCLI.1
SQL Server 2008: SQLNCLI10.1

To work around this issue, any of the following options can be used:

- Edit the data source(s) for the database and switch from the SQLNCLI.1 provider to the SQLOLEDB.1 provider.
- Edit the data source(s) for the database and switch from the SQLNCLI.1 provider to the SQLNCLI10.1 Native Client provider.
- Download and install the SQL Server 2005 SQL Native Client from the following link: http://www.microsoft.com/downloads/details.aspx?FamilyID=d09c1d60-a13c-4479-9b91-9e8b9d835cdc&DisplayLang=en

John Desch
Microsoft SQL Server Escalation Engineer
Source: blogs.msdn.com

SQL Server 2005 setup fails when MSXML Core Services 6.0 Service Pack 2 has already been installed
There is a known issue with SQL Server setup when the MSXML6 component update has been installed on the system. The problem described in KB 968749 (http://support.microsoft.com/kb/968749) has raised a lot of concerns from customers. The concerns relate to the fact that the solution is manual and not usable in large enterprise environments. In order to automate the MSXML6 component uninstall, we have created an automated solution for this issue. However, the solution needs to be implemented on a case-by-case basis.

If you are experiencing the issue described in KB 968749 and need an automated solution, please contact SQL CSS. Please see details on how to open an incident here: http://support.microsoft.com/. The incident for this specific issue is going to be free of charge.

We apologize for any inconvenience.

SQL CSS

Follow us on twitter (http://twitter.com/MicrosoftSQLCSS)
Source: blogs.msdn.com

Step by step N-tier configuration of Sync services for ADO.NET 2.0
Recently I have worked on a couple of cases where customers were trying to use an N-tier configuration of Sync Services for ADO.NET on IIS. In this blog we will use IIS and set up an N-tier configuration of Sync Services for ADO.NET version 2.0, which comes as part of Microsoft Sync Framework 1.0.

Preparing the environment:

We need a development machine that we will use to develop the application, a middle-tier server where IIS is installed and configured for WCF services, and a database server. If you like, you can use the same development box for IIS and the database server too.

a) Client tier

We will develop the application (client and WCF service) and run the client part of the application on this machine. We will use Visual Studio 2008 SP1, so it should be installed on this machine.

Install Sync Services on this client machine. Sync Services for ADO.NET version 2.0 comes with Microsoft Sync Framework 1.0, which gets installed if you install SQL Server 2008 or Visual Studio 2008 SP1. You can also download it from http://www.microsoft.com/downloads/details.aspx?FamilyId=C88BA2D1-CEF3-4149-B301-9B056E7FB1E6&displaylang=en

Install SQL Server Compact 3.5 Service Pack 1 on this client machine if it is not already there. SQL Server Compact is available in three ways: integrated with SQL Server 2008 or later versions, integrated with Microsoft Visual Studio 2008 or later versions, and as a download from the Web site at: http://www.microsoft.com/sqlserver/2005/en/us/compact-downloads.aspx

b) Middle tier

If you want to use a separate middle-tier server for IIS to run the WCF service, make sure you have IIS installed and configured on this box to run WCF. Install Sync Services on this machine too. I have used the same development machine for the middle tier, so I did not have to install it again.

c) Database server

Install a version of SQL Server other than SQL Server Compact to act as the server database. If you like, you can use SQL Server Express, which installs with Visual Studio.
I have used the Developer edition of SQL Server 2008 SP1. We are going to use a sample database that should be prepared by following the article "Setup Scripts for Sync Services How-to Topics" at: http://msdn.microsoft.com/en-us/library/bb726041.aspx. Copy the T-SQL statements of "Custom Change Tracking for Offline Scenarios", which create a custom change tracking infrastructure. Once you run this script successfully from a query window in your SQL Server, it will create a new database by the name SyncSamplesDb.

Developing and deploying the WCF service:

Create a WCF Service Application in Visual Studio 2008. (You may need to run Visual Studio as Administrator to create virtual directories in IIS.) Type "WcfForSync" as the name of the project and click the "OK" button. It will create the WCF project with IService1.cs, Service1.svc, Service1.svc.cs, and Web.config files along with other files and folders. I will keep these default files for simplicity.

Open the IService1.cs file by double-clicking it and replace the code with the code below. (Note that the code samples in the following sections have been taken from the MSDN articles mentioned in the reference section at the bottom, with little modification.)

    using System.Collections.ObjectModel;
    using System.ServiceModel;
    using System.Data;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.Server;

    namespace WcfForSync
    {
        [ServiceContract]
        public interface IService1
        {
            [OperationContract()]
            SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession);

            [OperationContract()]
            SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession);

            [OperationContract()]
            SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession);

            [OperationContract()]
            SyncServerInfo GetServerInfo(SyncSession syncSession);
        }
    }

Next add a class file by the name SampleServerSyncProvider.cs and replace the code in this file with the code below:

    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.Server;

    namespace WcfForSync
    {
        //Create a class that is derived from
        //Microsoft.Synchronization.Server.DbServerSyncProvider.
        public class SampleServerSyncProvider : DbServerSyncProvider
        {
            public SampleServerSyncProvider()
            {
                //Create a connection to the sample server database.
                Utility util = new Utility();
                SqlConnection serverConn = new SqlConnection(util.ServerConnString);
                this.Connection = serverConn;

                //Create a command to retrieve a new anchor value from
                //the server. In this case, we use a timestamp value
                //that is retrieved and stored in the client database.
                //During each synchronization, the new anchor value and
                //the last anchor value from the previous synchronization
                //are used: the set of changes between these upper and
                //lower bounds is synchronized.
                //
                //SyncSession.SyncNewReceivedAnchor is a string constant;
                //you could also use @sync_new_received_anchor directly in
                //your queries.
                SqlCommand selectNewAnchorCommand = new SqlCommand();
                string newAnchorVariable = "@" + SyncSession.SyncNewReceivedAnchor;
                selectNewAnchorCommand.CommandText = "SELECT " + newAnchorVariable + " = min_active_rowversion() - 1";
                selectNewAnchorCommand.Parameters.Add(newAnchorVariable, SqlDbType.Timestamp);
                selectNewAnchorCommand.Parameters[newAnchorVariable].Direction = ParameterDirection.Output;
                selectNewAnchorCommand.Connection = serverConn;
                this.SelectNewAnchorCommand = selectNewAnchorCommand;

                //Create a SyncAdapter for the Customer table by using
                //the SqlSyncAdapterBuilder:
                //  * Specify the base table and tombstone table names.
                //  * Specify the columns that are used to track when
                //    changes are made.
                //  * Specify download-only synchronization.
                //  * Call ToSyncAdapter to create the SyncAdapter.
                //  * Specify a name for the SyncAdapter that matches the
                //    name specified for the corresponding SyncTable.
                //    Do not include the schema names (Sales in this case).
                SqlSyncAdapterBuilder customerBuilder = new SqlSyncAdapterBuilder(serverConn);
                customerBuilder.TableName = "Sales.Customer";
                customerBuilder.TombstoneTableName = customerBuilder.TableName + "_Tombstone";
                customerBuilder.SyncDirection = SyncDirection.DownloadOnly;
                customerBuilder.CreationTrackingColumn = "InsertTimestamp";
                customerBuilder.UpdateTrackingColumn = "UpdateTimestamp";
                customerBuilder.DeletionTrackingColumn = "DeleteTimestamp";
                SyncAdapter customerSyncAdapter = customerBuilder.ToSyncAdapter();
                customerSyncAdapter.TableName = "Customer";
                this.SyncAdapters.Add(customerSyncAdapter);
            }
        }

        public class Utility
        {
            //Return the server connection string.
            public string ServerConnString
            {
                get { return @"Data Source=SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; }
            }
        }
    }

Note: You need to update the connection string in the above Utility class to connect to your SQL Server.

Open the Service1.svc.cs file in the project by double-clicking it and replace the existing code with the code below:

    using System.Collections.ObjectModel;
    using System.ServiceModel;
    using System.Data;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.Server;

    namespace WcfForSync
    {
        // NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config.
        public class Service1 : IService1
        {
            private SampleServerSyncProvider _serverSyncProvider;

            public Service1()
            {
                this._serverSyncProvider = new SampleServerSyncProvider();
            }

            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public virtual SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)
            {
                return this._serverSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession);
            }

            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public virtual SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession)
            {
                return this._serverSyncProvider.GetChanges(groupMetadata, syncSession);
            }

            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public virtual SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession)
            {
                return this._serverSyncProvider.GetSchema(tableNames, syncSession);
            }

            [System.Diagnostics.DebuggerNonUserCodeAttribute()]
            public virtual SyncServerInfo GetServerInfo(SyncSession syncSession)
            {
                return this._serverSyncProvider.GetServerInfo(syncSession);
            }
        }
    }

The application requires references to Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll and Microsoft.Synchronization.Data.Server.dll. Right-click "References" in the project and click Add Reference… Select Microsoft.Synchronization.dll (location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\x86), and next add the references to the other two DLLs (location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\ADO.NET\V2.0\x86).

Now you should be able to build the project successfully. Once the build succeeds, publish the WCF service to IIS. Go to the properties of the project and, under the Web tab, type the IIS server information (middletierserver) and click "Create Virtual Directory". From IIS you can see that a virtual directory has been created. If you browse the WCF service (http://middletierserver/WcfForSync/Service1.svc) you should get the service page. I have used the same development machine for IIS, so it shows "localhost" in the URL.

Note: When trying to browse the WCF service I have noticed the following error on some IIS machines:

HTTP Error 404.3 – Not Found
The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map. Detailed Error Information: Module StaticFileModule.

If you encounter this error, please take the necessary action as per the article at: http://blogs.msdn.com/rjohri/archive/2009/06/29/the-page-you-are-requesting-cannot-be-served-because-of-the-extension-configuration.aspx

Developing the client application and consuming the WCF service:

Create a Console application in Visual Studio. Once the project is created, reference the DLLs Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll, and Microsoft.Synchronization.Data.SqlServerCe.dll as before.
You also need to reference System.Data.SqlServerCe.dll (you should have this dll on your machine once you installed SQL Server Compact 3.5 Service Pack 1, location of this dll on my machine: C:\Program Files (x86)\Microsoft SQL Server Compact Edition\v3.5\Desktop as shown below) Now we need to do service reference to our WCF that we developed and deployed before. Right click on the Reference in this client project and select Add Service Reference… In the Add Service Reference screen you type the URL of our WCF and click on Go button as shown below: Click on “OK” button, it will create service reference as below: Replace the code in the file Program.cs with the following code: using System; using System.IO; using System.Data; using System.Data.SqlClient; using System.Data.SqlServerCe; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.SqlServerCe; namespace ClientForSync {     class Program     {         static void Main(string[] args)         {             //The Utility class handles all functionality that is not             //directly related to synchronization, such as holding connection             //string information and making changes to the server database.             Utility util = new Utility();             //The SampleStats class handles information from the SyncStatistics             //object that the Synchronize method returns.             SampleStats sampleStats = new SampleStats();             //Delete and re-create the database. The client synchronization             //provider also enables you to create the client database             //if it does not exist.             ////util.SetClientPassword();             util.RecreateClientDatabase();             //Initial synchronization. Instantiate the SyncAgent             //and call Synchronize.             
SampleSyncAgent sampleSyncAgent = new SampleSyncAgent();             SyncStatistics syncStatistics = sampleSyncAgent.Synchronize();             sampleStats.DisplayStats(syncStatistics, "initial");             //Make changes on the server.             util.MakeDataChangesOnServer();             //Subsequent synchronization.             syncStatistics = sampleSyncAgent.Synchronize();             sampleStats.DisplayStats(syncStatistics, "subsequent");             //Return server data back to its original state.             util.CleanUpServer();             //Exit.             Console.Write("\nPress Enter to close the window.");             Console.ReadLine();         }     }     //Create a class that is derived from     //Microsoft.Synchronization.SyncAgent.     public class SampleSyncAgent : SyncAgent     {         public SampleSyncAgent()         {             //Instantiate a client synchronization provider and specify it             //as the local provider for this synchronization agent.             this.LocalProvider = new SampleClientSyncProvider();             //The remote provider now references a proxy instead of directly referencing the server provider. The proxy is created by passing a reference to a WCF service             ServiceReference1.Service1Client serviceProxy = new ServiceReference1.Service1Client();             this.RemoteProvider = new ServerSyncProviderProxy(serviceProxy);             //Add the Customer table: specify a synchronization direction of             //DownloadOnly.             SyncTable customerSyncTable = new SyncTable("Customer");             customerSyncTable.CreationOption = TableCreationOption.DropExistingOrCreateNewTable;             customerSyncTable.SyncDirection = SyncDirection.DownloadOnly;             this.Configuration.SyncTables.Add(customerSyncTable);         }     }     //Create a class that is derived from     //Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider.     
//You can just instantiate the provider directly and associate it     //with the SyncAgent, but you could use this class to handle client     //provider events and other client-side processing.     public class SampleClientSyncProvider : SqlCeClientSyncProvider     {         public SampleClientSyncProvider()         {             //Specify a connection string for the sample client database.             Utility util = new Utility();             this.ConnectionString = util.ClientConnString;         }     }     //Handle the statistics that are returned by the SyncAgent.     public class SampleStats     {         public void DisplayStats(SyncStatistics syncStatistics, string syncType)         {             Console.WriteLine(String.Empty);             if (syncType == "initial")             {                 Console.WriteLine("****** Initial Synchronization ******");             }             else if (syncType == "subsequent")             {                 Console.WriteLine("***** Subsequent Synchronization ****");             }             Console.WriteLine("Start Time: " + syncStatistics.SyncStartTime);             Console.WriteLine("Total Changes Downloaded: " + syncStatistics.TotalChangesDownloaded);             Console.WriteLine("Complete Time: " + syncStatistics.SyncCompleteTime);             Console.WriteLine(String.Empty);         }     }     public class Utility     {        //Return the client connection string with the password. Don’t forget to create folder         public string ClientConnString         {           get { return @"Data Source='D:\\SyncServices\\SyncSampleClient.sdf'; Password=xxxxxxx"; }         }         //Return the server connection string.         public string ServerConnString         {             get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; }         }         //Make server changes that are synchronized on the second         //synchronization.         
        public void MakeDataChangesOnServer()
        {
            int rowCount = 0;

            using (SqlConnection serverConn = new SqlConnection(this.ServerConnString))
            {
                SqlCommand sqlCommand = serverConn.CreateCommand();
                sqlCommand.CommandText =
                    "INSERT INTO Sales.Customer (CustomerName, SalesPerson, CustomerType) " +
                    "VALUES ('Cycle Mart', 'James Bailey', 'Retail') " +
                    "UPDATE Sales.Customer " +
                    "SET SalesPerson = 'James Bailey' " +
                    "WHERE CustomerName = 'Tandem Bicycle Store' " +
                    "DELETE FROM Sales.Customer WHERE CustomerName = 'Sharp Bikes'";

                serverConn.Open();
                rowCount = sqlCommand.ExecuteNonQuery();
                serverConn.Close();
            }

            Console.WriteLine("Rows inserted, updated, or deleted at the server: " + rowCount);
        }

        //Revert changes that were made during synchronization.
        public void CleanUpServer()
        {
            using (SqlConnection serverConn = new SqlConnection(this.ServerConnString))
            {
                SqlCommand sqlCommand = serverConn.CreateCommand();
                sqlCommand.CommandType = CommandType.StoredProcedure;
                sqlCommand.CommandText = "usp_InsertSampleData";

                serverConn.Open();
                sqlCommand.ExecuteNonQuery();
                serverConn.Close();
            }
        }

        //Delete and re-create the client database.
        public void RecreateClientDatabase()
        {
            using (SqlCeConnection clientConn = new SqlCeConnection(this.ClientConnString))
            {
                if (File.Exists(clientConn.Database))
                {
                    File.Delete(clientConn.Database);
                }
            }

            SqlCeEngine sqlCeEngine = new SqlCeEngine(this.ClientConnString);
            sqlCeEngine.CreateDatabase();
        }
    }
}

Now we have to update the connection string information in the code. In the Utility class, update the connection strings for the SQL Server database and the client database (the SQL CE database) in these two properties:

//Return the client connection string with the password.
//Remember to create the folder first.
public string ClientConnString
{
    get { return @"Data Source='D:\SyncServices\SyncSampleClient.sdf'; Password=xxxxxxx"; }
}

//Return the server connection string.
public string ServerConnString
{
    get { return @"Data Source=SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxxx; Password=xxxxxxx"; }
}

Build the client project; at this point it should compile successfully.

Running/testing synchronization:

Run the console application and you should see the results of the synchronization of this sample. You can run a query on the Customer table in SyncSamplesDb to check the changes by setting a breakpoint in the client app, and by opening a SQL Profiler trace you can watch the activity in SQL Server.

Happy Synchronization!
References:

Overview (Synchronization Services): http://msdn.microsoft.com/en-us/library/bb726031(SQL.90).aspx
Architecture and Classes (Synchronization Services): http://msdn.microsoft.com/en-us/library/bb726025(SQL.90).aspx
Getting Started: A Synchronization Services Application: http://msdn.microsoft.com/en-us/library/bb726015.aspx
How to: Configure N-Tier Synchronization: http://msdn.microsoft.com/en-us/library/bb902831.aspx
A large-scale implementation of Sync Services: http://blogs.msdn.com/sync/archive/2009/10/19/sharepoint-2010-now-integrates-microsoft-sync-framework.aspx
Microsoft Sync Framework 2.0 Redistributable Package: http://www.microsoft.com/downloads/details.aspx?FamilyId=109DB36E-CDD0-4514-9FB5-B77D9CEA37F6&displaylang=en

Author: Faiz (MSFT), SQL Developer Engineer
Reviewed by: Enamul (MSFT), SQL Developer Technical Lead; Azim (MSFT), SQL Developer Technical Lead; Srini (MSFT), SQL Developer Engineer
Source: blogs.msdn.com

General network errors (GNEs), Timeout expired, SQL Server not found or Access Denied…
Many times, the key to solving these errors is a network packet trace. The problem is that these errors are often thrown intermittently, which can make it difficult to capture a trace at the right moment. Network Monitor 3 (aka Netmon3) solves this problem quite nicely: it has the built-in ability to capture rolling traces, so you can start the trace, walk away, and be sure that you will have captured the error.

Here are my steps for configuring Netmon3 for a rolling trace:

1. Download Network Monitor 3 (http://www.microsoft.com/downloads/details.aspx?FamilyID=18b1d59d-f4d8-4213-8d17-2f6dde7d7aac&DisplayLang=en).

2. Install Network Monitor on both the client and the server experiencing the problem. If you are not able to install it on the server, you can install it on another machine that is connected to the same hub as the server, or to the admin port of the switch that the server is on. If you go this route, you must synchronize the time between the client, the server, and this third machine.

3. Synchronize the clocks on the client and SQL Server machines. At one of the machines, bring up a command prompt and execute:

   net time \\<machine-to-sync-with> /set /yes

4. Turn off TCP Chimney if any of the machines are running Windows 2003. Bring up a command prompt and execute:

   netsh int ip set chimney DISABLED

5. Using the NMCap command-line utility, start the chained trace (chained = create a new capture file after reaching the "Temporary capture file size" rather than overwriting the current capture):

   [Netmon 3 installation folder]\NMCap /network * /capture /file test.chn:100M

   (creates 100 MB chained files)
   NOTE: You must use the .chn extension to capture chained traces.
   NOTE: The target folder must exist for NMCap to create the trace file.

   More information can be found at http://blogs.technet.com/netmon/archive/2006/10/24/nmcap-the-easy-way-to-automate-capturing.aspx, or by running NMCap with the /examples or /? command-line switches.

6. Get the IP addresses of the client and the server:

   ipconfig /all on the client > client.txt
   ipconfig /all on the server > server.txt

7. Reproduce the problem. Please note the exact time the problem reproduces, and the exact error message.

8. Stop the trace by hitting Ctrl+C in the command-line window.

Please note that Netmon3 does not run on Windows 2000. Also, step #6 is probably the single most important step (besides actually starting the trace). If you don't know the timestamp of the error, it is extremely difficult to find the offending packets in the trace.

Happy captures!

Posted By: Evan Basalik
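If you script captures across several machines, the NMCap invocation from step 5 can be assembled programmatically. Below is a small sketch (Python, with a hypothetical helper name; not part of any Netmon tooling) that builds the documented command line and enforces the .chn rule from the note above:

```python
def build_nmcap_command(capture_file, chunk_mb=100, adapter="*"):
    """Build the NMCap chained-capture command line from step 5.

    Chained captures require the .chn extension; NMCap rolls to a
    new file each time the current one reaches chunk_mb megabytes.
    (Illustrative helper only.)
    """
    if not capture_file.endswith(".chn"):
        raise ValueError("chained captures require the .chn extension")
    return "NMCap /network {0} /capture /file {1}:{2}M".format(
        adapter, capture_file, chunk_mb)

print(build_nmcap_command("test.chn"))
```

Generating the command this way makes it easy to vary the chunk size per machine while keeping the .chn requirement from being forgotten.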
Source: blogs.msdn.com

After installation of .NET Framework 3.5 SP1 (or .NET Framework 2.0 SP2), SSIS packages using 3rd-party ODBC drivers may fail
The Issue: In the last few months, we have seen quite a few cases where, after installing .NET Framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), SSIS packages that use certain 3rd-party ODBC drivers to import character data fail with the error below.

Error Message:

[DataReader Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "DataReader Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number>)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

The error messages may vary depending on whether you are using the DataReader Source in Visual Studio 2005 or the ADO .Net Source in Visual Studio 2008:

[ADO NET Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "ADO NET Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number>)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

Error 0xc0209029: Data Flow Task: The "component "<ComponentName>" (<number>)" failed because error code 0x80131937 occurred...

[ADO NET Source [<number>]] Error: The component "ADO NET Source" (<number>) was unable to process the data. Pipeline component has returned HRESULT error code 0xC0209029 from a method call.

[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "ADO NET Source" (<number>) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
There may be error messages posted before this with more information about the failure.

NOTE: Not all ODBC drivers are affected by this problem. It primarily affects SSIS packages that import character data using certain 3rd-party ODBC drivers through the ADO.Net connection manager and the System.Data.Odbc managed provider:

In Visual Studio 2005: SSIS DataReader Source --> ADO.Net connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> vendor ODBC driver --> character data

In Visual Studio 2008: ADO .Net Source --> ADO.Net connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> vendor ODBC driver --> character data

Cause: The problem started with .NET Framework 2.0 SP2, after a change was made in the way OdbcDataReader checks for NULLs in SequentialAccess mode: it now calls SQLGetData with a BufferLength argument of 0 for variable-width data types. Certain ODBC drivers are not compatible with this pattern. For more information on this function and its arguments, see:

SQLGetData Function: http://msdn.microsoft.com/en-us/library/ms715441.aspx

The .NET Framework 2.0 SP2 components are also included in .NET Framework 3.5 SP1. Microsoft is still investigating the issue and understands the difficult impact that this change has on customers. Determining the full impact of the issue (for example, all the data types that can cause it) is still a work in progress. You can track the progress on the Connect site below:

http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=416455

In the meantime, we suggest that you use one of the workarounds below.

Workaround #1 (recommended): Modify the SSIS package to redirect the DataReader (or ADO .Net Source in Visual Studio 2008) error output to a text file, thereby allowing the package to succeed even though no rows are written out to the error output file.

A. Go to the Data Flow Task.
B. On the DataReader Source Properties -> 'Input and Output Properties' tab -> DataReader Output -> Output Columns.

C. Select the columns (only those with character data types) and change the 'ErrorRowDisposition' and 'TruncationRowDisposition' properties to RD_RedirectRow.

D. Redirect the DataReader error output to a text file.

E. Repeat A-D for all Data Flow Tasks that fail with the above error.

Workaround #2: If you need a quick fix (because this is a production environment and you cannot afford to modify SSIS packages at this time), you may need to uninstall the components of .NET Framework 3.5 SP1 or .NET Framework 2.0 SP2, depending on which framework version caused the issue.

a. If your issue is caused by installing only .NET Framework 2.0 SP2 (for example, from the Microsoft download site http://www.microsoft.com/downloads/details.aspx?familyid=5B2C0358-915B-4EB5-9B1D-10E506DA9D0F&displaylang=en), then you can uninstall .NET Framework 2.0 SP2.

b. If your issue is caused by installing .NET Framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), remove the .NET Framework 3.5 SP1 components and bring the system back to the state it was in before the update. This process involves more than simply uninstalling .NET Framework 3.5 SP1, as discussed in the following blogs:

http://blogs.msdn.com/dougste/archive/2008/08/19/uninstalling-net-framework-3-5-sp1.aspx
http://blogs.msdn.com/astebner/archive/2008/08/01/8803442.aspx

Posted By: Azim Uddin & Enamul Khaleque
Source: blogs.msdn.com

SQL 2005 SP2 is available now!
Finally, after rounds of CTPs (Community Technology Previews) and great feedback from customers, we have officially released Service Pack 2 (SP2) for SQL 2005.

Download SP2
You can download it by visiting the Microsoft website here.

Enhancements in SQL 2005 SP2
SP2 provides several new features and improvements. You can learn about them here.

Download SQL 2005 SP2 sample databases
You can download the sample databases and learn about their enhancements by visiting here.

Download Books Online refresh for SP2
Books Online gets updated with every release of a service pack. Please download the latest version by visiting here.

As with every service pack, please make sure that you first test it in your test and QA environments before putting it on the production server.

Posted By: Sanchan Saxena
Source: blogs.msdn.com

tSQL concatenate strings “1, 2, 3” and select results, too

Source: jerrytech.blogspot.com

Tools of the Trade: Part IV - Developing WinDbg Extension DLLs
A WinDbg extension DLL is a set of exported callback functions that implement user-defined commands to extract specific, customized information from memory dumps. Extension DLLs are loaded by the debugger engine and can automate tasks while performing user-mode or kernel-mode debugging. An extension DLL may export any number of functions that are used to execute extension commands. Each function is explicitly declared as an export in the DLL's definition (.def) file, and function names must be in lowercase letters.

A WinDbg (DbgEng) extension DLL must export DebugExtensionInitialize. This is called when the DLL is loaded, to initialize the DLL; it may be used by the DLL to initialize global variables. An extension DLL may export an optional function, DebugExtensionUninitialize; if this is exported, it will be called just before the extension DLL is unloaded. An extension DLL may export DebugExtensionNotify; if this is exported, it will be called when a session begins or ends, and when a target starts or stops executing. These notifications are also provided to IDebugEventCallbacks objects registered with a client. An extension DLL may export KnownStructOutput; if this is exported, it will be called when the DLL is loaded and returns a list of structures that the DLL knows how to print on a single line. It may be called later to format instances of these structures for printing.

So, how do you develop your own WinDbg extension DLL? Let's follow these steps:

1. Download and install Debugging Tools for Windows from http://www.microsoft.com/whdc/devtools/debugging/installx86.Mspx

2. Create a "Win32 Console Application" using VS 2008.

3. Select Application type "DLL" and click "Finish".

4. Add a "Module-Definition File (.def)" called "wdbrowser" to the project. One way to export your extension function is by specifying the function names in the EXPORTS section of the .def file.
You may use other ways of exporting functions, such as __declspec(dllexport).

5. Configure the project's "Additional Include Directories" to point to the header files that come with WinDbg. The default folder for x86 is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\inc".

6. Configure the project's "Additional Library Directories" to point to the library files that come with WinDbg. The default folder for x86 libraries is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\lib\i386".

7. The library "dbgeng.lib" (backed by "dbgeng.dll") contains the implementation of the debug engine's exported functions, so add "dbgeng.lib" to "Additional Dependencies".

8. Add the name of the module definition file created in step #4 above.

9. Include the following required header files in "stdafx.h":

#include <windows.h>
#include <imagehlp.h>
#include <wdbgexts.h>
#include <dbgeng.h>
#include <extsfns.h>

10. Declare the following two global variables in your extension project's main implementation file:

//Version.
EXT_API_VERSION g_ExtApiVersion = {1,1,EXT_API_VERSION_NUMBER,0};
WINDBG_EXTENSION_APIS ExtensionApis = {0};

11. Declare the following debug engine COM interface pointers:

IDebugAdvanced2*  gAdvancedDebug2 = NULL;
IDebugControl4*   gDebugControl4 = NULL;
IDebugControl*    gExecuteCmd = NULL;
IDebugClient*     gDebugClient = NULL;

12. The next step is to declare and implement the WinDbgExtensionDllInit function in your DLL's main implementation source file, in this example "wdbrowser.cpp". WinDbgExtensionDllInit is the first function that WinDbg will call, so it is the ideal place for any extension-specific initialization. Please refer to http://msdn.microsoft.com/en-us/library/cc267872.aspx for more details about this function.
VOID WDBGAPI WinDbgExtensionDllInit(PWINDBG_EXTENSION_APIS lpExtensionApis, USHORT usMajorVersion, USHORT usMinorVersion)
{
    ExtensionApis = *lpExtensionApis;
    HRESULT hResult = S_FALSE;

    if ((hResult = DebugCreate(__uuidof(IDebugClient), (void**) &gDebugClient)) != S_OK)
    {
        dprintf("Acquiring IDebugClient* failed\n\n");
        return;
    }
    if ((hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl), (void**) &gExecuteCmd)) != S_OK)
    {
        dprintf("Acquiring IDebugControl* failed\n\n");
        return;
    }
    if ((hResult = gDebugClient->QueryInterface(__uuidof(IDebugAdvanced2), (void**) &gAdvancedDebug2)) != S_OK)
    {
        dprintf("Acquiring IDebugAdvanced2* failed\n\n");
        return;
    }
    if ((hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl4), (void**) &gDebugControl4)) != S_OK)
    {
        dprintf("Acquiring IDebugControl4* failed\n\n");
        return;
    }
}

(Note the extra parentheses around each assignment: without them, != binds more tightly than =, and hResult would receive the comparison result rather than the HRESULT.)

13. Declare another exported function, ExtensionApiVersion, to report the version of your extension to WinDbg. Please refer to http://msdn.microsoft.com/en-us/library/cc267873.aspx for detailed information about this function.

LPEXT_API_VERSION WDBGAPI ExtensionApiVersion(void)
{
    return &g_ExtApiVersion;
}

14. Define the debug engine's interface pointers so that your extension module can interact with the debug engine.
For more information, please refer to:
http://msdn.microsoft.com/en-us/library/cc265976.aspx - IDebugClient
http://msdn.microsoft.com/en-us/library/cc266102.aspx - IDebugControl
http://msdn.microsoft.com/en-us/library/cc265957.aspx - IDebugAdvanced

IDebugAdvanced2* gAdvancedDebug2 = NULL;
IDebugControl4*  gDebugControl4 = NULL;
IDebugControl*   gExecuteCmd = NULL;
IDebugClient*    gDebugClient = NULL;

15. The next step is to implement the debug engine's callback interface IDebugOutputCallbacks. The debug engine calls your implementation of IDebugOutputCallbacks::Output() with the output produced by the commands that your extension function executes. Refer to http://msdn.microsoft.com/en-us/library/cc265716.aspx for detailed information about IDebugOutputCallbacks::Output().

16. Add the following new class, which inherits the IDebugOutputCallbacks interface, in a header file:

#ifndef __OUT_HPP__
#define __OUT_HPP__

#include <string>
#include <sstream>

class StdioOutputCallbacks : public IDebugOutputCallbacks
{
private:
    //This buffer holds the output from the command execution.
    std::string m_OutputBuffer;

public:
    void InitOutPutBuffer();
    std::string GetOutputBuffer()
    {
        return m_OutputBuffer;
    };
    void ClearOutPutBuffer()
    {
        m_OutputBuffer = "";
    };

    STDMETHOD(QueryInterface)(
        THIS_
        IN REFIID InterfaceId,
        OUT PVOID* Interface
        );
    STDMETHOD_(ULONG, AddRef)(
        THIS
        );
    STDMETHOD_(ULONG, Release)(
        THIS
        );

    // IDebugOutputCallbacks.
    STDMETHOD(Output)(
        THIS_
        IN ULONG Mask,
        IN PCSTR Text
        );
};

extern StdioOutputCallbacks g_OutputCb;

#endif // #ifndef __OUT_HPP__

17. Add the following code, which implements the IDebugOutputCallbacks interface methods, especially Output():

#include "stdafx.h"
#include <stdio.h>
#include <windows.h>
#include <dbgeng.h>
#include "OutputCallBack.h"

StdioOutputCallbacks g_OutputCb;

STDMETHODIMP StdioOutputCallbacks::QueryInterface(
    THIS_
    IN REFIID InterfaceId,
    OUT PVOID* Interface
    )
{
    *Interface = NULL;
    if (IsEqualIID(InterfaceId, __uuidof(IUnknown)) ||
        IsEqualIID(InterfaceId, __uuidof(IDebugOutputCallbacks)))
    {
        *Interface = (IDebugOutputCallbacks *)this;
        AddRef();
        return S_OK;
    }
    else
    {
        return E_NOINTERFACE;
    }
}

STDMETHODIMP_(ULONG) StdioOutputCallbacks::AddRef(
    THIS
    )
{
    // This class is designed to be static, so
    // there's no true refcount.
    return 1;
}

STDMETHODIMP_(ULONG) StdioOutputCallbacks::Release(
    THIS
    )
{
    // This class is designed to be static, so
    // there's no true refcount.
    return 0;
}

STDMETHODIMP StdioOutputCallbacks::Output(
    THIS_
    IN ULONG Mask,
    IN PCSTR Text
    )
{
    UNREFERENCED_PARAMETER(Mask);
    m_OutputBuffer += Text;
    return S_OK;
}

void StdioOutputCallbacks::InitOutPutBuffer()
{
    m_OutputBuffer.erase();
}

18. Add the implementation of your extension function. In this example, we choose to implement an extension that displays the names and types of the variables in frame 2 of the current thread. The implementation is:

DECLARE_API (dvf3)
{
    //
    // Install output callbacks.
    //
    if ((gDebugClient->SetOutputCallbacks((PDEBUG_OUTPUT_CALLBACKS) &g_OutputCb)) != S_OK)
    {
        dprintf("*****Error while installing output callbacks.*****\n\n");
        return;
    }

    //
    // Since the frame count starts from index 0, we pass 2 as the
    // parameter to the .frame command to select frame #2.
    //

    //Execute the command to select the 2nd frame.
    if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT | //Send output only to output callbacks
                             DEBUG_OUTCTL_OVERRIDE_MASK |
                             DEBUG_OUTCTL_NOT_LOGGED,
                             ".frame 2",
                             DEBUG_EXECUTE_DEFAULT) != S_OK)
    {
        dprintf("Executing .frame 2 failed\n");
        return;
    }

    //Execute the command to dump the frame's local variables and parameters.
    if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT | //Send output only to output callbacks
                             DEBUG_OUTCTL_OVERRIDE_MASK |
                             DEBUG_OUTCTL_NOT_LOGGED,
                             "dv /i /t /v",
                             DEBUG_EXECUTE_DEFAULT) != S_OK)
    {
        dprintf("Executing dv /i /t /v failed\n");
        return;
    }

    dprintf("***** Extracting locals & formal params from frame 2 *****");
    dprintf("\n%s\n", g_OutputCb.GetOutputBuffer().c_str());
}

19. Rebuild the project, then copy the DLL from the release folder to a folder where WinDbg looks for extension DLLs. On an x86 machine the default location is "<Drive letter>\Program Files\Debugging Tools for Windows (x86)\winext".

20. The extension is ready to use or test.

21. Start WinDbg and open a full user-mode dump. Type .load <myextension> and hit Enter to load the extension DLL into the WinDbg process space.

22. Run the .chain command to verify that your extension was loaded by WinDbg.

23. Type !dvf3 to run the extension function, which extracts and displays the variable names and types from frame 2.

Additional references:
http://msdn.microsoft.com/en-us/library/cc265826.aspx - describes how to interact with the debug engine: I/O operations, memory access, and using symbols and source files.
http://www.codeplex.com/ODbgExt - Microsoft Open Debugger Extension for WinDbg

Happy developing debug engine extensions!
Posted By: Srini Gajjela & Enamul Khaleque (DSD-SQLDeveloper group @ Microsoft)
Source: blogs.msdn.com

How to connect to file-based data sources (Microsoft Access, Microsoft Excel, and text files) from a 64-bit application
The Issue: A 64-bit process can load only 64-bit components within its process boundary; the same is true of a 32-bit process. So, if your application is 64-bit, you will need a 64-bit provider or driver to connect to Microsoft Access (mdb, accdb), Microsoft Excel 2010 (xls, xlsx, and xlsb), or text files. The bad news is that there is no 64-bit provider or driver available yet to connect to these file-based data sources. The good news is that a 64-bit provider is heading your way, currently in beta.

The Kludge: The common workaround is to connect to a 32-bit SQL Server instance that has a Linked Server to the Access/Excel/text file. This is a hack: it can be difficult to set up, it can have stability and performance issues, and realistically, we at Microsoft would rather not support this setup or the issues arising from it.

The Good News: A 64-bit driver is headed your way. This is great news for users in a 64-bit world. Soon you'll be able to connect to these file-based data sources from your 64-bit application, rather than wrestle with obscure settings to force them to connect via a Linked Server.

The next version of Microsoft Office, Office 2010, will be available in a 64-bit version. This version will include a 64-bit version of the "2010 Office System Driver Connectivity Components", which will include all the needed 64-bit ODBC drivers and OLEDB providers to connect to these file-based data sources.

You will not have to buy or install Office 2010 to obtain and use the new 64-bit components. Like the current version of the provider, they will be available as a free download.
You can download the beta version from here: http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

Connection string for the 64-bit OLEDB provider:
For Microsoft Office Access: set the Provider string to "Microsoft.ACE.OLEDB.12.0".
For Microsoft Office Excel: add "Excel 12.0" to the Extended Properties of the OLEDB connection string.

Connection string for the 64-bit ODBC driver:
For Microsoft Office Access: set the connection string to "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=path to mdb/accdb file"
For Microsoft Office Excel: set the connection string to "Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=path to xls/xlsx/xlsm/xlsb file"

The gotchas: You cannot install the 32-bit version and the 64-bit version of the "2010 Office System Driver Connectivity Components" on the same computer. You cannot install the 64-bit version of the "2010 Office System Driver Connectivity Components" on a computer that already has the 32-bit Office 2007 ACE provider. However, the 32-bit Office 2007 provider can coexist side by side with the 32-bit version of the "2010 Office System Driver Connectivity Components".

Authors: Enamul Khaleque & Srini Gajjela [DSD-SQLDeveloper group at Microsoft]
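To see how the ODBC connection strings above fit together, here is a short sketch (Python, helper name hypothetical; not part of any Microsoft API) that selects the documented ACE driver string based on the file extension:

```python
def ace_odbc_connection_string(path):
    """Build a 64-bit ACE ODBC connection string for a file-based data
    source, using the driver names documented above.
    (Illustrative helper only.)
    """
    lower = path.lower()
    if lower.endswith((".mdb", ".accdb")):
        driver = "Microsoft Access Driver (*.mdb, *.accdb)"
    elif lower.endswith((".xls", ".xlsx", ".xlsm", ".xlsb")):
        driver = "Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)"
    else:
        raise ValueError("unsupported file type: " + path)
    return "Driver={%s};DBQ=%s" % (driver, path)

print(ace_odbc_connection_string(r"C:\data\Northwind.accdb"))
```

The same switch-on-extension idea applies when building the OLEDB variant, where Access uses the Provider string and Excel additionally needs "Excel 12.0" in Extended Properties.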
Source: blogs.msdn.com

Dealing with very large SQL Compact database files
Working with very large SQL Compact files, you may run into two issues:

1. Manipulating a large SQL Compact database in the VS 2008 IDE requires a hotfix: http://support.microsoft.com/kb/968436
   "Error message when you use the Visual Studio 2008 IDE to manipulate a SQL Server Compact 3.5 database file which is larger than 128 megabytes: 'The database file that is larger than the configured maximum database size'"

2. Upgrading a very large database from SQL Compact 3.1 to 3.5 requires another hotfix: http://support.microsoft.com/kb/971027
   "FIX: Error message when you upgrade a very large database to SQL Server Compact 3.5: 'The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only'"

Hotfix 3.5.5692.12 fixes a problem where a large SQL CE database upgrade fails with the error: "The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only. [Required Max Database Size (in MB; 0 if unknown) = <size>]"

Depending upon the file size and the available resources on the machine, the upgrade process for a very large database may consume significant memory. This is expected behavior. You may want to stop other applications to make room for the upgrade process to complete.
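Both issues revolve around the configured maximum database size (128 MB by default, raisable via the 'Max Database Size' connection string property). As a rough illustration of the check involved, a hypothetical pre-flight helper might compare the .sdf file size against the configured limit before connecting:

```python
import os

DEFAULT_MAX_DATABASE_SIZE_MB = 128  # SQL Compact default 'Max Database Size'

def needs_max_size_override(sdf_path, max_size_mb=DEFAULT_MAX_DATABASE_SIZE_MB):
    """Return True if the .sdf file is larger than the configured maximum
    database size, i.e. the connection string needs an explicit
    'Max Database Size=<n>' greater than the file size.
    (Illustrative helper, not a SQL Compact API.)
    """
    size_mb = os.path.getsize(sdf_path) / (1024.0 * 1024.0)
    return size_mb > max_size_mb
```

A check like this makes it obvious, before opening a connection or starting an upgrade, whether the default limit will be exceeded.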
Source: blogs.msdn.com

SQL Compact Performance: PostSyncCleanup to turn off UpdateStatistics during initial download of SQL Compact Replication
SQL Compact Replication initial subscription may take significant time on devices. The UpdateStatistics operation adds additional overhead to the initial download.

Indicators:
The Client Agent log may show the start and end times of the UpdateStatistics operation. For example:

SQLCECA: 04/09/2009-12:21:26 UpdateStatistics Start app=<appName>.exe
SQLCECA: 04/09/2009-12:27:59 UpdateStatistics End hr=0 app=<appName>.exe

Workaround:
Recent hotfixes for SQL Compact 3.5 and 3.1 (KB article URLs below) expose a new API that lets SQL Compact applications turn off UpdateStatistics during the initial download of replication. Disabling UpdateStatistics is possible only for merge replication, not for RDA operations. Use

SqlCeReplication repl = new SqlCeReplication();
…
repl.PostSyncCleanup = 2;

to make sure that update statistics is not run after the merge replication sync.

To use the new property in your SQL Compact app, in addition to applying the hotfix to the device, you will also need to deploy the updated System.Data.SqlServerCe.dll to the desktop folder where Visual Studio can find it; otherwise an error will occur:

'PostSyncCleanup' is not a member of 'System.Data.SqlServerCe.SqlCeReplication'.

You can follow these steps to resolve the above error: extract the SYSTEM~1.004 file from the device .cab, rename it to "System.Data.SqlServerCe.dll", and replace the file at the appropriate path. When working with SQL Server Compact 3.5 (Visual Studio 2008), that path is:

C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Devices\System.Data.SqlServerCe.dll

If your project refers to "System.Data.SqlServerCe.dll" in another location, then replace the DLL at that location as well.
Once the modified app runs on the device, if the hotfix bits were not deployed you may see the following error (as expected):

"Method not found: set_PostSyncCleanup System.Data.SqlServerCe.SqlCeReplication"

Uninstalling SQL Compact 3.5 SP1 RTM and installing the hotfix .cab files should resolve the above error.

References:
http://support.microsoft.com/kb/963060
http://support.microsoft.com/kb/960142

Posted by: Syed
Source: blogs.msdn.com

Performance: Tips for faster download of a large SQL Compact 3.5 database via Replication
SQL Compact 3.x Replication initial subscription is resource-intensive and may take significant time on devices. Read the introduction to this problem on the SQL Compact Blog.

Introduction (from the SQL Compact Blog): "The initial subscription is going to bring down a lot of data from server to client in merge replication. This means that the requirements for bandwidth, memory, and processing power are going to be high for the creation of the subscription. However, on devices these requirements are not easy to meet, and even when they are available, it can take a very long time to get the subscription created on the client database."

Symptoms: The SQL Compact 3.5 Replication initial download for certain large publications takes a significant amount of time on mobile devices. The corresponding publication with earlier versions (SQL CE 2.0 and SQL 2000) did not take that much time.

Performance Tips: Here is a list of workarounds with the potential to reduce the download time for a large initial sync. Keep in mind that a single option may not resolve the issue completely; however, a combination of several options may bring the download time to an acceptable level.

- Remove unnecessary indexes from the server-side articles, especially indexes on varchar columns. This helps to a certain extent; however, the overall download time may still be high.

- Apply the hotfix to turn off UpdateStatistics. The Client Agent log during download indicates that a portion of the time is spent executing update statistics. After applying the hotfix, the application may invoke a new method call to turn off update statistics.
Use PostSyncCleanup to turn off UpdateStatistics during the initial download of SQL Compact Replication. See KB http://support.microsoft.com/kb/963060 — FIX: An error message is logged, and the synchronization may take a long time to finish when you synchronize a merge replication that contains a SQL Server Compact 3.5 subscriber.

- Turn off unnecessary logging (Client Agent log, other types of device logging) on production devices.

- Move to a high-end device, where one can use larger buffer pool sizes and can also place the database .sdf file in main memory as appropriate.

- Pre-create the .sdf file on the desktop and then download it to the devices. This relates to managing and deploying the solution to any number of devices in the field. You may use the move-copy detection feature of SQL Compact on devices. For more details on how it works, see the SQL Compact team's blog entry "Merge Subscribe Once, Copy and Deploy Multiple Times" at http://blogs.msdn.com/sqlservercompact/archive/2007/06/26/merge-subscribe-once-copy-and-deploy-multiple-times.aspx

- Increase the buffer pool size. We got some very good improvements in running times by increasing the buffer pool size. By default this value is 640 KB on a device. The running times for a very large single-table publication at various Max Buffer Size settings were as follows:

1) 640K  - 86 m 34 secs
2) 1024K - 73 m 56 secs
3) 2048K - 55 m 13 secs
4) 4096K - 38 m 30 secs

You can specify 'Max Buffer Size' in the connection string, as follows:

repl.SubscriberConnectionString = "provider=MICROSOFT.SQLServer.OLEDB.CE.3.5;data source=35Repl.sdf;SSCE:Max Buffer Size=4096";

Keep in mind that this setting consumes more program memory on the device. However, this should not be an issue if you have a good 20 MB of program memory available. If the initial sync data is huge, you may set the buffer pool size to a conveniently higher value only for the initial sync.
There are real improvements in running times with bigger buffer pool sizes. However, certain devices cannot increase the buffer pool size beyond 2-3 MB, so a very large buffer size is not always applicable.

Fix for rowguid index processing delays: In some cases the delay is due to rowguid index processing; look for hotfix KB 970414 when it becomes available.
Source: blogs.msdn.com

Plan guides (plan freezing) in SQL Server 2005/2008
SQL Server 2005

Plan guides were first introduced in SQL Server 2005. You may find detailed information about plan guides in SQL Server 2005 here:

General information: http://msdn.microsoft.com/en-us/library/ms187032(SQL.90).aspx
Description of sp_create_plan_guide: http://msdn.microsoft.com/en-us/library/ms179880(SQL.90).aspx

This feature can be useful when you are trying to make the optimizer use a specific execution plan without changing the original query. In SQL Server 2005, however, you have to specify the statement exactly as it was submitted to SQL Server. Sometimes this can be difficult, since you have to capture a SQL Profiler trace and copy the SQL statement from there. There are also certain rules that apply: for example, the SQL parser ignores spaces and escape characters inside the query text but does not allow them at the end, which may add complexity to capturing the actual statement.

In SQL Server 2005, query statistics appear in DMVs along with the T-SQL statement. To make things easier, you can create a plan guide based on the query statistics DMVs directly. Here is a script that demonstrates this.

NOTE: Before you execute the examples below, make sure you have the AdventureWorks database installed and its compatibility level set to 90.
If you don’t have the AdventureWorks database you can download it from http://www.codeplex.com/SqlServerSamples

use AdventureWorks
go

--- Cleaning cache for this sample
dbcc freeproccache
go

--- Running query first time to get plan generated for freezing
set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off
go

--- Based on query pattern creating a plan guide - freezing plan
declare @sql_text nvarchar(max),
        @sql_xml_plan nvarchar(max)

select @sql_text = substring(sqt.text, (qs.statement_start_offset/2)+1,
           ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(sqt.text)
             ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1),
       @sql_xml_plan = convert(nvarchar(max), sqp.query_plan)
from sys.dm_exec_query_stats qs
     cross apply sys.dm_exec_sql_text(qs.sql_handle) sqt
     cross apply sys.dm_exec_query_plan(qs.plan_handle) sqp
where text like '%Employee%'

if @sql_text <> ''
begin
       select @sql_text, @sql_xml_plan
       set @sql_xml_plan = 'OPTION(USE PLAN '''+@sql_xml_plan+''')'
       exec sp_create_plan_guide @name = N'MyPlan_Guide_1'
           , @stmt = @sql_text
           , @type = N'SQL'
           , @module_or_batch = NULL
           , @params = NULL
           , @hints = @sql_xml_plan
end

You may check that the plan guide was created by querying the sys.plan_guides catalog view:

select * from sys.plan_guides

Now execute the query again.

--- This time we will see USEPLAN=1 and the plan guide name in the XML plan output
set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off

Click on the ShowPlanXML hyperlink. SQL Server Management Studio 2005 will show you the XML. Look at the <StmtSimple> and <QueryPlan> tags.
As you can see, SQL Server picked up the created plan guide and the USEPLAN option.

WARNING! You should be careful using plan guides in SQL Server 2005. If the metadata and/or data distribution has changed, the optimizer will no longer be able to use the plan guide, and the query will fail with the following error:

Msg 8698, Level 16, State 0, Line 1
Query processor could not produce query plan because USE PLAN hint contains plan that could not be verified to be legal for query. Remove or replace USE PLAN hint. For best likelihood of successful plan forcing, verify that the plan provided in the USE PLAN hint is one generated automatically by SQL Server for the same query.

To demonstrate this, disable the existing index that is used in the plan guide above

ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee DISABLE
GO

and try running the query again.

set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off

To clean up your server after this demonstration:

--- Rebuild disabled index to enable it
ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee REBUILD
GO

--- Drop plan guide
EXEC sp_control_plan_guide N'DROP', N'MyPlan_Guide_1';

SQL Server 2008

In SQL Server 2008 the plan guides feature has been improved.
In addition to sp_create_plan_guide, you can also use the sp_create_plan_guide_from_handle stored procedure:

Understanding plan guides: http://msdn.microsoft.com/en-us/library/ms190417(SQL.90).aspx
sp_create_plan_guide_from_handle: http://technet.microsoft.com/en-us/library/bb964726.aspx

So now you can create a plan guide based on the actual plan handle, without pulling the T-SQL statement text. Here is the example:

use AdventureWorks
go

--- Cleaning cache for this sample
dbcc freeproccache
go

--- Running query first time to get plan generated for freezing
set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off
go

--- Based on query pattern creating a plan guide - freezing plan
declare @plan_handle varbinary(1000)

select @plan_handle = plan_handle
from sys.dm_exec_query_stats qs
     cross apply sys.dm_exec_sql_text(qs.sql_handle) sqt
where text like '%Employee%'

select @plan_handle

exec sp_create_plan_guide_from_handle 'MyPlan_Guide_1', @plan_handle = @plan_handle

As you can see, creating the plan guide is easier now; you may also simply copy and paste the plan handle from the DMV output and pass it manually to sp_create_plan_guide_from_handle.

Run the query again:

set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off
go

Click on the ShowPlanXML hyperlink. In SQL Server 2008, SQL Server Management Studio will take you directly to the graphic execution plan. Then right-click on the execution plan page and choose Properties.

To clean up your server after this demonstration:

--- Drop plan guide
EXEC sp_control_plan_guide N'DROP', N'MyPlan_Guide_1';

Another difference in SQL Server 2008 from SQL Server 2005 is the optimizer behavior if the metadata has been
changed. If this is the case, the plan guide can no longer be used. The SQL Server 2008 optimizer will silently skip the plan guide and continue query execution with the best available execution plan. This makes the use of plan guides more robust and less dangerous than in SQL Server 2005.

To monitor plan guide behavior, you may use two new SQL Profiler events in SQL Server 2008.

Posted by: Sergey Pustovit – SQL Server Escalation Services
Source: blogs.msdn.com

Creating an HTTP endpoint fails with a 7850 error.
Creating an HTTP endpoint in SQL Server 2005 or SQL Server 2008 may fail with the following error messages:

Msg 7850, Level 16, State 1, Line 1
The user 'domain\myuser' does not have permission to register endpoint 'training_sql_endpoint' on the specified URL. Please ensure the URL refers to a namespace that is reserved for listening by SQL.
Msg 7807, Level 16, State 1, Line 1
An error ('0x80070005') occurred while attempting to register the endpoint 'training_sql_endpoint'.

The error message is actually incorrect in this context. It should instead report that the problem is with the SQL Server service account, and direct you to reserve the namespace explicitly, as the SQL Server 2008 message now does:

The SQL Server Service account does not have permission to register the supplied URL on the endpoint '%.*ls'. Use sp_reserve_http_namespace to explicitly reserve the URL namespace before you try to register the URL again.

When you run CREATE ENDPOINT to create an HTTP endpoint, this is done under the context of the SQL Server service account. If the namespace reservation does not already exist, SQL Server will implicitly create the reservation. However, this requires that the SQL Server service account have local administrator privileges on the computer. If the service account does not have local administrator rights, on SQL Server 2005 the statement fails with the message noted earlier. To resolve this you have two options:

1. Add the SQL Server service account to the local Administrators group, restart, and then run the CREATE ENDPOINT again.
2. Or, explicitly reserve the namespace *before* you run CREATE ENDPOINT, while logged on as a Windows Authentication user that has local administrator rights on the computer and sysadmin on SQL Server. For example:

sp_reserve_http_namespace N'http://*:2050/sql/myfolder'

Then, when you run the CREATE ENDPOINT, the SQL Server service account will not have to reserve the namespace because it already exists, and it will proceed with creating the endpoint.
Note that when you reserve a namespace explicitly, you need to be sure that the string of the namespace you reserve matches the parameters in the CREATE ENDPOINT statement. So for the namespace above, the CREATE ENDPOINT would need to look like the following for SQL Server to match it up correctly:

CREATE ENDPOINT [myendpoint]
    STATE=STARTED
AS HTTP (
    PATH=N'/sql/myfolder',
    PORTS = (CLEAR),
    AUTHENTICATION = (NTLM, KERBEROS, INTEGRATED),
    SITE=N'*',
    CLEAR_PORT = 2050,
    COMPRESSION=DISABLED
)

The following link has more on this in "Identifying the Namespace for an Endpoint": http://msdn.microsoft.com/en-us/library/ms190614.aspx
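After CREATE ENDPOINT succeeds, a quick sanity check is to confirm that the endpoint exists and is started. A minimal sketch, assuming the endpoint name from the example above:

```sql
-- Confirm that the endpoint was created and is in the STARTED state
SELECT name, protocol_desc, type_desc, state_desc
FROM sys.endpoints
WHERE name = 'myendpoint';
```

If state_desc is not STARTED, review the STATE clause of your CREATE ENDPOINT statement.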
Source: blogs.msdn.com

tSQL Regular Expressions

Source: jerrytech.blogspot.com

PART II - Data security enhancements in SQL Server 2005
In the previous edition (Data security enhancements in SQL Server 2005) , we talked about data encryption/decryption in general.   In this installment, let’s talk about key maintenance. It is important to ensure that we can decrypt our data, even in case of user mistake or corruption.   For this, we need to protect our keys in our database. We have couple of options to do this.   Master key and certificate You can use the following steps to backup Master key and Certificates   ·         Backing Master key             BACKUP MASTER KEY TO FILE = 'path_to_file'       ENCRYPTION BY PASSWORD = 'password'         RESTORE MASTER KEY FROM FILE = 'path_to_file'       DECRYPTION BY PASSWORD = 'password'       ENCRYPTION BY PASSWORD = 'password'       [ FORCE ]   More information on the syntax is available here.   Backing Certificate        BACKUP CERTIFICATE certname TO FILE = 'path_to_file'       [ WITH PRIVATE KEY       (         FILE = 'path_to_private_key_file' ,         ENCRYPTION BY PASSWORD = 'encryption_password'         [ , DECRYPTION BY PASSWORD = 'decryption_password' ]       )   ]   To restore a backed up certificate, use the CREATE CERTIFICATE statement. More information is available here.   Symmetric key In order to maintain symmetric key properly and have the ability to restore it, you will have to use the same values for the following during key creation:   ·         algorithm, ·         key_source, ·         identity_value   Key can be protected by password or certificate, but values above should exactly be the same. You may also use different name of symmetric key. Let’s take an example to better understand this. 
SELECT * INTO Employee_symm FROM AdventureWorks.HumanResources.Employee
GO

ALTER TABLE Employee_symm
    ADD EncryptedNationalIDNumber varbinary(128);
GO

--- Let's create a symmetric key with protection by password first
CREATE SYMMETRIC KEY symm_key_combo WITH
      ALGORITHM = triple_des,
      IDENTITY_VALUE = 'Example of encryption',
      KEY_SOURCE = 'Put here some unique and long enough phrase.'
ENCRYPTION BY PASSWORD = 'Avc#ptNO$cf@o!';
GO

--- Encrypt data
OPEN SYMMETRIC KEY symm_key_combo DECRYPTION BY PASSWORD = 'Avc#ptNO$cf@o!'
GO

UPDATE Employee_symm
SET EncryptedNationalIDNumber = EncryptByKey(Key_GUID('symm_key_combo'), NationalIDNumber);
GO

--- Now let's drop the symmetric key
DROP SYMMETRIC KEY symm_key_combo
GO

--- Re-create the symmetric key with protection by certificate.
--- Also, let's use a different name for the symmetric key
CREATE SYMMETRIC KEY symm_key_combo_new WITH
      -- Values here should be exactly the same
      ALGORITHM = triple_des,
      IDENTITY_VALUE = 'Example of encryption',
      KEY_SOURCE = 'Put here some unique and long enough phrase.'
--- Protection can be done by certificate now though
ENCRYPTION BY CERTIFICATE HumanResources037;

--- Now let's decrypt using the new symmetric key
OPEN SYMMETRIC KEY symm_key_combo_new DECRYPTION BY CERTIFICATE HumanResources037;

SELECT NationalIDNumber,
    EncryptedNationalIDNumber AS 'Encrypted ID Number',
    CONVERT(nvarchar, DecryptByKey(EncryptedNationalIDNumber)) AS 'Decrypted ID Number'
FROM Employee_symm;

CLOSE SYMMETRIC KEY symm_key_combo_new
DROP SYMMETRIC KEY symm_key_combo_new
DROP TABLE Employee_symm

So, in order to be able to restore a symmetric key, the following three values need to be set:

      ALGORITHM = triple_des,
      IDENTITY_VALUE = 'Example of encryption',
      KEY_SOURCE = 'Put here some unique and long enough phrase.'

Use the same values to re-create the symmetric key in the future.

Hope this helps!

Posted By: Sergey Pustovit
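Note that the script above assumes a database master key and the certificate HumanResources037 already exist in the database. If they do not, a minimal sketch of the prerequisites (the password and subject below are placeholders, not values from the original post) would be:

```sql
-- Create a database master key if one does not exist yet (placeholder password)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<Str0ngP@ssword!>';
GO

-- Create the certificate used to protect the symmetric key (placeholder subject)
CREATE CERTIFICATE HumanResources037
    WITH SUBJECT = 'Employee National ID Numbers';
GO
```

Run these once in the target database before the symmetric key example.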
Source: blogs.msdn.com

tSQL Running Total: CTE wins

Source: jerrytech.blogspot.com

SQL Null: Grouping and sorting
Because SQL Server defines all Null markers as being unequal to one another, a special definition was required in order to group Nulls together when performing certain operations. SQL defines "any two values that are equal to one another, or any two Nulls" as "not distinct". This definition of not distinct allows SQL to group and sort Nulls when the GROUP BY clause is used. Let's look at an example using GROUP BY and nulls. You will see that SQL Server does group them together by using the not-distinct definition:

DECLARE @table1 TABLE( ID INT, myStr VARCHAR(100))

INSERT INTO @table1( ID, myStr)
VALUES (1, 'value a'), (2, 'value b'), (3, 'value a'), (4, 'value a'),
       (5, 'value b'), (6, null), (7, 'value b'), (8, null), (9, null),
       (10, 'value a'), (11, null), (12, null)

SELECT * FROM @table1

SELECT myStr,
       COUNT(1) as Count
FROM   @table1
GROUP  BY myStr

The 1st select results show the raw values:

ID  myStr
1   value a
2   value b
3   value a
4   value a
5   value b
6   NULL
7   value b
8   NULL
9   NULL
10  value a
11  NULL
12  NULL

The 2nd results show the group by:

myStr    Count
NULL     5
value a  4
value b  3

As you can see, when SQL Server processes the GROUP BY clause it combines the rows with NULL values. When you think about how SQL Server handles string concatenation with NULLs, this is a little twist that can catch you off guard or make you think. DISTINCT works the same way:

SELECT distinct myStr FROM @table1

The result is 3 rows: NULL, value a, value b.

Hope this helps you when thinking about null values and grouping. Try the example and look at how SQL Server handles the default sort order for Nulls.
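To see the default sort order the closing sentence refers to, a small self-contained sketch (a trimmed-down version of the table above) shows that in an ascending sort SQL Server places Nulls before all non-NULL values:

```sql
DECLARE @t TABLE (ID INT, myStr VARCHAR(100))
INSERT INTO @t (ID, myStr)
VALUES (1, 'value a'), (2, 'value b'), (6, NULL), (8, NULL)

-- NULL markers sort before all non-NULL values in ascending order
SELECT ID, myStr
FROM @t
ORDER BY myStr ASC
-- Rows 6 and 8 (NULL) come first, then 'value a', then 'value b'
```

Use ORDER BY myStr DESC to see the Nulls move to the end of the result set.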
Source: brian.chipsofttech.com

Performance of SQL’s NullIf() Function (versus Case)

Source: jerrytech.blogspot.com

How to enable the TDS parser to display TDS frames when SQL Server is listening on a port other than the default 1433
If you view a network trace in Netmon 3.4, you will see TDS traffic nicely parsed for you. The parsing works because SQL Server is listening on the default TCP port 1433. But if your SQL Server is listening on a different port (other than 1433), this TDS parsing won't work by default, and you will see a display with no frames resolving to TDS.

To enable the parser to parse TDS traffic on a port other than 1433 (in this case, 63959), we need to take the following steps:

1. Click on the "Parsers" tab in the Netmon UI. This displays the list of installed parsers. You will see tcp.npl and tds.npl along with several other parsers.
2. Double-click on tcp.npl and search for "1433"; you will land in a switch/case code block saying "case 1433". We need to include our port 63959 here. Just add a case statement above "case 1433" without any parsing code:

case 63959:
case 1433:
//TDS-parsing code goes here

This is what it looks like after the change.

3. Save your changes.
4. Reload your Netmon trace. With the change, the TDS parser will resolve TDS traffic on this particular non-default port (63959).

Author: Enamul (MSFT), SQL Developer Technical Lead
Source: blogs.msdn.com

SQL Server Support FAQ
How often should I expect to hear from my Support Engineer?

This will vary based on the issue and its impact on your business. If you have specific preferences for frequency of contact, time of day, or method (phone vs. e-mail), please discuss them with your support engineer.

My Support Engineer is not available and I need help now.

If you need immediate assistance and your engineer is unavailable, please contact the engineer's manager. Manager contact information is in the engineer's signature block and in the initial e-mail. If you are unable to reach the manager, you can call the Microsoft Support line (800-936-5800), provide your case number, and request to speak with the next available support engineer. You will be routed to the phone queue for that specialty and connected to an engineer. The Support Engineer receiving the call may need time to review the case history and the steps taken to date, depending on the duration and complexity of the issue.

My support issue is not progressing well.

Each Support Engineer should provide a management contact in his/her signature block and in the initial e-mail. If you have:

·         schedule conflicts with your assigned engineer's availability,
·         communication issues,
·         a need for a faster resolution,
·         or are otherwise dissatisfied with the support provided,

please engage the engineer's manager. We strive to deliver outstanding customer service, and we appreciate you letting us know if we are not meeting your expectations.

I need a different engineer.

As mentioned in the preceding section, "My support issue is not progressing well", you may contact the engineer's manager and request a different engineer.

My Support Engineer has suggested that we archive the case. What does this mean?

If there is going to be an extended delay before you can implement a change or collect requested data, we may ask to archive your case.
This means that the case will be closed on our end. You can reopen the case if you encounter the same error on the same instance/server again, or when you have the data needed to continue troubleshooting. To reopen the case, contact the support engineer or his/her manager directly. You may be asked to provide feedback on the support; please assess it based on the service received to date, with the understanding that Microsoft does not consider an archived case resolved.

We thought that our issue was solved, but the problem has recurred. Can I re-open my support case?

As long as the problem and the server are the same as in the original case, you can re-open the case. To do this, first try to reach the previous engineer or his/her manager. If you are unable to reach them, contact the Microsoft Support line (800-936-5800), provide your case number, and request that Microsoft re-open the case. Note that in some cases the same symptoms can be caused by multiple issues. For example, if a server experienced slow performance, the disk I/O subsystem was fixed and performance restored, and performance later degraded again, the new problem could be due to out-of-date statistics or other issues. In these ambiguous cases, you may be asked to open a new case.

What do the Microsoft case severities mean?

A – Critical – 1 hour initial response goal
·         High-impact problem in which production, operations, or development is proceeding but is severely impacted, or where production and/or profitability will be severely impacted within days.

B – Urgent – 4 hour initial response goal
·         High-impact problem where production is proceeding, but in a significantly impaired fashion.
·         Time-sensitive issue important to long-term productivity that is not causing an immediate work stoppage.

C – Important – 24 hour initial response goal
·         Important issue which does not have significant current productivity impact for the customer.
D – Monitor
·         This severity is used for support issues that are waiting, usually to confirm a provided solution or for an intermittent problem to recur so that needed data can be collected.

The Support Engineer for my support issue said that I need to open an additional support incident. Why should I have to open another incident?

Microsoft defines an incident or case as:
·         A single support issue and the commercially reasonable efforts to resolve it.
·         A problem that cannot be broken down into subordinate problems.

NOTE: The customer and the Support Engineer must agree on the case resolution. Also be aware that Microsoft does not charge for code defects, documentation errors, and product "wishes". This is documented on Microsoft's public website at http://support.microsoft.com/?LN=en-us&sd=tech&scid=gp%3Ben-us%3Bofferprophone&x=7&y=16#faq607

EXAMPLE: A support issue is opened for slow performance of an application with a SQL Server back end. Troubleshooting identifies a specific stored procedure as the problem, and updating statistics and adding covering indexes resolves the issue. However, the same server is also experiencing slow performance for merge replication. This is considered a different issue, unrelated to the one for which the support case was opened.

I'm interested in a higher level of service and a closer relationship with Microsoft.

Microsoft offers a managed support relationship known as Premier Support, designed to meet the needs of an enterprise customer. For an overview of the service elements, please see http://www.microsoft.com/services/microsoftservices/srv_prem.mspx or call 1-800-936-3500 to request contact by a Premier Support representative.

Advisory vs. Break-Fix

Microsoft provides support services for customers with problems encountered implementing or using Microsoft products. These are commonly known as "break-fix" issues.
Occasionally customers need short-term (less than 40 hours) consultative assistance, as opposed to the standard break-fix service. This short-term consultative assistance is known as Advisory service. It is available to Microsoft's Premier customers as an element of their support contracts, and to other customers on an hourly charge basis. For more information on Advisory Services, please see the Advisory Services section at http://support.microsoft.com/default.aspx?scid=fh;EN-US;OfferProPhone or go to the Advisory Services page, http://support.microsoft.com/gp/advisoryservice .

Hope this helps!

Posted By: Microsoft SQL Server Support Team
Source: blogs.msdn.com

How to configure SQL Server to listen on different ports on different IP addresses?
Technorati Tags: SQL Port

The following post describes how you can configure your SQL Server to listen on different ports on the different IP addresses available on your system. This procedure applies to both SQL Server 2005 and SQL Server 2008.

Case 1: SQL Server is installed in an environment where the IP addresses have not changed since it was originally installed.

1) Open SQL Server Configuration Manager.
2) Ensure that the TCP/IP protocol is enabled. By default, all the IP addresses listen on the same port or ports configured in SQL Server Configuration Manager. The SQL Server error log will have an entry like the following:
2009-07-15 17:40:06.39 Server      Server is listening on [ 'any' <ipv4> 2675].
3) In the TCP/IP properties, set 'Listen All' to 'No'.
4) Go to the IP Addresses tab for the instance, set Enabled to Yes, and set the TCP Port field for each specific IP address to the desired port.
5) Restart SQL Server. Now you should see entries similar to the following in the SQL Server error log:
2009-07-15 18:03:10.58 Server      Server is listening on [ x.y.z.w <ipv4> 2000].
2009-07-15 18:03:10.59 Server      Server is listening on [ x.y.z.v <ipv4> 2001].
As you can see from the above, each of the IP addresses is listening on a different port.

Case 2: SQL Server is installed in an environment where the IP addresses change dynamically, but the number of IPs active on the system stays the same (for example, there are two IPs active on the system, but because of lease expiry or a move to a new subnet, one or both of the IPs changed). In this case, get the output of ipconfig /all on the system, and edit one or all of the IP addresses as needed with the new IP addresses that are active on the system, using the same procedure discussed in Case 1.
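Besides reading the error log, you can verify from a query window which address and port your own connection came in on. A quick check (assumes a TCP connection and VIEW SERVER STATE permission):

```sql
-- Shows the server-side IP address and TCP port of the current connection
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

For connections over shared memory or named pipes these columns are NULL, so run this over a TCP connection to the port you configured.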
Case 3: You add an additional IP address to the system.

In this scenario, you will not be able to use the procedure documented in Case 1 or Case 2, because Configuration Manager's IP address list only has as many entries as the number of IPs that SQL Server found when it was installed. Instead, you can take the following steps to update the registry values SQL Server reads in order to listen on different ports on different IP addresses.

Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall your operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. For more information about how to back up, restore, and modify the registry, see the following article in the Microsoft Knowledge Base: 322756 How to back up and restore the registry in Windows.

1) Navigate to the following registry key on the SQL Server machine:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\!INSTANCEID!\MSSQLServer\SuperSocketNetLib\Tcp\
Note: !INSTANCEID! is a placeholder for your SQL Server instance.
2) Right-click on IP1 and export the registry key as "SQLIP template.reg".
3) Edit the key name and IP address in the .reg file that you exported in step 2 with Notepad, using the new IP address. (You can get the list of IP addresses on the system by executing the ipconfig /all > ipconfig.txt command from the command prompt.)
The contents will look as follows:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQL10\MSSQLServer\SuperSocketNetLib\Tcp\IP3]  --> change this name to a new value, for example IP4
"Enabled"=dword:00000000
"Active"=dword:00000001
"TcpPort"="2001"
"TcpDynamicPorts"=""
"DisplayName"="Specific IP Address"
"IpAddress"="a.b.c.d"  --> update this with the new IP address value

4) After editing the file, save it with a different name, for example "new IP4.reg".
5) Double-click the .reg file from step 4 to import the key as a new entry under the [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQL10\MSSQLServer\SuperSocketNetLib\Tcp] registry key.
6) Repeat steps 3, 4, and 5 for any other new IP addresses that you want to configure on the system.
Note: After adding the above registry keys, the new IP addresses should now show up in SQL Server Configuration Manager.
7) Optional: Clean up any IPs that are no longer active by deleting the associated <IP_n> registry keys.
8) In SQL Server Configuration Manager, on the IP Addresses tab, ensure that only the addresses listed in the ipconfig output on the system have the Enabled property set to Yes, and that the others are set to No.
Note: If 'Listen All' is set to No and the IP Addresses tab has some IP addresses with 'Enabled' set to 'Yes' whose actual IPs are no longer active on the system, the SQL Server service fails to start, with error messages like the following logged to the SQL Server error log:

2009-07-16 15:43:07.87 Server      Server is listening on [ 127.0.0.1 <ipv4> 2753].
2009-07-16 15:43:07.89 Server      Error: 26024, Severity: 16, State: 1.
2009-07-16 15:43:07.89 Server      Server failed to listen on x.y.z.w <ipv4> 2000. Error: 0x2741. To proceed, notify your system administrator.
2009-07-16 15:43:07.95 Server      Error: 17182, Severity: 16, State: 1.
2009-07-16 15:43:07.95 Server      TDSSNIClient initialization failed with error 0x2741, status code 0xa. Reason: Unable to initialize the TCP/IP listener. The requested address is not valid in its context. 2009-07-16 15:43:07.95 Server      Error: 17182, Severity: 16, State: 1. 2009-07-16 15:43:07.95 Server      TDSSNIClient initialization failed with error 0x2741, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. The requested address is not valid in its context. 2009-07-16 15:43:07.95 Server      Error: 17826, Severity: 18, State: 3. 2009-07-16 15:43:07.95 Server      Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log. 2009-07-16 15:43:07.95 Server      Error: 17120, Severity: 16, State: 1. 2009-07-16 15:43:07.95 Server      SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems. Case 4: SQL Server is installed in a clustered environment. On cluster, you cannot configure SQL Server to listen on a specific IP addresses. You must chose IPALL. The IP addresses on which the cluster instance will be listening on is determined by cluster resources (configurable through Cluster Administrator, by adding IP Address resources under SQL Network Name resource). Additional links: How to: Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager) 294453    How to set up SQL Server to listen on multiple static TCP ports 823938    How to configure an instance of SQL Server to listen on a specific TCP port or a dynamic port Configure and manage SQL Server 2005 from your application using WMI IPv6 and SQL Server 2005 Connecting Using IPv6         Ramu Konidena Microsoft SQL Server Support Technical Lead
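The template edit described in steps 3 through 5 above amounts to a simple text substitution, and can be sketched in a short script. This is only an illustration: the instance name MSSQL10.SQL10, the key names, and the `make_ip_entry` helper are examples taken from or invented for this post, not part of any SQL Server tooling, and editing the file by hand in Notepad works just as well.

```python
# Sketch of steps 3-5: take an exported IP<n> .reg entry and produce a new
# key (e.g. IP4) with an updated address. Illustrative only; the instance
# name and make_ip_entry are examples, not shipped SQL Server tooling.

TEMPLATE = (
    'Windows Registry Editor Version 5.00\n\n'
    '[HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Microsoft SQL Server'
    '\\MSSQL10.SQL10\\MSSQLServer\\SuperSocketNetLib\\Tcp\\IP3]\n'
    '"Enabled"=dword:00000000\n'
    '"Active"=dword:00000001\n'
    '"TcpPort"="2001"\n'
    '"TcpDynamicPorts"=""\n'
    '"DisplayName"="Specific IP Address"\n'
    '"IpAddress"="a.b.c.d"\n'
)

def make_ip_entry(template, old_key, new_key, new_ip):
    """Rename the IP<n> key and substitute the new address (step 3)."""
    entry = template.replace('\\Tcp\\%s]' % old_key, '\\Tcp\\%s]' % new_key)
    return entry.replace('"IpAddress"="a.b.c.d"', '"IpAddress"="%s"' % new_ip)

new_entry = make_ip_entry(TEMPLATE, 'IP3', 'IP4', '10.0.0.42')
# Step 4: save the result under a new name, e.g. 'new IP4.reg', then
# double-click it to import the key (step 5).
print(new_entry)
```

After importing the generated entry, the new IP address should appear in SQL Server Configuration Manager as described in the note under step 6.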
Source: blogs.msdn.com

Tools of the Trade: Part II – The Repro
In this series of posts, we're going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve.

This post might seem like a bit of a stretch. For one thing, it's not about a SQL related tool. That's not such a big deal – there are plenty of tools that we use that have applications beyond SQL. You can expect more posts in this series about tools that have applications far beyond troubleshooting databases and database connectivity. However, this isn't about a specific, pre-existing tool either. Call it a meta-tool, I guess. Pseudo-tool. Something along those lines. One way or the other, even if it isn't already built for you, a reproduction can be one of the most invaluable and time-saving tools you'll use to troubleshoot your most difficult issues.

The Repro

In the Product Support Services world at Microsoft, we refer to a simplified, stripped-down series of steps that lead to a specific behavior as a Repro. This may simply be a step-by-step to recreate an error, or a simple application that recreates an unusual behavior. When we are working on an issue that is buried in a hulking production application, we often like to tackle it by attempting to recreate the issue outside of the in vivo environment. There are a lot of benefits to investing some time in developing a Repro.

It's probably pretty obvious why you would not want to troubleshoot in a production environment. I personally like to avoid debugging anything that generates revenue – it just seems to ease the tension. There's more to it, though – not only do you avoid tinkering with your company's livelihood, a Repro also gives you the advantage of portability. Whatever technology you're working with, it is much, much easier to get the assistance of Subject Matter Experts if you can actually send them something to play with.
Another big benefit of the Repro is that it helps you to focus on the issue. This is particularly helpful when you need to use other tools to analyze the behavior you're dealing with. Whatever kind of tracing you're doing, you are bound to pick up extraneous information if you're running the trace on a production application. Unless you're very fond of wading through gigantic logs looking for clues, a Repro really helps you avoid getting lost in superfluous data.

Finally, a Repro can save you a huge amount of time. It gets irritating having to click through the same series of screens and options, over and over again, as you troubleshoot an issue in a full-scale application. Being able to make a change to code or the environment and quickly test dramatically improves the speed of the troubleshooting process. It also has the added benefit of keeping debugging from getting boring.

That's my case for investing the time in creating a Repro. The next question is: what is the best way to build one? That's going to depend greatly on the issue and the environment.

One thing that I like to do is work from the inside out. In other words, if I know that a specific line of code is throwing an exception, I'll create an application with the smallest amount of code I can use to execute that line. If that doesn't reproduce the behavior, I'll add more code from the application around it, building up the Repro until it shows the behavior I'm looking for.

Another thing I've found important is to attempt my Repro in an environment that's as similar to production as possible. I do not recommend attempting to reproduce an issue occurring in a data-access application running on Windows Server 2003 on your grandma's Mac. It'll be much easier to create a Repro on something that's as close to the original scenario as possible, and then move it to other environments to see if the behavior is any different.
One of the Technical Leads on our team has a good tip for reproducing issues that occur intermittently. For example, say you find that every so often your application gets an error when attempting to connect to a data store. In this situation, it's probably worth taking the connection you're trying to make and wrapping it in a tight loop. It'll be much easier to launch this and catch the failure than it will be to keep running and re-running a small repro that makes the connection a single time and exits. If you are troubleshooting an issue occurring in a web application, you might try a tool like Application Center Test to execute your repro many times simultaneously (see this link for a discussion of using ACT).

Finally, I really recommend carefully documenting your repro. Again, it makes it much easier to get the help of others, and it will also prevent you from forgetting what you had done on Friday after a long weekend. Even if you really cannot find a way to reproduce the issue in a stripped-down environment, documenting the steps an end user takes when the issue occurs can be valuable for clarifying the problem statement and focusing your efforts.

Posted By: Justin Trobec
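The tight-loop tip above can be sketched in a few lines. This is a generic illustration, not code from the post: `flaky_connect` is a hypothetical stand-in for whatever intermittently failing call you are chasing, and the loop simply counts failures instead of stopping at the first one.

```python
# Illustrative sketch of the tight-loop repro technique for intermittent
# failures. flaky_connect is a hypothetical placeholder, not a real API.
import random

def repro_loop(op, attempts=10_000):
    """Run op() in a tight loop, counting failures instead of stopping."""
    failures = 0
    for _ in range(attempts):
        try:
            op()
        except Exception:
            failures += 1  # in a real repro: log state or break into a debugger
    return failures

def flaky_connect():
    # Simulates a connection attempt that fails roughly 1% of the time.
    if random.random() < 0.01:
        raise ConnectionError("connection refused")

print(repro_loop(flaky_connect))
```

Running thousands of attempts in one launch makes a failure that shows up once in a hundred connections almost certain to appear, which is exactly why this beats re-running a single-shot repro by hand.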
Source: blogs.msdn.com
