Pages

Monday, October 11, 2010

Creating an HTTP endpoint fails with error 7850.


Creating an HTTP endpoint in SQL Server 2005 or SQL Server 2008 may fail with the following error messages:

Msg 7850, Level 16, State 1, Line 1
The user 'domain\myuser' does not have permission to register endpoint 'training_sql_endpoint' on the specified URL. Please ensure the URL refers to a namespace that is reserved for listening by SQL.
Msg 7807, Level 16, State 1, Line 1
An error ('0x80070005') occurred while attempting to register the endpoint 'training_sql_endpoint'.

The error message is actually incorrect in this context: it should instead report that the problem is with the SQL Server service account, and direct you to reserve the namespace explicitly. This is what SQL Server 2008 now reports:

The SQL Server Service account does not have permission to register the supplied URL on the endpoint '%.*ls'. Use sp_reserve_http_namespace to explicitly reserve the URL namespace before you try to register the URL again.

When you run CREATE ENDPOINT to create an HTTP endpoint, the registration is done under the context of the SQL Server service account. If the namespace reservation does not already exist, SQL Server will implicitly create the reservation; however, this requires that the SQL Server service account have local administrator privileges on the computer. If the service account is not a local administrator, on SQL Server 2005 the statement fails with the message noted earlier. To resolve this you have two options:

1. Add the SQL Server service account to the local Administrators group, restart, and then run CREATE ENDPOINT again.
2. Or, explicitly reserve the namespace *before* you run CREATE ENDPOINT, while logged on as a Windows Authentication user that is a local administrator on the computer and a sysadmin in SQL Server. For example:

sp_reserve_http_namespace N'http://*:2050/sql/myfolder'

Then, when you run CREATE ENDPOINT, the SQL Server service account will not have to reserve the namespace because it already exists, and the statement can proceed with creating the endpoint.
Note that when you reserve a namespace explicitly, you need to be sure that the string of the namespace you reserve matches the parameters in the CREATE ENDPOINT statement. So for the namespace above, the CREATE ENDPOINT would need to look like the following for SQL Server to match it up correctly:

CREATE ENDPOINT [myendpoint]
    STATE = STARTED
AS HTTP (
    PATH = N'/sql/myfolder',
    PORTS = (CLEAR),
    AUTHENTICATION = (NTLM, KERBEROS, INTEGRATED),
    SITE = N'*',
    CLEAR_PORT = 2050,
    COMPRESSION = DISABLED
)

The following link has more on this in "Identifying the Namespace for an Endpoint": http://msdn.microsoft.com/en-us/library/ms190614.aspx
Source: blogs.msdn.com

After installation of .NET Framework 3.5 SP1 (or .NET Framework 2.0 SP2), SSIS packages using third-party ODBC drivers may fail
The Issue: In the last few months, we have seen quite a few cases where, after installing .NET Framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), SSIS packages using certain third-party ODBC drivers and importing character data fail with the error below.

Error Message:

[DataReader Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "DataReader Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number>)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

The error messages may vary depending on whether you are using the DataReader Source in Visual Studio 2005 or the ADO .NET Source in Visual Studio 2008:

[ADO NET Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "ADO NET Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number>)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

Error 0xc0209029: Data Flow Task: The "component "<ComponentName>" (<number>)" failed because error code 0x80131937 occurred...

[ADO NET Source [<number>]] Error: The component "ADO NET Source" (<number>) was unable to process the data. Pipeline component has returned HRESULT error code 0xC0209029 from a method call.

[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "ADO NET Source" (<number>) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
There may be error messages posted before this with more information about the failure.

NOTE: Not all ODBC drivers are affected by this problem. It primarily affects SSIS packages that import character data using certain third-party ODBC drivers, through the ADO.NET connection manager and the System.Data.Odbc managed provider:

In Visual Studio 2005: SSIS DataReader Source --> ADO.NET connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> Vendor ODBC driver --> Character data

In Visual Studio 2008: ADO .NET Source --> ADO.NET connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> Vendor ODBC driver --> Character data

Cause: The problem started with .NET Framework 2.0 SP2, after a change was made in the way OdbcDataReader checks for NULLs in SequentialAccess mode: it now calls SQLGetData with a BufferLength argument of 0 for variable-width data types. Certain ODBC drivers are not compatible with this pattern. For more information on this function and its arguments, see SQLGetData Function: http://msdn.microsoft.com/en-us/library/ms715441.aspx. Note that .NET Framework 2.0 SP2 is also included in the .NET Framework 3.5 SP1 components.

Microsoft is still investigating the issue and understands the difficult impact that this change has on customers. Determining the full impact of the issue (for example, all the data types that can cause it) is still a work in progress. You can track the progress at the Connect site below:
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=416455

In the meantime, we suggest that you use one of the workarounds below.

Workaround #1 (recommended): Modify the SSIS package to redirect the DataReader (or ADO .NET Source in Visual Studio 2008) error output to a text file, thereby allowing the package to succeed, with the failing rows diverted to the error output file.

A. Go to the Data Flow Task.
B. On the DataReader Source Properties, go to the 'Input and Output Properties' tab -> DataReader Output -> Output Columns.
C. Select the columns (only those with character data types) and change the 'ErrorRowDisposition' and 'TruncationRowDisposition' properties to RD_RedirectRow.
D. Redirect the DataReader error output to a text file.
E. Repeat A-D for all Data Flow Tasks that fail with the above error.

Workaround #2: If you need a quick fix (because this is a production environment and you cannot afford to modify SSIS packages at this time), you may need to uninstall the components of .NET Framework 3.5 SP1 or .NET Framework 2.0 SP2, depending on which framework version you installed that caused the issue.

a. If your issue is caused by installing only .NET Framework 2.0 SP2 (for example, from the Microsoft download site http://www.microsoft.com/downloads/details.aspx?familyid=5B2C0358-915B-4EB5-9B1D-10E506DA9D0F&displaylang=en), then you can uninstall .NET Framework 2.0 SP2.
b. If your issue is caused by installing .NET Framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), remove the .NET Framework 3.5 SP1 components and bring the system back to the state it was in before the update. This process involves more than simply uninstalling .NET Framework 3.5 SP1, as discussed in the following blogs:
http://blogs.msdn.com/dougste/archive/2008/08/19/uninstalling-net-framework-3-5-sp1.aspx
http://blogs.msdn.com/astebner/archive/2008/08/01/8803442.aspx

Posted By: Azim Uddin & Enamul Khaleque
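The effect of the "redirect row" disposition that Workaround #1 configures can be illustrated with a small language-neutral sketch. This is plain Python, not SSIS code: the function names and the sample rows are invented for illustration only. The idea is that rows whose character conversion fails are diverted to an error output instead of failing the whole data flow.

```python
# Conceptual sketch only -- NOT SSIS API. It mimics the "RD_RedirectRow"
# disposition: rows that fail conversion go to an error output instead of
# aborting the pipeline.

def run_data_flow(rows, convert):
    """Process rows; divert any row whose conversion fails."""
    processed, error_output = [], []
    for row in rows:
        try:
            processed.append(convert(row))
        except (ValueError, UnicodeDecodeError):
            # SSIS would write these rows to the flat-file error output.
            error_output.append(row)
    return processed, error_output

# Example: byte strings from a driver, one of which is not valid UTF-8.
rows = [b"alpha", b"\xff\xfe", b"beta"]
good, errors = run_data_flow(rows, lambda r: r.decode("utf-8"))
```

With the disposition set to redirect, the two valid rows flow through and the invalid one lands in the error output, so the package as a whole succeeds.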
Source: blogs.msdn.com

Step-by-step N-tier configuration of Sync Services for ADO.NET 2.0
Recently I have worked on a couple of cases where customers were trying to use an N-tier configuration of Sync Services for ADO.NET on IIS. In this blog we will use IIS to set up an N-tier configuration of Sync Services for ADO.NET version 2.0, which comes as part of Microsoft Sync Framework 1.0.

Preparing the environment: We need a development machine that we will use to develop the application, a middle-tier server where IIS is installed and configured for WCF services, and a database server. If you like, you can use the same development box for the IIS and database servers too.

a) Client tier. We will develop the application (client and WCF service) and run the client part of the application on this machine. We will use Visual Studio 2008 SP1, so it should be installed on this machine. Install Sync Services on this client machine. Sync Services for ADO.NET version 2.0 comes with Microsoft Sync Framework 1.0, which gets installed if you install SQL Server 2008 or Visual Studio 2008 SP1. You can also download it from http://www.microsoft.com/downloads/details.aspx?FamilyId=C88BA2D1-CEF3-4149-B301-9B056E7FB1E6&displaylang=en. Install SQL Server Compact 3.5 Service Pack 1 on this client machine if it is not already there. SQL Server Compact is available in three ways: integrated with SQL Server 2008 or later versions, integrated with Microsoft Visual Studio 2008 or later versions, and as a download from the Web site at http://www.microsoft.com/sqlserver/2005/en/us/compact-downloads.aspx.

b) Middle tier. If you want to use a separate middle-tier server for IIS to run the WCF service, make sure you have IIS installed and configured on this box to run WCF. Install Sync Services on this machine too. I have used the same development machine for the middle tier, so I did not have to install it again.

c) Database server. Install a version of SQL Server other than SQL Server Compact to act as the server database. If you like, you can use the SQL Server Express edition that installs with Visual Studio.
I have used Developer Edition of SQL Server 2008 SP1. We are going to use a sample database that should be prepared by following the article "Setup Scripts for Sync Services How-to Topics" at http://msdn.microsoft.com/en-us/library/bb726041.aspx. Copy the T-SQL statements of "Custom Change Tracking for Offline Scenarios", which create a custom change tracking infrastructure. Once you run this script successfully from a query window in your SQL Server, it will create a new database by the name SyncSamplesDb.

Developing and deploying the WCF service:

Create a WCF Service Application in Visual Studio 2008. (You may need to run Visual Studio as Administrator to create virtual directories in IIS.) Type "WcfForSync" as the name of the project and click the "OK" button. It will create the WCF project with IService1.cs, Service1.svc, Service1.svc.cs and Web.config files, along with other files and folders. I will keep these default files for simplicity.

Open the IService1.cs file by double-clicking it and replace the code with the code below. (Note that the code samples in the following sections have been taken from the MSDN articles mentioned in the reference section at the bottom, with little modification.)

using System.Collections.ObjectModel;
using System.ServiceModel;
using System.Data;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

namespace WcfForSync
{
    [ServiceContract]
    public interface IService1
    {
        [OperationContract()]
        SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession);

        [OperationContract()]
        SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession);

        [OperationContract()]
        SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession);

        [OperationContract()]
        SyncServerInfo GetServerInfo(SyncSession syncSession);
    }
}

Next add a class file by the name SampleServerSyncProvider.cs and replace the code in this file with the code below:

using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

namespace WcfForSync
{
    //Create a class that is derived from
    //Microsoft.Synchronization.Server.DbServerSyncProvider.
    public class SampleServerSyncProvider : DbServerSyncProvider
    {
        public SampleServerSyncProvider()
        {
            //Create a connection to the sample server database.
            Utility util = new Utility();
            SqlConnection serverConn = new SqlConnection(util.ServerConnString);
            this.Connection = serverConn;

            //Create a command to retrieve a new anchor value from
            //the server. In this case, we use a timestamp value
            //that is retrieved and stored in the client database.
            //During each synchronization, the new anchor value and
            //the last anchor value from the previous synchronization
            //are used: the set of changes between these upper and
            //lower bounds is synchronized.
            //
            //SyncSession.SyncNewReceivedAnchor is a string constant;
            //you could also use @sync_new_received_anchor directly in
            //your queries.
            SqlCommand selectNewAnchorCommand = new SqlCommand();
            string newAnchorVariable = "@" + SyncSession.SyncNewReceivedAnchor;
            selectNewAnchorCommand.CommandText = "SELECT " + newAnchorVariable + " = min_active_rowversion() - 1";
            selectNewAnchorCommand.Parameters.Add(newAnchorVariable, SqlDbType.Timestamp);
            selectNewAnchorCommand.Parameters[newAnchorVariable].Direction = ParameterDirection.Output;
            selectNewAnchorCommand.Connection = serverConn;
            this.SelectNewAnchorCommand = selectNewAnchorCommand;

            //Create a SyncAdapter for the Customer table by using
            //the SqlSyncAdapterBuilder:
            //  * Specify the base table and tombstone table names.
            //  * Specify the columns that are used to track when
            //    changes are made.
            //  * Specify download-only synchronization.
            //  * Call ToSyncAdapter to create the SyncAdapter.
            //  * Specify a name for the SyncAdapter that matches
            //    the name specified for the corresponding SyncTable.
            //    Do not include the schema names (Sales in this case).
            SqlSyncAdapterBuilder customerBuilder = new SqlSyncAdapterBuilder(serverConn);
            customerBuilder.TableName = "Sales.Customer";
            customerBuilder.TombstoneTableName = customerBuilder.TableName + "_Tombstone";
            customerBuilder.SyncDirection = SyncDirection.DownloadOnly;
            customerBuilder.CreationTrackingColumn = "InsertTimestamp";
            customerBuilder.UpdateTrackingColumn = "UpdateTimestamp";
            customerBuilder.DeletionTrackingColumn = "DeleteTimestamp";

            SyncAdapter customerSyncAdapter = customerBuilder.ToSyncAdapter();
            customerSyncAdapter.TableName = "Customer";
            this.SyncAdapters.Add(customerSyncAdapter);
        }
    }

    public class Utility
    {
        //Return the server connection string.
        public string ServerConnString
        {
            get { return @"Data Source=SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; }
        }
    }
}

Note: You need to update the connection string in the above Utility class to connect to your SQL Server.

Open the Service1.svc.cs file in the project by double-clicking it and replace the existing code with the code below:

using System.Collections.ObjectModel;
using System.ServiceModel;
using System.Data;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

namespace WcfForSync
{
    // NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config.
    public class Service1 : IService1
    {
        private SampleServerSyncProvider _serverSyncProvider;

        public Service1()
        {
            this._serverSyncProvider = new SampleServerSyncProvider();
        }

        [System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public virtual SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)
        {
            return this._serverSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession);
        }

        [System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public virtual SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession)
        {
            return this._serverSyncProvider.GetChanges(groupMetadata, syncSession);
        }

        [System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public virtual SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession)
        {
            return this._serverSyncProvider.GetSchema(tableNames, syncSession);
        }

        [System.Diagnostics.DebuggerNonUserCodeAttribute()]
        public virtual SyncServerInfo GetServerInfo(SyncSession syncSession)
        {
            return this._serverSyncProvider.GetServerInfo(syncSession);
        }
    }
}

The application requires references to Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll and Microsoft.Synchronization.Data.Server.dll. Right-click "References" in the project and click "Add Reference...". Select Microsoft.Synchronization.dll (location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\x86), and then add the references to the other two DLLs (location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\ADO.NET\V2.0\x86).

Now you should be able to build the project successfully. Once the build is successful, publish the WCF service to IIS: go to the properties of the project and, under the Web tab, type the IIS server information (middletierserver) and click "Create Virtual Directory". From IIS you can see that a virtual directory has been created. If you browse the WCF service (http://middletierserver/WcfForSync/Service1.svc) you should get the service page. I have used the same development machine for IIS, so it is showing "localhost" in the URL.

Note: When trying to browse the WCF service, I have noticed the following error on some IIS machines:

HTTP Error 404.3 - Not Found
The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map. Detailed Error Information: Module StaticFileModule.

If you encounter this error, please take the necessary action as per the article at http://blogs.msdn.com/rjohri/archive/2009/06/29/the-page-you-are-requesting-cannot-be-served-because-of-the-extension-configuration.aspx

Developing the client application and consuming the WCF service:

Create a Console application in Visual Studio. Once the project is created, reference the DLLs Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll, and Microsoft.Synchronization.Data.SqlServerCe.dll as before.
You also need to reference System.Data.SqlServerCe.dll (you should have this DLL on your machine once you have installed SQL Server Compact 3.5 Service Pack 1; its location on my machine is C:\Program Files (x86)\Microsoft SQL Server Compact Edition\v3.5\Desktop).

Now we need to add a service reference to the WCF service that we developed and deployed before. Right-click "References" in this client project and select "Add Service Reference...". In the Add Service Reference screen, type the URL of the WCF service, click the "Go" button, and then click "OK"; it will create the service reference.

Replace the code in the file Program.cs with the following code:

using System;
using System.IO;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlServerCe;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

namespace ClientForSync
{
    class Program
    {
        static void Main(string[] args)
        {
            //The Utility class handles all functionality that is not
            //directly related to synchronization, such as holding connection
            //string information and making changes to the server database.
            Utility util = new Utility();

            //The SampleStats class handles information from the SyncStatistics
            //object that the Synchronize method returns.
            SampleStats sampleStats = new SampleStats();

            //Delete and re-create the database. The client synchronization
            //provider also enables you to create the client database
            //if it does not exist.
            ////util.SetClientPassword();
            util.RecreateClientDatabase();

            //Initial synchronization. Instantiate the SyncAgent
            //and call Synchronize.
            SampleSyncAgent sampleSyncAgent = new SampleSyncAgent();
            SyncStatistics syncStatistics = sampleSyncAgent.Synchronize();
            sampleStats.DisplayStats(syncStatistics, "initial");

            //Make changes on the server.
            util.MakeDataChangesOnServer();

            //Subsequent synchronization.
            syncStatistics = sampleSyncAgent.Synchronize();
            sampleStats.DisplayStats(syncStatistics, "subsequent");

            //Return server data back to its original state.
            util.CleanUpServer();

            //Exit.
            Console.Write("\nPress Enter to close the window.");
            Console.ReadLine();
        }
    }

    //Create a class that is derived from
    //Microsoft.Synchronization.SyncAgent.
    public class SampleSyncAgent : SyncAgent
    {
        public SampleSyncAgent()
        {
            //Instantiate a client synchronization provider and specify it
            //as the local provider for this synchronization agent.
            this.LocalProvider = new SampleClientSyncProvider();

            //The remote provider now references a proxy instead of directly
            //referencing the server provider. The proxy is created by passing
            //a reference to a WCF service.
            ServiceReference1.Service1Client serviceProxy = new ServiceReference1.Service1Client();
            this.RemoteProvider = new ServerSyncProviderProxy(serviceProxy);

            //Add the Customer table: specify a synchronization direction of
            //DownloadOnly.
            SyncTable customerSyncTable = new SyncTable("Customer");
            customerSyncTable.CreationOption = TableCreationOption.DropExistingOrCreateNewTable;
            customerSyncTable.SyncDirection = SyncDirection.DownloadOnly;
            this.Configuration.SyncTables.Add(customerSyncTable);
        }
    }

    //Create a class that is derived from
    //Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider.
    //You can just instantiate the provider directly and associate it
    //with the SyncAgent, but you could use this class to handle client
    //provider events and other client-side processing.
    public class SampleClientSyncProvider : SqlCeClientSyncProvider
    {
        public SampleClientSyncProvider()
        {
            //Specify a connection string for the sample client database.
            Utility util = new Utility();
            this.ConnectionString = util.ClientConnString;
        }
    }

    //Handle the statistics that are returned by the SyncAgent.
    public class SampleStats
    {
        public void DisplayStats(SyncStatistics syncStatistics, string syncType)
        {
            Console.WriteLine(String.Empty);
            if (syncType == "initial")
            {
                Console.WriteLine("****** Initial Synchronization ******");
            }
            else if (syncType == "subsequent")
            {
                Console.WriteLine("***** Subsequent Synchronization ****");
            }
            Console.WriteLine("Start Time: " + syncStatistics.SyncStartTime);
            Console.WriteLine("Total Changes Downloaded: " + syncStatistics.TotalChangesDownloaded);
            Console.WriteLine("Complete Time: " + syncStatistics.SyncCompleteTime);
            Console.WriteLine(String.Empty);
        }
    }

    public class Utility
    {
        //Return the client connection string with the password.
        //Don't forget to create the folder.
        public string ClientConnString
        {
            get { return @"Data Source='D:\SyncServices\SyncSampleClient.sdf'; Password=xxxxxxx"; }
        }

        //Return the server connection string.
        public string ServerConnString
        {
            get { return @"Data Source=SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; }
        }

        //Make server changes that are synchronized on the second
        //synchronization.
        public void MakeDataChangesOnServer()
        {
            int rowCount = 0;
            using (SqlConnection serverConn = new SqlConnection(this.ServerConnString))
            {
                SqlCommand sqlCommand = serverConn.CreateCommand();
                sqlCommand.CommandText =
                    "INSERT INTO Sales.Customer (CustomerName, SalesPerson, CustomerType) " +
                    "VALUES ('Cycle Mart', 'James Bailey', 'Retail') " +
                    "UPDATE Sales.Customer " +
                    "SET SalesPerson = 'James Bailey' " +
                    "WHERE CustomerName = 'Tandem Bicycle Store' " +
                    "DELETE FROM Sales.Customer WHERE CustomerName = 'Sharp Bikes'";
                serverConn.Open();
                rowCount = sqlCommand.ExecuteNonQuery();
                serverConn.Close();
            }
            Console.WriteLine("Rows inserted, updated, or deleted at the server: " + rowCount);
        }

        //Revert changes that were made during synchronization.
        public void CleanUpServer()
        {
            using (SqlConnection serverConn = new SqlConnection(this.ServerConnString))
            {
                SqlCommand sqlCommand = serverConn.CreateCommand();
                sqlCommand.CommandType = CommandType.StoredProcedure;
                sqlCommand.CommandText = "usp_InsertSampleData";
                serverConn.Open();
                sqlCommand.ExecuteNonQuery();
                serverConn.Close();
            }
        }

        //Delete and re-create the client database.
        public void RecreateClientDatabase()
        {
            using (SqlCeConnection clientConn = new SqlCeConnection(this.ClientConnString))
            {
                if (File.Exists(clientConn.Database))
                {
                    File.Delete(clientConn.Database);
                }
            }
            SqlCeEngine sqlCeEngine = new SqlCeEngine(this.ClientConnString);
            sqlCeEngine.CreateDatabase();
        }
    }
}

Now we have to update the connection string information in the code: in the Utility class above, update ClientConnString for the client database (the SQL Server Compact .sdf file; don't forget to create the folder it points to) and ServerConnString for your SQL Server.

Build this client project; at this point it should compile successfully.

Running/testing synchronization:

Run the Console application and you should see the result of the synchronization of this sample. You can run a query against the Customer table in SyncSamplesDb to check the changes by setting a breakpoint in the client app, and by running a SQL Profiler trace you can see the activity in SQL Server. Happy synchronization!
References:
Overview (Synchronization Services): http://msdn.microsoft.com/en-us/library/bb726031(SQL.90).aspx
Architecture and Classes (Synchronization Services): http://msdn.microsoft.com/en-us/library/bb726025(SQL.90).aspx
Getting Started: A Synchronization Services Application: http://msdn.microsoft.com/en-us/library/bb726015.aspx
How to: Configure N-Tier Synchronization: http://msdn.microsoft.com/en-us/library/bb902831.aspx
A large-scale implementation of Sync Services: http://blogs.msdn.com/sync/archive/2009/10/19/sharepoint-2010-now-integrates-microsoft-sync-framework.aspx
Microsoft Sync Framework 2.0 redistributable package: http://www.microsoft.com/downloads/details.aspx?FamilyId=109DB36E-CDD0-4514-9FB5-B77D9CEA37F6&displaylang=en

Author: Faiz (MSFT), SQL Developer Engineer
Reviewed by: Enamul (MSFT), SQL Developer Technical Lead; Azim (MSFT), SQL Developer Technical Lead; Srini (MSFT), SQL Developer Engineer
Source: blogs.msdn.com

WHITEPAPER - Online Indexing Operations in SQL Server 2005
I wanted to post a quick note about a new whitepaper from Microsoft. Introduced in Microsoft SQL Server 2005 Enterprise Edition, the online index feature provides a powerful way to perform maintenance operations such as rebuilding or creating indexes in a production system without sacrificing DML concurrency: users are not blocked from querying and updating the underlying table during the index operation. This white paper is targeted at database administrators who want to use the online index feature of SQL Server 2005 Enterprise Edition to increase server availability in their production environment and need to understand the advantages and disadvantages of using this feature. You can read about it here.

Posted By: Sanchan Saxena
Source: blogs.msdn.com

Tools of the Trade: Part II – The Repro
In this series of posts, we're going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve.

This post might seem like a bit of a stretch. For one thing, it's not about a SQL-related tool. That's not such a big deal -- there are plenty of tools that we use that have applications beyond SQL, and you can expect more posts in this series about tools that have applications far beyond troubleshooting databases and database connectivity. However, this isn't about a specific, pre-existing tool either. Call it a meta-tool, I guess. Pseudo-tool. Something along those lines. One way or the other, even if it isn't already built for you, a reproduction can be one of the most invaluable and time-saving tools you'll use to troubleshoot your most difficult issues.

The Repro

In the Product Support Services world at Microsoft, we refer to a simplified, stripped-down series of steps that lead to a specific behavior as a Repro. This may simply be a step-by-step procedure to recreate an error, or a simple application that recreates an unusual behavior. When we are working on an issue that is buried in a hulking production application, we often like to tackle it by attempting to recreate the issue outside of the in vivo environment. There are a lot of benefits to investing some time in developing a Repro.

It's probably pretty obvious why you would not want to troubleshoot in a production environment. I personally like to avoid debugging anything that generates revenue -- it just seems to ease the tension. There's more to it, though: not only do you avoid tinkering with your company's livelihood, a Repro also gives you the advantage of portability. Whatever technology you're working with, it is much, much easier to get the assistance of Subject Matter Experts if you can actually send them something to play with.
Another big benefit of the Repro is that it helps you to focus on the issue. This is particularly helpful when you need to use other tools to analyze the behavior you’re dealing with. Whatever kind of tracing you’re doing, you are bound to pick up extraneous information if you’re running the trace on a production application. Unless you’re very fond of wading through gigantic logs looking for clues, a Repro really helps to avoid getting lost in superfluous data.

Finally, a Repro can save you a huge amount of time. It can get irritating having to click through the same series of screens and options, over and over again, as you troubleshoot an issue in a full-scale application. Being able to make a change to code or the environment and quickly test dramatically improves the speed of the troubleshooting process. It also has the added benefit of keeping debugging from getting boring.

That’s my case for investing the time in creating a Repro. The next question is: what is the best way to build a Repro? Well, that’s going to depend greatly on the issue and the environment.

One thing that I like to do is work from the inside out. In other words, if I know that a specific line of code is throwing an exception, I’ll create an application with the smallest amount of code I can use to execute that line. If that doesn’t reproduce the behavior, I’ll add more code from the application around it, building up the Repro until it gets the behavior I’m looking for.

Another thing that I’ve found is important is to make sure that I attempt my Repro in an environment that’s as similar to production as possible. I do not recommend attempting to reproduce an issue occurring in a data-access application running on Windows Server 2003 on your grandma’s Mac. It’ll be much easier to create a Repro on something that’s as close to the original scenario as possible and then move it to other environments to see if the behavior is any different.
One of the Technical Leads on our team has a good tip for reproducing issues that occur intermittently. For example, say you find that every so often your application gets an error when attempting to connect to a data-store. In this situation, it’s probably worth a try to take the connection you’re trying to make and wrap it in a tight loop. It’ll be much easier to launch this and catch the failure than it will be to keep running and re-running a small repro that makes the connection a single time and exits. If you are troubleshooting an issue occurring in a web-application, you may try a tool like Application Center Test to execute your repro many times simultaneously (see: this link for a discussion of using ACT).

Finally, I really recommend carefully documenting your repro. Again, it makes it much easier to get the help of others, and it will also prevent you from forgetting what you had done on Friday after a long weekend. Even if you really cannot find a way to reproduce the issue in a stripped-down environment, documenting the steps an end-user takes when the issue occurs can be valuable for helping to clarify the problem statement and focus your efforts.

Posted By: Justin Trobec
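The tight-loop tip above can be sketched in a few lines. This is a minimal, hypothetical Python example (the connect_to_datastore function and its failure pattern are made-up stand-ins for whatever connection your application really makes); the point is only the shape of the loop, which keeps retrying until the intermittent failure is caught:

```python
# Wrap an intermittently failing operation in a tight loop so the failure
# can be caught quickly under a debugger, instead of re-running a
# one-shot repro by hand until it happens to fail.

def connect_to_datastore(attempt):
    # Hypothetical stand-in for the real connection call; here it is
    # rigged to fail intermittently (once every 1000 attempts).
    if attempt % 1000 == 999:
        raise ConnectionError("intermittent failure on attempt %d" % attempt)

def run_repro(max_attempts=10000):
    for attempt in range(max_attempts):
        try:
            connect_to_datastore(attempt)
        except ConnectionError as exc:
            # In a real repro, break into the debugger or dump state here.
            return attempt, str(exc)
    return None

print(run_repro())
```

In a real repro you would replace the body of connect_to_datastore with your actual connection code and let the loop run until the error surfaces.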
Source: blogs.msdn.com

How to install/enable .Net 3.5 SP1 on Windows Server 2008 R2 for SQL Server 2008 and SQL Server 2008 R2
The .NET Framework 3.5 SP1 (also referred to as .NET Framework 3.5.1) is a prerequisite for SQL Server 2008. SQL Server 2008 Setup for a standalone instance will install the .NET Framework 3.5 SP1 if it is not already installed. In Windows Server 2008 R2, the .NET Framework is a feature, and installing it is different from older versions of the Windows operating system. In previous versions, one could either download the .NET Framework 3.5.1 install binaries from the Microsoft download site or use the install binaries from the redist folder of the SQL Server 2008 installation media. Starting with Windows Server 2008 R2, the method of installing .NET Framework 3.5.1 has changed. This document explains how to verify that .NET Framework 3.5.1 is installed and, if it is not installed, how you can add it.

How to verify if .NET Framework 3.5 SP1 is installed:

Here are the steps to verify that .NET Framework 3.5.1 is installed on Windows Server 2008 R2.

1. Click the Start button in the lower left hand corner of the display.
2. Highlight Administrative Tools and select Server Manager.
3. In the Server Manager interface, click Features to display all the installed Features in the right hand pane.
4. Verify that .NET Framework 3.5.1 is listed.

If the .NET Framework 3.5.1 feature is not listed, you can use either of the following methods to install it:

Method 1: Using the Server Manager Interface

1. In the Server Manager interface, select Add Features to display a list of possible features.
2. In the Select Features interface, expand .NET Framework 3.5.1 Features. You will see two check boxes: one for .NET Framework 3.5.1 and the other for WCF Activation.
3. Check the box next to .NET Framework 3.5.1 and click Next.

Note: If you check .NET Framework 3.5.1 Features itself without expanding it, you will get a pop-up titled Add Features Wizard. Click Cancel, expand .NET Framework 3.5.1 Features, and then check the .NET Framework 3.5.1 check box below it.
You cannot install .NET Framework 3.5.1 Features unless the required role services and features are also installed.

4. In the Confirm Installation Selections interface, review the selections and then click Install.
5. Allow the installation process to complete and then click Close.

Method 2: Using PowerShell

1. Click the Start button in the lower left hand corner of the display.
2. Highlight All Programs and select Accessories.
3. Expand Windows PowerShell, right click Windows PowerShell, and select Run as administrator.
4. Click Yes on the User Account Control box.
5. At the PowerShell command prompt, type the following commands, and then press ENTER after each command:

Import-Module ServerManager
Add-WindowsFeature as-net-framework
Source: blogs.msdn.com

Tools of the Trade: Part III – Kerberos (I)
In this series of posts, we’re going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve. In this article, Adam Saxton starts down the long road of double-hop troubleshooting by providing a gentle introduction to Kerberos, and how to recognize Kerberos+SQL related issues. Future articles will move beyond identification, and discuss the tools and processes we use to troubleshoot such issues.

Kerberos

We receive numerous calls where the problem falls under Kerberos. This will present itself when trying to establish a connection to SQL Server (or a Linked Server). I’m going to discuss what Kerberos is, how to recognize when there is a problem with it, and go over some of the items that need to be looked at in order to resolve a Kerberos issue with SQL Server. As there is a lot of information on this topic, I’ll be splitting it up over a few blog posts.

What is Kerberos?

Kerberos is a secure method for authenticating a request for a service in a computer network, and is used to verify a user or host identity. Kerberos lets a user request an encrypted "ticket" from an authentication process that can then be used to request a particular service from a server. The user's password does not have to pass through the network.

Put simply, it is a more secure form of Windows Authentication than NTLM. Also, understand that Kerberos is not a Windows-specific technology. Windows will always try to use Kerberos first over TCP. If that doesn’t work, it will drop down to NTLM. The exception to this is if you are trying to access a resource that is local: NTLM will always be used when accessing local resources. Also note that Kerberos cannot currently be used over Named Pipes or Shared Memory.

How does Kerberos work?

There are a lot of details in how Kerberos works. I’m not going to get that deep into it within this blog series.
I’m going to relate this to how it works with SQL. To do that, I’m going to use the classic example of a double-hop with a Web Server. We use the word double-hop to explain that the IIS Server is using a resource that is located on a different server. In this case the first “hop” is from the web browser client to the IIS Server (ASP or ASPX page); the second hop is to the SQL Server on a different machine. The server on the second hop is also referred to as the Backend Server.

In order to successfully establish a connection using Kerberos, a SQL ServicePrincipalName (SPN) needs to exist within Active Directory. I will talk more about SPNs and where they are located in a later post, as the topic is complex enough to deserve its own. I would say about 80-90% of the calls we receive relating to Kerberos involve either a missing SPN or duplicate SPNs.

When you log onto your machine, you will get what we call a Kerberos Ticket. To be more specific, you get a Ticket-Granting Ticket (TGT). You use the TGT as a master ticket to access all Kerberos services on a network. A Kerberos ticket includes all the user credentials or computer credentials in an encrypted format. These credentials are used to identify a specific user or a specific computer on a network for access to Kerberos services. When you access a Kerberos service, that service uses the ticket that you received to authenticate you. After the service verifies your identity and authenticates you, the service issues a service ticket. This is where the SPNs come into play. Think of the SPN as a pointer for Kerberos so it knows where to go. That’s why, if it’s missing or there are duplicates, it doesn’t know what to do.

How to recognize a Kerberos error with SQL Server:

There are a few errors that we see where we can make an educated guess that a connectivity issue is Kerberos related.
Cannot Generate SSPI Context

Login failed for user ‘(null)’

Login failed for user ‘NT AUTHORITY\ANONYMOUS’

Those are the three biggest errors we see in relation to Kerberos with SQL Server. These errors occur when the credentials from the first hop cannot be passed to the second hop. Usually, this is because we are failing over to NTLM or our SPN configuration is not correct. There are also other settings within Active Directory that come into play, as well as certain machine configuration based on how services are set up. For now, let’s focus on the basics; I’ll get more into those settings and machine configurations in a later post.

One of the last things I’d like to mention is that I’ve seen customers get the following error confused with a Kerberos issue:

Login failed for user ‘SomeUser’

‘SomeUser’ could be whatever user (SQL or Windows) you are trying to connect with. If you see the actual username in the error, that means the credentials actually got to the SQL Server. The issue at that point lies in the SQL security configuration itself (either the server as a whole, or the database you are trying to connect to). This is not a Kerberos related issue, but instead is a much simpler login failure.

Resources:

Kerberos in Windows 2000
Kerberos in Windows 2003
How to use Kerberos authentication in SQL Server

Posted By: Adam Saxton
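Since missing or duplicate SPNs account for most of these calls, it helps to know what a well-formed SQL Server SPN looks like. As a rough illustration (the host name below is made up), this sketch builds the MSSQLSvc SPN strings in the documented MSSQLSvc/<FQDN>:<port> shape that would be registered in Active Directory for an instance:

```python
# Build the SQL Server SPN strings for a host, in the shape they would
# appear in Active Directory (e.g. registered against the service account).

def sql_server_spns(host_fqdn, port=1433):
    # SQL Server uses an SPN that includes the TCP port; a default
    # instance is also commonly addressed by a port-less SPN.
    return [
        "MSSQLSvc/%s:%d" % (host_fqdn, port),
        "MSSQLSvc/%s" % host_fqdn,
    ]

for spn in sql_server_spns("sqlprod01.contoso.com"):
    print(spn)
```

When checking for the duplicate-SPN problem described above, you are looking for the same MSSQLSvc/host:port string registered under more than one account.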
Source: blogs.msdn.com

SQL Null: Grouping and sorting
Because SQL Server defines all Null markers as being unequal to one another, a special definition was required in order to group Nulls together when performing certain operations. SQL defines “any two values that are equal to one another, or any two Nulls” as “not distinct”. This definition of not distinct allows SQL to group and sort Nulls when the GROUP BY clause is used. Let’s look at an example using GROUP BY and nulls. You will see that SQL Server does group them together by using the not distinct definition:

DECLARE @table1 TABLE(ID INT, myStr VARCHAR(100))

INSERT INTO @table1(ID, myStr)
VALUES (1, 'value a'), (2, 'value b'), (3, 'value a'),
       (4, 'value a'), (5, 'value b'), (6, NULL),
       (7, 'value b'), (8, NULL), (9, NULL),
       (10, 'value a'), (11, NULL), (12, NULL)

SELECT * FROM @table1

SELECT myStr, COUNT(1) AS Count
FROM   @table1
GROUP  BY myStr

The 1st select returns the raw values:

ID  myStr
1   value a
2   value b
3   value a
4   value a
5   value b
6   NULL
7   value b
8   NULL
9   NULL
10  value a
11  NULL
12  NULL

The 2nd select shows the GROUP BY results:

myStr    Count
NULL     5
value a  4
value b  3

As you can see, when SQL Server processes the GROUP BY clause it combines the rows with NULL values. Given that NULL is never equal to NULL in ordinary comparisons, this is a little twist that can catch you off guard or make you think. DISTINCT works the same way:

SELECT DISTINCT myStr FROM @table1

The result is 3 rows: NULL, value a, value b.

Hope this helps you when thinking about null values and grouping. Try the example and look at how SQL Server handles the default sort order for Nulls.
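The same “not distinct” grouping can be mimicked outside SQL. Here is a small Python sketch of the table above (using None in place of NULL) that groups the way GROUP BY does, folding all the NULLs into a single group and, like SQL Server’s default ascending sort, putting the NULL group first:

```python
from collections import Counter

# myStr values from the example table; None plays the role of NULL.
rows = ['value a', 'value b', 'value a', 'value a', 'value b', None,
        'value b', None, None, 'value a', None, None]

# GROUP BY semantics: values that are "not distinct" (equal, or both NULL)
# fall into the same group, so every None is counted together.
groups = Counter(rows)

# Sort with the NULL group first, matching SQL Server's default ascending order.
for value, count in sorted(groups.items(),
                           key=lambda kv: (kv[0] is not None, kv[0] or '')):
    print(value, count)
```

Running this prints the None group (count 5) first, then 'value a' (4) and 'value b' (3), mirroring the GROUP BY result set in the post.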
Source: brian.chipsofttech.com

How to connect to file-based data sources (Microsoft Access , Microsoft Excel and Text files ) from a 64 bit application
The Issue:

A 64-bit process can load only 64-bit components in its process boundary. The same is true for a 32-bit process. So, if your application is 64-bit, you will need a 64-bit provider or driver to connect to Microsoft Access (mdb, accdb), Microsoft Excel 2010 (xls, xlsx, and xlsb), or text files. The bad news is that there is no 64-bit provider or driver available "yet" to connect to these file-based data sources. The good news is that a 64-bit provider is heading your way, currently in beta.

The Kludge:

The common workaround is to connect to a 32-bit SQL Server instance that has a Linked Server to the Access/Excel/Text file. This is a hack, can be difficult to set up, can have stability and performance issues, and realistically, we at Microsoft would rather not support this setup or issues arising from it.

The Good news:

A 64-bit driver is headed your way. This is great news for users in a 64-bit world. Soon you'll be able to connect to these file-based data sources from your 64-bit application, rather than wrestle with obscure settings to force them to connect via a Linked Server.

The next version of Microsoft Office, Office 2010, will be available in a 64-bit version. This version will include a 64-bit version of the "2010 Office System Driver Connectivity Components", which will include all the needed 64-bit ODBC drivers and OLEDB providers to connect to these file-based data sources.

You will not have to buy or install Office 2010 to obtain and use the new 64-bit components. Like the current version of the provider, it will be available as a free download.
You can download the beta version from here: http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

Connection string for the 64-bit OLEDB Provider:

For Microsoft Office Access: set the Provider string to "Microsoft.ACE.OLEDB.12.0".
For Microsoft Office Excel: add "Excel 12.0" to the Extended Properties of the OLEDB connection string.

Connection string for the 64-bit ODBC Driver:

For Microsoft Office Access: set the connection string to "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=path to mdb/accdb file".
For Microsoft Office Excel: set the connection string to "Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=path to xls/xlsx/xlsm/xlsb file".

The gotchas:

You cannot install the 32-bit version and the 64-bit version of the "2010 Office System Driver Connectivity Components" on the same computer. You cannot install the 64-bit version of the "2010 Office System Driver Connectivity Components" on a computer that already has the 32-bit Office 2007 ACE Provider. However, the 32-bit Office 2007 provider can coexist side-by-side with the 32-bit version of the "2010 Office System Driver Connectivity Components".

Authors: Enamul Khaleque & Srini Gajjela [DSD-SQLDeveloper group at Microsoft]
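To make the two connection-string shapes above concrete, here is a small Python sketch that only assembles the strings (the file paths are made up; actually opening a connection requires the ACE provider/driver installed on a 64-bit Windows stack, which is not shown here):

```python
# Assemble connection strings for the file-based data sources described
# above. String construction only -- no driver is loaded or opened.

def access_oledb(path):
    # OLEDB connection string for Access using the ACE provider.
    return "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=%s;" % path

def excel_odbc(path):
    # ODBC connection string for Excel workbooks.
    driver = "{Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)}"
    return "Driver=%s;DBQ=%s" % (driver, path)

print(access_oledb(r"C:\data\sales.accdb"))
print(excel_odbc(r"C:\data\report.xlsx"))
```

From a 64-bit application, these strings would be passed to your OLEDB or ODBC layer once the 64-bit connectivity components are installed.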
Source: blogs.msdn.com

Wisconsin SQL Server User Group
A new blog post has been posted by Ted Krueger (onpnt) on Oct 11, 2010 in category Database Administration
Source: blogs.lessthandot.com

List all SQL Agent jobs and the owner of those jobs
A new article has been posted in the wiki.
Source: wiki.lessthandot.com

Class not registered error after restoring an Analysis Services database that uses the SQL Server 2005 SQL Native Client.
After restoring an Analysis Services database that uses the SQLNCLI.1 SQL Native Client (SNAC) on a server which has never had SQL Server 2005 installed, an error similar to the following occurs during processing operations:

Internal error: The operation terminated unsuccessfully.
Internal error: The operation terminated unsuccessfully.
OLE DB error: OLE DB or ODBC error: Class not registered.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'.
Errors in the OLAP storage engine: An error occurred while processing the 'Internet_Sales_2001' partition of the 'Internet Sales' measure group for the 'Adventure Works' cube from the Adventure Works DW database.
Server: The operation has been cancelled.

During execution of queries that access data contained in ROLAP storage, an error message similar to the following is returned:

Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Adventure Works DW', Name of 'Internet Sales Order Details' was being processed.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'.
OLE DB error: OLE DB or ODBC error: Class not registered.
Server: The operation has been cancelled.

Opening the Property Sheet for the Data Source from SQL Server Management Studio (SSMS) or the Data Source Designer in Business Intelligence Development Studio (BIDS) and attempting to edit the connection string returns the following error from Connection Manager:

The specified provider is not supported.
Please choose different provider in connection manager

Checking the connection string from either SSMS or BIDS, it is similar to the following:

Provider=SQLNCLI.1;Data Source=MyServer;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW

Changing the connection string to use the native OLE DB provider resolves the issue; the result is similar to the following:

Provider=SQLOLEDB.1;Password=Olapolap1;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer

Likewise, modifying the connection string to use the SQL Server 2008 SNAC also resolves the issue; the result is similar to the following:

Provider=SQLNCLI10.1;Integrated Security="";Persist Security Info=False;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer;Initial File Name="";Server SPN=""

The issue occurs because the SQLNCLI.1 provider does not ship with SQL Server 2008, and the provider name for SNAC connections changed in SQL Server 2008. The SNAC provider names for SQL Server 2005 and SQL Server 2008 are:

SQL Server 2005: SQLNCLI.1
SQL Server 2008: SQLNCLI10.1

To work around this issue, any of the following options can be used:

1. Edit the Data Source(s) for the database and switch from the SQLNCLI.1 provider to the SQLOLEDB.1 provider.
2. Edit the Data Source(s) for the database and switch from the SQLNCLI.1 provider to the SQLNCLI10.1 Native Client provider.
3. Download and install the SQL Server 2005 SQL Native Client from the following link: http://www.microsoft.com/downloads/details.aspx?FamilyID=d09c1d60-a13c-4479-9b91-9e8b9d835cdc&DisplayLang=en

John Desch
Microsoft SQL Server Escalation Engineer
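The second workaround described in this post boils down to a one-token substitution in the connection string: swap the SQL Server 2005 provider name (SQLNCLI.1) for the SQL Server 2008 one (SQLNCLI10.1). A tiny Python sketch of that substitution (the helper name is ours, not part of any product):

```python
# Swap the SQL Server 2005 SNAC provider name for the SQL Server 2008
# provider name in an OLE DB connection string.

def upgrade_snac_provider(conn_str):
    # Only the Provider token changes; all other settings are kept as-is.
    return conn_str.replace("Provider=SQLNCLI.1;", "Provider=SQLNCLI10.1;")

old = ("Provider=SQLNCLI.1;Data Source=MyServer;Persist Security Info=True;"
       "User ID=sa;Initial Catalog=AdventureWorksDW")
print(upgrade_snac_provider(old))
```

In practice you would make this edit through the Data Source designer in SSMS or BIDS rather than in code, but the transformation is the same.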
Source: blogs.msdn.com

SQL 2005 SP2 is available now!
Finally, after rounds of CTPs (Community Technology Previews) and great feedback from customers, we have officially released Service Pack 2 (SP2) for SQL 2005.

Download SP2
You can download it by visiting the Microsoft website here.

Enhancements in SQL 2005 SP2
SP2 provides several new features and improvements. You can learn about them here.

Download SQL 2005 SP2 sample databases
You can download the sample databases and learn about their enhancements by visiting here.

Download Books Online refresh for SP2
Books Online gets updated with every release of a service pack. Please download the latest version by visiting here.

As with every service pack, please make sure that you first test it out in your test and QA environment before putting it on the production server.

Posted By: Sanchan Saxena
Source: blogs.msdn.com

Tools of the Trade: Part IV - Developing WinDbg Extension DLLs
A WinDbg extension DLL is a set of exported callback functions that implement user-defined commands to extract specific, customized information from memory dump(s). Extension DLLs are loaded by the debugger engine and can automate tasks while performing user-mode or kernel-mode debugging. An extension DLL may export any number of functions that are used to execute extension commands. Each function is explicitly declared as an export in the DLL's definition (.def) file, and function names must be in lowercase letters.

A WinDbg (DbgEng) extension DLL must export DebugExtensionInitialize. This will be called when the DLL is loaded, to initialize the DLL. It may be used by the DLL to initialize global variables.

An extension DLL may export an optional function DebugExtensionUninitialize. If this is exported, it will be called just before the extension DLL is unloaded.

An extension DLL may export DebugExtensionNotify. If this is exported, it will be called when a session begins or ends, and when a target starts or stops executing. These notifications are also provided to IDebugEventCallbacks objects registered with a client.

An extension DLL may export KnownStructOutput. If this is exported, it will be called when the DLL is loaded. This function returns a list of structures that the DLL knows how to print on a single line. It may be called later to format instances of these structures for printing.

So, how do you develop your own WinDbg extension DLL? Let's follow these steps:

1. Download and install Debugging Tools for Windows from http://www.microsoft.com/whdc/devtools/debugging/installx86.Mspx
2. Create a "Win32 Console Application" using VS 2008.
3. Select Application type "DLL" and click "Finish".
4. Add a "Module-Definition File (.def)" called "wdbrowser" to the project. One way to export your extension function is by specifying the function names in the EXPORTS section of the .def file.
You may use other ways of exporting functions, such as __declspec(dllexport).

5. Configure the project's "Additional Include Directories" to point to the header files that come with WinDbg. The default folder for x86 is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\inc".
6. Configure the project's "Additional Library Directories" to point to the library files that come with WinDbg. The default folder for x86 libraries is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\lib\i386".
7. The library "dbgeng.lib" (with "dbgeng.dll") has the implementation of the debug engine's exported functions, so add "dbgeng.lib" to "Additional Dependencies".
8. Add the name of the module definition file created in step #4.
9. Now include the following required header files in "stdafx.h":

#include <windows.h>
#include <imagehlp.h>
#include <wdbgexts.h>
#include <dbgeng.h>
#include <extsfns.h>

10. Declare the following two global variables in your extension project's main implementation file:

// Version.
EXT_API_VERSION g_ExtApiVersion = {1, 1, EXT_API_VERSION_NUMBER, 0};
WINDBG_EXTENSION_APIS ExtensionApis = {0};

11. Declare the following debug engine COM interface pointers:

IDebugAdvanced2*  gAdvancedDebug2 = NULL;
IDebugControl4*   gDebugControl4 = NULL;
IDebugControl*    gExecuteCmd = NULL;
IDebugClient*     gDebugClient = NULL;

12. The next step is to declare and implement the WinDbgExtensionDllInit function in your DLL's main implementation source file, in this example "wdbrowser.cpp". WinDbgExtensionDllInit is the first function that WinDbg will call, so it is the ideal place for any extension-specific initialization. Please refer to http://msdn.microsoft.com/en-us/library/cc267872.aspx for more details about this function.
VOID WDBGAPI WinDbgExtensionDllInit(PWINDBG_EXTENSION_APIS lpExtensionApis, USHORT usMajorVersion, USHORT usMinorVersion)
{
    ExtensionApis = *lpExtensionApis;
    HRESULT hResult = S_FALSE;

    if ((hResult = DebugCreate(__uuidof(IDebugClient), (void**) &gDebugClient)) != S_OK)
    {
        dprintf("Acquiring IDebugClient* failed\n\n");
        return;
    }
    if ((hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl), (void**) &gExecuteCmd)) != S_OK)
    {
        dprintf("Acquiring IDebugControl* failed\n\n");
        return;
    }
    if ((hResult = gDebugClient->QueryInterface(__uuidof(IDebugAdvanced2), (void**) &gAdvancedDebug2)) != S_OK)
    {
        dprintf("Acquiring IDebugAdvanced2* failed\n\n");
        return;
    }
    if ((hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl4), (void**) &gDebugControl4)) != S_OK)
    {
        dprintf("Acquiring IDebugControl4* failed\n\n");
        return;
    }
}

13. Declare another exported function, ExtensionApiVersion, to report the version of your extension to WinDbg. Please refer to http://msdn.microsoft.com/en-us/library/cc267873.aspx for detailed information about this function.

LPEXT_API_VERSION WDBGAPI ExtensionApiVersion(void)
{
    return &g_ExtApiVersion;
}

14. Define the debug engine's interface pointers, so that your extension module can interact with the debug engine.
For more information please refer to:

http://msdn.microsoft.com/en-us/library/cc265976.aspx - IDebugClient
http://msdn.microsoft.com/en-us/library/cc266102.aspx - IDebugControl
http://msdn.microsoft.com/en-us/library/cc265957.aspx - IDebugAdvanced

IDebugAdvanced2* gAdvancedDebug2 = NULL;
IDebugControl4* gDebugControl4 = NULL;
IDebugControl* gExecuteCmd = NULL;
IDebugClient* gDebugClient = NULL;

15. The next step is to implement the debug engine's callback interface IDebugOutputCallbacks. The debug engine calls your implementation of IDebugOutputCallbacks::Output() with the output produced by the commands that your extension function executes. Refer to http://msdn.microsoft.com/en-us/library/cc265716.aspx for detailed information about IDebugOutputCallbacks::Output().

16. Add the following new class, which inherits from the IDebugOutputCallbacks interface, in a header file:

#ifndef __OUT_HPP__
#define __OUT_HPP__

#include <string>
#include <sstream>

class StdioOutputCallbacks : public IDebugOutputCallbacks
{
private:
    //
    // This buffer holds the output from the command execution.
    //
    std::string m_OutputBuffer;

public:
    void InitOutPutBuffer();
    std::string GetOutputBuffer()
    {
        return m_OutputBuffer;
    };
    void ClearOutPutBuffer()
    {
        m_OutputBuffer = "";
    };
    STDMETHOD(QueryInterface)(
        THIS_
        IN REFIID InterfaceId,
        OUT PVOID* Interface
        );
    STDMETHOD_(ULONG, AddRef)(
        THIS
        );
    STDMETHOD_(ULONG, Release)(
        THIS
        );
    // IDebugOutputCallbacks.
    STDMETHOD(Output)(
        THIS_
        IN ULONG Mask,
        IN PCSTR Text
        );
};

extern StdioOutputCallbacks g_OutputCb;

#endif // #ifndef __OUT_HPP__

17. Add the following code that implements the IDebugOutputCallbacks interface methods, especially Output():

#include "stdafx.h"
#include <stdio.h>
#include <windows.h>
#include <dbgeng.h>
#include "OutputCallBack.h"

StdioOutputCallbacks g_OutputCb;

STDMETHODIMP StdioOutputCallbacks::QueryInterface(
    THIS_
    IN REFIID InterfaceId,
    OUT PVOID* Interface
    )
{
    *Interface = NULL;
    if (IsEqualIID(InterfaceId, __uuidof(IUnknown)) ||
        IsEqualIID(InterfaceId, __uuidof(IDebugOutputCallbacks)))
    {
        *Interface = (IDebugOutputCallbacks *)this;
        AddRef();
        return S_OK;
    }
    else
    {
        return E_NOINTERFACE;
    }
}

STDMETHODIMP_(ULONG) StdioOutputCallbacks::AddRef(
    THIS
    )
{
    // This class is designed to be static, so there's no true refcount.
    return 1;
}

STDMETHODIMP_(ULONG) StdioOutputCallbacks::Release(
    THIS
    )
{
    // This class is designed to be static, so there's no true refcount.
    return 0;
}

STDMETHODIMP StdioOutputCallbacks::Output(
    THIS_
    IN ULONG Mask,
    IN PCSTR Text
    )
{
    UNREFERENCED_PARAMETER(Mask);
    m_OutputBuffer += Text;
    return S_OK;
}

void StdioOutputCallbacks::InitOutPutBuffer()
{
    m_OutputBuffer.erase();
}

18. Add the implementation of your extension function. In this example, we choose to implement an extension that displays the variable names and types in frame 2 of the current thread. The implementation is:

DECLARE_API (dvf3)
{
    //
    // Install output callbacks.
    //
    if ((gDebugClient->SetOutputCallbacks((PDEBUG_OUTPUT_CALLBACKS) &g_OutputCb)) != S_OK)
    {
        dprintf("*****Error while installing Outputcallback.*****\n\n");
        return;
    }

    //
    // Since the frame count starts from index 0, we pass 2 as the
    // parameter to the .frame command to select frame #2.
    //
    // Execute the command to select the 2nd frame.
    if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT |   // Send output only to output callbacks
                             DEBUG_OUTCTL_OVERRIDE_MASK |
                             DEBUG_OUTCTL_NOT_LOGGED,
                             ".frame 2",
                             DEBUG_EXECUTE_DEFAULT) != S_OK)
    {
        dprintf("Executing .frame 2 failed\n");
        return;
    }

    // Execute the command to dump the locals and formal parameters.
    if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT |   // Send output only to output callbacks
                             DEBUG_OUTCTL_OVERRIDE_MASK |
                             DEBUG_OUTCTL_NOT_LOGGED,
                             "dv /i /t /v",
                             DEBUG_EXECUTE_DEFAULT) != S_OK)
    {
        dprintf("Executing dv /i /t /v failed\n");
        return;
    }

    dprintf("***** Extracting locals & formal params from frame 2 *****");
    dprintf("\n%s\n", g_OutputCb.GetOutputBuffer().c_str());
}

19. Re-build the project. Copy the DLL from the release folder to a folder where WinDbg looks for extension DLLs. On an x86 machine the default location is "<Drive letter>\Program Files\Debugging Tools for Windows (x86)\winext".
20. The extension is ready for use or test.
21. Start WinDbg and open a full user-mode dump. Type .load myextension and hit Enter to load the extension DLL into the WinDbg process space.
22. Run the .chain command to verify that your extension is loaded; if it is, your extension will appear in the list of loaded extension DLLs.
23. Type !dvf3 to run the extension function for extracting and displaying the variable names and types from frame 2.

Additional references:

http://msdn.microsoft.com/en-us/library/cc265826.aspx - describes how to interact with the debug engine, I/O operations with the debug engine, memory access, using symbols, and source files.
http://www.codeplex.com/ODbgExt - Microsoft Open Debugger Extension for WinDbg

Happy developing debug engine extensions!
Posted By: Srini Gajjela & Enamul Khaleque (DSD-SQLDeveloper group @ Microsoft)
Source: blogs.msdn.com

SQL Compact Performance: PostSyncCleanup to turn off UpdateStatistics during initial download of SQL Compact Replication
The initial subscription for SQL Compact Replication may take significant time on devices, and the UpdateStatistics operation adds additional overhead to the initial download.

Indicators: The Client Agent log shows the start and end times of the UpdateStatistics operation. For example:

SQLCECA: 04/09/2009-12:21:26 UpdateStatistics Start app=<appName>.exe
SQLCECA: 04/09/2009-12:27:59 UpdateStatistics End hr=0 app=<appName>.exe

Workaround: Recent hotfixes for SQL Compact 3.5 and 3.1 (KB article URLs below) expose a new API that lets SQL Compact applications turn off UpdateStatistics during the initial download of Replication. Disabling UpdateStatistics is possible only for merge replication, not for RDA operations. Use:

SqlCeReplication repl = new SqlCeReplication();
…
repl.PostSyncCleanup = 2;

to make sure that update statistics is not run after the merge replication sync.

To use the new property in your SQL Compact app, in addition to applying the hotfix to the device, you also need to deploy the updated System.Data.SqlServerCe.dll to the desktop folder where Visual Studio can find it; otherwise you will get the error:

'PostSyncCleanup' is not a member of 'System.Data.SqlServerCe.SqlCeReplication'.

To resolve this error, extract the SYSTEM~1.004 file from the device .cab, rename it to "System.Data.SqlServerCe.dll", and replace the DLL at the appropriate path. When working with SQL Server Compact 3.5 (Visual Studio 2008), that path is:

"C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Devices\System.Data.SqlServerCe.dll"

If your project references "System.Data.SqlServerCe.dll" in another location, replace the DLL at that location as well.
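Putting the pieces together, a minimal sketch of a merge-replication sync with the post-sync UpdateStatistics pass disabled (the server, publication, and subscriber values are placeholders, not from the original post; only PostSyncCleanup = 2 comes from the hotfix):

```
// Hypothetical connection values -- replace with your own.
SqlCeReplication repl = new SqlCeReplication();
repl.InternetUrl              = "http://myserver/sqlce/sqlcesa35.dll";
repl.Publisher                = "MYPUBLISHER";
repl.PublisherDatabase        = "MyPublisherDb";
repl.Publication              = "MyPublication";
repl.Subscriber               = "MyDevice";
repl.SubscriberConnectionString = "Data Source=\\Mobile\\mydb.sdf";

// Skip the UpdateStatistics pass that normally runs after the sync.
// Merge replication only; this has no effect on RDA operations.
repl.PostSyncCleanup = 2;

try
{
    // For the very first download you would also call
    // repl.AddSubscription(AddOption.CreateDatabase) before synchronizing.
    repl.Synchronize();
}
finally
{
    repl.Dispose();
}
```

This requires the hotfix build of System.Data.SqlServerCe.dll both on the device and in the desktop folder described above, or the property will not compile or resolve at run time.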
Once the modified app runs on the device, if the hotfix bits were not deployed you may see the following error (as expected):

"Method not found: set_PostSyncCleanup System.Data.SqlServerCe.SqlCeReplication"

Uninstalling SQL Compact 3.5 SP1 RTM and installing the hotfix .cab files should resolve this error.

References:

http://support.microsoft.com/kb/963060
http://support.microsoft.com/kb/960142

Posted by: Syed
Source: blogs.msdn.com

How to flip a bit in SQL Server by using the Bitwise NOT operator
A new blog post was published by SQLDenis on Sep 29, 2010 in the category Microsoft SQL Server.
Source: blogs.lessthandot.com

General network errors (GNEs), Timeout expired, SQL Server not found or Access Denied…
Many times, the key to solving these errors is a network packet trace. The problem is that these errors are often thrown intermittently, which can obviously make it problematic to capture a network trace.

Network Monitor 3 (AKA Netmon3) solves this problem quite nicely. Netmon3 has the built-in ability to capture rolling traces. This allows you to start the trace, walk away, and still be sure that you will have captured the error.

Here are my steps for configuring Netmon3 for a rolling trace:

1. Download Network Monitor 3 (http://www.microsoft.com/downloads/details.aspx?FamilyID=18b1d59d-f4d8-4213-8d17-2f6dde7d7aac&DisplayLang=en).

2. Install Network Monitor on both the client and the server experiencing the problem. If you are not able to install it on the server, you can install it on another machine that is connected to the same hub as the server, or to the admin port of the switch that the server is on. If you go this route, you must synchronize the time between the client, the server, and this third machine.

3. Synchronize the clocks on the client and SQL machines. At one of the machines, bring up a command prompt and execute:

   net time <\\machinenamewewanttosynchwith> /set /yes

4. Turn off TCP Chimney if any of the machines are running Windows 2003. Bring up a command prompt and execute:

   netsh int ip set chimney DISABLED

5. Using the NMCap command-line utility, start the chained trace (chained = create a new capture file after reaching the "Temporary capture file size" rather than overwriting the current capture):

   [Netmon 3 installation folder]\NMCap /network * /capture /file test.chn:100M   (creates 100 MB chained files)

   NOTE: You must use the .chn extension to capture chained traces.
   NOTE: The target folder must exist for NMCap to create the trace file.

   More information can be found at http://blogs.technet.com/netmon/archive/2006/10/24/nmcap-the-easy-way-to-automate-capturing.aspx, or by running NMCap with the /examples or /? command-line switch.

6. Get the IP addresses of the client and the server:

   ipconfig /all on the client > client.txt
   ipconfig /all on the server > server.txt

7. Reproduce the problem. Note the exact time the problem reproduces and the exact error message.

8. Stop the trace by hitting Ctrl-C in the command-line window.

Please note that Netmon3 does not run on Windows 2000. Also, step 6 is probably the single most important step (besides actually starting the trace): if you don't know the timestamp of the error, it is extremely difficult to find the offending packets in the trace.

Happy captures!

Posted By: Evan Basalik
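The NMCap invocation in step 5 above is easy to get wrong (the literal .chn extension and the size suffix both matter), so it can help to assemble the command line in a small script and print it before running it. A minimal POSIX-shell sketch; the install path, output folder, and 100 MB chunk size are assumptions to adjust for your machine:

```shell
# Hypothetical paths -- point NMCAP at your Netmon 3 install folder.
NMCAP="C:/Program Files/Microsoft Network Monitor 3/NMCap"
OUTDIR="C:/traces"   # must already exist; NMCap will not create the folder
CHUNK="100M"         # size at which each chained capture file rolls over

# The .chn extension is what tells NMCap to chain (roll) capture files
# instead of overwriting a single capture.
CMD="\"$NMCAP\" /network * /capture /file $OUTDIR/test.chn:$CHUNK"

# Print the assembled command so it can be reviewed, then pasted or executed.
echo "$CMD"
```

On the capture machine you would run the printed command from a command prompt and leave it running until the error reproduces (step 7), then stop it with Ctrl-C (step 8).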
Source: blogs.msdn.com

SSIS runs in BIDS but not with SQL Agent
A new blog post was published by Ted Krueger (onpnt) on Oct 6, 2010 in the category Database Administration.
Source: blogs.lessthandot.com
