Monika's SQL Corner
Sunday, October 17, 2010
SQL Server 2005 setup fails when MSXML Core Services 6.0 Service Pack 2 has already been installed
There is a known issue with SQL Server setup when the MSXML6 component update has already been installed on the system. The problem described in KB 968749 (http://support.microsoft.com/kb/968749) has raised a number of concerns among customers, because the documented solution is manual and is not practical in large enterprise environments. To automate the MSXML6 component uninstall, we have created an automated solution for this issue; however, it needs to be implemented on a case-by-case basis. If you are experiencing the issue described in KB 968749 and need an automated solution, please contact SQL CSS. Details on how to open an incident are available at http://support.microsoft.com/. The incident for this specific issue will be free of charge. We apologize for any inconvenience. SQL CSS. Follow us on Twitter (http://twitter.com/MicrosoftSQLCSS)
Source: blogs.msdn.com
How to connect to file-based data sources (Microsoft Access , Microsoft Excel and Text files ) from a 64 bit application
The Issue: A 64-bit process can load only 64-bit components within its process boundary, and the same is true for a 32-bit process. So, if your application is 64-bit, you need a 64-bit provider or driver to connect to Microsoft Access (mdb, accdb), Microsoft Excel 2010 (xls, xlsx, and xlsb), or text files. The bad news is that there is no 64-bit provider or driver available yet to connect to these file-based data sources. The good news is that a 64-bit provider is on its way and is currently in beta.

The Kludge: The common workaround is to connect to a 32-bit SQL Server instance that has a Linked Server to the Access/Excel/Text file. This is a hack, can be difficult to set up, can have stability and performance issues, and realistically, we at Microsoft would rather not support this setup or the issues arising from it.

The Good news: A 64-bit driver is headed your way. This is great news for users in a 64-bit world. Soon you'll be able to connect to these file-based data sources from your 64-bit application, rather than wrestle with obscure settings to force them to connect via a Linked Server. The next version of Microsoft Office, Office 2010, will be available in a 64-bit version. It will include a 64-bit version of the "2010 Office System Driver Connectivity Components", which contains the 64-bit ODBC drivers and OLE DB providers needed to connect to these file-based data sources. You will not have to buy or install Office 2010 to obtain and use the new 64-bit components; like the current version of the provider, it will be available as a free download. You can download the beta version from here: http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

Connection string for the 64-bit OLE DB provider:
For Microsoft Office Access: set the Provider string to "Microsoft.ACE.OLEDB.12.0".
For Microsoft Office Excel: add "Excel 12.0" to the Extended Properties of the OLE DB connection string.

Connection string for the 64-bit ODBC driver:
For Microsoft Office Access: set the connection string to "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=path to mdb/accdb file".
For Microsoft Office Excel: set the connection string to "Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=path to xls/xlsx/xlsm/xlsb file".

The gotchas: You cannot install the 32-bit version and the 64-bit version of the "2010 Office System Driver Connectivity Components" on the same computer. You also cannot install the 64-bit version on a computer that already has the 32-bit Office 2007 ACE Provider. However, the 32-bit Office 2007 provider can coexist side-by-side with the 32-bit version of the "2010 Office System Driver Connectivity Components".

Authors: Enamul Khaleque & Srini Gajjela [DSD-SQLDeveloper group at Microsoft]
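For illustration, here is a minimal C# sketch of what connecting to an Access database from a 64-bit application looks like once the 64-bit ACE OLE DB provider described above is installed. The file path and table name are assumptions made for this example, not part of the original post.

// Minimal sketch: querying an .accdb file from a 64-bit process through the
// 64-bit ACE OLE DB provider. Assumes the 64-bit "2010 Office System Driver
// Connectivity Components" are installed; the path and table are hypothetical.
using System;
using System.Data.OleDb;

class AceConnectionSample
{
    static void Main()
    {
        string connectionString =
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Sample.accdb;";

        using (var connection = new OleDbConnection(connectionString))
        using (var command = new OleDbCommand("SELECT * FROM Customers", connection))
        {
            connection.Open();
            using (OleDbDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Print the first column of each row.
                    Console.WriteLine(reader[0]);
                }
            }
        }
    }
}

The same pattern works for Excel or text files by switching to the corresponding form of the connection string shown above.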
Source: blogs.msdn.com
WHITEPAPER - Online Indexing Operations in SQL Server 2005
I wanted to post a quick blog about a new whitepaper from Microsoft. Introduced in Microsoft SQL Server 2005 Enterprise Edition, the online index feature provides a powerful way to perform maintenance operations such as rebuilding or creating indexes in a production system without sacrificing DML concurrency. Users are not blocked from querying and updating the underlying table during the index operation. This white paper is targeted to database administrators who want to use the online index feature of SQL Server 2005 Enterprise Edition to increase server availability in their production environment and need to understand the advantages and disadvantages of using this feature. You can read about it here. Posted By: Sanchan Saxena
Source: blogs.msdn.com
How to install/enable .Net 3.5 SP1 on Windows Server 2008 R2 for SQL Server 2008 and SQL Server 2008 R2
The .NET Framework 3.5 SP1 (also referred to as .NET Framework 3.5.1) is a prerequisite for SQL Server 2008. SQL Server 2008 Setup for a standalone instance will install the .NET Framework 3.5 SP1 if it is not already installed. In Windows Server 2008 R2, the .NET Framework is a feature, and installing it differs from older versions of the Windows operating system. In previous versions, you could either download the .NET Framework 3.5.1 install binaries from the Microsoft download site or use the install binaries from the redist folder of the SQL Server 2008 installation media. Starting with Windows Server 2008 R2, the method of installing .NET Framework 3.5.1 has changed. This document explains how to verify that .NET Framework 3.5.1 is installed and, if it is not, how you can add it.

How to verify that .NET Framework 3.5 SP1 is installed on Windows Server 2008 R2:
1. Click the Start button in the lower left hand corner of the display.
2. Highlight Administrative Tools and select Server Manager.
3. In the Server Manager interface, click Features to display all the installed features in the right hand pane.
4. Verify that .NET Framework 3.5.1 is listed.

If the .NET Framework 3.5.1 feature is not listed, you can use either of the following methods to install it:

Method 1: Using the Server Manager interface
1. In the Server Manager interface, select Add Features to display a list of possible features.
2. In the Select Features interface, expand .NET Framework 3.5.1 Features.
3. Once you expand .NET Framework 3.5.1 Features, you will see two check boxes: one for .NET Framework 3.5.1 and the other for WCF Activation. Check the box next to .NET Framework 3.5.1 and click Next. Note: If you check .NET Framework 3.5.1 Features without expanding it, you will get a pop-up titled Add Features Wizard; click Cancel, expand .NET Framework 3.5.1 Features, and then check the .NET Framework 3.5.1 check box below it. You cannot install .NET Framework 3.5.1 Features unless the required role services and features are also installed.
4. In the Confirm Installation Selections interface, review the selections and then click Install.
5. Allow the installation process to complete and then click Close.

Method 2: Using PowerShell
1. Click the Start button in the lower left hand corner of the display.
2. Highlight All Programs and select Accessories.
3. Expand Windows PowerShell, right click Windows PowerShell, and select Run as administrator. Click Yes on the User Account Control box.
4. At the PowerShell command prompt, type the following commands, and then press ENTER after each command:
Import-Module ServerManager
Add-WindowsFeature as-net-framework
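As a programmatic alternative to checking Server Manager, the sketch below reads the .NET setup registry key that Microsoft documents for .NET 3.5 to confirm that SP1 is present. This example is an addition to the original post; the class and method names are illustrative.

// Minimal sketch: verify .NET Framework 3.5 SP1 from the registry.
// The key HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5 exposes
// "Install" (1 when installed) and "SP" (service pack level) values.
using System;
using Microsoft.Win32;

class DotNet35Check
{
    static void Main()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5"))
        {
            if (key != null &&
                (int)key.GetValue("Install", 0) == 1 &&
                (int)key.GetValue("SP", 0) >= 1)
            {
                Console.WriteLine(".NET Framework 3.5 SP1 (or a later SP) is installed.");
            }
            else
            {
                Console.WriteLine(".NET Framework 3.5 SP1 was not found.");
            }
        }
    }
}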
Source: blogs.msdn.com
Dealing with very large SQL Compact database files
Working with very large SQL Compact files, you may run into two issues:

1. Manipulating a large SQL Compact database in the VS 2008 IDE requires a hotfix: http://support.microsoft.com/kb/968436 ("Error message when you use the Visual Studio 2008 IDE to manipulate a SQL Server Compact 3.5 database file which is larger than 128 megabytes: 'The database file that is larger than the configured maximum database size'").

2. Upgrading a very large database from SQL Compact 3.1 to 3.5 requires another hotfix: http://support.microsoft.com/kb/971027 ("FIX: Error message when you upgrade a very large database to SQL Server Compact 3.5: 'The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only'").

Hotfix 3.5.5692.12 fixes a problem where a large SQL CE database upgrade fails with the error: "The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only. [ Required Max Database Size (in MB; 0 if unknown) = <size>]". Depending upon the file size and the available resources on the machine, the upgrade process for a very large database may consume significant memory. This is expected behavior. You may want to stop other applications to make room for the upgrade process to complete.
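As a rough illustration of the upgrade path mentioned above, the following sketch uses the SQL Server Compact 3.5 managed API to upgrade a 3.1 database file and to open it with an explicit maximum database size. The file path and the size value are assumptions made for this example; adjust them to your own environment.

// Minimal sketch: upgrading a SQL Compact file to the 3.5 format and opening
// it with a raised Max Database Size (in MB). Assumes the SQL Server Compact
// 3.5 SP1 runtime (System.Data.SqlServerCe) is installed; the path and size
// are hypothetical.
using System;
using System.Data.SqlServerCe;

class CompactUpgradeSample
{
    static void Main()
    {
        // The default maximum database size is small (128 MB), which is what
        // triggers the errors described in the KB articles above.
        string connectionString =
            @"Data Source=C:\Data\Large.sdf; Max Database Size=4091;";

        // Upgrade the file in place to the 3.5 format. For very large files
        // this can take a while and use significant memory, as noted above.
        var engine = new SqlCeEngine(connectionString);
        engine.Upgrade();

        using (var connection = new SqlCeConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine("Upgraded and opened: " + connection.Database);
        }
    }
}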
Source: blogs.msdn.com
Don’t turn the dataflow into a cursor [SSIS]
I saw a thread on the SSIS forum today that went something like this: I have the following dataset:

AccountNo   Date         DailyMovement
00000001    28/08/2010   10
00000001    29/08/2010   5
00000001    30/08/2010   7
00000002    28/08/2010   8
00000002    29/08/2010   6

for which I want to compute a running total per [AccountNo] & [Date] like so:

AccountNo   Date         DailyMovement   RunningTotal
00000001    28/08/2010   10              10
00000001    29/08/2010   5               15
00000001    30/08/2010   7               22
00000002    28/08/2010   8               8
00000002    29/08/2010   6               14

How do I do that using a script component?

That last comment, "How do I do that using a script component?", is a fairly common question. People assume that if a calculated value is dependent on prior rows then a script component needs to be involved, because that running total needs to be tracked somewhere, but that isn't necessarily the case. Ask yourself: how would you do this if the data were residing in a database table? Would you do this?

SELECT '00000001' AS [AccountNo], CONVERT(date,'20100828') AS [Date],10 AS [DailyMovement]
INTO   #balances
UNION ALL
SELECT '00000001' AS [AccountNo], CONVERT(date,'20100829') AS [Date],5 AS [DailyMovement]
UNION ALL
SELECT '00000001' AS [AccountNo], CONVERT(date,'20100830') AS [Date],7 AS [DailyMovement]
UNION ALL
SELECT '00000002' AS [AccountNo], CONVERT(date,'20100828') AS [Date],8 AS [DailyMovement]
UNION ALL
SELECT '00000002' AS [AccountNo], CONVERT(date,'20100829') AS [Date],6 AS [DailyMovement]

SELECT  [AccountNo]
,       [Date]
,       [DailyMovement]
,       (   SELECT SUM(DailyMovement)
            FROM   #balances b2
            WHERE  b1.[AccountNo] = b2.[AccountNo]
            AND    b1.[Date] >= b2.[Date]) AS [RunningTotal]
FROM    #balances b1;

OK, that works. But really, would any of you actually do this? Hopefully not; the use of the correlated subquery has simply turned what should be a nice set operation into a cursor-in-disguise (something I have talked about before), because that subquery will be executed for every row in the table. Instead you could run the following on that same dataset:

SELECT  b1.[AccountNo]
,       b1.[Date]
,       b1.[DailyMovement]
,       SUM(b2.[DailyMovement]) AS [RunningTotal]
FROM    #balances b1
INNER JOIN #balances b2
    ON  b1.[AccountNo] = b2.[AccountNo]
    AND b1.[Date] >= b2.[Date]
GROUP BY b1.[AccountNo], b1.[Date], b1.[DailyMovement];

and you get the same result but with a much more amenable execution plan (execute with SET STATISTICS IO ON if you don't believe me)! The same principle applies in an SSIS dataflow. Often there is no need to resort to a script component; the same result can be achieved using some smart dataflow design. You can use, for example, the same approach as in the second SQL query above using a combination of SSIS' Sort, Aggregate, Merge Join and Conditional Split components. That is not to say that one will be quicker than the other, but at least you'll have something that is more intuitive and arguably more maintainable. Of course, if a performance gain is your primary goal then the correct advice is, as always, "test and measure, test and measure, test and measure"!!! Ask yourself "If I could, how would I solve this using T-SQL?" and see if that influences your dataflow design at all. Invariably script components should be the last choice, not the first! @jamiet
Source: feedproxy.google.com
SQL 2005 SP2 is available now!
Finally, after rounds of CTPs (Community Technology Previews) and great feedback from customers, we have officially released Service Pack 2 (SP2) for SQL 2005.

Download SP2: You can download it by visiting the Microsoft website here.
Enhancements in SQL 2005 SP2: SP2 provides several new features and improvements. You can learn about them here.
Download SQL 2005 SP2 sample databases: You can download the sample databases and learn about their enhancements by visiting here.
Download Books Online refresh for SP2: Books Online is updated with every service pack release. Please download the latest version by visiting here.

As with every service pack, please make sure that you first test it out in your test and QA environments before putting it on the production server. Posted By: Sanchan Saxena
Source: blogs.msdn.com
Class not registered error after restoring an Analysis Services database that uses the SQL Server 2005 SQL Native Client.
After restoring an Analysis Services database that uses the SQLNCLI.1 SQL Native Client (SNAC) on a server that has never had SQL Server 2005 installed, an error similar to the following occurs during processing operations:

Internal error: The operation terminated unsuccessfully. Internal error: The operation terminated unsuccessfully. OLE DB error: OLE DB or ODBC error: Class not registered. Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'. Errors in the OLAP storage engine: An error occurred while processing the 'Internet_Sales_2001' partition of the 'Internet Sales' measure group for the 'Adventure Works' cube from the Adventure Works DW database. Server: The operation has been cancelled.

During execution of queries that access data contained in ROLAP storage, an error message similar to the following is returned:

Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Adventure Works DW', Name of 'Internet Sales Order Details' was being processed. Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'. OLE DB error: OLE DB or ODBC error: Class not registered. Server: The operation has been cancelled.

Opening the Property Sheet for the Data Source from SQL Server Management Studio (SSMS) or the Data Source Designer in Business Intelligence Development Studio (BIDS) and attempting to edit the connection string returns the following error from Connection Manager:

The specified provider is not supported. Please choose different provider in connection manager

Checking the connection string from either SSMS or BIDS, it is similar to the following:

Provider=SQLNCLI.1;Data Source=MyServer;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW

Changing the connection string to use the native OLE DB provider resolves the issue; for example:

Provider=SQLOLEDB.1;Password=Olapolap1;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer

Likewise, modifying the connection string to use the SQL Server 2008 SNAC also resolves it:

Provider=SQLNCLI10.1;Integrated Security="";Persist Security Info=False;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer;Initial File Name="";Server SPN=""

The issue occurs because the SQLNCLI.1 provider does not ship with SQL Server 2008 and the provider name for SNAC connections changed in SQL Server 2008. The SNAC provider names are:

SQL Server 2005: SQLNCLI.1
SQL Server 2008: SQLNCLI10.1

To work around this issue, any of the following options can be used:

- Edit the data source(s) for the database and switch from the SQLNCLI.1 provider to the SQLOLEDB.1 provider.
- Edit the data source(s) for the database and switch from the SQLNCLI.1 provider to the SQLNCLI10.1 Native Client provider.
- Download and install the SQL Server 2005 SQL Native Client from the following link: http://www.microsoft.com/downloads/details.aspx?FamilyID=d09c1d60-a13c-4479-9b91-9e8b9d835cdc&DisplayLang=en

John Desch Microsoft SQL Server Escalation Engineer
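If many databases are affected, the data source connection strings can also be switched programmatically. The sketch below uses Analysis Management Objects (AMO) to point a data source at the SQLNCLI10.1 provider; the server name, database name, and the overall approach are illustrative additions, not part of the original post.

// Minimal sketch: switch an Analysis Services data source from SQLNCLI.1 to
// SQLNCLI10.1 using AMO (Microsoft.AnalysisServices.dll). The server and
// database names are hypothetical.
using System;
using Microsoft.AnalysisServices;

class FixSnacProvider
{
    static void Main()
    {
        var server = new Server();
        server.Connect("Data Source=MyServer");

        Database database = server.Databases.FindByName("Adventure Works DW");
        foreach (DataSource dataSource in database.DataSources)
        {
            if (dataSource.ConnectionString.Contains("Provider=SQLNCLI.1"))
            {
                dataSource.ConnectionString = dataSource.ConnectionString
                    .Replace("Provider=SQLNCLI.1", "Provider=SQLNCLI10.1");
                dataSource.Update();  // Persist the change on the server.
                Console.WriteLine("Updated data source: " + dataSource.Name);
            }
        }

        server.Disconnect();
    }
}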
Source: blogs.msdn.com
Tools of the Trade: Part IV - Developing WinDbg Extension DLLs
A WinDbg extension DLL is a set of exported callback functions for implementing user-defined commands to extract specific, customized information from memory dumps. Extension DLLs are loaded by the debugger engine and can automate tasks while performing user-mode or kernel-mode debugging. An extension DLL may export any number of functions that are used to execute extension commands. Each function is explicitly declared as an export in the DLL's module-definition (.def) file, and function names must be in lowercase letters.

A WinDbg (DbgEng) extension DLL must export DebugExtensionInitialize. This will be called when the DLL is loaded, to initialize the DLL, and may be used by the DLL to initialize global variables. An extension DLL may export an optional function DebugExtensionUninitialize. If this is exported, it will be called just before the extension DLL is unloaded. An extension DLL may export a DebugExtensionNotify. If this is exported, it will be called when a session begins or ends, and when a target starts or stops executing. These notifications are also provided to IDebugEventCallbacks objects registered with a client. An extension DLL may export KnownStructOutput. If this is exported, it will be called when the DLL is loaded. This function returns a list of structures that the DLL knows how to print on a single line. It may be called later to format instances of these structures for printing.

So, how do you develop your own WinDbg extension DLL? Let's follow these steps:
1. Download and install Debugging Tools for Windows from http://www.microsoft.com/whdc/devtools/debugging/installx86.Mspx
2. Create a "Win32 Console Application" using VS 2008.
3. Select Application type "DLL" and click "Finish".
4. Add a "Module-Definition File (.def)" called "wdbrowser" to the project. One way to export your extension function is by specifying the function names in the EXPORTS section of the .def file. You may also use other ways of exporting functions, such as __declspec(dllexport).
5. Configure the project's "Additional Include Directories" to point to the header files that come with WinDbg. The default folder for x86 is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\inc".
6. Configure the project's "Additional Library Directories" to point to the library files that come with WinDbg. The default folder for x86 libraries is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\lib\i386".
7. The library "dbgeng.lib" (with "dbgeng.dll") contains the implementation of the debug engine exported functions, so add "dbgeng.lib" to "Additional Dependencies".
8. Add the name of the module-definition file created in step 4 above.
9. Now include the following required header files in "stdafx.h":
#include <windows.h>
#include <imagehlp.h>
#include <wdbgexts.h>
#include <dbgeng.h>
#include <extsfns.h>
10. Declare the following two global variables in your extension project's main implementation file:
//Version.
EXT_API_VERSION g_ExtApiVersion = {1,1,EXT_API_VERSION_NUMBER,0} ;
WINDBG_EXTENSION_APIS ExtensionApis = {0};
11. Declare the following debug engine COM interface pointers:
IDebugAdvanced2* gAdvancedDebug2=NULL;
IDebugControl4* gDebugControl4=NULL;
IDebugControl* gExecuteCmd=NULL;
IDebugClient* gDebugClient=NULL;
12. The next step is to declare and implement the WinDbgExtensionDllInit function in your DLL's main implementation source file, in this example "wdbrowser.cpp". WinDbgExtensionDllInit is the first function that will be called by WinDbg.
So, this function is the idle for implementing any extension specific initialization or related functionality. Please refer http://msdn.microsoft.com/en-us/library/cc267872.aspx for more details about this function. VOID WDBGAPI WinDbgExtensionDllInit (PWINDBG_EXTENSION_APIS lpExtensionApis, USHORT usMajorVersion, USHORT usMinorVersion) { ExtensionApis = *lpExtensionApis; HRESULT hResult = S_FALSE; if (hResult = DebugCreate(__uuidof(IDebugClient), (void**) &gDebugClient) != S_OK) { dprintf("Acuqiring IDebugClient* Failled\n\n"); return; } if (hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl), (void**) &gExecuteCmd) != S_OK) { dprintf("Acuqiring IDebugControl* Failled\n\n"); return; } if (hResult = gDebugClient->QueryInterface(__uuidof(IDebugAdvanced2), (void**) &gAdvancedDebug2) != S_OK) { dprintf("Acuqiring IDebugAdvanced2* Failled\n\n"); return; } if (hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl4), (void**) &gDebugControl4) != S_OK) { dprintf("Acuqiring IDebugControl4* Failled\n\n"); return; } } 13. Declare another exported function ExtensionApiVersion to report the version of your extension to windbg. Please refer to http://msdn.microsoft.com/en-us/library/cc267873.aspx for detailed information about this function. LPEXT_API_VERSION WDBGAPI ExtensionApiVersion (void) { return &g_ExtApiVersion; } 14. Define Debug engine's interface pointers, so that your extension module can interact with debug engine. For more information please refer http://msdn.microsoft.com/en-us/library/cc265976.aspx - IDebugClient, http://msdn.microsoft.com/en-us/library/cc266102.aspx - IDebugControl http://msdn.microsoft.com/en-us/library/cc265957.aspx - IDebugAdvanced IDebugAdvanced2* gAdvancedDebug2=NULL; IDebugControl4* gDebugControl4=NULL; IDebugControl* gExecuteCmd=NULL; IDebugClient* gDebugClient=NULL; 15. Next step is to - implement debug engine's callback interface IDebugOutputCallbacks . Debug engine callbacks your implementation of IDebugOutCallbacks::Output() with output as a result of the commands that are executed by your extension function. Refer to http://msdn.microsoft.com/en-us/library/cc265716.aspx for detailed information about IDebugOutputCallbacks::Output() 16. Add the following new class in a header file that inherits the IDebugOutputCallbacks interface . #ifndef __OUT_HPP__ #define __OUT_HPP__ #include <string> #include <sstream> class StdioOutputCallbacks : public IDebugOutputCallbacks { private: std::string m_OutputBuffer; // //This buffer holds the output from the command execution. // CHAR m_OutPutBuffer[4096]; public: void InitOutPutBuffer(); std::string GetOutputBuffer() { return m_OutputBuffer; }; void ClearOutPutBuffer() { m_OutputBuffer = ""; }; STDMETHOD(QueryInterface)( THIS_ IN REFIID InterfaceId, OUT PVOID* Interface ); STDMETHOD_(ULONG, AddRef)( THIS ); STDMETHOD_(ULONG, Release)( THIS ); // IDebugOutputCallbacks. STDMETHOD(Output)( THIS_ IN ULONG Mask, IN PCSTR Text ); }; extern StdioOutputCallbacks g_OutputCb; #endif // #ifndef __OUT_HPP__ 17. 
Add the following code that implements the IDebugOutputCallbacks interface methods, especially Output() #include "stdafx.h" #include <stdio.h> #include <windows.h> #include <dbgeng.h> #include "OutputCallBack.h" StdioOutputCallbacks g_OutputCb; STDMETHODIMP StdioOutputCallbacks::QueryInterface( THIS_ IN REFIID InterfaceId, OUT PVOID* Interface ) { *Interface = NULL; if (IsEqualIID(InterfaceId, __uuidof(IUnknown)) || IsEqualIID(InterfaceId, __uuidof(IDebugOutputCallbacks))) { *Interface = (IDebugOutputCallbacks *)this; AddRef(); return S_OK; } else { return E_NOINTERFACE; } } STDMETHODIMP_(ULONG) StdioOutputCallbacks::AddRef( THIS ) { // This class is designed to be static so // there's no true refcount. return 1; } STDMETHODIMP_(ULONG) StdioOutputCallbacks::Release( THIS ) { // This class is designed to be static so // there's no true refcount. return 0; } STDMETHODIMP StdioOutputCallbacks::Output( THIS_ IN ULONG Mask, IN PCSTR Text ) { UNREFERENCED_PARAMETER(Mask); m_OutputBuffer += Text; return S_OK; } void StdioOutputCallbacks::InitOutPutBuffer() { m_OutputBuffer.erase(); } 18. Add implementation of your extension function. In this example, we choose to implement an extension that displays the variable names, types in the frame 2 of the current thread. The implementation is: DECLARE_API (dvf3) { // // Install output callbacks. // if ((gDebugClient->SetOutputCallbacks((PDEBUG_OUTPUT_CALLBACKS) &g_OutputCb))!= S_OK) { dprintf("*****Error while installing Outputcallback.*****\n\n"); return; } // // Since frame count starts from 0 index, we have to pass 2 as parameter for .frame command for the frame# 2 // //Execute command to extrac 2nd frame. if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT | //Send output to only outputcallbacks DEBUG_OUTCTL_OVERRIDE_MASK | DEBUG_OUTCTL_NOT_LOGGED, ".frame 2", DEBUG_EXECUTE_DEFAULT ) != S_OK) { dprintf("Executing .frame 2 failled\n"); return; } //Execute command to extrac 2nd frame. if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT | //Send output to only outputcallbacks DEBUG_OUTCTL_OVERRIDE_MASK | DEBUG_OUTCTL_NOT_LOGGED, "dv /i /t /v", DEBUG_EXECUTE_DEFAULT ) != S_OK) { dprintf("Executing dv /i /t /v failled\n"); return; } dprintf("***** Extracting locals & formal params from frame 2 *****"); dprintf("\n%s\n", g_OutputCb.GetOutputBuffer().c_str()); } 19. Re-build the project. Copy the .DLL from release folder to a folder where Windbg looks for extension DLLs. On x86 machine default location is "<Drive letter>\Program Files\Debugging Tools for Windows (x86)\winext" 20. The extension is ready for the use or test. 21. Start windbg and open a full user mode dump. Type .load myextension and hit enter to load the extension DLL into Windbg process space 22. Run .chain command to verify if your extension is loaded by WinDbg . you will see output similar to below output, if your extension is loaded. 23. Type !dvf3 to run the extension function for extracting and displaying variable names, types from the frame 2 . Additional references: http://msdn.microsoft.com/en-us/library/cc265826.aspx - describes about how to Interact with debug engine, I/O operations with debug engine, Memory access, Using symbols, source files. http://www.codeplex.com/ODbgExt - Microsoft Open Debugger Extension for Windbg Happy developing debug engine extensions! Posted By: Srini Gajjela & Enamul Khaleque (DSD-SQLDeveloper group @ Microsoft)
Source: blogs.msdn.com
Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude]
[This blog post assumes a familiarity with the terms Continuous Integration (CI), MSBuild & MSTest. If you don’t have a good appreciation of those terms then this blog post probably isn’t for you anyway so don’t worry about it!] Over the past few days I have been working to get database unit tests executing as part of our Continuous Integration (CI) build and in this blog post I’ll explain how I went about it because it is decidedly not straightforward. We are using the DB Tools in Visual Studio 2010 (aka DBPro or Datadude - I will refer to it as Datadude from here on in) which includes unit testing functionality. The rest of this blog post also assumes a familiarity with database unit testing in Datadude although if you want to do some background reading an excellent place to start is Jamie Laflen’s whitepaper Apply Test-Driven Development to your Database Projects on MSDN. We got to the point where we had a C# test project containing database unit tests that executed successfully inside Visual Studio. For demonstration purposes I have put together a solution that contains a simple database project and a test project containing a database unit test: And just to prove that the test executes successfully in Visual Studio: N.B. The code for the database object(s) and the unit test itself are not important, that is outside the scope of this blog post. At this point we have some tests that run in our development sandbox, the configuration for which is done using the Database Test Configuration dialog: Those config settings are stored in the app.config file which exists as part of our test project (screenshot of which is at the top of this blog post). If we take a look inside that file we can see the ConnectionString that we defined in the Database Test Configuration dialog: Note the <DatabaseUnitTesting> element, we’re going to be coming back to that a little later!! Once we have the tests running in Visual Studio the next step is to get them running inside a CI build and for that we call out to MSTest.exe from inside an MSBuild script: When we execute that script we get some positive results and all looks peachy: OK, we now have a test that runs successfully both in Visual Studio and by calling MSTest.exe directly from MSBuild. The problem I have to solve now though (and this is the real crux of this blog post) is that the test is still running against the server I specified in app.config; I don’t want this, I want the test to run against a server of my choosing, namely my build server. For this I had to call on the help of the aforementioned Jamie Laflen (who, luckily, I have met in the past) and he gave me guidance on how to achieve it. Below are the steps that you need to take. Firstly you need to tell Datadude that you want to specify some different credentials and that is done by adding an attribute AllowConfigurationOverride="true" to the <DatabaseUnitTesting> element of app.config like so: Don’t forget to build your test project after changing the app.config file!!! This will cause datadude unit testing framework to go and search for a file called *.dbunittest.config where the * indicates either: The name of machine upon which the tests are being run or The name of the user running the tests According to Jamie L datadude runs the following precedence checks: Is an override specified in app.config? If not, use app.config Does an override file exist called <machinename>.dbunittest.config? If so, use it, if not… Does an override file exist called <username>.dbunittest.config? 
If so, use it, if not… Fail! “So”, you may be asking, “what goes in this *.dbunittest.config file then?”. Fair question, that’s where the <DatabaseUnitTesting> element that I mentioned earlier comes in. Copy that element from the app.config file into your *.dbunittest.config file, remove the AllowConfigurationOverride="true" attribute and change the ConnectionString property accordingly: The next step is to tell MSTest.exe that there is another file that it needs to be aware of when it runs the tests and to do that we need to create a custom testsettings file that will be used instead of the default Local.testsettings. To create a new testsettings file use the Add New Item dialog: In the Test Settings dialog go to ‘Deployment’, click ‘Enable deployment’, then ‘Add File…’. Browse to your *.dbunittest.config file and add it to the project: At this point you’re good to go and all you need to do edit your call to MSTest.exe and tell it to use your new testsettings file: Notice the extra information in the output: You’ll have to trust me that it used the new config file and therefore the new ConnectionString, although having said that you do get some useful information in the test results file (*.trx) that MSTest.exe creates: I think its fair to say that this process is a little fiddly to say the least so I’m hoping that Microsoft come up with a better solution in the future, one that integrates better into the whole testing infrastructure. Meanwhile you can download the demo that I put together for this blog post from my SkyDrive. Hope that helps! Comments are welcome. Jamiet UPDATE: Atul Verma has written a 3-part blog series on how to write your database unit tests so if Jamie Laflen's whitepaper (linked to above) doesn't tickle your fancy try reading Atul's series instead: How to unit test SQL Server 2008 database using Visual Studio 2010 How to unit test SQL Server 2008 database using Visual Studio 2010 – Part 2 How to unit test SQL Server 2008 database using Visual Studio 2010 – Part 3
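To make that last step concrete, the MSTest.exe call from the MSBuild script might look roughly like the sketch below. The container and settings file names are placeholders for this example; the point is simply passing the custom testsettings file via the /testsettings switch.

REM Hypothetical example: run the database unit tests on the build server
REM using the custom testsettings file that deploys *.dbunittest.config.
MSTest.exe /testcontainer:MyDatabaseTests.dll /testsettings:BuildServer.testsettings /resultsfile:DatabaseTests.trx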
Source: feedproxy.google.com
Working with the SQL Server Powershell Provider and Central Management Servers from Powershell
I am a big fan of the Central Management Server setup that exists in SQL Server 2008, as well as how you can leverage this for the Enterprise Policy Management Framework that Laura Rubbelke ( Blog | Twitter ) created on Codeplex. Today one of my...(read more)
Source: sqlblog.com
Creating HTTP endpoint fails with 7850 error.
Creating an HTTP endpoint in SQL Server 2005 or SQL Server 2008 may fail with the following error messages:

Msg 7850, Level 16, State 1, Line 1
The user 'domain\myuser' does not have permission to register endpoint 'training_sql_endpoint' on the specified URL. Please ensure the URL refers to a namespace that is reserved for listening by SQL.
Msg 7807, Level 16, State 1, Line 1
An error ('0x80070005') occurred while attempting to register the endpoint 'training_sql_endpoint'.

The error message is actually incorrect in this context; it should instead report that it is the SQL Server service account that lacks permission, and direct you to reserve the namespace explicitly, which is what SQL Server 2008 now does:

The SQL Server Service account does not have permission to register the supplied URL on the endpoint '%.*ls'. Use sp_reserve_http_namespace to explicitly reserve the URL namespace before you try to register the URL again.

When you run CREATE ENDPOINT to create an HTTP endpoint, this is done under the context of the SQL Server service account. If the namespace reservation does not already exist, SQL will implicitly create the reservation. However, this requires that the SQL Server service account have local administrator privileges on the computer. If the SQL Server service account does not have local administrator, on SQL Server 2005 it will fail with the message noted earlier. To resolve this you have two options:

1. Add the SQL Server service account to the local administrators group, restart, and then run the CREATE ENDPOINT again.
2. Or, explicitly reserve the namespace while logged on as a Windows Authentication user that has local administrator on the computer and sysadmin on SQL, *before* you run CREATE ENDPOINT. For example:

sp_reserve_http_namespace N'http://*:2050/sql/myfolder'

Then when you run CREATE ENDPOINT, the SQL Server service account will not have to reserve the namespace because it already exists, and it will proceed with creating the endpoint. Note that when you reserve a namespace explicitly, you need to be sure that the string of the namespace you reserve matches the parameters in the CREATE ENDPOINT statement. So for the namespace above, the CREATE ENDPOINT would need to look like the following for SQL to match it up correctly:

CREATE ENDPOINT [myendpoint]
STATE=STARTED
AS HTTP (
    PATH=N'/sql/myfolder',
    PORTS = (CLEAR),
    AUTHENTICATION = (NTLM, KERBEROS, INTEGRATED),
    SITE=N'*',
    CLEAR_PORT = 2050,
    COMPRESSION=DISABLED)

The following link has more on this in "Identifying the Namespace for an Endpoint": http://msdn.microsoft.com/en-us/library/ms190614.aspx
Source: blogs.msdn.com
Step by step N-tier configuration of Sync services for ADO.NET 2.0
Recently I have worked on couple of cases where customers were trying to use N-tier configuration of Sync services for ADO.NET on IIS. In this blog we will use IIS and setup N-tier configuration of sync service for ADO.NET version 2.0 which comes as part of Microsoft Sync Framework 1.0. Preparing Environment: We need a development machine that we will use to develop the application, a middle tier server where IIS is installed and configured for WCF services and database server. If you like, you can use the same development box for IIS and database server too. a) Client tier We will develop the application (client and WCF service) and run the client part of the application in this machine. We will use Visual Studio 2008, SP1, so it should be installed on this machine. Install sync services on this client machine. Sync Services for ADO.NET version 2.0 comes with Microsoft Sync Framework 1.0 which gets installed if you install SQL Server 2008 or Visual studio 2008, SP1. You can also download it from http://www.microsoft.com/downloads/details.aspx?FamilyId=C88BA2D1-CEF3-4149-B301-9B056E7FB1E6&displaylang=en Install SQL Server Compact 3.5 Service Pack 1 on this client machine if it is already not there. SQL Server Compact is available in three ways: - Integrated with SQL Server 2008 or later versions, Integrated with Microsoft Visual Studio 2008 or later versions and as a download from the Web site at: http://www.microsoft.com/sqlserver/2005/en/us/compact-downloads.aspx b) Middle tier If you want to use a separated middle tier server for IIS to run WCF service then make sure you have IIS installed and configured on this box to run WCF. Install sync services on this machine too. I have used the same development machine for middle tier so I did not have to install it again. c) Database Server Install a version of SQL Server other than SQL Server Compact to act as the server database. If you like to you can use SQL Server Express that installs with Visual Studio. I have used developer edition of SQL 2008, SP1. We are going to use a sample database that should be prepared by following the article on “Setup Scripts for Sync Services How-to Topics” at: http://msdn.microsoft.com/en-us/library/bb726041.aspx. Copy the T-SQL statements of “Custom Change Tracking for Offline Scenarios” that creates a custom change tracking infrastructure. Once you run this script successfully from query analyzer of your SQL Server it will create the new database by the name SyncSamplesDb. I have the database created as shown below: Developing and deploying WCF service: Create a WCF Services Application in Visual Studio 2008: (You may need to run Visual Studio as Administrator to create virtual directories in IIS) Type “WcfForSync” as the name of the project and click on “OK” button. It will create the WCF project with IService1.cs, Service1.svc, Service1.svc.cs and Web.config files with other files and folders. I will keep these default files for simplicity. 
Open the IService1.cs file by double clicking it and replace the code with the code below: (Note that the code samples in the following sections have been taken from the MSDN articles mentioned in the reference section at the bottom with little modification) using System.Collections.ObjectModel; using System.ServiceModel; using System.Data; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.Server; namespace WcfForSync { [ServiceContract] public interface IService1 { [OperationContract()] SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession); [OperationContract()] SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession); [OperationContract()] SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession); [OperationContract()] SyncServerInfo GetServerInfo(SyncSession syncSession); } } Next add a class file by the name SampleServerSyncProvider.cs as below: Replace the code in this file with the code below: using System.Data; using System.Data.SqlClient; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.Server; namespace WcfForSync { //Create a class that is derived from //Microsoft.Synchronization.Server.DbServerSyncProvider. public class SampleServerSyncProvider : DbServerSyncProvider { public SampleServerSyncProvider() { //Create a connection to the sample server database. Utility util = new Utility(); SqlConnection serverConn = new SqlConnection(util.ServerConnString); this.Connection = serverConn; //Create a command to retrieve a new anchor value from //the server. In this case, we use a timestamp value //that is retrieved and stored in the client database. //During each synchronization, the new anchor value and //the last anchor value from the previous synchronization //are used: the set of changes between these upper and //lower bounds is synchronized. // //SyncSession.SyncNewReceivedAnchor is a string constant; //you could also use @sync_new_received_anchor directly in //your queries. SqlCommand selectNewAnchorCommand = new SqlCommand(); string newAnchorVariable = "@" + SyncSession.SyncNewReceivedAnchor; selectNewAnchorCommand.CommandText = "SELECT " + newAnchorVariable + " = min_active_rowversion() - 1"; selectNewAnchorCommand.Parameters.Add(newAnchorVariable, SqlDbType.Timestamp); selectNewAnchorCommand.Parameters[newAnchorVariable].Direction = ParameterDirection.Output; selectNewAnchorCommand.Connection = serverConn; this.SelectNewAnchorCommand = selectNewAnchorCommand; //Create a SyncAdapter for the Customer table by using //the SqlSyncAdapterBuilder: // * Specify the base table and tombstone table names. // * Specify the columns that are used to track when // changes are made. // * Specify download-only synchronization. // * Call ToSyncAdapter to create the SyncAdapter. // * Specify a name for the SyncAdapter that matches the // the name specified for the corresponding SyncTable. // Do not include the schema names (Sales in this case). 
SqlSyncAdapterBuilder customerBuilder = new SqlSyncAdapterBuilder(serverConn); customerBuilder.TableName = "Sales.Customer"; customerBuilder.TombstoneTableName = customerBuilder.TableName + "_Tombstone"; customerBuilder.SyncDirection = SyncDirection.DownloadOnly; customerBuilder.CreationTrackingColumn = "InsertTimestamp"; customerBuilder.UpdateTrackingColumn = "UpdateTimestamp"; customerBuilder.DeletionTrackingColumn = "DeleteTimestamp"; SyncAdapter customerSyncAdapter = customerBuilder.ToSyncAdapter(); customerSyncAdapter.TableName = "Customer"; this.SyncAdapters.Add(customerSyncAdapter); } } public class Utility { //Return the server connection string. public string ServerConnString { get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; } } } } Note: You need to update the connection string in the above Utility class to connect to your SQL Server. Open Service1.svc.cs file in the project by double clicking on it: and replace the existing code with the code below: using System.Collections.ObjectModel; using System.ServiceModel; using System.Data; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.Server; namespace WcfForSync { // NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config. public class Service1 : IService1 { private SampleServerSyncProvider _serverSyncProvider; public Service1() { this._serverSyncProvider = new SampleServerSyncProvider(); } [System.Diagnostics.DebuggerNonUserCodeAttribute()] public virtual SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession) { return this._serverSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession); } [System.Diagnostics.DebuggerNonUserCodeAttribute()] public virtual SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession) { return this._serverSyncProvider.GetChanges(groupMetadata, syncSession); } [System.Diagnostics.DebuggerNonUserCodeAttribute()] public virtual SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession) { return this._serverSyncProvider.GetSchema(tableNames, syncSession); } [System.Diagnostics.DebuggerNonUserCodeAttribute()] public virtual SyncServerInfo GetServerInfo(SyncSession syncSession) { return this._serverSyncProvider.GetServerInfo(syncSession); } } } The application requires references to Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll and Microsoft.Synchronization.Data.Server.dll. Right click on “References” from the project and click on Add Reference… Select Microsoft.Synchronization.dll (Location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\x86) And next add the references to the other two dlls (Location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\ADO.NET\V2.0\x86) Now you should be able to build the project successfully. Once building is successful publish the WCF to IIS. Go to the properties of the project and under Web* tab type IIS server information (middletierserver) , click on the “Create Virtual Directory” as below: From IIS you can see a Virtual directory has been created: If you browse the WCF (http://middletierserver/WcfForSync/Service1.svc) you should get the following page. I have used the same development machine for IIS, so it is showing “localhost” in the URL. 
Note: When tried to browse WCF I have noticed following error with some IIS machines: HTTP Error 404.3 – Not Found The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map. Detailed Error InformationModule StaticFileModule. If you encounter this error please take necessary action as per article at: http://blogs.msdn.com/rjohri/archive/2009/06/29/the-page-you-are-requesting-cannot-be-served-because-of-the-extension-configuration.aspx Developing client application and consuming WCF service: Create a Console application in Visual Studio: Once the project is created, reference the dlls Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll, and Microsoft.Synchronization.Data.SqlServerCe.dll as before. You also need to reference System.Data.SqlServerCe.dll (you should have this dll on your machine once you installed SQL Server Compact 3.5 Service Pack 1, location of this dll on my machine: C:\Program Files (x86)\Microsoft SQL Server Compact Edition\v3.5\Desktop as shown below) Now we need to do service reference to our WCF that we developed and deployed before. Right click on the Reference in this client project and select Add Service Reference… In the Add Service Reference screen you type the URL of our WCF and click on Go button as shown below: Click on “OK” button, it will create service reference as below: Replace the code in the file Program.cs with the following code: using System; using System.IO; using System.Data; using System.Data.SqlClient; using System.Data.SqlServerCe; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.SqlServerCe; namespace ClientForSync { class Program { static void Main(string[] args) { //The Utility class handles all functionality that is not //directly related to synchronization, such as holding connection //string information and making changes to the server database. Utility util = new Utility(); //The SampleStats class handles information from the SyncStatistics //object that the Synchronize method returns. SampleStats sampleStats = new SampleStats(); //Delete and re-create the database. The client synchronization //provider also enables you to create the client database //if it does not exist. ////util.SetClientPassword(); util.RecreateClientDatabase(); //Initial synchronization. Instantiate the SyncAgent //and call Synchronize. SampleSyncAgent sampleSyncAgent = new SampleSyncAgent(); SyncStatistics syncStatistics = sampleSyncAgent.Synchronize(); sampleStats.DisplayStats(syncStatistics, "initial"); //Make changes on the server. util.MakeDataChangesOnServer(); //Subsequent synchronization. syncStatistics = sampleSyncAgent.Synchronize(); sampleStats.DisplayStats(syncStatistics, "subsequent"); //Return server data back to its original state. util.CleanUpServer(); //Exit. Console.Write("\nPress Enter to close the window."); Console.ReadLine(); } } //Create a class that is derived from //Microsoft.Synchronization.SyncAgent. public class SampleSyncAgent : SyncAgent { public SampleSyncAgent() { //Instantiate a client synchronization provider and specify it //as the local provider for this synchronization agent. this.LocalProvider = new SampleClientSyncProvider(); //The remote provider now references a proxy instead of directly referencing the server provider. 
The proxy is created by passing a reference to a WCF service ServiceReference1.Service1Client serviceProxy = new ServiceReference1.Service1Client(); this.RemoteProvider = new ServerSyncProviderProxy(serviceProxy); //Add the Customer table: specify a synchronization direction of //DownloadOnly. SyncTable customerSyncTable = new SyncTable("Customer"); customerSyncTable.CreationOption = TableCreationOption.DropExistingOrCreateNewTable; customerSyncTable.SyncDirection = SyncDirection.DownloadOnly; this.Configuration.SyncTables.Add(customerSyncTable); } } //Create a class that is derived from //Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider. //You can just instantiate the provider directly and associate it //with the SyncAgent, but you could use this class to handle client //provider events and other client-side processing. public class SampleClientSyncProvider : SqlCeClientSyncProvider { public SampleClientSyncProvider() { //Specify a connection string for the sample client database. Utility util = new Utility(); this.ConnectionString = util.ClientConnString; } } //Handle the statistics that are returned by the SyncAgent. public class SampleStats { public void DisplayStats(SyncStatistics syncStatistics, string syncType) { Console.WriteLine(String.Empty); if (syncType == "initial") { Console.WriteLine("****** Initial Synchronization ******"); } else if (syncType == "subsequent") { Console.WriteLine("***** Subsequent Synchronization ****"); } Console.WriteLine("Start Time: " + syncStatistics.SyncStartTime); Console.WriteLine("Total Changes Downloaded: " + syncStatistics.TotalChangesDownloaded); Console.WriteLine("Complete Time: " + syncStatistics.SyncCompleteTime); Console.WriteLine(String.Empty); } } public class Utility { //Return the client connection string with the password. Don’t forget to create folder public string ClientConnString { get { return @"Data Source='D:\\SyncServices\\SyncSampleClient.sdf'; Password=xxxxxxx"; } } //Return the server connection string. public string ServerConnString { get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; } } //Make server changes that are synchronized on the second //synchronization. public void MakeDataChangesOnServer() { int rowCount = 0; using (SqlConnection serverConn = new SqlConnection(this.ServerConnString)) { SqlCommand sqlCommand = serverConn.CreateCommand(); sqlCommand.CommandText = "INSERT INTO Sales.Customer (CustomerName, SalesPerson, CustomerType) " + "VALUES ('Cycle Mart', 'James Bailey', 'Retail') " + "UPDATE Sales.Customer " + "SET SalesPerson = 'James Bailey' " + "WHERE CustomerName = 'Tandem Bicycle Store' " + "DELETE FROM Sales.Customer WHERE CustomerName = 'Sharp Bikes'"; serverConn.Open(); rowCount = sqlCommand.ExecuteNonQuery(); serverConn.Close(); } Console.WriteLine("Rows inserted, updated, or deleted at the server: " + rowCount); } //Revert changes that were made during synchronization. public void CleanUpServer() { using (SqlConnection serverConn = new SqlConnection(this.ServerConnString)) { SqlCommand sqlCommand = serverConn.CreateCommand(); sqlCommand.CommandType = CommandType.StoredProcedure; sqlCommand.CommandText = "usp_InsertSampleData"; serverConn.Open(); sqlCommand.ExecuteNonQuery(); serverConn.Close(); } } //Delete the client database. 
public void RecreateClientDatabase() { using (SqlCeConnection clientConn = new SqlCeConnection(this.ClientConnString)) { if (File.Exists(clientConn.Database)) { File.Delete(clientConn.Database); } } SqlCeEngine sqlCeEngine = new SqlCeEngine(this.ClientConnString); sqlCeEngine.CreateDatabase(); } } } Now we have to update the connection string information in the code. In the Utility class in the below methods you should update the connection string for SQL server and client database (SQL CE database) //Return the client connection string with the password. Don’t forget to create folder public string ClientConnString { get { return @"Data Source='D:\\SyncServices\\SyncSampleClient.sdf'; Password=xxxxxxx"; } } //Return the server connection string. public string ServerConnString { get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxxx; Password=xxxxxxx"; } } Build this client project, at this point the project should compile successfully. Running/testing synchronization: Run the Console application and you should see the result of the synchronization of this sample as below: You can run query on the Customer table in SyncSamplesDb to check the changes by setting the break point in the client app and also by opening a SQL profiler you can see the activities going on in SQL Server. Happy Synchronization! References: Overview (Synchronization Services) : http://msdn.microsoft.com/en-us/library/bb726031(SQL.90).aspx Architecture and Classes (Synchronization Services) : http://msdn.microsoft.com/en-us/library/bb726025(SQL.90).aspx Getting Started: A Synchronization Services Application: http://msdn.microsoft.com/en-us/library/bb726015.aspx How to: Configure N-Tier Synchronization at: http://msdn.microsoft.com/en-us/library/bb902831.aspx A large scale implementation of Sync Service example at: http://blogs.msdn.com/sync/archive/2009/10/19/sharepoint-2010-now-integrates-microsoft-sync-framework.aspx Microsoft Sync Framework 2.0 redistributable Package - http://www.microsoft.com/downloads/details.aspx?FamilyId=109DB36E-CDD0-4514-9FB5-B77D9CEA37F6&displaylang=en Author : Faiz(MSFT), SQL Developer Engineer Reviewed by : Enamul(MSFT), SQL Developer Technical Lead; Azim(MSFT), SQL Developer Technical Lead;Srini(MSFT), SQL Developer Engineer
Source: blogs.msdn.com
Tools of the Trade: Part II – The Repro
In this series of posts, we’re going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve. This post might seem like a bit of a stretch. For one thing, it’s not about a SQL related tool. That’s not such a big deal – there are plenty of tools that we use that have applications beyond SQL. You can expect more posts in this series about tools that have applications far beyond troubleshooting databases and database connectivity. However, this isn’t about a specific, pre-existing tool either. Call it a meta-tool, I guess. Pseudo-tool. Something along those lines. One way or the other, even if it isn’t already built for you, a reproduction can be one of the most invaluable and time-saving tools you’ll use to troubleshoot your most difficult issues. The Repro In the Product Support Services world at Microsoft, we refer to a simplified, stripped down series of steps that lead to a specific behavior as a Repro. This may simply be a step-by-step to recreate an error, or a simple application that recreates an unusual behavior. When we are working on an issue that is buried in a hulking, production application, we often like to tackle it is by attempting to recreate the issue outside of the in vivo environment. There are a lot of benefits to investing some time in developing a Repro. It’s probably pretty obvious why you would not want to troubleshoot in a production environment. I personally like to avoid debugging anything that generates revenue – it just seems to ease the tension. There’s more to it, though – not only do you avoid tinkering with your company’s livelihood, a Repro also gives you the advantage of portability. Whatever technology you’re working with, it is much, much easier to get the assistance of Subject Matter Experts if you can actually send them something to play with. Another big benefit of the Repro is that it helps you to focus on the issue. This is particularly helpful when you need to use other tools to analyze the behavior with which you’re dealing. Whatever kind of tracing you’re doing, you are bound to pick-up extraneous information if you’re running the trace on a production application. Unless you’re very fond of wading through gigantic logs looking for clues, a Repro really helps to avoid getting lost in superfluous data. Finally, a repro can save you a huge amount of time. It can get so irritating having to click through the same series of screens and options, over and over again, as you troubleshoot an issue in a full-scale application. Being able to make a change to code or the environment, and quickly test dramatically improves the speed of the troubleshooting process. It also has the added benefit of keeping debugging from getting boring. That’s my case for investing the time in creating a Repro. The next question is: what is the best way to build a Repro? Well, that’s going to depend greatly on the issue and the environment. One thing that I like to do is work from the inside out. In other words, if I know that a specific line of code is throwing an exception, I’ll create an application with the smallest amount of code I can use to execute that line. If that doesn’t reproduce the behavior, I’ll add more code from the application around it, building up the Repro until it gets the behavior I’m looking for. Another thing that I’ve found is important is to make sure that I attempt my Repro in an environment that’s as similar to production as possible. 
I do not recommend attempting to reproduce an issue occurring in a data-access application running on Windows Server 2003 on your grandma’s Mac. It’ll be much easier to create a repro on something that’s as close to the original scenario as possible and then move it to other environments to see if the behavior is any different. One of the Technical Leads on our team has a good tip for reproducing issues that occur intermittently. For example, say you find that every-so-often your application gets an error when attempting to connect to a data-store. In this situation, it’s probably worth a try to take the connection you’re trying to make and wrap it in a tight loop. It’ll be much easier to launch this and catch the failure than it will be to keep running and re-running a small repro that makes the connection a single time and exits. If you are troubleshooting an issue occurring in a web-application, you may try a tool like Application Center Test to execute your repro many times simultaneously (see: this link for a discussion of using ACT). Finally, I really recommend carefully documenting your repro. Again, it makes it much easier to get the help of others, and it will also prevent you from forgetting what you had done on Friday after a long weekend. Even if you really cannot find a way to reproduce the issue in a stripped down environment, documenting the steps an end-user takes when the issue occurs can be valuable for helping to clarify the problem statement and focus your efforts. Posted By: Justin Trobec
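To make the tight-loop idea concrete, here is a minimal T-SQL sketch of the same approach (not from the original post) applied to a statement that fails every so often; the iteration count and the statement being executed are placeholders you would swap for whatever is failing in your environment, for example a linked server query:

SET NOCOUNT ON;
DECLARE @i INT;
SET @i = 0;
WHILE @i < 100000
BEGIN
    BEGIN TRY
        -- Placeholder: the statement that fails intermittently in your repro
        EXEC ('SELECT 1');
    END TRY
    BEGIN CATCH
        PRINT 'Failed on iteration ' + CAST(@i AS VARCHAR(10)) + ': ' + ERROR_MESSAGE();
        BREAK; -- stop immediately so any trace you are running stays small around the failure
    END CATCH
    SET @i = @i + 1;
END

The same structure ports directly to a small console application if what you are hammering is the connection attempt itself rather than a statement.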
Source: blogs.msdn.com
Changing the security context with Dynamic SQL [T-SQL]
I’ve always found that understanding the SQL security model can be a finicky affair; that’s no criticism, it’s more to do with my reluctance to actually spend time immersing myself in it – I generally know what I need to know and that’s all. I remember back when I was first learning T-SQL I found the whole area of security around dynamic SQL difficult to grok until I saw an actual example of it and I figured it might be useful to anyone in the same position if I were to share a code sample demonstrating these peculiarities. Hence this blog post. The particular vagary that I want to demonstrate is that the use of dynamic T-SQL inside a stored procedure will change the security context under which that dynamic T-SQL is executed. Code inside a stored procedure will execute as the owner of the stored procedure until some dynamic T-SQL is encountered, at which time the security context switches to that of the user that called the stored procedure. Allow me to demonstrate:
--Create demo DB with objects
SET NOCOUNT ON;
USE MASTER
BEGIN TRY
    DROP DATABASE DynSQL;
END TRY
BEGIN CATCH
END CATCH
CREATE DATABASE DynSQL;
GO
USE DynSQL
GO
CREATE SCHEMA [sch];
GO
CREATE TABLE [sch].[foo] ([bar] INT);
GO
--===The important bit============================
CREATE PROC [sch].[demoproc]
AS
BEGIN
    --The following statement will be executed as [sch]
    SELECT * FROM [sch].[foo];
    DECLARE @sql NVARCHAR(MAX) = 'SELECT * from [sch].[foo];';
    --The dynamic SQL in @sql will be executed as the user that called [sch].[demoproc]
    EXEC sp_executesql @sql;
END
GO
CREATE LOGIN [u] WITH PASSWORD = 'p@ssw0rd';
CREATE USER [u] FOR LOGIN [u];
GRANT EXECUTE ON [sch].[demoproc] TO [u]
EXECUTE AS LOGIN = 'u';
EXEC [sch].[demoproc]; -- <-Will throw error "The SELECT permission was denied on the object 'foo', database 'master', schema 'sch'."
REVERT;
--===============================================
--Cleanup
DROP USER [u];
DROP PROC [sch].[demoproc];
DROP TABLE [sch].[foo];
DROP SCHEMA [sch];
DROP LOGIN [u]
USE MASTER
DROP DATABASE [DynSQL];
Let’s take a look at what happens. Firstly we get an (empty) resultset returned from the non-dynamic T-SQL inside the stored procedure because user [u] has got permission to execute [sch].[demoproc] and [sch].[demoproc] has got permission to select data from [sch].[foo]: However over on the messages tab we see that we also get an error from the dynamic T-SQL even though it’s the same T-SQL statement (SELECT * FROM [sch].[foo];). This is because the security context switches to the caller (i.e. [u]) and that caller does not have permission to select from [sch].[foo]: You can copy/paste the code above and execute it – it shouldn’t leave any objects lying around. Hope this helps! @Jamiet P.S. The code above was formatted for the web using The Simple-Talk Code Prettifier.
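As an aside (this follow-up is mine rather than part of the original post): because the dynamic portion runs as the caller, one way to make it succeed is simply to grant the caller the underlying permission. Running the following before the cleanup section of the script above makes both SELECTs return:

USE DynSQL;
GRANT SELECT ON [sch].[foo] TO [u];
EXECUTE AS LOGIN = 'u';
EXEC [sch].[demoproc]; -- both the static and the dynamic SELECT now return (empty) resultsets
REVERT;

Module signing or an EXECUTE AS clause on the procedure are the more usual production-grade alternatives when you do not want to grant table-level permissions directly.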
Source: feedproxy.google.com
TSQL Tuesday #11 – Physical IO’s Don’t Always Accumulate Wait Times
It is time again for another TSQL Tuesday, this time hosted by my good friend and fellow MVP, Sankar Reddy ( Blog | Twitter ). This month’s topic is Misconceptions about SQL Server and as Sankar points out, there are so many misconceptions...(read more)
Source: sqlblog.com
TSQL Tuesday #11 – Misconceptions - The Tempdb Log File and VLF Counts
This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic ( blog | twitter ) and hosted by someone else each month. This month the host is Sankar Reddy ( blog | twitter ) and the topic is Misconceptions in SQL Server...(read more)
Source: sqlblog.com
Tools of the Trade: Part III – Kerberos (I)
In this series of posts, we’re going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve. In this article, Adam Saxton starts down the long road of double-hop troubleshooting by providing a gentle introduction to Kerberos, and how to recognize Kerberos+SQL related issues. Future articles will move beyond identification, and discuss the tools and processes we use to troubleshoot such issues. Kerberos We receive numerous calls where the problem falls under Kerberos. This will present itself when trying to establish a connection to SQL Server (or a Linked Server). I’m going to discuss what Kerberos is, how to recognize when there is a problem with it, and go over some of the items that need to be looked at in order to resolve a Kerberos issue with SQL Server. As there is a lot of information on this topic, I’ll be splitting it up over a few blog posts. What is Kerberos? Kerberos is a secure method for authenticating a request for a service in a computer network, and is used to verify a user or host identity. Kerberos lets a user request an encrypted "ticket" from an authentication process that can then be used to request a particular service from a server. The user's password does not have to pass through the network. Put simply, it is a more secure form of Windows Authentication versus NTLM. Also, understand that Kerberos is not a Windows specific technology. Windows will always try to use Kerberos first over TCP. If that doesn’t work, it will drop down to NTLM. The exception to this is if you are trying to access a resource that is local. NTLM will always be used when accessing local resources. Also note that Kerberos cannot currently be used over Named Pipes or Shared Memory. How does Kerberos work? There are a lot of details to how Kerberos works. I’m not going to get that deep into it within this blog series. I’m going to relate this to how it works with SQL. To do that, I’m going to use the classic example of a Double-hop with a Web Server. We use the word Double-hop to explain that the IIS Server is using a resource that is located on a different server. In this case the first “hop” is from the web browser client to the IIS Server (ASP or ASPX Page); the second hop is to the SQL Server on a different machine. The server on the second hop is also referred to as the Backend Server. In order to successfully establish a connection using Kerberos, a SQL ServicePrincipalName (SPN) needs to exist within Active Directory. I will talk more about SPNs and where they are located in a later post as the topic is complex enough to deserve its own. I would say about 80-90% of calls we receive relating to Kerberos involve either a missing SPN or duplicate SPNs. When you log onto your machine, you will get what we call a Kerberos Ticket. To be more specific, you get a Ticket-Granting Ticket (TGT). You use the TGT as a master ticket to access all Kerberos services on a network. A Kerberos ticket includes all the user credentials or computer credentials in an encrypted format. These credentials are used to identify a specific user or a specific computer on a network for access to Kerberos services. When you access a Kerberos service, that service uses the ticket that you received to authenticate you. After the service verifies your identity and authenticates you, the service issues a service ticket. This is where the SPNs come into play.
Think of the SPN as a pointer for Kerberos so it knows where to go. That’s why, if it’s missing or there are duplicates, it doesn’t know what to do. How to recognize a Kerberos error with SQL Server: There are a few errors that we see where we can make an educated guess that a connectivity issue is Kerberos related:
- Cannot Generate SSPI Context
- Login failed for user ‘(null)’
- Login failed for user ‘NT AUTHORITY\ANONYMOUS’
Those are the three biggest errors we see in relation to Kerberos with SQL Server. These errors occur when the credentials from the first hop cannot be passed to the second hop. Usually, this is because we are failing over to NTLM or our SPN configuration is not correct. There are also other settings within Active Directory that come into play as well as certain machine configuration based on how services are set up. For now, let’s focus on the basics, so I’ll get more into those settings and machine configurations in a later post. One of the last things I’d like to mention is that I’ve seen customers get the following error confused with a Kerberos issue:
Login failed for user ‘SomeUser’
The SomeUser could be whatever user (SQL or Windows) that you are trying to connect with. If you see the actual username in the error that means the credentials actually got to the SQL Server. The issue at that point lies in the SQL Security configuration itself (either the server as a whole, or the database you are trying to connect to). This is not a Kerberos related issue, but instead is a much simpler login failure. Resources: Kerberos in Windows 2000 Kerberos in Windows 2003 How to use Kerberos authentication in SQL Server Posted By: Adam Saxton
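One quick check worth adding here (my addition, not Adam's; it assumes SQL Server 2005 or later and a TCP connection): the sys.dm_exec_connections DMV reports how an established connection actually authenticated, which tells you immediately whether you fell back to NTLM instead of Kerberos.

SELECT session_id, auth_scheme, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
-- auth_scheme returns KERBEROS or NTLM for Windows logins (and SQL for SQL logins)

Run it from the connection you are troubleshooting, for example from the web application's own connection in a double-hop scenario.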
Source: blogs.msdn.com
Commonly used 32-bit CPU registers and their purpose
While debugging a dump, we commonly see various CPU registers. Each register has a different purpose. I am trying to put them together in one place for your easy reference. You can list all the registers in windbg with the “r” command. Please note that a given register may be used for general purpose at any time.
EAX: Arithmetic operations, I/O port access and interrupt calls; function return values are also placed in EAX
EDX: Arithmetic operations, I/O port access and interrupt calls. If you have a multiplication operation which results in more than what a single register can handle, then the most significant 32 bits are stored in EDX and the least significant ones are stored in EAX
EBX: General purpose/base register, often used to hold addresses for data access
ECX: Used for loop counters. Also used for the "this" pointer for a class
EIP: Instruction pointer. Points to the next instruction to execute.
ESP: Stack pointer. This points to the top of the stack.
EBP: Base/Frame pointer.
Author : Enamul(MSFT), SQL Developer Technical Lead
Source: blogs.msdn.com
How to enable TDS parser to display TDS frames when SQLServer is listening on port other than default 1433
If you try to view a netmon trace in Netmon 3.4, you will see TDS traffic is nicely parsed for you. You will see a display similar to this: The parsing works nicely because SQLServer is listening on the default tcp port 1433. But, if your SQLServer is listening on a different port (other than 1433), then this TDS parsing won't work by default. You will see a display similar to this (no frames resolving to TDS): To enable the parser to parse TDS traffic for ports other than 1433 (in this case, 63959), we need to take the following steps:
1. Click on the "Parsers" tab in the netmon UI. This will display the list of parsers installed. You will see tcp.npl and tds.npl along with several different parsers.
2. Double click on tcp.npl and search for "1433"; you will get into a switch/case code block saying "case 1433". We basically need to include our port 63959 here. Just add a case statement above “case 1433” without any parsing code:
case 63959:
case 1433:
    //TDS-parsing code goes here
This is what it looks like after the change:
3. Save your changes.
4. Reload your netmon trace; now it should look like the following. With the change, the TDS parser will resolve TDS traffic on this particular non-default port (63959).
Author : Enamul(MSFT), SQL Developer Technical Lead
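If you are not sure which non-default port your instance is actually using, one way to find out (my addition, not from the original post; it assumes SQL Server 2005 or later and at least one active TCP connection) is to ask the server itself:

SELECT DISTINCT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP';
-- local_tcp_port is the server-side port for each TCP connection; use it in the tcp.npl case statement

Alternatively, the SQL Server errorlog records a "Server is listening on" entry for each IP/port pair at startup.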
Source: blogs.msdn.com
SQLBits videos (distinctly amateur)
Hi all, SQLBits 7 is here, I’m here, and I’m walking around being very obnoxious and sticking a video camera in people’s faces. Thankfully nobody has offered me outside yet so if you’re interested in having a look at what’s going on in and around SQLBits check out my videos at http://www.youtube.com/user/jamiekth Here’s a few for your delectation. more coming, I promise!! UPDATE: The videos are now available as a single-click playlist at http://www.youtube.com/view_play_list?p=01617594AB55E450 Brent Ozar SQLBits backstage - Rob Farley and Chris Webb shoot the breeze SQLBits entrance hall - Gavin Payne, Andre Kamman, Jon Kehayias More to come throughout the weekend so keep checking out my YouTube channel: http://www.youtube.com/user/jamiekth and keep an eye on my Twitter feed where I’ll be posting them all too! @Jamiet UPDATE: Direct links to the videos here: SQLBits backstage - Chris Webb SQLBits backstage - Rob Farley and Chris Webb shoot the breeze SQLBits backstage - Tim Kent SQLBits entrance hall - Gavin Payne, Andre Kamman, Jon Kehayias SQLBits York - Allan Mitchell SQLBits attendees SQLBits 7 - York - Kevin Kline SQLBits 7 - York - Brent Ozar SQLBits 7 - York - James Rowland-Jones SQLBits 7 - York - Delegates at lunch SQLBits 7 - York - Dave Bally SQLBits 7 - York - Chris Testa-O'Neill & Maciej Pilecki SQLBits 7 – York – Neil Hambly SQLBits 7 - York - Simon Sabin (and his phone) SQLBits 7 - York - Buck Woody's hat (and Buck Woody)
Source: feedproxy.google.com
SQL Compact Performance: PostSyncCleanup to turn off UpdateStatistics during initial download of SQL Compact Replication
SQL Compact Replication initial subscription may take significant time on devices. The UpdateStatistics operation adds additional overhead to the initial download. Indicators: The Client Agent log may show the start and end times of UpdateStatistics operation. For example: SQLCECA: 04/09/2009-12:21:26 UpdateStatistics Start app=<appName>.exe SQLCECA: 04/09/2009-12:27:59 UpdateStatistics End hr=0 app=<appName>.exe Workaround: Recent hotfixes for SQL Compact 3.5 and 3.1 (KB article URLs given below) expose a new API for SQL Compact applications to turn off UpdateStatistics during initial download of Replication. Disabling UpdateStatistics can be done only on the merge replication, and not for RDA operations. One should use, SqlCeReplication repl = new SqlCeReplication(); … repl.PostSyncCleanup = 2; to make sure that update stats is not called after merge replication sync. To use the new method in your SQL Compact app, in addition to applying the hotfix to device, you will also need to deploy the updated System.Data.SqlServerCe.dll to the desktop folder where VS can find it, else an error will occur: 'PostSyncCleanup' is not a member of 'System.Data.SqlServerCe.SqlCeReplication'. You can follow these steps to resolve the above error: Extract SYSTEM~1.004 file from the device cab, rename it to “ System.Data.SqlServerCe.dll ” and replace at the appropriate path. When working with SQL Server Compact 3.5 (Visual Studio 2008): “C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Devices\System.Data.SqlServerCe.dll” If your project refers to “System.Data.SqlServerCe.dll” in another location then replace the DLL at that location as well. Once the modified app runs on the device, if the hotfix bits were not deployed you may see the following error (as expected): “Method not found: set_ PostSyncCleanup System.Data.SqlServerCe.SqlCeReplication” Uninstalling SQL Compact 3.5 SP1 RTM and installing the hotfix .cab files should resolve the above error. References: http://support.microsoft.com/kb/963060 http://support.microsoft.com/kb/960142 Posted by: Syed
Source: blogs.msdn.com
Take your script library with you [T-SQL]
As SQL professionals I’m sure many of you build up a library of SQL scripts that you take from project to project, right? Here’s a tip for those of you that have such a collection and want to make sure that you always have ready access to it. I’m a big fan of all things sync-related and there are many tools on the market now that will sync the contents of a Windows folder to the cloud or around different machines; Dropbox is a popular one, as is Windows Live Mesh which I use and I also hear good things about SugarSync (they’re all free by the way). It doesn’t matter which one you use, the basic premise of these tools is that any file dropped into a monitored folder automatically gets copied up to a cloud store and whenever you buy/inherit a new machine simply installing the client will bring that stuff down for you. In short, its an ideal solution for always having access to your SQL script library. Here’s a screenshot of the Windows Live Mesh client: Notice that I have a folder called “Personal SQL Store” (I also have one for my Powershell scripts too). Its synced to two of my laptops and also to SkyDrive Synced Storage which is Windows Live’s name for the cloud store that I referred to earlier. Up at that cloud store I have ready access to the contents of that folder: It makes it really simple to get to those folders from any computer in the world (although admittedly I have occasionally come a cropper thanks to some over zealous IT departments) and I’d highly recommend using a similar system if you have such a script library that you always need access to particularly if, like me, you’re always on the move around different client sites. Alternatively if you don’t have your own set of scripts I have the option to share my folder with up to nine people so if you want read-only access to my script library let me know. UPDATE: Adam Machanic posted a comment that made me realise it was pretty stupid to invite you to share my script library without telling you what was in it. Hence: cdf.TitleCase - Returns Title Case Formatted String dbo.FindNumberOfOccurencesOfOneStringInAnother fnSplitWithCTE - take a delimited list of characters and return them as a dataset All Sessions Data cache usage per table DatawarehouseCleardown - Deletes all data from all tables in a database, respecting FKs. Discovering_tempdb_Stats – Various queries to return info about tempdb Empty-SQL-Cache Get Index Fragmentation Get Table Row Counts Index information including column names Paul Randal myth - DDL Triggers are INSTEAD OF triggers Paul Randal myth - Null bitmap myth Split csv into a dataset using XML & XQuery SSISPackageStatsCollector_v2 tablestats – Stats for every table in a DB View all permissions csp_ToolRenameObjects2005 – Rename FKs, CHECK constraints, Indexes according to a defined naming convention. Work on SQL2005 and above dbo.csp_ToolIDAssignSurrogateKey – Assign/Reassign surrogate keys on a table sp_generate_inserts2005 – Generate inserts according to existing data in a table. Works on SQL2005 and above SqlClrToolkit – a suite of useful SQLCLR procs/functions. 
FIRST/LAST/MERGE aggregate functions, SafeDivision, RegexMatch, CountString, CapitaliseText, RemoveWhitespace, DynamicPivotSQL, Split, DynamicPivot, DirectoryInfo, RegistryInfo, FindChildTables, EnableDisableConstraints SSAS clearcache Hopefully the names are self-explanatory enough!UPDATE 2: Jezza101 left a comment below suggesting that I just zip the whole lot up and post it somewhere which is what I have done at: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20101003/Personal%20SQL%20Store.zip Hope this is useful! @Jamiet
Source: feedproxy.google.com
After Installation of .NET Framework 3.5 SP1 (or .Net framework 2.0 SP2) , SSIS packages using ODBC 3rd party drivers may fail
The Issue: In the last few months, we have seen quite a few cases where after installing .net framework 3.5 SP1 ( KB http://support.microsoft.com/kb/951847), SSIS packages using certain ODBC 3rd party drivers and importing character data may fail with the error below - Error Message: [DataReader Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "DataReader Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure. The error messages may vary depending on if you are using DataReader Source in Visual Studio 2005 or ADO .Net Source in Visual Studio 2008 – [ADO NET Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "ADO NET Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number>)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure. Error 0xc0209029: Data Flow Task: The "component "<ComponentName>" (<number>)" failed because error code 0x80131937 occurred... [ADO NET Source [<number>] Error: The component "ADO NET Source" (<number>) was unable to process the data. Pipeline component has returned HRESULT error code 0xC0209029 from a method call. [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "ADO NET Source" (<number>) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. NOTE: Not all ODBC drivers are affected by this problem. This problem affects primarily SSIS packages which import character data using certain 3rd party ODBC drivers, through the use of ADO.Net connection manager and System.Data.ODBC managed provider. In Visual Studio 2005: SSIS DataReader Source --> ADO.Net connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> Vendor ODBC driver --> Character data In Visual Studio 2008: ADO .Net Source --> ADO.Net connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> Vendor ODBC driver --> Character data Cause: The problem started with .Net Framework 2.0 SP2 after a change was made in the way OdbcDataReader checks for Nulls in Sequential Access mode, calling SQLGetData with a BufferLength argument of 0 for variable width data types. Certain ODBC drivers are not compatible with this pattern. For more information on this function and its arguments, see: SQLGetData Function http://msdn.microsoft.com/en-us/library/ms715441.aspx .Net framework 2.0 SP2 is also included .Net framework 3.5 SP1 components. Microsoft is still investigating the issue and understands the difficult impact that this change has on customers. Determining the full impact of the issue (for example, what are all the data types that can cause the issue etc) is still a work in progress. You can track the progress from the connect site below. 
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=416455 In the meantime, we suggest that you use one of the workarounds mentioned below. Workaround: Workaround#1: (recommended) Modify the SSIS package to redirect the DataReader (or ADO .Net Source in Visual Studio 2008) Error Output to a text file, thereby allowing the package to succeed even though no rows are written out to the error output file.
A. Go to the Data Flow Task.
B. On DataReader Source Properties -> ‘Input and Output Properties’ Tab -> DataReader Output -> Output columns
C. Select the columns (only those with character data types) and change the ‘ErrorRowDisposition’ and ‘TruncationRowDisposition’ properties to RD_RedirectRow
D. Redirect the DataReader Error Output to a text file
E. Repeat A-D for all Data Flow Tasks that fail with the above error.
Workaround#2: If you need a quick fix (because this is a production environment and you cannot afford to modify SSIS packages at this time) – you may need to uninstall the components of .Net Framework 3.5 SP1 or .Net framework 2.0 SP2, depending on what framework version you installed that caused the issue.
a. If your issue is caused by installing only .Net framework 2.0 SP2 (for example, from a Microsoft download site http://www.microsoft.com/downloads/details.aspx?familyid=5B2C0358-915B-4EB5-9B1D-10E506DA9D0F&displaylang=en), then you can uninstall .Net framework 2.0 SP2.
b. If your issue is caused by installing .Net framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), remove the .Net Framework 3.5 SP1 components and bring the system back to the state it was in before the update. This process involves more than simply uninstalling .Net Framework 3.5 SP1, as discussed in the following blogs: http://blogs.msdn.com/dougste/archive/2008/08/19/uninstalling-net-framework-3-5-sp1.aspx http://blogs.msdn.com/astebner/archive/2008/08/01/8803442.aspx
Posted By: Azim Uddin & Enamul Khaleque
Source: blogs.msdn.com
TSQL Tuesday #11 Misconceptions – Enable AWE on 64bit SQL Servers
This post is part of the monthly community event called T-SQL Tuesday started by Adam Machanic ( blog | twitter ) and hosted by someone else each month. This month the host is Sankar Reddy ( blog | twitter ) and the topic is Misconceptions in SQL Server...(read more)
Source: sqlblog.com
There is no such thing as a “Small Change” to a production database
It seems like every week I get hit up with some kind of “Its a view only for this specific report,” or “It is a simple change that the vendor would have made if they had access” type of request related to SQL Server. If you’ve never dealt with me,...(read more)
Source: sqlblog.com
SQL Server 2008 R2 DACPAC Bug - Cannot insert the value NULL into column 'created_by', table 'msdb.dbo.sysdac_instances_internal'
While trying out the new DACPAC feature in SQL Server 2008 R2 recently, I ran into a bug that makes no sense to me at all. In my environment we use an Active Directory Group for the SQL Database Administrators and that group is a member of the sysadmins...(read more)
Source: sqlblog.com
Performance: Tips for faster download of a large SQL Compact 3.5 database via Replication
SQL Compact 3.x Replication initial subscription is resource-intensive and may take significant time on devices. Read the introduction to the SQL Compact Blog for this problem. Introduction (from SQL Compact Blog) “The initial subscription is going to bring down lot of data from server to client in merge replication. It means that the requirement for bandwidth, memory, processing power is going to be high for the creation of subscription. However, for devices all these requirements are not easy and even if they are available, it takes very long time to get the subscription created on the client database.” Symptoms: SQL Compact 3.5 Replication initial download for certain large publications takes a significant amount of time on mobile devices. The corresponding publication with earlier versions (SQL CE 2.0 and SQL 2000) did not take that much time. Performance Tips: Here is a list of workarounds which have a potential to reduce the download time for a large initial sync. Please keep in mind that choosing a single option may not resolve the issue completely. However, a combination of several options may bring the download time to an acceptable level.
- Remove unnecessary indexes from the server side (articles), especially indexes on Varchar columns. This helps to a certain extent; however the overall download time may still be high.
- Apply the hotfix to turn off UpdateStatistics: The Client Agent log during download indicates that a portion of time was spent in executing update statistics. After applying a new hotfix release, the application may invoke a new method call to turn off update statistics. PostSyncCleanup to turn off UpdateStatistics during initial download of SQL Compact Replication KB http://support.microsoft.com/kb/963060 FIX: An error message is logged, and the synchronization may take a long time to finish when you synchronize a merge replication that contains a SQL Server Compact 3.5 subscriber
- Turn off unnecessary logging (Client Agent Log, other types of device logging) on production devices.
- Move to a high-end device, where one can use higher buffer pool sizes and can also place the database .sdf file in main memory as appropriate.
- Pre-create the .sdf on the desktop and then download it to devices. This relates to managing and deploying the solution to n number of devices in the field. You may use the move-copy detection feature of SQL Compact on devices. For more details on how it works, please see the SQL Compact team’s blog entry "Merge Subscribe Once, Copy and Deploy Multiple Times" at http://blogs.msdn.com/sqlservercompact/archive/2007/06/26/merge-subscribe-once-copy-and-deploy-multiple-times.aspx
- Increase buffer pool size: We got some real good benefit in terms of running times by increasing buffer pool size to higher values. By default this value is 640K on a device. The running times for a very large single-table publication with Max Buffer Size of 640K, 1024K, 2048K, and 4096K were as follows:
1) 640K - 86 m 34 secs
2) 1024K - 73 m 56 secs
3) 2048K - 55 m 13 secs
4) 4096K - 38 m 30 secs
You can specify ‘Max Buffer Size’ in the connect string, as follows:
repl.subscriberconnectionstring = "provider=MICROSOFT.SQLServer.OLEDB.CE.3.5;data source=35Repl.sdf;SSCE:Max Buffer Size=4096";
Keep in mind that this setting consumes more program memory on the device. However, this should not be an issue if you have a good 20 MB of program memory available. If the initial sync data is huge, you may set the buffer pool size to a conveniently higher value only for initial sync.
There are real improvements in running times with bigger buffer pool sizes. However, certain devices can't increase buffer pool size beyond 2-3 MB, so, a very large buffer-size is not always applicable. Fix for rowguid index processing delays: In some cases, the delay is due to rowguid index processing, please look for the hotfix KB 970414 when it is available.
Source: blogs.msdn.com
Scripting Database Mail Configuration with Powershell and SMO
Setting up Database Mail in SQL Server 2005 and 2008 is a common task that is performed post server setup. However, if you just took over a fairly large environment and faced having to configure Database Mail on dozens, maybe hundreds of servers, SQL...(read more)
Source: sqlblog.com
My SQLBits presentation this coming Friday – SSIS Performance Tuning
SQLBits is once again upon us, this time its number 7 and it starts on 30th September 2010 in York. I’ve been looking forward to this one since it was first announced because its in my original neck of the woods so as well as going to this great conference I get to go and visit family as well. The guys, Simon, James, Chris, Allan, Darren, Martin & Chris (and anyone else I have forgotten sorry), always put on a great conference and I’m sure this one will be no exception. I’ve taken a look at the agenda and am slightly disappointed to learn that I’m on at the same time as folks that I would really like to go and see for myself; I was especially keen to go to Jonathan Kehayias’ talk on Extended Events, sadly not to be: Can’t wait for this Friday, hope to see you there! @Jamiet
Source: feedproxy.google.com
Be wary of using UNC paths in SSIS packages
I have recently discovered what I believe to be a bug in SQL Server Integration Services (SSIS) 2008 and am taking the opportunity herein to warn about it so that nobody else gets stung by it. The bug concerns the use of Uniform Naming Convention (UNC) paths inside a .dtsx package (i.e. paths that start with a “\\”). I have managed to reproduce a situation where a package will attempt to validate a File Connection Manager containing a UNC path even if the task(s) that use that Connection Manager have got DelayValidation=TRUE. In other words, the package may attempt to validate a Connection that will fail and this will cause errors in your package. The first screenshot below shows the execution of my repro package. Notice: When the package starts up @[User::FakeFileLocation] points to a non-existent drive "u:" Also be aware of a few things that aren’t evident from the screenshot: The ConnectionString of the "FILE" connection manager is set to @[User::FakeFileLocation] The script task changes @[User::FakeFileLocation] to be the same as @[User::RealFileLocation] The dataflow task, the connection manager & the package all have DelayValidation=TRUE In this case everything works OK, no errors are thrown, everything is peachy. Now take a look at this second screenshot. It shows the same package as before but with one important difference, @[User::FakeFileLocation] has been changed from u:\NonExistentFile to \\u\NonExistentFile. Notice in the output that we have some errors because SSIS has attempted to validate that UNC path; it did not attempt to do so before the change. One other point of note is that these errors did not cause the package to stop executing however the error will still “bubble-up” to any calling package and will be interpreted as a package failure which is what makes this such a dangerous bug. I have submitted the repro to Microsoft at https://connect.microsoft.com/SQLServer/feedback/details/585479/. No reply as yet. @Jamiet UPDATE: Microsoft have acknowledged the bug and have resolved to fix it in SQL11 (aka SQL Server Denali). See here for the following response: We believe we found the cause of the problem. One of the system API we use to help resolve path can return different error codes depending on the network configurations and what are available. We will fix the bug in Release 11 of SSIS.
Source: feedproxy.google.com
General network errors (GNEs), Timeout expired, SQL Server not found or Access Denied…
Many times, the key to solving these errors is a network packet trace. The problem is that these errors are often thrown intermittently. This can obviously make it problematic to capture a network trace. Network Monitor 3 (AKA Netmon3) solves this problem quite nicely. Netmon3 has the built-in ability to capture rolling traces. This allows you to start the trace and then walk away and be sure that you will have captured the error. Here are my steps for configuring Netmon3 for a rolling trace:
1. Download Network Monitor 3
a. (http://www.microsoft.com/downloads/details.aspx?FamilyID=18b1d59d-f4d8-4213-8d17-2f6dde7d7aac&DisplayLang=en)
2. Install Network Monitor on both the client and the server experiencing the problem. If you are not able to install it on the server you can install it on another machine that is connected to the same hub as the server or to the admin port of the switch that the server is on. If you go this route you must synchronize the time between the client, server, and this third machine.
3. Synchronize the clocks on the Client and SQL machines
a. At one of the machines bring up a command prompt and execute the following:
i. Net time <\\machinenamewewanttosynchwith> /set /yes
4. Turn off TCP Chimney if any of the machines are Windows 2003
a. Bring up a command prompt and execute the following:
i. Netsh int ip set chimney DISABLED
5. Using the NMCap command-line utility, start the chained trace (chained = create a new capture file after reaching the “Temporary capture file size” rather than overwriting the current capture)
a. [Netmon 3 installation folder]\NMCap /network * /capture /file test.chn:100M (creates 100 MB chained files)
i. NOTE: YOU MUST USE THE .CHN EXTENSION TO CAPTURE CHAINED TRACES
ii. NOTE: The target folder must exist for NMCap to create the trace file
b. More information can be found:
i. http://blogs.technet.com/netmon/archive/2006/10/24/nmcap-the-easy-way-to-automate-capturing.aspx
ii. Or by running the command-line switch /examples
iii. Or by running the command-line switch /?
6. Get the IP addresses of the client and the server
a. ipconfig /all on the client > client.txt
b. ipconfig /all on the server > server.txt
7. Reproduce the problem; please note the exact time the problem reproduces. Also, please note the exact error message.
8. Stop the trace by hitting Ctrl-c in the command-line window
Please note that Netmon3 does not run on Windows 2000. Also, step #7 is probably the single most important step (besides actually starting the trace). If you don’t know the timestamp of the error, it is extremely difficult to find the offending packets in the trace. Happy captures! Posted By: Evan Basalik
Source: blogs.msdn.com
Enforcing naming conventions using database unit testing
My naming convention obsession
Anyone that has ever worked with me will tell you that I am a stickler for naming conventions. I have a somewhat obsessive reputation for it; I can’t help it – I seem to have a deep-seated uncontrollable desire to ensure that every object in my database(s) is/are named consistently (is there anyone else out there equally as obsessive?). I have tried various techniques down the years to try and enforce naming conventions but none of them really worked. I’ve got scripts that alter object names (such a script is in my script library in fact) but these are only any use if they actually get run; they don’t actually enforce the conventions – that’s a manual step. I’ve thought about using Policy-Based Management (PBM) to enforce naming conventions but given I’m a developer and not a DBA that’s not something that is readily available to me and besides, using PBM to enforce naming conventions is reactive rather than proactive if you are developing the code on a machine where the policies are not enforced. Another option I looked into using was Team Foundation Server (TFS) check-in policies; these are policies that can be applied to artefacts when they get checked-in to TFS’s source control system. This option really appealed to me because the naming conventions could be enforced during check-in (i.e. very very early) and didn’t require DBA intervention. In practice though enforcing naming conventions using TFS check-in policies has a few sizable issues:
- It’s not easy. It would require you to parse the file that was getting checked-in, decide what sort of object is defined in the file, and then check the name of the object based on things like object name, schema, etc...
- TFS check-in policies are not installed on the TFS server, they are installed on the development workstations. This means there is a dependency and, even though the source code for the check-in policies can be distributed with your application source code, I didn’t really like this.
- You’re relying on each developer to enforce the check-in policy and with the greatest will in the world….that ain’t gonna happen. It’s too easy to turn them off.
- There is the obvious dependency on using TFS, not something every development shop uses even in the Microsoft space.
Database unit testing to the rescue
No, a better solution was needed and I came up with one in the shape of automated database unit testing. I have spoken recently about how I have become a big fan of database unit testing (see my post Experiences from writing sp_CascadingDataViewer – DB unit testing and code distribution) and being able to enforce naming conventions is one very good reason for that. Enforcing naming conventions using automated unit tests has a number of advantages:
- They can be written against the metadata of the objects themselves (i.e. by querying SQL Server’s system views) so there’s no parsing that needs to be done.
- They can be employed as part of a Continuous Integration (CI) process and run as a build verification test (BVT). Someone checks in an object that violates the naming convention? Bang: broken build! Developers can’t circumvent the tests.
- Nothing needs to be installed on the development workstations. The tests live wholly as part of your source code.
- Not dependent on use of a particular source control system.
Hence I have written some unit tests that enforce the following naming conventions:
- Check constraints must be of the form CK_<schemaName><tableName>_XXX
- Column names must begin with a capital letter
- Column names cannot contain underscores
- Default constraints must be named DF_<schemaName><tableName>_<ColumnName>
- Foreign keys must be of the form FK_<parentObjectSchema><parentObject>_REF_<referencedObjectSchema><referencedObject>XXX
- Non-unique clustered keys must be of the form IXC_<schemaName><TableName>_<Column><Column><Column>…
- Non-unique non-clustered keys must be of the form IX_<schemaName><TableName>_<Column><Column><Column>...
- Unique clustered keys must be of the form IXUN_<schemaName><TableName>_<Column><Column><Column>…
- Unique non-clustered keys must be of the form IXUN_<schemaName><TableName>_<ColumnColumnColumn>...
- Primary keys must be of the form PK_<schemaName><tableName>
- Stored procedure names should not contain underscores
- Stored procedure names must begin with a capital letter
- Table names must not contain underscores
- Table names must begin with a capital letter
I’m not stating that you should agree with these naming conventions (I don’t necessarily agree with them myself – they were defined before I arrived on my current project), the point here is that all of these rules can be enforced and it’s very easy to do it. Here’s the code for the unit test that enforces the primary key naming convention:
/*PK name is PK_<schemaName><tableName>*/
SET NOCOUNT ON
DECLARE @cnt INT;
SELECT *
INTO #t
FROM (
    SELECT OBJECT_NAME(c.[parent_object_id]) AS [TableName], OBJECT_SCHEMA_NAME(c.[parent_object_id]) AS [SchemaName], c.*
    FROM [sys].[key_constraints] c
    INNER JOIN [sys].[tables] t ON c.[parent_object_id] = t.[object_id]
    LEFT OUTER JOIN sys.extended_properties ep ON t.[object_id] = ep.major_id AND ep.[name] = 'microsoft_database_tools_support'
    WHERE ep.[major_id] IS NULL
    AND c.[type] = 'PK'
) q
WHERE [name] <> N'PK_' + [SchemaName] + [TableName]
SET @cnt = @@ROWCOUNT;
IF (@cnt > 0)
BEGIN
    DECLARE @msg NVARCHAR(2048);
    SELECT @msg = '%d Primary Keys do not conform to naming convention (PK_<schemaName><tableName>):'
        + STUFF((SELECT ', ' + [name] FROM #t a FOR XML PATH('')),1,2,'')
    FROM (
        SELECT 1 AS [Id], * FROM #t t
    ) q
    GROUP BY q.[Id]
    SELECT @msg
    RAISERROR(@msg,11,1,@cnt);
END
Essentially all it does is pull all of the primary keys out of [sys].[key_constraints], check what each name should be, and then, if it finds any that violate the naming convention, raise an error containing the names of all the primary keys in question.
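The same pattern extends to the other rules. For example, here is a short sketch of my own (not one of the author’s actual tests) covering the two table-name rules; it uses the same temp-table-plus-STUFF approach, and the binary collation is what makes the initial-capital check case-sensitive:

/*Table names must not contain underscores and must begin with a capital letter*/
SET NOCOUNT ON;
SELECT t.[name] AS [TableName], SCHEMA_NAME(t.[schema_id]) AS [SchemaName]
INTO #offenders
FROM [sys].[tables] t
WHERE t.[name] LIKE '%[_]%'
   OR LEFT(t.[name], 1) COLLATE Latin1_General_BIN NOT LIKE '[A-Z]';
IF (@@ROWCOUNT > 0)
BEGIN
    DECLARE @msg NVARCHAR(2048);
    SELECT @msg = 'Tables violating the table name conventions: '
        + STUFF((SELECT ', ' + [TableName] FROM #offenders FOR XML PATH('')), 1, 2, '');
    RAISERROR(@msg, 11, 1);
END
DROP TABLE #offenders;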
Here’s the error obtained when running the test against [AdventureWorks] (I’ve highlighted the pertinent bit): Test method Prs.SchemaTests.NamingConventions.PrimaryKeys threw exception: System.Data.SqlClient.SqlException: 70 Primary Keys do not conform to naming convention (PK_<schemaName><tableName>):PK_ErrorLog_ErrorLogID, PK_Address_AddressID, PK_AddressType_AddressTypeID, PK_AWBuildVersion_SystemInformationID, PK_BillOfMaterials_BillOfMaterialsID, PK_Contact_ContactID, PK_ContactCreditCard_ContactID_CreditCardID, PK_ContactType_ContactTypeID, PK_CountryRegionCurrency_CountryRegionCode_CurrencyCode, PK_CountryRegion_CountryRegionCode, PK_CreditCard_CreditCardID, PK_Culture_CultureID, PK_Currency_CurrencyCode, PK_CurrencyRate_CurrencyRateID, PK_Customer_CustomerID, PK_CustomerAddress_CustomerID_AddressID, PK_DatabaseLog_DatabaseLogID, PK_Department_DepartmentID, PK_Document_DocumentID, PK_Employee_EmployeeID, PK_EmployeeAddress_EmployeeID_AddressID, PK_EmployeeDepartmentHistory_EmployeeID_StartDate_DepartmentID, PK_EmployeePayHistory_EmployeeID_RateChangeDate, PK_Illustration_IllustrationID, PK_Individual_CustomerID, PK_JobCandidate_JobCandidateID, PK_Location_LocationID, PK_Product_ProductID, PK_ProductCategory_ProductCategoryID, PK_ProductCostHistory_ProductID_StartDate, PK_ProductDescription_ProductDescriptionID, PK_ProductDocument_ProductID_DocumentID, PK_ProductInventory_ProductID_LocationID, PK_ProductListPriceHistory_ProductID_StartDate, PK_ProductModel_ProductModelID, PK_ProductModelIllustration_ProductModelID_IllustrationID, PK_ProductModelProductDescriptionCulture_ProductModelID_ProductDescriptionID_CultureID, PK_ProductPhoto_ProductPhotoID, PK_ProductProductPhoto_ProductID_ProductPhotoID, PK_ProductReview_ProductReviewID, PK_ProductSubcategory_ProductSubcategoryID, PK_ProductVendor_ProductID_VendorID, PK_PurchaseOrderDetail_PurchaseOrderID_PurchaseOrderDetailID, PK_PurchaseOrderHeader_PurchaseOrderID, PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID, PK_SalesOrderHeader_SalesOrderID, PK_SalesOrderHeaderSalesReason_SalesOrderID_SalesReasonID, PK_SalesPerson_SalesPersonID, PK_SalesPersonQuotaHistory_SalesPersonID_QuotaDate, PK_SalesReason_SalesReasonID, PK_SalesTaxRate_SalesTaxRateID, PK_SalesTerritory_Territor... I am currently including these tests inside a C# test project inside Visual Studio 2010. Visual Studio has a rather nice feature that allows you to link to artefacts in other projects and hence we can host our single test class containing all of these tests in one place and link to it from whichever test project we want (typically you will have a test project per database) thus following the DRY principle. Here I show the dialog that demonstrates adding a link to an existing test class: And how it appears in the project. Note that NamingConventions.cs exists in both test projects but one is just a link to the other: Wrap-up I’m not sure my colleagues are too happy about these new tests given that they’re now breaking the build more often but nonetheless I think they realise the value (I guess I’ll find out tomorrow when they read this!!!) All-in-all its working very well for us and I’m now a very happy bunny knowing that naming conventions are being enforced and will continue to be so with zero effort from here on in. I have made the test class that contains all of the tests that I detailed above available on my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20101005/MIControlTest.zip. 
If you want to use it you should simply be able to drop it into an existing C# database test project and away you go (change the tests to suit your naming conventions of course though). Hope this helps. If it does please let me know, I’d really love some feedback on this. @Jamiet
Source: feedproxy.google.com
How to configure SQL server to listen on different ports on different IP addresses?
Technorati Tags: SQL Port The following post describes how you can configure your SQL Server to listen on a different port(s) on different IP addresses that are available on your system. This procedure applies to both SQL server 2005 and SQL server 2008. Case 1: SQL Server is installed in an environment where the IP addresses had not changed since the time it is originally installed. 1) Open SQL Server configuration manager. 2) Ensure that TCP/IP protocol is enabled By default, all the IP addresses listen the same port or ports configured in SQL Server configuration manager. The SQL Server error log will have an entry like the following: 2009-07-15 17:40:06.39 Server Server is listening on [ 'any' <ipv4> 2675]. 3) In the TCP/IP properties set ‘Listen All’ to ‘No’ 4) Go to the IP addresses tab for the instance, set Enabled to Yes and TCP port number field for the specific IP address to the desired port. The screen will look as follows: 5) Restart the SQL Server. Now you should see an entry similar to the following in the SQL error log: 2009-07-15 18:03:10.58 Server Server is listening on [ x.y.z.w <ipv4> 2000]. 2009-07-15 18:03:10.59 Server Server is listening on [ x.y.z.v <ipv4> 2001]. As you can see from the above each of the IP addresses is listening on different port. Case 2: SQL Server is installed in an environment where the IP addresses change dynamically, but the number of IPs active on the system are the same (For example there are two IPs active on the system, but because of lease expiry or when moving to a new subnet, the system hosting SQL Server got either one or both of its IPs changed). In this case, get the output of ipconfig /all on the system, and edit one or all the IP addresses as needed with the new IP addresses that are active on the system using a similar procedure discussed in Case 1. Case 3: You add an additional IP address on the system: In that scenario, you will not be able to use the procedure documented in Case 1 or Case 2 above as the Configuration Manager’s IP address list will just only have as many entries as the number of IPs that SQL Server found when it is installed In this scenario, you can take the following steps to update the registry values SQL server looks at to listen on different ports on different IP addresses. Warning Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall your operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. . For more information about how to back up, restore, and modify the registry, click the following article number to view the article in the Microsoft Knowledge Base: 322756 How to back up and restore the registry in Windows 1) Navigate to the following registry key on the SQL server machine: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\!INSTANCEID!\MSSQLServer\SuperSocketNetLib\Tcp\ Note: !INSTANCEID! is a place holder for your SQL server instance. 2) Right click on IP1, export the registry key as SQLIP template.reg 3) Edit the key name and IP address key .reg file that you exported in step 2 with notepad with the new IP address. (You can get the IP address list on the system by executing ipconfig /all > ipconfig.txt command from the command prompt). 
The contents would look as follows: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQL10\MSSQLServer\SuperSocketNetLib\Tcp\IP3]—> Change this name to a new value for example IP4 "Enabled"=dword:00000000 "Active"=dword:00000001 "TcpPort"="2001" "TcpDynamicPorts"="" "DisplayName"="Specific IP Address" "IpAddress"="a.b.c.d" –> Update this with new IP address value 4) After editing the file save it with a different name – for example new IP4.reg 5) Double click the .reg file from step 3 to import the key as a new entry under [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQL10\MSSQLServer\SuperSocketNetLib\Tcp] registry entry. 6) Repeat steps (3),(4) and (5) for any other new IP addresses that you currently want to configure on the system. Note: After adding the above registry keys, the new IP addresses should now show up in SQL server configuration manager. 7) Optional: Clean up any IPs that are no longer active by deleting the associated <IP_n> registry keys. 8) In SQL Server configuration manager, IP addresses tab, ensure that only the addresses that are listed in the ipconfig output on the system have Enabled property set to Yes and the other set to No Note: If IP All is set to No and if the IP addresses tab has some IP addresses that have ‘Enabled’ set to ‘Yes’ but the actual IP is no longer active on the system, SQL Server service fails to start with an error message like the following logged to SQL Server error log: 2009-07-16 15:43:07.87 Server Server is listening on [ 127.0.0.1 <ipv4> 2753]. 2009-07-16 15:43:07.89 Server Error: 26024, Severity: 16, State: 1. 2009-07-16 15:43:07.89 Server Server failed to listen on x.y.z,w <ipv4> 2000. Error: 0x2741. To proceed, notify your system administrator. 2009-07-16 15:43:07.95 Server Error: 17182, Severity: 16, State: 1. 2009-07-16 15:43:07.95 Server TDSSNIClient initialization failed with error 0x2741, status code 0xa. Reason: Unable to initialize the TCP/IP listener. The requested address is not valid in its context. 2009-07-16 15:43:07.95 Server Error: 17182, Severity: 16, State: 1. 2009-07-16 15:43:07.95 Server TDSSNIClient initialization failed with error 0x2741, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. The requested address is not valid in its context. 2009-07-16 15:43:07.95 Server Error: 17826, Severity: 18, State: 3. 2009-07-16 15:43:07.95 Server Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log. 2009-07-16 15:43:07.95 Server Error: 17120, Severity: 16, State: 1. 2009-07-16 15:43:07.95 Server SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems. Case 4: SQL Server is installed in a clustered environment. On cluster, you cannot configure SQL Server to listen on a specific IP addresses. You must chose IPALL. The IP addresses on which the cluster instance will be listening on is determined by cluster resources (configurable through Cluster Administrator, by adding IP Address resources under SQL Network Name resource). 
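Whichever of the above cases applies, it is worth confirming what the server actually bound to after the restart. One option (my addition; sp_readerrorlog is undocumented but widely used, and its parameters here are the log number, the log type where 1 means the SQL Server error log, and a search string) is to search the current errorlog for the same listening entries quoted above:

EXEC sp_readerrorlog 0, 1, N'Server is listening on';

Each returned row shows one IP address and port pair that the instance bound at startup.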
Additional links: How to: Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager) 294453 How to set up SQL Server to listen on multiple static TCP ports 823938 How to configure an instance of SQL Server to listen on a specific TCP port or a dynamic port Configure and manage SQL Server 2005 from your application using WMI IPv6 and SQL Server 2005 Connecting Using IPv6 Ramu Konidena Microsoft SQL Server Support Technical Lead
Source: blogs.msdn.com
Plan guides (plan freezing) in SQL Server 2005/2008
SQL Server 2005 The plan guides were first introduced in SQL Server 2005.You may find detailed information about the plan guides in SQL Server 2005 here: General information http://msdn.microsoft.com/en-us/library/ms187032(SQL.90).aspx Description of sp_create_plan_guide http://msdn.microsoft.com/en-us/library/ms179880(SQL.90).aspx This feature can be useful when you are trying to enforce the optimizer using a specific execution plan without changing the original query. In SQL Server 2005, however, you have to specify the statement exactly as it was submitted to the SQL Server. Sometimes it can be difficult since you have to capture a SQL Profiler trace and copy the SQL statement from there. Also there are certain rules that have to be applied. For example, the SQL parser does ignore spaces and escape characters inside the query text, but does not allow it at the end, so it may add complexity to capturing the actual statement. In SQL Server 2005 query statistics will appear in DMVs including the T-SQL statement. In order to make it easier you may create a plan guide based on query statistics DMVs directly. Here is the script that demonstrates this. NOTE: Before you execute the examples below please make sure you have the AdventureWorks database installed and the compatibility level for it set to 90. If you don’t have the AdventureWorks database you can download it from: http://www.codeplex.com/SqlServerSamples use AdventureWorks go --- Cleaning cache for this sample dbcc freeproccache go --- Running query first time to get plan generated for freezing set statistics xml on exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off go --- Based on query pattern creating a plan guide - freezing plan declare @sql_text nvarchar(max), @sql_xml_plan nvarchar(max) select @sql_text= substring(sqt.text, (qs.statement_start_offset/2)+1,((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(sqt.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1), @sql_xml_plan = convert(nvarchar(max),sqp.query_plan) from sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(qs.sql_handle) sqt cross apply sys.dm_exec_query_plan(qs.plan_handle) sqp where text like '%Employee%' if @sql_text<>'' begin select @sql_text, @sql_xml_plan set @sql_xml_plan = 'OPTION(USE PLAN '''+@sql_xml_plan+''')' exec sp_create_plan_guide @name =N'MyPlan_Guide_1' , @stmt = @sql_text , @type = N'SQL' , @module_or_batch = NULL , @params = NULL , @hints = @sql_xml_plan end You may check the plan guide is created by querying sys.plan_guides catalog view select * from sys.plan_guides Now execute the query again. --- This time we will see USEPLAN=1 and plan guide name in XML plan output set statistics xml on exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off Click on ShowPlanXML hyperlink SQL Server Management Studio 2005 will show you XML. Look at <StmtSimple> and <QueryPlan> tags. As you can see SQL Server picked the created plan guide and USEPLAN option. WARNING! You should be careful using the plan guides in SQL Server 2005. 
If the metadata and/or data distribution has changed, the optimizer will no longer be able to use the plan guide and the query will fail with the following error:

Msg 8698, Level 16, State 0, Line 1
Query processor could not produce query plan because USE PLAN hint contains plan that could not be verified to be legal for query. Remove or replace USE PLAN hint. For best likelihood of successful plan forcing, verify that the plan provided in the USE PLAN hint is one generated automatically by SQL Server for the same query.

To demonstrate this, disable the existing index that is used in the plan guide above

ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee DISABLE
GO

and try running the query again.

set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off

To clean up your server after this demonstration:

--- Rebuild the disabled index to enable it again
ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee REBUILD
GO
--- Drop the plan guide
EXEC sp_control_plan_guide N'DROP', N'MyPlan_Guide_1';

SQL Server 2008
In SQL Server 2008 the plan guides feature has been improved. In addition to sp_create_plan_guide you can also use the sp_create_plan_guide_from_handle stored procedure:
Understanding plan guides: http://msdn.microsoft.com/en-us/library/ms190417(SQL.90).aspx
sp_create_plan_guide_from_handle: http://technet.microsoft.com/en-us/library/bb964726.aspx
So now you can create a plan guide based on the actual plan handle without pulling out the T-SQL statement text. Here is the example:

use AdventureWorks
go
--- Cleaning cache for this sample
dbcc freeproccache
go
--- Running query first time to get plan generated for freezing
set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off
go
--- Based on query pattern creating a plan guide - freezing plan
declare @plan_handle varbinary(1000)
select @plan_handle = plan_handle
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) sqt
where sqt.text like '%Employee%'
select @plan_handle
exec sp_create_plan_guide_from_handle 'MyPlan_Guide_1', @plan_handle = @plan_handle

As you can see, creating the plan guide is easier now; you can also simply copy and paste the plan handle from the DMV output and pass it to sp_create_plan_guide_from_handle manually. Run the query again:

set statistics xml on
exec sp_executesql N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]'
set statistics xml off
go

Click on the ShowPlanXML hyperlink. In SQL Server 2008, SQL Server Management Studio will take you directly to the graphical execution plan. Then right-click on the execution plan page and choose Properties.
To clean up your server after this demonstration:

--- Drop the plan guide
EXEC sp_control_plan_guide N'DROP', N'MyPlan_Guide_1';

Another difference between SQL Server 2008 and SQL Server 2005 is the optimizer's behavior when the metadata has changed. If that happens, the plan guide can no longer be used, but the SQL Server 2008 optimizer silently skips the plan guide and executes the query with whatever execution plan it considers best. This makes plan guides more robust and less dangerous to use than in SQL Server 2005.
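If you want to check ahead of time whether existing plan guides are still valid after a metadata change (for example, after disabling or dropping an index), SQL Server 2008 also provides the sys.fn_validate_plan_guide table-valued function. A minimal sketch, assuming the plan guide created above still exists in the current database:

--- Returns a row (with the first error found) for each plan guide that is no longer valid;
--- an empty result means all plan guides in the database can still be honored
SELECT pg.name, vpg.msgnum, vpg.severity, vpg.state, vpg.message
FROM sys.plan_guides AS pg
CROSS APPLY sys.fn_validate_plan_guide(pg.plan_guide_id) AS vpg;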
To monitor plan guide behavior you can use two new SQL Profiler events in SQL Server 2008, Plan Guide Successful and Plan Guide Unsuccessful.
Posted by: Sergey Pustovit – SQL Server Escalation Services
Source: blogs.msdn.com
SSIS MSBuild task now included in MSBuild Extension Pack
The information in this blog post may be of interest to anyone out there that is using Continuous Integration with msbuild in order to build/deploy a project that includes SSIS packages. The MSBuild Extension Pack at http://msbuildextensionpack.codeplex.com/ now includes an MSBuild task that will build an SSIS project, the output from which is a .deploymentmanifest file that can be used in concert with the SSIS deployment utility. Building an SSIS project using the SSIS MSBuild task has essentially the same effect as right-clicking on an SSIS project and selecting Build with CreateDeploymentUtility=True (i.e. it produces a .deploymentmanifest file). The advantage of using the MSBuild task is of course that you can now do it in an MSBuild script, with none of this manual mouse-clicking required, thank you very much! The SSIS MSBuild task was previously available in the SSIS community samples project at http://sqlsrvintegrationsrv.codeplex.com, but having it as part of the MSBuild Extension Pack is definitely advantageous as this is a very widely used set of utilities. The source code has been available in the MSBuild Extension Pack since changeset 54481, which was checked in on 26th August 2010 and was included in the August 2010 Release. Note that in order to deploy your packages after using the SSIS MSBuild task you will need to call the SSIS deployment utility and pass in the .deploymentmanifest file it outputs. Information on doing that is decidedly thin on the ground, so I'll try and elucidate. The SSIS Deployment Utility is an executable that gets installed with the SQL Server Integration Services workstation tools and is called dtsinstall.exe. You can call it on the command line and pass in the path to a .deploymentmanifest file, but as far as I can determine there is no way to do an unattended install (i.e. there is no way to tell dtsinstall.exe where to install the packages to; it simply opens up a GUI and prompts you for this information). This is a horribly antiquated method of deployment and I'm hunting around to see if there's anything I've missed here – if you know any better please let me know. Hope this helps! @Jamiet
Source: feedproxy.google.com
Some thoughts on Visual Studio database references and how they should be used for SQL Server BI
Over the past few weeks I have been making heavy use of the database tools in Visual Studio 2010 (formerly known as datadude, DBPro, VSTS for Database Professionals, or one of a plethora of other names that it has gone by over the past few years) and one of the features that has most impressed me has been database references. Database references allow stored procedures in your database project to refer to objects (tables, views, stored procedures etc…) that exist in other database projects, so that when you build your database project it is able to resolve those references. Gert Drapers has a useful introduction to them in his blog post Database References; in his words, database references allow you to:
represent and resolve 3 and/or 4-part name usage inside a database project. Database references are conceptually the same as assembly references inside a C# or VB.NET projects; they allow you to reference objects from your database project that live inside another namespace (database).
It occurred to me that similar functionality would be incredibly useful for SQL Server Integration Services (SSIS), Analysis Services (SSAS) and Reporting Services (SSRS) projects. After all, reports, packages and data source views are rife with references to database objects – why shouldn't we be able to have design-time dependency checking in our BI projects the same way that database and .Net developers do? Here are some examples of ways in which this would be useful:
· If an SSRS report pulls data from a view, I firstly want to know (at design time) that the view exists. Secondly, if the view definition changes I want to know that my report will still work.
· Similarly for an SSAS cube, if a view referenced in the DSV changes or disappears I want my SSAS project build to fail or succeed accordingly.
· Similarly for an SSIS Execute SQL Task, if it executes a stored procedure I want to know that the named parameters that I am using actually exist in the stored procedure.
To this end I have submitted three identical suggestions to Connect, one each for SSIS, SSAS & SSRS:
· [SSAS] Declare database object dependencies
· [SSRS] Declare database object dependencies
· [SSIS] Declare database object dependencies
I have said before, in my blog post The SQL developer gap, that we database and BI developers deserve as much love from Microsoft as our .Net counterparts do, and I'm of the opinion that providing design-time dependency checking across the full gamut of BI projects would be a huge step in the right direction. The holy grail is an object dependency graph from database tables all the way up through database views, database functions, stored procedures, DAL, ORM layer & business logic layer all the way through to presentation – I don't know how long it will take but we will get there one day I am sure, hopefully sooner rather than later! @JamieT
Source: feedproxy.google.com
PART II - Data security enhancements in SQL Server 2005
In the previous edition (Data security enhancements in SQL Server 2005), we talked about data encryption/decryption in general. In this installment, let's talk about key maintenance. It is important to ensure that we can decrypt our data even in the case of a user mistake or corruption. For this we need to protect the keys in our database, and we have a couple of options for doing that.

Master key and certificate
You can use the following steps to back up the master key and certificates.
· Backing up the master key:

BACKUP MASTER KEY TO FILE = 'path_to_file'
    ENCRYPTION BY PASSWORD = 'password'

RESTORE MASTER KEY FROM FILE = 'path_to_file'
    DECRYPTION BY PASSWORD = 'password'
    ENCRYPTION BY PASSWORD = 'password'
    [ FORCE ]

More information on the syntax is available here.
· Backing up a certificate:

BACKUP CERTIFICATE certname TO FILE = 'path_to_file'
    [ WITH PRIVATE KEY
      ( FILE = 'path_to_private_key_file',
        ENCRYPTION BY PASSWORD = 'encryption_password'
        [ , DECRYPTION BY PASSWORD = 'decryption_password' ] ) ]

To restore a backed up certificate, use the CREATE CERTIFICATE statement (see the sketch at the end of this post). More information is available here.

Symmetric key
In order to maintain a symmetric key properly and have the ability to re-create it, you have to use the same values for the following options during key creation:
· ALGORITHM
· KEY_SOURCE
· IDENTITY_VALUE
The key can be protected by a password or by a certificate, but the values above must be exactly the same. You may even use a different name for the symmetric key. Let's take an example to better understand this.

SELECT * INTO Employee_symm FROM AdventureWorks.HumanResources.Employee
GO
ALTER TABLE Employee_symm ADD EncryptedNationalIDNumber varbinary(128);
GO
--- Let's create a symmetric key with protection by password first
CREATE SYMMETRIC KEY symm_key_combo
WITH ALGORITHM = triple_des,
     IDENTITY_VALUE = 'Example of encryption',
     KEY_SOURCE = 'Put here some unique and long enough phrase.'
ENCRYPTION BY PASSWORD = 'Avc#ptNO$cf@o!';
GO
--- Encrypt data
OPEN SYMMETRIC KEY symm_key_combo DECRYPTION BY PASSWORD = 'Avc#ptNO$cf@o!'
GO
UPDATE Employee_symm
SET EncryptedNationalIDNumber = EncryptByKey(Key_GUID('symm_key_combo'), NationalIDNumber);
GO
--- Close the key before dropping it
CLOSE SYMMETRIC KEY symm_key_combo
GO
--- Now let's drop the symmetric key
DROP SYMMETRIC KEY symm_key_combo
GO
--- Re-create the symmetric key with protection by certificate.
--- Also let's use a different name for the symmetric key
--- (assumes a certificate named HumanResources037 already exists in the database)
CREATE SYMMETRIC KEY symm_key_combo_new
WITH
    -- Values here should be exactly the same
    ALGORITHM = triple_des,
    IDENTITY_VALUE = 'Example of encryption',
    KEY_SOURCE = 'Put here some unique and long enough phrase.'
--- Protection can be done by certificate now though
ENCRYPTION BY CERTIFICATE HumanResources037;
--- Now let's decrypt using the new symmetric key
OPEN SYMMETRIC KEY symm_key_combo_new DECRYPTION BY CERTIFICATE HumanResources037;
SELECT NationalIDNumber,
       EncryptedNationalIDNumber AS 'Encrypted ID Number',
       CONVERT(nvarchar, DecryptByKey(EncryptedNationalIDNumber)) AS 'Decrypted ID Number'
FROM Employee_symm;
CLOSE SYMMETRIC KEY symm_key_combo_new
DROP SYMMETRIC KEY symm_key_combo_new
DROP TABLE Employee_symm

So in order to be able to re-create the symmetric key, the following three values need to be set:
ALGORITHM = triple_des,
IDENTITY_VALUE = 'Example of encryption',
KEY_SOURCE = 'Put here some unique and long enough phrase.'
Use the same values to re-create the symmetric key in the future. Hope this helps!
Posted By: Sergey Pustovit
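As a companion to the BACKUP CERTIFICATE syntax above, here is a minimal sketch of re-creating a certificate from its backup files with CREATE CERTIFICATE. The certificate name, file paths and password are placeholders matching the backup example, and the target database is assumed to already have a database master key if the imported private key is to be protected by it:

--- Re-create the certificate (for example on another server) from the backup files
--- ('encryption_password' is the password used when the private key was backed up)
CREATE CERTIFICATE certname
FROM FILE = 'path_to_file'
WITH PRIVATE KEY
   ( FILE = 'path_to_private_key_file',
     DECRYPTION BY PASSWORD = 'encryption_password' );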
Source: blogs.msdn.com
Experiences from writing sp_CascadingDataViewer – DB unit testing and code distribution
It's now been a month since I introduced sp_CascadingDataViewer and I wanted to take the opportunity to talk about a couple of my experiences while writing it.
SQL Server needs packages
Version 1 of sp_CascadingDataViewer is an 899-line stored procedure (view the code at changeset 58999); that's a lot of code and much of it is repeated. I would have loved to have hived off different parts into dedicated functions, but I couldn't do that because my aim was to keep everything in a single stored procedure, thus making it as easy as possible for someone to add it to their SQL Server instance. That is an unfortunate trade-off that we have to make with SQL Server. Oracle has a solution to this problem called packages. A package is essentially a collection of stored procedures and functions that can be distributed and installed as a single unit. That makes them extremely portable and would be an ideal mechanism for me to distribute the collection of stored procedures and functions that make up Cascading Data Viewer. SQL Server needs something equivalent to Oracle packages.
You need to be doing automated database unit testing
sp_CascadingDataViewer wouldn't exist without automated database unit testing in Visual Studio. Period. With something as complicated as sp_CascadingDataViewer there were many, many combinations of code, schema and data that could cause it to break. I had to deal with all those combinations and have a way of ensuring that any changes that I made did not break something else, and automated testing was the means by which I did that. I wrote the code for sp_CascadingDataViewer in Visual Studio rather than SQL Server Management Studio (as I do for all my T-SQL code these days), which meant that my code and the tests lived in the same solution. At the time of writing that solution includes 30 tests, all of which are available in the source code on Codeplex. With a simple key chord (CTRL+R, CTRL+A) I could deploy my code and run all my tests against it – that's an incredibly powerful mechanism and I actually find it to be a very productive method of development, even if you have many tests that need to be run. Writing sp_CascadingDataViewer switched me on to the value of automated database unit testing and I now advocate its use wherever I go. @jamiet
Source: feedproxy.google.com
SQL Server Support FAQ
How often should I expect to hear from my Support Engineer?
This will vary based on the issue and its impact on your business. If you have specific preferences for frequency of contact, time of day, or method (phone vs. e-mail), please discuss this with your support engineer.
My Support Engineer is not available and I need help now.
If you need immediate assistance and your engineer is unavailable, please contact the engineer's manager. Manager contact information is in the engineer's signature block and in the initial e-mail. If you are unable to reach the manager, you can call the Microsoft Support line (800-936-5800), provide your case number, and request to speak with the next available support engineer. You will be routed to the phone queue for that specialty and connected to an engineer. The Support Engineer receiving the call may need time to review the case history and steps taken to date, depending on the duration and complexity of the issue.
My support issue is not progressing well.
Each Support Engineer should provide a management contact in his/her signature block and in the initial e-mail. If you have:
· schedule conflicts with your assigned engineer's availability,
· communication issues,
· a need for faster resolution,
· or are otherwise dissatisfied with the support provided,
please engage the engineer's manager. We strive to deliver outstanding customer service, and we appreciate you letting us know if we are not meeting your expectations.
I need a different engineer.
As mentioned in the preceding section, "My support issue is not progressing well", you may contact the engineer's manager and request a different engineer.
My Support Engineer has suggested that we archive the case. What does this mean?
If there is going to be an extended delay before you can implement a change or collect requested data, we may ask to archive your case. This means that the case will be closed on our end. You can reopen the case if you encounter the same error on the same instance/server again, or if you have the data needed to continue troubleshooting. In order to reopen the case, contact the support engineer or his/her manager directly. You may be asked to provide feedback on support. Please assess the support based on the service received to date, with the understanding that Microsoft does not consider an archived case resolved.
We thought that our issue was solved, but the problem has recurred. Can I re-open my support case?
As long as the problem and the server are the same as in the original case, you can re-open the case. To do this, first try to reach the previous engineer or his/her manager. If you are unable to reach them, contact the Microsoft Support line (800-936-5800), provide your case number, and request that Microsoft re-open the case. Note that in some cases the same symptoms can be caused by multiple issues. For example, if a server experienced slow performance and the disk I/O subsystem was fixed and performance restored, and later performance degraded again, this could be due to out-of-date statistics or other issues. In such ambiguous cases, you may be asked to open a new case.
What do the Microsoft Case Severities mean?
A – Critical – 1 hour initial response goal
· High-impact problem in which production, operations, or development is proceeding but is severely impacted, or where production and/or profitability will be severely impacted within days.
B – Urgent – 4 hour initial response goal
· High-impact problem where production is proceeding, but in a significantly impaired fashion.
· Time-sensitive issue important to long-term productivity that is not causing an immediate work stoppage.
C – Important – 24 hour initial response goal
· Important issue which does not have significant current productivity impact for the customer.
D – Monitor
· This severity is used for support issues that are waiting, usually to confirm a provided solution or when waiting for an intermittent problem to recur, in order to collect needed data.
The Support Engineer for my support issue said that I need to open an additional support incident. Why should I have to open another incident?
Microsoft defines an incident or case as:
· A single support issue and the commercially reasonable efforts to resolve it
· A problem that cannot be broken down into subordinate problems
NOTE: The Customer and the Support Engineer must agree on the case resolution. Also be aware that Microsoft does not charge for code defects, documentation errors, and product "wishes", as documented on Microsoft's public website at http://support.microsoft.com/?LN=en-us&sd=tech&scid=gp%3Ben-us%3Bofferprophone&x=7&y=16#faq607
EXAMPLE: A support issue is opened for slow performance of an application with a SQL Server back-end. Troubleshooting identifies a specific stored procedure as the problem, and updates to statistics and an increase in covering indexes resolves the issue. However, the same server is also experiencing slow performance for merge replication. This is considered a different issue, which is unrelated to the original issue for which the support case was opened.
I'm interested in a higher level of service and a closer relationship with Microsoft.
Microsoft offers a managed support relationship known as Premier Support, designed to meet the needs of an enterprise customer. For an overview of the service elements, please see http://www.microsoft.com/services/microsoftservices/srv_prem.mspx or call 1-800-936-3500 to request contact by a Premier Support representative.
Advisory vs. Break-Fix
Microsoft provides support services for customers with problems encountered implementing or using Microsoft products. These are commonly known as "break-fix issues". Occasionally customers need short-term (less than 40 hours) consultative assistance, as opposed to the standard break-fix service. This short-term consultative assistance is known as Advisory service. This is available to Microsoft's Premier customers as an element in their support contracts. It is also available to other customers on an hourly charge basis. For more information on Advisory Services, please see the Advisory Services section at http://support.microsoft.com/default.aspx?scid=fh;EN-US;OfferProPhone or go to the Advisory Services page, http://support.microsoft.com/gp/advisoryservice .
Hope this helps!
Posted By: Microsoft SQL Server Support Team
Source: blogs.msdn.com
Newly closed Connect items augur well for SSIS in Denali
Todd McDermid spoke recently on his blog post Integration Services vNext Coming Soon about how some recently closed Connect items had encouraged him as to the future of SSIS. Also, Matt Masson from the SSIS team has written a similarly encouraging blog post about some upcoming SSIS enhancements at Upcoming Product Changes. Now it's my turn. Just this morning I had three Connect items returned to me as "fixed" and, like Todd, I'm delighted to see these items getting closed as such. I have long complained about the shortcomings of SSIS's logging framework, as I don't believe it produces enough "context" as to why a container happens to be executing; the consequence is that we end up getting lost in a plethora of log records where we can't see how each one relates to another. These three Connect items give me hope that this issue is getting addressed in the next version of SSIS (aka SQL11, aka Denali). The three Connect items in question are:
· Can't differentiate between multiple instances of a task running in parallel - If a container happens to be running in parallel with another instance of itself (e.g. a dataflow task in a package that has been called from two Execute Package Tasks) there is no way to differentiate between the two. A solution would be to provide an execution identifier for the execution of each container, just like the ExecutionGUID that we get for a package.
· Please put ExecutionGUID property on DtsContainer - Pretty much the same as the last one.
· SSIS: Make container stack available - This refers to what I call the context of a task being executed. We know that a task is executing, but what are all the ancestral tasks and containers in the container hierarchy that have led to that task being executed?
The provision of more execution metadata for logging purposes gives me hope that the next version of SSIS will have a much better story around logging. Hopefully we won't have to wait long to find out. @Jamiet
Source: feedproxy.google.com
SQLBits 7 Wrap Up
Last week I was in Great Britain for SQL Bits 7. This was the first conference outside of the United States that I have attended, and I was honored to attend SQL Bits 7 as a speaker during Friday’s Sessions. The city of York is absolutely...(read more)
Source: sqlblog.com
PASS 2010 Birds of a Feather Lunch
We are just weeks away from the #SQLPASS North American Summit for 2010, and yesterday Mike Walsh ( Blog | Twitter ) posted the list of topics for the Birds of a Feather Lunch event that was started last year and has been continued into this year. ...(read more)
Source: sqlblog.com
T-SQL Tuesday #008: Turning Civilians into Soldiers
This month’s T-SQL Tuesday, started by Adam Machanic ( Blog | Twitter ), is being put on by SQLServerCentral author and MCM, Robert Davis ( Blog | Twitter ). This month the topic is learning and teaching...(read more)
Source: sqlblog.com