
Monday, October 11, 2010

How to connect to file-based data sources (Microsoft Access, Microsoft Excel, and text files) from a 64-bit application


The Issue: A 64-bit process can load only 64-bit components within its process boundary; the same is true for a 32-bit process. So, if your application is 64-bit, you need a 64-bit provider or driver to connect to Microsoft Access (mdb, accdb), Microsoft Excel 2010 (xls, xlsx, and xlsb), or text files. The bad news is that there is no 64-bit provider or driver available "yet" to connect to these file-based data sources. The good news is that a 64-bit provider is heading your way and is currently in beta.

The Kludge: The common workaround is to connect to a 32-bit SQL Server instance that has a Linked Server to the Access/Excel/text file. This is a hack: it can be difficult to set up, it can have stability and performance issues, and realistically, we at Microsoft would rather not support this setup or the issues arising from it.

The Good news: A 64-bit driver is headed your way. This is great news for users in a 64-bit world. Soon you'll be able to connect to these file-based data sources from your 64-bit application, rather than wrestle with obscure settings to force them to connect via a Linked Server.

The next version of Microsoft Office, Office 2010, will be available in a 64-bit version. This version will include a 64-bit version of the "2010 Office System Driver Connectivity Components", which includes all the 64-bit ODBC drivers and OLEDB providers needed to connect to these file-based data sources.

You will not have to buy or install Office 2010 to obtain and use the new 64-bit components. Like the current version of the provider, they will be available as a free download.

You can download the beta version from here: http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

Connection string for the 64-bit OLEDB Provider:
For Microsoft Office Access: set the Provider string to "Microsoft.ACE.OLEDB.12.0".
For Microsoft Office Excel: add "Excel 12.0" to the Extended Properties of the OLEDB connection string.

Connection string for the 64-bit ODBC Driver:
For Microsoft Office Access: set the connection string to "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=path to mdb/accdb file".
For Microsoft Office Excel: set the connection string to "Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=path to xls/xlsx/xlsm/xlsb file".

The gotchas: You cannot install the 32-bit version and the 64-bit version of the "2010 Office System Driver Connectivity Components" on the same computer. You cannot install the 64-bit version of the "2010 Office System Driver Connectivity Components" on a computer that already has the 32-bit Office 2007 ACE provider. However, the 32-bit Office 2007 provider can coexist side by side with the 32-bit version of the "2010 Office System Driver Connectivity Components".

Authors: Enamul Khaleque & Srini Gajjela [DSD-SQLDeveloper group at Microsoft]
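As a quick illustration of the connection strings above, here is a minimal C# sketch of how a 64-bit .NET application might use the new components once they are installed. The file paths, worksheet name "Sheet1", and HDR setting are placeholder assumptions, not values from the post.

using System;
using System.Data.Odbc;
using System.Data.OleDb;

class AceConnectivitySample
{
    static void Main()
    {
        // 64-bit ACE OLEDB provider: Access database (the path is a placeholder).
        using (var accessConn = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Northwind.accdb;"))
        {
            accessConn.Open();
            Console.WriteLine("Access database opened via OLEDB.");
        }

        // 64-bit ACE OLEDB provider: Excel workbook, with "Excel 12.0" in Extended Properties.
        using (var excelConn = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\Sales.xlsx;" +
            @"Extended Properties=""Excel 12.0;HDR=YES"";"))
        {
            excelConn.Open();
            // "Sheet1" is an assumed worksheet name.
            var cmd = new OleDbCommand("SELECT * FROM [Sheet1$]", excelConn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);
            }
        }

        // 64-bit ACE ODBC driver: Access database referenced through DBQ.
        using (var odbcConn = new OdbcConnection(
            @"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\Data\Northwind.accdb;"))
        {
            odbcConn.Open();
            Console.WriteLine("Access database opened via ODBC.");
        }
    }
}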

SQL Server 2005 setup fails when MSXML Core Services 6.0 Service Pack 2 has already been installed
There is a known issue with SQL Server setup when the MSXML6 component update has been installed on the system. The problem described in KB 968749 (http://support.microsoft.com/kb/968749) has raised a lot of concerns among customers, because the documented solution is manual and not usable in large enterprise environments. To automate the MSXML6 component uninstall, we have created an automated solution for this issue. However, the solution needs to be implemented on a case-by-case basis.

If you are experiencing the issue described in KB 968749 and need an automated solution, please contact SQL CSS. Details on how to open an incident are here: http://support.microsoft.com/. The incident for this specific issue will be free of charge.

We apologize for any inconvenience.

SQL CSS.

Follow us on Twitter (http://twitter.com/MicrosoftSQLCSS)

How to enable the TDS parser to display TDS frames when SQL Server is listening on a port other than the default 1433
If you view a network trace in Netmon 3.4, you will see that TDS traffic is nicely parsed for you. You will see a display similar to this: The parsing works because SQL Server is listening on the default TCP port 1433. But if your SQL Server is listening on a different port (other than 1433), this TDS parsing won't work by default. You will see a display similar to this (no frames resolving to TDS): To enable the parser to parse TDS traffic for ports other than 1433 (in this case, 63959), take the following steps:

1. Click the "Parsers" tab in the Netmon UI. This displays the list of installed parsers; you will see tcp.npl and tds.npl along with several other parsers.

2. Double-click tcp.npl and search for "1433"; you will land in a switch/case code block saying "case 1433". We need to include our port 63959 here. Just add a case statement above "case 1433" without any parsing code:

case 63959:
case 1433:
//TDS-parsing code goes here

This is what it looks like after the change:

3. Save your changes.

4. Reload your Netmon trace; now it should look like the following. With the change, the TDS parser will resolve TDS traffic on this particular non-default port (63959).

Author: Enamul (MSFT), SQL Developer Technical Lead

Commonly used 32-bit CPU registers and their purpose
While debugging a dump, we commonly see various CPU registers. Each register has a different purpose; I am putting them together in one place for easy reference.

You can list all the registers in WinDbg with the "r" command.

Please note that a given register may be used as a general-purpose register at any time.

EAX: Arithmetic operations, I/O port access and interrupt calls; also commonly holds a function's return value.
EDX: Arithmetic operations, I/O port access and interrupt calls. If a multiplication produces a result larger than a single register can hold, the most significant half of the result is stored in EDX and the least significant half in EAX.
EBX: General-purpose register, often used as a base pointer for memory access.
ECX: Used for loop counters. Also used for the "this" pointer of a class.
EIP: Instruction pointer. Points to the next instruction to execute.
ESP: Stack pointer. Points to the top of the stack.
EBP: Base/frame pointer.

Author: Enamul (MSFT), SQL Developer Technical Lead

How to configure SQL server to listen on different ports on different IP addresses?
The following post describes how you can configure your SQL Server to listen on a different port (or ports) on the different IP addresses that are available on your system. This procedure applies to both SQL Server 2005 and SQL Server 2008.

Case 1: SQL Server is installed in an environment where the IP addresses have not changed since the time it was originally installed.

1) Open SQL Server Configuration Manager.
2) Ensure that the TCP/IP protocol is enabled. By default, all the IP addresses listen on the same port or ports configured in SQL Server Configuration Manager. The SQL Server error log will have an entry like the following:
2009-07-15 17:40:06.39 Server      Server is listening on [ 'any' <ipv4> 2675].
3) In the TCP/IP properties, set 'Listen All' to 'No'.
4) Go to the IP addresses tab for the instance, set Enabled to Yes, and set the TCP port number field for the specific IP address to the desired port. The screen will look as follows:
5) Restart SQL Server. Now you should see entries similar to the following in the SQL error log:
2009-07-15 18:03:10.58 Server      Server is listening on [ x.y.z.w <ipv4> 2000].
2009-07-15 18:03:10.59 Server      Server is listening on [ x.y.z.v <ipv4> 2001].
As you can see from the above, each of the IP addresses is listening on a different port.

Case 2: SQL Server is installed in an environment where the IP addresses change dynamically, but the number of IPs active on the system stays the same (for example, there are two IPs active on the system, but because of lease expiry or a move to a new subnet, one or both of the IPs of the system hosting SQL Server changed). In this case, get the output of ipconfig /all on the system, and edit one or all of the IP addresses as needed with the new IP addresses that are active on the system, using a procedure similar to the one discussed in Case 1.

Case 3: You add an additional IP address on the system. In this scenario, you will not be able to use the procedure documented in Case 1 or Case 2 above, because Configuration Manager's IP address list will only have as many entries as the number of IPs that SQL Server found when it was installed. Instead, you can take the following steps to update the registry values SQL Server looks at to listen on different ports on different IP addresses.

Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall your operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. For more information about how to back up, restore, and modify the registry, click the following article number to view the article in the Microsoft Knowledge Base: 322756 How to back up and restore the registry in Windows

1) Navigate to the following registry key on the SQL Server machine:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\!INSTANCEID!\MSSQLServer\SuperSocketNetLib\Tcp\
Note: !INSTANCEID! is a placeholder for your SQL Server instance.
2) Right-click IP1 and export the registry key as SQLIP template.reg.
3) Using Notepad, edit the key name and the IP address in the .reg file that you exported in step 2 to reflect the new IP address. (You can get the list of IP addresses on the system by executing the ipconfig /all > ipconfig.txt command from the command prompt.)
The contents would look as follows:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQL10\MSSQLServer\SuperSocketNetLib\Tcp\IP3]  --> Change this name to a new value, for example IP4
"Enabled"=dword:00000000
"Active"=dword:00000001
"TcpPort"="2001"
"TcpDynamicPorts"=""
"DisplayName"="Specific IP Address"
"IpAddress"="a.b.c.d"  --> Update this with the new IP address value

4) After editing the file, save it with a different name, for example IP4.reg.
5) Double-click the .reg file you saved in step 4 to import the key as a new entry under the [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.SQL10\MSSQLServer\SuperSocketNetLib\Tcp] registry entry.
6) Repeat steps (3), (4) and (5) for any other new IP addresses that you want to configure on the system.
Note: After adding the above registry keys, the new IP addresses should now show up in SQL Server Configuration Manager.
7) Optional: Clean up any IPs that are no longer active by deleting the associated <IP_n> registry keys.
8) In SQL Server Configuration Manager, on the IP addresses tab, ensure that only the addresses listed in the ipconfig output on the system have the Enabled property set to Yes and that the others are set to No.
Note: If IP All is set to No and the IP addresses tab has some IP addresses with 'Enabled' set to 'Yes' but the actual IP is no longer active on the system, the SQL Server service fails to start, with error messages like the following logged to the SQL Server error log:
2009-07-16 15:43:07.87 Server      Server is listening on [ 127.0.0.1 <ipv4> 2753].
2009-07-16 15:43:07.89 Server      Error: 26024, Severity: 16, State: 1.
2009-07-16 15:43:07.89 Server      Server failed to listen on x.y.z,w <ipv4> 2000. Error: 0x2741. To proceed, notify your system administrator.
2009-07-16 15:43:07.95 Server      Error: 17182, Severity: 16, State: 1.
2009-07-16 15:43:07.95 Server      TDSSNIClient initialization failed with error 0x2741, status code 0xa. Reason: Unable to initialize the TCP/IP listener. The requested address is not valid in its context.
2009-07-16 15:43:07.95 Server      Error: 17182, Severity: 16, State: 1.
2009-07-16 15:43:07.95 Server      TDSSNIClient initialization failed with error 0x2741, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. The requested address is not valid in its context.
2009-07-16 15:43:07.95 Server      Error: 17826, Severity: 18, State: 3.
2009-07-16 15:43:07.95 Server      Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
2009-07-16 15:43:07.95 Server      Error: 17120, Severity: 16, State: 1.
2009-07-16 15:43:07.95 Server      SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.

Case 4: SQL Server is installed in a clustered environment. On a cluster, you cannot configure SQL Server to listen on specific IP addresses; you must choose IPALL. The IP addresses on which the clustered instance listens are determined by cluster resources (configurable through Cluster Administrator, by adding IP Address resources under the SQL Network Name resource).
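Once SQL Server is listening on the configured port, client applications need to include that port in their connection string. Here is a minimal C# sketch of connecting to a specific address and port; the address x.y.z.w and port 2000 are taken from the example log entries above, and Integrated Security is an assumption.

using System;
using System.Data.SqlClient;

class PortConnectSample
{
    static void Main()
    {
        // "tcp:" forces the TCP/IP protocol; "x.y.z.w,2000" means "address,port".
        string connStr = "Data Source=tcp:x.y.z.w,2000;Initial Catalog=master;Integrated Security=SSPI;";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // Ask the server which local address/port this session actually connected to.
            var cmd = new SqlCommand(
                "SELECT local_net_address, local_tcp_port FROM sys.dm_exec_connections WHERE session_id = @@SPID",
                conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Connected to {0} on port {1}", reader[0], reader[1]);
            }
        }
    }
}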
Additional links:
How to: Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager)
294453    How to set up SQL Server to listen on multiple static TCP ports
823938    How to configure an instance of SQL Server to listen on a specific TCP port or a dynamic port
Configure and manage SQL Server 2005 from your application using WMI
IPv6 and SQL Server 2005
Connecting Using IPv6

Ramu Konidena
Microsoft SQL Server Support Technical Lead

Using Dynamic MDX in Reporting Services: Part 1
If you're using Analysis Services as a data source for Reporting Services reports, you can build a simple dataset using the graphical query designer, but you'll want to switch to the generic query designer to create the MDX query string manually when you have more advanced requirements. Using the generic query designer, you can:

· Impose greater control over the sets that you want to add to the rows axis by using set functions.
· Add query-scoped named sets to the query in addition to calculated members. (Calculated members can also be added in the graphical query designer, but not named sets.)
· Build dynamic MDX queries.

When would you need a dynamic MDX query? Whenever you want to modify the query based on a condition known only at run time, typically based on a parameter value. If you're using the graphical query designer, you can auto-generate the report parameter's query for available values by selecting the Parameter checkbox. When the user selects a value during report execution, Reporting Services passes the unique name for the selection to the query and all is well. However, there might be situations when the user selection doesn't come from the cube, so you must find a way to convert the parameter value into a value that will work with the query.

In a series of posts, I will explore the available options for working with dynamic MDX queries in Reporting Services. By dynamic MDX, I mean that the query can be different each time it executes. In this post, I cover the use of the StrToMember() and StrToSet() functions in parameters.

A very common scenario is the requirement to pass dates into a query. If you have a date filter for the report, do you really want users to navigate through a list of dates from the cube as shown below? This list of dates - even if it's arranged hierarchically by month, quarter, and year - is what you get when you build the parameter directly from the date hierarchy in the query designer as shown below. Wouldn't a more user-friendly experience allow the user to select a date from a calendar control? I can do this by changing the auto-generated report parameter's data type to a Date/Time data type and clearing the "Allow multiple values" check box. I must also change the Available Values setting for the parameter to None. I can set the default value to "No default value" to force the user to make a selection, or do something nice like define an expression to set a date, such as =Today().

So far, so good. But the problem now is that the date data type returned by the calendar control cannot be used by the MDX query without some intervention. I need to change the Parameter Value mapped to the query parameter in the Dataset Properties to an expression, like this:

="[Date].[Calendar].[Date].[" + Format(CDate(Parameters!DateCalendar.Value), "MMMM d, yyyy") + "]"

The expression that you use to convert a date like 2/1/2008 to a valid unique name in your Date dimension might look different. My example is specific to the Adventure Works 2008 R2 cube, which requires the date member to look like this: [Date].[Calendar].[Date].[February 1, 2008].

That's fine so far, but the result of this expression is a string and the MDX query requires a member or a set. Fortunately, the autogenerated query already makes this change for you.
However, if you're creating your query manually, you should understand what it's doing, especially if you need to make changes to it. The autogenerated query looks like this before I make changes:

SELECT NON EMPTY { [Measures].[Sales Amount] } ON COLUMNS,
NON EMPTY { ([Product].[Category].[Category].ALLMEMBERS ) }
DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM ( SELECT ( STRTOSET(@DateCalendar, CONSTRAINED) ) ON COLUMNS
FROM [Adventure Works])
WHERE ( IIF( STRTOSET(@DateCalendar, CONSTRAINED).Count = 1, STRTOSET(@DateCalendar, CONSTRAINED), [Date].[Calendar].currentmember ) )
CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

I prefer to simplify the query as shown below, removing the text highlighted in red above. The StrToSet() function does what it says - it changes the string (represented by the parameter @DateCalendar) into a set object. I remove the WHERE clause from the query because the FROM clause adequately restricts the query results to cell values related to the selected date. If I need the dimension properties in the report to display something, or if I need the cell properties for report formatting, I'll include only the ones I need, but for this example I have removed them all from the query.

SELECT NON EMPTY { [Measures].[Sales Amount] } ON COLUMNS,
NON EMPTY { ([Product].[Category].[Category].ALLMEMBERS ) } ON ROWS
FROM ( SELECT ( STRTOSET(@DateCalendar, CONSTRAINED) ) ON COLUMNS
FROM [Adventure Works])

You could change the StrToSet() function to a StrToMember() function and get the same result. It's not harmful to leave StrToSet() as it is; it just returns a set of one member in this case - the date from the calendar control - which is a valid set. The CONSTRAINED flag is used to prevent an injection attack and requires the expression to resolve to a valid member before the query executes.

One challenge that often confounds people working with MDX queries in the generic query designer is the inability to copy and paste the query into Management Studio for testing when parameters are in the query as shown above. The MDX query editor doesn't support parameters. Teo Lachev (blog | twitter) posted some advice for working with parameterized MDX queries in Management Studio which I encourage you to check out.

In my next post, I'll explain how to use the OLE DB for OLAP provider with dynamic MDX to create a dataset.
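As an aside, if you want to exercise the same parameterized query outside of Report Designer, a small ADOMD.NET client can execute it directly. The sketch below is illustrative only: the server name localhost, the catalog name, and the hard-coded date member are assumptions, and it simply prints the rows axis against the first column.

using System;
using Microsoft.AnalysisServices.AdomdClient;

class DynamicMdxSample
{
    static void Main()
    {
        // Server and catalog names are placeholders for your Analysis Services instance.
        using (var conn = new AdomdConnection("Data Source=localhost;Catalog=Adventure Works DW 2008R2"))
        {
            conn.Open();

            var cmd = conn.CreateCommand();
            cmd.CommandText =
                "SELECT NON EMPTY { [Measures].[Sales Amount] } ON COLUMNS, " +
                "NON EMPTY { ([Product].[Category].[Category].ALLMEMBERS) } ON ROWS " +
                "FROM ( SELECT ( STRTOSET(@DateCalendar, CONSTRAINED) ) ON COLUMNS " +
                "FROM [Adventure Works])";

            // The parameter value is the same unique-name string the SSRS expression builds.
            cmd.Parameters.Add(new AdomdParameter("DateCalendar",
                "[Date].[Calendar].[Date].[February 1, 2008]"));

            CellSet cs = cmd.ExecuteCellSet();
            foreach (var row in cs.Axes[1].Positions)
            {
                Console.WriteLine("{0}: {1}",
                    row.Members[0].Caption,
                    cs.Cells[0, row.Ordinal].FormattedValue);
            }
        }
    }
}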

24 Hours of PASS for Latin America!
One of the biggest pushes I was trying to make while on the PASS board was to help grow the organization into other areas of the world, and probably a little out of the organization's comfort zone. We in western English-speaking countries often totally underestimate the passion for SQL Server and the number of professionals that work with it around the world. In my time on the board, I was both amazed and privileged to see the types of events already occurring and the vast number of attendees. Often...(read more)

T-SQL joke
I was attending an ORM class for developers where I heard it said "Don't select star from a table". I've heard this many times and I'm sure you have, too. It makes for brittle code in case a column is added to the table. I started wondering if I could create an example that is an exception to the rule and came up with this: create table aTable ( star int ); select star from aTable ;...(read more)

Tools of the Trade: Part IV - Developing WinDbg Extension DLLs
A WinDbg extension DLL is a set of exported callback functions for implementing user-defined commands to extract specific, customized information from memory dump(s). Extension DLLs are loaded by the debugger engine and can automate tasks while performing user-mode or kernel-mode debugging. An extension DLL may export any number of functions that are used to execute extension commands. Each function is explicitly declared as an export in the DLL's module-definition (.def) file, and function names must be in lowercase letters.

A WinDbg (DbgEng) extension DLL must export DebugExtensionInitialize. This will be called when the DLL is loaded, to initialize the DLL; it may be used by the DLL to initialize global variables. An extension DLL may export an optional function DebugExtensionUninitialize. If this is exported, it will be called just before the extension DLL is unloaded. An extension DLL may export DebugExtensionNotify. If this is exported, it will be called when a session begins or ends, and when a target starts or stops executing. These notifications are also provided to IDebugEventCallbacks objects registered with a client. An extension DLL may export KnownStructOutput. If this is exported, it will be called when the DLL is loaded. This function returns a list of structures that the DLL knows how to print on a single line; it may be called later to format instances of these structures for printing.

So, how do you develop your own WinDbg extension DLL? Let's follow these steps:

1. Download and install Debugging Tools for Windows from http://www.microsoft.com/whdc/devtools/debugging/installx86.Mspx
2. Create a "Win32 Console Application" project using VS 2008.
3. Select the application type "DLL" and click "Finish".
4. Add a "Module-Definition File (.def)" called "wdbrowser" to the project. One way to export your extension function is by specifying the function names in the EXPORTS section of the .def file. You may also use other ways of exporting functions, such as __declspec(dllexport).
5. Configure the project's "Additional Include Directories" to point to the header files that come with WinDbg. The default folder for x86 is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\inc".
6. Configure the project's "Additional Library Directories" to point to the library files that come with WinDbg. The default folder for x86 libraries is "C:\Program Files\Debugging Tools for Windows (x86)\sdk\lib\i386".
7. The library "dbgeng.lib" (with "dbgeng.dll") contains the implementation of the debug engine's exported functions, so add "dbgeng.lib" to "Additional Dependencies".
8. Add the name of the module-definition file created in step 4 above.
9. Now include the following required header files in "stdafx.h":

#include <windows.h>
#include <imagehlp.h>
#include <wdbgexts.h>
#include <dbgeng.h>
#include <extsfns.h>

10. Declare the following two global variables in your extension project's main implementation file.

//Version.
EXT_API_VERSION g_ExtApiVersion = {1,1,EXT_API_VERSION_NUMBER,0} ;
WINDBG_EXTENSION_APIS ExtensionApis = {0};

11. Declare the following debug engine COM interface pointers.

IDebugAdvanced2*  gAdvancedDebug2=NULL;
IDebugControl4*   gDebugControl4=NULL;
IDebugControl*    gExecuteCmd=NULL;
IDebugClient*     gDebugClient=NULL;

12. The next step is to declare and implement the WinDbgExtensionDllInit function in your DLL's main implementation source file; in this example that is "wdbrowser.cpp". WinDbgExtensionDllInit is the first function that will be called by WinDbg.
So, this function is the idle for implementing any extension specific initialization or related functionality. Please refer http://msdn.microsoft.com/en-us/library/cc267872.aspx for more details about this function. VOID WDBGAPI WinDbgExtensionDllInit (PWINDBG_EXTENSION_APIS lpExtensionApis, USHORT usMajorVersion, USHORT usMinorVersion) {                   ExtensionApis = *lpExtensionApis;                   HRESULT hResult = S_FALSE;                         if (hResult = DebugCreate(__uuidof(IDebugClient), (void**) &gDebugClient) != S_OK)                          {                                                 dprintf("Acuqiring IDebugClient* Failled\n\n");                                                 return;                          }                          if (hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl), (void**) &gExecuteCmd) != S_OK)                          {                                         dprintf("Acuqiring IDebugControl* Failled\n\n");                                                 return;                          }                          if (hResult = gDebugClient->QueryInterface(__uuidof(IDebugAdvanced2), (void**) &gAdvancedDebug2) != S_OK)                          {                                               dprintf("Acuqiring IDebugAdvanced2* Failled\n\n");                                                 return;                          }                          if (hResult = gDebugClient->QueryInterface(__uuidof(IDebugControl4), (void**) &gDebugControl4) != S_OK)                          {                             dprintf("Acuqiring IDebugControl4* Failled\n\n");                                                 return;                          } } 13. Declare another exported function ExtensionApiVersion to report the version of your extension to windbg. Please refer to http://msdn.microsoft.com/en-us/library/cc267873.aspx for detailed information about this function. LPEXT_API_VERSION WDBGAPI ExtensionApiVersion (void) {     return &g_ExtApiVersion; } 14. Define Debug engine's interface pointers, so that your extension module can interact with debug engine. For more information please refer http://msdn.microsoft.com/en-us/library/cc265976.aspx  - IDebugClient, http://msdn.microsoft.com/en-us/library/cc266102.aspx - IDebugControl http://msdn.microsoft.com/en-us/library/cc265957.aspx - IDebugAdvanced IDebugAdvanced2* gAdvancedDebug2=NULL; IDebugControl4* gDebugControl4=NULL; IDebugControl* gExecuteCmd=NULL; IDebugClient*               gDebugClient=NULL; 15. Next step is to - implement debug engine's callback interface IDebugOutputCallbacks . Debug engine callbacks your implementation of IDebugOutCallbacks::Output() with output as a result of the commands that are executed by your extension function. Refer to http://msdn.microsoft.com/en-us/library/cc265716.aspx   for detailed information about IDebugOutputCallbacks::Output() 16. Add the following new class in a header file that inherits the IDebugOutputCallbacks interface . #ifndef __OUT_HPP__ #define __OUT_HPP__ #include <string> #include <sstream> class StdioOutputCallbacks : public IDebugOutputCallbacks { private:                         std::string m_OutputBuffer;                         //                         //This buffer holds the output from the command execution.                         
//                         CHAR m_OutPutBuffer[4096]; public:                         void InitOutPutBuffer();                         std::string GetOutputBuffer()                         {                                                 return m_OutputBuffer;                         };                         void ClearOutPutBuffer()                                       {                                                 m_OutputBuffer = "";                         };     STDMETHOD(QueryInterface)(         THIS_         IN REFIID InterfaceId,         OUT PVOID* Interface         );     STDMETHOD_(ULONG, AddRef)(         THIS         );     STDMETHOD_(ULONG, Release)(         THIS         );     // IDebugOutputCallbacks.     STDMETHOD(Output)(         THIS_         IN ULONG Mask,         IN PCSTR Text         ); }; extern StdioOutputCallbacks g_OutputCb; #endif // #ifndef __OUT_HPP__ 17. Add the following code that implements the IDebugOutputCallbacks interface methods, especially Output() #include "stdafx.h" #include <stdio.h> #include <windows.h> #include <dbgeng.h> #include "OutputCallBack.h" StdioOutputCallbacks g_OutputCb; STDMETHODIMP StdioOutputCallbacks::QueryInterface(     THIS_     IN REFIID InterfaceId,     OUT PVOID* Interface     ) {     *Interface = NULL;     if (IsEqualIID(InterfaceId, __uuidof(IUnknown)) ||         IsEqualIID(InterfaceId, __uuidof(IDebugOutputCallbacks)))     {         *Interface = (IDebugOutputCallbacks *)this;         AddRef();         return S_OK;     }     else     {         return E_NOINTERFACE;     } } STDMETHODIMP_(ULONG) StdioOutputCallbacks::AddRef(     THIS     ) {     // This class is designed to be static so     // there's no true refcount.     return 1; } STDMETHODIMP_(ULONG) StdioOutputCallbacks::Release(     THIS     ) {     // This class is designed to be static so     // there's no true refcount.     return 0; } STDMETHODIMP StdioOutputCallbacks::Output(     THIS_     IN ULONG Mask,     IN PCSTR Text     ) {     UNREFERENCED_PARAMETER(Mask);                         m_OutputBuffer += Text;     return S_OK; } void StdioOutputCallbacks::InitOutPutBuffer() {                         m_OutputBuffer.erase(); } 18. Add implementation of your extension function. In this example, we choose to implement an extension that displays the variable names, types in the frame 2 of the current thread. The implementation is: DECLARE_API (dvf3) { // // Install output callbacks. // if ((gDebugClient->SetOutputCallbacks((PDEBUG_OUTPUT_CALLBACKS) &g_OutputCb))!= S_OK) { dprintf("*****Error while installing Outputcallback.*****\n\n"); return; } // // Since frame count starts from 0 index, we have to pass 2 as parameter for .frame command for the frame# 2 // //Execute command to extrac 2nd frame. if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT | //Send output to only outputcallbacks DEBUG_OUTCTL_OVERRIDE_MASK | DEBUG_OUTCTL_NOT_LOGGED, ".frame 2", DEBUG_EXECUTE_DEFAULT ) != S_OK) { dprintf("Executing .frame 2 failled\n"); return; } //Execute command to extrac 2nd frame. if (gExecuteCmd->Execute(DEBUG_OUTCTL_THIS_CLIENT | //Send output to only outputcallbacks DEBUG_OUTCTL_OVERRIDE_MASK | DEBUG_OUTCTL_NOT_LOGGED, "dv /i /t /v", DEBUG_EXECUTE_DEFAULT ) != S_OK) { dprintf("Executing dv /i /t /v failled\n"); return; } dprintf("***** Extracting locals & formal params from frame 2 *****"); dprintf("\n%s\n", g_OutputCb.GetOutputBuffer().c_str()); } 19. Re-build the project. Copy the .DLL from release folder to a folder where Windbg looks for extension DLLs. 
On an x86 machine the default location is "<Drive letter>\Program Files\Debugging Tools for Windows (x86)\winext".
20. The extension is now ready for use and testing.
21. Start WinDbg and open a full user-mode dump. Type .load myextension and press Enter to load the extension DLL into the WinDbg process space.
22. Run the .chain command to verify that your extension is loaded by WinDbg. You will see output similar to the output below if your extension is loaded.
23. Type !dvf3 to run the extension function that extracts and displays the variable names and types from frame 2.

Additional references:
http://msdn.microsoft.com/en-us/library/cc265826.aspx - describes how to interact with the debug engine: I/O operations with the debug engine, memory access, using symbols, and source files.
http://www.codeplex.com/ODbgExt - Microsoft Open Debugger Extension for WinDbg

Happy developing debug engine extensions!

Posted By: Srini Gajjela & Enamul Khaleque (DSD-SQLDeveloper group @ Microsoft)

Step by step N-tier configuration of Sync services for ADO.NET 2.0
Recently I have worked on couple of cases where customers were trying to use N-tier configuration of Sync services for ADO.NET on IIS. In this blog we will use IIS and setup N-tier configuration of sync service for ADO.NET version 2.0 which comes as part of Microsoft Sync Framework 1.0.  Preparing Environment: We need a development machine that we will use to develop the application, a middle tier server where IIS is installed and configured for WCF services and database server. If you like, you can use the same development box for IIS and database server too. a) Client tier We will develop the application (client and WCF service) and run the client part of the application in this machine. We will use Visual Studio 2008, SP1, so it should be installed on this machine. Install sync services on this client machine. Sync Services for ADO.NET version 2.0 comes with Microsoft Sync Framework 1.0 which gets installed if you install SQL Server 2008 or Visual studio 2008, SP1. You can also download it from http://www.microsoft.com/downloads/details.aspx?FamilyId=C88BA2D1-CEF3-4149-B301-9B056E7FB1E6&displaylang=en Install SQL Server Compact 3.5 Service Pack 1 on this client machine if it is already not there. SQL Server Compact is available in three ways: - Integrated with SQL Server 2008 or later versions, Integrated with Microsoft Visual Studio 2008 or later versions and as a download from the Web site at: http://www.microsoft.com/sqlserver/2005/en/us/compact-downloads.aspx b) Middle tier If you want to use a separated middle tier server for IIS to run WCF service then make sure you have IIS installed and configured on this box to run WCF. Install sync services on this machine too. I have used the same development machine for middle tier so I did not have to install it again. c) Database Server Install a version of SQL Server other than SQL Server Compact to act as the server database. If you like to you can use SQL Server Express that installs with Visual Studio. I have used developer edition of SQL 2008, SP1. We are going to use a sample database that should be prepared by following the article on “Setup Scripts for Sync Services How-to Topics” at: http://msdn.microsoft.com/en-us/library/bb726041.aspx. Copy the T-SQL statements of “Custom Change Tracking for Offline Scenarios” that creates a custom change tracking infrastructure. Once you run this script successfully from query analyzer of your SQL Server it will create the new database by the name SyncSamplesDb. I have the database created as shown below:   Developing and deploying WCF service: Create a WCF Services Application in Visual Studio 2008: (You may need to run Visual Studio as Administrator to create virtual directories in IIS) Type “WcfForSync” as the name of the project and  click on “OK” button. It will create the WCF project with IService1.cs, Service1.svc, Service1.svc.cs and Web.config files with other files and folders. I will keep these default files for simplicity. 
Open the IService1.cs file by double clicking it and replace the code with the code below: (Note that the code samples in the following sections have been taken from the MSDN articles mentioned in the reference section at the bottom with little modification) using System.Collections.ObjectModel; using System.ServiceModel; using System.Data; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.Server; namespace WcfForSync {     [ServiceContract]     public interface IService1     {         [OperationContract()]         SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession);         [OperationContract()]         SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession);         [OperationContract()]         SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession);         [OperationContract()]         SyncServerInfo GetServerInfo(SyncSession syncSession);     } } Next add a class file by the name SampleServerSyncProvider.cs as below: Replace the code in this file with the code below: using System.Data; using System.Data.SqlClient; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.Server; namespace WcfForSync {     //Create a class that is derived from     //Microsoft.Synchronization.Server.DbServerSyncProvider.     public class SampleServerSyncProvider : DbServerSyncProvider     {         public SampleServerSyncProvider()         {             //Create a connection to the sample server database.             Utility util = new Utility();             SqlConnection serverConn = new SqlConnection(util.ServerConnString);             this.Connection = serverConn;             //Create a command to retrieve a new anchor value from             //the server. In this case, we use a timestamp value             //that is retrieved and stored in the client database.             //During each synchronization, the new anchor value and             //the last anchor value from the previous synchronization             //are used: the set of changes between these upper and             //lower bounds is synchronized.             //             //SyncSession.SyncNewReceivedAnchor is a string constant;             //you could also use @sync_new_received_anchor directly in             //your queries.             SqlCommand selectNewAnchorCommand = new SqlCommand();             string newAnchorVariable = "@" + SyncSession.SyncNewReceivedAnchor;             selectNewAnchorCommand.CommandText = "SELECT " + newAnchorVariable + " = min_active_rowversion() - 1";             selectNewAnchorCommand.Parameters.Add(newAnchorVariable, SqlDbType.Timestamp);             selectNewAnchorCommand.Parameters[newAnchorVariable].Direction = ParameterDirection.Output;             selectNewAnchorCommand.Connection = serverConn;             this.SelectNewAnchorCommand = selectNewAnchorCommand;             //Create a SyncAdapter for the Customer table by using             //the SqlSyncAdapterBuilder:             //  * Specify the base table and tombstone table names.             //  * Specify the columns that are used to track when             //    changes are made.             //  * Specify download-only synchronization.             //  * Call ToSyncAdapter to create the SyncAdapter.             //  * Specify a name for the SyncAdapter that matches the             //    the name specified for the corresponding SyncTable.             
//    Do not include the schema names (Sales in this case).             SqlSyncAdapterBuilder customerBuilder = new SqlSyncAdapterBuilder(serverConn);             customerBuilder.TableName = "Sales.Customer";             customerBuilder.TombstoneTableName = customerBuilder.TableName + "_Tombstone";             customerBuilder.SyncDirection = SyncDirection.DownloadOnly;             customerBuilder.CreationTrackingColumn = "InsertTimestamp";             customerBuilder.UpdateTrackingColumn = "UpdateTimestamp";             customerBuilder.DeletionTrackingColumn = "DeleteTimestamp";             SyncAdapter customerSyncAdapter = customerBuilder.ToSyncAdapter();             customerSyncAdapter.TableName = "Customer";             this.SyncAdapters.Add(customerSyncAdapter);         }     }     public class Utility     {         //Return the server connection string.         public string ServerConnString         {            get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; }         }     } } Note: You need to update the connection string in the above Utility class to connect to your SQL Server. Open Service1.svc.cs file in the project by double clicking on it: and replace the existing code with the code below: using System.Collections.ObjectModel; using System.ServiceModel; using System.Data; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.Server; namespace WcfForSync {     // NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config.     public class Service1 : IService1     {          private SampleServerSyncProvider _serverSyncProvider;          public Service1()         {             this._serverSyncProvider = new SampleServerSyncProvider();         }         [System.Diagnostics.DebuggerNonUserCodeAttribute()]         public virtual SyncContext ApplyChanges(SyncGroupMetadata groupMetadata, DataSet dataSet, SyncSession syncSession)         {             return this._serverSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession);         }         [System.Diagnostics.DebuggerNonUserCodeAttribute()]         public virtual SyncContext GetChanges(SyncGroupMetadata groupMetadata, SyncSession syncSession)         {             return this._serverSyncProvider.GetChanges(groupMetadata, syncSession);         }         [System.Diagnostics.DebuggerNonUserCodeAttribute()]         public virtual SyncSchema GetSchema(Collection<string> tableNames, SyncSession syncSession)         {             return this._serverSyncProvider.GetSchema(tableNames, syncSession);         }         [System.Diagnostics.DebuggerNonUserCodeAttribute()]         public virtual SyncServerInfo GetServerInfo(SyncSession syncSession)         {             return this._serverSyncProvider.GetServerInfo(syncSession);         }     } } The application requires references to Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll and Microsoft.Synchronization.Data.Server.dll. Right click on “References” from the project and click on Add Reference… Select Microsoft.Synchronization.dll (Location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\x86) And next add the references to the other two dlls (Location on my machine: C:\Program Files (x86)\Microsoft Sync Framework\v1.0\Runtime\ADO.NET\V2.0\x86) Now you should be able to build the project successfully. Once building is successful publish the WCF to IIS. 
Go to the properties of the project and under Web* tab type IIS server information (middletierserver) , click on the “Create Virtual Directory” as below: From IIS you can see a Virtual directory has been created: If you browse the WCF (http://middletierserver/WcfForSync/Service1.svc) you should get the following page. I have used the same development machine for IIS, so it is showing “localhost” in the URL. Note: When tried to browse WCF I have noticed following error with some IIS machines: HTTP Error 404.3 – Not Found The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map. Detailed Error InformationModule StaticFileModule. If you encounter this error please take necessary action as per article at: http://blogs.msdn.com/rjohri/archive/2009/06/29/the-page-you-are-requesting-cannot-be-served-because-of-the-extension-configuration.aspx Developing client application and consuming WCF service: Create a Console application in Visual Studio: Once the project is created, reference the dlls  Microsoft.Synchronization.dll, Microsoft.Synchronization.Data.dll, and Microsoft.Synchronization.Data.SqlServerCe.dll as before. You also need to reference System.Data.SqlServerCe.dll (you should have this dll on your machine once you installed SQL Server Compact 3.5 Service Pack 1, location of this dll on my machine: C:\Program Files (x86)\Microsoft SQL Server Compact Edition\v3.5\Desktop as shown below) Now we need to do service reference to our WCF that we developed and deployed before. Right click on the Reference in this client project and select Add Service Reference… In the Add Service Reference screen you type the URL of our WCF and click on Go button as shown below: Click on “OK” button, it will create service reference as below: Replace the code in the file Program.cs with the following code: using System; using System.IO; using System.Data; using System.Data.SqlClient; using System.Data.SqlServerCe; using Microsoft.Synchronization; using Microsoft.Synchronization.Data; using Microsoft.Synchronization.Data.SqlServerCe; namespace ClientForSync {     class Program     {         static void Main(string[] args)         {             //The Utility class handles all functionality that is not             //directly related to synchronization, such as holding connection             //string information and making changes to the server database.             Utility util = new Utility();             //The SampleStats class handles information from the SyncStatistics             //object that the Synchronize method returns.             SampleStats sampleStats = new SampleStats();             //Delete and re-create the database. The client synchronization             //provider also enables you to create the client database             //if it does not exist.             ////util.SetClientPassword();             util.RecreateClientDatabase();             //Initial synchronization. Instantiate the SyncAgent             //and call Synchronize.             SampleSyncAgent sampleSyncAgent = new SampleSyncAgent();             SyncStatistics syncStatistics = sampleSyncAgent.Synchronize();             sampleStats.DisplayStats(syncStatistics, "initial");             //Make changes on the server.             util.MakeDataChangesOnServer();             //Subsequent synchronization.             
syncStatistics = sampleSyncAgent.Synchronize();             sampleStats.DisplayStats(syncStatistics, "subsequent");             //Return server data back to its original state.             util.CleanUpServer();             //Exit.             Console.Write("\nPress Enter to close the window.");             Console.ReadLine();         }     }     //Create a class that is derived from     //Microsoft.Synchronization.SyncAgent.     public class SampleSyncAgent : SyncAgent     {         public SampleSyncAgent()         {             //Instantiate a client synchronization provider and specify it             //as the local provider for this synchronization agent.             this.LocalProvider = new SampleClientSyncProvider();             //The remote provider now references a proxy instead of directly referencing the server provider. The proxy is created by passing a reference to a WCF service             ServiceReference1.Service1Client serviceProxy = new ServiceReference1.Service1Client();             this.RemoteProvider = new ServerSyncProviderProxy(serviceProxy);             //Add the Customer table: specify a synchronization direction of             //DownloadOnly.             SyncTable customerSyncTable = new SyncTable("Customer");             customerSyncTable.CreationOption = TableCreationOption.DropExistingOrCreateNewTable;             customerSyncTable.SyncDirection = SyncDirection.DownloadOnly;             this.Configuration.SyncTables.Add(customerSyncTable);         }     }     //Create a class that is derived from     //Microsoft.Synchronization.Data.SqlServerCe.SqlCeClientSyncProvider.     //You can just instantiate the provider directly and associate it     //with the SyncAgent, but you could use this class to handle client     //provider events and other client-side processing.     public class SampleClientSyncProvider : SqlCeClientSyncProvider     {         public SampleClientSyncProvider()         {             //Specify a connection string for the sample client database.             Utility util = new Utility();             this.ConnectionString = util.ClientConnString;         }     }     //Handle the statistics that are returned by the SyncAgent.     public class SampleStats     {         public void DisplayStats(SyncStatistics syncStatistics, string syncType)         {             Console.WriteLine(String.Empty);             if (syncType == "initial")             {                 Console.WriteLine("****** Initial Synchronization ******");             }             else if (syncType == "subsequent")             {                 Console.WriteLine("***** Subsequent Synchronization ****");             }             Console.WriteLine("Start Time: " + syncStatistics.SyncStartTime);             Console.WriteLine("Total Changes Downloaded: " + syncStatistics.TotalChangesDownloaded);             Console.WriteLine("Complete Time: " + syncStatistics.SyncCompleteTime);             Console.WriteLine(String.Empty);         }     }     public class Utility     {        //Return the client connection string with the password. Don’t forget to create folder         public string ClientConnString         {           get { return @"Data Source='D:\\SyncServices\\SyncSampleClient.sdf'; Password=xxxxxxx"; }         }         //Return the server connection string.         
public string ServerConnString         {             get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxx; Password=xxxxxx"; }         }         //Make server changes that are synchronized on the second         //synchronization.         public void MakeDataChangesOnServer()         {             int rowCount = 0;             using (SqlConnection serverConn = new SqlConnection(this.ServerConnString))             {                 SqlCommand sqlCommand = serverConn.CreateCommand();                 sqlCommand.CommandText =                     "INSERT INTO Sales.Customer (CustomerName, SalesPerson, CustomerType) " +                     "VALUES ('Cycle Mart', 'James Bailey', 'Retail') " +                     "UPDATE Sales.Customer " +                     "SET  SalesPerson = 'James Bailey' " +                     "WHERE CustomerName = 'Tandem Bicycle Store' " +                     "DELETE FROM Sales.Customer WHERE CustomerName = 'Sharp Bikes'";                 serverConn.Open();                 rowCount = sqlCommand.ExecuteNonQuery();                 serverConn.Close();             }             Console.WriteLine("Rows inserted, updated, or deleted at the server: " + rowCount);         }         //Revert changes that were made during synchronization.         public void CleanUpServer()         {             using (SqlConnection serverConn = new SqlConnection(this.ServerConnString))             {                 SqlCommand sqlCommand = serverConn.CreateCommand();                 sqlCommand.CommandType = CommandType.StoredProcedure;                 sqlCommand.CommandText = "usp_InsertSampleData";                 serverConn.Open();                 sqlCommand.ExecuteNonQuery();                 serverConn.Close();             }         }         //Delete the client database.         public void RecreateClientDatabase()         {             using (SqlCeConnection clientConn = new SqlCeConnection(this.ClientConnString))             {                 if (File.Exists(clientConn.Database))                 {                     File.Delete(clientConn.Database);                 }             }             SqlCeEngine sqlCeEngine = new SqlCeEngine(this.ClientConnString);             sqlCeEngine.CreateDatabase();         }     } } Now we have to update the connection string information in the code. In the Utility class in the below methods you should update the connection string for SQL server and client database (SQL CE database) //Return the client connection string with the password. Don’t forget to create folder         public string ClientConnString         {           get { return @"Data Source='D:\\SyncServices\\SyncSampleClient.sdf'; Password=xxxxxxx"; }         }         //Return the server connection string.         public string ServerConnString         {             get { return @"Data Source= SQLServer\instance; Initial Catalog=SyncSamplesDb; User Id=xxxxxxx; Password=xxxxxxx"; }         } Build this client project, at this point the project should compile successfully. Running/testing synchronization: Run the Console application and you should see the result of the synchronization of this sample as below: You can run query on the Customer table in SyncSamplesDb to check the changes by setting the break point in the client app and also by opening a SQL profiler you can see the activities going on in SQL Server. Happy Synchronization! 
References: Overview (Synchronization Services) : http://msdn.microsoft.com/en-us/library/bb726031(SQL.90).aspx Architecture and Classes (Synchronization Services) : http://msdn.microsoft.com/en-us/library/bb726025(SQL.90).aspx Getting Started: A Synchronization Services Application: http://msdn.microsoft.com/en-us/library/bb726015.aspx How to: Configure N-Tier Synchronization at: http://msdn.microsoft.com/en-us/library/bb902831.aspx A large scale implementation of Sync Service example at: http://blogs.msdn.com/sync/archive/2009/10/19/sharepoint-2010-now-integrates-microsoft-sync-framework.aspx Microsoft Sync Framework 2.0 redistributable Package - http://www.microsoft.com/downloads/details.aspx?FamilyId=109DB36E-CDD0-4514-9FB5-B77D9CEA37F6&displaylang=en   Author : Faiz(MSFT), SQL Developer Engineer Reviewed by : Enamul(MSFT), SQL Developer Technical Lead; Azim(MSFT), SQL Developer Technical Lead;Srini(MSFT), SQL Developer Engineer

WHITEPAPER - Online Indexing Operations in SQL Server 2005
I wanted to post a quick blog about a new whitepaper from Microsoft. Introduced in Microsoft SQL Server 2005 Enterprise Edition, the online index feature provides a powerful way to perform maintenance operations such as rebuilding or creating indexes in a production system without sacrificing DML concurrency. Users are not blocked from querying and updating the underlying table during the index operation. This white paper is targeted to database administrators who want to use the online index feature of SQL Server 2005 Enterprise Edition to increase server availability in their production environment and need to understand the advantages and disadvantages of using this feature. You can read about it here. Posted By:       Sanchan Saxena    

After Installation of .NET Framework 3.5 SP1 (or .Net framework 2.0 SP2) , SSIS packages using ODBC 3rd party drivers may fail
The Issue: In the last few months, we have seen quite a few cases where, after installing .NET Framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), SSIS packages that use certain third-party ODBC drivers to import character data fail with the error below.

Error Message:
[DataReader Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "DataReader Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

The error messages may vary depending on whether you are using the DataReader Source in Visual Studio 2005 or the ADO .Net Source in Visual Studio 2008:

[ADO NET Source [<number>]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "component "ADO NET Source" (<number>)" failed because error code 0x80131937 occurred, and the error row disposition on "output column "ColumnName" (<number>)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
Error 0xc0209029: Data Flow Task: The "component "<ComponentName>" (<number>)" failed because error code 0x80131937 occurred...
[ADO NET Source [<number>] Error: The component "ADO NET Source" (<number>) was unable to process the data. Pipeline component has returned HRESULT error code 0xC0209029 from a method call.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "ADO NET Source" (<number>) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

NOTE: Not all ODBC drivers are affected by this problem. It primarily affects SSIS packages that import character data through certain third-party ODBC drivers, using the ADO.NET connection manager and the System.Data.Odbc managed provider.

In Visual Studio 2005: SSIS DataReader Source --> ADO.NET connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> vendor ODBC driver --> character data
In Visual Studio 2008: ADO .Net Source --> ADO.NET connection manager --> .Net Providers \ ODBC Data Provider (System.Data.Odbc) --> vendor ODBC driver --> character data

Cause: The problem started with .NET Framework 2.0 SP2, after a change was made in the way OdbcDataReader checks for NULLs in sequential access mode: it now calls SQLGetData with a BufferLength argument of 0 for variable-width data types, and certain ODBC drivers are not compatible with this pattern. For more information on this function and its arguments, see SQLGetData Function: http://msdn.microsoft.com/en-us/library/ms715441.aspx. Note that .NET Framework 2.0 SP2 is also included in the .NET Framework 3.5 SP1 components.

Microsoft is still investigating the issue and understands the difficult impact that this change has on customers. Determining the full impact of the issue (for example, all the data types that can cause the issue) is still a work in progress. You can track the progress from the Connect site below.
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=416455

In the meantime, we suggest that you use one of the workarounds mentioned below.

Workaround #1 (recommended): Modify the SSIS package to redirect the DataReader (or ADO .Net Source in Visual Studio 2008) Error Output to a text file, thereby allowing the package to succeed even though no rows are written out to the error output file.

A. Go to the Data Flow Task.
B. On the DataReader Source Properties -> 'Input and Output Properties' tab -> DataReader Output -> Output Columns.
C. Select the columns (only those with a character data type) and change the 'ErrorRowDisposition' and 'TruncationRowDisposition' properties to RD_RedirectRow.
D. Redirect the DataReader Error Output to a text file.
E. Repeat A-D for all Data Flow Tasks that fail with the above error.

Workaround #2: If you need a quick fix (because this is a production environment and you cannot afford to modify SSIS packages at this time), you may need to uninstall the components of .NET Framework 3.5 SP1 or .NET Framework 2.0 SP2, depending on which framework version you installed that caused the issue.

a. If your issue is caused by installing only .NET Framework 2.0 SP2 (for example, from the Microsoft download site http://www.microsoft.com/downloads/details.aspx?familyid=5B2C0358-915B-4EB5-9B1D-10E506DA9D0F&displaylang=en), then you can uninstall .NET Framework 2.0 SP2.
b. If your issue is caused by installing .NET Framework 3.5 SP1 (KB http://support.microsoft.com/kb/951847), remove the .NET Framework 3.5 SP1 components and bring the system back to the state it was in before the update. This process involves more than simply uninstalling .NET Framework 3.5 SP1, as discussed in the following blogs:
http://blogs.msdn.com/dougste/archive/2008/08/19/uninstalling-net-framework-3-5-sp1.aspx
http://blogs.msdn.com/astebner/archive/2008/08/01/8803442.aspx

Posted By: Azim Uddin & Enamul Khaleque

PART II - Data security enhancements in SQL Server 2005
In the previous edition (Data security enhancements in SQL Server 2005) , we talked about data encryption/decryption in general.   In this installment, let’s talk about key maintenance. It is important to ensure that we can decrypt our data, even in case of user mistake or corruption.   For this, we need to protect our keys in our database. We have couple of options to do this.   Master key and certificate You can use the following steps to backup Master key and Certificates   ·         Backing Master key             BACKUP MASTER KEY TO FILE = 'path_to_file'       ENCRYPTION BY PASSWORD = 'password'         RESTORE MASTER KEY FROM FILE = 'path_to_file'       DECRYPTION BY PASSWORD = 'password'       ENCRYPTION BY PASSWORD = 'password'       [ FORCE ]   More information on the syntax is available here.   Backing Certificate        BACKUP CERTIFICATE certname TO FILE = 'path_to_file'       [ WITH PRIVATE KEY       (         FILE = 'path_to_private_key_file' ,         ENCRYPTION BY PASSWORD = 'encryption_password'         [ , DECRYPTION BY PASSWORD = 'decryption_password' ]       )   ]   To restore a backed up certificate, use the CREATE CERTIFICATE statement. More information is available here.   Symmetric key In order to maintain symmetric key properly and have the ability to restore it, you will have to use the same values for the following during key creation:   ·         algorithm, ·         key_source, ·         identity_value   Key can be protected by password or certificate, but values above should exactly be the same. You may also use different name of symmetric key. Let’s take an example to better understand this. SELECT * INTO Employee_symm FROM AdventureWorks.HumanResources.Employee GO   ALTER TABLE Employee_symm     ADD EncryptedNationalIDNumber varbinary(128); GO     --- Lets create symmetric key with protection by password first CREATE SYMMETRIC KEY symm_key_combo WITH       ALGORITHM = triple_des,       IDENTITY_VALUE = 'Example of encryption',       KEY_SOURCE = 'Put here some unique and long enough phrase.' ENCRYPTION BY PASSWORD = 'Avc#ptNO$cf@o!'; GO     --- Encrypt data OPEN SYMMETRIC KEY symm_key_combo DECRYPTION BY PASSWORD = 'Avc#ptNO$cf@o!' GO   UPDATE Employee_symm SET EncryptedNationalIDNumber = EncryptByKey(Key_GUID('symm_key_combo'), NationalIDNumber); GO     --- Now let’s drop symmetric key DROP SYMMETRIC KEY symm_key_combo GO     --- Re-create symmetric key with protection by certificate. --- Also let’s use different name for symmetric key CREATE SYMMETRIC KEY symm_key_combo_new WITH       -- Values here should be exactly the same       ALGORITHM = triple_des,       IDENTITY_VALUE = 'Example of encryption',       KEY_SOURCE = 'Put here some unique and long enough phrase.'   --- Protection can be done by certificate now though ENCRYPTION BY CERTIFICATE HumanResources037;     --- Now let’s decrypt here using new symmetric key OPEN SYMMETRIC KEY symm_key_combo_new DECRYPTION BY CERTIFICATE HumanResources037;   SELECT NationalIDNumber, EncryptedNationalIDNumber      AS 'Encrypted ID Number',     CONVERT(nvarchar, DecryptByKey(EncryptedNationalIDNumber))     AS 'Decrypted ID Number' FROM Employee_symm; CLOSE SYMMETRIC KEY symm_key_combo_new DROP SYMMETRIC KEY symm_key_combo_new DROP TABLE Employee_symm     So in order to be able to restore symmetric key following 3 values need to be set         ALGORITHM = triple_des,       IDENTITY_VALUE = 'Example of encryption',       KEY_SOURCE = 'Put here some unique and long enough phrase.'   
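A related point on key maintenance: in the example above the symmetric key ends up protected by the certificate HumanResources037, so that certificate (and its private key) should be backed up too, otherwise the re-created key cannot be opened after a disaster. A minimal sketch, with placeholder file paths and password:

      BACKUP CERTIFICATE HumanResources037
         TO FILE = 'path_to_certificate_file'
         WITH PRIVATE KEY
         (
             FILE = 'path_to_private_key_file',
             ENCRYPTION BY PASSWORD = 'StrongPasswordHere!'
         );
      GO

Store these files away from the database backups; together with the ALGORITHM, IDENTITY_VALUE and KEY_SOURCE values they are what make the encrypted data recoverable.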
Use the same values to re-create the symmetric key in the future.   Hope this helps! Posted By: Sergey Pustovit

Just a Haircut?
This weekend was a little hurried and harried. I immensely enjoyed presenting at SQL Saturday #48 ! I got to meet one of my co-workers face-to-face for the first time and any time I get to spend some time with K. Brian Kelley ( Blog | @kbriankelley ) is always a treat. In the rush, I didn't have time for a haircut before I left for Waltham to deliver From Zero To SSIS in cooperation with Boston SQL Training - the first ever SSIS course written cover-to-cover by me! I went shopping earlier and while...(read more)

Working with the SQL Server Powershell Provider and Central Management Servers from Powershell
I am a big fan of the Central Management Server setup that exists in SQL Server 2008, as well as how you can leverage this for the Enterprise Policy Management Framework that Laura Rubbelke ( Blog | Twitter ) created on Codeplex.  Today one of my DBA’s asked a question about how to leverage the list of servers in the CMS through PowerShell and I had to double back on my own code because in PowerGUI the SQL Server Powershell provider is not natively registered so I had to write some additional...(read more)

Plan guides (plan freezing) in SQL Server 2005/2008
  SQL Server 2005   The plan guides were first introduced in SQL Server 2005.You may find detailed information about the plan guides in SQL Server 2005 here:   General information http://msdn.microsoft.com/en-us/library/ms187032(SQL.90).aspx    Description of sp_create_plan_guide http://msdn.microsoft.com/en-us/library/ms179880(SQL.90).aspx   This feature can be useful when you are trying to enforce the optimizer using a specific execution plan without changing the original query. In SQL Server 2005, however,  you have to specify the statement exactly as it was submitted to the SQL Server. Sometimes it can be difficult since you have to capture a SQL Profiler trace and copy the SQL statement from there. Also there are certain rules that have to be applied. For example, the SQL parser does ignore spaces and escape characters inside the query text, but does not allow it at the end, so it may add complexity to capturing the actual statement.   In SQL Server 2005 query statistics will appear in DMVs including the T-SQL statement. In order to make it easier you may create a plan guide based on query statistics DMVs directly.   Here is the script that demonstrates this.   NOTE: Before you execute the examples below please make sure you have the  AdventureWorks database installed and the compatibility level for it  set to 90. If you don’t have the AdventureWorks database you can download it from: http://www.codeplex.com/SqlServerSamples       use AdventureWorks go   --- Cleaning cache for this sample dbcc freeproccache go   --- Running query first time to get plan generated for freezing set statistics xml on exec sp_executesql   N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off go   --- Based on query pattern creating a plan guide - freezing plan declare @sql_text nvarchar(max),         @sql_xml_plan nvarchar(max)   select @sql_text=     substring(sqt.text, (qs.statement_start_offset/2)+1,((CASE     qs.statement_end_offset WHEN -1 THEN DATALENGTH(sqt.text) ELSE       qs.statement_end_offset END - qs.statement_start_offset)/2) + 1), @sql_xml_plan =          convert(nvarchar(max),sqp.query_plan) from sys.dm_exec_query_stats qs        cross apply sys.dm_exec_sql_text(qs.sql_handle) sqt        cross apply sys.dm_exec_query_plan(qs.plan_handle) sqp where text like '%Employee%'   if @sql_text<>'' begin        select @sql_text, @sql_xml_plan        set @sql_xml_plan = 'OPTION(USE PLAN '''+@sql_xml_plan+''')'        exec sp_create_plan_guide @name =N'MyPlan_Guide_1'            , @stmt = @sql_text            , @type = N'SQL'            , @module_or_batch = NULL            , @params = NULL            , @hints = @sql_xml_plan end     You may check the plan guide is created by querying sys.plan_guides catalog view   select * from sys.plan_guides     Now execute the query again.   --- This time we will see USEPLAN=1 and plan guide name in XML plan output set statistics xml on exec sp_executesql   N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off   Click on ShowPlanXML hyperlink   SQL Server Management Studio 2005 will show you XML. Look at <StmtSimple> and <QueryPlan> tags.   As you can see SQL Server picked the created plan guide and USEPLAN option.   WARNING! You should be careful using the plan guides in SQL Server 2005. 
In case if metadata and/or data distribution has been changed the optimizer will not be able to use the plan guide anymore and the query will fail with the following error:     Msg 8698, Level 16, State 0, Line 1 Query processor could not produce query plan because USE PLAN hint contains plan that could not be verified to be legal for query. Remove or replace USE PLAN hint. For best likelihood of successful plan forcing, verify that the plan provided in the USE PLAN hint is one generated automatically by SQL Server for the same query.     To demonstrate this, disable the existent index that is used in the plan guide above   ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee DISABLE GO   and try running the query again.   set statistics xml on exec sp_executesql   N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off     To clean up your server after this demonstration:   --- Rebuild disabled index to enable it ALTER INDEX IX_Employee_ManagerID ON HumanResources.Employee REBUILD GO   --- Drop plan guide EXEC sp_control_plan_guide N'DROP',N'MyPlan_Guide_1';     SQL Server 2008   In SQL Server 2008 plan guides feature has been improved. In addition to sp_create_plan_guide you can also use the sp_create_plan_guide_from_handle stored procedure   Understanding plan guides http://msdn.microsoft.com/en-us/library/ms190417(SQL.90).aspx   sp_create_plan_guide_from_handle http://technet.microsoft.com/en-us/library/bb964726.aspx   So now you can create a plan guide based on the actual plan handle without pulling the T-SQL statement text   Here is the example:   use AdventureWorks go   --- Cleaning cache for this sample dbcc freeproccache go   --- Running query first time to get plan generated for freezing set statistics xml on exec sp_executesql   N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off go     --- Based on query pattern creating a plan guide - freezing plan declare @plan_handle varbinary(1000)   select @plan_handle = plan_handle from sys.dm_exec_query_stats qs        cross apply sys.dm_exec_sql_text(qs.sql_handle) sqt where text like '%Employee%'   select @plan_handle   exec sp_create_plan_guide_from_handle 'MyPlan_Guide_1', @plan_handle=@plan_handle     As you can see creating the plan guide is easier now,  and you may also easily copy and paste the plan handle from the DMV output and manually pass it to sp_create_plan_guide_from_handle   Run the query again   set statistics xml on exec sp_executesql   N'SELECT COUNT(*) FROM [HumanResources].[Employee] e INNER JOIN [Person].[Contact] c ON c.[ContactID] = e.[ContactID]' set statistics xml off go     Click on ShowPlanXML hyperlink   In SQL Server 2008 SQL Server Management studio will take you directly to the graphic execution plan automatically   Then right click on the execution plan page and choose Properties   To clean up your server after this demonstration:   --- Drop plan guide EXEC sp_control_plan_guide N'DROP',N'MyPlan_Guide_1';     Another difference in SQL Server 2008 from SQL Server 2005 is with the optimizer behavior  if the metadata has been changed. If this is the case, then the plan guide can not be used anymore.  The SQL Server 2008 optimizer will silently skip the plan guide and continue with query execution with whatever execution plan is the best. This makes the use of plan guides more robust and less dangerous than in SQL Server 2005.  
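One more option related to the cleanup scripts above: sp_control_plan_guide can also disable and re-enable a plan guide, which is handy when you want to compare query behavior with and without the frozen plan but do not want to drop and re-create it. A minimal sketch using the plan guide name from the examples above:

      --- Temporarily disable the plan guide
      EXEC sp_control_plan_guide N'DISABLE', N'MyPlan_Guide_1';

      --- Re-enable it
      EXEC sp_control_plan_guide N'ENABLE', N'MyPlan_Guide_1';

      --- Disable or enable every plan guide in the current database
      EXEC sp_control_plan_guide N'DISABLE ALL';
      EXEC sp_control_plan_guide N'ENABLE ALL';

You can check the current state of a plan guide in the is_disabled column of sys.plan_guides.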
To monitor plan guide behavior, you can use two new SQL Profiler events in SQL Server 2008: Plan Guide Successful and Plan Guide Unsuccessful (in the Performance event category).   Posted by: Sergey Pustovit – SQL Server Escalation Services

Dealing with very large SQL Compact database files
Working with very large SQL Compact files, you may run into two issues:   1.     Manipulating a large SQL Compact database in VS 2008 IDE requires a hotfix   http://support.microsoft.com/kb/968436   Error message when you use the Visual Studio 2008 IDE to manipulate a SQL Server Compact 3.5 Database file which is larger than 128 Megabytes: "The database file that is larger than the configured maximum database size"   2.     Upgrading a very large database from SQL Compact 3.1 to 3.5 requires another hotfix http://support.microsoft.com/kb/971027 FIX: Error message when you upgrade a very large database to SQL Server Compact 3.5: "The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only" Hotfix 3.5.5692.12 fixes a problem where large SQL CE database upgrade fails with an error: The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only. [ Required Max Database Size (in MB; 0 if unknown) = <size>] Depending upon the file-size and available resources on the machine, the upgrade process for a very large database may consume significant memory.  This is expected behavior.   You may want to stop other applications to make room for the Upgrade process to complete.
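For reference, both messages refer to the "configured maximum database size", which is a per-connection setting that defaults to 128 MB. For your own application connections (as opposed to the VS 2008 IDE and the upgrade path covered by the hotfixes above), the limit can be raised in the connection string; a minimal sketch with a placeholder file name, using the documented maximum of 4091 MB:

      Data Source=MyLargeDatabase.sdf;Max Database Size=4091;

For OLE DB connections the same setting takes the SSCE: prefix, for example:

      provider=MICROSOFT.SQLServer.OLEDB.CE.3.5;data source=MyLargeDatabase.sdf;SSCE:Max Database Size=4091

As the second error message notes, the setting takes effect on the first concurrent connection to the database only, so it needs to be present before the file is opened.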

Tools of the Trade: Part II – The Repro
In this series of posts, we’re going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve.   This post might seem like a bit of a stretch.  For one thing, it’s not about a SQL related tool.  That’s not such a big deal – there are plenty of tools that we use that have applications beyond SQL.  You can expect more posts in this series about tools that have applications far beyond troubleshooting databases and database connectivity.   However, this isn’t about a specific, pre-existing tool either.  Call it a meta-tool, I guess.  Pseudo-tool.  Something along those lines.  One way or the other, even if it isn’t already built for you, a reproduction can be one of the most invaluable and time-saving tools you’ll use to troubleshoot your most difficult issues. The Repro In the Product Support Services world at Microsoft, we refer to a simplified, stripped down series of steps that lead to a specific behavior as a Repro.  This may simply be a step-by-step to recreate an error, or a simple application that recreates an unusual behavior.  When we are working on an issue that is buried in a hulking, production application, we often like to tackle it is by attempting to recreate the issue outside of the in vivo environment.  There are a lot of benefits to investing some time in developing a Repro.   It’s probably pretty obvious why you would not want to troubleshoot in a production environment.  I personally like to avoid debugging anything that generates revenue – it just seems to ease the tension.  There’s more to it, though – not only do you avoid tinkering with your company’s livelihood, a Repro also gives you the advantage of portability.  Whatever technology you’re working with, it is much, much easier to get the assistance of Subject Matter Experts if you can actually send them something to play with.   Another big benefit of the Repro is that it helps you to focus on the issue.  This is particularly helpful when you need to use other tools to analyze the behavior with which you’re dealing. Whatever kind of tracing you’re doing, you are bound to pick-up extraneous information if you’re running the trace on a production application.  Unless you’re very fond of wading through gigantic logs looking for clues, a Repro really helps to avoid getting lost in superfluous data.   Finally, a repro can save you a huge amount of time.  It can get so irritating having to click through the same series of screens and options, over and over again, as you troubleshoot an issue in a full-scale application.  Being able to make a change to code or the environment, and quickly test dramatically improves the speed of the troubleshooting process.  It also has the added benefit of keeping debugging from getting boring.   That’s my case for investing the time in creating a Repro.  The next question is: what is the best way to build a Repro?  Well, that’s going to depend greatly on the issue and the environment.   One thing that I like to do is work from the inside out.  In other words, if I know that a specific line of code is throwing an exception, I’ll create an application with the smallest amount of code I can use to execute that line.  If that doesn’t reproduce the behavior, I’ll add more code from the application around it, building up the Repro until it gets the behavior I’m looking for.   
Another thing that I’ve found is important is to make sure that I attempt my Repro in an environment that’s as similar to production as possible.  I do not recommend attempting to reproduce an issue occurring in a data-access application running on Windows Server 2003 on your grandma’s Mac.  It’ll be much easier to create a repro on something that’s as close to the original scenario as possible and then move it to other environments to see if the behavior is any different.   One of the Technical Leads on our team has a good tip for reproducing issues that occur intermittently.  For example, say you find that every-so-often your application gets an error when attempting to connect to a data-store.  In this situation, it’s probably worth a try to take the connection you’re trying to make and wrap it in a tight loop.  It’ll be much easier to launch this and catch the failure than it will be to keep running and re-running a small repro that makes the connection a single time and exits.  If you are troubleshooting an issue occurring in a web-application, you may try a tool like Application Center Test to execute your repro many times simultaneously (see: this link for a discussion of using ACT).   Finally, I really recommend carefully documenting your repro.  Again, it makes it much easier to get the help of others, and it will also prevent you from forgetting what you had done on Friday after a long weekend.  Even if you really cannot find a way to reproduce the issue in a stripped down environment, documenting the steps an end-user takes when the issue occurs can be valuable for helping to clarify the problem statement and focus your efforts.     Posted By:       Justin Trobec  

Are you replacing Temp Tables with Snapshot Isolation?
My office is a cross-platform shop with both Oracle 10g and SQL Server 2000/5/8. Lately we've started doing some cross-training, so that all the DBAs, the theory at least, could function on either platform. The Oracle guys, I'm afraid, are way out ahead of me in this regard. In my defense, though, I have to manage more than 100 servers to their 5! Testing the waters with Oracle has been really interesting - I like looking at different technologies for comparison. One issue that we've had great discussion...(read more)

Recap Share- & PerformancePoint First Deployment
Happily, at my office we just rolled out our first MS/BI-stack dashboard project last week, using SharePoint 2010, PerformancePoint, Analysis Services, and SQL Server 2005 and 2008, on Windows 2008 R2. It was quite bumpy - needlessly so in some spots - but it works, looks great and offers excellent information to the business. Most important to me personally is that we now have all the infrastructure in place for the delivery of content through this channel moving forward, and it overcame a failed...(read more)

Performance: Tips for faster download of a large SQL Compact 3.5 database via Replication
SQL Compact 3.x Replication initial subscription is resource-intensive and may take significant time on devices.  Read the introduction to the SQL Compact Blog for this problem.

Introduction (from SQL Compact Blog): "The initial subscription is going to bring down lot of data from server to client in merge replication.  It means that the requirement for bandwidth, memory, processing power is going to be high for the creation of subscription.  However, for devices all these requirements are not easy and even if they are available, it takes very long time to get the subscription created on the client database."

Symptoms: SQL Compact 3.5 Replication initial download for certain large publications takes a significant amount of time on mobile devices.  The corresponding publication with earlier versions (SQL CE 2.0 and SQL 2000) did not take that much time.

Performance Tips: Here is a list of workarounds which have the potential to reduce the download time for a large initial sync. Please keep in mind that a single option may not resolve the issue completely; however, a combination of several options may bring the download time to an acceptable level.

- Remove unnecessary indexes from the server side (articles), especially indexes on Varchar columns. This helps to a certain extent; however, the overall download time may still be high.

- Apply the hotfix to turn off UpdateStatistics: The Client Agent log during download indicates that a portion of the time is spent executing update statistics.  After applying the hotfix, the application may invoke the new PostSyncCleanup method call to turn off update statistics during the initial download of SQL Compact Replication.  See KB http://support.microsoft.com/kb/963060 - FIX: An error message is logged, and the synchronization may take a long time to finish when you synchronize a merge replication that contains a SQL Server Compact 3.5 subscriber.

- Turn off unnecessary logging (Client Agent Log, other types of device logging) on production devices.

- Move to a high-end device, where one can use higher buffer pool sizes and can also place the database .sdf file in main memory as appropriate.

- Pre-create the .sdf on the desktop and then download it to devices.  This relates to managing and deploying the solution to n number of devices in the field.  You may use the move-copy detection feature of SQL Compact on devices.  For more details on how it works, please see the SQL Compact team's blog entry "Merge Subscribe Once, Copy and Deploy Multiple Times" at http://blogs.msdn.com/sqlservercompact/archive/2007/06/26/merge-subscribe-once-copy-and-deploy-multiple-times.aspx

- Increase the buffer pool size: We got some real good benefit in terms of running times by increasing the buffer pool size to higher values.  By default this value is 640K on a device.  The running times for a very large single-table publication with a Max Buffer Size of 640K, 1024K, 2048K, and 4096K were as follows:
1) 640K  - 86 m 34 secs
2) 1024K - 73 m 56 secs
3) 2048K - 55 m 13 secs
4) 4096K - 38 m 30 secs
You can specify 'Max Buffer Size' in the connect string, as follows:
repl.subscriberconnectionstring = "provider=MICROSOFT.SQLServer.OLEDB.CE.3.5;data source=35Repl.sdf;SSCE:Max Buffer Size=4096";
Keep in mind that this setting consumes more program memory on the device.  However, this should not be an issue if you have a good 20 MB of program memory available.
If the initial sync data is huge, you may set the buffer pool size to a higher value for the initial sync only.   There are real improvements in running times with bigger buffer pool sizes; however, certain devices cannot increase the buffer pool size beyond 2-3 MB, so a very large buffer size is not always applicable.   Fix for rowguid index processing delays: In some cases the delay is due to rowguid index processing; please look for the hotfix in KB 970414 when it becomes available.

SQL Server Support FAQ
How often should I expect to hear from my Support Engineer? This will vary based on the issue and its impact to your business.  If you have specific preferences for frequency of contact, time of day, or method (prefer phone vs. e-mail), please discuss this with your support engineer.     My Support Engineer is not available and I need help now. If you need immediate assistance and your engineer is unavailable, please contact the engineer’s manager.  Manager contact information is in the engineer’s signature block and in the initial e-mail.    If you are unable to reach the manager, you can call the Microsoft Support line (800-936-5800), provide your case number, and request to speak with the next available support engineer.  You will be routed to the phone queue for that specialty and connected to an engineer.  The Support Engineer receiving the call may need time to review the case history and steps taken to date, depending on the duration and complexity of the issue.     My support issue is not progressing well. Each Support Engineer should provide a management contact in his/her signature block and in the initial e-mail.  If you have:   ·         Schedule conflicts with your assigned engineer’s availability. ·         Communications issues. ·         Need a faster resolution. ·         Or are otherwise dissatisfied with the support provided, please engage the engineer’s manager.    We strive to deliver outstanding customer service, and we appreciate you letting us know if we are not meeting your expectations.     I need a different engineer. As mentioned in the preceding section, “My support issue is not progressing well”, you may contact the engineer’s manager and request a different engineer.     My Support Engineer has suggested that we archive the case.  What does this mean? If there is going to be an extended delay before you can implement a change or collect requested data, we may ask to archive your case.  This means that the case will be closed on our end.  You can reopen the case if you encounter the same error on the same instance/server again, or if you have the data needed to continue troubleshooting.   In order to reopen the case, contact the support engineer or his/her manager directly.  You may be asked to provide feedback on support.  Please assess the support based on the service received to date, with the understanding that Microsoft does not consider an archived case resolved.     We thought that our issue was solved, but the problem has recurred.  Can I re-open my support case? As long as the problem and the server are the same as in the case, you can re-open the case.  To do this, first try to reach the previous engineer or his/her manager.  If unable to reach them, contact the Microsoft Support line (800-936-5800), provide your case number, and request that Microsoft re-open the case.    Note that in some cases, the same symptoms can be caused by multiple issues.  For example, if a server experienced slow performance and the disk I/O subsystem was fixed and performance restored, and later performance degraded again, this could be due to out of date statistics or other issues.  In these ambiguous cases, you may be asked to open a new case.     What do the Microsoft Case Severities mean? A – Critical – 1 hour initial response goal ·         High-impact problem in which production, operations, or development is proceeding but is severely impacted, or where production and/or profitability will be severely impacted within days.   
B – Urgent – 4 hour initial response goal ·         High-impact problem where production is proceeding, but in a significantly impaired fashion. ·         Time sensitive issue important to long-term productivity that is not causing an immediate work stoppage.   C – Important - 24 hour initial response goal ·         Important issue which does not have significant current productivity impact for the customer.   D – Monitor ·         This severity is used for support issues that are waiting, usually to confirm a provided solution or when waiting for an intermittent problem to recur, in order to collect needed data.   The Support Engineer for my support issue said that I need to open an additional support incident.  Why should I have to open another incident? Microsoft defines an incident or case as: ·         A single support issue and the commercially reasonable efforts to resolve it ·         A problem that cannot be broken down into subordinate problems   NOTE:  The Customer and the Support Engineer must agree on the case resolution.  Also be aware that Microsoft does not charge for code defects, documentation errors, and product “wishes”.   As documented on Microsoft’s public website at http://support.microsoft.com/?LN=en-us&sd=tech&scid=gp%3Ben-us%3Bofferprophone&x=7&y=16#faq607   EXAMPLE:  A support issue is opened for slow performance of an application with a SQL Server back-end.  Troubleshooting identifies a specific stored procedure as the problem, and updates to statistics and an increase in covering indexes resolves the issue.    However, the same server is also experiencing slow performance for merge replication.  This is considered a different issue, which is unrelated to the original issue for which the support case was opened.     I’m interested in a higher level of service and a closer relationship with Microsoft. Microsoft offers a managed support relationship known as Premier Support, designed to meet the needs of an enterprise customer.  For an overview of the service elements, please see http://www.microsoft.com/services/microsoftservices/srv_prem.mspx  or call 1-800-936-3500 to request contact by a Premier Support representative.     Advisory vs. Break-Fix Microsoft provides support services for customers with problems encountered implementing or using Microsoft products.  These are commonly known as “break-fix issues”.  Occasionally customers need short-term (less than 40 hours) consultative assistance, as opposed to the standard break-fix service.   This short-term consultative assistance is known as Advisory service.  This is available to Microsoft’s Premier customers as an element in their support contracts.  It is also available to other customers on an hourly charge basis.   For more information on Advisory Services, please see the Advisory Services section at http://support.microsoft.com/default.aspx?scid=fh;EN-US;OfferProPhone or go to the Advisory Services page, http://support.microsoft.com/gp/advisoryservice .   Hope this helps!   Posted By: Microsoft SQL Server Support Team  

SQL Server 2008 R2 Idea: Data Profiling of any table in SSMS
I love the data profiling task that was added to SSIS in SQL Server 2008. It allows you to get a profile of the data in a table - this includes things like what are potential candidate keys, what length are the data values in each column, what percentage of rows in a table are null in each column, what is the distribution of values in the data, etc. etc. If you haven't used it, I'd suggest taking a look at it. It's a real hidden gem in the product. However, I think they missed the #1 place that this...(read more)

General network errors (GNEs), Timeout expired, SQL Server not found or Access Denied…
Many times, the key to solving these errors is a network packet trace.  The problem is that these errors are often thrown intermittently.  This can obviously make it problematic to capture a network trace.    Network Monitor 3 (AKA Netmon3) solves this problem quite nicely.  Netmon3 has the built-in ability to capture rolling traces.  This allows you to start the trace and then walk away and be sure that you will have captured the error.   Here are my steps for configuring Netmon3 for a rolling trace:   Please see the steps below in order to configure Network Monitor:   1.    Download Network Monitor 3 a.    (http://www.microsoft.com/downloads/details.aspx?FamilyID=18b1d59d-f4d8-4213-8d17-2f6dde7d7aac&DisplayLang=en)   2.    Install Network Monitor on both the client and server experiencing the problem.  If you are not able to install it on the server you can install it on another machine that is connected to the same hub as the server or to the admin port of the switch that the server is on.  If you go this route you must synchronize the time between the client, server, and this third machine.   3.    Synchronize the clocks on the Client and SQL machines a.    At one of the machines bring up a command prompt and execute the following:                                            i.    Net time <\\machinenamewewanttosynchwith> /set /yes   4.    Turn off TCP Chimney if any of the machines are Windows 2003 a.    bring up a command prompt and execute the following:                                            i.    Netsh int ip set chimney DISABLED   5.    Using the NMCap command-line utility, start the chained trace (chained = create a new capture file after reaching the “Temporary capture file size” rather than overwriting the current capture) a.    [Netmon 3 installation folder]\NMCap /network * /capture /file test.chn:100M  (creates 100 MB chained files)                                            i.    NOTE:  YOU MUST USE THE .CHN EXTENSION TO CAPTURE CHAINED TRACES                                           ii.    NOTE:  The target folder must exist for NMCap to create the trace file       b.    More information can be found:                                          i.    http://blogs.technet.com/netmon/archive/2006/10/24/nmcap-the-easy-way-to-automate-capturing.aspx                                         ii.    Or by running the command-line switch /examples                                        iii.    Or by running the command-line switch /?   6.    Get the IP addresses of the client and the server a.    ipconfig /all on the client > client.txt b.    ipconfig /all on the server > server.txt   7.    Reproduce the problem, please note the exact time the problem reproduces.  Also, please note the exact error message.   8.    Stop the trace by hitting Ctrl-c in the command-line window   Please note that Netmon3 does not run on Windows 2000.  Also, step #6 is probably the single most important step (besides actually starting the trace).  If you don’t know the timestamp of the error, it is extremely difficult to find the offending packets in the trace.   Happy captures!   Posted By: Evan Basalik

SQL Null: Grouping and sorting
Because SQL Server defines all Null markers as being unequal to one another, a special definition was required in order to group Nulls together when performing certain operations.  SQL defines "any two values that are equal to one another, or any two Nulls" as "not distinct".  This definition of "not distinct" allows SQL to group and sort Nulls when the GROUP BY clause is used.  Let's look at an example using GROUP BY and nulls.  You will see that SQL Server does group them together by using the not distinct definition:

DECLARE @table1 TABLE( ID INT, myStr VARCHAR(100))
INSERT INTO @table1( ID, myStr)
Values (1, 'value a'), (2, 'value b'), (3, 'value a'), (4, 'value a'), (5, 'value b'), (6, null), (7, 'value b'), (8, null), (9, null), (10, 'value a'), (11, null), (12, null)

SELECT * FROM @table1

SELECT myStr,
       COUNT(1) as Count
FROM   @table1
GROUP  BY myStr

The 1st select shows the raw values:

ID  myStr
1   value a
2   value b
3   value a
4   value a
5   value b
6   NULL
7   value b
8   NULL
9   NULL
10  value a
11  NULL
12  NULL

The 2nd result shows the group by:

myStr    Count
NULL     5
value a  4
value b  3

As you can see, when SQL Server completes the GROUP BY clause it combines rows with NULL values.  When thinking about how it handles string concatenations, this is a little twist that can catch you off guard or make you think.  DISTINCT, same thing:

SELECT distinct myStr FROM @table1

The result is 3 rows: NULL, value a, value b.

Hope this helps you when thinking about null values and grouping.  Try the example and look at how SQL Server handles the default sort order for Nulls.
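As a quick follow-on to that last sentence, here is a minimal sketch (run it in the same batch as the table variable above): in an ascending ORDER BY, SQL Server treats Nulls as the lowest possible values, so they sort first.

SELECT ID, myStr
FROM   @table1
ORDER  BY myStr

The five NULL rows are returned first, followed by the 'value a' rows and then the 'value b' rows; with ORDER BY myStr DESC the Nulls move to the end.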

How to install/enable .Net 3.5 SP1 on Windows Server 2008 R2 for SQL Server 2008 and SQL Server 2008 R2
The .NET Framework 3.5 SP1 (also referred to as .NET Framework 3.5.1) is a prerequisite for SQL Server 2008. SQL Server 2008 Setup for standalone instance will install the .NET Framework 3.5 SP1 if it is not already installed. In Windows Server 2008 R2, the .NET Framework is a feature and installing it is different when compared to older versions of Windows Operating System. In previous versions, one could either download the .NET Framework 3.5.1 install binaries from Microsoft download site or use the install binaries from the redist folder of the SQL Server 2008 installation media. Starting with Windows Server 2008 R2, the method of installing .NET Framework 3.5.1 has changed. This document explains how to verify that .NET Framework 3.5.1 is installed and if it is not installed how you can add it.    How to verify if .NET Framework 3.5 SP1 is installed: Here are the steps to verify that .NET Framework 3.5.1 is installed on Windows Server 2008 R2. Click the Start button in the lower left hand corner of the display. Highlight Administrative Tools and select Server Manager. In the Server Manager interface, click Features to display all the installed Features in the right hand pane. Verify that .NET Framework 3.5.1 is listed. If .NET Framework 3.5.1 feature is not listed, you can use either of the following methods to install it: Method 1: Using Server Manager Interface In the Server Manager interface, select Add Features to displays a list of possible features. In the Select Features interface, expand .NET Framework 3.5.1 Features. Once you expand .NET Framework 3.5.1 Features, you will see two check boxes. One for .NET Framework 3.5.1 and other for WCF Activation. Check the box next to .NET Framework 3.5.1 and click Next. Note: If you do not expand .NET Framework 3.5.1 Features and check it, you will get a pop-up titled Add Features Wizard  as shown below. Click Cancel and expand .NET Framework 3.5.1 Features and then check .NET Framework 3.5.1 check box below it.   You cannot install .NET Framework 3.5.1 Features unless the required role services and features are also installed. In the Confirm Installation Selections interface, review the selections and then click Install. Allow the installation process to complete and then click Close. Method 2: Using PowerShell Click the Start button in the lower left hand corner of the display. Highlight All Programs and select Accessories Expand Windows PowerShell and right click Windows PowerShell and select Run as administrator. Click Yes on the User Account Control box. At the PowerShell command prompt, type the following commands, and then press ENTER after each command: Import-Module ServerManager Add-WindowsFeature as-net-framework Note: A screenshot is shown below:  

Which Azure Cloud Storage Model Should I Choose for my Application?
Most applications have four parts – input, computation, storage and output.  I'll write about all of these over time, but today I want to focus on how to choose the storage part of the equation. This won't be a full tutorial, full of detail and all that, but I will put forth some "rules of thumb" that you can use as a starter. I'll also try and include some good pointers so you can research more.

NOTE: Utility Computing, or "the cloud", or platform/software/architecture as a service, is a young discipline, and most certainly will change over time. That's its advantage – that it can change quickly to meet your needs. However, that means information (like this blog entry) can be out of date. Make sure you check the latest documentation for Azure before you make your final decision, especially if the date on the post is older than six months or so. I'll try and come back to update them, but check them nonetheless. Always start your search on the official site: http://www.microsoft.com/windowsazure/

Let's start out with your options. You have four types of storage you can use for your applications:
·         Blobs
·         Tables
·         Queues
·         SQL Azure databases

Here are some rules of thumb for when you use each – and again, these are only guidelines. I'll point you to some documentation for more depth.

Blobs: Use these for binary data (in other words, not text), and think of them like files on your hard drive. There are two types – block and page. Use block blobs for streaming, like when you want to start watching a movie before it even completes the download. You can store files up to 200GB at a pop. And they parallelize well. Use page blobs when you need a LOT of storage – up to a terabyte – and pages are stored in 512KB, well, pages. You can access a "page" directly, with an address.

Tables: Massive text blocks accessed using Key/Value pairs. If you're used to "NoSQL", you have the idea. You get one index on that pair, so choose the sort or search wisely. Not relational, but large, and fast.

Queues: This storage is used to transfer messages between blocks of code. If you think of the stateless-programming web-world, you need a way to tell one application something that isn't event-based. This is how you do that. Since it's a queue, it helps you with something called "idempotency", which means that a single message on the queue will get processed once, and only once.

SQL Azure Databases: If you need relational storage, want to leverage Transact-SQL code you already have, or need full ACID, this is for you. There are size restrictions here, but I'll not detail them so this information lives a little longer. Check out http://microsoft.com/sqlazure for specifications, whitepapers, the lot.

OK – I'll end with a chart. This has some more information that you might find useful in your decision process:

If you need:                Blob       Table   Queue   SQL Azure
Structured Data                        Y               Y
Relational Database                                    Y
Server Side Processing                                 Y
Access from outside Azure   Y          Y       Y       Y
Messaging Infrastructure                       Y
Persistent Storage          Y          Y               Y
Size Limit                  200GB/1TB  100TB   100TB   50GB

More info on Azure Storage:
http://msdn.microsoft.com/en-us/library/ee924681.aspx
http://www.azurejournal.com/2008/10/windows-azure-data-storage/

Many thanks to my teammates, Stephanie Lemus and Rick Shahid for help with the information in this post. Thanks for the help!

SQL 2005 SP2 is available now!
Finally, after rounds of CTPs (Community Technology Previews) and great feedback from customers, we have officially released Service Pack 2 (SP2) for SQL 2005. Download SP2You can download it by visiting the Microsoft website here. Enhancements in SQL 2005 SP2SP2 provides several new features and improvements. You can learn about them here. Download SQL 2005 SP2 Sample databasesYou can download the sample databases and learn about their enhancements by visiting here. Download Books Online refresh for SP2Books online gets updated with every release of service pack. Please download the latest version by visiting here.   As with every service pack, please make sure that you first test it out on your test and QA environment, before putting it to the production server.     Posted By: Sanchan Saxena

Tools of the Trade: Part III – Kerberos (I)
In this series of posts, we’re going to take a look at some of the tools we at Microsoft Developer Support -- Database commonly use, how to use them, and the types of issues they can be used to resolve.    In this article, Adam Saxton starts down the long road of double-hop troubleshooting by providing a gentle introduction to Kerberos, and how to recognize Kerberos+SQL related issues.  Future articles will move beyond identification, and discuss the tools and processes we use to troubleshoot such issues.   Kerberos We receive numerous calls where the problem falls under Kerberos.  This will present itself when trying to establish a connection to SQL Server (or a Linked Server).  I’m going to discuss what Kerberos is, how to recognize when there is a problem with it, and go over some of the items that need to be looked at in order to resolve a Kerberos issue with SQL Server.  As there is a lot of information on this topic, I’ll be splitting it up over a few blog posts.     What is Kerberos? Kerberos is a secure method for authenticating a request for a service in a computer network, and is used to verify a user or host identity. Kerberos lets a user request an encrypted "ticket" from an authentication process that can then be used to request a particular service from a server. The user's password does not have to pass through the network.   Put simply, it is a more secure form of Windows Authentication versus NTLM.  Also, understand that Kerberos is not a Windows specific technology.  Windows will always try to use Kerberos first over TCP.  If that doesn’t work, it will drop down to NTLM.  The exception to this is if you are trying to access a resource that is local.  NTLM will always be used when accessing local resources.  Also note that Kerberos cannot currently be used over Named Pipes or Shared Memory.     How does Kerberos work? There are a lot of details into how Kerberos works.  I’m not going to get that deep into it within this blog series.  I’m going to relate this to how it works with SQL.  To do that, I’m going to use the classic example of a Double-hop with a Web Server.  We use the work Double-hop to explain that the IIS Server is using a resource that is located on a different server.  In this case the first “hop” is from the web browser client to the IIS Server (ASP or ASPX Page); the second hop is to the SQL Server on a different machine. The server on the second hop is also referred to as the Backend Server.   In order to successfully establish a connection using Kerberos, a SQL ServicePrincipalName (SPN) needs to exist within Active Directory.  I will talk more about SPN’s and where they are located in a later post as the topic is complex enough to deserve its own.  I would say about 80-90% of calls we receive relating to Kerberos involve either missing SPN or duplicate SPN’s.   When you log onto your machine, you will get what we call a Kerberos Ticket.  To be more specific, you get a Ticket-Granting Ticket (TGT).  You use the TGT as a master ticket to access all Kerberos services on a network. A Kerberos ticket includes all the user credentials or computer credentials in an encrypted format. These credentials are used to identify a specific user or a specific computer on a network for access to Kerberos services.  When you access a Kerberos service, that service uses the ticket that you received to authenticate you.  After the service verifies your identity and authenticates you, the service issues a service ticket. This is where the SPN’s come into play.  
Think of the SPN as a pointer for Kerberos so it knows where to go.  That’s why, if it’s missing or there are duplicates, it doesn’t know what to do.     How to recognize a Kerberos error with SQL Server: There are a few errors that we see where we can make an educated guess that a connectivity issue is Kerberos related.    Cannot Generate SSPI Context   Login failed for user ‘(null)’   Login failed for user ‘NT AUTHORITY\ANONYMOUS’   Those are the three biggest errors we see in relation to Kerberos with SQL Server.  These errors occur when the credentials from the first hop cannot be passed to the second hop.  Usually, this is because we are failing over to NTLM or our SPN configuration is not correct.  There are also other settings within Active Directory that come into play as well as certain machine configuration based on how services are setup.  For now, let’s focus on the basics, so I’ll get more into those setting and machine configurations in a later post.   One of the last things I’d like to mention is that I’ve seen customers get the following error confused with a Kerberos issue:   Login failed for user ‘SomeUser’   The SomeUser could be whatever user (SQL or Windows) that you are trying to connect with.  If you see the actual username in the error that means the credentials actually got to the SQL Server.  The issue at that point lies in the SQL Security configuration itself (either the server as a whole, or the database you are trying to connect to).  This is not a Kerberos related issue, but instead is a much simpler login failure.   Resources:   Kerberos in Windows 2000   Kerberos in Windows 2003 How to use Kerberos authentication in SQL Server Posted By:       Adam Saxton
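A quick check that goes along with the errors above: sys.dm_exec_connections exposes an auth_scheme column that tells you which authentication scheme an established connection actually negotiated, so you can confirm whether you are really getting Kerberos or silently falling back to NTLM. A minimal sketch, run from the connection you want to verify:

SELECT session_id, auth_scheme
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;

If auth_scheme comes back as NTLM on a connection you expected to use Kerberos, that usually points back to the SPN configuration discussed above (keeping in mind the earlier notes about local connections, Named Pipes and Shared Memory).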

Enforcing naming conventions using database unit testing
My naming convention obsession Anyone that has ever worked with me will tell you that I am a stickler for naming conventions. I have a somewhat obsessive reputation for it; I can’t help it – I seem to have a deep seated uncontrollable desire to ensure that every object in my database(s) is/are named consistently (is there anyone else out there equally as obsessive?). I have tried various techniques down the years to try and enforce naming conventions but none of them really worked. I’ve got scripts that alter object names (such a script is in my script library in fact) but these are only any use if they actually get run, they don’t actually enforce the conventions – that’s a manual step. I’ve thought about using Policy-Based Management (PBM) to enforce naming conventions but given I’m a developer and not a DBA that’s not something that is readily available to me and besides, using PBM to enforce naming conventions is reactive rather than proactive if you are developing the code on a machine where the policies are not enforced. Another option I looked into using was Team Foundation Server (TFS) check-in policies; these are policies that can be applied to artefacts when they get checked-in to TFS’s source control system. This option really appealed to me because the naming conventions could be enforced during check-in (i.e. very very early) and didn’t require DBA intervention. In practice though enforcing naming conventions using TFS check-in policies has a few sizable issues: Its not easy. It would require you to parse the file that was getting checked-in, decide what sort of object is defined in the file, and then check the name of the object based on things like object name, schema, etc... TFS check-in policies are not installed on the TFS server, they are installed on the development workstations. This means there is a dependency and, even though the source code for the check-in policies can be distributed with your application source code, I didn’t really like this. You’re relying on each developer to enforce the check-in policy and with the greatest will in the world….that aint gonna happen. Its too easy to turn them off. There is the obvious dependency on using TFS, not something every development shop uses even in the Microsoft space. Database unit testing to the rescue No, a better solution was needed and I came up with one in the shape of automated database unit testing. I have spoken recently about how I have become a big fan of database unit testing (see my post Experiences from writing sp_CascadingDataViewer – DB unit testing and code distribution) and being able to enforce naming conventions is one very good reason for that. Enforcing naming conventions using automated unit tests has a number of advantages: They can be written against the metadata of the objects themselves (i.e. by querying SQL Server’s system views) so there’s no parsing that needs to be done. They can be employed as part of a Continuous Integration (CI) process and run as a build verification test (BVT). Someone checks-in an object that violates the naming convention? Bang: broken build! Developers can’t circumvent the tests. Nothing needs to be installed on the development workstations. The tests live wholly as part of your source code. 
Not dependent on use of a particular source control system Hence I have written some unit tests that enforce the following naming conventions: Check constraints must be of the form CK_<schemaName><tableName>_XXX Column names must begin with a capital letter Column names cannot contain underscores Default constraints must be named DF_<schemaName><tableName>_<ColumnName> Foreign keys must be of the form FK_<parentObjectSchema><parentObject>_REF_<referencedObjectSchema><referencedObject>XXX Non-unique clustered keys must be of the form IXC_<schemaName<TableName>_<Column><Column><Column>… Non-unique non-clustered keys must be of the form IX_<schemaName><TableName>_<Column><Column><Column>... Unique clustered keys must be of the form IXUN_<schemaName><TableName>_<Column><Column><Column>… Unique non-clustered keys must be of the form IXUN_<schemaName><TableName>_<ColumnColumnColumn>... Primary keys must be of the form PK_<schemaName><tableName> Stored procedure names should not contain underscores Stored procedure names must begin with a capital letter Table names must not contain underscores Table names must begin with a capital letter I’m not stating that you should agree with these naming conventions (I don’t necessarily agree with them myself – they were defined before I arrived on my current project), the point here is that all of these rules can be enforced and its very easy to do it. Here’s the code for the unit test that enforces the  primary key naming convention:    /*PK name is PK_<schemaName><tableName>*/    SET NOCOUNT ON    DECLARE @cnt    INT;    SELECT  *    INTO    #t    FROM    (            SELECT  OBJECT_NAME(c.[parent_object_id]) AS [TableName],OBJECT_SCHEMA_NAME(c.[parent_object_id]) AS [SchemaName],c.*            FROM    [sys].[key_constraints] c            INNER JOIN [sys].[tables] t                ON  c.[parent_object_id] = t.[object_id]                  LEFT OUTER JOIN sys.extended_properties ep                ON  t.[object_id] = ep.major_id                AND ep.[name] = 'microsoft_database_tools_support'            WHERE ep.[major_id] IS NULL            AND c.[type] = 'PK'            )q    WHERE   [name] <> N'PK_' + [SchemaName] + [TableName]    SET     @cnt = @@ROWCOUNT;    IF  (@cnt > 0)    BEGIN            DECLARE @msg NVARCHAR(2048);            SELECT  @msg = '%d Primary Keys do not conform to naming convention (PK_<schemaName><tableName>):' + STUFF((SELECT ', ' + [name] FROM #t a FOR XML PATH('') ),1,2,'')            FROM    (                    SELECT  1 AS [Id],*                    FROM    #t t                    )q            GROUP   BY q.[Id]            SELECT @msg            RAISERROR(@msg,11,1,@cnt);    END Essentially all it does is pull all of the primary keys out of [sys].[key_constraints], checks to see what the name should be, then if it finds any that violate the naming convention raise an error containing the names of all the primary keys in question. 
Here’s the error obtained when running the test against [AdventureWorks] (I’ve highlighted the pertinent bit): Test method Prs.SchemaTests.NamingConventions.PrimaryKeys threw exception: System.Data.SqlClient.SqlException: 70 Primary Keys do not conform to naming convention (PK_<schemaName><tableName>):PK_ErrorLog_ErrorLogID, PK_Address_AddressID, PK_AddressType_AddressTypeID, PK_AWBuildVersion_SystemInformationID, PK_BillOfMaterials_BillOfMaterialsID, PK_Contact_ContactID, PK_ContactCreditCard_ContactID_CreditCardID, PK_ContactType_ContactTypeID, PK_CountryRegionCurrency_CountryRegionCode_CurrencyCode, PK_CountryRegion_CountryRegionCode, PK_CreditCard_CreditCardID, PK_Culture_CultureID, PK_Currency_CurrencyCode, PK_CurrencyRate_CurrencyRateID, PK_Customer_CustomerID, PK_CustomerAddress_CustomerID_AddressID, PK_DatabaseLog_DatabaseLogID, PK_Department_DepartmentID, PK_Document_DocumentID, PK_Employee_EmployeeID, PK_EmployeeAddress_EmployeeID_AddressID, PK_EmployeeDepartmentHistory_EmployeeID_StartDate_DepartmentID, PK_EmployeePayHistory_EmployeeID_RateChangeDate, PK_Illustration_IllustrationID, PK_Individual_CustomerID, PK_JobCandidate_JobCandidateID, PK_Location_LocationID, PK_Product_ProductID, PK_ProductCategory_ProductCategoryID, PK_ProductCostHistory_ProductID_StartDate, PK_ProductDescription_ProductDescriptionID, PK_ProductDocument_ProductID_DocumentID, PK_ProductInventory_ProductID_LocationID, PK_ProductListPriceHistory_ProductID_StartDate, PK_ProductModel_ProductModelID, PK_ProductModelIllustration_ProductModelID_IllustrationID, PK_ProductModelProductDescriptionCulture_ProductModelID_ProductDescriptionID_CultureID, PK_ProductPhoto_ProductPhotoID, PK_ProductProductPhoto_ProductID_ProductPhotoID, PK_ProductReview_ProductReviewID, PK_ProductSubcategory_ProductSubcategoryID, PK_ProductVendor_ProductID_VendorID, PK_PurchaseOrderDetail_PurchaseOrderID_PurchaseOrderDetailID, PK_PurchaseOrderHeader_PurchaseOrderID, PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID, PK_SalesOrderHeader_SalesOrderID, PK_SalesOrderHeaderSalesReason_SalesOrderID_SalesReasonID, PK_SalesPerson_SalesPersonID, PK_SalesPersonQuotaHistory_SalesPersonID_QuotaDate, PK_SalesReason_SalesReasonID, PK_SalesTaxRate_SalesTaxRateID, PK_SalesTerritory_Territor... I am currently including these tests inside a C# test project inside Visual Studio 2010. Visual Studio has a rather nice feature that allows you to link to artefacts in other projects and hence we can host our single test class containing all of these tests in one place and link to it from whichever test project we want (typically you will have a test project per database) thus following the DRY principle. Here I show the dialog that demonstrates adding a link to an existing test class: And how it appears in the project. Note that NamingConventions.cs exists in both test projects but one is just a link to the other: Wrap-up I’m not sure my colleagues are too happy about these new tests given that they’re now breaking the build more often but nonetheless I think they realise the value (I guess I’ll find out tomorrow when they read this!!!) All-in-all its working very well for us and I’m now a very happy bunny knowing that naming conventions are being enforced and will continue to be so with zero effort from here on in. I have made the test class that contains all of the tests that I detailed above available on my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20101005/MIControlTest.zip. 
If you want to use it you should simply be able to drop it into an existing C# database test project and away you go (changing the tests to suit your own naming conventions, of course). Hope this helps. If it does, please let me know; I’d really love some feedback on this. @Jamiet

New util proc: sp_tableinfo
Some of you might use my sp_indexinfo to easily get lots of details about the indexes in a database. I have added an sp_tableinfo with the info rolled up to table level. This allows you to quickly see the largest tables in a database and things like that. Check it out at http://www.karaszi.com/SQLServer/util_sp_tableinfo.asp ...(read more)
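The proc does the aggregation for you, but as a rough idea of the kind of table-level rollup involved (this query is my own approximation, not the actual sp_tableinfo source), something along these lines sums the partition stats up to table level:

-- Rough sketch of a table-level size rollup (not the sp_tableinfo source)
SELECT  s.[name] AS [schema_name],
        t.[name] AS [table_name],
        SUM(CASE WHEN ps.[index_id] IN (0,1) THEN ps.[row_count] ELSE 0 END) AS [rows],   -- heap/clustered rows only
        SUM(ps.[reserved_page_count]) * 8 / 1024 AS [reserved_mb],                        -- 8 KB pages -> MB
        SUM(ps.[used_page_count])     * 8 / 1024 AS [used_mb]
FROM    sys.dm_db_partition_stats ps
INNER JOIN sys.tables  t ON ps.[object_id] = t.[object_id]
INNER JOIN sys.schemas s ON t.[schema_id]  = s.[schema_id]
GROUP BY s.[name], t.[name]
ORDER BY [reserved_mb] DESC;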

Creating HTTP endpoint fails with 7850 error.
Creating an HTTP endpoint in SQL Server 2005 or SQL Server 2008 may fail with the following error messages:

Msg 7850, Level 16, State 1, Line 1
The user 'domain\myuser' does not have permission to register endpoint 'training_sql_endpoint' on the specified URL. Please ensure the URL refers to a namespace that is reserved for listening by SQL.
Msg 7807, Level 16, State 1, Line 1
An error ('0x80070005') occurred while attempting to register the endpoint 'training_sql_endpoint'.

The error message is actually incorrect in this context. It should instead report that it is the SQL Server service account that lacks permission, and direct you to reserve the namespace explicitly, which is what the SQL Server 2008 message now does:

The SQL Server Service account does not have permission to register the supplied URL on the endpoint '%.*ls'. Use sp_reserve_http_namespace to explicitly reserve the URL namespace before you try to register the URL again.

When you run CREATE ENDPOINT to create an HTTP endpoint, the registration is done under the context of the SQL Server service account. If the namespace reservation does not already exist, SQL Server will implicitly create the reservation. However, this requires that the SQL Server service account have local administrator privileges on the computer. If the SQL Server service account does not have local administrator, on SQL Server 2005 the statement fails with the message noted earlier. To resolve this you have two options:

1. Add the SQL Server service account to the local administrators group, restart, and then run the CREATE ENDPOINT again.

2. Or, explicitly reserve the namespace while logged on as a Windows Authentication user that has local administrator on the computer and sysadmin on SQL Server, *before* you run CREATE ENDPOINT. For example:

sp_reserve_http_namespace N'http://*:2050/sql/myfolder'

Then when you run CREATE ENDPOINT, the SQL Server service account will not have to reserve the namespace because it already exists, and it will proceed with creating the endpoint. Note that when you reserve a namespace explicitly, you need to be sure that the string of the namespace you reserve matches the parameters in the CREATE ENDPOINT statement. So for the namespace above, the CREATE ENDPOINT would need to look like the following for SQL Server to match it up correctly:

CREATE ENDPOINT [myendpoint]
    STATE = STARTED
AS HTTP (
    PATH = N'/sql/myfolder',
    PORTS = (CLEAR),
    AUTHENTICATION = (NTLM, KERBEROS, INTEGRATED),
    SITE = N'*',
    CLEAR_PORT = 2050,
    COMPRESSION = DISABLED
)

The following link has more on this in “Identifying the Namespace for an Endpoint”: http://msdn.microsoft.com/en-us/library/ms190614.aspx
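Once the endpoint has been created, a quick sanity check against the endpoint catalog view confirms that it exists and is started (a simple sketch; the endpoint name matches the example above):

-- Verify the endpoint was created and is listening
SELECT  [name], [protocol_desc], [type_desc], [state_desc]
FROM    sys.endpoints
WHERE   [name] = N'myendpoint';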

Class not registered error after restoring an Analysis Services database that uses the SQL Server 2005 SQL Native Client.
After restoring an Analysis Services database that uses the SQLNCLI.1 SQL Native Client (SNAC) on a server which has never had SQL Server 2005 installed, an error similar to the following occurs during processing operations:

Internal error: The operation terminated unsuccessfully.
Internal error: The operation terminated unsuccessfully.
OLE DB error: OLE DB or ODBC error: Class not registered.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'.
Errors in the OLAP storage engine: An error occurred while processing the 'Internet_Sales_2001' partition of the 'Internet Sales' measure group for the 'Adventure Works' cube from the Adventure Works DW database.
Server: The operation has been cancelled.

During execution of queries that access data contained in ROLAP storage, an error message similar to the following is returned:

Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Adventure Works DW', Name of 'Internet Sales Order Details' was being processed.
Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'Adventure Works DW'.
OLE DB error: OLE DB or ODBC error: Class not registered.
Server: The operation has been cancelled.

Opening the Property Sheet for the Data Source from SQL Server Management Studio (SSMS), or the Data Source Designer in Business Intelligence Development Studio (BIDS), and attempting to edit the connection string returns the following error from Connection Manager:

The specified provider is not supported. Please choose different provider in connection manager

Checking the connection string from either SSMS or BIDS, it is similar to the following:

Provider=SQLNCLI.1;Data Source=MyServer;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW

Changing the connection string to use the native OLE DB provider resolves the error, for example:

Provider=SQLOLEDB.1;Password=Olapolap1;Persist Security Info=True;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer

Likewise, modifying the connection string to use the SQL Server 2008 SNAC also resolves it, for example:

Provider=SQLNCLI10.1;Integrated Security="";Persist Security Info=False;User ID=sa;Initial Catalog=AdventureWorksDW;Data Source=MyServer;Initial File Name="";Server SPN=""

The issue occurs because the SQLNCLI.1 provider does not ship with SQL Server 2008 and the provider name for SNAC connections changed in SQL Server 2008. The SNAC provider names for SQL Server 2005 and SQL Server 2008 are:

SQL Server 2005: SQLNCLI.1
SQL Server 2008: SQLNCLI10.1

To work around this issue, any of the following options can be used:

Edit the Data Source(s) for the database and switch from the SQLNCLI.1 provider to the SQLOLEDB.1 provider.
Edit the Data Source(s) for the database and switch from the SQLNCLI.1 provider to the SQLNCLI10.1 Native Client provider.
Download and install the SQL Server 2005 SQL Native Client from the following link: http://www.microsoft.com/downloads/details.aspx?FamilyID=d09c1d60-a13c-4479-9b91-9e8b9d835cdc&DisplayLang=en

John Desch
Microsoft SQL Server Escalation Engineer

How to Benefit from Unit Testing T-SQL: Reusing Manual Tests as Parts of Unit Tests
In this post we shall see how to incorporate a typical manual test in an automated test harness with minimal effort. Because our goal is to benefit from unit testing, and not to develop as many features as we can think of, we shall concentrate on the most common use cases. In fact, just one use case is so common that it happens more often than all others combined, at least in my practice. Naturally this whole "the most common" thing is very subjective, and your perspective may be very different...(read more)

SQL Compact Performance: PostSyncCleanup to turn off UpdateStatistics during initial download of SQL Compact Replication
SQL Compact Replication initial subscription may take significant time on devices. The UpdateStatistics operation adds additional overhead to the initial download.

Indicators:

The Client Agent log may show the start and end times of the UpdateStatistics operation. For example:

SQLCECA: 04/09/2009-12:21:26 UpdateStatistics Start app=<appName>.exe
SQLCECA: 04/09/2009-12:27:59 UpdateStatistics End hr=0 app=<appName>.exe

Workaround:

Recent hotfixes for SQL Compact 3.5 and 3.1 (KB article URLs given below) expose a new API that lets SQL Compact applications turn off UpdateStatistics during the initial download of replication. Disabling UpdateStatistics can be done only for merge replication, not for RDA operations. Use

SqlCeReplication repl = new SqlCeReplication();
…
repl.PostSyncCleanup = 2;

to make sure that update statistics is not run after the merge replication sync.

To use the new property in your SQL Compact app, in addition to applying the hotfix to the device, you will also need to deploy the updated System.Data.SqlServerCe.dll to the desktop folder where Visual Studio can find it; otherwise the following error will occur:

'PostSyncCleanup' is not a member of 'System.Data.SqlServerCe.SqlCeReplication'.

You can follow these steps to resolve the above error: extract the SYSTEM~1.004 file from the device cab, rename it to “System.Data.SqlServerCe.dll” and replace it at the appropriate path. When working with SQL Server Compact 3.5 (Visual Studio 2008) that path is:

“C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Devices\System.Data.SqlServerCe.dll”

If your project refers to “System.Data.SqlServerCe.dll” in another location then replace the DLL at that location as well.

Once the modified app runs on the device, if the hotfix bits were not deployed you may see the following error (as expected):

“Method not found: set_PostSyncCleanup System.Data.SqlServerCe.SqlCeReplication”

Uninstalling SQL Compact 3.5 SP1 RTM and installing the hotfix .cab files should resolve the above error.

References:

http://support.microsoft.com/kb/963060
http://support.microsoft.com/kb/960142

Posted by: Syed

SQL Null: a set of posts on the null
Null is a special marker used in Structured Query Language (SQL) to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL Null serves to fulfill the requirement that all true relational database management systems (RDBMS) support a representation of "missing information and inapplicable information". Codd also introduced the use of the lowercase Greek omega (ω) symbol to represent Null in database theory. NULL is also an SQL reserved keyword used to identify the Null special marker. Null has been the focus of controversy and a source of debate because of its associated three-valued logic (3VL), special requirements for its use in SQL joins, and the special handling required by aggregate functions and SQL grouping operators. Although special functions and predicates are provided to properly handle Nulls, opponents feel that resolving these issues introduces unnecessary complexity and inconsistency into the relational model of databases. Since Null is not a member of any data domain, it is not considered a "value", but rather a marker (or placeholder) indicating the absence of value. Because of this, comparisons with Null can never result in either True or False, but always in a third logical result, Unknown. In my next few posts, I will explore the null.
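A quick T-SQL illustration of that three-valued behaviour (my own example, ahead of the posts to come):

-- Comparisons with NULL evaluate to Unknown, so neither = nor <> ever matches a NULL
DECLARE @t TABLE (x INT NULL);
INSERT INTO @t (x) VALUES (1);
INSERT INTO @t (x) VALUES (NULL);

SELECT * FROM @t WHERE x = NULL;      -- no rows: 1 = NULL and NULL = NULL are both Unknown
SELECT * FROM @t WHERE x <> NULL;     -- no rows, for the same reason
SELECT * FROM @t WHERE x IS NULL;     -- returns the row carrying the NULL marker
SELECT COUNT(x), COUNT(*) FROM @t;    -- COUNT(x) = 1 (the aggregate ignores NULLs), COUNT(*) = 2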

How to Benefit from Unit Testing T-SQL: Exposing Failure in a User-Friendly Way
Well-written unit tests should succeed only when the module being tested meets the requirements completely. If a test fails, it can do much better than just indicate failure: if it manages to provide a clear and comprehensive report of what is wrong, that can save us a lot of troubleshooting time. We shall see a few examples soon. This post continues the series on unit testing; the previous posts are How to Benefit from Unit Testing T-SQL: choosing what not to test, How to Benefit from...(read more)

Presenting at SQLLunch Today!
I'm honored and excited to present SSIS Design Patterns 1c: Variables, Events, and Automated Custom Logging Patterns for SQLLunch today (11 Oct 2010)! In this talk, I cover building a custom logging solution in SSIS. I hope to see you there! :{>...(read more)

Twitter Input Adapter
We are delighted to see the StreamInsight developer community shaping up. Johan Åhlén, one of the most active members of this group, published a Twitter input adapter last week, featured on SQLServerCentral. It uses the Twitter API to receive near-real-time updates from the “sample method” and “filter method” APIs (the full firehose API of all public status updates is only accessible to users with high access levels). By converting tweets into point events, the adapter opens up a wide range of semantically rich analytics over Twitter data, expressed in declarative StreamInsight LINQ. Johan has published the sources of his adapter on the StreamInsight CodePlex site. Keep up the great work, Johan!

SQLServerCentral article (free account required)
CodePlex download

Regards,
The StreamInsight Team

Which Cloud Are You Talking About?
In the early days of computing, IT delivered computing using huge mainframes, accepting input from users, crunching the numbers and returning results to a very thin client - just a green-screen terminal. PCs came into the mix, allowing users a local experience with graphical interfaces, and even other, larger PCs acted as "servers". And now comes the "cloud". It seems everything with a network card, or anything that uses a network card, is "cloud" or "cloud enabled". But what is the "real" definition of the "cloud"? In effect, it's just outsourced or virtualized IT. There are really three levels of this definition.

The first is "Infrastructure as a Service", or IaaS. This means you just rent a server or storage on someone else's network. While this is interesting, you still have to figure out scale, management and so on. You're only relieved of the hardware costs and maintenance. It's interesting to IT because it buffers them against hardware costs and upgrades, at least directly. In fact, you're probably doing this yourself already, by using Hyper-V or VMware in your datacenters.

The next level is "Software as a Service", or SaaS. You can think of things like Hotmail or Expedia in this way. You log on to an application over the web, use it, and log off. Nothing to install, no upgrades, no management, no maintenance. This is also an interesting "cloud" offering, although the limitations are the same as in the mainframe days - you can't easily make changes to the software, and in some cases, you just get what you get. Great for consumers, but not as interesting to IT. In fact, for us IT folks, this is what we need to provide, not necessarily something we use.

Finally there is "Platform as a Service", or PaaS. In this model, you have a platform to write on, which relieves the hardware issue of IaaS; you can write code that delivers software as in SaaS; and you have a way to create, test and deliver both. You're insulated against hardware and even platform upgrades, and you're given a way to scale the application without caring where it really lives. This is the road that Microsoft takes with Windows Azure and SQL Azure.

Of course it isn't all rainbows and candy - there are things to think about, like security, uptime and more. There's a formal process for determining the applications that fit, which I'll write about later.

Help support Itzik's plea for T-SQL Windowing functions
One of our founders at Solid Quality, Itzik Ben-Gan, is known to most in the SQL community, particularly for his passion around the T-SQL language. He's recently written an amazing blog post that describes why T-SQL windowing functions are needed, along with a plea to help promote the ideas to the Microsoft product group via the Connect site. Anyone with an interest in T-SQL should read the post as, by itself, it provides great insights into the language and the possibilities that some subtle changes could...(read more)
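As a taste of what is being asked for (my own illustration, not taken from Itzik's post; dbo.Orders is a hypothetical table), the request centres on richer windowing support such as ordered aggregates in the OVER clause, which would let running totals be expressed declaratively rather than with self-joins or cursors:

-- Hypothetical example of the requested capability: ORDER BY and a frame inside OVER for an aggregate,
-- which SQL Server did not support at the time of writing
SELECT  custid,
        orderdate,
        val,
        SUM(val) OVER (PARTITION BY custid
                       ORDER BY orderdate
                       ROWS UNBOUNDED PRECEDING) AS running_total
FROM    dbo.Orders;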
