
Monday, October 11, 2010

How to Benefit from Unit Testing T-SQL: Reusing Manual Tests as Parts of Unit Tests


In this post we shall see how to incorporate a typical manual test in an automated test harness with minimal effort. Because our goal is to benefit from unit testing, and not to develop as many features as we can think of, we shall concentrate on the most common use cases. In fact, just one use case is so common that it happens more often than all others combined, at least in my practice. Naturally this whole "the most common" thing is very subjective, and your perspective may be very different...(read more)
Source: sqlblog.com

tSQL concatenate strings “1, 2, 3” and select results, too
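(The post itself isn't excerpted above. For context, a common pre-2017 way to build a delimited list such as "1, 2, 3" while still returning the underlying rows is STUFF with FOR XML PATH; the sketch below uses an invented table and may differ from the approach in the linked post.)

    -- Sample data (table and values are illustrative)
    DECLARE @Numbers TABLE (Value int);
    INSERT INTO @Numbers (Value) VALUES (1), (2), (3);

    -- Build the delimited string "1, 2, 3"
    DECLARE @List varchar(max);
    SELECT @List = STUFF((SELECT ', ' + CAST(Value AS varchar(12))
                          FROM @Numbers
                          ORDER BY Value
                          FOR XML PATH('')), 1, 2, '');

    -- Return the concatenated string and the underlying rows
    SELECT @List AS ConcatenatedValues;
    SELECT Value FROM @Numbers;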

Source: jerrytech.blogspot.com

Presenting at SQLLunch Today!
I'm honored and excited to present SSIS Design Patterns 1c: Variables, Events, and Automated Custom Logging Patterns for SQLLunch today (11 Oct 2010)! In this talk, I cover building a custom logging solution in SSIS. I hope to see you there! :{>...(read more)
Source: sqlblog.com

Which Azure Cloud Storage Model Should I Choose for my Application?
Most applications have four parts – input, computation, storage and output. I’ll write about all of these over time, but today I want to focus on how to choose the storage part of the equation. This won’t be a full tutorial, full of detail and all that, but I will put forth some “rules of thumb” that you can use as a starter. I’ll also try and include some good pointers so you can research more.

NOTE: Utility Computing, or “the cloud”, or platform/software/architecture as a service, is a young discipline, and most certainly will change over time. That’s its advantage – that it can change quickly to meet your needs. However, that means information (like this blog entry) can be out of date. Make sure you check the latest documentation for Azure before you make your final decision, especially if the date on the post is older than six months or so. I’ll try and come back to update them, but check them nonetheless. Always start your search on the official site: http://www.microsoft.com/windowsazure/

Let’s start out with your options. You have four types of storage you can use for your applications:
· Blobs
· Tables
· Queues
· SQL Azure databases

Here are some rules of thumb for when you use each – and again, these are only guidelines. I’ll point you to some documentation for more depth.

Blobs: Use these for binary data (in other words, not text), and think of them like files on your hard drive. There are two types – block and page. Use block blobs for streaming, like when you want to start watching a movie before it even completes the download. You can store files up to 200GB at a pop. And they parallelize well. Use page blobs when you need a LOT of storage – up to a terabyte – and pages are stored in 512KB, well, pages. You can access a “page” directly, with an address.

Tables: Massive text blocks accessed using Key/Value pairs. If you’re used to “NoSQL”, you have the idea. You get one index on that pair, so choose the sort or search wisely. Not relational, but large, and fast.

Queues: This storage is used to transfer messages between blocks of code. If you think of the stateless-programming web-world, you need a way to tell one application something that isn’t event-based. This is how you do that. Since it’s a queue, it helps you with something called “idempotency”, which means that a single message on the queue will get processed once, and only once.

SQL Azure Databases: If you need relational storage, want to leverage Transact-SQL code you already have, or need full ACID, this is for you. There are size restrictions here, but I’ll not detail them so this information lives a little longer. Check out http://microsoft.com/sqlazure for specifications, whitepapers, the lot.

OK – I’ll end with a chart. This has some more information that you might find useful in your decision process:

If you need:               Blob        Table   Queue   SQL Azure
Structured Data                        Y               Y
Relational Database                                    Y
Server Side Processing                                 Y
Access from outside Azure  Y           Y       Y       Y
Messaging Infrastructure                       Y
Persistent Storage         Y           Y               Y
Size Limit                 200GB/1TB   100TB   100TB   50GB

More info on Azure Storage:
http://msdn.microsoft.com/en-us/library/ee924681.aspx
http://www.azurejournal.com/2008/10/windows-azure-data-storage/

Many thanks to my teammates, Stephanie Lemus and Rick Shahid, for help with the information in this post. Thanks for the help!
Source: sqlblog.com

tSQL Running Total: CTE wins
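(No excerpt is available above. The title refers to computing a running total; one CTE-based approach, sketched here against an invented table, uses recursion to carry the total forward row by row. The linked post's actual implementation and benchmarks may differ.)

    DECLARE @Sales TABLE (RowNum int PRIMARY KEY, Amount decimal(10,2));
    INSERT INTO @Sales (RowNum, Amount) VALUES (1, 10.00), (2, 25.00), (3, 5.00), (4, 40.00);

    WITH RunningTotal AS
    (
        -- Anchor: the first row starts the total
        SELECT RowNum, Amount, CAST(Amount AS decimal(18,2)) AS Total
        FROM   @Sales
        WHERE  RowNum = 1

        UNION ALL

        -- Recursive step: add the next row's amount to the running total
        SELECT s.RowNum, s.Amount, CAST(rt.Total + s.Amount AS decimal(18,2))
        FROM   RunningTotal rt
        INNER JOIN @Sales s ON s.RowNum = rt.RowNum + 1
    )
    SELECT RowNum, Amount, Total
    FROM   RunningTotal
    OPTION (MAXRECURSION 0);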

Source: jerrytech.blogspot.com

How to Benefit from Unit Testing T-SQL: Exposing Failure in a User-Friendly Way
Well-written unit tests should succeed only when the module being tested meets the requirements completely. If the test fails, it can do much better than just indicate failure. If the test has managed to provide a clear and comprehensive report of what is wrong, that can save us a lot of time troubleshooting. We shall see a few examples soon. This post continues the series on unit testing; the previous posts are How to Benefit from Unit Testing T-SQL: choosing what not to test and How to Benefit from...(read more)
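(A minimal illustration of the idea - reporting what is wrong rather than merely that something is wrong - using an invented table and rule; the post's own examples will differ.)

    -- A failing test that reports *which* rows are wrong, not just that something failed
    DECLARE @BadRows nvarchar(2048);

    SELECT  @BadRows = COALESCE(@BadRows + ', ', '') + CAST(OrderID AS nvarchar(12))
    FROM    dbo.Orders            -- hypothetical table under test
    WHERE   TotalDue < 0;         -- hypothetical rule: totals must never be negative

    IF @BadRows IS NOT NULL
        RAISERROR(N'Orders with negative TotalDue: %s', 11, 1, @BadRows);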
Source: sqlblog.com

Performance of SQL’s NullIf() Function (versus Case)
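(For reference, NULLIF(a, b) is logically equivalent to the CASE expression below, which is presumably what the linked post benchmarks; the sample values are illustrative.)

    DECLARE @a int = 5, @b int = 5;

    SELECT NULLIF(@a, @b)                          AS UsingNullIf,
           CASE WHEN @a = @b THEN NULL ELSE @a END AS UsingCase;
    -- Both columns return NULL here; with @b = 7, both would return 5.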

Source: jerrytech.blogspot.com

tSQL Regular Expressions
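(T-SQL has no built-in regular expression engine as of 2010; the usual options are LIKE/PATINDEX character classes for simple patterns, or a SQLCLR function for real regex. A small sketch of the built-in pattern matching follows, with invented data; the linked post may take a different route.)

    DECLARE @Samples TABLE (Val varchar(50));
    INSERT INTO @Samples (Val) VALUES ('ABC123'), ('abc'), ('12-34');

    -- LIKE and PATINDEX support character classes, but not full regular expressions
    SELECT Val,
           CASE WHEN Val LIKE '[A-Z][A-Z][A-Z][0-9]%' THEN 1 ELSE 0 END AS ThreeLettersThenDigit,
           PATINDEX('%[0-9][0-9]%', Val)                               AS FirstTwoDigitPosition
    FROM   @Samples;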

Source: jerrytech.blogspot.com

T-SQL joke
I was attending an ORM class for developers where I heard it said "Don't select star from a table". I've heard this many times and I'm sure you have, too. It makes for brittle code in case a column is added to the table. I started wondering if I could create an example that is an exception to the rule and came up with this: create table aTable ( star int ); select star from aTable ;...(read more)
Source: sqlblog.com

Which Cloud Are You Talking About?
In the early days of computing, IT delivered computing using huge mainframes, accepting input from users, crunching the numbers and returning results to a very thin client - just a green-screen terminal. PCs came into the mix, allowing users a local experience with graphical interfaces, with other, larger PCs acting as "servers". And now comes the "cloud". It seems everything with a network card, or anything that uses a network card, is "cloud" or "cloud enabled". But what is the "real" definition of the "cloud"? In effect, it's just outsourced or virtualized IT.

There are really three levels of this definition. The first is "Infrastructure as a Service", or IaaS. This means you just rent a server or storage on someone else's network. While this is interesting, you still have to figure out scale, management and so on. You're only relieved of the hardware costs and maintenance. It's interesting to IT because it buffers them against hardware costs and upgrades, at least directly. In fact, you're probably doing this yourself already, by using Hyper-V or VMWare in your datacenters.

The next level is "Software as a Service", or SaaS. You can think of things like Hotmail or Expedia in this way. You log on to an application over the web, use it, and log off. Nothing to install, no upgrades, no management, no maintenance. This is also an interesting "cloud" offering, although the limitations are the same as the mainframe days - you can't easily make changes to the software, and in some cases, you just get what you get. Great for consumers, not as interesting to IT. In fact, for us IT folks, this is what we need to provide, not necessarily something we use.

Finally there is "Platform as a Service", or PaaS. In this model, you have a platform to write on, which relieves the hardware issue of IaaS; you can write code that delivers software as in SaaS; and you get a way to create, test and deliver both. You're insulated against hardware and even platform upgrades, and you're given a way to scale the application without caring where it really lives. This is the road that Microsoft takes with Windows and SQL Azure.

Of course it isn't all rainbows and candy - there are things to think about, like security, uptime and more. There's a formal process for determining the applications that fit - which I'll write about later.
Source: sqlblog.com

How to flip a bit in SQL Server by using the Bitwise NOT operator
A new blogpost has been posted by SQLDenis on Sep 29, 2010 in category Microsoft SQL Server
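(The post isn't excerpted here; the core of the technique named in the title is the ~ bitwise NOT operator, which flips a bit column between 0 and 1. A minimal sketch with an invented table:)

    DECLARE @Flags TABLE (Id int, IsActive bit);
    INSERT INTO @Flags (Id, IsActive) VALUES (1, 0), (2, 1);

    -- ~ applied to a bit flips 0 to 1 and 1 to 0
    UPDATE @Flags SET IsActive = ~IsActive;

    SELECT Id, IsActive FROM @Flags;   -- Id 1 is now 1, Id 2 is now 0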
Source: blogs.lessthandot.com

The History of SQL Server Integration Services
A new blogpost has been posted by Ted Krueger (onpnt) on Oct 11, 2010 in category Database Administration
Source: blogs.lessthandot.com

New util proc: sp_tableinfo
Some of you might use my sp_indexinfo to easily get lots of details about the indexes in a database. I added an sp_tableinfo with info rolled up to table level. This allows you to quickly see the largest tables in a database and things like that. Check it out at http://www.karaszi.com/SQLServer/util_sp_tableinfo.asp ...(read more)
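(Tibor's procedure isn't reproduced here; as a rough idea of the kind of table-level rollup it returns, something similar can be approximated from sys.dm_db_partition_stats. This sketch is not the actual implementation.)

    -- Rows and reserved space per user table, largest first
    SELECT  OBJECT_SCHEMA_NAME(ps.[object_id])       AS SchemaName,
            OBJECT_NAME(ps.[object_id])              AS TableName,
            SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [RowCount],
            SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
    FROM    sys.dm_db_partition_stats ps
    WHERE   OBJECTPROPERTY(ps.[object_id], 'IsUserTable') = 1
    GROUP BY ps.[object_id]
    ORDER BY ReservedMB DESC;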
Source: sqlblog.com

Just a Haircut?
This weekend was a little hurried and harried. I immensely enjoyed presenting at SQL Saturday #48 ! I got to meet one of my co-workers face-to-face for the first time and any time I get to spend some time with K. Brian Kelley ( Blog | @kbriankelley ) is always a treat. In the rush, I didn't have time for a haircut before I left for Waltham to deliver From Zero To SSIS in cooperation with Boston SQL Training - the first ever SSIS course written cover-to-cover by me! I went shopping earlier and while...(read more)
Source: sqlblog.com

Security Auditing a Database
A new blogpost has been posted by thirster42 on Oct 8, 2010 in category Microsoft SQL Server Admin
Source: blogs.lessthandot.com

Recap Share- & PerformancePoint First Deployment
Happily, at my office we just rolled out our first MS/BI-stack dashboard project last week, using SharePoint 2010, PerformancePoint, Analysis Services, and SQL Server 2005 and 2008, on Windows 2008 R2. It was quite bumpy - needlessly so in some spots - but it works, looks great and offers excellent information to the business. Most important to me personally is that we now have all the infrastructure in place for the delivery of content through this channel moving forward, and it overcame a failed...(read more)
Source: sqlblog.com

SQL Server 2008 R2 Idea: Data Profiling of any table in SSMS
I love the data profiling task that was added to SSIS in SQL Server 2008. It allows you to get a profile of the data in a table - things like what the potential candidate keys are, how long the data values in each column are, what percentage of rows in a table are null in each column, what the distribution of values in the data is, and so on. If you haven't used it, I'd suggest taking a look at it. It's a real hidden gem in the product. However, I think they missed the #1 place that this...(read more)
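(Until something like that exists in SSMS, a rough profile of a single column can be had in plain T-SQL; the table and column names below are placeholders.)

    -- Quick profile of one column: null percentage, distinct count, min/max length
    SELECT  COUNT(*)                                             AS TotalRows,
            SUM(CASE WHEN SomeColumn IS NULL THEN 1 ELSE 0 END)  AS NullRows,
            100.0 * SUM(CASE WHEN SomeColumn IS NULL THEN 1 ELSE 0 END)
                  / NULLIF(COUNT(*), 0)                          AS NullPercent,
            COUNT(DISTINCT SomeColumn)                           AS DistinctValues,
            MIN(LEN(SomeColumn))                                 AS MinLength,
            MAX(LEN(SomeColumn))                                 AS MaxLength
    FROM    dbo.SomeTable;   -- placeholder table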
Source: sqlblog.com

Are you replacing Temp Tables with Snapshot Isolation?
My office is a cross-platform shop with both Oracle 10g and SQL Server 2000/5/8. Lately we've started doing some cross-training, so that, in theory at least, all the DBAs could function on either platform. The Oracle guys, I'm afraid, are way out ahead of me in this regard. In my defense, though, I have to manage more than 100 servers to their 5! Testing the waters with Oracle has been really interesting - I like looking at different technologies for comparison. One issue that we've had great discussion...(read more)
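(For anyone who wants to experiment with the feature under discussion, enabling and using snapshot isolation looks roughly like this; the database and table names are placeholders.)

    -- Enable snapshot isolation at the database level (uses tempdb-based row versioning)
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Optionally make READ COMMITTED use row versioning as well
    -- ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

    -- In a session: readers see a consistent snapshot and don't block writers
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        SELECT COUNT(*) FROM dbo.Orders;   -- placeholder table
    COMMIT;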
Source: sqlblog.com

SQL Server 2008 Service Pack 2 Released
A new blogpost has been posted by SQLDenis on Sep 29, 2010 in category Microsoft SQL Server Admin
Source: blogs.lessthandot.com

Help support Itzik's plea for T-SQL Windowing functions
One of our founders at Solid Quality, Itzik Ben-Gan, is known to most in the SQL Community, particularly for his passion around the T-SQL language. He's recently written an amazing blog post that describes why these functions are needed, along with a plea for helping promote the ideas to the Microsoft product group via the Connect site. Anyone with an interest in T-SQL should read the post as, by itself, it provides great insights into the language and the possibilities that some subtle changes could...(read more)
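(For context: SQL Server 2008 allows aggregates with OVER(PARTITION BY ...) but not an ORDER BY and frame inside the aggregate's window, which is the heart of what is being requested. The sketch below assumes the AdventureWorks Sales.SalesOrderHeader table; the second query shows the requested syntax, which later shipped in SQL Server 2012.)

    -- Supported in SQL Server 2008: a partitioned aggregate, no ordering or frame
    SELECT SalesOrderID, CustomerID, TotalDue,
           SUM(TotalDue) OVER (PARTITION BY CustomerID) AS CustomerTotal
    FROM   Sales.SalesOrderHeader;

    -- The requested form: an ordered window with a frame, i.e. a running total
    -- (not valid on 2008; this syntax arrived in SQL Server 2012)
    SELECT SalesOrderID, CustomerID, TotalDue,
           SUM(TotalDue) OVER (PARTITION BY CustomerID
                               ORDER BY OrderDate, SalesOrderID
                               ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM   Sales.SalesOrderHeader;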
Source: sqlblog.com

Twitter Input Adapter
We are delighted to see the StreamInsight developer community shaping up. Johan Åhlén, one of the most active members of this group, published a Twitter input adapter last week, featured on SQLServerCentral. It uses the Twitter API to receive near-real-time updates from the “sample method” and “filter method” APIs (the full firehose API of all public status updates is only accessible to users with high access levels). Converting tweets into point events, the adapter opens up a wide range of semantically rich analytics over Twitter data, expressed in declarative StreamInsight LINQ. Johan has published the sources of his adapter on the StreamInsight CodePlex site. Keep up the great work, Johan!

SQLServerCentral article (free account required)
CodePlex download

Regards,
The StreamInsight Team
Source: sqlblog.com

24 Hours of PASS for Latin America!
One of the biggest pushes I was trying to make while on the PASS board was to help grow the organization into other areas of the world, and probably a little out of the organization's comfort zone. We in western English-speaking countries often totally underestimate the passion for SQL Server and the number of professionals that work with it around the world. In my time on the board, I was both amazed and privileged to see the types of events already occurring and the vast number of attendees. Often...(read more)
Source: sqlblog.com

Enforcing naming conventions using database unit testing
My naming convention obsession

Anyone that has ever worked with me will tell you that I am a stickler for naming conventions. I have a somewhat obsessive reputation for it; I can’t help it – I seem to have a deep-seated, uncontrollable desire to ensure that every object in my database(s) is named consistently (is there anyone else out there equally as obsessive?). I have tried various techniques down the years to try and enforce naming conventions but none of them really worked. I’ve got scripts that alter object names (such a script is in my script library in fact) but these are only any use if they actually get run; they don’t actually enforce the conventions – that’s a manual step.

I’ve thought about using Policy-Based Management (PBM) to enforce naming conventions but given I’m a developer and not a DBA that’s not something that is readily available to me and besides, using PBM to enforce naming conventions is reactive rather than proactive if you are developing the code on a machine where the policies are not enforced.

Another option I looked into was Team Foundation Server (TFS) check-in policies; these are policies that can be applied to artefacts when they get checked in to TFS’s source control system. This option really appealed to me because the naming conventions could be enforced during check-in (i.e. very, very early) and didn’t require DBA intervention. In practice though, enforcing naming conventions using TFS check-in policies has a few sizable issues:

- It’s not easy. It would require you to parse the file that was getting checked in, decide what sort of object is defined in the file, and then check the name of the object based on things like object name, schema, etc.
- TFS check-in policies are not installed on the TFS server, they are installed on the development workstations. This means there is a dependency and, even though the source code for the check-in policies can be distributed with your application source code, I didn’t really like this.
- You’re relying on each developer to enforce the check-in policy and, with the greatest will in the world… that ain’t gonna happen. It’s too easy to turn them off.
- There is the obvious dependency on using TFS, not something every development shop uses even in the Microsoft space.

Database unit testing to the rescue

No, a better solution was needed and I came up with one in the shape of automated database unit testing. I have spoken recently about how I have become a big fan of database unit testing (see my post Experiences from writing sp_CascadingDataViewer – DB unit testing and code distribution) and being able to enforce naming conventions is one very good reason for that. Enforcing naming conventions using automated unit tests has a number of advantages:

- They can be written against the metadata of the objects themselves (i.e. by querying SQL Server’s system views) so there’s no parsing that needs to be done.
- They can be employed as part of a Continuous Integration (CI) process and run as a build verification test (BVT). Someone checks in an object that violates the naming convention? Bang: broken build! Developers can’t circumvent the tests.
- Nothing needs to be installed on the development workstations.
- The tests live wholly as part of your source code.
- They are not dependent on use of a particular source control system.

Hence I have written some unit tests that enforce the following naming conventions:

- Check constraints must be of the form CK_<schemaName><tableName>_XXX
- Column names must begin with a capital letter
- Column names cannot contain underscores
- Default constraints must be named DF_<schemaName><tableName>_<ColumnName>
- Foreign keys must be of the form FK_<parentObjectSchema><parentObject>_REF_<referencedObjectSchema><referencedObject>XXX
- Non-unique clustered keys must be of the form IXC_<schemaName><TableName>_<Column><Column><Column>…
- Non-unique non-clustered keys must be of the form IX_<schemaName><TableName>_<Column><Column><Column>…
- Unique clustered keys must be of the form IXUN_<schemaName><TableName>_<Column><Column><Column>…
- Unique non-clustered keys must be of the form IXUN_<schemaName><TableName>_<Column><Column><Column>…
- Primary keys must be of the form PK_<schemaName><tableName>
- Stored procedure names should not contain underscores
- Stored procedure names must begin with a capital letter
- Table names must not contain underscores
- Table names must begin with a capital letter

I’m not stating that you should agree with these naming conventions (I don’t necessarily agree with them myself – they were defined before I arrived on my current project); the point here is that all of these rules can be enforced and it’s very easy to do it. Here’s the code for the unit test that enforces the primary key naming convention:

    /* PK name is PK_<schemaName><tableName> */
    SET NOCOUNT ON;
    DECLARE @cnt INT;

    SELECT  *
    INTO    #t
    FROM    (
            SELECT  OBJECT_NAME(c.[parent_object_id]) AS [TableName],
                    OBJECT_SCHEMA_NAME(c.[parent_object_id]) AS [SchemaName],
                    c.*
            FROM    [sys].[key_constraints] c
            INNER JOIN [sys].[tables] t
                ON  c.[parent_object_id] = t.[object_id]
            LEFT OUTER JOIN sys.extended_properties ep
                ON  t.[object_id] = ep.major_id
                AND ep.[name] = 'microsoft_database_tools_support'
            WHERE   ep.[major_id] IS NULL
            AND     c.[type] = 'PK'
            ) q
    WHERE   [name] <> N'PK_' + [SchemaName] + [TableName];
    SET     @cnt = @@ROWCOUNT;

    IF (@cnt > 0)
    BEGIN
            DECLARE @msg NVARCHAR(2048);
            SELECT  @msg = '%d Primary Keys do not conform to naming convention (PK_<schemaName><tableName>):'
                           + STUFF((SELECT ', ' + [name] FROM #t a FOR XML PATH('')), 1, 2, '')
            FROM    (
                    SELECT  1 AS [Id], *
                    FROM    #t t
                    ) q
            GROUP   BY q.[Id];
            SELECT  @msg;
            RAISERROR(@msg, 11, 1, @cnt);
    END

Essentially all it does is pull all of the primary keys out of [sys].[key_constraints], check what each name should be, and then, if it finds any that violate the naming convention, raise an error containing the names of all the primary keys in question.
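(Not part of the original post: the same pattern extends to the simpler rules. For example, a sketch of the "column names cannot contain underscores" check against sys.columns could look like this.)

    /* Column names cannot contain underscores */
    DECLARE @offenders NVARCHAR(2048);

    SELECT  @offenders = COALESCE(@offenders + ', ', '')
                         + OBJECT_SCHEMA_NAME(c.[object_id]) + '.'
                         + OBJECT_NAME(c.[object_id]) + '.' + c.[name]
    FROM    sys.columns c
    INNER JOIN sys.tables t
        ON  c.[object_id] = t.[object_id]
    WHERE   c.[name] LIKE '%[_]%';

    IF (@offenders IS NOT NULL)
        RAISERROR(N'Columns containing underscores: %s', 11, 1, @offenders);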
Here’s the error obtained when running the test against [AdventureWorks] (I’ve highlighted the pertinent bit):

    Test method Prs.SchemaTests.NamingConventions.PrimaryKeys threw exception: System.Data.SqlClient.SqlException: 70 Primary Keys do not conform to naming convention (PK_<schemaName><tableName>):PK_ErrorLog_ErrorLogID, PK_Address_AddressID, PK_AddressType_AddressTypeID, PK_AWBuildVersion_SystemInformationID, PK_BillOfMaterials_BillOfMaterialsID, PK_Contact_ContactID, PK_ContactCreditCard_ContactID_CreditCardID, PK_ContactType_ContactTypeID, PK_CountryRegionCurrency_CountryRegionCode_CurrencyCode, PK_CountryRegion_CountryRegionCode, PK_CreditCard_CreditCardID, PK_Culture_CultureID, PK_Currency_CurrencyCode, PK_CurrencyRate_CurrencyRateID, PK_Customer_CustomerID, PK_CustomerAddress_CustomerID_AddressID, PK_DatabaseLog_DatabaseLogID, PK_Department_DepartmentID, PK_Document_DocumentID, PK_Employee_EmployeeID, PK_EmployeeAddress_EmployeeID_AddressID, PK_EmployeeDepartmentHistory_EmployeeID_StartDate_DepartmentID, PK_EmployeePayHistory_EmployeeID_RateChangeDate, PK_Illustration_IllustrationID, PK_Individual_CustomerID, PK_JobCandidate_JobCandidateID, PK_Location_LocationID, PK_Product_ProductID, PK_ProductCategory_ProductCategoryID, PK_ProductCostHistory_ProductID_StartDate, PK_ProductDescription_ProductDescriptionID, PK_ProductDocument_ProductID_DocumentID, PK_ProductInventory_ProductID_LocationID, PK_ProductListPriceHistory_ProductID_StartDate, PK_ProductModel_ProductModelID, PK_ProductModelIllustration_ProductModelID_IllustrationID, PK_ProductModelProductDescriptionCulture_ProductModelID_ProductDescriptionID_CultureID, PK_ProductPhoto_ProductPhotoID, PK_ProductProductPhoto_ProductID_ProductPhotoID, PK_ProductReview_ProductReviewID, PK_ProductSubcategory_ProductSubcategoryID, PK_ProductVendor_ProductID_VendorID, PK_PurchaseOrderDetail_PurchaseOrderID_PurchaseOrderDetailID, PK_PurchaseOrderHeader_PurchaseOrderID, PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID, PK_SalesOrderHeader_SalesOrderID, PK_SalesOrderHeaderSalesReason_SalesOrderID_SalesReasonID, PK_SalesPerson_SalesPersonID, PK_SalesPersonQuotaHistory_SalesPersonID_QuotaDate, PK_SalesReason_SalesReasonID, PK_SalesTaxRate_SalesTaxRateID, PK_SalesTerritory_Territor...

I am currently including these tests inside a C# test project inside Visual Studio 2010. Visual Studio has a rather nice feature that allows you to link to artefacts in other projects, and hence we can host our single test class containing all of these tests in one place and link to it from whichever test project we want (typically you will have a test project per database), thus following the DRY principle.

Here I show the dialog that demonstrates adding a link to an existing test class:

And how it appears in the project. Note that NamingConventions.cs exists in both test projects but one is just a link to the other:

Wrap-up

I’m not sure my colleagues are too happy about these new tests given that they’re now breaking the build more often, but nonetheless I think they realise the value (I guess I’ll find out tomorrow when they read this!!!). All-in-all it’s working very well for us and I’m now a very happy bunny knowing that naming conventions are being enforced and will continue to be so with zero effort from here on in. I have made the test class that contains all of the tests that I detailed above available on my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20101005/MIControlTest.zip.
If you want to use it you should simply be able to drop it into an existing C# database test project and away you go (change the tests to suit your naming conventions of course though). Hope this helps. If it does please let me know, I’d really love some feedback on this. @Jamiet
Source: sqlblog.com

List_all_SQL_Agent_jobs_and_the_owner_of_those_jobs
A new article has been posted in the wiki.
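(The wiki article isn't reproduced here; the essence of such a listing is msdb.dbo.sysjobs joined to the owner's login name via SUSER_SNAME. A minimal version, which may differ from the article's query:)

    -- List every SQL Agent job with its owner's login name
    SELECT  j.name                    AS JobName,
            SUSER_SNAME(j.owner_sid)  AS JobOwner,
            j.enabled
    FROM    msdb.dbo.sysjobs AS j
    ORDER BY j.name;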
Source: wiki.lessthandot.com

Wisconsin SQL Server User Group
A new blogpost has been posted by Ted Krueger (onpnt) on Oct 11, 2010 in category Database Administration
Source: blogs.lessthandot.com

Working with the SQL Server Powershell Provider and Central Management Servers from Powershell
I am a big fan of the Central Management Server setup that exists in SQL Server 2008, as well as how you can leverage this for the Enterprise Policy Management Framework that Laura Rubbelke ( Blog | Twitter ) created on Codeplex.  Today one of my DBA’s asked a question about how to leverage the list of servers in the CMS through PowerShell and I had to double back on my own code because in PowerGUI the SQL Server Powershell provider is not natively registered so I had to write some additional...(read more)
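(Not part of Jonathan's PowerShell approach: the registrations held by a Central Management Server can also be read directly from msdb. The view names below are written from memory, so verify them on your instance before relying on them.)

    -- Registered servers and their groups as stored by a Central Management Server
    -- (view and column names assumed; check your msdb)
    SELECT  g.name        AS GroupName,
            s.server_name AS ServerName,
            s.description AS Description
    FROM    msdb.dbo.sysmanagement_shared_registered_servers AS s
    INNER JOIN msdb.dbo.sysmanagement_shared_server_groups   AS g
            ON s.server_group_id = g.server_group_id
    ORDER BY g.name, s.server_name;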
Source: sqlblog.com

Using Dynamic MDX in Reporting Services: Part 1
If you're using Analysis Services as a data source for Reporting Services reports, you can build a simple dataset using the graphical query designer, but you'll want to switch to the generic query designer to create the MDX query string manually when you have more advanced requirements. Using the generic query designer, you can:

· Impose greater control over the sets that you want to add to the rows axis by using set functions.
· Add query-scoped named sets to the query in addition to calculated members. (Calculated members can also be added in the graphical query designer, but not named sets.)
· Build dynamic MDX queries.

When would you need a dynamic MDX query? Whenever you want to modify the query based on a condition known only at run-time, typically based on a parameter value. If you're using the graphical query designer, you can auto-generate the report parameter's query for available values by selecting the Parameter checkbox. When the user selects a value during report execution, Reporting Services passes the unique name for the selection to the query and all is well. However, there might be situations when the user selection doesn't come from the cube, so you must find a way to convert the parameter value into a value that will work with the query.

In a series of posts, I will explore the available options for working with dynamic MDX queries in Reporting Services. By dynamic MDX, I mean that the query can be different each time it executes. In this post, I cover the use of the StrToMember() and StrToSet() functions in parameters.

A very common scenario is the requirement to pass dates into a query. If you have a date filter for the report, do you really want users to navigate through a list of dates from the cube as shown below? This list of dates - even if it's arranged hierarchically by month, quarter, and year - is what you get when you build the parameter directly from the date hierarchy in the query designer as shown below.

Wouldn't a more user-friendly experience allow the user to select a date from a calendar control? I can do this by changing the auto-generated report parameter's data type to a Date/Time data type and clearing the "Allow multiple values" check box. I must also change the Available Values setting for the parameter to None. I can set the default value to "No default value" to force the user to make a selection, or do something nice like define an expression to set a date, like =Today().

So far, so good. But the problem now is that the date data type returned by the calendar control cannot be used by the MDX query without some intervention. I need to change the Parameter Value mapped to the query parameter in the Dataset Properties to an expression, like this:

="[Date].[Calendar].[Date].[" + Format(CDate(Parameters!DateCalendar.Value), "MMMM d, yyyy") + "]"

The expression that you use to convert a date like 2/1/2008 to a valid unique name in your Date dimension might look different. My example is specific to the Adventure Works 2008 R2 cube, which requires the date member to look like this: [Date].[Calendar].[Date].[February 1, 2008].

That's fine so far, but the result of this expression is a string and the MDX query requires a member or a set. The autogenerated query already makes this change for you, fortunately. However, if you're creating your query manually, you should understand what it's doing, especially if you need to make changes to it. The autogenerated query looks like this before I make changes:

SELECT NON EMPTY { [Measures].[Sales Amount] } ON COLUMNS,
NON EMPTY { ([Product].[Category].[Category].ALLMEMBERS ) }
DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM ( SELECT ( STRTOSET(@DateCalendar, CONSTRAINED) ) ON COLUMNS
FROM [Adventure Works])
WHERE ( IIF( STRTOSET(@DateCalendar, CONSTRAINED).Count = 1, STRTOSET(@DateCalendar, CONSTRAINED), [Date].[Calendar].currentmember ) )
CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

I prefer to simplify the query as shown below - removing the text highlighted in red above. The function does what it says - it changes the string (represented by the parameter @DateCalendar) into a set object. I remove the WHERE clause from the query as the FROM clause adequately restricts the query results to cell values related to the selected date. If I need the dimension properties in the report to display something, or if I need the cell properties for report formatting, I'll include only the ones I need, but for this example I have removed them all from the query.

SELECT NON EMPTY { [Measures].[Sales Amount] } ON COLUMNS,
NON EMPTY { ([Product].[Category].[Category].ALLMEMBERS ) } ON ROWS
FROM ( SELECT ( STRTOSET(@DateCalendar, CONSTRAINED) ) ON COLUMNS
FROM [Adventure Works])

You could change the StrToSet() function to a StrToMember() function and get the same result. It's not harmful to leave StrToSet() as it is. It just returns a set of one member in this case - the date from the calendar control, which is a valid set. The CONSTRAINED flag is used to prevent an injection attack and requires the expression to resolve to a valid member before the query executes.

One challenge that often confounds people working with MDX queries in the generic query designer is the inability to copy and paste the query into Management Studio for testing when parameters are in the query as shown above. The MDX query editor doesn't support parameters. Teo Lachev (blog | twitter) posted some advice for working with parameterized MDX queries in Management Studio which I encourage you to check out.

In my next post, I'll explain how to use the OLE DB for OLAP provider with dynamic MDX to create a dataset.
Source: sqlblog.com

SSIS runs in BIDS but not with SQL Agent
A new blogpost has been posted by Ted Krueger (onpnt) on Oct 6, 2010 in category Database Administration
Source: blogs.lessthandot.com
