
Monday, October 11, 2010

SQL Server High Availability: Quick view of Database Mirroring session between partners


From SQL Server 2005 onwards, database mirroring has been a good way to provide high availability without any special hardware requirements. While I was assisting at the SME lounge in the Database Platform area at the Tech-Ed North America conference, I had a few interesting questions from users about the various methods to monitor a database mirroring environment. Database Mirroring Monitor is the first-hand tool to see what is happening between the partners; other methods include querying the DBM metadata in catalog views, dynamic management views, PERFMON counters for DBM, DBM-related event notifications, and server events via the WMI provider. Using the catalog views and DMVs you can get mirroring information for each mirrored database on the instance. Here is a little TSQL to obtain commonly used mirroring metadata:

SELECT d.name, d.database_id, m.mirroring_role_desc,
       m.mirroring_state_desc, m.mirroring_safety_level_desc,
       m.mirroring_partner_name, m.mirroring_partner_instance,
       m.mirroring_witness_name, m.mirroring_witness_state_desc
FROM   sys.database_mirroring m
JOIN   sys.databases d ON m.database_id = d.database_id
WHERE  mirroring_state_desc IS NOT NULL

The above query uses the sys.database_mirroring and sys.databases catalog views, and can be run on either the principal or the mirror instance. To get witness-related information you can use the sys.database_mirroring_witnesses catalog view. Here is a sample TSQL that lists the corresponding principal and mirror server names, database name, and safety level for all the mirroring sessions for which this server is a witness:
SELECT principal_server_name, mirror_server_name,
       database_name, safety_level_desc
FROM   sys.database_mirroring_witnesses

The ENDPOINT is the key object in database mirroring, as connection management between SQL Server instances (since the 2005 version) is based on endpoints. BOL defines it as: "An endpoint is a SQL Server object that enables SQL Server to communicate over the network. For database mirroring, a server instance requires its own, dedicated database mirroring endpoint. All mirroring connections on a server instance use a single database mirroring endpoint. This endpoint is a special-purpose endpoint used exclusively to receive database mirroring connections from other server instances." Because the endpoint listens on a specific port, the port number is not kept in sys.database_mirroring_endpoints; it lives in the sys.tcp_endpoints view. Here is another TSQL that joins both catalog views to obtain information about the endpoints:

SELECT e.name, e.protocol_desc, e.type_desc, e.role_desc, e.state_desc,
       t.port, e.is_encryption_enabled, e.encryption_algorithm_desc,
       e.connection_auth_desc
FROM   sys.database_mirroring_endpoints e
JOIN   sys.tcp_endpoints t ON e.endpoint_id = t.endpoint_id

These are just the monitoring aspects of a database mirroring pair, but there is much more to monitoring its performance. SQL Server provides a System Monitor (aka PERFMON) performance object, SQLServer:Database Mirroring, with counters on each partner (principal and mirror). The Databases performance object provides some important information as well, such as throughput (the Transactions/sec counter). Here is the reference from the Technet article on the important counters to watch if you feel that performance has degraded within your DBM environment.
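The dynamic management views mentioned earlier add connection-level detail on top of the catalog views. As a sketch (the column list is assumed from the 2005+ DMV; verify it on your build), sys.dm_db_mirroring_connections shows the state of each mirroring connection on a partner:

```sql
-- Hedged sketch: inspect active mirroring connections on either partner.
-- Column names are assumed from the SQL Server 2005+ DMV; verify on your build.
SELECT connection_id,
       state_desc,            -- e.g. CONNECTED
       login_state_desc,      -- progress of the endpoint login handshake
       principal_name,        -- login used by the remote instance
       total_bytes_sent,
       total_bytes_received
FROM   sys.dm_db_mirroring_connections;
```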
On the principal:

- Log Bytes Sent/sec: Number of bytes of log sent to the mirror per second.
- Log Send Queue KB: Total kilobytes of log that have not yet been sent to the mirror server.
- Transaction Delay: Delay (in milliseconds) waiting for commit acknowledgement from the mirror. This counter reports the total delay for all transactions in process at that time. To determine the average delay per transaction, divide this counter by the Transactions/sec counter. When running asynchronous mirroring this counter will always be 0.
- Transactions/sec: The transaction throughput of the database. This counter is in the Databases performance object.
- Log Bytes Flushed/sec: The rate at which log records are written to disk. This is the log generation rate of the application and plays a very important role in determining database mirroring performance. This counter is in the Databases performance object.
- Disk Write Bytes/sec: The rate at which the disk is written to. This counter is in the Logical Disk performance object. Monitor it for the data as well as the log disks.

On the mirror:

- Redo Bytes/sec: Number of bytes of transaction log applied to the mirror database per second.
- Redo Queue KB: Total kilobytes of hardened log that remain to be applied to the mirror database to roll it forward.
- Disk Write Bytes/sec: The rate at which the disk is written to. This counter is in the Logical Disk performance object. Monitor it for the data as well as the log disks on the mirror.
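The same counters can also be read from TSQL instead of PERFMON through the sys.dm_os_performance_counters DMV; a sketch (the object_name prefix assumes a default instance, which uses the SQLServer: prefix):

```sql
-- Hedged sketch: read the DBM counters from TSQL rather than PERFMON.
-- The object_name prefix assumes a default instance (named instances use MSSQL$...).
SELECT object_name, counter_name, instance_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE 'SQLServer:Database Mirroring%'
  AND  counter_name IN ('Log Bytes Sent/sec', 'Log Send Queue KB',
                        'Transaction Delay', 'Redo Bytes/sec', 'Redo Queue KB');
```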
Lastly, you can take advantage of event notifications and SQL Trace, which record specified events. SQL Trace helps you troubleshoot performance, audit database activity, gather sample data for a test environment, debug Transact-SQL statements and stored procedures, and gather data for performance analysis tools, accessed through SQL Trace or SQL Profiler. Two event classes are important in a DBM environment: Database Mirroring State Change, which indicates that the mirroring state of a mirrored database has changed, and Audit Database Mirroring Login, which reports audit messages related to DBM transport security. Later in this series I will cover the troubleshooting aspects of a database mirroring environment.
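As a hedged sketch, a server-scoped event notification for the state-change event can be created like this (the DBMNotifyService name is hypothetical; the Service Broker queue, service, and route must exist before running it):

```sql
-- Hedged sketch: route mirroring state changes to a Service Broker service.
-- The service name DBMNotifyService is hypothetical; create the queue and
-- service first, then an activation procedure can react to each change.
CREATE EVENT NOTIFICATION DBMStateChangeNotification
ON SERVER
FOR DATABASE_MIRRORING_STATE_CHANGE
TO SERVICE 'DBMNotifyService', 'current database';
```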
Source: feedproxy.google.com

TSQL Beginners Challenge 17- Working with Employee hierarchical structures
This challenge is about processing HIERARCHIES. The problem is to identify all the employees to whom a given employee reports, directly or indirectly. The challenge is to write a query that takes an Employee Id as a parameter and lists all the employees to whom that employee reports, directly or indirectly.

Source Data

EmpRecord
----------------------------------------------------------------------------
<Employees>
    <Employee EmpId="1" EmpName="Niladri Biswas" />
    <Employee EmpId="2" EmpName="Arina Biswas" ReportsTo="1" />
    <Employee EmpId="3" EmpName="Deepak Kr. Goyal" ReportsTo="1" />
    <Employee EmpId="4" EmpName="Sachin Srivastav" ReportsTo="1" />
    <Employee EmpId="5" EmpName="Ranjit r" ReportsTo="2" />
    <Employee EmpId="6" EmpName="Priyanka Sarkar" ReportsTo="2" />
    <Employee EmpId="7" EmpName="Priyanka Das" ReportsTo="5" />
    <Employee EmpId="8" EmpName="Ravi Varman" ReportsTo="5" />
    <Employee EmpId="9" EmpName="Jiya Devi" ReportsTo="3" />
    <Employee EmpId="10" EmpName="Sharmistha Saha" ReportsTo="3" />
</Employees>

If the parameter contains 9 the query should return:

Expected Results

Level Path                                      Hierarchy
----- ----------------------------------------- -----------------
0     Jiya Devi/Deepak Kr. Goyal/Niladri Biswas Niladri Biswas
1     Jiya Devi/Deepak Kr. Goyal                Deepak Kr. Goyal
2     Jiya Devi                                 Jiya Devi

Your query should ideally look like the following:

DECLARE @EmpID VARCHAR(20)
SELECT @EmpID = 9
-- Your query here:

Script

Use the following script to generate the sample data:

DECLARE @Employees TABLE(EmpRecord XML)

INSERT INTO @Employees
SELECT '<Employees>
    <Employee EmpId = "1" EmpName = "Niladri Biswas"/>
    <Employee EmpId = "2" EmpName = "Arina Biswas" ReportsTo="1"/>
    <Employee EmpId = "3" EmpName = "Deepak Kr. Goyal" ReportsTo="1"/>
    <Employee EmpId = "4" EmpName = "Sachin Srivastav" ReportsTo="1"/>
    <Employee EmpId = "5" EmpName = "Ranjit r" ReportsTo="2"/>
    <Employee EmpId = "6" EmpName = "Priyanka Sarkar" ReportsTo="2"/>
    <Employee EmpId = "7" EmpName = "Priyanka Das" ReportsTo="5"/>
    <Employee EmpId = "8" EmpName = "Ravi Varman" ReportsTo="5"/>
    <Employee EmpId = "9" EmpName = "Jiya Devi" ReportsTo="3"/>
    <Employee EmpId = "10" EmpName = "Sharmistha Saha" ReportsTo="3"/>
</Employees>'

SELECT * FROM @Employees

Notes

The solution should be a single query that starts with "SELECT", "WITH" or ";WITH". The program should run on SQL Server 2005+. Column names should respect the desired output shown. Results must be sorted in ascending order of Level. The Hierarchy column should display parents starting from the topmost position down to the child sought. Use this forum to discuss your questions related to TSQL Beginners Challenge 17. To be able to post questions in the forum, you need to be a member of the group TSQL Beginners Challenges.
Source: feedproxy.google.com

A Solution: T-SQL Challenge 30
Check out the original post @ http://www.epperlys.com

The Challenge

The details of the challenge can be found on Beyond Relational’s website under TSQL Challenge 30 – A Matrix Transposition challenge in TSQL.  The challenge seemed to be targeting a couple of different skills:

- Effective use of the ROW_NUMBER() function
- Though not required, CTEs
- Pivoting and Unpivoting (at least conceptually)
- Keeping proper track of rows and columns

Before we dive in, I should point out that I received the 7th highest score . . . not the greatest, but I was happy just to be on the list.  I tell you this not to boast, but to warn you that my solution has problems . . . but we’re also going to look at the steps (actually, one defining step) the winner took and why his solution was better.

My Solution

Let’s start with a view of the whole script and then attempt to unpack the thought processes.

with m as (
    select id, col1, ROW_NUMBER() over(partition by id order by col1) c from TC30_Metrics
    union all
    select id, col2, ROW_NUMBER() over(partition by id order by col2) c from TC30_Metrics
    union all
    select id, col3, ROW_NUMBER() over(partition by id order by col3) c from TC30_Metrics
    union all
    select id, col4, ROW_NUMBER() over(partition by id order by col4) c from TC30_Metrics
    union all
    select id, col5, ROW_NUMBER() over(partition by id order by col5) c from TC30_Metrics
)
,m2 as (
    select id, col1, c,
           row_number() over(partition by id, c order by col1) as r
    from m
)
,m3 as (
    select m2.id
          ,case m2.c when 1 then col1 end as col1
          ,case m2.c when 2 then col1 end as col2
          ,case m2.c when 3 then col1 end as col3
          ,case m2.c when 4 then col1 end as col4
          ,case m2.c when 5 then col1 end as col5
          ,r
    from m2
)
select m3.id
      ,max(m3.col1) as col1
      ,max(m3.col2) as col2
      ,max(m3.col3) as col3
      ,max(m3.col4) as col4
      ,max(m3.col5) as col5
from m3
group by m3.id, m3.r
order by id

New Column Assignment

First, I wanted to tag the values with their appropriate columns.
As such, the goal of the first CTE was to put all of the values from the 5 columns into the same column of our first virtual table (effectively unpivoting).  Furthermore, we need to reorder them, so we use the ROW_NUMBER() function, ordering by the respective column.  The “partition by id” portion of the OVER() clause is required because the challenge states that there may be multiple matrices that need to be transposed.  This will start the ordering process over for each new ID.  The new column is simply named “c” because it is the surrogate identifier for the columnar data.  Now the view of the data looks like:

id col1 c
-- ---- --
1  1    1
1  3    2
1  7    3
1  8    4
1  9    5
1  1    1
1  3    2
1  4    3
1  5    4
1  8    5
1  0    1
1  1    2
1  2    3
1  2    4
1  5    5
1  1    1
1  1    2
1  3    3
1  4    4
1  6    5
1  3    1
1  4    2
1  5    3
1  6    4
1  7    5

Row, Row, ROW_NUMBER()

The second step in my logic was to assign the columnar data to rows according to value order.  Again, this is simply performed by another ROW_NUMBER() function, partitioned by ID and c, then ordered on the value.  I named the new virtual column “r” because it is the surrogate identifier for the row in which the value is to be placed.  This makes the data look like:

id col1 c r
-- ---- -- --
1  0    1  1
1  1    1  2
1  1    1  3
1  1    1  4
1  3    1  5
1  1    2  1
1  1    2  2
1  3    2  3
1  3    2  4
1  4    2  5
1  2    3  1
1  3    3  2
1  4    3  3
1  5    3  4
1  7    3  5
1  2    4  1
1  4    4  2
1  5    4  3
1  6    4  4
1  8    4  5
1  5    5  1
1  6    5  2
1  7    5  3
1  8    5  4
1  9    5  5

Put It Back Together

This view of the data gives us the proper coordinates for constructing the final query.  I chose to do this in two steps, but the following could easily have been combined.  First, I performed the first step of a pivot so that the values got into the appropriate columns.  You can note, by the data below, that this causes NULL values to be inserted, as any one combination from the data above renders a value in only one column at a time.
It’s kinda fun because we get this nice diagonal effect in the data:

id col1 col2 col3 col4 col5 r
-- ---- ---- ---- ---- ---- --
1  0    NULL NULL NULL NULL 1
1  1    NULL NULL NULL NULL 2
1  1    NULL NULL NULL NULL 3
1  1    NULL NULL NULL NULL 4
1  3    NULL NULL NULL NULL 5
1  NULL 1    NULL NULL NULL 1
1  NULL 1    NULL NULL NULL 2
1  NULL 3    NULL NULL NULL 3
1  NULL 3    NULL NULL NULL 4
1  NULL 4    NULL NULL NULL 5
1  NULL NULL 2    NULL NULL 1
1  NULL NULL 3    NULL NULL 2
1  NULL NULL 4    NULL NULL 3
1  NULL NULL 5    NULL NULL 4
1  NULL NULL 7    NULL NULL 5
1  NULL NULL NULL 2    NULL 1
1  NULL NULL NULL 4    NULL 2
1  NULL NULL NULL 5    NULL 3
1  NULL NULL NULL 6    NULL 4
1  NULL NULL NULL 8    NULL 5
1  NULL NULL NULL NULL 5    1
1  NULL NULL NULL NULL 6    2
1  NULL NULL NULL NULL 7    3
1  NULL NULL NULL NULL 8    4
1  NULL NULL NULL NULL 9    5

Lastly, we have to get rid of the NULL values and make sure that each matrix is ordered together (via the ID column).  This is accomplished by grouping on the ID and the surrogate row identifier (“r”) we established in the second step.  With that, we have our solution!

What the Winner Did Better

The winner of this challenge, Muhammad AlPasha, smoked us all with one major approach difference.  Looking back on the first thing I did for my solution (which EVERY other submission on the winners list did as well), unpivoting by using UNION ALL led to our demise.  Performing 5 SELECT statements on the same table, though you end up with one result set, causes 5 reads on the base table.  Muhammad did it with one read (view his solution).  Instead of the unpivot, he tacked on the new column assignments for each respective value by creating 5 new columns.  I think this approach feels antithetical to most of us out here in TSQL land because it feels like something a mainframe application would attempt to do.  But it was genius, because the number of reads this solution incurred would always be 1/5 of anything the immediate UNPIVOTers were going to do.
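The single-read idea can be sketched like this (a hedged reconstruction of the concept, not Muhammad AlPasha's actual code): compute all five column tags in one scan of the base table, then explode each row into (value, column-tag) pairs with CROSS APPLY instead of five UNION ALL reads.

```sql
-- Hedged sketch of the one-scan approach (not the winner's exact query):
-- one pass over TC30_Metrics produces all five orderings at once.
with tagged as (
    select id, col1, col2, col3, col4, col5,
           row_number() over(partition by id order by col1) as c1,
           row_number() over(partition by id order by col2) as c2,
           row_number() over(partition by id order by col3) as c3,
           row_number() over(partition by id order by col4) as c4,
           row_number() over(partition by id order by col5) as c5
    from TC30_Metrics
)
-- explode each row into five (value, column-tag) pairs without re-reading the table
select t.id, x.val, x.c
from tagged t
cross apply (
    select col1 as val, c1 as c union all
    select col2, c2 union all
    select col3, c3 union all
    select col4, c4 union all
    select col5, c5
) x
```

From here the row assignment and pivot-back steps proceed much as in my solution, but against a derived set rather than five fresh scans of the base table.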
Below are the query plans of the two solutions; the 5 reads from my solution are quite apparent.  I realize you can’t see the detail of the pictures, but you can make out the differences.  The remainder of the mental gymnastics isn’t necessarily identical, but is quite similar.  The second CTE explodes the data so that it is similar to my first step.  It then continues to calculate the appropriate row and pivot the data back for the final result.  It took a little more code, but it was most definitely the superior plan.  Kudos, indeed.

. . . complete my joy by being of the same mind, having the same love, being in full accord and of one mind.  Do nothing from rivalry or conceit, but in humility count others more significant than yourselves. Philippians 2:2-3 ESV
Source: feedproxy.google.com

SQL Null: a set of posts on the null
Null is a special marker used in Structured Query Language (SQL) to indicate that a data value does not exist in the database. Introduced by the creator of the relational database model, E. F. Codd, SQL Null serves to fulfill the requirement that all true relational database management systems (RDBMS) support a representation of "missing information and inapplicable information". Codd also introduced the use of the lowercase Greek omega (ω) symbol to represent Null in database theory. NULL is also an SQL reserved keyword used to identify the Null special marker. Null has been the focus of controversy and a source of debate because of its associated three-valued logic (3VL), special requirements for its use in SQL joins, and the special handling required by aggregate functions and SQL grouping operators. Although special functions and predicates are provided to properly handle Nulls, opponents feel that resolving these issues introduces unnecessary complexity and inconsistency into the relational model of databases. Since Null is not a member of any data domain, it is not considered a "value", but rather a marker (or placeholder) indicating the absence of value. Because of this, comparisons with Null can never result in either True or False, but always in a third logical result, Unknown. In my next few posts, I will explore the null.
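A quick illustration of that third logical result: comparing anything with Null, even Null itself, yields Unknown, so a WHERE clause filters the row out, and only the IS NULL predicate finds the marker. A small sketch:

```sql
-- NULL = NULL evaluates to Unknown, not True, so the ELSE branch runs.
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'unknown' END AS comparison;

DECLARE @t TABLE (val INT);
INSERT INTO @t VALUES (1), (NULL);

SELECT COUNT(*) FROM @t WHERE val = NULL;   -- 0: Unknown rows are filtered out
SELECT COUNT(*) FROM @t WHERE val IS NULL;  -- 1: the predicate made for the marker
```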
Source: brian.chipsofttech.com

SQL Null: Grouping and sorting
Because SQL Server defines all Null markers as being unequal to one another, a special definition was required in order to group Nulls together when performing certain operations.  SQL defines “any two values that are equal to one another, or any two Nulls” as “not distinct”.  This definition of not distinct allows SQL to group and sort Nulls when the “GROUP BY” clause is used.  Let’s look at an example using GROUP BY and nulls.  You will see that SQL Server does group them together by using the not-distinct definition:

DECLARE @table1 TABLE(
    ID INT,
    myStr VARCHAR(100))

INSERT INTO @table1(ID, myStr)
VALUES (1, 'value a'), (2, 'value b'), (3, 'value a'),
       (4, 'value a'), (5, 'value b'), (6, null),
       (7, 'value b'), (8, null), (9, null),
       (10, 'value a'), (11, null), (12, null)

SELECT * FROM @table1

SELECT myStr,
       COUNT(1) as Count
FROM   @table1
GROUP  BY myStr

The 1st select shows the raw values:

ID  myStr
1   value a
2   value b
3   value a
4   value a
5   value b
6   NULL
7   value b
8   NULL
9   NULL
10  value a
11  NULL
12  NULL

The 2nd shows the group by:

myStr   Count
NULL    5
value a 4
value b 3

As you can see, when SQL Server completes the GROUP BY clause it combines the rows with NULL values.  Given how NULL behaves in string concatenations, this is a little twist that can catch you off guard or make you think.  DISTINCT does the same thing:

SELECT distinct myStr FROM @table1

The result is 3 rows: NULL, value a, value b.  Hope this helps you when thinking about null values and grouping.  Try the example and look at how SQL Server handles the default sort order for Nulls.
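As for the default sort order the post asks you to try: in SQL Server, Nulls sort before all values in ascending order (the SQL standard leaves this choice to the implementation). A sketch, assuming the same @table1 is still in scope in the batch:

```sql
-- In SQL Server, NULL markers sort before all values in ascending order.
SELECT ID, myStr
FROM   @table1
ORDER  BY myStr ASC;   -- the NULL rows (IDs 6, 8, 9, 11, 12) come out first

SELECT ID, myStr
FROM   @table1
ORDER  BY myStr DESC;  -- and last when descending
```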
Source: brian.chipsofttech.com

T-SQL to get all the Error Messages and Error Codes
Technorati Tags: Error, Messages, Codes, SQL Server

Sometimes we need to look up error codes and error messages, typically while troubleshooting. Here is a small T-SQL query that will get you all the error codes and their respective error messages. Hope this helps.

SELECT message_id as Error_Code, severity, is_event_logged as Logged_Event,
       text as [Error_Message]
FROM sys.messages
WHERE language_id = 1033;
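The same view can be filtered once you know a specific error number or part of the message text; for example (8134 is the divide-by-zero error):

```sql
-- Look up one specific error, or search the message text.
SELECT message_id, severity, text
FROM   sys.messages
WHERE  language_id = 1033
  AND  message_id = 8134;        -- "Divide by zero error encountered."

SELECT message_id, severity, text
FROM   sys.messages
WHERE  language_id = 1033
  AND  text LIKE '%deadlock%';   -- all English messages mentioning deadlocks
```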
Source: feedproxy.google.com

Performance of SQL’s NullIf() Function (versus Case)

Source: jerrytech.blogspot.com

Find out parameters of the stored procedure
The following methods can be used to get the parameter information of a particular stored procedure. Let us consider the following procedure:

create procedure testing
(
    @id int,
    @user_name varchar(100),
    @join_date datetime
)
as
select @id as userid, @user_name as username, @join_date as join_date

To see the parameter information, use one of the following methods.

Method 1: Use the sp_help system procedure

exec sp_help testing

Look at the second resultset:

Parameter_name Type     Length Prec Scale Param_order Collation
-------------- -------- ------ ---- ----- ----------- ----------
@id            int      4      10   0     1           NULL
@user_name     varchar  100    100  NULL  2           SQL_Latin1_General_CP1_CI_AS
@join_date     datetime 8      23   3     3           NULL

Method 2: Use the information_schema.parameters view

select * from information_schema.parameters where specific_name='testing'

Method 3: Use the sp_sproc_columns system procedure

exec sp_sproc_columns testing

Execute methods 2 and 3 as well and compare the resultsets.
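A fourth option, sketched here, is the sys.parameters catalog view (available from SQL Server 2005), joined to sys.types for readable type names:

```sql
-- Method 4 (a sketch): query the sys.parameters catalog view directly.
SELECT p.parameter_id,
       p.name       AS parameter_name,
       t.name       AS type_name,
       p.max_length
FROM   sys.parameters p
JOIN   sys.types t ON p.user_type_id = t.user_type_id
WHERE  p.object_id = OBJECT_ID('dbo.testing')
ORDER  BY p.parameter_id;
```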
Source: feedproxy.google.com

tSQL Regular Expressions

Source: jerrytech.blogspot.com

tSQL concatenate strings “1, 2, 3” and select results, too

Source: jerrytech.blogspot.com

tSQL Running Total: CTE wins

Source: jerrytech.blogspot.com
