all bits considered: data to information to knowledge

16Oct/12

Cannot connect to WMI provider: mofcomp to the rescue!

[problem]

Launching SQL Server Configuration Manager for SQL Server 2005 express edition fails with an error  "Cannot connect to WMI provider – Invalid class [0x80041010]"

[background story]

A SQL Server 2005 Express Edition instance has been installed on my Win2003 R2 server for quite a while now. It was configured to use named pipes and Windows Authentication, since the initial intention was to prevent any outside access to it...

Well, times change - I needed a sandbox to install Informatica PowerCenter for a proof of concept. Informatica PWC 9.5 uses a relational database (Oracle, SQL Server or IBM DB2) to store its metadata, and, according to the documentation, the express edition of any of these RDBMS is sufficient for this to work (barely - certainly not suitable for a large-scale production environment!)

While running the Informatica 9 Pre-Installation System Check Tool (i9Pi) I encountered a problem - the tool (and presumably Informatica PWC itself) uses JDBC to connect to the database, and therefore requires the TCP/IP protocol to be enabled for SQL Server 2005 Express (an easy task by itself). But the SQL Server Configuration Manager utility failed to launch with a bizarre message I had never seen before:

Cannot connect to WMI provider – Invalid class [0x80041010]

My initial thought was that the account under which I had logged onto the server lacked the privileges to launch the application, so I re-logged in with an admin password - only to face exactly the same message!

[solution]

Apparently, not all files necessary for the management console had been installed/registered with WMI... I cannot say whether this is by design, or whether my installation was somehow corrupted. But using the Managed Object Format (MOF) compiler - supplied by Microsoft with the SQL Server installation - I was able to remedy the situation by running the following command at the command prompt:

>mofcomp sqlmgmproviderxpsp2up.mof

Using MOF compiler

In my case - SQL Server 2005 Express Edition - the file that needed to be parsed and added to the WMI repository was [sqlmgmproviderxpsp2up.mof]; for other editions (Standard, Enterprise etc.) it will be [sqlmgmprovider.mof]. The location will also vary depending on the version and edition you've installed:

Microsoft SQL Server\90\Shared - for SQL Server 2005

Microsoft SQL Server\100\Shared - for SQL Server 2008 (yes, it can be affected, too!)
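
Putting it together for my case - a default SQL Server 2005 installation - the full sequence looked roughly like this (the install path below is an assumption for a default setup; adjust it to your actual location):

>cd /d "%ProgramFiles%\Microsoft SQL Server\90\Shared"
>mofcomp sqlmgmproviderxpsp2up.mof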

P.S. You need to start CMD with elevated privileges, as the changes are made system-wide.

Hope this helps to save someone minutes (or hours) of confusion!

 

11Jun/10

Keeping up with database changes

Scenario: several developers are hard at work cranking out code. The application under development relies on an RDBMS back-end for persistent storage (in this particular case the database is Microsoft SQL Server 2005, but the technique described applies to any RDBMS supporting DDL triggers). Developers are making changes to the client application code, creating/altering/dropping database objects (stored procedures, tables, views etc.) and, in the heat of the moment, forgetting to communicate the changes to their teammates, let alone the project manager...

Yes, I know - this is not how it is supposed to happen, and yet in the world out there, more often than not, it does happen... Here are some do-it-yourself ideas on how you could alleviate the pain and spare yourself some nasty surprises without buying more tools...

Enter DDL Triggers. This is a relatively new feature in Microsoft SQL Server (Oracle has had them for ages), and, among many other things (rolling back changes, for instance), they can be used to solve the problem stated above.
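
As a quick illustration of the rollback capability, here is a minimal sketch of a database-scope trigger that blocks table drops (the trigger name and message are made up for the example):

CREATE TRIGGER [tr_NO_TABLE_DROPS] ON DATABASE
FOR DROP_TABLE
AS
RAISERROR ('Dropping tables is not allowed in this database', 16, 1)
ROLLBACK -- undoes the DDL statement that fired the trigger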

A DDL (Data Definition Language) trigger in MS SQL Server can have one of two scopes - server or database. Table 1 at the end of this post lists all the events for which a DDL trigger can be created, grouped by scope. For the full syntax for creating a DDL trigger please see the vendor's documentation; here I will only touch on the basics needed to illustrate a solution.
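
For comparison, a server-scope trigger is created ON ALL SERVER rather than ON DATABASE - a minimal sketch, with the trigger name and message invented for illustration:

CREATE TRIGGER [tr_SERVER_DDL_ALERT] ON ALL SERVER -- server scope
FOR CREATE_DATABASE, DROP_DATABASE                 -- server-level events; see Table 1
AS
PRINT 'A database was created or dropped on this server'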

Here's the database-scope trigger we are going to use to monitor events:

CREATE TRIGGER [tr_DDL_ALERT] ON DATABASE    ---- trigger is created in the context of a given database
FOR CREATE_TABLE, DROP_TABLE, ALTER_TABLE    ---- which events to capture; see Table 1 for the full list
                                             ---- (use DDL_DATABASE_LEVEL_EVENTS to capture all database-level events)
AS
SET NOCOUNT ON
DECLARE @xmlEventData XML       ---- the generated event data is in XML format
SET @xmlEventData = EVENTDATA() ---- get data from the EVENTDATA() function

Now, this trigger by itself would not be of much use to anybody; you need to parse the information contained in the XML message passed to your trigger upon the event. You could parse it and send an email message, or you could save it into a database table, or both.

The following code saves it into a table [tbDDLEventLog] - which, of course, has to be created beforehand (a minimal sketch of its definition follows the INSERT below):

INSERT INTO dbo.tbDDLEventLog
(
EventTime
,EventType
,ServerName
,DatabaseName
,ObjectType
,ObjectName
,UserName
,CommandText
)
SELECT REPLACE(CONVERT(VARCHAR(50), @xmlEventData.query('data(/EVENT_INSTANCE/PostTime)')),'T', ' ')
,CONVERT(VARCHAR(100), @xmlEventData.query('data(/EVENT_INSTANCE/EventType)'))
,CONVERT(VARCHAR(100), @xmlEventData.query('data(/EVENT_INSTANCE/ServerName)'))
,CONVERT(VARCHAR(100), @xmlEventData.query('data(/EVENT_INSTANCE/DatabaseName)'))
,CONVERT(VARCHAR(100), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectType)'))
,CONVERT(VARCHAR(100), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectName)'))
,CONVERT(VARCHAR(100), @xmlEventData.query('data(/EVENT_INSTANCE/UserName)'))
,CONVERT(VARCHAR(MAX), @xmlEventData.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)'))
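
For reference, here is a minimal sketch of the [tbDDLEventLog] table the INSERT above expects; the column list comes from the INSERT, while the data types are assumptions chosen to match the CONVERTs:

CREATE TABLE dbo.tbDDLEventLog
(
 EventTime    VARCHAR(50)   -- PostTime, with the 'T' separator replaced
,EventType    VARCHAR(100)
,ServerName   VARCHAR(100)
,DatabaseName VARCHAR(100)
,ObjectType   VARCHAR(100)
,ObjectName   VARCHAR(100)
,UserName     VARCHAR(100)
,CommandText  VARCHAR(MAX)  -- the full T-SQL statement that fired the trigger
)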

The trigger can also send out email notifications using a (potentially obsolete) extended stored procedure; assemble the message (the @body variable) from the elements of the XML event data, as shown in the example above:

EXEC master..xp_smtp_sendmail
@TO = 'me@somewhere.com'
,@from = 'someone@somewhere.com'
,@message = @body
,@subject = 'database was modified'
,@server = 'smtp.mydomain.com'

A long-term solution would be, of course, to configure SQL Server Database Mail.
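
With Database Mail configured, the notification could look roughly like this (the profile name is an assumption for the example; @body is assembled from the XML event data as before):

EXEC msdb.dbo.sp_send_dbmail
 @profile_name = 'DBA_Notifications' -- assumed Database Mail profile name
,@recipients = 'me@somewhere.com'
,@subject = 'database was modified'
,@body = @body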

In my next post I will describe how database triggers can be integrated with Hudson - an open-source Continuous Integration (CI) server.

Table 1. List of the values to use with server and database scope DDL triggers

Server scope:
ALTER_AUTHORIZATION_SERVER
CREATE_DATABASE
ALTER_DATABASE
DROP_DATABASE
CREATE_ENDPOINT
DROP_ENDPOINT
CREATE_LOGIN
ALTER_LOGIN
DROP_LOGIN
GRANT_SERVER
DENY_SERVER
REVOKE_SERVER

Database scope:
CREATE_APPLICATION_ROLE
ALTER_APPLICATION_ROLE
DROP_APPLICATION_ROLE
CREATE_ASSEMBLY
ALTER_ASSEMBLY
DROP_ASSEMBLY
ALTER_AUTHORIZATION_DATABASE
CREATE_CERTIFICATE
ALTER_CERTIFICATE
DROP_CERTIFICATE
CREATE_CONTRACT
DROP_CONTRACT
GRANT_DATABASE
DENY_DATABASE
REVOKE_DATABASE
CREATE_EVENT_NOTIFICATION
DROP_EVENT_NOTIFICATION
CREATE_FUNCTION
ALTER_FUNCTION
DROP_FUNCTION
CREATE_INDEX
ALTER_INDEX
DROP_INDEX
CREATE_MESSAGE_TYPE
ALTER_MESSAGE_TYPE
DROP_MESSAGE_TYPE
CREATE_PARTITION_FUNCTION
ALTER_PARTITION_FUNCTION
DROP_PARTITION_FUNCTION
CREATE_PARTITION_SCHEME
ALTER_PARTITION_SCHEME
DROP_PARTITION_SCHEME
CREATE_PROCEDURE
ALTER_PROCEDURE
DROP_PROCEDURE
CREATE_QUEUE
ALTER_QUEUE
DROP_QUEUE
CREATE_REMOTE_SERVICE_BINDING
ALTER_REMOTE_SERVICE_BINDING
DROP_REMOTE_SERVICE_BINDING
CREATE_ROLE
ALTER_ROLE
DROP_ROLE
CREATE_ROUTE
ALTER_ROUTE
DROP_ROUTE
CREATE_SCHEMA
ALTER_SCHEMA
DROP_SCHEMA
CREATE_SERVICE
ALTER_SERVICE
DROP_SERVICE
CREATE_STATISTICS
DROP_STATISTICS
UPDATE_STATISTICS
CREATE_SYNONYM
DROP_SYNONYM
CREATE_TABLE
ALTER_TABLE
DROP_TABLE
CREATE_TRIGGER
ALTER_TRIGGER
DROP_TRIGGER
CREATE_TYPE
DROP_TYPE
CREATE_USER
ALTER_USER
DROP_USER
CREATE_VIEW
ALTER_VIEW
DROP_VIEW
CREATE_XML_SCHEMA_COLLECTION
ALTER_XML_SCHEMA_COLLECTION
DROP_XML_SCHEMA_COLLECTION
21Jan/10

SQL Server: passing data between procedures

The common programming task - passing data between procedures - is far from simple in Transact-SQL. One has to pay close attention to the particular version of the RDBMS that implements the language. To add to the confusion, ever since Microsoft SQL Server and Sybase parted ways (at versions 7.0 and 11.5, respectively), there have been two ever-diverging dialects of Transact-SQL.

The article How to Share Data Between Stored Procedures by Erland Sommarskog goes into excruciating detail explaining the different options a programmer has when there is a need to pass data between stored procedures. It saved my team some time, and provided an opportunity to learn. Thank you!

The following table is taken verbatim from the original post by Mr. Sommarskog, and links back to his site (a short example of one of the methods follows the table):

Method | Input/Output | SQL Server versions | Comment
Using OUTPUT Parameters | Output | All | Not generally applicable, but sometimes overlooked.
Table-valued Functions | Output | SQL 2000 | Probably the best method for output, but has some restrictions.
  Inline Functions | | | Use this when you want to reuse a single SELECT.
  Multi-statement Functions | | | When you need to encapsulate more complex logic.
Using a Table | In/Out | All | Most general methods with no restrictions, but a little more complex to use.
  Sharing a Temp Table | | | Mainly for single pair of caller/callee.
  Process-keyed Table | | | Best choice for many callers to same callee.
  Global Temp Tables | | | A variation of Process-Keyed.
INSERT-EXEC | Output | SQL 6.5 | Does not require rewrite. Has some gotchas.
Table Parameters and Table Types | In/(Out) | SQL 2008 | Could have been the final answer, but due to a restriction it is only mildly useful in this context.
Using the CLR | Output | SQL 2005 | Does not require a rewrite. Clunky, but is useful as a last resort when INSERT-EXEC does not work.
OPENQUERY | Output | SQL 7 | Does not require rewrite. Tricky with many pitfalls.
Using XML | In/Out | SQL 2005 | A roundabout way that requires you to make a rewrite, but it has some advantages over the other methods.
Using Cursor Variables | Output | SQL 7 | Not recommendable.
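
As a quick taste of one of these methods, here is a minimal INSERT-EXEC sketch (the procedure and temp table names are made up for the example):

-- callee: returns a result set
CREATE PROCEDURE dbo.usp_ListNumbers
AS
SELECT n = 1 UNION ALL SELECT 2 UNION ALL SELECT 3
GO
-- caller: captures the callee's result set into a temp table
CREATE TABLE #numbers (n INT)
INSERT INTO #numbers (n)
EXEC dbo.usp_ListNumbers
SELECT * FROM #numbers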
29Dec/09

Look Ma, no SQL!

Is the Structured Query Language going the way of the dinosaurs?
First proposed back in the 1970s, relational database technologies have flourished, taking over the entire data-processing domain (with the occasional non-relational data store hiding in the long shadows of the [t]rusty mainframes). The days of glory may be over, and the reason could be... yes, you've guessed it - a paradigm shift.

The relational databases brought order into the chaotic world of unstructured data; for years the ultimate goal was to normalize data, organize it in some fashion, chop it into entities and attributes so it could be further sliced and diced to construct information... There was a price to pay, though - the need for a set-based language to manipulate the data, namely the Structured Query Language - SQL (with some procedural and multidimensional extensions thrown in...)

The Holy Grail was to get data to 5NF, and then create a litter of data warehouses - either dimensional or normalized - to analyze the data... Then again, maybe we could just leave the data the way it is, stop torturing it into a relational model - and gain speed and flexibility at the same time? That's what I call a paradigm shift!

Enter MapReduce: Simplified Data Processing on Large Clusters, another idea from Google (which also inspired Hadoop - an open-source implementation of the idea).

Google is doing it, Adobe is doing it, Facebook is doing it, and hordes of other, relatively unknown, vendors are doing it (lots of tacky names - CouchDB, MongoDB, Dynomite, HadoopDB, Cassandra, Voldemort, Hypertable... 🙂 )

IBM, Oracle and Microsoft have announced additional features for their flagship products: IBM with the M2 Data Analysis Platform based upon Hadoop, and Microsoft extending its LINQ (which goes beyond relational data) to include similar features... Sybase has recently announced that it implements MapReduce in its Sybase IQ database.

To be sure, the data still undergoes some pre-processing to be fully managed by these technologies, but to a much lesser degree. The technology is designed to abstract the intricacies of parallel processing and to facilitate management of large distributed data sets; it aims to eliminate not the need for relational storage but the need for SQL to manipulate the data... The idea is to allow analytic processing of the data where it lives, without expensive ETL, and with a minimal performance hit. The line is blurring between ORM, DBMS, OODBMS and the programming environment; between data and data processing...

With all that said, it might not be time to ditch your trusty RDBMS just yet... :) A team of researchers concluded that databases "were significantly faster and required less code to implement each task, but took longer to tune and load the data." Database clusters were between 3.1 and 6.5 times faster on a "variety of analytic tasks."