I was watching the split in Hudson development (the Hudson/Jenkins fork) rather apprehensively. My team has been using Hudson CI for over a year now, and we have come to rely on it as a key component of our software development ecosystem, with extensive customization and integration. We really like the product, and the idea that we would have to replace it was enough to give me the jitters (I even blogged about it here).
I view this as a positive development, even as some in the field disagree.
"Oracle today announced that it has submitted a proposal to the Eclipse Foundation to create a Hudson project in Eclipse and contribute the Hudson core code to that project."
"Under the new proposal, Oracle will be the project lead with Sonatype, Tasktop, and VMware as initial contributors. Other companies are also listed as project supporters."
It looks like my celebration was a bit premature, and there appears to be a lot of acrimonious feeling in the developer community. They voice their frustrations here, in the post by Mik Kersten.
At the same time, Kohsuke Kawaguchi uploaded a presentation to SlideShare with his own narrative on the split and his perspective on future developments.
I guess we have no choice but to wait until the dust settles.
It did not take long for Oracle to tighten its grip on the jewels it found itself in possession of with the Sun Microsystems acquisition. The examples include the Java spat that resulted in the lawsuit over Google's Android, and the changes that led to the Apache Foundation's withdrawal from the Java Community Process. Here is the most recent one - the expropriation of the Hudson Continuous Integration Server.
Not surprisingly, the Hudson developers bailed out, leaving Oracle with the only asset it really owns - the name "Hudson". The fork of the code is a fait accompli: the new Jenkins site is up and running, and the project is being considered for the Apache Foundation umbrella - where it logically belongs.
Oracle maintains that ousting the project's founder Kohsuke Kawaguchi was in the best interests of the project, because now they will be able to bring in "real structure" and make the project "corporate friendly". Needless to say, neither is a top priority for the open source community. Oracle has pushed the wrong key - again.
I’ve been using the Hudson continuous integration server for some time now, and – by and large – I’m very happy with the tool. It enjoys popularity in the open source community, and because of this popularity one has a wide spectrum of high-quality plugins to extend Hudson’s functionality.
Sometimes it is possible to find an unintended use for a plugin (which might also be an indication that you should clone it and make it new-use specific). Here is one such example: I was looking for a way to scrub my source files for hard-coded values, and came up with a reasonably fast command line executable (C#) which recursively crawls directories and produces a verbose report pinpointing each occurrence of the specified string tokens. I wanted to see the results surfaced through Hudson, and right before I started thinking about formatting HTML and hooking into Hudson's extensibility model, I got a better idea.
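As an aside, the heavy lifting of such a scan does not require C#; a few lines of shell can do a comparable recursive crawl. The token list and directory below are illustrative assumptions, not the ones from my tool:

```shell
#!/bin/sh
# Hypothetical sketch of a recursive hard-coded-value scan,
# comparable in spirit to the C# crawler described above.
# TOKENS is an illustrative regex of suspicious fragments.
TOKENS='Data Source=|Password=|192\.168\.'
SRC_DIR="${1:-.}"

# -r recurse, -n print line numbers, -E extended regex, -I skip binary files
grep -rnEI "$TOKENS" "$SRC_DIR"
```

Each match comes back as file:line:content, which is essentially the verbose report the C# tool produces.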
I’ve been using the Task Scanner plugin for Hudson by Ulli Hafner for a while, and found it very helpful – stable and highly configurable; then it occurred to me that this plugin could be repurposed to look for hard-coded values.
Though not often, my team has been burned by hard-coded database credentials, IP addresses and the like a number of times. These issues usually manifest themselves when an application is being deployed in an environment different from the one the developers are using. For instance, a developer might have been using a local instance of an RDBMS for speed and convenience, and might have – again, for convenience – put a connection string into his code (“yes, I know about configuration files, but it is just this one time, and I will change it right back as soon as I am done”). Now your build is broken, and you might spend hours tracking down the problem.
One solution would be to instruct the Task Scanner plugin to look for any part of the following connection string – or to take it as a whole (pay attention to special characters in the token strings):
The results of the code scan not only summarize all occurrences of the specified string, but also take you straight to the offending line of code in the specific module, display the trend in a clickable graph, and provide an at-a-glance report view.
Cloning the plugin to change its appearance, captions etc. would allow you to distinguish between the usages – whether you are looking for TODO tasks or for hard-coded values.
If you are not doing continuous integration, you should be; and if you are, then you ought to consider a database install an integral part of your build process.
Most CI servers out there allow you to execute batch or shell commands, and virtually every RDBMS provides a command line utility (and creating one of your own - if needed - is rather trivial).
Installing a database as part of your build process and populating it with data can play a role in your unit testing strategy, and should definitely be considered an integral part of your functional and regression testing procedures.
The following is but an example of how to make an MS SQL Server database install part of your build process, using Microsoft's command line utility SQLCMD and the open source continuous integration server Hudson. It can be applied to any other RDBMS package - MySQL, PostgreSQL, Oracle, DB2 or Sybase - with minor adjustments.
The command line utility can be downloaded separately, or installed as part of a SQL Server 200X installation. If your unit tests require database support, it might be a good idea to install the free SQL Server Express Edition, which can be started as part of the build process and shut down afterwards.
"The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt, in Query Editor in SQLCMD mode, in a Windows script file or in an operating system (Cmd.exe) job step of a SQL Server Agent job. This utility uses OLE DB to execute Transact-SQL batches."
This provides an opportunity to make the creation of a database and all dependent database objects part of your continuous integration build process with Hudson, by executing scripts - either integrated with a build management tool such as Maven, Ant or MSBuild (depending on your platform), or as plain batch or shell commands.
A very basic Windows batch command in Hudson installing a database through SQLCMD might look like this:
sqlcmd -S<IP address>,[port] -U<user> -P<password> -dmaster -i%WORKSPACE%\exec.sql
- -S specifies the IP address of the SQL Server instance to connect to
- -U and -P specify the user ID and password, respectively (this example uses SQL Server Authentication)
- -d specifies the default database to connect to; [master] is the one you want if creating a database is part of your build process
NB: for the complete list of commands, see the documentation. Keep in mind that the user ID and password are in clear text, and will be sent over the network as such (unless you are using DAC). To minimize the amount of hard-coded values, use include files in your scripts.
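A minimal sketch of the include-file mechanism, using SQLCMD's :r directive and :setvar scripting variables (the file and variable names here are illustrative assumptions):

```sql
-- constants.config (illustrative) would contain lines such as:
--   :setvar DBName SandboxDB
--   :setvar BackupRoot D:\backups

-- In the calling script, pull the shared declarations in with :r
:r constants.config
PRINT 'Installing database $(DBName), backups in $(BackupRoot)';
```

This way the sensitive or environment-specific values live in one file instead of being scattered across every script.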
Here is an example of how the SQL code could be organized, in order of execution (I will link script files soon):
1. [exec.sql] - main controller of the database installation process
2. [constants.config] - contains declarations of all variables used in the scripts; note that the file extension is irrelevant for execution
3. [backupDB.sql] - backs up the existing database (if present); note that the backup directory must exist on the remote computer
4. [createDB.sql] - creates the new database; note that all the paths must exist on the remote computer
5. [createTables.sql] - creates all tables in the database; it could also create indices and constraints, but I would advise against that because of potential dependency conflicts
6. [createFunctions.sql] - creates all the user-defined functions; the order in which objects are created in the database is important, and placing functions before [views] and [stored procedures] reflects the common dependency pattern, as both could use the functions
7. [createViews.sql] - creates all views
8. [createProcedures.sql] - creates all stored procedures
9. [createConstraints.sql] - adds constraints to the objects: primary keys, foreign keys, indices etc.
10. [importData.sql] - if your database has static data, this is the place to add it at creation time; you may want to swap steps 9 and 10, as your data might violate constraints (e.g. orphaned records); this could also be used in unit testing strategies
11. [createUsers.sql] - adds all users; this script assumes that logins already exist (if not, add a script to create logins first)
12. [grantPrivileges.sql] - grants privileges on the objects (e.g. EXECUTE)
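Until I link the actual script files, here is a hypothetical sketch of what [exec.sql], the main controller, might look like, chaining the pieces together with SQLCMD's :r directive:

```sql
-- exec.sql (illustrative sketch): run the install scripts in dependency order
:r constants.config
:r backupDB.sql
:r createDB.sql
:r createTables.sql
:r createFunctions.sql
:r createViews.sql
:r createProcedures.sql
:r createConstraints.sql
:r importData.sql
:r createUsers.sql
:r grantPrivileges.sql
-- Caution: a GO inside any included file ends the current batch and
-- discards the variables declared in constants.config.
```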
It is important to understand that the GO command completes the batch execution and flushes the buffer; it makes SQLCMD “forget” everything you might have declared prior to executing the command. In the above example, all variables declared in [constants.config] are no longer part of the script once a GO command is issued.
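The batch-scoping behavior is easy to demonstrate with a two-line experiment (illustrative):

```sql
DECLARE @env VARCHAR(10) = 'test';
PRINT @env;   -- fine: same batch as the declaration
GO
PRINT @env;   -- fails ("Must declare the scalar variable"): GO ended the batch
```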
When creating scripts, keep in mind the difference between local (Hudson) directories and remote (SQL Server) ones. The former refer to the location of the SQL script files checked out by Hudson from your source control, and are understood by SQLCMD and Hudson only; the latter specify directories that SQL Server understands – backup and database locations.
SQLCMD takes its arguments in clear text, which constitutes a potential security risk; use it only in a fully trusted environment. An alternative would be to implement a workaround, such as local batch files in secure directories with hard-coded user IDs and passwords, relying on the Hudson security matrix so that only users with access to the server would be able to see them. This does increase maintenance but is relatively easy to implement.
If you want SQLCMD-generated messages to be displayed in the Hudson console output, do not specify an output file. Alternatively, I could envision a plugin that would parse the output file and present it nicely in the Hudson environment; I might take a stab at it, time permitting.
The successful execution of the scripts relies on the correct order of creation – you must figure out the object dependencies and factor them into your scripts. Unfortunately, this is a classic Catch-22: the reliable way to determine dependencies is to query SQL Server after the objects have been created... which means you'd have to run all the scripts manually first, and adjust them accordingly.
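If you do run the scripts manually once, one way to mine the dependencies afterwards is to query the catalog views. This is a sketch; sys.sql_expression_dependencies exists in SQL Server 2008 and later (earlier versions have sp_depends):

```sql
-- List which object references which, to help derive a creation order
SELECT OBJECT_NAME(d.referencing_id) AS referencing_object,
       d.referenced_entity_name      AS referenced_object
FROM sys.sql_expression_dependencies AS d
ORDER BY referenced_object;
```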
The utility also allows you to perform many administrative tasks. For example, the ability to re-create a test environment on demand can save many hours of developers' time, and being able to back up and/or restore a database can be a real time saver. Here is an example of restoring a database from a backup to a local SQL Express instance:
sqlcmd -S .\SQLEXPRESS -i restoreDB.sql -v database="%1" -v root="D:\backups"
the [restoreDB.sql] might contain something like this:
IF EXISTS (SELECT * FROM sys.databases WHERE name = N'$(database)')
ALTER DATABASE $(database) SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE $(database)
FROM DISK = '$(root)\$(database).bak'
Caveat: the above script accepts all the default options contained in the backup, such as the locations of the data and log files; if, for some reason, those restore paths do not exist on the target machine, the restore operation will fail. You may want to query the backup for its metadata (e.g. the logical names of the data and log files), and then use the MOVE option to restore to different locations.
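A sketch of that approach; the logical file names ('SandboxDB_Data', 'SandboxDB_Log') and target paths below are assumptions you would replace with the values FILELISTONLY actually reports:

```sql
-- Inspect the logical file names recorded in the backup
RESTORE FILELISTONLY FROM DISK = '$(root)\$(database).bak';

-- Then restore, relocating the data and log files explicitly
RESTORE DATABASE $(database)
FROM DISK = '$(root)\$(database).bak'
WITH MOVE 'SandboxDB_Data' TO 'D:\data\$(database).mdf',
     MOVE 'SandboxDB_Log'  TO 'D:\logs\$(database).ldf',
     REPLACE;
```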
While SQLCMD does return errors to the calling process, which you can redirect to the Hudson console, you might want to check the status of the execution by querying MSDB - either as part of the same [restoreDB.sql] script that handles the restoration, or in a separate Hudson build step (new db session):
SELECT destination_database_name, MAX(restore_date) AS restored_on
FROM msdb.dbo.restorehistory
GROUP BY destination_database_name
One can spend time polishing the scripts, adding error handling and safeguards (e.g. wrapping them in stored procedures, parameterizing inputs, etc.). Ultimately, there is a need for a Hudson plugin to encapsulate SQLCMD functionality (I am tempted to take a stab at it myself, time permitting :).
Java figures prominently in Oracle's future. Let's wait and see how they are going to handle the open source community...
JavaFX will get aggressive investments. Oracle is going after Flash and Silverlight.
GlassFish is relegated to the status of Microsoft Access (if RDBMS metaphors are to be used) - departmental use at best. Bye.
NetBeans will remain a "lightweight development environment for Java developers". Ouch. RIP.
Interestingly enough, the open source continuous integration server Hudson was mentioned during this heavyweights' conference. Not sure what this spells for the application... I have a feeling that Oracle will try competing in the ALM market.
Sun Cloud is officially dead. Oracle CEO Larry Ellison declared it a fad. I think he is dead wrong on this, just as Bill Gates managed to go spectacularly wrong with his "Internet is but a fad" and "Nobody needs more than 640KB of RAM" assertions.
Nothing specific on either Solaris or MySQL...
Subversion is one of the finest version control systems out there (OK, Git aficionados might disagree :)), and it can run on any OS out there - Linux, FreeBSD, MacOS, Windows...
Well, almost. For a number of reasons, I am running my sandbox SVN environment on Windows (Windows 2003).
Here is a stack trace of an error logged into Hudson; the build failed on check-out step:
Checking out https://XYZ.test.agilitator.com:8443/svn/Sandbox/hudson
ERROR: Failed to check out https://XYZ.test.agilitator.com:8443/svn/Sandbox/hudson
org.tmatesoft.svn.core.SVNException: is not canonicalized; there is a problem with the client.
svn: REPORT of '/svn/Sandbox/!svn/vcc/default': 400 Bad Request (https://XYZ.test.agilitator.com:8443)
This somewhat cryptic error was thrown once the IP address of the SVN repository was replaced with a URL. It took us quite a while to discover the culprit: an uppercase letter in the URL.
For me, this "case sensitivity" underscores the Linux heritage of SVN; the Windows port was an afterthought...