.NET Native makes you hungry for more

.NET Native makes you hungry for more. Ever since .NET Native was announced, I have been dreaming about the potential: C# developers everywhere taking their expertise to new domains without having to learn another programming language. At this moment it is of no immediate use to me, since I generally stay out of the mobile development mess (a space I am not enchanted by). So why do I care?

Quite frankly, .NET (along with any other managed runtime, Java included) is a poor fit for seriously high-performance computing apps. It’s generally well known, and has been “since the beginning”, that if you do high-performance computing, you work in C. Or C++ if you can stomach it. But if Microsoft can bring .NET Native to the desktop, it could be a game changer for Microsoft and open the door to many computing domains where .NET (or Windows) has traditionally been disregarded.

Also, you may have experienced some difficulty managing the .NET runtimes and the GAC. No more. With .NET Native, you get one EXE and one DLL, statically linked. The dev in the video says they’re working on shared libraries, but if you’re seriously reducing the footprint anyway, my question is: why take the risk?

You can find out more on the .NET Native site. I would also check out the video on Channel 9.

Another cool thing mentioned in the video is ‘on demand compilation’ for the target platform, which means using cloud compute cycles on the fly (now that is going to need serious MITM protection). But the folks in the video make a great point: as .NET Native capabilities expand, solution providers don’t need to re-ship their apps. Wow! That’s pretty convincing.

For my Perl friends: wouldn’t it be cool to be able to do this with Perl? And on all the platforms Perl supports?

For now, I have to keep dreaming of it being usable for systems apps. It will probably arrive sooner rather than later, though; the Microsoft commentary on the Channel 9 page is very reassuring (meaning it must already be in-plan).

Our Axis2 to Mojolicious Modernization

Our Axis2 to Mojolicious Modernization had the opportunity to be highlighted a couple of months ago at MojoConf 2014. Although I have been programming Perl since 1993 (after installing Linux from 3.5″ floppies!), this was the first Perl event I had ever attended. The place was filled with so many cool and passionate Perl programmers, and I felt it would have been the perfect time to have an irc2face app. Also, it was in Oslo, which multiplied the cool factor many times over.

The talk is about moving one of our integration products from JEE to Perl. Specifically, from Axis2 to Mojolicious. Take a look.

Some of my clients who were not aware of the move became concerned about the technology choice. After all, for the most part, Perl modernization news in the last five years has been confined to what people call the Perl echo chamber. I spent several weeks convincing clients, through data, that the technology choice is a non-issue in this case.

For many years, I have asserted, with clients as well as internally at our company, that the technology used to implement solutions matters only when there are concrete, tangible levels of vendor risk associated with that technology. Because of this, from perhaps 2003 until perhaps five years ago (coincidentally aligning with the rise in popularity of Python and Ruby), it would have been extremely difficult to make the case for Perl as a technology choice for customer solution delivery, especially the case that vendor risk is a non-issue. Yet I feel the tide turning once again since Perl 5.14. Frameworks like Mojolicious, tools like Carton, the DBIx::Class ORM, and excellent event-driven capability with POE have made the case even stronger.

I also feel the pulse changing as we move more compute cycles to public/private/hybrid cloud computing ecosystems, where enterprises focus on solutions and don’t need to worry about maintaining infrastructure to support technology at the micro level. Mojolicious is well positioned technically to become a viable Perl PaaS option for Web APIs.
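
To make that concrete, here is a minimal sketch of the kind of Web API Mojolicious makes trivial to stand up. It is a hypothetical /status endpoint built with Mojolicious::Lite, not code from our product:

    #!/usr/bin/env perl
    # Minimal JSON Web API using Mojolicious::Lite.
    use Mojolicious::Lite;

    # Hypothetical health endpoint; renders a JSON document.
    get '/status' => sub {
      my $c = shift;
      $c->render(json => { service => 'example', healthy => \1 });
    };

    # Start the built-in server: perl status.pl daemon
    app->start;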

Perl has always been a viable option, but its mindshare and enterprise-level perception still need work before using it for widespread solution delivery becomes a non-issue. We didn’t move from Axis2 to Mojolicious for the sake of being ‘techno-activists’ or simply because we love working with Perl. We had real, tangible, and measurable reasons for doing so. Making the case in enterprise organizations requires real, measurable outcomes, aligned with your company’s unique strategy and vision, which means there is no boilerplate for making the case.

Also, take a look at the many other talks on the MojoConf 2014 channel on YouTube.

Net::Gnats, GNU Gnats, and Gnatsweb

GNU Gnats and Gnatsweb (a defect tracking, support ticket, and case management system that has been around since the ’90s) have good fundamentals, but are very limited by the ways client programs must interact with them. The current mechanisms are sockets, SMTP, and direct interaction with the data store.

Gnatsweb is a web interface for Gnats; it interacts with gnatsd, the sockets interface that speaks the Gnats protocol. Some feel that gnatsd and the Gnats data storage mechanism should be ditched and rewritten entirely. Regardless of the scope of a GNU Gnats and Gnatsweb modernization, it would be fairly arrogant to say that Gnats should be completely rewritten: many years of deep thinking have gone into its features and function, limited only by the technology available at the time of its original implementation. With that in mind, I have decided to support protocol and data store backward compatibility in the new Gnatsweb, which means having a gnatsd client written in Perl.

That’s when I decided it would be better to use Net::Gnats than to write more comprehensive client code directly in Gnatsweb. This also gives us the opportunity to abstract data store operations, which would in turn give Gnatsweb a cleaner path for moving from one data store to another.
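
As a sketch of what that client layer looks like, here is roughly how Net::Gnats talks to gnatsd, based on the module’s documented synopsis. The host, credentials, and PR number are placeholders, and method names may shift as modernization proceeds:

    use strict;
    use warnings;
    use Net::Gnats;

    # Connect to a gnatsd instance (1529 is the default gnatsd port).
    my $gnats = Net::Gnats->new('gnats.example.com', 1529);
    $gnats->connect or die 'could not connect to gnatsd';

    # Log in to a database, then fetch a PR over the Gnats protocol.
    $gnats->login('default', 'someuser', 'somepassword');
    my $pr = $gnats->getPRByNumber(2);
    print $pr->asString() if $pr;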

Mike Hoolehan, the original author of Net::Gnats, has recently made me maintainer of the module. Since then, I have been making changes to the Gnatsperl project (which holds Net::Gnats) to modernize the build and delivery mechanism and the code structure, add unit tests with Test::More, build up code coverage with Devel::Cover, and slowly bring the code to a modern “best practices” state with Perl::Critic.
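
To give a flavor of the tests being added, here is a minimal Test::More sketch. It exercises only code paths that do not need a live gnatsd, and the constructor arguments are illustrative:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Compile check, then a basic constructor check.
    use_ok('Net::Gnats');
    my $g = Net::Gnats->new('localhost', 1529);
    isa_ok($g, 'Net::Gnats');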

While reading the Net::Gnats code, I found significant duplication between Gnatsweb and Net::Gnats. Some protocol routines in Net::Gnats can be improved by folding in functionality from Gnatsweb. Once that functionality is folded in and a new Net::Gnats version is released, more than 1,800 lines of code can be removed from Gnatsweb, which ultimately gives us better separation of concerns and a better path to modernization.

New Gnatsweb Maintainer

I’m happy to announce that I’m the new Gnatsweb maintainer.

For those who are not aware of GNATS, it’s the de facto (yet relatively unmaintained) GNU standard defect tracking system.

I have a connection with GNATS: I used it extensively at SMARTS and integrated it with CVS. It’s an excellent lightweight defect tracker. Unfortunately, it has fallen under the hammer of neglect, and many of its interfaces and integrations now look archaic. Perhaps, soon, GNATS and Gnatsweb will become one.

However, I think GNATS fundamentally has an excellent philosophy, and it can fairly be seen as having been “left behind” by the 2000s attitude of over-extension in “hardened” development processes.

The first step is to normalize Gnatsweb, which currently runs as a CGI Perl application. The documentation has already been condensed, and steps are being taken to simplify the 4,300+ line Perl script that encapsulates its functionality. Once we get it running under modern versions of Perl, we’ll look to Mojolicious for further modernization.
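
As a hypothetical sketch of where that leads, a Mojolicious::Lite app could take over one branch of the monolithic CGI dispatch at a time. The route and parameter names below are illustrative only:

    #!/usr/bin/env perl
    # Sketch: one Gnatsweb-style query route under Mojolicious::Lite.
    use Mojolicious::Lite;

    # Hypothetical replacement for one branch of the CGI dispatch.
    get '/query' => sub {
      my $c    = shift;
      my $expr = $c->param('expr') // '';
      # ... call into a Net::Gnats-backed model here ...
      $c->render(text => "query: $expr");
    };

    app->start;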

Take a look at the Gnatsweb project page for more information.

A Challenge from My Staff

As some of you might be aware, our office in Australia delivers training and certification. That’s right: along with the regular vendor and process training we provide, we also proctor certification exams. Pretty neat.

Because of the market, and some historical drivers, Canberra has a lot of engineers interested in Microsoft technology. Also, the big consulting firms that deliver whole programs of work need to keep their staff in tip-top shape. Naturally, the majority of exams we proctor are for Microsoft certifications.

Last month, one of our staff asked me why I don’t have any Microsoft certifications (currently, all the certifications I hold are for IBM software and technology). Looking down at my MacBook, and then back up, I answered, “Well, it hasn’t changed much in the last 10 years. Why bother?”

Then that comment bugged me. Really bugged me. How much of Windows Server have I really dug into over the last few years? Creating VMs and installing Active Directory, only to put vendor software or .NET test code on top for demos, development, and delivery verification. I haven’t really thought about Windows Server in detail since probably Windows Server 2008 or earlier.

Also, it’s been over 13 years since I earned my MCSE (which I didn’t keep up) and received this pin:


Yeah, it’s pretty old.

I started by looking at the networking features new in Windows Server 2012 and Windows Server 2012 R2. Hasn’t changed much? Au contraire. I would say the advancements are substantial.

So I took it on as a challenge. Some people have already told me that the certification is worthless, a waste of time.

My response is: “A certification is not just a measurement of knowledge. It is a tool that builds confidence in your own convictions.”

The reality is, no matter how skilled you think you might be, if you’re not in it, in detail, every day, then without a broad and reasonably strong understanding of the technology built through a guided baseline, it’s pretty tough to know what your options are in a given situation. You might not know the technology in detail at every given moment, but your brain works in cool ways to recall possibilities. There’s no app for that yet!

Time to dig in.  As I progress, I might post a nugget or two here.

Perl Coming Back?

This year marks my 20th year of developing Perl programs. I remember installing Linux on a 286 using 3.5″ floppies back then, quickly getting on with sed and awk, and soon thereafter making Perl my default programming language for getting things done. To this day, it remains my favorite for turning thought into code.

So when Sebastian Riedel (sri, of Mojolicious fame) tweeted that the 2013 CPAN statistics were out, I jumped to the page to take a look. There were a lot of good surprises. But as with any statistics, the analysis can follow the hopes and dreams of the analyst.

A Few Notes

It would be nice to have the data, or a means of correlation, to explain why the statistics look so good. A few thoughts on the statistics:

New PAUSE Accounts

Late last year, I had a PAUSE account created because I wanted to start contributing after years of being a user and occasional collaborator. So I’m part of that statistic. My intent is to help maintain a few modules that I work with frequently; I don’t foresee adding new modules.

New Releases

Undoubtedly, the new-release metric correlates directly with the recent availability of continuous integration and delivery through sites like GitHub. A new release can contain something as simple as a one-line change to the POD (Perl documentation).

Namespace Popularity

In my opinion, module namespaces have been the most mismanaged aspect of CPAN. My hope for this statistic is that owners and maintainers are starting to refactor their namespaces into something more manageable, or that more thought and mindshare is going into namespaces.


It’s good to see momentum on CPAN. Love it or hate it, Perl is still used extensively and is a flexible and accessible programming language. The CPAN metrics could use some additional correlation theory that potentially points to better module publishing and management. I’m looking forward to more Perl in 2014.

Compensation for Collaboration?

DevOps evolves organizational culture from blame-oriented to success-oriented. However, neglecting practical matters in DevOps adoption can spoil its promise. A primary practical matter is compensation for effective collaboration. Measuring and compensating effective collaboration is not trivial; it remains a ‘dark art’ fraught with subjectivity.

Today’s traditional compensation models revolve around the firm’s financial health and individual contribution. Measuring individual contribution can be subjective and tightly coupled with the manager’s performance qualification of direct reports. Qualification is typically based on annual or quarterly targets, either financial or based on the individual’s job description.

Collaborative efforts by employees naturally go beyond these bounds. Today’s models can largely ignore efforts that do not directly contribute to a measured outcome. Employee frustration, and employee mistrust of compensation models, leads to loss of human capital and results in retention problems. Retention problems decrease organizational performance and, ultimately, the organization’s potential.

Teaming in DevOps

If you haven’t heard, DevOps is a relatively newfangled philosophy in Information Technology. Fundamentally, it is an organizational behavior shift that promotes better outcomes through effective communication and sharing across spheres of influence. To achieve DevOps benefit realization, holistic adoption should be a fundamental goal. This presents real challenges for organizations and managers.

Traditionally, IT has been structured functionally into Planning, Building, and Running business applications. The functional teams must work with each other, and typically do so through formal communication. Each functional team has its own metrics for success: Planning focuses on budget, Building focuses on delivering functionality, and Running focuses on uptime (making sure the email server doesn’t stop working, for example). Structuring an organization this way is not wrong; individuals on those teams benefit from managers who understand their specializations and related needs.

Although the teams are functionally distinct, events cut across them and can taint motives. If an application stops working (your website, for example), the problem initially seems to belong to the Running team. In reality, it will impact the Building team, and ultimately the Planning team. While not every event can be anticipated, it is clear that their probability creates measurable risk.

DevOps, to a great degree, can mitigate this risk through proactive collaboration across the functional teams. In effect, we’re creating a forward-thinking organization instead of a reactive, blaming one. Risk mitigation ultimately translates into increased organizational value. HR professionals can then provide compensation mechanisms that concretely promote collaborative behaviors. Until this practical matter is resolved, however, it will be tough to build DevOps momentum.

Promoting Teaming

The first and most fundamental priority in any firm is outcomes outweighing investments. With that in mind, why wouldn’t managers dive head first into teaming?

Accelerated, effective, and broad adoption of teaming requires practical change that conflicts with today’s management objectives. Teaming continues to be a difficult proposition for many managers: they are charged with delivering outcomes in their direct sphere of influence, which naturally conflicts with self-organizing, dynamic teaming.

Managers are compensated based on the performance of their team, and compensation drives motives and decisions. To drive teaming, managerial performance must therefore be accompanied by objective measurement of dynamic team performance, that is, how their reports contribute to organizational outcomes beyond the manager’s sphere of influence, with compensation tied to it.

Measuring the team itself creates its own set of challenges. The measurement must be objective, and the measurement methods must be transparent. Once objective and transparent, the measurement becomes an automatic metric for individual compensation as well as an aggregate for managerial compensation. In effect, managers will naturally promote teaming.


With practical organizational change, DevOps will become much more achievable. Although collaboration and sharing are not DevOps-specific organizational traits, they can be a springboard for broad and objective teaming compensation models. Neglecting practical matters can inhibit accelerated adoption.

HR has a real opportunity to drive DevOps benefit realization. A key challenge is measuring contribution and compensating based on collaborative inputs. Objective, transparent measurements must be in place.