Tuesday, July 14, 2009

The Value Of Pictures In Software Design

There are some very good reasons why software engineers use visual communication to quickly and effectively transfer knowledge from one person to another.

While people have many different learning styles, and while everyone employs all of the styles to a greater or lesser degree, most people - or at least enough people to matter - are predominantly visual learners. Various sources claim that around 60% of us are visual learners, and that alone makes visual techniques worth using.

Visual communication transfers information at a very high rate compared with aural and textual communication. You can see at a glance a system's structure, or lack thereof; a verbal description takes longer.

Visual communication helps the sender, too. That is, the person creating the graphical representation has to understand the system well enough to draw it. This applies to verbal and written communication as well, so visual does not necessarily hold an advantage over other forms, but it's certainly a valid approach.

Visual communication helps a newcomer to the team come up to speed and become productive more rapidly.

Furthermore, graphical representations of software systems can reveal flaws, voids, and redundancies that are not immediately obvious in verbal or written communication. How many times have you drawn a system diagram, only to see that there's something that can be cleaned up? If you have not done this, try it - it's a worthwhile exercise.

To illustrate the value of pictures, let me point you at one that someone else drew, one that helped me to quickly understand a software framework's design and intended usage. I'm talking about the design of Mina, Apache's ongoing effort to wrap the basic Java NIO components. I think the graphics they provide are a great example of what we should be doing on our own projects.

Why is it important to understand the need for visual communication in software development? I've noticed something unsettling on the last couple of projects I've been on: no graphical representations of the systems. In each case, there was no perceived need for, or gain from, having images. For those of us who have been on non-trivial projects and witnessed the indispensable benefits of this form of communication, this is a red flag.

This red flag is a reliable indicator of a project in trouble. The particular dysfunctions might take one of several forms, but most likely it's a combination of them. I'll name a few here, without trying to make an exhaustive list.

First, it means that the system probably has no clear direction. The team doesn't know where it's headed, or at least doesn't have a common vision of the goal. A shiny new idea comes along, it's legitimately cool, and we go down that track. And that's great - provided it complements the existing framework. We can't follow all of those cool ideas; some we'll have to back-burner for another day or another project. A solid understanding of the system as a whole, bolstered by a few good drawings, can help us stay disciplined and on the road toward our goal. The images remind us of the goal, and keep us from working at cross-purposes.

A lack of graphical communication might indicate that the team is unable to create it. The system has grown disorganized and chaotic over time (that is, separation of concerns is largely gone), and the team, however good they are, cannot walk up to a whiteboard or sketch on paper a cohesive, overall design. The discipline of always having something drawn (something simple, not 200 pages) drives us to keep the system cohesive and well-organized.

A lack of drawings might indicate something far worse - that the team refuses to create them. This might stem from fear that the drawings will become stale (a legitimate, but addressable, concern), from the time it takes away from coding, from a misapplied development philosophy, from plain and simple laziness, or from a lack of experience building complex systems. Even with current development paradigms that eschew grotesquely large architectural documents, some documentation is essential.

It's this last point that seems to be the key element on the last couple of projects I've been on. Agile development philosophies encourage us to limit the amount of useless documentation that gets created. This is a worthy and noble goal. Sadly, some have twisted the intent of these goals, eliminating strong, time-tested tools from their arsenal, to the detriment of the projects and teams they represent.

Thursday, July 9, 2009

John Roth, founder of the Ozark Trail Association, has died

This is sad news, indeed. I only ever "met" John via email and discussion forums, but never had the privilege of meeting him in person. As I read the notes from the forum linked below, it's clear what a void his passing leaves.

Ozark Trail home page
Forum notes

Saturday, July 4, 2009

A Worser Mousetrap

<tirade>

I have this, uh, friend, that has a rodent problem. Yes...a friend has this rodent problem.

So anyway, in order to help this, uh, friend, with his rodent problem, I used to buy the Victor snap trap at Lowe's. Now they don't sell those any more; they sell the Tomcat snap trap instead. They're very similar, except that the Tomcat is $0.78 instead of $1.10. There is one other minor difference - the Victors work. They actually catch mice. The Tomcats don't. They suck, actually. The mice come, take the bait, and leave. The traps don't trip. I even jammed the bait into the little holes so that they'd really have to apply a lot of force to get it - and the traps still don't trip. I would've been happy to spring for the extra 32 cents for a trap that just ^*(&^*&^%$%# works.

I wouldn't even be griping here if I hadn't just come back from Lowe's, having returned a branch cutter that pulled apart during normal operation: I was cutting a redbud branch about the size of my thumb. Forty bucks and it lasted about an hour. Jeez, what a piece of crap.

Next time you and the kids want to have fun, go play this game. It's called Made in America. Go into your local Lowe's, and try to find a product that's made in America. Set a time limit, though. If you haven't found something after an hour, it's time to give up.

</tirade>

Saturday, June 13, 2009

Seeking Input For My Next Opensource Project

Friends,

I'm seeking your input to help me think about my next opensource project. I have two ideas, either one of which I'd like to do, probably using Java as the primary language, just as a matter of preference. I'd particularly like to know whether either is already being done, so that I don't duplicate work, and whether or not you think it might be something useful in work that you've done.

The first is a SOA Directory Service alternative to UDDI. When I worked to help implement a middleware/SOA framework in the mid-90s, one of the pieces we built was a directory service. While it didn't offer the metadata storage capability that UDDI offers today, it had some advantages over UDDI. It was simple: easy to register services, do lookups, and so on. It was very fast. Services registered using a lease mechanism, so you could get a list of matching service instances knowing that the instances were probably still up. Finally, it was replicated. Certainly, many UDDI implementations support replication. What I don't want to do is create another UDDI implementation, but rather to build an alternative Directory Service more like what we did in the 90s - consistent with today's framework needs, but more lightweight. To my mind, UDDI is a far more heavyweight solution than most enterprises need, and a simpler one might offer some appeal, provided that it integrated well with whatever framework an enterprise is already using - that is, that it would be easy to choose it as an alternative to UDDI.

The second possibility is to do an opensource implementation of a data federation system. We built one for a client that was never used, but there were some good ideas in there. I'd like to do it again as an opensource project, because it offers some useful capabilities. It essentially allows users to publish documents to a master node, then replicates the documents to regional servers - that is, it pushes the data close to where it will be used within the organization. For example, if a document were flagged as pertinent to an organization's European region, it would be pushed to that region's server, and to its backup server in a neighboring region. Users in the region can then make annotations to the documents as needed, and push those back to the original author for consideration. A federation system such as this offers some availability and performance benefits relative to a monolithic document server. When a user wants a document that is not stored in his or her region, the system goes back to the master or another regional server to fetch it. As an added capability, the original system supported plugins that could fetch data from external sources, and that might be useful to include.

Sunday, June 7, 2009

We Don't Have Time To Skip That Step

In the mid-90s I was privileged to be on a team building a service-oriented middleware architecture (Datagate), which I have mentioned before. We used an underlying library that implemented XDR, for which we had no unit tests. Since much of our software was built on this and a couple other core technologies, it was important that they be as solid as possible. Make the foundation solid, and the rest will follow. I decided to build a suite of unit tests against this software, suspecting that there were some bugs in there. Using a coverage tool, I wrote tests that covered the entire XDR library, and we found that the suite would not run on one of our twenty or so supported platforms. We fixed that bug in the library.

After that, we found that one of the nagging bugs in our Directory Service went away. It turns out that the two were related. Building the test suite took some time, but not that much, and it saved us time in supporting the Directory Service. It probably saved application and service developers time, too, but we didn't research that.

After that, when it came to the value of testing, I told everyone who would listen, "We don't have time to skip that step." And it's still true today. Can you afford the extra time it takes to skip testing?

Saturday, June 6, 2009

OT Courtois Section Trip Report - 5/30 and 5/31

I hiked the part of this section from Hazel Creek to Bass' River Resort last weekend, camping after it crosses FR2265 the first time, before the trail cuts west there. What a great weekend to hike - a little on the warm side, but tolerable by watching my pace. Finished up Sunday morning about 11 before the heat really got going, and sat in the shade by the creek, enjoying a couple free beers from some new-found friends and skipping a few stones. Good times.

If you hike this direction, you need to know that when the trail goes into the field near Harmon Spring, you should head straight across the field, taking the trail that goes slightly to the right. There's a trail to the left and one to the right; I went left, and got about a half-mile up the road before convincing myself that it wasn't the right way.

Also, if you're going this way, make sure you get plenty of water at the Beecher Artesian Well spring, especially in warm weather. The next decent water isn't until you're almost at Bass', and a long trek on the gravel road section will surely dry you out. There is some water before there, but I decided it wasn't for me, even purified, and I'm not too picky.

South of Highway 8, there are about 4 or 5 deadfalls. North of 8, maybe a couple.

The trail's pretty horsey in sections - plow through, and it gets better.

The ticks were bad, too. Had a couple in new places from this trip. Both standard ticks and seed ticks were on the prowl.

Not a lot of people out - I just saw three guys in the Berryman section, though there were at least three people ahead of me, making better time.

I really like this section. At least on the part I hiked, the hills are pretty docile, and there are some pretty trails through some really impressive pine groves.

Thursday, June 4, 2009

Manage Your Eclipse Install With A Local Git Repository

I had something of an epiphany this morning. If it wasn't the real thing, it sure felt like it. With Eclipse, I'm often trying out new plugins that purport to do one thing or another, and there are usually a few that do the same thing, but with different features. So the quandary is: what do I do when I pick one of the several, try it, and don't like it? Perhaps it's hard to use, perhaps my IDE crashes more often than it used to, or perhaps one of the other similar plugins suddenly seems more appealing.

Traditionally, I've solved the problem using rsync. I rsync -a eclipse/ eclipse.beforeWonderfulPlugin, then install the plugin. If the plugin turns out to be a flop, I go back to the old version, which takes a little while: first delete eclipse, then rsync -a eclipse.beforeWonderfulPlugin/ eclipse. It's slow, but it's easier and faster than, say, cp or tar. Also, I now have (at least) two entire copies of Eclipse lying around.

There are further issues, too, though they're perhaps less important. It's hard to remember what plugins I have installed, for example, above and beyond what's already included.

Lately I've been making the move to git to manage source for various projects. Then it hit me - I should make my Eclipse install a git repository. So that's what I did. There's a master branch - that's where the downloads from Eclipse go: ganymede -> ganymedeSR1 -> ganymedeSR2, and so forth. Then there's working-branch, which is where I'll normally run from. When I'm trying a new plugin, I'll branch off of working-branch and try the plugin for a while. If it's good, I'll merge it back into working-branch.
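Setting this up is just a matter of initializing a repository in the install directory and committing everything. A minimal sketch of the workflow, using a throwaway directory in place of a real Eclipse install (the branch and plugin names here are illustrative):

```shell
set -e
# Throwaway directory standing in for the Eclipse install
mkdir -p eclipse-git-demo && cd eclipse-git-demo
echo "ganymede" > version.txt
git init -q
git config user.email demo@example.com
git config user.name Demo
git add -A
git commit -q -m "Ganymede, as downloaded"   # lives on master

# Day-to-day use happens on working-branch
git checkout -q -b working-branch

# Trying a new plugin: branch off, install, evaluate
git checkout -q -b try-wonderful-plugin
echo "wonderful-plugin" > plugins.txt        # simulate the install
git add -A
git commit -q -m "Install wonderful-plugin"

# It's a keeper: merge it back into working-branch
git checkout -q working-branch
git merge -q try-wonderful-plugin
```

If the plugin is a flop, you just check out working-branch and delete the trial branch instead of merging.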

Presumably, there will be an Eclipse Ganymede SR3 at some point, and I'll rebase there. When Galileo comes out, I'll put that on master, and probably start a new working-branch.

If I merge a plugin into working-branch, and find out a week later that there's a problem, it's easy to re-branch from a point prior to the merge, and get rid of the plugin. Sometimes it's the case that there's some quirky behavior I didn't notice at first. It's a matter of a few minutes to go back to a previous version, try it out, and see if the quirky behavior was there all along, or started with some new plugin.
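Backing a plugin out after the fact is just starting a new branch from the commit before the merge. A sketch against a throwaway repository, with illustrative names:

```shell
set -e
# Throwaway repository standing in for the Eclipse install
mkdir -p eclipse-rebranch-demo && cd eclipse-rebranch-demo
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "base" > install.txt
git add -A
git commit -q -m "Base install"
git checkout -q -b working-branch

# A plugin merged a week ago...
echo "quirky" > plugin.txt
git add -A
git commit -q -m "Merge quirky plugin"

# ...turns out to be the problem. Re-branch from the commit
# just before it and carry on without the plugin.
git checkout -q -b working-branch-2 HEAD~1
```

The old branch stays around, so flipping back to reproduce the quirky behavior is just another checkout.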

Sure, Eclipse lets you uninstall plugins, most of the time, but there are often problems with that.

Could this be done with CVS, Subversion, ClearCase, or other revision control systems? Very possibly - but not practically, simply because of the performance issues.

If there are alternative plugins, say, Subclipse and Subversive, and I want to try them both, I can have them both on separate branches off of working-branch, and explore them both for a while before picking one. As an added bonus, gitk lets me see where I've been, and when I started using which plugins.
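Comparing two candidate plugins side by side is just two branches off working-branch. A sketch with throwaway names (svn-plugin.txt stands in for the real plugin files):

```shell
set -e
# Throwaway repository standing in for the Eclipse install
mkdir -p eclipse-compare-demo && cd eclipse-compare-demo
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "base" > install.txt
git add -A
git commit -q -m "Base install"
git checkout -q -b working-branch

# One branch per candidate, both off working-branch
git checkout -q -b try-subclipse working-branch
echo "subclipse" > svn-plugin.txt
git add -A
git commit -q -m "Install Subclipse"

git checkout -q -b try-subversive working-branch
echo "subversive" > svn-plugin.txt
git add -A
git commit -q -m "Install Subversive"

# Flip between them at will while deciding
git checkout -q try-subclipse
```

Whichever candidate wins gets merged into working-branch; the loser's branch just gets deleted.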