Saturday, December 19, 2009

Adsense Ads and GWT - Making it work.

It seems like a lot of people have hit this same problem, but I haven't found a solution posted anywhere on the net. Here's how I got it to work. If you find this post helpful, I'd ask a favor: check out http://penwag.com, and ask your friends to do the same.

I struggled for a long time trying to get an Adsense ad to appear in a <div>, but that seems to be the wrong approach. Divs are nice for styling reasons, but Adsense seems to know when it's in a div, and won't display - likely because the ad script relies on document.write, which doesn't work in content that GWT adds after the page has loaded.

I avoided IFrames (which is what the GWT Frame object compiles to) because sizing isn't automatic. Eventually, though, it became apparent that IFrames were the way to go, since Adsense ads will load in them. I create an IFrame and point it at a static page that contains the necessary Adsense script. That just works. The content loads correctly, and ads will display.

But here's the rub. IFrames need to be sized with custom javascript. I use this javascript: https://penwag.com/home/iframe.js. This works easily for all browsers except - you guessed it - IE. Below is the GWT code that I use to bring it all together, including a work-around for IE.


public static native String getUserAgent() /*-{
    return navigator.userAgent.toLowerCase();
}-*/;

private Widget buildMainPanel() {
    Widget mainPanel;
    if (getUserAgent().contains("msie")) {
        mainPanel = buildIEPanel();
    } else {
        mainPanel = buildNonIEPanel();
    }

    mainPanel.getElement().setId(getPanelId());
    mainPanel.addStyleName(Styles.StaticPanel);

    return mainPanel;
}

private Widget buildNonIEPanel() {
    Frame mainPanel = new Frame();
    mainPanel.getElement().setAttribute("onLoad", "resizeCaller();");
    mainPanel.setUrl(getRootPage());

    return mainPanel;
}

private Panel buildIEPanel() {
    Panel mainPanel = new VerticalPanel();

    HTML adBar = new HTML("Loading...");
    mainPanel.add(adBar);

    try {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, GWT.getModuleBaseURL() + "ie_ads/index.html");
        builder.sendRequest(null, new RequestHandler(adBar, null));
    } catch (RequestException e) {
        adBar.setHTML("");
    }

    HTML content = new HTML("Loading...");
    mainPanel.add(content);

    String errorMessage = "Failed to load content, please try again later.";
    try {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, GWT.getModuleBaseURL() + getRootPage());
        builder.sendRequest(null, new RequestHandler(content, errorMessage));
    } catch (RequestException e) {
        content.setHTML(errorMessage);
    }

    return mainPanel;
}
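
For reference, here's the shape of the resize hook that iframe.js sets up, recast as a GWT/JSNI method. Treat it as a sketch, not the script itself - the "staticPanel" id is hypothetical (it should match whatever getPanelId() returns), and the real script lives at the link above.

public static native void exportResizeCaller() /*-{
    $wnd.resizeCaller = function() {
        // "staticPanel" is a hypothetical id - use the id set in buildMainPanel().
        var frame = $doc.getElementById("staticPanel");
        if (frame && frame.contentWindow && frame.contentWindow.document.body) {
            frame.style.height = frame.contentWindow.document.body.scrollHeight + "px";
        }
    };
}-*/;

Call exportResizeCaller() once in onModuleLoad(), and the frame's onLoad attribute will find resizeCaller() on the window.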

Saturday, October 24, 2009

A New Star

I'd like to introduce you all to an up-and-coming star in the universe of photography. Also, he's my son, Taylor. I may be biased.

Please take a moment to visit his web site PhotoImageOgraphy. I think you'll be glad you did.

Sunday, September 13, 2009

Ozark Trail Volunteer Work

Well, after a couple of years using the Ozark Trail, I finally got around to helping to build the Ozark Trail on the Courtois Section. (It's pronounced Code-Away.) What a tremendous experience. The people were great to work with, and great to hang out with after the work was done for the day. And the food, thanks to Jeff The Chef, really hit the spot. Nothing like a couple burgers and a few brats to make the day complete.

The group included some tremendously hard workers on the rock-wall team, guys I couldn't even begin to keep up with energy-wise, including Scotty from the USFS, Russ, Gabe, Charles and others. Good work, guys.

No injuries, either, which is good. Couple close calls, though - the hill was about a 25% grade, and a few of the boulders got away from us.

What a great bunch of folks to hang out with. I'm looking forward to the next event.

Tuesday, July 14, 2009

The Value Of Pictures In Software Design

There are some very good reasons why software engineers use visual communication to quickly and effectively transfer knowledge from one person to another.

While people have many different learning styles, and everyone employs all of the styles to a greater or lesser degree, most people - or at least enough people to matter - are predominantly visual learners. Various sources claim that around 60% of us are visual learners. That alone makes it worthwhile to use visual techniques.

Visual communication transfers information at a very high rate compared with aural and textual communication. You can tell at a glance a system's structure, or lack thereof. A verbal description takes longer.

Visual communication helps the sender, too. That is, the person creating the graphical representation has to understand the system well enough to draw it. This applies to verbal and written communication as well, so visual does not necessarily hold an advantage over other forms, but it's certainly a valid approach.

Visual communication helps a newcomer to the team come up to speed and become productive more rapidly.

Furthermore, graphical representations of software systems can reveal flaws, voids, and redundancies that are not immediately obvious in verbal or written communication. How many times have you drawn a system diagram, only to see that there's something that can be cleaned up? If you have not done this, try it - it's a worthwhile exercise.

To illustrate the value of pictures, let me point you at one that someone else drew, one that helped me to quickly understand a software framework's design and intended usage. I'm talking about the design of Mina, Apache's ongoing effort to wrap the basic Java NIO components. I think the graphics they provide are a great example of what we should be doing on our own projects.

Why is it important to understand the need for visual communication in software development? I've noticed something unsettling on the last couple projects I've been on: no graphical representations of the systems. In each case, there was no perceived need or gain to having images. For those of us who have been on non-trivial projects and witnessed the indispensable benefits of this form of communication, this is a red flag.

This red flag is a reliable indicator of a project in trouble. The particular dysfunctions might take one of several forms, but most likely it's a combination of them. I'll name a few here, without trying to make an exhaustive list.

First, it means that the system probably has no clear direction. The team doesn't know where it's headed, or at least doesn't have a common vision of the goal. A shiny new idea comes along, it's legitimately cool, and we go down that track. And that's great - provided it complements the existing framework. We can't follow all of those cool ideas. Some we'll have to back-burner for another day or another project. A solid understanding of a system as a whole, bolstered by a few good drawings, can help us stay disciplined and on the road toward our goal. The images help to remind us of the goal, and to keep us from working at cross-purposes.

A lack of graphical communication might also indicate that the team is unable to create drawings. The system has grown disorganized and chaotic over time (that is, separation of concerns is largely gone), and the team, however good they are, cannot walk up to a whiteboard or show on paper a cohesive, overall design. The discipline of always having something drawn (something simple, not 200 pages) drives us to keep the system cohesive and well-organized.

A lack of drawings might indicate something far worse - that the team refuses to draw them. This might be from fear that the drawings will become stale (a legitimate, but addressable, concern), because drawing takes time from coding, from a misapplied development philosophy, from plain and simple laziness, or from a lack of experience building complex systems. Even with current development paradigms that eschew grotesquely large architectural documents, some documentation is essential.

It's this last point that seems to be the key element on the last couple projects I've been on. Agile development philosophies encourage us to limit the amount of useless documentation that gets created. This is a worthy and noble goal. Sadly, some have twisted the intent of these goals, eliminating strong, time-tested tools from their arsenal, to the detriment of the projects and teams they represent.

Thursday, July 9, 2009

John Roth, founder of the Ozark Trail Association, has died

This is sad news, indeed. I only ever "met" John via email and discussion forums; I never had the privilege of meeting him in person. As I read the notes from the forum linked below, it's clear what a void his passing leaves.

Ozark Trail home page
Forum notes

Saturday, July 4, 2009

A Worser Mousetrap

<tirade>

I have this, uh, friend, that has a rodent problem. Yes...a friend has this rodent problem.

So anyway, in order to help this, uh, friend, with his rodent problem, I used to buy this Victor snap trap at Lowe's. Now, they don't sell those any more; they sell this Tomcat snap trap instead. They're very similar, except that the Tomcat is $0.78 instead of $1.10. There is one other minor difference - the Victors work. They actually catch mice. The Tomcats don't. They suck, actually. The mice come, take the bait, and leave. The traps don't trip. I even jammed the bait into the little holes so that they'd really have to apply a lot of force to get it - and the traps still don't trip. I would've been happy to spring for the extra 32 cents, even, for a trap that just ^*(&^*&^%$%# works.

I wouldn't even be griping here if I hadn't just come back from Lowe's, having returned a branch cutter that pulled apart during normal operation: I was cutting a redbud branch about the size of my thumb. Forty bucks and it lasted about an hour. Jeez, what a piece of crap.

Next time you and the kids want to have fun, go play this game. It's called Made in America. Go into your local Lowe's, and try to find a product that's made in America. Set a time limit, though. If you haven't found something after an hour, it's time to give up.

</tirade>

Saturday, June 13, 2009

Seeking Input For My Next Opensource Project

Friends,

I'm seeking your input to help me think about my next opensource project. I have two ideas, either one of which I'd like to do, probably using Java as the primary language, just as a matter of preference. I'd particularly like to know whether either is already being done, so that I don't duplicate work, and whether or not you think it might be something useful in work that you've done.

The first is a SOA Directory Service alternative to UDDI. When I worked to help implement a middleware/SOA framework in the mid-90s, one of the pieces we built was a directory service. While it didn't offer the metadata storage capability that UDDI offers today, it had some advantages over UDDI. It was simple: easy to register services, do lookups, etc. It was very fast. Services registered using a lease mechanism, so you could get a list of matching service instances knowing that the instances were probably still up. Finally, it was replicated. Certainly, many UDDI implementations support replication. What I don't want to do is create another UDDI implementation, but rather to build an alternative Directory Service that is more like what we did in the 90s, consistent with today's framework needs, but more lightweight. To my mind, UDDI is a far more heavyweight solution than most enterprises need, and a simpler one might have some appeal, provided it integrated well with whatever framework an enterprise is already using - that is, provided it would be easy to choose as an alternative to UDDI.
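
To make the first idea concrete, here's the rough shape of the API I'm picturing. Every name below is illustrative, not an existing library:

import java.util.List;

public interface DirectoryService {
    /** Register an instance; the registration expires unless the lease is renewed. */
    Lease register(ServiceInstance instance, long leaseMillis);

    /** Find live instances - anything whose lease hasn't lapsed. */
    List<ServiceInstance> lookup(String serviceName);

    interface Lease {
        void renew(long leaseMillis);
        void cancel();
    }

    class ServiceInstance {
        public String serviceName;
        public String host;
        public int port;
    }
}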

The second possibility is to do an opensource implementation of a data federation system. We built one for a client - it was never used, but there were some good ideas in there. I'd like to do it again as an opensource project, because it offers some useful capabilities. It essentially allows users to publish documents to a master node, then replicates the documents to regional servers - that is, it pushes the data close to where it will be used within the organization. For example, if a document were flagged as pertinent to an organization's European region, it would be pushed to that region's server, and to its backup server in a neighboring region. Users in the region can then make annotations to the documents as needed, and push those back to the original author for consideration. A federation system such as this offers some availability and performance benefits relative to a monolithic document server. When a user wants a document that's not stored in his or her region, the system goes back to the master or another regional server to fetch it. As an added capability, the original system supported plugins that could fetch data from external sources, and that might be useful to include.

Sunday, June 7, 2009

We Don't Have Time To Skip That Step

In the mid-90s I was privileged to be on a team building a service-oriented middleware architecture (Datagate), which I have mentioned before. We used an underlying library that implemented XDR, for which we had no unit tests. Since much of our software was built on this and a couple other core technologies, it was important that they be as solid as possible. Make the foundation solid, and the rest will follow. I decided to build a suite of unit tests against this software, suspecting that there were some bugs in there. Using a coverage tool, I wrote tests that covered the entire XDR library, and we found that the suite would not run on one of our twenty or so supported platforms. We fixed that bug in the library.
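
To give a feel for those tests: they were mostly round-trip checks. The original library is long gone from my hands, but XDR encodes a signed int as four big-endian bytes, which Java's DataOutputStream happens to match, so a sketch of the style (not the original code) looks like this:

import java.io.*;
import junit.framework.TestCase;

// A sketch of the round-trip style of test, not the original suite.
public class XdrStyleRoundTripTest extends TestCase {
    public void testIntSurvivesRoundTrip() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(-42);

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        assertEquals(-42, in.readInt());

        // XDR works in 4-byte units, so an int should marshal to exactly 4 bytes.
        assertEquals(4, bytes.size());
    }
}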

After that, we found that one of our nagging bugs went away in our Directory Service. It turns out that the two were related. I took some time to build the test suite, but not that much. It saved us time in supporting the Directory Service. It probably saved application and service developers time, too, but we didn't research that.

After that, when it came to the value of testing, I told everyone who would listen, "We don't have time to skip that step." And it's still true today. Can you afford the extra time it takes to skip testing?

Saturday, June 6, 2009

OT Courtois Section Trip Report - 5/30 and 5/31

I hiked the part of this section from Hazel Creek to Bass' River Resort last weekend, camping after it crosses FR2265 the first time, before the trail cuts west there. What a great weekend to hike - a little on the warm side, but tolerable by watching my pace. Finished up Sunday morning about 11 before the heat really got going, and sat in the shade by the creek, enjoying a couple free beers from some new-found friends and skipping a few stones. Good times.

If you hike this direction, you need to know that when the trail enters the field near Harmon Spring, you should head straight across the field, taking the trail that goes slightly to the right. There's a trail to the left and one to the right; I went left, and got about a half-mile up the road before convincing myself that it wasn't the right way.

Also, if you're going this way, make sure you get plenty of water at the Beecher Artesian Well spring, especially in warm weather. The next decent water isn't until you're almost at Bass', and a long trek on the gravel road section will surely dry you out. There is some water before there, but I decided it wasn't for me, even purified, and I'm not too picky.

South of Highway 8, there are about 4 or 5 deadfalls. North of 8, maybe a couple.

The trail's pretty horsey in sections - plow through, and it gets better.

The ticks were bad, too. Had a couple in new places from this trip. Both standard ticks and seed ticks were on the prowl.

Not a lot of people out - I just saw three guys in the Berryman section, though there were at least three people ahead of me making better time.

I really like this section. At least on the part I hiked, the hills are pretty docile, and there are some pretty trails through some really impressive pine groves.

Thursday, June 4, 2009

Manage Your Eclipse Install With A Local Git Repository

I had something of an epiphany this morning. If it wasn't the real thing, it sure felt like it. With Eclipse, I'm often trying out new plugins that purport to do one thing or another, and there are usually a few that do the same thing, but with different features. So the quandary is, what do I do when I pick one of the several, try it, and don't like it? Perhaps it's hard to use, perhaps my IDE crashes more often than it used to, or perhaps one of the other similar plugins suddenly seems more appealing.

Traditionally, I've solved the problem using rsync. I rsync -a eclipse/ eclipse.beforeWonderfulPlugin, then install the plugin. If the plugin turns out to be a flop, I go back to the old version, which takes a little while: first delete eclipse, then rsync -a eclipse.beforeWonderfulPlugin/ eclipse. It's slow, but it's easier and faster than, say, cp or tar. Also, I now have (at least) two entire copies of eclipse lying around.

There are further issues, too, though they're perhaps less important. It's hard to remember what plugins I have installed, for example, above and beyond what's already included.

Lately I've been making the move to git to manage source for various projects. Then it hit me - I should make my Eclipse install a git repository. So that's what I did. There's a master branch - that's where the downloads from Eclipse go: ganymede -> ganymedeSR1 -> ganymedeSR2, and so forth. Then there's the working-branch, which is what I'll normally run from. When I'm trying a new plugin, I'll branch off of working-branch and try the plugin for a while. If it's good, I'll merge it back into working-branch.
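
In case you want to try it, the whole setup is only a few commands. This is sketched from memory, and "tryWonderfulPlugin" is just an example branch name:

>cd eclipse
>git init
>git add .
>git commit -m "ganymedeSR2 baseline"
>git checkout -b working-branch
>git checkout -b tryWonderfulPlugin

Install the plugin through Eclipse as usual, then:

>git add -A
>git commit -m "Install WonderfulPlugin"
>git checkout working-branch
>git merge tryWonderfulPlugin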

Presumably, there will be an Eclipse Ganymede SR3 at some point, and I'll rebase there. When Galileo comes out, I'll put that on master, and probably start a new working-branch.

If I merge a plugin into working-branch, and find out a week later that there's a problem, it's easy to re-branch from a point prior to the merge, and get rid of the plugin. Sometimes it's the case that there's some quirky behavior I didn't notice at first. It's a matter of a few minutes to go back to a previous version, try it out, and see if the quirky behavior was there all along, or started with some new plugin.

Sure, Eclipse lets you uninstall plugins, most of the time, but there are often problems with that.

Could this be done with CVS, Subversion, ClearCase or other revision control systems? Very possibly - but not in practical terms, simply because of the performance issues.

If there are alternative plugins, say, Subclipse and Subversive, and I want to try them both, I can have them both on separate branches off of working-branch, and explore them both for a while before picking one. As an added bonus, gitk lets me see where I've been, and when I started using which plugins.

Sunday, May 24, 2009

Too Much Testing?

I recently heard this question:

"Is too much focus on testing benefits a bad thing overall?"

TDD done well can improve readability. TDD done poorly, that is without consideration of other important principles, can reduce readability.

A guy I worked with in the mid-90s would say "You can always make a system more flexible by adding a layer of indirection. You can always make a system simpler by removing a layer of indirection." Both flexibility and simplicity are important qualities of a system. The two principles can often live together in harmony, but often they work against each other. If you go too far towards one extreme or the other, you move away from the ideal that exists where these two principles are balanced.

TDD is partly about testing, partly about design. TDD done poorly can tend too much towards either flexibility or simplicity. It can push towards too much flexibility. The objects become more testable, and often simpler, but the inherent complexity of the domain problem then is pushed out of the objects into the interaction of the objects. We gained flexibility, and to the naïve eye, it can look as though we've gained simplicity because our objects are simpler. The complexity, however, is still there. It's moved out of the objects, and into the object interaction, where it's harder to control. There are code smells that can act as red flags here - a system with hundreds of small objects and no larger objects is one, lots of objects with only one-line methods is another.

TDD done poorly can move in the other direction as well, that is, towards too much simplicity. So, we do TDD by writing the test first, but it has little impact on our design. We still have long methods and huge objects, and those are code smells that can red-flag this problem.

Now TDD will not by its nature knock you off-balance in either direction, provided it's well-applied. Use other practices to keep you on track. For example, draw pictures of what you're doing before you do it. Obviously, not all the time. Some things are far too simple for that. Some pictures are worth saving, some are just sketches that help us to visualize the problem, and we are, by varying degrees, mostly visual learners. If you can't draw a picture of the problem, you don't understand it.

How will this help with TDD? It will help to keep a system from going too far on the flexibility side, away from the simplicity side. If you draw a picture and it's ugly, that's a red flag. Sometimes it's necessary, but often when you draw the picture, your mind will quickly see things that can be simplified. The solution becomes more elegant and simplified, easier to maintain, and more enjoyable to work on. If you can't or won't draw pictures of your system, you're losing this opportunity to make your software more solid, more elegant, more beautiful to see and easier to maintain.

Applying this comes with experience, and some coders will never understand the value that a good balance provides. There's no metric that you can run that tells you you're in the right place. If someone gives you a prescribed method to arrive at that harmonious point, he's lying to you. More importantly, he's probably lying to himself without realizing it.

So, my answer to his question is 'yes': test everything without forgetting the other good principles.

Any good practice will throw you off-course if it's not balanced with other good practices.

Saturday, May 23, 2009

Creating A Remote Branch With Git

If you're still doing this the way I suggested, stop. Stop right now. Back away from the keyboard, and nobody gets hurt. Also, read the comments below, and do it the way Graham shows.

Original post:

----------------------------------------------------------------------

So, I've seen this documented in a few places, and it seems like they all give you various hard ways to do it. Here's an easy way. But, given git, it's surely not the only easy way. I create remote branches as a way to self-collaborate on a new branch from the various boxes in my daily life. If you're not a computer geek, you probably don't know that by "box" I mean "machine." And by "machine" I mean "computer." But, if you're not a computer geek, you're probably not using git, in which case you're not reading this. In that case, stop now.

Assume that you're in a clone of a remote repository. We'll create a remote branch named "mybranch." First create the branch locally:
>git branch mybranch

Switch to that branch:
>git checkout mybranch
Switched to branch "mybranch"

At this point, list all your branches:
>git branch -a
master
* mybranch
origin/HEAD
origin/master

Now, here's the command that creates the remote branch:
>git push --all
Total 0 (delta 0), reused 0 (delta 0)
To /tmp/remote.git
* [new branch] mybranch -> mybranch

It's just that easy. Now, in another clone on another machine, I can self-collaborate by creating a local branch that mirrors the remote branch. First, list all the branches:
>git branch -a
* master
origin/HEAD
origin/master
origin/mybranch

And create a local branch that tracks with the remote branch:
>git checkout -tb mybranch origin/mybranch
Branch mybranch set up to track remote branch refs/remotes/origin/mybranch.
Switched to a new branch "mybranch"

List the branches again to see the change:
>git branch -a
master
* mybranch
origin/HEAD
origin/master
origin/mybranch

Run gitk to see visually that your local branch is tracking the remote branch. Use git push and pull as you change machines to move the changes around. When the branch is done, you can merge it.

I like git. I tried bazaar and mercurial, but git "just works" and is easier to get started with than bazaar. Plus, there's github. Visit me there at https://github.com/DonBranson.

Wednesday, May 20, 2009

DonsProxy on github

I've been thinking about getting DonsProxy out on github for a bit now, but needed to break out the good stuff from the slough. So, it's finally out there. My next DonsProxy task is to get the SSL part working better - maybe by injecting Mina. Still trying some things out. You can clone DonsProxy from github at http://github.com/DonBranson/DonsProxy/

Thursday, May 7, 2009

Favorite Project Series - Datagate (14 Years of SOA)

SOA is one of the big buzzwords these days, and a lot of people have started doing SOA for the first time in the past few years. I have some good memories of the "good old days of SOA," and now it seems like everybody's using the groundwork that was laid in the 80s and 90s. Now, people can just go download Weblogic and Systinet or JBoss and the various pieces, build a system using those frameworks, and they're "doing SOA." There are so many powerful tools; the IDEs, such as Eclipse and Intellij, do a lot of the tedious work for you.

My first introduction to SOA was in 1995 on a project called Datagate at Southwestern Bell. I came on the project when it was about six months old. All of the things that people take for granted now, that they use and that "just work" (or not), we had to build ourselves. There was no app engine, no UDDI, no SOAP. The things that people download and use today, we built and used back then. If you're a computer geek, this is the project of a lifetime. It was all message-based (that's a MOM - message-oriented middleware - for all you acronymophiles). Now I can look around and see SOA everywhere, but we were, in our own way, pioneers of the technology, because we did things no one had ever done before. Sure, not everything we did was new - Tuxedo had been around, so MOMs weren't exactly new.

What we did in the mid-90s that was new was to make the technology more accessible. It was hard to write Tuxedo services back then. We made it accessible to every C, VB, and PowerBuilder coder out there. There are some ways in which today's technology is better. But what we built was so smokin' fast that there's nothing SOA folks are doing today that even comes close to the performance we delivered on those old, slow machines. We had a great, usable middleware product that was the framework for developers to construct clients and reusable services on a variety of platforms - about 12 unixes, MVS running on IBM 390s, Windows 3.0, Windows NT, and Tandem. It's one of those experiences you look back on and think: we did it before most anybody else. What people today use, we built.

I wasn't one of the visionaries for that group. I was brought in to develop an interface so that VB and PB programmers could write clients on top of our C framework. In the end, I helped design and architect that, the Directory Service Replicator, and the Dashboard for the system, and we did good work.

There was no SOAP back then. The Datagate team developed its own protocol. It was message-based, so it was more performant, more scalable, and simpler than today's RPC-style calls. Yes, even today's message-driven services are RPC under the covers, not true MOMs like what we built. A true MOM lets you fire off multiple messages to services without waiting for an ack that each message was received, then process the responses as they come back in. All this with one thread, not the multiple threads that RPC pushes you towards for this kind of processing.
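
To make that concrete in today's terms - and this is an analogy in JMS, not our original Datagate API - the one-thread, fire-then-gather pattern looks roughly like this:

import javax.jms.*;

// Sketch of fire-then-gather over a MOM: send everything up front,
// then collect replies as they arrive, all on a single thread.
public class FireThenGather {
    public void sendAll(Session session, Destination serviceQueue,
                        Destination replyQueue, String[] requests) throws JMSException {
        MessageProducer producer = session.createProducer(serviceQueue);
        for (String request : requests) {
            TextMessage m = session.createTextMessage(request);
            m.setJMSReplyTo(replyQueue);
            producer.send(m);  // fire and move on - no blocking round-trip per request
        }

        MessageConsumer consumer = session.createConsumer(replyQueue);
        for (int i = 0; i < requests.length; i++) {
            Message reply = consumer.receive(30000);  // gather replies as they come back
            if (reply == null) break;                 // timed out
            // ... process the reply - still just the one thread ...
        }
    }
}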

There was no UDDI for service lookup back then. There was no LDAP, either. We built our own distributed, replicated X.500 directory service. Infrastructure and business services registered and renewed leases dynamically. If a service went down, its service entry went away, and clients would call one of the other instances that was still registered. And of course, it was fast - tens of millis to do a service lookup, not the hundreds or thousands you see with UDDI. Smokin' fast.

I architected and led the development of the Directory Replication Service along with my friend Dave C. This cell-based solution allowed us to run multiple Directory Services and replicate data between them in a scalable, available fashion.

We used XDR for marshalling. That's one piece we did download. I led the charge to unit test the whole thing - and found only one bug. There was an endian issue that affected one of the platforms, and we found it and fixed it.

There were no application engines. What we built was a resource manager that would control service lifecycle and heartbeat services to quickly detect outages.

There was no PKI, or even SSL. We bought licenses for the encryption libraries, then designed and built our own secure message protocol (with mutual authentication where needed), plus the infrastructure to support it: a Certificate Revocation Service, a Certificate Authentication Service, and all the APIs for services to use to load their certs and encrypt their messages. We built the services and the GUIs so that administrators could manage the PKI we built.

There was monitoring software available - Tivoli - but it was far more than our budget would allow. We built our own Dashboard that allowed us to closely monitor infrastructure and business services running over much of the United States, including Missouri, Texas, California, and more.

We did training to teach developers how to build reusable business services in C, and how to build clients in C, VB, and PowerBuilder. I taught the VB and PowerBuilder classes.

Our team consisted of three subteams: Infrastructure, which was the team I was on; the Service Writers team; and System Administration, which handled the care and feeding of production systems.

At A.G. Edwards, we leveraged SOA frameworks for 6-1/2 years as part of a broker workstation, and to develop the pretty scalable agedwards.com (which, last I heard some years back, had about 300,000 users signed up). I was a member of the Lead Architect Team that designed and developed agedwards.com. We used SOA for some other pretty cool stuff, too. One of my favorites was the BLServer, which used a modified version of what is now called the "Competing Consumers" pattern in a highly-available cluster of message-based services that self-allocated using leases, processing roughly 1.5 million messages a day for about 4-1/2 years before it was mothballed in February of this year.

The industry has come a long way since then. Like I said, what we built, you can now go download, and the SOA business has experienced explosive growth. After 2-1/2 years at Bell on the Datagate project, I did 6-1/2 years of SOA work at A.G. Edwards, continuing to architect systems using new SOA technologies as they emerged - Tengah (now Weblogic), Dynamo, Novell's LDAP server, SiteMinder, WSDs, Big/IP, and various JMS providers. Some of this stuff I had more exposure to than others, but that gives you an idea of how things have changed over the past 14 years. Now there's CXF and JAXB and other exciting new tools coming along.

Every Tom, Dick and Harry does SOA now. Years after that work at Southwestern Bell, a small company hired a "SOA expert" (after I was already there!). This poor fellow didn't even know what LDAP was. When I described how it could be used for service lookup, he said you can't use it for that - it's for authentication. I explained to him that it's just a database that's optimized for reads; you can use it for anything that falls into that category. Now he's an expert, downloading what other people have built and putting it all together. One boss at that company tried to get me to join a project where I would have been writing architecture documents all day for six months, and building nothing. His angle? That the SOA experience would look good on my resume. I kid you not.

On Datagate, we built SOA sooner, faster, more scalably, and at least in some ways better than what exists today. The primary advantage I see in today's world is standards. Almost everything is standards-based, and that means vendors can provide tools to do many of the things we used vi for. It's a change for the better in most every way, because SOA is much more accessible today than it was in 1995. Plus, now it's an acronym. ;)

Tuesday, May 5, 2009

Migration from Subversion to Git

I'm migrating from Subversion to Git. It's about time, of course - decentralized SCMs have some significant advantages over old-style SCMs such as CVS, Clearcase, Subversion and the like. I looked at Bazaar, which touts "Bazaar is a distributed version control system that Just Works." Sadly, I couldn't get it to work. It needed a lot of python modules that I, not being a python guy, don't really know where to get. Found some, not others. Funny thing is, Git "just works." Sure, I had some hiccups converting from Subversion before realizing that I had made an incorrect assumption about the repository structure in one case. Looks cool, though, especially when you throw github into the mix. Good stuff.
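
For reference, the conversion boils down to one command. The -s flag assumes the standard trunk/branches/tags layout - exactly the kind of repository-structure assumption worth double-checking first. The URL here is just a placeholder:

>git svn clone -s http://svn.example.com/myproject myproject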

Recommended reading if you're thinking about converting from your current SCM to something new, whatever that may be: http://whygitisbetterthanx.com/

Upgrade from GWT 1.5 to GWT 1.6

Just a pointer:

http://penwag.blogspot.com/2009/05/upgrade-from-gwt-15-to-gwt-16.html

Saturday, March 28, 2009

Tell Your Story!

There's a web site that I'm building, penwag.com, that lets me enjoy two of my favorite interests: software development and story telling.

It lets me enjoy software development in that I'm using two technologies that I will gain experience with: the Google Web Toolkit (GWT) and Hibernate. The GWT is a great tool from Google that provides a framework for building AJAX-style applications. The framework includes the necessary pieces for client-side and server-side development, and manages the communication between the layers. Add in Hibernate and a data layer, and you have everything you need for easily building AJAX web sites.

Then there's the story-telling side.
When I was very little, we visited my great-uncle Henry Shoop, who was my father's mother's brother. He was born near Cherryville, Missouri, but his family moved to California when he was six. Uncle Henry was a friend, a beekeeper, and one of the great storytellers, and I hung on his every word as he instilled in me a love for a well-told story. He was one of the best.

Another favorite story-teller of mine was Earl Halbert, a family friend. You can see his picture here from National Geographic Magazine. He's the one on the left in the overalls. Earl was born in 1910 in rural Missouri, and lived through the Great Depression, Prohibition and numerous other hard times. He knew my great-grandfather Thomas Jefferson Branson, and had a few good stories to share about him as well.

Both of these men, shall we say, infected me with a love of a story told friend-to-friend, perhaps while sipping some sassafras tea, or around a campfire, or just hanging out in the living room. I miss both of them not just for their story-telling and their wit, but for their character as human beings.

Certainly the art of a well-told story has not been lost, but is changing over time. How many people nowadays have had the chance to hear stories of hard times in the Great Depression, friends lost, good times, and funny times, from "way back when"? I'd hate to lose those stories. So join me, won't you, and let's tell each other those stories, like we're sitting over a beer or around a campfire, and let's enjoy a good laugh together, savor tales of clever dealings with shysters, and the occasional twist ending.

Just head out to penwag.com. You're getting in early at a site that will someday, hopefully, have many stories that we have shared with each other. But it's your stories that will make the site great. If you have even one story to share, come on out and put it up there.

This site is in its early stages, but everything you need to add a story and read others' stories is already there. Since it's in its early stages, there's still a lot of work I need to do to add features. Some are pretty obvious - you can create a story, but can't yet edit it to make corrections. I have a long list of features that I will add as the site grows. But the feature you want may not be on my list! So click on the Contact Support link, or just send an email to support@penwag.com, and give me a shout. Tell me what feature you want, and I'll add it to my list.

Let me reiterate - this site's success depends on people like you who have a story to tell. As you add your stories, others will visit the site just to read a great story. The stories are short. Don't feel like you have to write a novel. Just write it how you would share it with your friends over a beer.

Now, to get started, you have to register. For simplicity's sake, your email address is your user id, and you need to make up a password. Registration is quick and easy, and your email address will not be shared. If you still have questions or concerns, or if the site doesn't work right for you, use the Contact Support link and let me know!

Thanks, and hope to see you out there. Come tell your story!

Wednesday, March 25, 2009

OT Trip Report - Bell Mountain to Council Bluff Lake

My friend James and I hiked Council Bluff Lake to Bell Mountain (reversed) March 22 and 23, in about 24 hours. It was our first hike of the year, and left me a little sore.

The view from Bell Mountain is a must-see, and is easy to get to on foot. It's pretty level all the way from the North Bell Trailhead. It'd make a fine afternoon hike, out and back.

Once you get past that, the rest of the trail has some pretty good hills, by our Midwest standards. There are long level parts along ridge-tops and in creek bottoms, but you'll have some climbs in the 200-400 foot range. There are some beautiful creeks along the way, too, so finding water was not a problem for us. Joes Creek is big enough that in wetter weather, it is probably hard to cross.

We heard from other hikers that there are feral pigs in the area, so we hung our food out of their reach overnight. We really saw very little animal life - a couple of squirrels and birds along the whole trip. We did see lots of deer scat from last fall, though - much of it full of persimmon seeds.

Tuesday, March 24, 2009

AIG Bonus Scandal

Okay, I haven't been political yet, but here goes...

I've been thinking about the whole AIG bonus thing. Correct me if I'm wrong, but here's the way it looks to me. Congress rushed through this legislation. It's fairly complicated, and they were in a hurry to get this thing done. It seems like you can pick at most two of: complicated, fast, and right. They went for all three, and they screwed it up. The issue of bonuses was in the legislation in black and white, and they missed it because they voted on something without taking enough time to read it. To me, it really seems that simple.

Now as far as AIG's role, maybe the folks deserved the bonuses, maybe not. I've heard the argument that things would have been a lot worse without the work of the executives that got the bonuses. That may or may not be true. It's a real possibility, but without knowing the inside info, I can't say. But it seems clear that AIG was contractually obligated to pay those bonuses, with Congress' backing per the legislation. If they didn't pay the bonuses, they could have been sued, and lost the bonus money plus punitive damages and court costs. And they wouldn't have had a leg to stand on in court, so they probably would have lost.

AIG was between a rock and a hard place. Congress put the rock there, and now they're whining about it. Congress should man up (if there are any there) and say "We screwed up. We know what we did wrong this time, and we won't make that mistake next time." Since they're not owning the problem, they'll probably screw it up next time, too.

Don't misunderstand me - I am not defending AIG. But Congress is pretty quick to shift the blame here, because they know if we stop and think about it, we'll realize what idiots we've elected.

Now, as complicated as the bailout is, it pales in comparison to what's coming down the pike with health care. If they can't even get bailouts right, can they be trusted to get health care right? At this point, I don't think they collectively have the skills to lance a blister, much less to do their jobs well.

Sunday, February 15, 2009

Testing Webapps That Send Email

I've been working on a GWT-based web application (https://penwag.com) that relies on email for some of its functionality. For example, it sends email when you register, and email when you ask for your password. Standard stuff.

I've been working towards having full-blown integration tests that hit the site to make sure everything works from end-to-end. There are really two tricky areas with this. First, we want the app to send the emails, and think it's sending them out, but without actually sending them to a real email address. Second, we do need to actually check those emails to make sure that the correct content went into them.

This weekend, it finally all came together, and the solution was very straightforward. It depends on two opensource components, and adds about 100 lines of code to glue it all together.

The first opensource component is Dumbster, a fake SMTP server that's controllable through Java code. It listens on port 25 and acts like a normal SMTP server as far as the connecting app is concerned. The key difference is that it doesn't send out the emails; it collects them and makes them available via a Java API.

The second component is the well-known Jetty, a Java web server that is easily embedded into other applications, which is the key for us.

Here's how we pull it all together. When the test harness starts up, it completely initializes the database, readying it for the Selenium tests. It then starts Dumbster, and finally starts Jetty. The harness creates a handler that responds to a very limited set of HTTP requests:

/ - list all emails
/stop - calls System.exit(0)
/reset - restarts Dumbster, which throws away the collected emails and starts over with an empty list
/<number> - returns the email at this index in the list.

Combined, these features allow Selenium tests to hit a URL and fetch a given email, validate the content of the email, and, in the case of my verification email, click on the link embedded in the email.
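
To show the shape of it, here's a stripped-down sketch using the Jetty 6 and Dumbster APIs. It's abbreviated - the real, working harness, database setup and all, is at the link below:

import java.io.IOException;
import java.util.Iterator;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.AbstractHandler;
import com.dumbster.smtp.SimpleSmtpServer;
import com.dumbster.smtp.SmtpMessage;

public class FakeMailHarness {
    public static void main(String[] args) throws Exception {
        // ... database initialization for the Selenium tests would go here ...
        final SimpleSmtpServer smtp = SimpleSmtpServer.start(25); // the fake SMTP server
        Server web = new Server(8081);                            // port is arbitrary in this sketch
        web.setHandler(new AbstractHandler() {
            public void handle(String target, HttpServletRequest request,
                    HttpServletResponse response, int dispatch) throws IOException {
                if ("/stop".equals(target)) {
                    System.exit(0);
                } else if ("/".equals(target)) {
                    // List every collected email.
                    for (Iterator i = smtp.getReceivedEmail(); i.hasNext();) {
                        response.getWriter().println(((SmtpMessage) i.next()).getBody());
                    }
                }
                // ... /reset and /<number> handled similarly ...
            }
        });
        web.start();
    }
}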

I thought you might want the code. It's not tested, no guarantees, yada yada, but it works for me. Grab it here: http://moneybender.com/IntegrationTestHarness.java

Sunday, January 11, 2009

First Chaco Sale

So, I think I just sold my first pair of shoes. ;)

My wife and I were at the mall the other night, wandering around, waiting for Gran Torino to start. I noticed a couple of the shoe stores had sales on, so I headed over to Tradehome shoe store, the one where I bought my Chaco Redrocks, to see if they had them marked down, seeing as how I'm very happy with mine. They did not, sadly. So, my wife's there with me, and tries on a couple pairs of shoes - a couple of Merrells, and the women's Chaco Redrocks. She definitely favored the Redrocks, and now she has a pair, too. My first shoe sale!

My grandpa would be pleased, I think. He worked for Brown Shoe for 34 years, and while on the board of directors pushed to have a shoe plant built in Steelville, Mo., and it lasted for quite some time before they shut it down.

Friday, January 9, 2009

Eclipse Breakpoints

Hey, here's a tip for Eclipse users. Breakpoints can be created that either suspend just the one thread your code is on, or suspend the whole VM. You can set them individually using the breakpoint properties page. Also, you can set how they're created by default in Window..Preferences..Java..Debug..Default suspend policy for new breakpoints. Pick "Suspend VM" or "Suspend thread."

Enjoy.