Today we had a little meeting to talk about the priorities for our next iteration. We had a really odd, fun experience.
We're trying to figure out how to implement a feature that none of our competitors' products have really addressed, but that the client needs. It's a bit tricky, though, so we were talking through the problem.
The team lead explained a possible solution to the problem. It sounded like a great idea, and I re-stated the problem back to him, but I had misunderstood what he said. He, in turn, heard my response incorrectly and thought that I had had a great idea. But the thing he thought I was saying actually was a great idea, and it solved the problem beautifully, elegantly. We all had a good laugh, since we had arrived at a terrific solution as we went back and forth misunderstanding each other.
So, whose great idea was it? Nobody's, really, and everybody's. So cool.
Now, of course, we have something of a puzzler - how do we effectively leverage our confusion to produce more groundbreaking ideas?
Friday, December 19, 2008
Sunday, December 7, 2008
Windows Vista And My Network
Well, if you've been paying attention to Apple's commercials and your friends with new Windows computers, you may have heard that Windows Vista has issues. Here's one that got me.
When I hooked up a new Windows Vista computer to my wireless hub, it asked me for a PIN number. Hm. That's interesting, never seen that before. It told me to find the PIN on the underside of the WRT54G2 wireless hub, so I got it and entered it. It showed me a network SSID, which I carelessly clicked through. Suddenly, all the other devices hooked to the wireless network were dropped. Hmmm, that's weird.
Here's what happened. Apparently, using the PIN gives your computer complete control over the network hub. It changed the network SSID, randomly generated a new WPA shared key, and reset it. At this point everything else was dropped. What a pain! Shame on me for missing the fact that it was going to change the SSID, but surely I don't want the shared key changed, blocking out all other network devices. Yuck!
Bridging WAP54G and WRT54G2
Let's start off by stating the goal: I have a network in a room off the kitchen where my DSL comes into the house. Downstairs is an XBox 360 connected to the family TV. Rather than run a cable down to the basement, I decided to set up a wireless bridge between the basement and the network upstairs.
Despite the warnings from Linksys to the contrary, you can use WAP54G to bridge to devices besides other WAP54Gs. In my case, I have a WAP54G in the basement bridged to the WRT54G2 upstairs. It's not too hard if you follow these instructions.
First, be very very careful not to follow the instructions provided by Linksys. It's easy to accidentally read them, thinking they'll tell you what you need to do.
Next, hook a cable directly from your PC to the WAP54G. Configure your PC to have a static IP address of 192.168.1.200 (most other addresses on 192.168.1.x will work, too). Open a browser and go to 192.168.1.245, which is the default address of the WAP54G. You will be asked for a user ID and password, and the Linksys instructions for these are wrong. Here's what actually works: leave the user ID blank and use a password of "admin."
In the Setup tab, click on "AP Mode." Select the radio button next to "Wireless Bridge," which has fields labeled "Remote Wireless Bridge's LAN MAC Addresses." The WAP54G needs the MAC address of the WRT54G2, which is printed on the underside of the WRT54G2. Enter that MAC address in the first box and click "Save Settings."
At this point, I reconnected my PC to the network, and moved the WAP54G downstairs. I plugged in the WAP54G, connected the XBox 360 to the WAP54G, and everything's good to go. For me, the XBox 360 won't dynamically pick up an address, so it's statically configured to 10.5.128.10. Your mileage may vary.
Friday, December 5, 2008
Favorite Projects Series, Installment 2
The previous project in this series I chose as a favorite due to the impact it had for the user. I consider this second project to be a favorite for the interesting technical challenges that it presented. While it certainly had impact for a lot of users, that was far less visible to me.
I led a team of about three developers on this project at A.G. Edwards. The project was named BLServer, after a service by the same name that we used from an external provider. This provider's service was implemented as a COBOL program that handled requests over a limited number of network connections. It had a number of limitations that we needed to overcome, which we did by wrapping it with a message-driven Web Service running in a WebLogic cluster.
The function of the BLServer service was to receive messages for account creation and modification, including changes in holdings of various securities. The provider's BLServer had a nightly maintenance window (I think it was about four hours), and used a proprietary message format. It was secured by using a dedicated leased line and a single common password.
Our wrapping BLServer service was required 1) to expose a standards-based (SOAP) interface, 2) to provide 24x7 uptime, 3) to preserve message ordering, 4) to secure access to the service without exposing the single common password, and 5) to queue requests during maintenance windows, delivering them when the provider's BLServer became available. There were also scalability and performance requirements which, in combination with the message ordering requirements, drove us to an interesting solution. I'm not sure about the exact scalability numbers, since that was four years ago. If I remember correctly, we initially had to be able to handle about 300,000 requests during a normal 8-hour business day, with the ability to handle peak loads of around 1.5 million per day.
The first benefit our service provided was to expose a standards-based (SOAP) interface while interacting with the provider's BLServer, which took requests and delivered responses using a proprietary protocol and message format. Our service was then used by application Web Services to provide customer value.
In order to meet the scalability and availability requirements, we proposed standing up a small cluster of WebLogic engines to host a BLServer WebService. This WebService would receive requests and (using JMS) queue them for processing. Responses would then be queued for later retrieval by the calling services. By queuing requests in this way, we could use the transactionality of the JMS provider to guarantee that each message was processed once and only once. Furthermore, we could queue up a backlog of messages and feed them through the finite number of connections made available by the provider BLServer.
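To make the queuing idea concrete, here's a minimal sketch of pulling one request off the queue in a transacted JMS session; the class, the forwardToProvider() call, and the queue are illustrative assumptions, not the actual BLServer code:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Illustrative only: receive one request inside a JMS transaction so that a
// failure puts the message back on the queue instead of losing it.
public class TransactedRequestWorker {

    private final ConnectionFactory factory;
    private final Queue requestQueue;

    public TransactedRequestWorker(ConnectionFactory factory, Queue requestQueue) {
        this.factory = factory;
        this.requestQueue = requestQueue;
    }

    public void processOne() throws Exception {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // Transacted session: the receive and anything we do with it are one unit of work.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageConsumer consumer = session.createConsumer(requestQueue);

            Message request = consumer.receive(5000); // wait up to five seconds
            if (request != null) {
                try {
                    forwardToProvider(request); // push through one of the provider's connections
                    session.commit();           // consumed once and only once
                } catch (Exception e) {
                    session.rollback();         // redelivered for another attempt
                }
            }
        } finally {
            connection.close();
        }
    }

    private void forwardToProvider(Message request) throws Exception {
        // Translate to the provider's proprietary format and send it (omitted).
    }
}
```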
By using a cluster, we would be able to handle the necessary load of incoming requests, queue them, and run them through the provider BLServer, keeping it as fully loaded as possible over the finite number of available connections.
Aye, but here's the rub. We had a cluster of WebLogic engines pulling messages from the queue. How do you go about maintaining message order while at the same time leveraging the parallelization of the cluster to handle the load? Consider what happens if you have two messages in the queue, in order. The first is a stock sell that results in an increase in available cash. The second is a buy that uses that cash to buy some other stock. You can see that these must be processed in order. If one server in the cluster grabs the first from the queue, and another grabs the second, there's no guarantee that the sell will be handled first by the provider BLServer. Therefore, we have to guarantee the order in our BLServer service.
How to do that? The solution became more obvious once we realized that total message ordering was not required. What's really required is that messages within certain groups be correctly ordered. These groups are identified by key, and all messages for a given key must be ordered. Depending on the type of request, that key might be a CUSIP, might be an account number, or some other identifier.
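Just to illustrate the idea (the actual key rules aren't spelled out here, so the names and choices below are hypothetical), deriving the key and a coarse group from a request might look like this:

```java
// Hypothetical sketch: pick an ordering key based on request type, then map it
// to a coarse group so that a whole bucket of keys can be leased to one engine.
public final class OrderingKeys {

    private OrderingKeys() {
    }

    public static String orderingKey(String requestType, String accountNumber, String cusip) {
        if ("HOLDING_CHANGE".equals(requestType)) {
            return cusip;          // keep all changes to one security in order
        }
        return accountNumber;      // keep all changes to one account in order
    }

    public static String keyGroup(String orderingKey) {
        // First two characters, e.g. "00" through "99", giving groups to lease out.
        return orderingKey.substring(0, Math.min(2, orderingKey.length()));
    }
}
```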
Now message ordering with scalability becomes simpler. If all messages for a certain key are handled by a given engine, then we can guarantee ordering by pulling a single message at a time from the queue, and processing it to completion before beginning the next message. Other engines in the cluster will be doing the same thing at the same time for other keys. Thus, we gain some scalability.
Oooh, but we've just introduced Single Points Of Failure (SPOFs) for each key. If a given server handles keys that start with '17', for example, and that server crashes, then messages for those keys won't be processed, and we have failed to meet our availability requirements. That's where the second bit of creativity came into play. We employed a lease mechanism. Leases were stored in a highly-available database. Upon startup, a given engine would go to the database and grab a lease record. Each lease represented a group of keys. For example, a lease record might exist for all records starting with the range '00' to '03'. An engine starts up, finds that this lease is the next available, and grabs it. In order to 'grab' a lease, an engine updates the lease with a time in the not-too-distant future, say, five minutes out. As long as the engine is up, it will continue to update the lease every two minutes or so with a new time. If the engine crashes, the time expires, and some other engine grabs the lease.
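Here's a rough sketch of that lease dance in plain JDBC; the LEASE table, its columns, and the timings are assumptions for illustration, not the original schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import javax.sql.DataSource;

// Sketch of the lease idea, against a hypothetical LEASE table
// (KEY_RANGE, OWNER, EXPIRES). Not the original implementation.
public class LeaseManager {

    private final DataSource dataSource;
    private final String engineId;

    public LeaseManager(DataSource dataSource, String engineId) {
        this.dataSource = dataSource;
        this.engineId = engineId;
    }

    /** Try to grab the next expired (or never-claimed) lease; returns its key range, or null. */
    public String grabLease() throws Exception {
        Timestamp now = new Timestamp(System.currentTimeMillis());
        Timestamp expiresAt = new Timestamp(System.currentTimeMillis() + 5 * 60 * 1000); // five minutes out
        try (Connection c = dataSource.getConnection();
             PreparedStatement find = c.prepareStatement(
                 "SELECT KEY_RANGE FROM LEASE WHERE EXPIRES < ? ORDER BY KEY_RANGE");
             PreparedStatement claim = c.prepareStatement(
                 "UPDATE LEASE SET OWNER = ?, EXPIRES = ? WHERE KEY_RANGE = ? AND EXPIRES < ?")) {
            find.setTimestamp(1, now);
            try (ResultSet rs = find.executeQuery()) {
                while (rs.next()) {
                    String range = rs.getString("KEY_RANGE");
                    claim.setString(1, engineId);
                    claim.setTimestamp(2, expiresAt);
                    claim.setString(3, range);
                    claim.setTimestamp(4, now);
                    // The guarded UPDATE only succeeds if the lease is still expired,
                    // so two engines can't grab the same range.
                    if (claim.executeUpdate() == 1) {
                        return range;
                    }
                }
            }
        }
        return null;
    }

    /** Called every couple of minutes while the engine is healthy. */
    public void renewLease(String keyRange) throws Exception {
        try (Connection c = dataSource.getConnection();
             PreparedStatement renew = c.prepareStatement(
                 "UPDATE LEASE SET EXPIRES = ? WHERE KEY_RANGE = ? AND OWNER = ?")) {
            renew.setTimestamp(1, new Timestamp(System.currentTimeMillis() + 5 * 60 * 1000));
            renew.setString(2, keyRange);
            renew.setString(3, engineId);
            renew.executeUpdate();
        }
    }
}
```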
As long as an engine has a lease for a given range, it can use a selector to receive messages from the queue for that given range. We now have scalability, message ordering and high availability. Everybody say, "Woah, that's so cool!"
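As a sketch of that last step, assuming the producer stamps each request with a (hypothetical) KEY_GROUP string property, the consumer for a leased set of groups might be built like this:

```java
import java.util.List;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch only: once an engine owns the lease for the '00'-'03' groups, it consumes
// just those messages via a JMS selector on a hypothetical KEY_GROUP string property.
public class LeasedRangeConsumer {

    public static MessageConsumer consumerForGroups(Session session, Queue requestQueue,
                                                    List<String> groups) throws JMSException {
        StringBuilder selector = new StringBuilder("KEY_GROUP IN (");
        for (int i = 0; i < groups.size(); i++) {
            if (i > 0) {
                selector.append(", ");
            }
            selector.append('\'').append(groups.get(i)).append('\'');
        }
        selector.append(')');
        // e.g. "KEY_GROUP IN ('00', '01', '02', '03')"
        return session.createConsumer(requestQueue, selector.toString());
    }
}
```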
At this point, we've solved a significant technical issue that should be captured as an architectural pattern. We never did that. It may be that this solution is documented somewhere as a pattern, but I'm not aware of it.
At the end of the project I moved on to other things, and left BLServer in the capable hands of my friend Brian S. I heard some months down the road that the service was in active use in production, and had seen only one minor bug. I've always been proud of the product quality that our team delivered. We went through at least four or five variations of possible solutions before arriving at the one described above. In each case, we'd get into the details of the solution only to ask, "yeah, but what happens if..." and realize that we were close, but had some problem because of the distributed nature of the environment, or whatever. It was very satisfying to finally arrive at the elegant solution that we delivered.
Thursday, November 20, 2008
On Software Development Metrics
As software developers, we need to be careful with metrics. I think there is an understanding that it's possible to cause more harm than help with an ill-chosen approach to metrics. One of the concerns is metrics that are susceptible to gaming. To me, a concern at least as great as gaming is measuring the wrong things.
The primary opportunity for measuring the wrong thing is measuring mechanisms instead of results. For example, measuring pairing is measuring a mechanism. Measuring the degree of siloing is measuring a result. Measuring testing is measuring a mechanism. Measuring code quality, or better yet product quality, is measuring a result. It's the results that we care about more than the mechanisms. The mechanisms are a means to an end, not the end in themselves.
It's critical to measure the result rather than the mechanism. The first reason for this is that it's less susceptible to gaming. Consider measuring the number of tests versus the number of support calls received. Certainly, both can be gamed. But it's far easier to artificially jack up the number of tests. The real desire is to produce a system of great quality, which is subjective. It's harder to measure these subjective things, but it's worth it.
The second reason is that if we measure mechanisms, we'll miss important components of producing a quality system. So, for example, tests are a mechanism that help us deliver quality systems, but not the only mechanism. What we really care about is the quality of the delivered product. What happens if we measure the desired results instead of the means to achieve that result? First, it's harder to measure, and the outcome is more subjective. But, by measuring that, we also indirectly measure all those little things that we do as developers to make sure we don't get those 2AM calls, such as perusing the code a bit before check-in, or being well-read on pitfalls and benefits of various patterns.
The third reason for measuring the result instead of the mechanism is that measuring the mechanism creates a box to think in. To take a trivial example, if we take as a metric the number of JUnit tests, we'll never be free to consider alternatives. We'll always create JUnit tests, because that's what's measured. When the next great thing comes along, we'll be slower to adopt it, since it's not what we're measuring. We're thinking in a box. If we're measuring results, we will be more inclined to adopt new techniques as they come along, to the extent that they seem to provide a real contribution to product quality.
It's easier to measure mechanisms than results. The main reason for this is that mechanisms tend to be more quantifiable than subjective results. The ease of measuring mechanisms is why most companies do it this way, and remain mediocre. The rule of thumb is this: You'll get more of what you measure. If you want more of a certain technique, measure it, and you'll get more of it. If you want more product quality, measure that instead - whatever it takes - and you'll get more of that. When it comes down to brass tacks, you don't want more of certain mechanisms, you want better results.
Sunday, November 9, 2008
OT Middle Fork Trip Report - 11/1 to 11/2
This was my second hike on the OT, and my first solo hike of any length. I walked the Middle Fork section from the DD trailhead to Brushy Creek lodge, and it was beautiful weather. Surprised not to see more people out - you all missed a great weekend.
I left the DD trailhead at about 11 AM. Couldn't get down there earlier, unfortunately. Near the beginning of the walk I could hear an F15 overhead, and caught a few glimpses of it. He was doing some loops and rolls, as if he were training for a show or practicing evasive maneuvers. Not what I went in the woods to see, but pretty cool, nonetheless.
The trail is pleasant all the way, quite a few nice little creeks. Along this section it can be a ways between signs. There were a couple times where I might have wondered if I were still on the trail, except for how well-maintained it is. Most of the trail is shaded by woods, too, which is nice. There were a couple groups ahead of me, but never caught up to them enough to see them, just saw their shoe prints. Just before crossing the bridge at MF7, there was a little persimmon tree. A shake knocked a few off (if they drop from a shake, they're ripe), so I got to have a couple persimmons as a sweet treat on the trail. There were quite a few deer droppings along the trail, and they almost always had some persimmon seeds in them. Met Dan and Richard at the primitive camp there at MF7, and we chatted a bit. They saw a couple other groups on the trail.
I was planning to camp somewhere between MF8 and MF9, but there was about 2 hours of light left, so I pushed on, and ended up camping at the bottom of the hill by MF12. It was a little chilly down there, but I was warm enough to get some good sleep. This was my first night out after completing the net-tent part of my Ray-Way tarp. It's slippery sleeping on the net-tent floor, and I had just a slight incline, which meant a couple adjustments in the night.
Also new on this trip was my Cat Stove (http://coders-log.blogspot.com/2008/10/cat-stove.html), which worked pretty well. I had a simple menu. For each meal, I had some multi-grain pasta, some pre-cooked Bob Evans breakfast sausage, and some cheddar cheese. Fuel up the stove, pour in a cup of water (that's up to my first knuckle). Get the water boiling, then add the pasta, put the meat and cheese on top, cover and cook. Tasty and provides some good energy for the trail.
Second day I started out at first light, headed up the hill, and warmed up quickly. Continued to see tracks from people ahead of me on the trail, but the only other people I met were a group of four on horseback going the other way. They had seen someone out who was on his ninth day on the trail.
I like Middle Fork Section. I did a hike with a friend on the Highway 21 to Devil's Tollgate section in August, and that was pretty dry and rocky, with some pretty aggressive climbing. A nice hike, don't get me wrong, but a lot more work. :) By contrast, Middle Fork is gravelly but not rocky, has plenty of water, and gentle grades throughout. The last climb before descending to Brushy Creek takes you up about 300 feet, but it's gentle enough that it's not a killer. I cooked and ate lunch at the bottom after crossing the creek, and that gave me enough energy to complete the hike.
Remember not to drink the water at Strother Creek. Check the map and fill up with water before getting there. It's not a terribly long stretch without water, but just in case.
This was also my first hike after trading in my New Balance trail running shoes for my Chaco Redrock shoes. I definitely like the Chacos. They're heavier, but don't show any deterioration after 25 miles on the trail, like the NBs did.
The hike ended at 2PM at Brushy Creek, which looks like a nice place. Friendly folks, and all that. Rested there and waited for my ride to pick me up. All in all, a very, very nice hike. I highly recommend this section as a starter hike, too. You can start at DD, and there's a trailhead at 12 miles, 20 miles, and 25 miles, so you can bug out early if you get in over your head. There are also numerous gravel road crossings, if it comes down to that.
Also note that cell phone coverage is very sparse out there, so it's a tenuous life-line, if that's what you're counting on.
If you're on Facebook, see the pictures here:
http://www.facebook.com/photos.php?id=1295841432#/album.php?aid=10650&id=1295841432
Tuesday, October 28, 2008
Installing OpenCV on Fedora 8
I've just finished installing and documenting this process on our company blog: OpenCV on Fedora 8
Sunday, October 26, 2008
Favorite Projects Series, Installment 1
I've been spending quite a bit of time these days thinking about what exactly makes a software project a good project to be on. Ever since I became a computer geek in 1980 with the purchase of an Ohio Scientific C1P from Commsci Corporation in Manchester, MO, I've loved working with new technology, and using new, cool stuff. There are a number of projects that have been great experiences from that perspective, and I'll get to those later. But there's one that I always talk about when I'm asked what some of my favorite projects are, and we should look at that one first, and at what it was that made it one of those most memorable projects.
In 1994 I was still working at Washington University, and more specifically, was doing some work for the School of Arts and Sciences. The plan was that I would sit and work in the department instead of being at a desk in the IT department. I had been up there for a little while, and we would identify different things that needed addressing.
So it turned out that there was this task that Cindy N. was responsible for that had to be done every year. It had never been automated, so she was spending two-and-a-half weeks every year manually completing the task. She dreaded it for weeks ahead of time every year, and it made an otherwise happy job miserable for several weeks.
Every year, Cindy had to review prospective students' records online, evaluate what student aid they were eligible for, then type up a letter inviting them to apply and detailing this information. As you can imagine with all of the manual work involved, there were going to be mistakes, and that's part of what she agonized over.
So, applying the technology at the time, we wanted to assemble the information available from an IBM mainframe to produce all of the letters and mailing labels needed. With today's technology, that's quite easy, given how everything's networked together. Even then, it was NOT rocket science. I had learned C, and wanted to apply it to the problem of massaging the data into CSV format. We had a mainframe running the CP/CMS timesharing system (an early implementation of virtualization, which is in common use today), and that machine was the only place where we could run a C program.
We had to copy data from one mainframe to the CP/CMS mainframe, and if I recall correctly, we loaded the data into a FOCUS database, then extracted it from there onto the CP/CMS system. The mainframe with the data was not directly accessible from the Windows PC where we would run the mail merge into Word, but the CP/CMS mainframe was.
In CP/CMS we ran a C program that would extract the data and produce a CSV-formatted file, which we then downloaded to the Windows PC.
On the Windows PC, we wrote a non-trivial Basic for Applications script that would choose the appropriate paragraphs to include for each letter, depending on what aid would be received, and apply it to that letter, along with supporting detail. We would run the script to produce a single, long document that could be visually verified for accuracy.
Cindy would run this process, verify the results, and print the letters. What had been a painstaking, error-prone (no fault of Cindy's) two-and-a-half week process became a one-and-a-half day process that produced much more accurate and timely results. What had been a miserable, dreaded, yearly task became just another simple task to be performed.
Not surprisingly, this changed everything for Cindy. Though not technically the most challenging project I've worked on, it is one of the most satisfying projects I've ever done. Why? Well, like many people that get into software development (or many other careers), I want to change the world, and change it for the better. Realistically, I probably won't do that, but I can change my little corner of the world, and this is one project where I did change my little corner for the better. I worked directly with a user, understood the need, met the need, and saw the benefit that I provided, one human being to another. In some small way, one person's life was better because of what I did, and I got to see it happen.
One of the core tenets of today's Agile development processes is continuous, daily user interaction. I've seen this be effective since 1983, when my software development career began at Washington University with my first user, Neldeane P.
Thursday, October 23, 2008
Cat Stove
Okay, so for all of you out there that thought, "mmmmm, cat," shame on you. They're not that tasty. ;)
I think this guy is the inventor: THE CAT FOOD CAN ALCOHOL STOVE. I read his instructions, but then read and followed these to make my stove: SGT Rock's Hiking H.Q. - Cat Stove.
The instructions were easy to follow, and an evening's work resulted in a new cat stove that fits nicely into my backpacking cook pot. My stove weighs 68.5 grams, which is about 2.4 ounces. I could potentially reduce that by trimming the hardware cloth, since its squares are 1/2 inch on a side, and probably 1 inch on a side would do the trick.
Saturday, October 18, 2008
Chaco Shoes for Backpacking
So, now that I'm getting back into backpacking after a long hiatus - like, decades - I've been trying to learn from Ray Jardine's practices and apply what works for me. I've already mentioned making his Tarp and Net-Tent kits.
Another piece of advice from Ray is about footwear. There's probably no more important gear choice you can make than what to put on your feet. Ray recommends hiking in trail running shoes. I agree. On the other hand, I don't push it, because some people genuinely need ankle support. But if you switch to trail running shoes, there are some advantages. One, obviously, is weight. Trail shoes beat boots hands-down. Another is flexibility, and that's very important. Again, trail shoes beat boots. Another is support, and again, shoes beat boots. Another is shock absorption. Again, shoes over boots.
So, about weight. The weight on your feet is more important than the weight on your back. Why? I think it's because you're constantly accelerating and decelerating your feet. Any weight there takes more of an energy toll than the weight on your back, which maintains a relatively constant speed. I'm a big fan of New Balance shoes. I switched to them for daily use some years ago when I found that a pair would last me a couple years instead of one year, like many other brands. I also like to buy Made in the U.S.A. when I can, so I can ride my high horse when my job gets outsourced. ;) New Balance gives me more of an opportunity to do that. New Balance shoes are available in 2E width, which I need.
Given that background, I went and bought a pair of MT908's. They're advertised at about 12 ounces, which is close to Ray's ideal maximum of 11 ounces. They're made in China, unfortunately. I wore them on several hikes at a park nearby that sports woods, hills, and trail loops where I can get a 3 to 6 mile hike in fairly easily. I then wore them on an overnighter, about 15 miles. Total mileage on the shoes was probably less than 100 miles. And guess what? The soles started falling apart. Follow the link and look at the picture of the soles. You'll see that there are different colors. All of those different colors are actually pieces that are glued on. Duh. What were they thinking? Those pieces were starting to fall off. I'm going to go to the New Balance store, expecting a fight when I try to return them.
But let me tell you this - New Balance understands customer service like NOBODY else does these days. I take the shoes back to the New Balance store, and the employee there says that this is unusual, and they haven't had that problem with this model. She asks if I would like a total refund, would I like to try a brand-new pair of the same shoe, or would I like to try a different model. So, I like the shoe, it's light and comfortable, so I try a new pair. No charge. I love those guys, and they have a customer for life.
Within seven miles, the new pair is falling apart. That's right, seven miles. Now, I'm not huge. I'm a little over six feet tall, about 195 pounds. Heavy, but the shoes should be able to handle it. Back to the New Balance store. This time, by luck of the draw, I'm talking to the store manager. Same level of customer service. This time, I opt for a full refund.
So, I was disappointed in the shoe. They're made in China, and honestly, they know that Americans today are not like the previous generation. Most of us (not me) are happy to buy crap, and pay good money for it, so they sell us shiny crap at high prices, and we say thank you. Having said this, New Balance customer service is Made in the U.S. of A., the old-fashioned way. I WILL go back to them, largely because of their customer service. New Balance, please make all of your shoes in America. Why not outsource to small towns like Steelville, Missouri? You can still save money over big city labor costs, and those small-town folks remember what quality is. I guarantee it.
So now, what to do for a shoe? I went to one store, and the guy tried to jam Nikes on my foot. Nikes only come in narrow and narrower, and I need a 2E. Moron. I went to another store, and found the Chaco Men's Redrock.
Here's what's right with the Chacos. Firstly, never in my life have I had a shoe where the arch of the shoe comes up and nestles in the arch of my foot. I never knew they were supposed to do that! The Chacos do. Wow, arch support. So that's what they meant by arch support. Oooooh.
Second, one-piece soles, stitched to the uppers. That's the way shoes are supposed to be made. There's no gluing in shoes.
The soles are some percentage recycled rubber from tires. Good for the environment, and that's a plus. And they've got good lugs for grip on the trail.
Also, all the standard stuff. Reasonable cushioning. Not as good as the NBs, but good. That's good for my knees. Breathable uppers, so that they walk dry after going through a creek. I also moved the laces so that the eyelets closest to my toes are not used. This gives me the nice, floppy, barefoot feel up front without losing the heel-hugging feel in the back. The shoe laces could be better, but I might swap them out for some of New Balance's bubble laces, which are great.
Okay, now the bad news. First, they're made in China. Not a show-stopper, but I'd like to keep shoe jobs here so that when someone's buying software development, my job stays here. Next, the weight. Chaco doesn't advertise the weight. I wear a men's size 9 in 2E width. My right shoe weighs 463.1 grams, or about 16.34 ounces. The left one weighs 472.2 grams, or about 16.7 ounces. Too heavy to be ideal for backpacking. They are noticeably heavier than the New Balance 908s. Finally, when I asked about the return policy, it's not as generous as NB's. They'd deduct from the refund for wear-and-tear.
The Chacos are for me, at least for the time being. Chaco, here's what I have to say to you. First, open a plant in Steelville, Missouri. Can you tell I love the place? It's not my home town, but it has a special place in my heart, for a variety of reasons. Chaco, if you build a plant there, you will be able to make shoes cheaper than in Colorado. Not as cheap as China perhaps, but it keeps jobs here in the U.S. Second, see if you can make a shoe that's just right for ultralight backpackers. Take the Redrock, reduce the weight by five to six ounces, if possible, while keeping as much ruggedness as possible. And third, think about your return policy. If you're making a quality shoe, and you are in the Redrock, then you can be more generous. See if you can match NB's policy.
Conclusion: I'm going to wear the Chacos for now, because they're the best shoe I've found so far.
Thursday, October 16, 2008
Completing the Ray-Way Tarp and Net-Tent
Well, it's taken us a while, but my Ray-Way Tarp and Net-Tent are now complete; we finished the Net-Tent tonight. My wife Dot did the lion's share of the work - it's a fair bit of sewing, to be sure. It's really been a labor of LOVE on her part, too - she doesn't backpack.
So, every good ultra light backpacker (which I am not, but let's pretend) will want to know the weight. When I weighed the tarp before, I mis-weighed it by counting my 200 gram weights as 100 grams. Oops! Anyway, here are the correct figures.
Component | Advertised weight | My weight
---|---|---
Two-person tarp | 16.76 ounces (before sealing) | 17.84 ounces (505.8 grams)
8 Stakes | | 2.87 ounces (81.5 grams)
Net-Tent | about 12 ounces | 13.85 ounces (392.5 grams)
TOTAL | | 34.56 ounces (979.8 grams)
Also consider that with the Net-Tent completed, I no longer need the ground cloth, since the Net-Tent does double-duty, and that saves me 114.1 grams, or 4 ounces.
I added about 10 grams to the weight of the tent by swapping out all the brown flatlines for white cord. Why? Because I found that I absolutely cannot untie the brown flatline when wet.
PenWag and GWT update
So, I mentioned that I'm using GWT to develop my new web site PenWag. I ran into an interesting problem with GWT, so let's talk about that in case you run into the same problem. When you do GWT development, most of the time spent viewing your app is in "hosted mode," that is, using their custom browser. Your app runs there as a Java program. Once you've made some progress, you might deploy it to, say, Tomcat, and hit it with a full-fledged browser such as Firefox.
So, I'm tooling along throwing this app together, viewing it in hosted mode, and all looks good. It's looking the way I want it, so I deploy it to Tomcat and view it in Firefox. Hmm, no love. I can see most of the content, but there's a blank space where there's supposed to be a grid. I'll cut to the chase here - after a fair amount of digging, I found the problem in this method in a class used to populate the grid:
public List<T> getList() {
return hasNextPage() ? results.subList(0, pageSize) : results;
}
To my eye, it looks pretty innocuous. But no. Here's where the problem is. Apparently - I'm not sure why - "results.subList(...)" works fine in hosted mode as Java, but fails when it's compiled to JavaScript and viewed with Firefox. Ah, well. The fix was to move this logic to the server side, and just store the desired value that we will return with getList(). So now, the method looks like:
public List<T> getList() {
return results;
}
and all is well.
This was something of a difficult problem to solve. There was no error message to point to the problem, so I used a divide-and-conquer approach: I took out all the code, and the other surrounding components began to show up. Then I started to introduce pieces until it broke, and narrowed the problem down to subList. There ya go, hope this helps.
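For what it's worth, here's a rough sketch of the server-side trimming, so the client never touches subList() at all. This isn't the actual PenWag class - the names and the pageSize default are made up for illustration - but it shows the shape of the fix:
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not the real PenWag code. The page is trimmed on the
// server, where subList() is plain Java, and the GWT client just displays getList().
public class PagedResult<T> implements Serializable {
    private List<T> results = new ArrayList<T>();
    private boolean hasNextPage;
    private int pageSize = 10; // assumed page size, for illustration

    // Runs on the server before the object is sent over GWT-RPC.
    public void setResults(List<T> allResults) {
        this.hasNextPage = allResults.size() > pageSize;
        this.results = hasNextPage
                ? new ArrayList<T>(allResults.subList(0, pageSize))
                : allResults;
    }

    // Safe to call on the client; no subList() happens in compiled JavaScript.
    public List<T> getList() {
        return results;
    }

    public boolean hasNextPage() {
        return hasNextPage;
    }
}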
Wednesday, October 15, 2008
Global Warming, A Skeptic's View
Well, my friends, I am a Global Warming Skeptic. Is the Earth warming? I thought it was, but the more I read, the more the data seems inconclusive. Does that mean I think we should go on polluting the Earth? Well, no. In practical terms, we will, of course, continue to pollute. But I'll go out on a limb and say that less pollution is better than more pollution. There are enough valid reasons for this that I don't need your bogus Global Warming reasons to convince me.
After an unfortunately short conversation with a friend of mine who seems quite convinced of Global Warming, I sent him this email: Global Warming Letter. It includes a link to the Junk Science article on Global Warming. If you're still one of those folks who think Global Warming is on the verge of destroying the planet, and you can only stomach one Global Warming Skeptic article, by all means make it this one. They've done some good research on the subject.
Now, you may be saying to yourself, "But there's a strong consensus among scientists that Global Warming is real!" There are many ways to achieve consensus. One is through reason, but there are other techniques available. Consider the many countries where "democratic" elections re-install presidents with 100% support. That's right. The Global Warming exaggerators may employ less-than-scientific methods of persuasion to change your mind. They may threaten to make you unemployable if you don't agree with them, effectively excommunicating non-believers from the scientific community. All while they're griping that one Global Warming Exaggerator might have been threatened once, according to a rumor from a credible source. Here's Dr. Cullen's threat. The Exaggerators are now making ties between skeptics and neo-Nazism by referring to skeptics as "deniers." The name-calling wouldn't be there if they had a valid case, would it? Read this one, too: The Real Inconvenient Truth About Global Warming: Skeptics Have Valid Arguments
There are also some interesting articles on Wikipedia. This one is biased towards Global Warming Exaggeration: Global warming controversy. You rarely hear about the List of scientists opposing global warming consensus. Who knew there were scientists that disagree with the consensus?
You might even remember dire predictions that 2006 would be the worst hurricane season ever in the Atlantic, surpassing even 2005, and that this would provide further evidence of global warming. In 2005 there were thirty storms, of which fifteen were hurricanes. In 2006 there were nine storms, of which five were hurricanes. That's right, nine storms. Hardly worth even getting the Hurricane Center all geared up for the season. Is that proof that there is no global warming? Of course not. It might be an indication that the exaggerators couldn't predict Christmas with a calendar, much less the average temperature in 2100.
Reuters reports that the United States is "the world's largest greenhouse gas polluter accounting for nearly one quarter of all carbon emissions." That much may be true. What they're not telling you is that the U.S.'s "carbon uptake" matches our carbon output. The net effect? Zero. Contrast that with Japan, whose carbon output is seven times their uptake. Wow.
When I'm Not Coding
Sure, I'm a coder, but like many others, coding's not my only interest. Many coders are fine musicians. Not me, sadly, but there are a couple other hobbies I enjoy.
I've been bowhunting for deer for a long time. Unsuccessfully. And I mean a LONG time. For many years, the land I had access to was thinly populated with very smart deer. I had precious few opportunities. Sometimes I went a few years without even seeing a deer. For the past several years, I've been hunting with a friend whose family has well-populated land. They're nice enough to let me go up there, and I say thankya. Well, last year I got a new bow. A Mathews, actually, and they make a very fine bow. It's very quiet and smooth, not to mention beautiful in terms of craftsmanship and artistry. I had four clean misses last year, all easily make-able shots. This year, I finally got one. It's been a long time coming.
Current Projects
I have a couple software projects underway that I do outside of work. The first, DonsProxy, is mostly of interest to developers of web applications. I didn't name it that, but people started using that name for the original version, which was developed over some years at a client site. The new version is a complete re-write, and is available at http://donsproxy.sourceforge.net/. I haven't made any updates for a couple months. The next big task on this is to do more work on the SSL/HTTPS side, and get that working better.
The project I've been active on lately is for a web site with somewhat broader appeal. It's for anyone who likes to write, but for whatever reason isn't diving into a book project or short stories for some magazine. It's also for readers of those stories, and it can be found at https://penwag.com/. This site is in its early stages, so don't expect much yet. It's just that I wanted to share it while it's under development, so you can see the progress, much like the user on an Agile project would.
My Life with SOA
I was recently asked to write about my history using SOA technologies. I was asked to write a paragraph. I failed. :) There's a lot to cover, and here it is:
I got my start with service-oriented architectures in 1995 at Southwestern Bell Telephone, well before SOA was a buzzword, on a project by the name of Datagate. At the time, there were really no tools or libraries to help with the process, so we built everything in C from the ground up. There are a number of elements that are immediately associated with SOA in today's world, such as SOAP and UDDI. In Datagate, there were corresponding technologies. The exchange protocol that we used was a proprietary format designed to be concise and limit bandwidth use. Today, industry standards such as SOAP exist to allow interoperation and tools development. Most SOA environments rely on a Directory Service to allow client applications to discover active service instances. In today's world, that's typically UDDI; in Datagate we built our own directory service that complied with the X.500 standards of the time. Current SOA implementations provide services hosted by an application engine such as WebLogic or JBoss. Datagate services ran as independent processes, but were managed by a central Resource Manager that monitored and reported the health state of infrastructure and business services. Eventually, we added a PKI to the system, and again, there were few tools at the time, so much of what we did we had to build ourselves.
One of the keys to success within a SOA environment is a mentality, a way of thinking about services, that is different from building stand-alone applications. SOA provides opportunities for high availability and scalability that standalone applications cannot provide. But the real power of SOA, the idea that makes it really exciting, is the idea that we can capture complex business rules one time in a service, and re-use that service over and over. If we have a service that captures, for example, the rules that allow a customer to buy a certain stock, we can capture that once in a business service. That business service is then responsible for guarding the data. We can test that service to a high standard, and any client that wants to buy a stock will go through that service, using well-tested logic. If we then decide to create a new application that requires the same logic, we don't have to recreate the logic that's already there. We can re-use it. This is important because we already have a service that we've been using, and trust to do the right thing. It's also important because it represents a cost savings to the business. We don't have to re-write, re-test and re-deploy a new implementation. This lets us provide the latest flashy interface, while using the same business logic that we've come to trust over time.
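To make that concrete, a reusable business service along those lines might look something like the sketch below. This isn't code from any project I've mentioned - the names are invented for illustration - but it shows the idea of one well-tested place where the rules live:
import java.math.BigDecimal;

// Illustrative sketch of a reusable business service interface. Every client
// application - web, desktop, or batch - goes through this one service, so the
// purchase rules are written, tested, and trusted in exactly one place.
public interface StockPurchaseService {

    // Applies the business rules (account status, buying power, restrictions)
    // and either records the purchase or rejects it with a reason.
    PurchaseResult buyStock(String accountId, String tickerSymbol,
                            int shares, BigDecimal limitPrice);
}

// Simple result object returned to any client of the service.
class PurchaseResult {
    private final boolean approved;
    private final String reason;

    PurchaseResult(boolean approved, String reason) {
        this.approved = approved;
        this.reason = reason;
    }

    public boolean isApproved() { return approved; }
    public String getReason()   { return reason; }
}
The particular interface doesn't matter; what matters is that every new client reuses the same rules instead of re-implementing them.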
I started at my next employer, Connectria, in 1998, and worked on-site at A. G. Edwards, Inc. for six and a half years. We did a number of projects there using SOA technologies as they developed into what they are today. My first SOA project there was a project named ClientOne, which delivered a new Broker Workstation to all of the brokers around the United States. We mentored application teams as they wrote thin-client applications that were delivered through the Broker Workstations.
My next big SOA project there was AGEconnect - the project which delivered the website agedwards.com for many years. I was an architect on this project's architecture teams through a couple major iterations. In the first iteration, we delivered a SOA-based architecture using two application engines, Dynamo and Tengah (now WebLogic). In the second iteration, we completely re-architected the framework to be based in WebLogic. Many supporting technologies were required and included in the new architecture, including LDAP, Apache HTTPD, F5's Big-IP appliance, and WSD appliances. We hosted the system on Sun hardware. There were four WebLogic engines each on four Sun machines in the home office, with the same on the failover site. We rolled out with about 80,000 clients on the system, which eventually grew to 300,000 or more.
On my final project at A. G. Edwards, I was the team leader for the BLServer component of their Gateway project. The Gateway project was focused on migrating some key functionality to an external provider. This provider was using some dated technologies to deliver content over the network, specifically, proprietary protocols. In addition, this external provider had a nightly four-hour update window during which we needed to continue to accept updates. BLServer was one of a collection of services designed to wrap those proprietary services with standards-compliant services. BLServer included at its core a clustered message-driven web service that queued requests into JMS queues (the provider was Tibco) and then applied them in order during the external provider's uptime. The throughput requirements (about 300,000 transactions per 8-hour day) and the ordering requirements that exist for updating clients' financial accounts made this a particularly interesting and challenging project.
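As a rough illustration of that queue-now, apply-later idea - not the actual BLServer code, which was clustered and Tibco-specific; the queue and the accountId property here are made up - the enqueue side with plain JMS looks something like this:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Illustrative sketch: accept an update now, queue it durably, and let a consumer
// drain the queue in order once the downstream provider is back up.
public class UpdateQueuer {

    private final ConnectionFactory factory;
    private final Queue updateQueue;

    public UpdateQueuer(ConnectionFactory factory, Queue updateQueue) {
        this.factory = factory;
        this.updateQueue = updateQueue;
    }

    public void enqueue(String accountId, String updatePayload) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(updateQueue);
            TextMessage message = session.createTextMessage(updatePayload);
            // Tag the message so updates for the same account can be applied in order.
            message.setStringProperty("accountId", accountId);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
The ordering was the hard part in practice; a property like the one above only helps if the consuming side applies messages for a given account one at a time.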
I started at my present employer, Asynchrony Solutions, Inc., in 2004, where I have also been a member of SOA projects. On the first, I was the team leader, delivering a system that could both push documents near to their point of use around the world and pull data from disparate systems and deliver it around the world. That is, a federated system. This system employed Model 2 servlets hosted under Tomcat and JMS to meet the demanding performance requirements on networks that are often slow or unreliable. The JMS provider in this case was originally ActiveMQ, but we replaced it with JORAM when ActiveMQ was found to have difficulty reconnecting after network failures.
The next was to demonstrate to a client in the healthcare industry how SOA might work for them. We delivered a pilot application that allowed clients to view their prescriptions, appointments, doctors and other information using a web browser. It allowed healthcare providers to review patient information using a browser on workstations, iPods, and Blackberries. This application demonstrated the capabilities of both SOAP-based and RESTful services, and both Java and Ruby clients.
The next project made available an open-source SOA stack for the Army that developers could use to implement SOA solutions, then deploy them within a production-level environment using primarily commercial implementations of SOA stack components, such as WebLogic and Systinet. The developers' stack included a service compliance tool that would alert them to possible governance violations before deployment, and a stack compliance tool that would allow them to replace components of the SOA stack and verify that they delivered the same functionality.
Hadoop
I'm now on my second project where we're using Hadoop/HBase and the Google Web Toolkit, both of which I'm happy to get a chance to use at work. Gives me an excuse to play, er, work with tools I'd use at home anyway.
The first Hadoop project was for a client in the health industry. They needed to provide doctors with easy access to DICOM images using web browsers. This required a conversion process from DICOMs that browsers cannot display, into browser-friendly formats such as JPG and AVI files. We used Hadoop to manage the conversion of large numbers of images from DICOM to JPG and to create varying sizes of images for thumbnails and such. We used HBase to store the images. The use of HBase on top of Hadoop greatly simplified the original approach, where we stored the images on HDFS. Finally, we developed a snazzy GWT front-end for the doctors so that they could upload and manage images on the system.
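A stripped-down version of that conversion step might look something like the mapper below. This is a sketch, not the project's code - the DicomConverter helper is a placeholder for whatever imaging library actually does the pixel work - but it shows how naturally the job fits MapReduce: each record is independent, so Hadoop can fan the conversions out across however many machines you have.
import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative map-only step: input is a study id plus raw DICOM bytes,
// output is the same id plus browser-friendly JPEG bytes.
public class DicomToJpegMapper
        extends Mapper<Text, BytesWritable, Text, BytesWritable> {

    @Override
    protected void map(Text studyId, BytesWritable dicomBytes, Context context)
            throws IOException, InterruptedException {
        // Copy out only the valid bytes; BytesWritable's backing array may be padded.
        byte[] raw = new byte[dicomBytes.getLength()];
        System.arraycopy(dicomBytes.getBytes(), 0, raw, 0, raw.length);

        byte[] jpeg = DicomConverter.toJpeg(raw);
        context.write(studyId, new BytesWritable(jpeg));
    }
}

// Placeholder for the real imaging library used on the project.
class DicomConverter {
    static byte[] toJpeg(byte[] dicom) {
        throw new UnsupportedOperationException("plug in a real DICOM library here");
    }
}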
The exposure to GWT was a great experience, and I was so impressed with it that I decided to use it for a web site I'm developing, PenWag. We use Agile-XP practices at work, and I try to follow those to the extent I can at home. So, yes, I have story cards, I have a continuously-demo-able product. PenWag is under development, go have a look at its current state. I'm just getting started, but thought I'd like to make it available all along the way.
The current Hadoop project is an R&D effort. Hadoop is relatively new to most companies. I'm starting to hear about opportunities to do Hadoop development on both coasts of the U.S. There are fewer such opportunities in the Midwest, but it's coming this way. Our company is preparing to be ready when it gets here. We already have some experience, but we're exploring the technology more thoroughly, because there's a lot we can do for our clients with this technology. Hadoop really, really simplifies the whole question of how to scale my app. Now, any problem I can express in MapReduce terms can be deployed to Hadoop. We can start with four or five commodity boxes, but could conceivably scale to 2,000 or 10,000 boxes if that's what the customer wanted. Assuming, you know, that they had a place for 10,000 Linux machines, and a way to cool them.