Today we had a little meeting to talk about the priorities for our next iteration. We had a really odd, fun experience.
We're trying to figure out how to implement a feature that none of our competitors' products have really addressed, but that the client needs. It's a bit tricky, so we were talking through the problem.
The team lead explained a possible solution to the problem. It sounded like a great idea, and I re-stated the problem back to him, but I had misunderstood what he said. He, in turn, heard my response incorrectly and thought that I had had a great idea. But the thing he thought I was saying actually was a great idea, and it solved the problem beautifully, elegantly. We all had a good laugh, since we had arrived at a terrific solution as we went back and forth misunderstanding each other.
So, whose great idea was it? Nobody's, really, and everybody's. So cool.
Now, of course, we have something of a puzzler - how do we effectively leverage our confusion to produce more groundbreaking ideas?
Friday, December 19, 2008
Sunday, December 7, 2008
Windows Vista And My Network
Well, if you've been paying attention to Apple's commercials and your friends with new Windows computers, you may have heard that Windows Vista has issues. Here's one that got me.
When I hooked up a new Windows Vista computer to my wireless hub, it asked me for a PIN. Hm. That's interesting; never seen that before. It told me to find the PIN on the underside of the WRT54G2 wireless hub, so I got it and entered it. It showed me a network SSID, which I carelessly clicked through. Suddenly, all the other devices hooked to the wireless network were dropped. Hmmm, that's weird.
Here's what happened. Apparently, using the PIN gives your computer complete control over the network hub. It changed the network SSID, randomly generated a new WPA shared key, and reset it. At this point everything else was dropped. What a pain! Shame on me for missing the fact that it was going to change the SSID, but surely I don't want the shared key changed, blocking out all other network devices. Yuck!
Bridging WAP54G and WRT54G2
Let's start off by stating the goal: I have a network in a room off the kitchen where my DSL comes into the house. Downstairs is an XBox 360 connected to the family TV. Rather than run a cable down to the basement, I decided to set up a wireless bridge between the basement and the network upstairs.
Despite the warnings from Linksys to the contrary, you can use WAP54G to bridge to devices besides other WAP54Gs. In my case, I have a WAP54G in the basement bridged to the WRT54G2 upstairs. It's not too hard if you follow these instructions.
First, be very, very careful not to follow the instructions provided by Linksys. It's easy to accidentally read them, thinking they'll tell you what you need to do.
Next, hook a cable directly from your PC to the WAP54G. Configure your PC to have a static IP address of 192.168.1.200 (most other addresses on 192.168.1.x will work, too). Open a browser and go to 192.168.1.245, which is the default address of the WAP54G. You will be asked for a user ID and password, and here the Linksys instructions are wrong. What actually works: leave the user ID blank and use a password of "admin."
In the Setup tab, click on "AP Mode." Select the radio button next to "Wireless Bridge." The WAP54G needs the MAC address of the WRT54G2, which is printed on the underside of the WRT54G2. Enter it in the first box under "Remote Wireless Bridge's LAN MAC Addresses" and click "Save Settings."
At this point, I reconnected my PC to the network, and moved the WAP54G downstairs. I plugged in the WAP54G, connected the XBox 360 to the WAP54G, and everything's good to go. For me, the XBox 360 won't dynamically pick up an address, so it's statically configured to 10.5.128.10. Your mileage may vary.
Friday, December 5, 2008
Favorite Projects Series, Installment 2
I chose the previous project in this series as a favorite for the impact it had on the user. I consider this second project a favorite for the interesting technical challenges it presented. While it certainly had impact for a lot of users, that impact was far less visible to me.
I led a team of about three developers on this project at A.G. Edwards. The project was named BLServer after a service by the same name that we used from an external provider. The provider's service was implemented as a COBOL program that handled requests over a limited number of network connections. It had a number of limitations that we needed to overcome, which we did by wrapping it with a message-driven Web Service running in a WebLogic cluster.
The function of the BLServer service was to receive messages for account creation and modification, including changes in holdings of various securities. The provider's BLServer had a nightly maintenance window (I think it was about four hours), and used a proprietary message format. It was secured by using a dedicated leased line and a single common password.
Our wrapping BLServer service was required 1) to expose a standards-based (SOAP) interface, 2) to provide 24x7 uptime, 3) to preserve message ordering, 4) to secure access to the service without exposing the single common password, and 5) to queue requests during maintenance windows, delivering them when the provider's BLServer became available. There were also scalability and performance requirements which, in combination with the message ordering requirements, drove us to an interesting solution. I'm not sure about the exact scalability numbers, since that was four years ago. If I remember correctly, we initially had to be able to handle about 300,000 requests during a normal 8-hour business day, with the ability to handle peak loads of around 1.5 million per day.
The first benefit that our service provided was to expose a standards-based (SOAP) interface, and interact with the provider BLServer which took requests and delivered responses using a proprietary protocol and message format. Our service was then used by application Web Services to provide customer value.
In order to meet the scalability and availability requirements, we proposed standing up a small cluster of WebLogic engines to host a BLServer WebService. This WebService would receive requests and (using JMS) queue them for processing. Responses would then be queued for later retrieval by the calling services. By queuing requests in this way, we could use the transactionality of the JMS provider to guarantee that each message was processed once and only once. Furthermore, we could queue up a backlog of messages and feed them through the finite number of connections made available by the provider BLServer.
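The once-and-only-once guarantee described above leans on the JMS provider's transactions: a message received inside a transaction goes back on the queue if processing fails, and disappears only on commit. Here's a toy Java simulation of that behavior — an in-memory deque stands in for the JMS provider, and the class and method names are mine, not the production code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

// Toy model of transacted message consumption: on failure the
// "transaction" rolls back and the message stays queued for redelivery.
class TransactedQueue {
    final Deque<String> queue = new ArrayDeque<>();

    void send(String msg) {
        queue.addLast(msg);
    }

    // Returns true if the message was processed and committed,
    // false if processing failed and the message was rolled back.
    boolean receiveAndProcess(Predicate<String> processor) {
        String msg = queue.peekFirst();   // receive inside the transaction
        if (msg == null) return false;
        if (processor.test(msg)) {
            queue.removeFirst();          // commit: message is consumed
            return true;
        }
        return false;                     // rollback: message stays queued
    }

    public static void main(String[] args) {
        TransactedQueue q = new TransactedQueue();
        q.send("SELL 100 XYZ");
        q.receiveAndProcess(m -> false);  // fails: rolled back, still queued
        q.receiveAndProcess(m -> true);   // retry succeeds: committed
    }
}
```

In the real service, the "processor" step was the call out to the provider BLServer, and WebLogic's JMS implementation handled the commit/rollback bookkeeping.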
By using a cluster, we would be able to handle the necessary load of incoming requests, queue them, and run them through the provider BLServer, keeping it as fully loaded as possible over the finite number of available connections.
Aye, but here's the rub. We had a cluster of WebLogic engines pulling messages from the queue. How do you go about maintaining message order while at the same time leveraging the parallelization of the cluster to handle the load? Consider what happens if you have two messages in the queue in order. The first is a stock sell that results in an increase in available cash. The second is a buy that uses that cash to buy some other stock. You can see that these must be processed in order. If one server in the cluster grabs the first from the queue, and another grabs the second, there's no guarantee that the sell will be handled first by the provider BLServer. Therefore, we had to guarantee the order in our BLServer service.
How to do that? The solution became more obvious once we realized that total message ordering was not required. What's really required is that messages within certain groups be correctly ordered. These groups are identified by key, and all messages for a given key must be ordered. Depending on the type of request, that key might be a CUSIP, an account number, or some other identifier.
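The key derivation might look something like the following sketch. The request type names and fields here are invented for illustration; I don't know the actual BLServer message format:

```java
// Hypothetical sketch of deriving an ordering key from a request.
// Requests that touch the same account (or the same security) map to
// the same key, so they land in the same ordered group.
class OrderingKey {
    static String keyFor(String requestType, String accountNumber, String cusip) {
        switch (requestType) {
            case "ACCOUNT_CREATE":
            case "ACCOUNT_MODIFY":
                return "ACCT-" + accountNumber;   // order by account
            case "HOLDINGS_UPDATE":
                return "CUSIP-" + cusip;          // order by security
            default:
                throw new IllegalArgumentException("Unknown request type: " + requestType);
        }
    }

    public static void main(String[] args) {
        // Two modifications to the same account share a key, so one engine
        // will process them in submission order.
        System.out.println(keyFor("ACCOUNT_MODIFY", "12345678", null)); // ACCT-12345678
    }
}
```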
Now message ordering with scalability becomes simpler. If all messages for a certain key are handled by a given engine, then we can guarantee ordering by pulling a single message at a time from the queue, and processing it to completion before beginning the next message. Other engines in the cluster will be doing the same thing at the same time for other keys. Thus, we gain some scalability.
Oooh, but we've just introduced Single Points Of Failure (SPOFs) for each key. If a given server handles keys that start with '17', for example, and that server crashes, then messages for those keys won't be processed, and we have failed to meet our availability requirements. That's where the second bit of creativity came into play. We employed a lease mechanism. Leases were stored in a highly-available database. Upon startup, a given engine would go to the database and grab a lease record. Each lease represented a group of keys. For example, a lease record might exist for all records starting with the range '00' to '03'. An engine starts up, finds that this lease is the next available, and grabs it. To 'grab' a lease, an engine updates the lease with a time in the not-too-distant future, say, five minutes. As long as the engine is up, it continues to update the lease every two minutes or so with a new time. If the engine crashes, the time expires, and some other engine grabs the lease.
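The lease idea above can be sketched in a few lines of Java. This is a minimal in-memory model — a `ConcurrentHashMap` stands in for the highly-available lease database, and the class, field, and timing constants are illustrative, not the production code:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the lease mechanism: each key range is leased for a
// few minutes at a time; a healthy engine keeps renewing its lease, and
// a crashed engine's range is reclaimed once its lease expires.
class LeaseTable {
    static final long LEASE_MS = 5 * 60 * 1000;   // grab for five minutes

    // key-range id -> lease expiry timestamp (millis); 0 means never held
    final ConcurrentHashMap<String, Long> leases = new ConcurrentHashMap<>();

    LeaseTable(String... ranges) {
        for (String r : ranges) leases.put(r, 0L);
    }

    // An engine grabs the first range whose lease has expired. The atomic
    // replace() plays the role of the database's conditional update, so
    // two engines can't grab the same range at the same instant.
    Optional<String> grab(long now) {
        for (Map.Entry<String, Long> e : leases.entrySet()) {
            if (e.getValue() < now
                    && leases.replace(e.getKey(), e.getValue(), now + LEASE_MS)) {
                return Optional.of(e.getKey());
            }
        }
        return Optional.empty();
    }

    // Called every couple of minutes while the engine is healthy.
    void renew(String range, long now) {
        leases.put(range, now + LEASE_MS);
    }

    public static void main(String[] args) {
        LeaseTable table = new LeaseTable("00-03", "04-07");
        long now = System.currentTimeMillis();
        System.out.println(table.grab(now));   // one of the two free ranges
        System.out.println(table.grab(now));   // the other range
        System.out.println(table.grab(now));   // Optional.empty: all leased
    }
}
```

The real version did this with rows in a database so the lease state itself wasn't a SPOF; the renewal interval just has to be comfortably shorter than the lease duration.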
As long as an engine has a lease for a given range, it can use a selector to receive messages from the queue for that range. We now have scalability, message ordering, and high availability. Everybody say, "Whoa, that's so cool!"
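A JMS message selector for a leased range might look like the sketch below. One caveat worth noting: JMS selectors only allow `BETWEEN` on numeric expressions, so this assumes the producer stamps each message with an int property derived from the ordering key — the property name `ORDER_KEY_BUCKET` is my invention, not anything from the actual system:

```java
// Sketch of the selector an engine would use once it holds a lease for
// buckets lo..hi. Assumes producers set an int property ORDER_KEY_BUCKET
// (e.g., the first two digits of the ordering key) on every message.
class RangeSelector {
    static String selectorFor(int lo, int hi) {
        return "ORDER_KEY_BUCKET BETWEEN " + lo + " AND " + hi;
    }

    public static void main(String[] args) {
        // The engine holding the '00'-'03' lease would pass this string to
        // something like session.createConsumer(queue, selector).
        System.out.println(selectorFor(0, 3)); // ORDER_KEY_BUCKET BETWEEN 0 AND 3
    }
}
```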
At this point, we've solved a significant technical issue that should be captured as an architectural pattern. We never did that. It may be that this solution is documented somewhere as a pattern, but I'm not aware of it.
At the end of the project I moved on to other things, and left BLServer in the capable hands of my friend Brian S. I heard some months down the road that the service was in active use in production, and had seen only one minor bug. I've always been proud of the product quality that our team delivered. We went through at least four or five variations of possible solutions before arriving at the one described above. In each case, we'd get into the details of the solution only to ask, "yeah, but what happens if..." and realize that we were close, but had some problem because of the distributed nature of the environment, or whatever. It was very satisfying to finally arrive at the elegant solution that we delivered.
Thursday, November 20, 2008
On Software Development Metrics
As software developers, we need to be careful with metrics. I think there is an understanding that it's possible to cause more harm than help with an ill-chosen approach to metrics. One of the concerns is metrics that are susceptible to gaming. To me, a concern at least as great as gaming is measuring the wrong things.
The primary opportunity for measuring the wrong thing is measuring mechanisms instead of results. For example, measuring pairing is measuring a mechanism. Measuring the degree of siloing is measuring a result. Measuring testing is measuring a mechanism. Measuring code quality, or better yet product quality, is measuring a result. It's the results that we care about more than the mechanisms. The mechanisms are a means to an end, not the end in themselves.
It's critical to measure the result rather than the mechanism. The first reason for this is that it's less susceptible to gaming. Consider measuring the number of tests versus the number of support calls received. Certainly, both can be gamed. But it's far easier to artificially jack up the number of tests. The real desire is to produce a system of great quality, which is subjective. It's harder to measure these subjective things, but it's worth it.
The second reason is that if we measure mechanisms, we'll miss important components of producing a quality system. So, for example, tests are a mechanism that help us deliver quality systems, but not the only mechanism. What we really care about is the quality of the delivered product. What happens if we measure the desired results instead of the means to achieve that result? First, it's harder to measure, and the outcome is more subjective. But, by measuring that, we also indirectly measure all those little things that we do as developers to make sure we don't get those 2AM calls, such as perusing the code a bit before check-in, or being well-read on pitfalls and benefits of various patterns.
The third reason for measuring the result instead of the mechanism is that measuring mechanisms creates a box to think in. To take a trivial example, if we take as a metric the number of JUnit tests, we'll never be free to consider alternatives. We'll always create JUnit tests, because that's what's measured. When the next great thing comes along, we'll be slower to adopt it, since it's not what we're measuring. We're thinking in a box. If we're measuring results, we will be more inclined to adopt new techniques as they come along, to the extent that they seem to provide a real contribution to product quality.
It's easier to measure mechanisms than results. The main reason for this is that mechanisms tend to be more quantifiable than subjective results. The ease of measuring mechanisms is why most companies do it this way, and remain mediocre. The rule of thumb is this: You'll get more of what you measure. If you want more of a certain technique, measure it, and you'll get more of it. If you want more product quality, measure that instead - whatever it takes - and you'll get more of that. When it comes down to brass tacks, you don't want more of certain mechanisms, you want better results.
Sunday, November 9, 2008
OT Middle Fork Trip Report - 11/1 to 11/2
This was my second hike on the OT, and my first solo hike of any length. I walked the Middle Fork section from the DD trailhead to Brushy Creek lodge, and it was beautiful weather. Surprised not to see more people out - you all missed a great weekend.
I left the DD trailhead at about 11 AM. Couldn't get down there earlier, unfortunately. Near the beginning of the walk I could hear an F-15 overhead, and caught a few glimpses of it. He was doing some loops and rolls, as if he were training for a show or practicing evasive maneuvers. Not what I went into the woods to see, but pretty cool, nonetheless.
The trail is pleasant all the way, quite a few nice little creeks. Along this section it can be a ways between signs. There were a couple times where I might have wondered if I were still on the trail, except for how well-maintained it is. Most of the trail is shaded by woods, too, which is nice. There were a couple groups ahead of me, but never caught up to them enough to see them, just saw their shoe prints. Just before crossing the bridge at MF7, there was a little persimmon tree. A shake knocked a few off (if they drop from a shake, they're ripe), so I got to have a couple persimmons as a sweet treat on the trail. There were quite a few deer droppings along the trail, and they almost always had some persimmon seeds in them. Met Dan and Richard at the primitive camp there at MF7, and we chatted a bit. They saw a couple other groups on the trail.
I was planning to camp somewhere between MF8 and MF9, but there was about 2 hours of light left, so I pushed on, and ended up camping at the bottom of the hill by MF12. It was a little chilly down there, but I was warm enough to get some good sleep. This was my first night out after completing the net-tent part of my Ray-Way tarp. It's slippery sleeping on the net-tent floor, and I had just a slight incline, which meant a couple adjustments in the night.
Also new on this trip was my Cat Stove (http://coders-log.blogspot.com/2008/10/cat-stove.html), which worked pretty well. I had a simple menu. For each meal, I had some multi-grain pasta, some pre-cooked Bob Evans breakfast sausage, and some cheddar cheese. Fuel up the stove, pour in a cup of water (that's up to my first knuckle). Get the water boiling, then add the pasta, put the meat and cheese on top, cover and cook. Tasty, and provides some good energy for the trail.
The second day I started out at first light and headed up the hill, warming up quickly. I continued to see tracks from people ahead of me on the trail, but the only other people I met were a group of four on horseback going the other way. They had seen someone out who was on his ninth day on the trail.
I like Middle Fork Section. I did a hike with a friend on the Highway 21 to Devil's Tollgate section in August, and that was pretty dry and rocky, with some pretty aggressive climbing. A nice hike, don't get me wrong, but a lot more work. :) By contrast, Middle Fork is gravelly but not rocky, has plenty of water, and gentle grades throughout. The last climb before descending to Brushy Creek takes you up about 300 feet, but it's gentle enough that it's not a killer. I cooked and ate lunch at the bottom after crossing the creek, and that gave me enough energy to complete the hike.
Remember not to drink the water at Strother Creek. Check the map and fill up with water before getting there. It's not a terribly long stretch without water, but just in case.
This was also my first hike after trading in my New Balance trail running shoes for my Chaco Redrock shoes. I definitely like the Chacos. They're heavier, but don't show any deterioration after 25 miles on the trail, like the NBs did.
The hike ended at 2PM at Brushy Creek, which looks like a nice place. Friendly folks, and all that. Rested there and waited for my ride to pick me up. All in all, a very, very nice hike. I highly recommend this section as a starter hike, too. You can start at DD, and there's a trailhead at 12 miles, 20 miles, and 25 miles, so you can bug out early if you get in over your head. There are also numerous gravel road crossings, if it comes down to that.
Also note that cell phone coverage is very sparse out there, so it's a tenuous life-line, if that's what you're counting on.
If you're on Facebook, see the pictures here:
http://www.facebook.com/photos.php?id=1295841432#/album.php?aid=10650&id=1295841432
Tuesday, October 28, 2008
Installing OpenCV on Fedora 8
I've just finished installing and documenting this process on our company blog: OpenCV on Fedora 8