Tracking My Weight

Aug 13, 2010

I've been slightly overweight for quite a long time. Two months ago, I decided I would start tracking my weight daily, in an effort to motivate myself to shed a few pounds. Desiring a tool to make this easy, I immediately searched the Android marketplace and found Libra. This incredibly handy tool uses a weight trend line as described in the excellent book The Hacker's Diet.

Allow me to quickly talk about The Hacker's Diet. Written by John Walker, founder of Autodesk, this book tackles weight loss as an engineering problem. The author is funny, to the point, and provides a careful analysis of how weight loss works. The briefest summary: you will only lose weight by eating fewer calories than you need. Exercise won't do it (though it helps), and weird diets (Atkins, South Beach, et al.) won't do it either. Read the book for further discussion and analysis of this viewpoint. The author presents a pretty solid case that's hard to argue against. Best of all, the book is available for free as a PDF!

The trend line in a weight chart tells you where you're headed: gaining weight (line going up), maintaining it (horizontal), or losing it (line going down). With this simple tool, I was able to see in no time at all that my weight was climbing at an alarming rate. After waking up to my weight gain, I set a modest goal of losing 9 pounds (I was 9 pounds above the "overweight" line for someone my height).

After reading The Hacker's Diet, I made one simple change to my lifestyle: I altered how much I eat at each meal. I didn't change what I eat, only how much. And wow, what a difference that has made! Today, I weighed in at my goal weight for the very first time! Here's the proof:

As you can see from the chart, I started heading up, turned the corner, and have been headed down ever since. My trend line hasn't yet hit my target weight (as of today's measurement, it's scheduled to hit the target on August 21), but at least it's heading in the right direction. It was a great feeling to hit my target this morning. I'm looking forward to shedding a few more pounds and maintaining a healthier weight.

This weekend, for my mom's birthday, we took a trip over to Greensboro, NC to visit the Greensboro Historical Museum and the Guilford Courthouse Military Park. Having never visited Greensboro proper, we didn't really know what to expect from either.

The historical museum in Greensboro is way larger than it may look from the outside. We easily spent two hours wandering through the various exhibits, some of which are tremendously large. More time could easily be spent here; the rainy weather limited our outdoor experience (a few exhibits are outside the building). I was surprised to learn about the history of the area: a number of corporations were founded there, and several prominent events have occurred there over the years. Best of all, the visit is absolutely free! I came away from the museum very impressed. It easily rivals the state museums in Raleigh.

Guilford Courthouse, site of a pivotal battle in the Revolutionary War, is equally entertaining. Again, the rainy weather limited our outdoor activity at the park, but it should be noted that there are miles of hiking trails and a number of memorials around the park. The visitor center has an excellent 30-minute film describing the events of the battle. A number of artifacts from the battlefield are also on display: rifles, cannonballs, belt buckles; it's all here. The collection is truly gigantic. And the visit is completely free. This is a park I will definitely return to.

If you're ever in the Greensboro area, I highly recommend both destinations. Both provide a relaxing environment, and a historical perspective on the Piedmont region of North Carolina.

Useful Tool: ImgBurn

Jul 20, 2010

I needed a quick and easy way to burn an ISO image here at work, so I took a look around and found ImgBurn. A Windows-only app, it's small, easy to set up, and took no time at all to get working. The only annoyance was that the installer included an option to install an "Ask" toolbar in IE (along with a few other advertising options). Thankfully, you can disable them all at setup time.

Recommended LCDs?

Jul 12, 2010

Exactly five years ago today, I bought a used NEC 22" monitor for my personal computer at home. It has served me well for that time, but I've seen it act up a time or two recently. Seeing as LCD technology has progressed quite a bit over the past five years, I feel like it's finally time to bite the bullet and join the mainstream. As such, I'm starting the hunt for a new display. Here's what I want:

  • Real Estate: I run 1600 x 1200 at home, and I'd like to stay in that neighborhood.
  • Fast Response Times: The display would primarily be used for gaming, so fast response times are a requirement.
  • Vibrant Colors: Some LCD displays have pretty weak white-balance; I want something with nice color reproduction, since I'll also be doing occasional photo editing.

Does anyone here have any recommendations on brands or where to start looking? Is there a model or manufacturer you've been happy with? Any ideas would be appreciated!

On Friday afternoon, I finally upgraded my home system to Windows 7. Windows XP was feeling dated, and my old system had slowed to a crawl for unexplained reasons. I also figured it was time to upgrade to a 64-bit OS, so that's the version of 7 that I installed. Here are a few brief thoughts I've had on this new operating system:

New Task Bar
Interestingly enough, the steepest learning curve I've had with Windows 7 has been with the new task bar. I'm quite used to XP's task bar, complete with the quick launch toolbar. The new task bar in Windows 7 rolls these two toolbars into one, essentially combining currently running applications with 'pinned' applications. Also, by default, only program icons are displayed; window titles are no longer shown as part of each application's button. This new scheme is a little confusing at first, but I'm becoming accustomed to it.
Updated Start Menu
Microsoft finally got smart with the new start menu. No longer does it stretch to the top of the screen when you have a million applications installed. Instead, the "All Programs" menu simply transforms into a scrollable pane, showing the items available. This is a terrific UI change that should have been done at least 10 years ago.
Improved Speed
In the midst of going to Windows 7, I also made several hardware improvements. I upped my memory from 2 GB to 4 GB (I may go to 8 GB if 4 doesn't suffice), I am using a new brand of hard drive (Western Digital, instead of Seagate), and I added a new CPU heat sink. Since I updated a few hardware components, I'm not sure what really made the difference, but most of my applications now start noticeably faster than before. For example, iTunes starts nearly instantly, which blows the previous 15 to 20 second startup time out of the water. Games also start way faster, which is a plus. I love getting performance boosts like this; hopefully they will hold up over time.
Miscellaneous
There are other minor things that I find interesting about the Windows 7 experience:
  • Installation was amazingly fast, and I was only asked one or two questions.
  • Drivers thankfully haven't been an issue (so far).
  • The built-in zip file support has apparently been vastly improved; it's orders of magnitude faster than XP. I'm not sure I'm going to install WinZip seeing as the built-in support is so good.
  • The new virtualized volume control is epic; why wasn't it like this all along?

So far, I'm pleasantly surprised with Windows 7. Some of the new UI takes getting used to, but this looks like a positive step forward, both for Microsoft and for my home setup.

E3 2010

Jun 18, 2010

This year's E3 has come and gone, and I thought I'd post a few thoughts on various things introduced at the event. To make things easy, I'll organize things by platform.

PC Gaming

Portal 2
This may be the game I'm most excited about. Whereas the first Portal was an "experiment" of sorts, this second title looks to be a full-fledged game. The puzzles sound much more insidious (physics paint!), and the new milieu of the game looks incredible. Portions of the trailer I watched are very funny, as can be expected. And hey, it's Valve we're talking about here. This will definitely be a winner.
Rage
id Software's new intellectual property looks incredible. Part racer, part first-person shooter, this game looks like a boatload of fun. It's pretty, too, as expected with titles from id (humans still look a little too fake, however; they need to drop the 'bloated' look). I'll probably pick this one up when it's released.
Deus Ex: Human Revolution
If this game is as fun (and as deep) as the first one was, I'll definitely buy in. If it's as lame as the second one was reported to be, I'll skip it. Nevertheless, the trailer looks great.

Nintendo Wii

Lost in Shadow
This upcoming adventure game looks really impressive. You play as the shadow of a young boy, separated from him at the beginning of the game. The ultimate goal is to reach the top of a tower, where the boy is being held. But the twist here is that, as a shadow, you can only use other objects' shadows as platforms. Manipulating light in the environment looks like a large part of the puzzle mechanic. This is another very inventive title that looks promising.
Zelda: Skyward Sword
What's not to like about a new Zelda title?
Kirby's Epic Yarn
Kirby's Epic Yarn has an incredibly unique art design. This time around, Kirby is an outline of yarn, and moves through a similarly designed environment. I've seen plenty of comments around the web poking fun at the seemingly "gay" presentation of the trailer; but this looks like an inventive, fun game to me.
Donkey Kong Country Returns
I was a big fan of the Donkey Kong Country games back on the SNES, so I'm really looking forward to this one. Some of the older games were ridiculously difficult; hopefully some of that difficulty will be ported over. The graphics in this one look fantastic.
Epic Mickey
Mickey Mouse goes on an epic adventure, using various paints and paint thinners to modify and navigate the world. The fact that this game includes a Steamboat Willie level, complete with the old artwork style, is epic in itself.

Nintendo DS

Nintendo 3DS
The next iteration of Nintendo's hand-held looks interesting. I'd have to see the 3D effect in person to get a good feel for it, but all the press I've read has sounded promising. There are some neat-sounding titles coming for this new platform and, if they're fun enough, I may just have to upgrade.

Xbox 360

Kinect (AKA Project Natal)
I'm not exactly sure what to think about this. I've read in several places that Microsoft really butchered the unveiling of this tech, opting for 'family-friendly' titles similar to what's already on the Wii. That being said, Child of Eden looks like a phenomenal title that makes terrific use of the new technology. Only time will tell how this stuff works out. I think it's funny, however, that Sony and Microsoft are just now trying to catch up to Nintendo in motion control. Nintendo gets a lot of hate from the hard-core gaming community (a small portion of which is justified), but they're obviously doing something right; otherwise these companies wouldn't be entering this space.

I'm sure there are a few items I've missed in this rundown, but these are the ones that really caught my eye. For those of you who followed this year's event, what are you looking forward to?

Yesterday, I finally finished reading the Lord of the Rings series for the first time. I can finally scratch them off my list of shame! As I did for the previous two books, I thought I would provide some brief thoughts on each.

The Two Towers

I found it interesting how this volume told two stories in separate chunks (books 3 and 4), rather than interleaving them. The first book follows the adventures of Aragorn, Gimli, Legolas, Merry, Pippin, and Gandalf, from beginning to end. The second follows Sam, Frodo, and Gollum. In the movie adaptation of this book, the stories are intertwined, helping to remind the viewer that various events are happening in parallel. Telling each story in its entirety in the novel was much more rewarding from a reading perspective. I never lost track of what was going on during each story, and I found them that much more engaging. It's interesting that Peter Jackson decided to move the scene with Shelob into the third movie, since it really happens at the end of the second novel. Again, this was a top-notch novel, which I enjoyed cover to cover.

The Return of the King

To me, this book differs more from its movie adaptation than the previous two. In the book, the army of the dead is used to gain ships for Aragorn and company: nothing more. They are released from service after helping the company obtain these ships. In the movie, the dead travel with them and fight Sauron's army with the company. I think I prefer the novel's version here. Likewise, I prefer the ending of the novel over the movie. How could the film's writers have left out the scouring of the Shire? When Frodo and company return to the Shire, they find it in ruin. This was a key scene omitted from the movie, much to the movie's detriment, in my opinion. Novel for the win!

Now for a few final thoughts on the series as a whole:

  • It boggles my mind that Arwen is a bit character in the novels. Having seen the movies before reading the books, I guess my sense of her importance was skewed. She barely has any speaking lines in the books, and is left out of the second story altogether.
  • While I enjoy Peter Jackson's movie adaptations of these books, the novels (as usual) far exceed them. Key elements were left out of the films: interacting with Tom Bombadil, several scenes with the Ents, and the scouring of the Shire (along with the deaths of both Saruman and Wormtongue). I guess it's hard to beat a book.

Defining State Parks

Jun 4, 2010

While researching the North Carolina State Park System for my "visit and photograph every state park" project, I learned that there are far more state parks than I realized. My original list had 39 parks; the official list, as I eventually found on the NC parks website, lists 32 parks, 19 natural areas, and 4 recreation areas. Unfortunately, this list is only current as of January 1, 2007. As such, a few newer parks aren't listed, such as Grandfather Mountain and Chimney Rock (which is actually listed as Hickory Nut Gorge).

All of this got me thinking about what, for my purposes, constitutes a "state park." Not all of the official sites have public facilities or access. A number of the state natural areas are simply chunks of land set aside for preservation. Several areas are relatively new and haven't yet been developed. Some others aren't developed simply because of recent budget cuts and shortfalls.

These facts have all led me to the following decision: the "state parks" I will pursue in my visitation project will include those for which official attendance figures are kept. Attendance information is posted in each state park newsletter; it is from this source that I have pulled my park list. The result is 40 parks, which nearly agrees with my first list. I had omitted Grandfather Mountain in my first pass, simply because it only recently became a state park and wasn't yet listed on the official website.

I'm looking forward to visiting each park in the state. As of this writing, I've been to 13 parks, and have photographed 11. Plenty more to go!

I cannot recommend Process Explorer highly enough. This application from Sysinternals is essentially a replacement for the built-in Windows task manager. One small feature that turns out to be pretty useful is that each process is shown in the list with its associated icon. This makes tracking down a specific application really easy (especially those troublesome processes that don't terminate cleanly; Java, I'm looking at you). The other tremendously useful feature I enjoy is having a description and company name along with each process. Many processes have cryptic, 8-character names, and having the associated information to help identify them is a real time saver.

As I mentioned a while back, I've been wanting to discuss automatic dependency generation using GNU make and GNU gcc. This is something I just recently figured out, thanks to two helpful articles on the web. The following is a discussion of how it works. I'll be going through this material quickly, and I'll be doing as little hand-holding as possible, so hang on tight.

Let's start by looking at the final makefile:

SHELL = /bin/bash

ifndef BC
    BC=debug
endif

CC = g++
CFLAGS = -Wall
DEFINES = -DMY_SYMBOL
INCPATH = -I../some/path

ifeq ($(BC),debug)
    CFLAGS += -g3
else
    CFLAGS += -O2
endif

DEPDIR=$(BC)/deps
OBJDIR=$(BC)/objs

# Build a list of the object files to create, based on the .cpps we find
OTMP = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

# Build the final list of objects
OBJS = $(patsubst %,$(OBJDIR)/%,$(OTMP))

# Build a list of dependency files
DEPS = $(patsubst %.o,$(DEPDIR)/%.d,$(OTMP))

all: init $(OBJS)
    $(CC) -o My_Executable $(OBJS)

init:
    mkdir -p $(DEPDIR)
    mkdir -p $(OBJDIR)

# Pull in dependency info for our objects
-include $(DEPS)

# Compile and generate dependency info
# 1. Compile the .cpp file
# 2. Generate dependency information, explicitly specifying the target name
# 3. The final three lines do a little bit of sed magic. The following
#    sub-items all correspond to the single sed command below:
#    a. sed: Strip the target (everything before the colon)
#    b. sed: Remove any continuation backslashes
#    c. fmt -1: List words one per line
#    d. sed: Strip leading spaces
#    e. sed: Add trailing colons
$(OBJDIR)/%.o : %.cpp
    $(CC) $(DEFINES) $(CFLAGS) $(INCPATH) -o $@ -c $<
    $(CC) -MM -MT $(OBJDIR)/$*.o $(DEFINES) $(CFLAGS) $(INCPATH) \
        $*.cpp > $(DEPDIR)/$*.d
    @cp -f $(DEPDIR)/$*.d $(DEPDIR)/$*.d.tmp
    @sed -e 's/.*://' -e 's/\\$$//' < $(DEPDIR)/$*.d.tmp | fmt -1 | \
        sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPDIR)/$*.d
    @rm -f $(DEPDIR)/$*.d.tmp

clean:
    rm -fr debug/*
    rm -fr release/*

Let's blast through the first 20 lines of code real quick, seeing as this is all boring stuff. We first set our working shell to bash, which happens to be the shell I prefer (if you don't specify this, the shell defaults to 'sh'). Next, if the user didn't specify the BC environment variable (short for "Build Configuration"), we default it to a value of 'debug.' This is how I gate my build types in the real world; I pass it in as an environment variable. There are probably nicer ways of doing this, but I like the flexibility that an environment variable gives me. Next, we set up a bunch of common build variables (CC, CFLAGS, etc.), and we do some build configuration specific setup. Finally, we set our DEPDIR (dependency directory) and OBJDIR (object directory) variables. These will allow us to store our dependency and object files in separate locations, leaving our source directory nice and clean.
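
Before moving on, here's how the BC gating looks in practice (a quick usage sketch; any value other than 'debug' selects the optimized flags, and 'release' is the value the clean target expects):

make                # BC defaults to debug: -g3, output under debug/
BC=release make     # optimized build: -O2, output under release/
make BC=release     # same thing, passed on the command line instead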

Now we come to some code that I discussed in my last programming grab bag:

# Build a list of the object files to create, based on the .cpps we find
OTMP = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

# Build the final list of objects
OBJS = $(patsubst %,$(OBJDIR)/%,$(OTMP))

# Build a list of dependency files
DEPS = $(patsubst %.o,$(DEPDIR)/%.d,$(OTMP))

The OTMP variable is assigned a list of file names ending with the .o extension, all based on the .cpp files we found in the current directory. So, if our directory contained three files (a.cpp, b.cpp, c.cpp), the value of OTMP would end up being: a.o b.o c.o.

The OBJS variable modifies this list of object files, sticking the OBJDIR value on the front of each, resulting in our "final list" of object files. The DEPS variable is built the same way, except that we prepend the DEPDIR value and swap each .o extension for .d (giving us our final list of dependency files).
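
If you ever want to inspect these lists, a throwaway debugging target (hypothetical; not part of the makefile above) can echo them. With a.cpp, b.cpp, and c.cpp in the directory and BC left at its debug default, make show-lists would print the values shown in the comments:

# Hypothetical debugging target: print the computed file lists
show-lists:
    @echo "OTMP = $(OTMP)"   # a.o b.o c.o
    @echo "OBJS = $(OBJS)"   # debug/objs/a.o debug/objs/b.o debug/objs/c.o
    @echo "DEPS = $(DEPS)"   # debug/deps/a.d debug/deps/b.d debug/deps/c.d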

Next up is our first target, the all target. It depends on the init target (which is responsible for making sure that the DEPDIR and OBJDIR directories exist), as well as our list of object files that we created moments ago. The command in this target will link together the objects to form an executable, after all the objects have been built. The next line is very important:

# Pull in dependency info for our objects
-include $(DEPS)

This line tells make to include all of our dependency files. The minus sign at the front says, "if one of these files doesn't exist, don't complain about it." After all, if the dependency file doesn't exist, neither does the object file, so we'll be recreating both anyway. Let's take a quick look at one of the dependency files to see what they look like, and to understand the help they'll provide us:

objs/myfile.o: myfile.cpp myfile.h
myfile.cpp:
myfile.h:

In this example, our object file depends on two files: myfile.cpp and myfile.h. Note that, after the dependency list, each file is listed by itself as a rule with no dependencies. We do this to exploit a subtle feature of make:

If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then make imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run.

This feature will help us avoid the dreaded "no rule to make target" error, which is especially helpful if a file gets renamed during development. No longer will you have to make clean in order to pick up those kinds of changes; the dependency files will help make do that work for you!
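
Here's a standalone toy makefile that demonstrates the trick (hypothetical files; assume main.cpp starts out with #include "config.h", and that this hand-written rule mimics stale dependency info). Build once, rename config.h to settings.h, update the #include, and run make again: without the empty config.h rule, make aborts with the dreaded error; with it, main.o quietly rebuilds:

# Toy demo: main.o depends on a header we're about to rename
main.o: main.cpp config.h
    g++ -c main.cpp -o main.o

# Empty rule: if config.h goes missing, treat it as updated instead of
# failing with "No rule to make target 'config.h'"
config.h: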

Back in our makefile, the next giant block is where all the magic happens:

# Compile and generate dependency info
# 1. Compile the .cpp file
# 2. Generate dependency information, explicitly specifying the target name
# 3. The final three lines do a little bit of sed magic. The following
#    sub-items all correspond to the single sed command below:
#    a. sed: Strip the target (everything before the colon)
#    b. sed: Remove any continuation backslashes
#    c. fmt -1: List words one per line
#    d. sed: Strip leading spaces
#    e. sed: Add trailing colons
$(OBJDIR)/%.o : %.cpp
    $(CC) $(DEFINES) $(CFLAGS) $(INCPATH) -o $@ -c $<
    $(CC) -MM -MT $(OBJDIR)/$*.o $(DEFINES) $(CFLAGS) $(INCPATH) \
        $*.cpp > $(DEPDIR)/$*.d
    @cp -f $(DEPDIR)/$*.d $(DEPDIR)/$*.d.tmp
    @sed -e 's/.*://' -e 's/\\$$//' < $(DEPDIR)/$*.d.tmp | fmt -1 | \
        sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPDIR)/$*.d
    @rm -f $(DEPDIR)/$*.d.tmp

This block of code is commented, but I'll quickly rehash what's going on. The first command actually compiles the object file, while the second command generates the dependency file. We then use some sed magic to create the special rules in each dependency file.
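
Concretely, here's the before and after for the myfile example shown earlier. The g++ -MM -MT command writes only the first rule to the dependency file; the sed/fmt pipeline then appends one empty rule per prerequisite:

# myfile.d as generated by g++ -MM -MT objs/myfile.o:
objs/myfile.o: myfile.cpp myfile.h

# myfile.d after the sed/fmt lines append the empty rules:
objs/myfile.o: myfile.cpp myfile.h
myfile.cpp:
myfile.h: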

Though it's a lot to take in, these makefile tricks are handy to have in your toolbox. Letting make handle the dependency generation for you will save you a ton of time in the long run. It also helps when you're working with very large projects, as I do at work.

If you have a comment or question about this article, feel free to leave it below.

Oatmeal Raisin Cookies

Apr 29, 2010

This recipe comes from Quaker Oats (from the lid on their oatmeal containers, specifically). I've transcribed it here so I can remember it without having to keep an oatmeal lid lying around somewhere. Note that the cookie recipe on their website is slightly different from this one. These are incredibly delicious cookies!

  • 1/2 pound of margarine or butter, softened
  • 1 cup firmly packed brown sugar
  • 1/2 cup granulated sugar
  • 2 eggs
  • 1 teaspoon vanilla
  • 1-1/2 cups all-purpose flour
  • 1 teaspoon baking soda
  • 1 teaspoon cinnamon
  • 1/2 teaspoon salt (optional)
  • 3 cups oats, uncooked
  • 1 cup raisins

Beat together the margarine and sugars until creamy. Add eggs and vanilla, and beat well. Add combined flour, baking soda, cinnamon and salt; mix well. Stir in oats and raisins; mix well. Drop by rounded teaspoonfuls onto ungreased cookie sheet. Bake at 350 degrees for 10 to 12 minutes or until golden brown. Cool 1 minute on cookie sheet; remove to wire rack. Makes about 4 dozen.

One of the recent updates to Firebug broke a "feature" I used all the time: the ability to select a link with the element inspector, then edit that link's :hover pseudo-class style rules. Well, it turns out that, technically, that "feature" was a bug (though I might argue otherwise). In newer versions of Firebug, you have to:

  1. Click the element inspector
  2. Click the link you're interested in editing
  3. Select the :hover pseudo-class menu item in the Style tab's drop-down menu
  4. Edit the rule as you like

This new option allows you to "lock" the element in the :hover state, the usefulness of which I can understand. At the same time, it would be great to have an option (perhaps a hidden preference) to bring back the old behavior.

Yesterday's nightly Firefox build fixed bug 147777, in which the :visited CSS pseudo-class allowed web sites to detect which sites you have visited (essentially a privacy hole). Sid Stamm wrote a very interesting article on the problem, along with how the Firefox team decided to fix the issue. I recall being amazed at the simplicity of this privacy attack: no JavaScript was needed for a site to figure out where you had been. Several history sniffing websites are available if you're interested in seeing the hole in action.

Email Fixed

Apr 11, 2010

Just a quick note to let everyone know that contact via email should be back up and running here at the site. Comments are also a good way to get in contact with me (plus they benefit everyone).

There has been quite a bit of news recently on the escalating war of words between Adobe and Apple. For the uninformed, Apple has essentially said "no Flash, ever" for either the iPhone or iPad, and Adobe has been pretty upset (rightfully so, in my opinion). Adobe employees have publicly denounced Apple, and Apple has fired back. It's all been a sort of "playground dispute" so far.

Let me first say that I don't love either company; they both have pretty serious failings in my eyes. But, in the end, I despise Adobe much less than I do Apple, so I'd love to see Adobe come out on top if at all possible. It occurred to me just the other day how Adobe could "get back" at Apple for this latest Flash debacle.

Simply put: Adobe should drop all OS X support for all of their future products. "If your OS won't support our products, our products won't support your OS." Just think about it: most of the artsy folks in the world who use Adobe products use them on Apple-branded computers. Cutting them off might seriously impact Apple's new OS sales (and, admittedly, would probably hurt Adobe's bottom line, at least in the short term). But this seems like serious leverage to me. Granted, Apple's main revenue stream these days comes via the iPhone, but OS sales are still a vital piece of their puzzle. Putting the squeeze on a big vein like that might make Apple change its mind.

As this bickering continues, I can only hope that Android continues to grab market share. Could the iPhone vs. Android war turn into the Apple vs. IBM war from the 1980s? I can only hope so...

I've recently had a perfect storm of email woes here at this site. Last month, my email servers changed at DreamHost (for reasons I still don't fully understand), breaking all of my approved SSL certificates (not to mention my SMTP settings). Around the same time, I updated to Thunderbird 3.0 from 2.x. The new interface is bizarre, and I've had nothing but problems since day one of the upgrade. As such, I am now actively working towards moving all of Born Geek's email (including this website) to Gmail.

Unfortunately, someone is apparently squatting on my domain over at Google Apps. I attempted to reset the account password there, but no secondary email address is on record, making things much more difficult for me. I have started a manual password reset process (via proving to Google that I do indeed own the domain), and hope to have things up and running by this weekend.

Long story short, any direct emails sent to me through the contact form at this website may not be answered for a while. Please bear with me during this painful process.

Beautiful Evening

Apr 1, 2010

It's nights like this that make me glad to be alive.

It has once again been ages since the last programming grab bag article was published, so let's dive right into another one, shall we? This time around, we'll be looking at some simple tricks involving GNU make.

1. Let Make Construct Your Object List

One common inefficiency in many Makefiles I've seen is maintaining a manual list of the object files you are interested in building. Let's work with the following example makefile (I realize that this makefile has a number of design issues; it's a simple, contrived example for the sake of this discussion). Note the hand-maintained list of objects in the OBJS variable on the second line:

CFLAGS = -Wall
OBJS = class_a.o class_b.o my_helpers.o my_program.o

all: my_program

my_program: $(OBJS)
    gcc -o my_program $(OBJS)

class_a.o: class_a.cpp
    gcc $(CFLAGS) -c class_a.cpp

class_b.o: class_b.cpp
    gcc $(CFLAGS) -c class_b.cpp

my_helpers.o: my_helpers.cpp
    gcc $(CFLAGS) -c my_helpers.cpp

my_program.o: my_program.cpp
    gcc $(CFLAGS) -c my_program.cpp

For very small projects, maintaining a list like this is doable, even if it is a bother. For larger projects, this approach rarely works. Why not let make do all this work for us? It can generate our list of object files automatically from the .cpp files it finds. Here's how:

OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

We are using two built-in functions here: patsubst and wildcard. The first function will do a pattern substitution: the first parameter is the pattern to match, the second is the substitution, and the third is the text in which to do the substitution.

Note that, in our example, the third parameter to the patsubst function is a call to the wildcard function. A call to wildcard will return a space separated list of file names that match the given pattern (in our case, *.cpp). So the resulting string in our example would be: class_a.cpp class_b.cpp my_helpers.cpp my_program.cpp. Given this string, patsubst would change all .cpp instances to .o instead, giving us (at execution time): class_a.o class_b.o my_helpers.o my_program.o. This is exactly what we wanted!
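
If you'd like to verify this expansion yourself, GNU make's $(info) function (available in make 3.81 and later) prints text while the makefile is being parsed. A quick sketch:

# Temporary debugging lines: print the generated lists at parse time
SRCS = $(wildcard *.cpp)
OBJS = $(patsubst %.cpp,%.o,$(SRCS))
$(info SRCS = $(SRCS))
$(info OBJS = $(OBJS))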

The obvious benefit of this technique is that there's no need to maintain our list anymore; make will do it for us!

2a. Use Pattern Rules Where Possible

One other obvious problem in our example makefile above is that all the object targets are identical in nature (only the file names are different). We can solve this maintenance problem by writing a generic pattern rule:

%.o: %.cpp
    gcc -c $< -o $@

Pretty ugly syntax, huh? This rule allows us to build any foo.o from a corresponding foo.cpp file. Again, the % characters here are wildcards in the patterns to match. Note also that the command for this rule uses two special variables: $< and $@. The former corresponds to the name of the first prerequisite from the rule, while the latter corresponds to the file name of the target of this rule.
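
As a concrete example, when make applies this rule to build class_a.o from our earlier project, $< expands to class_a.cpp and $@ expands to class_a.o, so the command that actually runs is:

gcc -c class_a.cpp -o class_a.o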

Combining this pattern rule with the automatic list generation from tip #1 above results in the following updated version of our example makefile:

CFLAGS = -Wall
OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

all: my_program

my_program: $(OBJS)
    gcc -o my_program $(OBJS)

%.o: %.cpp
    gcc $(CFLAGS) -c $< -o $@

This is much more maintainable than our previous version, wouldn't you agree?

2b. Potential Problems With This Setup

Astute readers have undoubtedly noticed that my sample makefile has no header (.h) files specified as dependencies. In the real world, it's good to include them so that updates to said files will trigger a build when make is executed. Suppose that our example project had a header file named class_a.h. As the makefile is written now, if we update this header file and then call make, nothing will happen (we would have to make clean, then make again, to pick up the changes).
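
A manual stopgap does exist: a rule with prerequisites but no commands. Make merges its prerequisite list into the target, and the %.o: %.cpp pattern rule still supplies the compile command, so touching class_a.h now triggers a rebuild:

# No command here on purpose; the pattern rule provides the recipe
class_a.o: class_a.h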

Header file dependencies aren't likely to be a one-to-one mapping, though. Fortunately, we can get make to generate our dependencies for us automatically. Furthermore, we can get make to include those automatic dependencies at execution time, without any recursive calls! The process for doing this is beyond the scope of this article, but I will be writing an article on this very subject in the near future (so stay tuned).

3. Target-Specific Variables Can Help

Suppose that we want to build a debug version of our program using a target. Wouldn't it be nice to be able to modify some of our variable values for that specific target? Well, it turns out that we can do just that. Here's how (note the two new debug lines):

CFLAGS = -Wall
OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

all: my_program

debug: CFLAGS += -g3
debug: my_program

my_program: $(OBJS)
    gcc -o my_program $(OBJS)

%.o: %.cpp
    gcc $(CFLAGS) -c $< -o $@

In this example, when we type make debug from the command line, our CFLAGS variable will have the appropriate debug option appended (in this case, -g3), and then the program will be built using the specified dependencies. Being able to override variables in this manner can be quite useful in the right situations.
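
Usage is then just a matter of naming the target:

make           # compiles everything with -Wall
make debug     # CFLAGS expands to "-Wall -g3" for the whole build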

Do you have your own make tips? If so, leave a comment! I'll be posting more about doing automatic dependency generation with make and gcc in the near future.

Motorola Droid Review

Mar 21, 2010

Back in December of last year, I made the decision to ditch my land-line telephone and go wireless only. I decided to pick up a smart phone, and chose the Motorola Droid: both because of the Verizon network (with which I was relatively happy) and because it wasn't an iPhone. Now that I've had an opportunity to play with it for a few months, I'd like to share some thoughts on the device.

Droid Hardware

Seeing as this is my first smart phone experience, I don't have anything else to compare it to, but the hardware is solid. It feels well built, looks nice (in a utilitarian sort of way), and works very well. The phone is heavy, which can be a minor annoyance. I like the fact that I can use either a physical or virtual keyboard, though the physical keyboard is a bit tight. Oddly enough, I find myself switching between the keyboards pretty frequently; sometimes I'll use the physical keyboard, while other times I'll use the virtual one. Automatic word prediction, a feature I enjoy using, only works with the virtual keyboard, which probably explains why I bounce between the two (depending on how much I need to type).

The external speaker sounds great, which is a plus when I use the speaker phone. Equally as good is the display, which has vivid colors and incredibly crisp text. The touch screen is decent, though I can't help but feel that it's not quite as good as it should be. I sometimes have a tough time getting it to pick out exactly where to click, especially on crowded web page designs. Scrolling can occasionally feel laggy, but it has a nice inertia to it, which I appreciate (I hear that Android 2.1, which has yet to be released as of this writing, improves scrolling responsiveness). Fingerprints are obviously an issue on a touch screen, and a minor annoyance, but I've learned to live with them. Storing the phone in my pocket surprisingly helps keep the screen clean!

The battery has been nothing but a problem since the day I got this phone. Sometimes, I can get two or even three days of battery life out of a single charge (depending on my usage), while other times I get less than a single day. Occasionally, the battery will drain itself for no apparent reason. Several software bugs involving the battery are also still lurking in the OS; the charge indicator will report values of 5% or 15% at random times. Plug the phone in, and the indicator resets itself. Hopefully this problem will be worked out in Android 2.1.

Wireless reception is a mixed bag. Signal strength is terrific in the car and outside. In my house, I get decent signal (between 2 and 3 bars). At work, and in many big-box stores, I get absolutely no signal whatsoever. My signal at work is so bad that I essentially can't answer calls without walking out of the building (and I sit so far from a door that I can't make it outside before my voice-mail picks up the call). This annoys some people to no end, but I don't know of a decent way to deal with the problem, short of getting a new phone number via Google Voice, a solution I'm not terribly interested in.

Wi-Fi support is terrific, as is the built-in GPS (which I'll come back to in a moment). Rounding out the hardware is the camera, which is nice, but something I haven't made much use of. The 5 megapixel resolution is a bit much in my opinion, as the resulting images are too large for uploading to the web (I had to grab a free app to resize images for posting purposes).

GPS Navigation

The Droid comes with free turn-by-turn navigation via Google Maps. This is my first experience with a GPS navigation device and I absolutely love it. Google Maps navigation has been updated several times since I got the phone, with some terrific new features, including pinch-to-zoom, improved search results, and more. Navigating at night is convenient with a night-mode (on-screen graphics are dimmed to avoid being so bright), and drive time estimations are typically quite accurate. Being able to get real-time traffic reports is another great feature that has come in handy a time or two. The navigation software will even reroute your directions if you drive off course, which can be quite handy in the right situations (a missed turn, for example). I picked up a weighted, friction dash mount for using it in the car (I didn't want a suction cup on the windshield), and so far so good.

Software - Android OS

I'm pleased with the Android operating system. User actions have a nice feel to them, and I think the layout is clean and efficient. Again, I have nothing else to really compare this to. Changing the phone's settings can be a bit of a bother. There are loads of menus and sub-menus, and it's hard to remember where certain settings are stored. There are places here where some additional polish would be welcome. For example, it's super easy to set my phone to silent mode on the unlock screen, but when I'm actually using the phone, I have to navigate through several menus of options to make that change. This kind of inconsistency, especially for something as common as a silent mode switch, is bizarre.

As a developer, I'm impressed with how Android works behind the scenes. Applications are actually collections of code chunks; there's no "main" function like in a typical computer program. I won't go into why this is the case here, but suffice it to say that these design decisions make for some interesting capabilities.

Software - Apps

A number of terrific applications are available for Android, though the library is currently much smaller than the iPhone's (to be fair, the iPhone has been out for far longer). I routinely use several apps:

  • WeatherBug for forecasts (this app has the cleanest look of all the weather apps I've tried, which is saying a lot, considering how terrible the WeatherBug website is).
  • Either Twidroid or Seesmic for Twitter access, depending on my mood.
  • Shopping List Plus for grocery lists. Not a perfect app, but it lets me say so long to pen and paper!
  • Wapedia for accessing Wikipedia articles (I can't find a decent link for the app itself).

Concluding Thoughts

So far, I'm happy with the decision to switch to wireless only. Although I occasionally miss the clarity of a land-line telephone, I find this wireless phone is good enough for the little amount of talking I do. Having a phone that's capable of doing other tasks for me (managing my calendar, my shopping list, catching up on Twitter, etc.) is great, and I don't see myself going back.

I may or may not have mentioned before that I have a goal of visiting and photographing every state park in North Carolina. As a precursor to setting out on that goal, I have created a map of state park locations. Each location uses GPS coordinates provided by the state park service. Now that I have a GPS device that uses Google Maps (a Motorola Droid; review coming soon!), I figured this would be a terrific way to make it easy for me to get driving directions to certain locations.

While looking through all of the official state park pages, I learned a number of interesting facts:

  • Of the 39 parks in the state, four require entrance fees: Jordan Lake, Kerr Lake, Falls Lake, and Chimney Rock.
  • Two state parks do not have public access or public facilities at this time: Mayo River State Park and Haw River State Park.
  • One state park can only be accessed by taking a ferry: Hammocks Beach State Park.

I'm currently using the map's location markers to keep track of where I've been. However, the map is publicly available, so feel free to use it to navigate to any of the state's parks. If you have any suggestions on how the map could be improved, leave a comment. I'd like for this to be a helpful resource for people.