On Friday afternoon, I finally upgraded my home system to Windows 7. Windows XP was feeling dated, and my old system had slowed to a crawl for unexplained reasons. I also figured it was time to upgrade to a 64-bit OS, so that's the version of 7 that I installed. Here are a few brief thoughts I've had on this new operating system:

New Task Bar
Interestingly enough, the steepest learning curve I've had with Windows 7 has been with the new task bar. I'm quite used to XP's task bar, complete with the quick launch toolbar. The new task bar in Windows 7 rolls these two toolbars into one, essentially combining currently running applications with 'pinned' applications. Also, by default, only program icons are displayed; none of the window titles are shown as part of each process's button. This new scheme is a little confusing at first, but I'm becoming accustomed to it.
Updated Start Menu
Microsoft finally got smart with the new start menu. No longer does it stretch to the top of the screen when you have a million applications installed. Instead, the "All Programs" menu simply transforms into a scrollable pane, showing the items available. This is a terrific UI change that should have been done at least 10 years ago.
Improved Speed
In the midst of going to Windows 7, I also made several hardware improvements. I upped my memory from 2 GB to 4 GB (I may go to 8 GB if 4 doesn't suffice), I am using a new brand of hard drive (Western Digital, instead of Seagate), and I added a new CPU heat sink. Since I updated a few hardware components, I'm not sure what really made the difference, but most of my applications now start noticeably faster than before. For example, iTunes starts nearly instantly, which blows the previous 15 to 20 second startup time out of the water. Games also start way faster, which is a plus. I love getting performance boosts like this; hopefully they will hold up over time.
Miscellaneous
There are other minor things that I find interesting about the Windows 7 experience:
  • Installation was amazingly fast, and I was only asked one or two questions.
  • Drivers thankfully haven't been an issue (so far).
  • The built-in zip file support has apparently been vastly improved; it's orders of magnitude faster than it was in XP. I'm not sure I'm going to install WinZip, seeing as the built-in support is so good.
  • The new virtualized volume control is epic; why wasn't it like this all along?

So far, I'm pleasantly surprised with Windows 7. Some of the new UI takes getting used to, but this looks like a positive step forward, both for Microsoft and for my home setup.

E3 2010

Jun 18, 2010

This year's E3 has come and gone, and I thought I'd post a few thoughts on various things introduced at the event. To make things easy, I'll organize things by platform.

PC Gaming

Portal 2
This may be the game I'm most excited about. Whereas the first Portal was an "experiment" of sorts, this second title looks to be a full-fledged game. The puzzles sound much more insidious (physics paint!), and the new milieu of the game looks incredible. Portions of the trailer I watched are very funny, as can be expected. And hey, it's Valve we're talking about here. This will definitely be a winner.
Rage
id Software's new intellectual property looks incredible. Part racer, part first-person shooter, this game looks like a boatload of fun. It's pretty, too, as expected with titles from id (humans still look a little too fake, however; they need to drop the 'bloated' look). I'll probably pick this one up when it's released.
Deus Ex: Human Revolution
If this game is as fun (and as deep) as the first one was, I'll definitely buy in. If it's as lame as the second one was reported to be, I'll skip it. Nevertheless, the trailer looks great.

Nintendo Wii

Lost in Shadow
This upcoming adventure game looks really impressive. You play as the shadow of a young boy, separated from him at the beginning of the game. The ultimate goal is to reach the top of a tower, where the boy is being held. But the twist here is that, as a shadow, you can only use other objects' shadows as platforms. Manipulating light in the environment looks like a large part of the puzzle mechanic. This is another very inventive title that looks promising.
Zelda: Skyward Sword
What's not to like about a new Zelda title?
Kirby's Epic Yarn
Kirby's Epic Yarn has an incredibly unique art design. This time around, Kirby is an outline of yarn, and moves through a similarly designed environment. I've seen plenty of comments around the web poking fun at the seemingly "gay" presentation of the trailer; but this looks like an inventive, fun game to me.
Donkey Kong Country Returns
I was a big fan of the Donkey Kong Country games back on the SNES, so I'm really looking forward to this one. Some of the older games were ridiculously difficult; hopefully some of that difficulty will be ported over. The graphics in this one look fantastic.
Epic Mickey
Mickey Mouse goes on an epic adventure, using various paints and paint thinners to modify and navigate the world. The fact that this game includes a Steamboat Willie level, complete with the old artwork style, is epic in itself.

Nintendo DS

Nintendo 3DS
The next iteration of Nintendo's hand-held looks interesting. I'd have to see the 3D effect in person to get a good feel for it, but all the press I've read has sounded promising. There are some neat sounding titles coming for this new platform and, if they're fun enough, I may just have to upgrade.

Xbox 360

Kinect (AKA Project Natal)
I'm not exactly sure what to think about this. I've read in several places that Microsoft really butchered the unveiling of this tech, opting for 'family-friendly' titles similar to what's already on the Wii. That being said, Child of Eden looks like a phenomenal title that makes terrific use of the new technology. Only time will tell how this stuff works out. I think it's funny, however, that Sony and Microsoft are just now trying to catch up to Nintendo in motion control. Nintendo gets a lot of hate from the hard-core gaming community (a small portion of which is justified), but they're obviously doing something right; otherwise these companies wouldn't be entering this space.

I'm sure there are a few items I've missed in this rundown, but these are the ones that really caught my eye. For those of you who followed this year's event, what are you looking forward to?

Yesterday, I finally finished reading the Lord of the Rings series for the first time. I can finally scratch them off my list of shame! As I did for the previous two books, I thought I would provide some brief thoughts on each.

The Two Towers

I found it interesting how this volume told two stories in separate chunks (books 3 and 4), rather than interleaving them. The first book follows the adventures of Aragorn, Gimli, Legolas, Merry, Pippin, and Gandalf, from beginning to end. The second follows Sam, Frodo, and Gollum. In the movie adaptation of this book, the stories are intertwined, helping to remind the viewer that various events are happening in parallel. Telling each story in its entirety in the novel was much more rewarding from a reading perspective. I never lost track of what was going on during each story, and I found them that much more engaging. It's interesting that Peter Jackson decided to move the scene with Shelob into the third movie, since it really happens at the end of the second novel. Again, this was a top-notch novel, which I enjoyed cover to cover.

The Return of the King

To me, this book differs more from its movie adaptation than the previous two. In the book, the army of the dead is used to gain ships for Aragorn and company: nothing more. They are released from service after helping the company obtain these ships. In the movie, the dead travel with them and fight Sauron's army with the company. I think I prefer the novel's version here. Likewise, I prefer the ending of the novel over the movie. How could the film's writers have left out the scouring of the Shire? When Frodo and company return to the Shire, they find it in ruin. This was a key scene omitted from the movie, much to the movie's detriment, in my opinion. Novel for the win!

Now for a few final thoughts on the series as a whole:

  • It boggles my mind that Arwen is a bit character in the novels. Having seen the movies before reading the books, I guess my sense of her importance was skewed. She barely has any speaking lines in the books, and is left out of the second story altogether.
  • While I enjoy Peter Jackson's movie adaptations of these books, the novels (as usual) far exceed them. Key elements were left out of the films: interacting with Tom Bombadil, several scenes with the Ents, and the scouring of the Shire (along with the deaths of both Saruman and Wormtongue). I guess it's hard to beat a book.

Defining State Parks

Jun 4, 2010

While researching the North Carolina State Park System for my "visit and photograph every state park" project, I learned that there are far more state parks than I realized. My original list had 39 parks; the official list, which I eventually found on the NC parks website, includes 32 parks, 19 natural areas, and 4 recreation areas. Unfortunately, this list is only current as of January 1, 2007. As such, a few newer parks aren't listed, such as Grandfather Mountain and Chimney Rock (which is actually listed as Hickory Nut Gorge).

All of this got me thinking about what, for my purposes, constitutes a "state park." Not all of the official sites have public facilities or access. A number of the state natural areas are simply chunks of land set aside for preservation. Several areas are relatively new and haven't yet been developed. Some others aren't developed simply because of recent budget cuts and shortfalls.

These facts have all led me to the following decision: the "state parks" I will pursue in my visitation project will include those for which official attendance figures are kept. Attendance information is posted in each state park newsletter; it is from this source that I have pulled my park list. The result is 40 parks, which nearly agrees with my first list. I had omitted Grandfather Mountain in my first pass, simply because it only recently became a state park, and wasn't listed on the official website until very recently.

I'm looking forward to visiting each park in the state. As of this writing, I've been to 13 parks, and have photographed 11. Plenty more to go!

I cannot recommend Process Explorer highly enough. This application from Sysinternals is essentially a replacement for the built-in Windows task manager. One small feature that turns out to be pretty useful is that each process is shown in the list with its associated icon. This makes tracking down a specific application really easy (especially those troublesome processes that don't terminate cleanly; Java, I'm looking at you). The other tremendously useful feature I enjoy is having a description and company name along with each process. Many processes have cryptic, 8-character names, and having the associated information to help identify them is a real time saver.

As I mentioned a while back, I've been wanting to discuss automatic dependency generation using GNU make and GNU gcc. This is something I just recently figured out, thanks to two helpful articles on the web. The following is a discussion of how it works. I'll be going through this material quickly, and I'll be doing as little hand-holding as possible, so hang on tight.

Let's start by looking at the final makefile:

SHELL = /bin/bash

ifndef BC
    BC=debug
endif

CC = g++
CFLAGS = -Wall
DEFINES = -DMY_SYMBOL
INCPATH = -I../some/path

ifeq ($(BC),debug)
    CFLAGS += -g3
else
    CFLAGS += -O2
endif

DEPDIR=$(BC)/deps
OBJDIR=$(BC)/objs

# Build a list of the object files to create, based on the .cpps we find
OTMP = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

# Build the final list of objects
OBJS = $(patsubst %,$(OBJDIR)/%,$(OTMP))

# Build a list of dependency files
DEPS = $(patsubst %.o,$(DEPDIR)/%.d,$(OTMP))

all: init $(OBJS)
    $(CC) -o My_Executable $(OBJS)

init:
    mkdir -p $(DEPDIR)
    mkdir -p $(OBJDIR)

# Pull in dependency info for our objects
-include $(DEPS)

# Compile and generate dependency info
# 1. Compile the .cpp file
# 2. Generate dependency information, explicitly specifying the target name
# 3. The final three lines do a little bit of sed magic. The following
#    sub-items all correspond to the single sed command below:
#    a. sed: Strip the target (everything before the colon)
#    b. sed: Remove any continuation backslashes
#    c. fmt -1: List words one per line
#    d. sed: Strip leading spaces
#    e. sed: Add trailing colons
$(OBJDIR)/%.o : %.cpp
    $(CC) $(DEFINES) $(CFLAGS) $(INCPATH) -o $@ -c $<
    $(CC) -MM -MT $(OBJDIR)/$*.o $(DEFINES) $(CFLAGS) $(INCPATH) \
        $*.cpp > $(DEPDIR)/$*.d
    @cp -f $(DEPDIR)/$*.d $(DEPDIR)/$*.d.tmp
    @sed -e 's/.*://' -e 's/\\$$//' < $(DEPDIR)/$*.d.tmp | fmt -1 | \
        sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPDIR)/$*.d
    @rm -f $(DEPDIR)/$*.d.tmp

clean:
    rm -fr debug/*
    rm -fr release/*

Let's blast through the first 20 lines of code real quick, seeing as this is all boring stuff. We first set our working shell to bash, which happens to be the shell I prefer (if you don't specify this, the shell defaults to 'sh'). Next, if the user didn't specify the BC environment variable (short for "Build Configuration"), we default it to a value of 'debug.' This is how I gate my build types in the real world; I pass it in as an environment variable. There are probably nicer ways of doing this, but I like the flexibility that an environment variable gives me. Next, we set up a bunch of common build variables (CC, CFLAGS, etc.), and we do some build configuration specific setup. Finally, we set our DEPDIR (dependency directory) and OBJDIR (object directory) variables. These will allow us to store our dependency and object files in separate locations, leaving our source directory nice and clean.
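Before moving on, here's a quick sketch of how a build gets kicked off under this scheme (BC can be supplied either as an environment variable or directly on the make command line):

# Debug build; BC defaults to 'debug' when unset
make

# Release build, passing BC on the command line
make BC=release

# Release build, passing BC as an environment variable
BC=release make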

Now we come to some code that I discussed in my last programming grab bag:

# Build a list of the object files to create, based on the .cpps we find
OTMP = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

# Build the final list of objects
OBJS = $(patsubst %,$(OBJDIR)/%,$(OTMP))

# Build a list of dependency files
DEPS = $(patsubst %.o,$(DEPDIR)/%.d,$(OTMP))

The OTMP variable is assigned a list of file names ending with the .o extension, all based on the .cpp files we found in the current directory. So, if our directory contained three files (a.cpp, b.cpp, c.cpp), the value of OTMP would end up being: a.o b.o c.o.

The OBJS variable modifies this list of object files, sticking the OBJDIR value on the front of each, resulting in our "final list" of object files. The DEPS variable does the same kind of thing, swapping each .o extension for .d and prepending the DEPDIR value, giving us our final list of dependency files.
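To make this concrete, suppose again that our directory contains a.cpp, b.cpp, and c.cpp, and that BC has its default value of 'debug'. The three variables would expand as follows:

OTMP = a.o b.o c.o
OBJS = debug/objs/a.o debug/objs/b.o debug/objs/c.o
DEPS = debug/deps/a.d debug/deps/b.d debug/deps/c.d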

Next up is our first target, the all target. It depends on the init target (which is responsible for making sure that the DEPDIR and OBJDIR directories exist), as well as our list of object files that we created moments ago. The command in this target will link together the objects to form an executable, after all the objects have been built. The next line is very important:

# Pull in dependency info for our objects
-include $(DEPS)

This line tells make to include all of our dependency files. The minus sign at the front says, "if one of these files doesn't exist, don't complain about it." After all, if the dependency file doesn't exist, neither does the object file, so we'll be recreating both anyway. Let's take a quick look at one of the dependency files to see what they look like, and to understand the help they'll provide us:

objs/myfile.o: myfile.cpp myfile.h
myfile.cpp:
myfile.h:

In this example, our object file depends on two files: myfile.cpp and myfile.h. Note that, after the dependency list, each file is listed by itself as a rule with no dependencies. We do this to exploit a subtle feature of make:

If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then make imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run.

This feature will help us avoid the dreaded "no rule to make target" error, which is especially helpful if a file gets renamed during development. No longer will you have to make clean in order to pick up those kinds of changes; the dependency files will help make do that work for you!

Back in our makefile, the next giant block is where all the magic happens:

# Compile and generate dependency info
# 1. Compile the .cpp file
# 2. Generate dependency information, explicitly specifying the target name
# 3. The final three lines do a little bit of sed magic. The following
#    sub-items all correspond to the single sed command below:
#    a. sed: Strip the target (everything before the colon)
#    b. sed: Remove any continuation backslashes
#    c. fmt -1: List words one per line
#    d. sed: Strip leading spaces
#    e. sed: Add trailing colons
$(OBJDIR)/%.o : %.cpp
    $(CC) $(DEFINES) $(CFLAGS) $(INCPATH) -o $@ -c $<
    $(CC) -MM -MT $(OBJDIR)/$*.o $(DEFINES) $(CFLAGS) $(INCPATH) \
        $*.cpp > $(DEPDIR)/$*.d
    @cp -f $(DEPDIR)/$*.d $(DEPDIR)/$*.d.tmp
    @sed -e 's/.*://' -e 's/\\$$//' < $(DEPDIR)/$*.d.tmp | fmt -1 | \
        sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPDIR)/$*.d
    @rm -f $(DEPDIR)/$*.d.tmp

This block of code is commented, but I'll quickly rehash what's going on. The first command actually compiles the object file, while the second command generates the dependency file. We then use some sed magic to create the special rules in each dependency file.
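To illustrate with a hypothetical example, suppose myfile.cpp includes myfile.h and helper.h (made-up names, obviously). The gcc -MM command in step 2 would write something like the following to debug/deps/myfile.d (gcc wraps long prerequisite lists with continuation backslashes, which is why the sed step strips them):

debug/objs/myfile.o: myfile.cpp myfile.h \
    helper.h

The sed pipeline then appends the empty rules, leaving the finished dependency file looking like this:

debug/objs/myfile.o: myfile.cpp myfile.h \
    helper.h
myfile.cpp:
myfile.h:
helper.h: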

Though it's a lot to take in, these makefile tricks are handy to have in your toolbox. Letting make handle the dependency generation for you will save you a ton of time in the long run. It also helps when you're working with very large projects, as I do at work.

If you have a question about this article, feel free to leave a comment.

Oatmeal Raisin Cookies

Apr 29, 2010

This recipe comes from Quaker Oats (from the lid on their oatmeal containers, specifically). I've transcribed it here so I can remember it without having to keep an oatmeal lid lying around somewhere. Note that the cookie recipe on their website is slightly different from this one. These are incredibly delicious cookies!

  • 1/2 pound of margarine or butter, softened
  • 1 cup firmly packed brown sugar
  • 1/2 cup granulated sugar
  • 2 eggs
  • 1 teaspoon vanilla
  • 1-1/2 cups all-purpose flour
  • 1 teaspoon baking soda
  • 1 teaspoon cinnamon
  • 1/2 teaspoon salt (optional)
  • 3 cups oats, uncooked
  • 1 cup raisins

Beat together the margarine and sugars until creamy. Add eggs and vanilla, and beat well. Add combined flour, baking soda, cinnamon and salt; mix well. Stir in oats and raisins; mix well. Drop by rounded teaspoonfuls onto ungreased cookie sheet. Bake at 350 degrees for 10 to 12 minutes or until golden brown. Cool 1 minute on cookie sheet; remove to wire rack. Makes about 4 dozen.

One of the recent updates to Firebug broke a "feature" I used all the time: the ability to select a link with the element inspector, then edit that link's :hover pseudo-class style rules. Well, it turns out that technically, that "feature" was a bug (though I might argue against that classification). In newer versions of Firebug, you have to:

  1. Click the element inspector
  2. Click the link you're interested in editing
  3. Select the :hover pseudo-class menu item in the Style tab's drop-down menu
  4. Edit the rule as you like

This new option allows you to "lock" the element in the :hover state, the usefulness of which I can understand. At the same time, it would be great to have an option (perhaps a hidden preference) to bring back the old behavior.

Yesterday's nightly Firefox build fixed bug 147777, in which the :visited CSS pseudo-class allowed websites to detect which sites you have visited (essentially a privacy hole). Sid Stamm wrote a very interesting article on the problem, along with how the Firefox team decided to fix the issue. I recall being amazed at the simplicity of this privacy attack: no JavaScript was needed for a site to figure out where you had been. Several history sniffing websites are available if you're interested in seeing the hole in action.

Email Fixed

Apr 11, 2010

Just a quick note to let everyone know that contact via email should be back up and running here at the site. Comments are also a good way to get in contact with me (plus they benefit everyone).

There has been quite a bit of news recently on the escalating war of words between Adobe and Apple. For the uninformed, Apple has essentially said "no Flash, ever" for either the iPhone or iPad, and Adobe has been pretty upset (rightfully so, in my opinion). Adobe employees have publicly denounced Apple, and Apple has fired back. It's all been a sort of "playground dispute" so far.

Let me first say that I don't love either company; they both have pretty serious failings in my eyes. But, in the end, I despise Adobe much less than I do Apple, so I'd love to see Adobe come out on top if at all possible. It occurred to me just the other day how Adobe could "get back" at Apple for this latest Flash debacle.

Simply put: Adobe should drop all OS X support for all of their future products. "If your OS won't support our products, our products won't support your OS." Just think about it: a huge share of the artsy folks in the world who use Adobe products use them on Apple-branded computers. Cutting them off might seriously impact Apple's new OS sales (and, admittedly, would probably hurt Adobe's bottom line, at least in the short term). But this seems like serious leverage to me. Granted, Apple's main revenue stream these days comes via the iPhone, but OS sales are still a vital piece of their puzzle. Putting the squeeze on a big vein like that might make Apple change its mind.

As this bickering continues, I can only hope that Android continues to grab market share. Could the iPhone vs. Android war turn into the Apple vs. IBM war from the 1980s? I can only hope so...

I've recently had a perfect storm of email woes here at this site. Last month, my email servers changed at DreamHost (for reasons I still don't fully understand), breaking all of my accepted SSL certificates (not to mention my SMTP settings). Around the same time, I updated to Thunderbird 3.0 from 2.x. The new interface is bizarre, and I've only had problems from day one of the upgrade. As such, I am now actively working towards moving all of Born Geek's email (including this website) to GMail.

Unfortunately, someone is apparently squatting on my domain over at Google Apps. I attempted to reset the account password there, but no secondary email address is on record, making things much more difficult for me. I have started a manual password reset process (via proving to Google that I do indeed own the domain), and hope to have things up and running by this weekend.

Long story short, any direct emails sent to me through the contact form at this website may not be answered for a while. Please bear with me during this painful process.

Beautiful Evening

Apr 1, 2010

It's nights like this that make me glad to be alive.

It has once again been ages since the last programming grab bag article was published, so let's dive right into another one, shall we? This time around, we'll be looking at some simple tricks involving GNU make.

1. Let Make Construct Your Object List

One common inefficiency in many Makefiles I've seen is having a manual list of the object files you are interested in building. Let's work with the following example makefile (I realize that this makefile has a number of design issues; it's a simple, contrived example for the sake of this discussion). Note the manually maintained list of objects on the second line:

CFLAGS = -Wall
OBJS = class_a.o class_b.o my_helpers.o my_program.o

all: my_program

my_program: $(OBJS)
    gcc -o my_program $(OBJS)

class_a.o: class_a.cpp
    gcc $(CFLAGS) -c class_a.cpp

class_b.o: class_b.cpp
    gcc $(CFLAGS) -c class_b.cpp

my_helpers.o: my_helpers.cpp
    gcc $(CFLAGS) -c my_helpers.cpp

my_program.o: my_program.cpp
    gcc $(CFLAGS) -c my_program.cpp

For very small projects, maintaining a list like this is doable, even if it is a bother. When considering larger projects, this approach rarely works. Why not let make do all this work for us? It can generate our list of object files automatically from the cpp files it finds. Here's how:

OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

We are using two built-in functions here: patsubst and wildcard. The first function will do a pattern substitution: the first parameter is the pattern to match, the second is the substitution, and the third is the text in which to do the substitution.

Note that, in our example, the third parameter to the patsubst function is a call to the wildcard function. A call to wildcard will return a space separated list of file names that match the given pattern (in our case, *.cpp). So the resulting string in our example would be: class_a.cpp class_b.cpp my_helpers.cpp my_program.cpp. Given this string, patsubst would change all .cpp instances to .o instead, giving us (at execution time): class_a.o class_b.o my_helpers.o my_program.o. This is exactly what we wanted!

The obvious benefit of this technique is that there's no need to maintain our list anymore; make will do it for us!

2a. Use Pattern Rules Where Possible

One other obvious problem in our example makefile above is that all the object targets are identical in nature (only the file names are different). We can solve this maintenance problem by writing a generic pattern rule:

%.o: %.cpp
    gcc -c $< -o $@

Pretty ugly syntax, huh? This rule allows us to build any foo.o from a corresponding foo.cpp file. Again, the % characters here are wildcards in the patterns to match. Note also that the command for this rule uses two special variables: $< and $@. The former corresponds to the name of the first prerequisite from the rule, while the latter corresponds to the file name of the target of this rule.
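To see the rule in action: when make builds class_a.o, the automatic variables expand and the command that actually runs is simply this:

gcc -c class_a.cpp -o class_a.o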

Combining this pattern rule with the automatic list generation from tip #1 above results in the following updated version of our example makefile:

CFLAGS = -Wall
OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

all: my_program

my_program: $(OBJS)
    gcc -o my_program $(OBJS)

%.o: %.cpp
    gcc $(CFLAGS) -c $< -o $@

This is much more maintainable than our previous version, wouldn't you agree?

2b. Potential Problems With This Setup

Astute readers have undoubtedly noticed that my sample makefile has no header (.h) files specified as dependencies. In the real world, it's good to include them so that updates to said files will trigger a build when make is executed. Suppose that our example project had a header file named class_a.h. As the makefile is written now, if we update this header file and then call make, nothing will happen (we would have to make clean, then make again, to pick up the changes).
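One manual stopgap is to declare the extra prerequisite yourself. A rule with no command merely adds to a target's dependency list, and the pattern rule still supplies the compile command:

# class_a.o now also depends on class_a.h; the %.o: %.cpp
# pattern rule above still provides the actual command
class_a.o: class_a.h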

Header file dependencies aren't likely to be a one-to-one mapping, however, and maintaining these extra rules by hand quickly becomes a chore. Fortunately, we can get make to automatically generate our dependencies for us. Furthermore, we can get make to include those automatic dependencies at execution time, without any recursive calls! The process for doing this is beyond the scope of this article, but I will be writing an article on this very subject in the near future (so stay tuned).

3. Target-Specific Variables Can Help

Suppose that we want to build a debug version of our program using a target. Wouldn't it be nice to be able to modify some of our variable values for that specific target? Well, it turns out that we can do just that. Here's how (note the two added 'debug' lines):

CFLAGS = -Wall
OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp))

all: my_program

debug: CFLAGS += -g3
debug: my_program

my_program: $(OBJS)
    gcc -o my_program $(OBJS)

%.o: %.cpp
    gcc $(CFLAGS) -c $< -o $@

In this example, when we type make debug from the command line, our CFLAGS variable will have the appropriate debug option appended (in this case, -g3), and then the program will be built using the specified dependencies. Being able to override variables in this manner can be quite useful in the right situations.
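Here's a quick sketch of the two invocations side by side:

# Standard build: compiles with CFLAGS = -Wall
make

# Debug build: compiles with CFLAGS = -Wall -g3
make debug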

Do you have your own make tips? If so, leave a comment! I'll be posting more about doing automatic dependency generation with make and gcc in the near future.

Motorola Droid Review

Mar 21, 2010

Back in December of last year, I made the decision to ditch my land-line telephone and go wireless only. I decided to pick up a smart phone, and chose the Motorola Droid: both because of the Verizon network (with which I was relatively happy) and because it wasn't an iPhone. Now that I've had an opportunity to play with it for a few months, I'd like to share some thoughts on the device.

Droid Hardware

Seeing as this is my first smart phone experience, I don't have anything else to compare it to, but the hardware is solid. It feels well built, looks nice (in a utilitarian sort of way), and works very well. The phone is heavy, which can be a minor annoyance. I like the fact that I can use either a physical or virtual keyboard, though the physical keyboard is a bit tight. Oddly enough, I find myself switching between the keyboards pretty frequently; sometimes I'll use the physical keyboard, while other times I'll use the virtual one. Automatic word prediction, a feature I enjoy using, only works with the virtual keyboard, which probably explains why I bounce between the two (depending on how much I need to type).

The external speaker sounds great, which is a plus when I use the speaker phone. Equally good is the display, which has vivid colors and incredibly crisp text. The touch screen is decent, though I can't help but feel that it's not quite as good as it should be. I sometimes have a tough time getting it to register exactly where I'm tapping, especially on crowded web page designs. Scrolling can occasionally feel laggy, but it has a nice inertia to it, which I appreciate (I hear that Android 2.1, which has yet to be released as of this writing, improves scrolling responsiveness). Fingerprints are obviously an issue on a touch screen, and a minor annoyance, but I've learned to live with them. Storing the phone in my pocket surprisingly helps keep the screen clean!

The battery has been nothing but a problem since the day I got this phone. Sometimes, I can get two or even three days of battery life out of a single charge (depending on my usage), while other times I get less than a single day. Occasionally, the battery will drain itself for no apparent reason. Several software bugs involving the battery are also still lurking in the OS; the charge indicator will report a value of 5% or 15% at random times. Plug the phone in, and the indicator resets itself. Hopefully this problem will be worked out in Android 2.1.

Wireless reception is a mixed bag. Signal strength is terrific in the car and outside. In my house, I get decent signal (between 2 and 3 bars). At work, and in many big-box stores, I get absolutely no signal whatsoever. My signal at work is so bad that I essentially can't answer calls without walking out of the building (and I sit so far from a door that I can't make it outside before my voice-mail picks up the call). This annoys some people to no end, but I don't know of a decent way to deal with the problem, short of getting a new phone number via Google Voice, a solution I'm not terribly interested in.

Wi-fi support is terrific, as is the built-in GPS (which I'll come back to in a moment). Rounding out the hardware is the camera, which is nice, but something I haven't made much use of. The 5 megapixel resolution is a bit much in my opinion, as resulting images are too large for uploading to the web (I had to grab a free app to resize images for posting purposes).

GPS Navigation

The Droid comes with free turn-by-turn navigation via Google Maps. This is my first experience with a GPS navigation device, and I absolutely love it. Google Maps navigation has been updated several times since I got the phone, with some terrific new features, including pinch-to-zoom, improved search results, and more. Navigating at night is convenient thanks to a night mode (on-screen graphics are dimmed so they aren't blindingly bright), and drive time estimates are typically quite accurate. Being able to get real-time traffic reports is another great feature that has come in handy a time or two. The navigation software will even reroute your directions if you drive off course, which can be quite handy in the right situations (a missed turn, for example). I picked up a weighted, friction dash mount for using it in the car (I didn't want a suction cup on the windshield), and so far so good.

Software - Android OS

I'm pleased with the Android operating system. User actions have a nice feel to them, and I think the layout is clean and efficient. Again, I have nothing else to really compare this to. Changing the phone's settings can be a bit of a bother. There are loads of menus and sub-menus, and it's hard to remember where certain settings are stored. There are places here where some additional polish would be welcome. For example, it's super easy to set my phone to silent mode on the unlock screen; but when I'm actually using the phone, I have to navigate through several menus of options to make that change. This kind of inconsistency, especially for something so common as a silent mode switch, is bizarre.

As a developer, I'm impressed with how Android works behind the scenes. Applications are actually collections of components; in other words, there's no single "main" function like in your typical computer program. I won't go into why this is the case here, but suffice it to say that these design decisions make for some interesting capabilities.

Software - Apps

A number of terrific applications are available for Android, though the library is currently much smaller than the iPhone's (to be fair, the iPhone has been out for far longer). I routinely use several apps:

  • WeatherBug for forecasts (this app has the cleanest look of all the weather apps I've tried, which is saying a lot, considering how terrible the WeatherBug website is).
  • Either Twidroid or Seesmic for Twitter access, depending on my mood.
  • Shopping List Plus for grocery lists. Not a perfect app, but it lets me say so long to pen and paper!
  • Wapedia for accessing Wikipedia articles (I can't find a decent link for the app itself).

Concluding Thoughts

So far, I'm happy with the decision to switch to wireless only. Although I occasionally miss the clarity of a land-line telephone, I find this wireless phone is good enough for the little amount of talking I do. Having a phone that's capable of doing other tasks for me (managing my calendar, my shopping list, catching up on Twitter, etc.) is great, and I don't see myself going back.

I may or may not have mentioned before that I have a goal of visiting and photographing every state park in North Carolina. As a precursor to setting out on that goal, I have created a map of state park locations. Each location uses GPS coordinates provided by the state park service. Now that I have a GPS device that uses Google Maps (a Motorola Droid; review coming soon!), I figured this would be a terrific way to make it easy for me to get driving directions to certain locations.

While looking through all of the official state park pages, I learned a number of interesting facts:

  • Four of the state's 39 parks require entrance fees: Jordan Lake, Kerr Lake, Falls Lake, and Chimney Rock.
  • Two state parks do not have public access or public facilities at this time: Mayo River State Park and Haw River State Park.
  • One state park can only be accessed by taking a ferry: Hammocks Beach State Park.

I'm currently using the map's location markers to keep track of where I've been. However, the map is publicly available, so feel free to use it to navigate to any of the state's parks. If you have any suggestions on how the map could be improved, feel free to leave a comment. I'd like for this to be a helpful resource for people.

As shameful as it is for me to say, I had not, until just recently, ever read The Hobbit or The Fellowship of the Ring (or, for that matter, the other two volumes of The Lord of the Rings). I'm not sure why I never read them. Perhaps it's because I heard from some people that the books were hard to read. Well, I'm finally getting around to reading them, and I must say that I've enjoyed them thoroughly. Here are some thoughts:

The Hobbit

Though technically not a part of The Lord of the Rings, The Hobbit is clearly where it all starts. As such, I read this book first, and I'm glad I did. Reading this story first provides a great deal of context for things learned in Fellowship. I particularly loved the way the book was written: it always seemed to me like an old man was telling me the story as we sat around a camp fire. Often the narrator would go off on a tangent, later catch himself, and finally have to apologize to you, the reader. Very enjoyable. The one thing I didn't like about this story was the abrupt ending. Only a single chapter follows the climax, wrapping up a number of threads in a very short span. Such a jarring transition seems detrimental to the whole story on some level. Overall, however, a terrific story.

The Fellowship of the Ring

This is one of the best books I've read in a long time. Tolkien's command of the English language is outstanding, as is his inventiveness. Every character feels alive, and their interactions are wonderful to experience. My absolute favorite scene is the parting of the Company from Galadriel and Celeborn in Lothlórien. Galadriel gives each member of the Fellowship a gift, and she asks Gimli, the dwarf, what he would like. At first he says he wants nothing, but she presses him, so he answers that a single hair from her head would be his heart's desire. He then asserts that he doesn't actually want this; he only said so because she commanded him to speak. Here is her reply:

The Elves stirred and murmured with astonishment, and Celeborn gazed at the Dwarf in wonder, but the Lady smiled. "It is said that the skill of the Dwarves is in their hands rather than in their tongues," she said; "yet that is not true of Gimli. For none have ever made to me a request so bold and yet so courteous."

She then asks Gimli what he would do with such a gift, and he replies that he would simply treasure it, in memory of her words to him at their first meeting. This pleases her, so she gives him not one hair, but three. Gimli takes them and vows to have them set in an imperishable crystal to be an heirloom in his house, and a token of goodwill between the Dwarves and the Elves until the end of time.

Scenes like this one are peppered throughout the text, and are truly wonderful to take part in. I'm greatly looking forward to the next two books, even though I know how the story plays out.

One of the things I most appreciate about Perl is that it requires code blocks to be surrounded by curly braces. In my mind, this is particularly important with nested if-else statements. Many programming languages don't require braces to surround code blocks, so nested conditionals can quickly become unreadable and much harder to maintain. Let's take a look at an example:

if (something)
    if (another_thing)
    {
        some_call;
        some_other_call;
        if (yet_another_thing)
        {
            do_it;
            do_it_again;
        }
    }

Note that the outer if-statement doesn't have corresponding curly braces. As surprising as it may seem, this is completely legal code in many languages. In my opinion, this is a dangerous programming practice. If I wanted to add additional logic to the contents of the outer if block, I would have to remember to put the appropriate braces in place.

Had I attempted to use this code in a Perl script, the interpreter would have complained immediately, even with warnings and strict mode both disabled! This kind of safety checking prevents me from shooting myself in the foot. Some may complain that requiring braces makes programming slightly less efficient from a productivity standpoint. My response to that is that any code editor worth its salt can insert the braces for you. My favorite editor, SlickEdit, even supports dynamic brace surrounding, a feature I truly appreciate. It's a shame that more programming languages don't enforce this kind of safety net. Hopefully future languages will keep small matters like this in mind.

On February 22, several new laws went into effect in the United States in an attempt to protect consumers from credit card companies. Included among these laws is a rule that credit card statements must include information on how long it will take to pay off the balance when paying only the minimum amount each month. I've heard a great deal of talk on the radio about this particular change, mostly to the effect that it should help wake people up to the fact that minimum payments aren't a great idea, at least from the consumer's point of view; the credit card companies love this scenario.

That got me thinking about credit cards in general here in the United States. According to creditcards.com, the average credit card debt for American households in 2008 was $10,769 (for households with a credit card); almost $11,000! It boggles my mind that there are people out there with a running balance that high. My credit card debt is $0, which means someone out there has a debt of nearly $22,000! How does that even happen?

Most people must live well above their means, which makes no sense to me at all. Maybe that's because I've been pretty tight with my money all my life. I remember saving up chore money to buy my first Nintendo system. Every video game purchase was a result of hard work and scrimping and saving on my part. As a kid, I literally kept paper ledgers tracking how much money I was taking in versus how much was going out. Saving just came naturally to me. I paid for every vehicle I've ever owned, I paid for my college education, and I graduated debt free (or nearly so; I had about $1000 in student loans, which I immediately paid off once I got a full-time job). I'm what the credit card industry calls a "deadbeat." I pay my bill on time, in full, every month. How can I possibly do that? By staying within my means!

I essentially treat my credit card like a debit card: I know how much money I have in my bank account, so I know not to spend more than that. It's not that hard! Online money management tools like Mint.com only make that process easier. Month to month, I can track where my money is going, and how I'm doing overall.

I'm not sure what the answer to America's credit card debt problem is. At the very least, money management should be taught in school. Growing up, I had plenty of friends who got into trouble with money by purchasing things well outside of what they could afford. The sad thing is that money management isn't that hard; it simply takes a little bit of self control, which is something most Americans apparently just don't seem to have.

Writing Break

Feb 1, 2010

So that all of my regular readers are aware, I am taking a much needed break from blogging during the month of February. I've been in a writing funk lately, and I figured that a small break would do me some good. Updates will resume in March.