According to the WHATWG blog, future versions of the HTML specification will no longer use a version number. I can’t imagine why this is a good idea. How are web designers supposed to know how to target their sites to visitors? A “living and breathing spec” will require frequent updates from every browser vendor, so as to stay current with what’s allowed. As new features are implemented, how will developers know who’s compliant and who’s not? It seems to me that removing the version is a big step backwards. Perhaps they have ways of handling these situations?
Posts Tagged “articles”
The Register recently had an interesting article on GFS2, the replacement for the Google File System. It offers insight into the problems Google is facing with the aging GFS. In today’s world of video streaming, GMail account checking, and more, the GFS model doesn’t hold up as it once did. According to the article, the new Caffeine search engine that Google is rolling out supposedly uses this new back end, resulting in faster search results. It should be interesting to see what other benefits come our way as Google tinkers with its engine.
Jeffrey Zeldman has written an interesting article on URL shortening and, more specifically, how he rolled his own using a plugin for WordPress. He also points to an excellent article written by Joshua Schachter, describing the benefits and pitfalls of link shortening utilities. Both articles are worthy reads. I suggest reading Joshua’s article before Jeffrey’s.
Do you use URL shortening services? I mainly use bit.ly at Twitter, mostly because that’s what everyone else seemed to use. Have you found some services to be better than others?
Jeffrey Zeldman has pointed his readers to an interesting article entitled Digg, Facebook Stealing Content, Traffic, Money From Publishers? The article focuses on the recently launched DiggBar, and the negative effects it’s having on the web. I gave up on Digg long ago, and this just furthers my intent to stay away from the site. With shady practices like this, it doesn’t deserve my attention.
I ran across a thoroughly engaging article at the Netflix blog that discusses the various encoding techniques they use for delivering “Watch Instantly” content. It sheds light on a number of the issues they face, and some of the decisions they are making. Silverlight is apparently their future player platform of choice, and the article discusses a little bit about why they chose this path. The technical details are appreciated, and it’s cool to see them being open like this. Maybe they’ll share similar information about other aspects of their business in the future.
I just finished reading an excellent article on how to fix pathfinding in games. The author presents a number of compelling examples of how today’s pathfinding can break (with examples from legendary games like Oblivion and Half-Life 2), and offers a great solution: use a navigation mesh instead of a waypoint graph. Genius.
Are there any readers here who use Windows and don’t make use of an anti-virus client? I’ve been thinking about ditching my anti-virus client altogether on my personal system, and after reading an interesting article on the subject, I’m wondering if anyone else out there has taken this route. In my experience, anti-virus solutions are slow, ineffective (I’m not sure mine has ever flagged anything over the years), and generally a bother to keep up with.
If you’ve ditched anti-virus, why’d you do it? And what have been your results?
There’s an interesting article at InformationWeek about the new Windows architecture that Microsoft is developing. Windows 7, which is slated to be the successor to Vista, will use a new “MinWin” architecture. Essentially, the Windows core will be stripped down to the bare essentials, and additional functionality will be supplied through modules. According to the article, Eric Traut, a Microsoft distinguished engineer, demoed a version of the Windows core running with only a 25 MB footprint (as opposed to the 4 GB footprint of Vista).
I think this is a step in the right direction. Ever-growing hard drives have made sloppy programming, and the software bloat that comes with it, much more prevalent. It’s time to step back, trim the fat, and work towards leaner software.
There’s an interesting op-ed article that contrasts Call of Duty 4 and Crysis. The author argues that emergent gaming (player-oriented, as in Crysis) is the future. Scripted gaming (like CoD4) is the current norm, but it limits the player in a number of ways. Unscripted gaming opens up a world of additional possibilities, at the cost of a much more challenging development paradigm. I certainly hope that games become more unscripted over time; I had a lot of fun with the Crysis demo, and the unscripted work going into the Half-Life 2 world seems to really be paying off.
There’s a really great article over at Stuart Parmenter’s blog discussing memory fragmentation in Firefox. This phenomenon is what’s causing Firefox to appear to consume so much memory. Most folks simply assume that Firefox leaks memory, mostly because they probably don’t understand what a memory leak is. Although Firefox did at one point have a number of memory leaks, the majority of them have been plugged (see this article by Jesse Ruderman for further details).
It’s great to see that someone is investigating this issue, and I find it very interesting that it’s a fragmentation problem that’s causing things to look bad. Hopefully we can see some fixes for this issue in the near future, and Firefox can get a better foothold in this department.
There’s a great followup article that shows some of the preliminary work going on to solve this problem.
A List Apart never disappoints. While I don’t read every article in each issue (not all of them apply to my web development efforts), I have yet to find one that hasn’t taught me something new. The latest issue is a prime example. Two new articles tackle the problem of weak writing on the web:
Both articles are excellent reads, but the latter is my personal favorite. Mrs. Simmons points out a number of interesting thoughts on where writing for the web becomes anemic. One specific example that hits close to home for me is alt text. Improving my alt text writing is a subtle yet important change that would benefit my websites in a number of ways.
I recently stumbled upon an excellent article explaining why the “black bars” still show up for some movies, even on high-definition televisions. Not being the owner of a high-def TV, I had always wondered what really happened in these cases. Now I finally understand what’s going on, and that one shouldn’t panic when the bars continue to show up.
I just ran across an excellent article entitled Why “left: -9999px;” is Better For Accessibility Than “display: none;”. It discusses the two primary means by which web developers try to hide text (usually to allow for accessible logos or titles), and why using an offset is (often) better than simply making it invisible. The author backs this up with documentation from Microsoft on Internet Explorer accessibility. Looks like I need to fix my websites!
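For illustration, the two techniques look something like this (the class names here are mine, not from the article):

```css
/* Hidden from everyone, screen readers included —
   the element is removed from the accessibility tree: */
.logo-hidden-bad {
    display: none;
}

/* Moved far off-screen instead — invisible in the visual
   layout, but still read aloud by screen readers: */
.logo-hidden-good {
    position: absolute;
    left: -9999px;
}
```

The second rule keeps the text in the document flow as far as assistive technology is concerned, which is the whole point of the technique.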
I just read an interesting article over at Wired that essentially asks “Is Firefox Getting Bloated?” The article compares Firefox to SeaMonkey. I was a Mozilla browser user well before it was named “SeaMonkey”, and well before Firefox 1.0 was released. During that time, I really came to despise the bloat in the application. Firefox was an incredible breath of fresh air when it was released: lightweight and responsive.
Personally, I feel that Firefox still exhibits both qualities. But I can see the argument made by the Wired article. Additional features, some of which many users may not actually care about, are creeping into the code base. Built-in support for microformats (something that I still don’t fully understand) is coming in Firefox 3.0. Do users really need this? Mozilla apparently thinks so. Many users may disagree.
There are certainly areas where Firefox could improve (in-browser support for both Java applets and PDF files is horrible). But I think Firefox is in great shape now. One thing I know for certain is that I’m never going back to Internet Explorer. (Side Note: I recently installed IE 7 on my work laptop … man, is it horrible.)
What do you think? Is Firefox too bloated? Too lean? Just right?
Matt Cutts has posted a short collection of improvements he’d like to see over at Amazon.com. I agree with him on all counts. There are a number of areas that Amazon could do way better on; hopefully some of these ideas will see the light of day.
Judging by the comments on the post, it also looks like I’m not the only Amazon Prime junkie. 😀
Last night I posted a Firefox profile tutorial. The guide describes what profiles are, what they are good for, and how you can make use of them. I’ll probably add to it over the next week or two. Some troubleshooting tips would make a good addition, and hopefully some readers will have suggestions on ways I can improve things. As always, let me know of any problems you might find.
The holy grail of CSS layouts has apparently been located by one Matthew Levine. Although I personally had never been searching for it, I was aware that people were. A number of potential grail candidates had apparently surfaced over time, but none have been as simple and elegant as the one found most recently.
This finding illustrates one of the main problems with CSS: columns. Placing content into columns is tough to begin with (even if we make use of illegitimate table layouts). Fortunately, CSS3 plans to add native column support. Unfortunately, support for that is still years down the road. And Microsoft is likely to never support it; they only support the “standards” for which they are sole author. Regardless, a tip of the hat to Matthew for sharing this gem with us. The world will never be the same.
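For the curious, the proposed CSS3 syntax is roughly as follows (per the current multi-column draft; today it works only behind vendor prefixes such as `-moz-column-count`, if at all):

```css
/* Flow the content into three columns with a 1em gutter.
   The browser balances the column heights automatically —
   no floats, no positioning hacks, no tables. */
#content {
    column-count: 3;
    column-gap: 1em;
}
```

Compare that to the pile of floats and negative margins the holy-grail technique requires, and it’s easy to see why native column support can’t come soon enough.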
My dad pointed me to an excellent article entitled The Perils of JavaSchools. Although it’s a little lengthy, the article is an incredibly worthwhile read on why schools that teach Java as the primary programming language are, to some degree, dumbing down the future generation of computer programmers. My alma mater took this route, but I was fortunate enough to be in the class before this change was made. I picked up C++ first: both on my own (by learning Visual C++ and MFC) and in school (my first programming courses were all taught using C++). After learning the intricacies of C++, learning Java was incredibly simple (almost too simple, in fact). I believe I passed my “Java for C Programmers” course with an A+, a feat that required very little effort on my part.
Once one knows about pointers, objects in Java become ridiculously easy to discuss, as does the entire programming language itself. Java is a fine programming language (it fixes a lot of the brokenness in C++), but to know C++ is to feel enabled. I can wield the mighty sword of pointers and memory management, something that many Java programmers do not, and quite possibly cannot, ever do. I’m not saying that Java programmers cannot become successful C++ programmers; I’m simply making the point that there are more Java programmers who cannot pick up C++ than there are C++ programmers who cannot pick up Java.
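To make the point concrete, here’s a minimal C++ sketch of the kind of pointer and memory management I mean (`Widget` and the helper functions are my own illustrative names, not anything from the article):

```cpp
#include <string>

// A trivial heap-allocated type for the example.
struct Widget {
    std::string name;
};

// The caller receives a raw pointer and owns the allocation:
Widget* make_widget(const std::string& name) {
    return new Widget{name};  // explicit heap allocation
}

std::string use_widget() {
    Widget* w = make_widget("example");
    std::string n = w->name;  // dereference through the pointer
    delete w;                 // explicit free; forget this and the memory leaks
    return n;
}
```

In Java, every object reference behaves much like `w` above, but `new` hides the allocation details and the garbage collector performs the `delete` for you — which is exactly why the concepts transfer so easily in one direction and not the other.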
Again, I highly recommend the article. The author also has an incredibly enticing reading list, which he uses to train managers in his company. There are a number of books there that look really great. I hope to begin reading The Mythical Man-Month, the masterpiece written by Fred Brooks. It’s one that I’ve been meaning to read for a long, long time.