I posted online notes from the first class of day 4 of the SD Expo.
I’m commuting into Boston from Providence by train every morning, which means I have to get up at 5:30 AM to make the first class at 8:00 AM. There isn't a lot of time left at the end of the day. A woman I talked to on the train yesterday has been doing the commute daily for months, and her managers have been doing it for years. Apparently the pleasure "starts to wear thin" at some point.
On the first day I took the subway from South Station to the Hynes Convention Center, but then I realized the convention center isn't that far from Back Bay station. In fact, you can get from one to the other entirely indoors: through a tunnel, through a mall, across a skybridge, and through another mall. Good for avoiding the cold Boston nights.
My notes from the third day of the software development conference in Boston are online, summarizing three classes: Principles of Advanced Software Design, Use Case Design Pattern: Realistic Implementation in Java, and XML Schema Language.
My notes from the second day of the software development conference in Boston are online, summarizing two half-day tutorials: JDK 1.4, with new features for Java, and Hands-on XSLT, a classroom attempt at teaching the XML transformation language.
Steven Frank starts a user-interface discussion: .dmg Files Considered Harmful (via Daring Fireball).
I had some of the same misgivings about .dmg files, but there are also drawbacks to .sit and .tgz archives in that it's still not obvious to the untrained user how to install the programs after download. At least you can more reliably put instructions into a disk image window.
It's possible to make disk images user-friendly provided they satisfy three requirements:
It would be interesting to see a new user try out a disk image. Without that, all I can do is speculate.
I posted my notes from day 1. There's no WiFi in the conference building, but Newbury St., a few blocks away, has free wireless access, and I'm posting this from a street corner.
I’m at the SD East conference in Boston every day this week.
Matthew Thomas wrote a great essay: When good interfaces go crufty.
In the 1970s and early ’80s, transferring documents from a computer’s memory to permanent storage (such as a floppy disk) was slow. It took many seconds, and you had to wait for the transfer to finish before you could continue your work. So, to avoid disrupting typists, software designers made this transfer a manual task. Every few minutes, you would “save” your work to permanent storage by entering a particular command.
Trouble is, since the earliest days of personal computers, people have been forgetting to do this, because it’s not natural. They don’t have to “save” when using a pencil, or a pen, or a paintbrush, or a typewriter, so they forget to save when they’re using a computer. So, when something bad happens, they’ve often gone too long without saving, and they lose their work.
Fortunately, technology has improved since the 1970s. We have the power, in today’s computers, to pick a sensible name for a document, and to save it to a person’s desktop as soon as she begins typing, just like a piece of paper in real life. We also have the ability to save changes to that document every couple of minutes (or, perhaps, every paragraph) without any user intervention.
We have the technology. So why do we still make people save each of their documents, at least once, manually? Cruft.
[more...]
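As a thought experiment, here's roughly what the no-manual-save model mpt describes might look like in code: name the document and put it on the desktop at the first keystroke, then keep saving periodically without any user intervention. The `Document` class, the naming rule, and the two-minute interval are my own illustrative assumptions, not anything from the essay.

```python
import time
from pathlib import Path

AUTOSAVE_INTERVAL = 120  # seconds; "every couple of minutes"

class Document:
    """Hypothetical editor document with no manual save step."""

    def __init__(self, desktop: Path):
        self.desktop = desktop
        self.text = ""
        self.path = None       # assigned at the first keystroke
        self.last_saved = 0.0

    def type(self, chars: str) -> None:
        first_keystroke = not self.text
        self.text += chars
        if first_keystroke:
            # Pick a sensible name and put the document on the
            # desktop as soon as the person begins typing.
            name = self.text.strip().split("\n")[0][:40] or "Untitled"
            self.path = self.desktop / (name + ".txt")
            self._save()
        elif time.monotonic() - self.last_saved > AUTOSAVE_INTERVAL:
            # Save changes every couple of minutes, with no
            # user intervention.
            self._save()

    def _save(self) -> None:
        self.path.write_text(self.text)
        self.last_saved = time.monotonic()
```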
A couple of years ago, I would have agreed with this wholeheartedly, and taken exception to this dissent (from Daring Fireball), which deals specifically with the one passage I excerpted.
The differences center on how and when documents are saved. Like mpt, I used to strongly believe that the schism of document-in-memory vs. document-on-disk was an unnecessary throwback to an early implementation detail. It should have been thrown out long ago, along with the annoying “Save/Save As” interface, which is mainly good at making the common task of renaming a document as circuitous as possible.

To rename a document, you have to Save As using one navigation interface, switch to the Finder/file manager, locate the old document with a different navigation interface, and trash it. Or you can switch to the Finder/file manager first, locate and rename the document, then switch back and hope you remembered to close the document beforehand; otherwise all bets are off. On the Mac OS, well-written applications will notice the name change and update the title bar; less well-written applications will make a new copy under the old name when you eventually save; even less well-written applications will give a cryptic error message when you save. On Windows, you’ll typically get an “access denied” error message immediately after typing the new name, which isn’t friendly but at least gets rid of the uncertainty.

Save As is not only bad at renaming, but also bad at making a backup copy: you naturally wind up editing the backup copy you just tried to make, unless you specifically close it, go back to the Finder/file manager, and reopen the regular version. For exactly what use case is Save As designed, anyway?
But the ineffectiveness of Save As pales in comparison to problems with the model underlying the regular Save command. The Save model does the wrong thing by default: it doesn’t save your work. That is, unless you continuously correct it by telling the computer that you want your work saved rather than not saved. As a result of making the right thing more difficult than the wrong thing, and requiring that people remember to do repeatedly what the computer could do automatically, the Save model has resulted in untold hours of lost work.
I used to think that one unified document image kept equal in memory and on disk would solve these problems, as well as remove a conceptual stumbling block for new users. But recently I’ve come to see the value of having one permanent record (on the hard disk) and a separate working copy (somewhere or other). Manual save is good, because it should require more effort to affect the permanent record than just to open a file, scroll around, and maybe hit a couple of keys accidentally.
Maybe this has something to do with my use of a laptop now instead of a desktop. On a desktop machine (without a UPS), the in-memory document is at most a quarter-second’s power interruption away from oblivion. Whereas a laptop will keep on going right through a blackout. On Mac OS X, system crashes aren’t a worry anymore, and my PowerBook G4 preserves the contents of memory for days away from power in sleep mode. I can even remove both the power adapter and the battery for a minute and the thing still preserves everything in memory for instant wake. This goes a long way toward making the in-memory copy stable and trustable, and not just an implementation defect, as I used to think of it. It would be interesting to see whether other people’s opinions on the Save model correlate with desktop vs. laptop use.
The Daring Fireball objections are perfectly reasonable for today’s applications, but they don’t leave much room for a vision of the future. Maybe a new model based on a built-in versioning system (similar to the system in OpenDoc, but more automatic) would satisfy both camps.
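For what it's worth, here is one speculative sketch of how such a model might satisfy both camps: routine edits autosave continuously to a working copy, and only an explicit Save commits a numbered snapshot to the permanent record. Every name and the snapshot layout here are hypothetical, just one way to read the idea, not how OpenDoc actually worked.

```python
import shutil
from pathlib import Path

class VersionedDocument:
    """Speculative sketch: edits autosave to a working copy; an
    explicit save commits a snapshot to the permanent record."""

    def __init__(self, path: Path):
        self.path = path  # the permanent record
        self.working = path.parent / (path.name + ".working")
        self.versions = path.parent / (path.name + ".versions")
        self.versions.mkdir(exist_ok=True)
        self.text = path.read_text() if path.exists() else ""

    def edit(self, text: str) -> None:
        # Opening, scrolling, and stray keystrokes only ever touch
        # the working copy; the permanent record is unaffected.
        self.text = text
        self.working.write_text(self.text)

    def save(self) -> None:
        # Deliberate effort: snapshot the current permanent record,
        # then promote the working copy to permanent.
        if self.path.exists():
            n = len(list(self.versions.iterdir())) + 1
            shutil.copy2(self.path, self.versions / f"v{n:04d}")
        if self.working.exists():
            self.working.replace(self.path)
```

Under this scheme, someone who only browses a document never changes the permanent record, and someone who forgets to save still has the working copy sitting on disk.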
Two unconventional takes on the approval of the Microsoft settlement. The first is from the dot-communist (“the government case was crafted by morons”). The second is from Robert X. Cringely (who brings up the Eolas lawsuit against Microsoft again, a story no one else appears to be reporting).
I posted diagrams and a status update on the template system.