
10 Best Features from Commercial CMS

Browser-based image editing, pre-localized interfaces

Extra credit: In-context editing (Edit This Page), dependency reporting, semblance of autoclassification, relational viewing tools

Reporting, such as a "Never Logged In" report

Configurable, forms-based workflow (ingest Visio WFML?)

508/WAI-compliant output (accessibility): table headings and row headings, alt text, etc.

Browser-based content object development (schema, essentially)

OpenCourse educational site. opencourse.org. “It rhymes with open source!” (The presenter avoided saying this, but I'm sure he wanted to.) Slow-moving.

Dublin Core Metadata in CMS

In the presentation slide show on oscom.org, the different DC formats for XHTML, HTML, and RDF/XML are linked.

Good reference implementation: DC-dot. Another: Reggie.

Can elements (such as DC.Subject.Keyword) appear multiple times? Yes. Comma-separated value lists in a single element? No.
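In other words, each keyword gets its own element. A minimal sketch (mine, not from the session; the meta-tag encoding is the usual DC-in-HTML convention, and the keyword values are made up):

    # Emit one DC.Subject.Keyword element per keyword (repeated elements),
    # rather than a single comma-separated value list.
    keywords = ["content management", "metadata", "Dublin Core"]

    for kw in keywords:
        print('<meta name="DC.Subject.Keyword" content="%s" />' % kw)

    # Not this:
    # print('<meta name="DC.Subject.Keyword" content="%s" />'
    #       % ", ".join(keywords))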

Discussion on thesauri, search engines, etc. Overall, I didn't get a huge amount out of this session, at least not directly. I'll have to find the reference implementations online.


WebDAV provides a standard way to place content on a web server, with metadata, file locking, and versioning. It can also decouple the filesystem layout from the author's view. It uses HTTP for all logins, so there's no need to create full user accounts.
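To make that concrete, here is a rough sketch of the basic operations as plain HTTP requests (my own illustration, not from the talk, using Python's requests library for brevity; the server, path, and credentials are made up):

    # Sketch of basic WebDAV operations over HTTP (hypothetical server and paths).
    import requests

    url = "https://dav.example.com/docs/report.html"
    auth = ("author", "secret")  # plain HTTP authentication, no full user account

    # Place content on the server.
    requests.put(url, data="<html>...</html>", auth=auth)

    # Read properties (metadata) with PROPFIND; an empty body means "all properties."
    props = requests.request("PROPFIND", url, auth=auth, headers={"Depth": "0"})
    print(props.status_code)

    # Take out an exclusive write lock before editing.
    lock_body = """<?xml version="1.0" encoding="utf-8"?>
    <D:lockinfo xmlns:D="DAV:">
      <D:lockscope><D:exclusive/></D:lockscope>
      <D:locktype><D:write/></D:locktype>
    </D:lockinfo>"""
    requests.request("LOCK", url, data=lock_body, auth=auth,
                     headers={"Content-Type": "application/xml",
                              "Timeout": "Second-600"})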

Very few clients support metadata so far. Cadaver does, but it's command-line based. Kcera? KExplorer? support properties too.

To check out: Joe Orton's sitecopy. Twingle.

In testing, WebDAV used for file sharing generated less network traffic than SMB.

Question on ranged PUTs: WebDAV and mod_dav support them, but some servers don't. The Mac OS X WebDAV client doesn't use ranged PUTs for that reason; against a server that doesn't support them, it would risk replacing the entire file with just the small part that changed. They're working toward some kind of solution.
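If I followed the discussion correctly, a ranged PUT is just a PUT carrying a Content-Range header, something like the sketch below (mine; the URL is made up, and as noted above many servers ignore or reject this):

    # Sketch of a partial ("ranged") PUT: update bytes 1024 onward in place.
    # A server that doesn't understand Content-Range may store only this chunk
    # as the whole file -- exactly the risk described above.
    import requests

    chunk = b"replacement bytes"
    start = 1024
    headers = {"Content-Range": "bytes %d-%d/*" % (start, start + len(chunk) - 1)}
    requests.put("https://dav.example.com/docs/big.bin", data=chunk,
                 headers=headers, auth=("author", "secret"))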

Servers include Apache mod_dav (which the speaker wrote), Zope, and Tomcat. Jakarta Slide requires a lot of work to connect its memory-based store to something. You can even handle WebDAV with CGI, except for the OPTIONS method.

Subversion supports DeltaV WebDAV. You can mount and copy files from vanilla Windows and Mac OS X, but you can't modify them, because those clients don't support DeltaV. (There is an experimental "autoversion" plugin for the server to allow this.)

Extensions: ACL, for remote management of access control lists, is close to RFC status. DASL (DAV Searching and Locating), yet another query language, is further off.

MS WebDAV does a little check for FrontPage first, but is pretty much straight WebDAV otherwise.

My question: best/simplest route to implement a change trigger for a WebDAV server, so I could run a script? Can I plug in easily to any of the existing servers?

A. Zope supports WebDAV and is programmable. It uses its own data store, though, not the filesystem. So the whole system would have to use Zope.

Best answer: you could look at the logs, or use an Apache filter, to implement the change response. Great idea.
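A rough sketch of the log-watching approach (mine, not from the session; the log path and the common log format are assumptions):

    # Watch an Apache access log and run a handler whenever a WebDAV
    # write method shows up. Log path and format are assumptions.
    import time

    WRITE_METHODS = {"PUT", "DELETE", "MKCOL", "MOVE", "COPY", "PROPPATCH"}

    def on_change(method, path):
        print("changed:", method, path)   # run the real script here

    with open("/var/log/apache2/access_log") as log:
        log.seek(0, 2)                     # start at the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            try:
                request = line.split('"')[1]      # e.g. 'PUT /docs/a.html HTTP/1.1'
                method, path = request.split()[:2]
            except (IndexError, ValueError):
                continue
            if method in WRITE_METHODS:
                on_change(method, path)

That only covers changes made through the server, of course; changes made directly on the filesystem would still need the watch-and-notify route mentioned next.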

Alternative: the author of some filesystem watch-and-notify utilities suggested using those. They only run on Unixes, though. (I need Windows support, so I could also look into NT's APIs for filesystem notification.)


To come.


Warning: possible drivel ahead.

Thought for the day: As I sit here on this bus, for some reason I remember reading about someone who got a steady WiFi signal on a high-speed train in some low-density area of the country. Thinking about it, I'm not sure how that's possible with regular transmitters--with a garden-variety access point having a range of less than 300 feet, the train would be out of range in seconds. And I've never heard of enough transmitters/repeaters strung together to make the handoff continuous. (Could someone on the train have been retransmitting an Internet connection they made through other means?)

If Amtrak really wanted to do this--and here, finally, is the thought I mentioned--could they set up Pringles-can transmitters pointed down the track? They're very directional and have great range (in Aspen's network, apparently miles). That might be easier than somehow getting Internet access into the train through the overhead electrical system or a satellite link and having Amtrak retransmit it to everyone inside, since I can't imagine the overhead electrical system is great for data. (Then again, if the electrical system does have a constant enough connection to modulate data on top of it, they should do so right away.)

I'm sure there are many people who have thought this through more than I just did, but having standard WiFi Internet access on trains would be a great marketing advantage, considering how eager the airlines are to adopt it despite the significant expense, technical difficulty, and slow rollout it's entailing for them.

Amtrak would have to have a long fiber line along their right-of-way (which telecom companies have largely already put there, judging by all those orange "Warning: Fiber!" tubes stuck in the ground along the routes), and they would need to tap into it every so often with some network equipment. Maybe that equipment would be cheaper than satellite or a long-range WLAN protocol. Or maybe not, and I'm blowing smoke. Or maybe they already have WiFi, and I'm still blowing smoke. Oh well. I never claimed otherwise. End of today's thought.


Going to OSCOM in Boston today.


I posted online notes from the first class of day 4 of the SD Expo.


I’m commuting into Boston from Providence by train every morning, which means I have to get up at 5:30 AM to make the first class at 8:00 AM. There isn't a lot of time left at the end of the day. A woman I talked to on the train yesterday has been doing the commute daily for months, and her managers have been doing it for years. Apparently the pleasure "starts to wear thin" at some point.

The first day I had taken a subway ride from South Station to the Hynes Convention Center, but then I realized it wasn't that far away from Back Bay. In fact, you can get from one to the other entirely indoors, through a tunnel, through a mall, across a skybridge, and through another mall. Good for avoiding the cold Boston nights.


My notes from the third day of the software development conference in Boston are online, summarizing three classes: Principles of Advanced Software Design, Use Case Design Pattern: Realistic Implementation in Java, and XML Schema Language.


My notes from the second day of the software development conference in Boston are online, summarizing two half-day tutorials: JDK 1.4, with new features for Java, and Hands-on XSLT, a classroom attempt at teaching the XML transformation language.


Matthew Thomas wrote a great essay: When good interfaces go crufty.

In the 1970s and early ’80s, transferring documents from a computer’s memory to permanent storage (such as a floppy disk) was slow. It took many seconds, and you had to wait for the transfer to finish before you could continue your work. So, to avoid disrupting typists, software designers made this transfer a manual task. Every few minutes, you would “save” your work to permanent storage by entering a particular command.

Trouble is, since the earliest days of personal computers, people have been forgetting to do this, because it’s not natural. They don’t have to “save” when using a pencil, or a pen, or a paintbrush, or a typewriter, so they forget to save when they’re using a computer. So, when something bad happens, they’ve often gone too long without saving, and they lose their work.

Fortunately, technology has improved since the 1970s. We have the power, in today’s computers, to pick a sensible name for a document, and to save it to a person’s desktop as soon as she begins typing, just like a piece of paper in real life. We also have the ability to save changes to that document every couple of minutes (or, perhaps, every paragraph) without any user intervention.

We have the technology. So why do we still make people save each of their documents, at least once, manually? Cruft.

[more...]

A couple of years ago, I would have agreed with this wholeheartedly, and taken exception to this dissent (from Daring Fireball), which deals specifically with the one passage I excerpted.

The differences center on how and when documents are saved. Like mpt, I used to strongly believe that the schism of document-in-memory vs. document-on-disk was an unnecessary throwback to an early implementation detail. It should have been thrown out long ago, along with the annoying "Save/Save As" interface, which is mainly good at making the common task of renaming a document as circuitous as possible. To rename, you have to Save As using one navigation interface, switch to the Finder/file manager and locate the old document with a different navigation interface, then trash it; or you can switch to the Finder/file manager first, locate and rename the document, then switch back and hope you remembered to close the document beforehand, because otherwise all bets are off. On the Mac OS, well-written applications will notice the name change and update the title bar; less well-written applications will make a new copy under the old name when you eventually save; even less well-written applications will give a cryptic error message when you save; and on Windows you’ll typically get an “access denied” error message immediately after typing the new name, which isn’t friendly but at least gets rid of the uncertainty. Save As is not only bad at renaming, but also bad at making a backup copy: you naturally wind up editing the backup copy you just tried to make, unless you specifically close it, then go back to the Finder/file manager and reopen the regular version. For exactly what use case was Save As designed, anyway?

But the ineffectiveness of Save As pales in comparison to the problems with the model underlying the regular Save command. The Save model does the wrong thing by default: it doesn’t save your work unless you continually correct it by telling the computer that you want your work saved rather than not saved. By making the right thing more difficult than the wrong thing, and by requiring people to remember to do repeatedly what the computer could do automatically, the Save model has cost untold hours of lost work.

I used to think that one unified document image kept equal in memory and on disk would solve these problems, as well as remove a conceptual stumbling block for new users. But recently I’ve come to see the value of having one permanent record (on the hard disk) and a separate working copy (somewhere or other). Manual save is good, because it should require more effort to affect the permanent record than just to open a file, scroll around, and maybe hit a couple of keys accidentally.
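In code terms, the distinction might look something like the toy sketch below (entirely mine, not any particular application’s design): the working copy is checkpointed automatically, and only an explicit Save touches the permanent record.

    # Toy sketch of the two-copy model: autosave checkpoints the working copy,
    # and only an explicit Save touches the permanent record. File names are
    # hypothetical.
    import shutil

    DOCUMENT = "report.txt"            # permanent record on disk
    WORKING  = "report.txt.working"    # separate working copy

    def autosave(text):
        """Checkpoint the working copy; cheap enough to run every few minutes
        (or every paragraph) without the user doing anything."""
        with open(WORKING, "w") as f:
            f.write(text)

    def save(text):
        """Explicit save: the only operation that changes the permanent record."""
        autosave(text)
        shutil.copyfile(WORKING, DOCUMENT)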

Maybe this has something to do with my use of a laptop now instead of a desktop. On a desktop machine (without a UPS), the in-memory document is at most a quarter-second’s power interruption away from oblivion, whereas a laptop will keep going right through a blackout. On Mac OS X, system crashes aren’t a worry anymore, and my PowerBook G4 preserves the contents of memory for days away from power in sleep mode. I can even remove both the power adapter and the battery for a minute and it still preserves everything in memory for an instant wake. This goes a long way toward making the in-memory copy stable and trustworthy, rather than just an implementation defect, as I used to think of it. It would be interesting to see whether other people’s opinions on the Save model correlate with desktop vs. laptop use.

The Daring Fireball objections are perfectly reasonable for today’s applications, but they don’t allow much of a vision of the future. Maybe a new model based on a built-in versioning system (similar to the system in OpenDoc, but more automatic) would satisfy both camps.

