Bill Bumgarner’s useful Dupinator script, for removing duplicate files, recently hit Python-URL. However, it has a logic bug that ends up deleting too many files.
If you have several sets of duplicates that happen to share the same file size, all but one of the sets will be wiped out completely. The problem is that within each group of files of identical size, the script generates at most a single "duplicates" list, even when the group actually contains several distinct sets of identical files. The first file on the list is spared; the rest are deleted.
The net effect, when I tested the script on a large corpus of text files, was that the program reported it would delete many files that were clearly not identical. (I had commented out the os.remove call for testing.)
There was an additional problem with iPhoto: the posted script follows symbolic links. iPhoto stores its albums as collections of symbolic links, so all photos in albums are flagged as duplicates of the original photos. An islink() test fixes this.
Here’s a modified version of the script. It has only been lightly tested, though the changes did successfully eliminate the false positives. Uncomment the os.remove() line only when you are satisfied with the list of redundant files generated.
Minor optimizations: all files <= 1024 bytes go directly into the dupes list, not potentialDupes, since the whole file has already been checked. Also, Mac OS X’s pesky .DS_Store files are skipped.
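For readers who just want the shape of the fix, here’s a minimal sketch of the corrected grouping logic (Python 2, like the rest of the code here). It is not Bill’s script or my modified version, and the function name findDupes is made up; it also skips the partial-hash optimization and simply hashes full contents within each size group.

    # Sketch only (not the actual Dupinator): group files by size, then by a
    # hash of their full contents, so each identical-content group keeps
    # exactly one survivor.
    import os, md5

    def findDupes(rootDir):
        filesBySize = {}
        for dirPath, dirNames, fileNames in os.walk(rootDir):
            for name in fileNames:
                if name == '.DS_Store':
                    continue                      # skip Mac OS X housekeeping files
                path = os.path.join(dirPath, name)
                if os.path.islink(path):
                    continue                      # don't follow iPhoto's album symlinks
                filesBySize.setdefault(os.path.getsize(path), []).append(path)

        dupes = []                                # list of (keeper, redundant copy) pairs
        for size, paths in filesBySize.items():
            if len(paths) < 2:
                continue
            byHash = {}                           # one group per distinct content hash
            for path in paths:
                digest = md5.new(open(path, 'rb').read()).hexdigest()
                byHash.setdefault(digest, []).append(path)
            for group in byHash.values():
                for extra in group[1:]:           # spare the first file in each group
                    dupes.append((group[0], extra))
        return dupes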
(I haven’t heard back from Bill yet on incorporating the fixes into his code, so I’m posting here.)
WordPress 1.2 now has its own RSS import feature. However, it’s based on a different technique (regular expressions) than the code I contributed in January (which uses a true XML SAX parser). So I’m posting the code here as open source under the GPL license. This code has some additional features:
As long as your RSS feed passes the XML well-formedness test (which it probably does, even if it doesn’t validate according to the RSS Validator), you can use this RSS Import filter. If it’s not well-formed XML, you’re better off with the RSS import filter built into WordPress.
Versions are available for WordPress 0.9 through 1.2.
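To make "well-formed" concrete: a real XML parser either accepts the whole feed or stops with an error, and that is all this filter asks of the feed. Here’s a trivial Python check along those lines, purely as an illustration; the isWellFormed name is made up, and this is not part of the import filter itself.

    # Illustration only: a feed is usable by an XML-parser-based importer
    # exactly when a SAX parse of it succeeds.
    import xml.sax

    def isWellFormed(path):
        try:
            xml.sax.parse(path, xml.sax.ContentHandler())
            return True
        except xml.sax.SAXParseException:
            return False

    print isWellFormed('feed.xml')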
Now available: RSSFilter, an open source Python module for modifying RSS files and blogBrowser-format RSS archives in place. It builds on XMLFilter. (Speaking of which, thanks to Mark Pilgrim for recently mentioning it in his b-links.)
The module can also be used as an RSS parser for valid XML feeds, though it trades ultra-liberal parsing for the ability to modify files safely.
Operations such as inserting, modifying, or deleting a post are designed to cause minimal disruption to the rest of the file.
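RSSFilter’s own API isn’t reproduced here, but the generic SAX-filter pattern from XMLFilter that it builds on looks roughly like the following standard-library sketch (the RetitleFilter class is made up for illustration and is not RSSFilter): a filter sits between the parser and the output, rewriting only the events it cares about and passing everything else through untouched.

    # Generic SAX-filter sketch (standard library only, not RSSFilter):
    # append a note to the channel-level <title>, leave the rest alone.
    import sys
    import xml.sax
    from xml.sax.saxutils import XMLFilterBase, XMLGenerator

    class RetitleFilter(XMLFilterBase):
        def __init__(self, parent=None):
            XMLFilterBase.__init__(self, parent)
            self.depth = 0
            self.titleText = None          # buffers text while inside rss/channel/title
        def startElement(self, name, attrs):
            self.depth += 1
            if name == 'title' and self.depth == 3:   # rss/channel/title
                self.titleText = ''
            XMLFilterBase.startElement(self, name, attrs)
        def characters(self, content):
            if self.titleText is not None:
                self.titleText += content             # hold the title for rewriting
            else:
                XMLFilterBase.characters(self, content)
        def endElement(self, name):
            if self.titleText is not None:
                XMLFilterBase.characters(self, self.titleText + ' (archived)')
                self.titleText = None
            self.depth -= 1
            XMLFilterBase.endElement(self, name)

    reader = xml.sax.make_parser()
    rssFilter = RetitleFilter(reader)
    rssFilter.setContentHandler(XMLGenerator(sys.stdout))
    rssFilter.parse('feed.xml')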
Here’s a way to back up iPhoto’s image comments into an easy-to-read flat directory structure. (Translation: one big folder.) You’d want to do this when archiving your photos to CD or DVD, or when trying to merge photo libraries, or when leaving iPhoto for another program, or at any other time you want your comments saved in a non-proprietary, easily readable format.
As you may have read last week, when I upgraded to iPhoto 4, all the image descriptions temporarily disappeared from my online photo albums. (I caught the problem on my own staging server before it appeared on this site.) The culprit was a change in the way iPhoto stores photo comments. Comments are now entirely gone from the easy-to-parse AlbumData.xml file; iPhoto now stores them in a binary format that appears to be proprietary.
AppleScript to the rescue. Last week’s script saved the comments to text files and generated a directory structure that exactly paralleled iPhoto’s library, with one text file for each comment. These files were in folders for each day, which were in turn inside folders for each month, etc., guaranteeing there would be no name conflicts. I had rejected using the internal ID of each picture (which would have allowed a flat conflict-free directory structure) because the ID wasn’t user-visible anywhere in the iPhoto interface, making comment files named for the ID difficult to map back to the original pictures.
One of the comments on that post asked for a version that generated the comment files in one folder, based on the image’s filename. That was a good idea. Though the filename is not guaranteed to be unique, it often is in practice. Most digital cameras save unique serial numbers for each picture as part of the filename. So this is enough for most people. (The exceptions would be if you have more than one digital camera using a similar naming convention, or if your camera is configured to reset its numbering between rolls.)
If you like guaranteed accuracy, use my original script; if you like simplicity, use the following alternate script. If photo filenames are duplicated, it will save only one of the conflicting comments. Dropping the parallel folder structure simplified the script, since this version doesn’t need to employ any POSIX path manipulation.
Copy the following into Script Editor and run. Tested with iPhoto 4.0 on Mac OS X 10.3. (It may also work with earlier versions; drop me a comment below if you’ve tried it.)
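The alternate script itself is AppleScript; for the Python-minded, here is a purely hypothetical sketch of the same flat layout and its collision behavior. It is not the AppleScript, it reads from the parallel comment cache written by last week’s script rather than from iPhoto, and the flat folder name is made up.

    # Hypothetical sketch: flatten the parallel comment cache into one folder,
    # keyed by image filename, and report collisions instead of overwriting.
    import os, shutil

    commentParallelDir = os.path.expanduser(
        "~/Pictures/iPhoto Library - My Comments Cache")
    flatDir = os.path.expanduser("~/Pictures/iPhoto Comments (flat)")
    commentFileSuffix = ".comment.txt"

    if not os.path.isdir(flatDir):
        os.makedirs(flatDir)
    for dirPath, dirNames, fileNames in os.walk(commentParallelDir):
        for name in fileNames:
            if not name.endswith(commentFileSuffix):
                continue
            dest = os.path.join(flatDir, name)
            if os.path.exists(dest):
                print "Skipping duplicate filename:", name
            else:
                shutil.copy(os.path.join(dirPath, name), dest)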
I’m now using the WordPress 1.0 release to generate the content area of this weblog. (The headers, footers, site navigation, and subscription list are generated by ShearerSite.)
In many ways, it’s going from one extreme to the other. My own system is based on static rendering without a database, to the point that the original data itself is kept in RSS-compliant XML files on the site, and HTML files are generated from those. So there’s no programmatic server overhead for retrieval, but there is for authoring, since all the dependent pages have to be re-rendered on the spot. I’m still a fan of this type of system, but I wanted to try something different. WordPress is about as different as you can get: by default, it runs a battery of regular expressions--dozens upon dozens of them--over each post to format it at retrieval time. (Some kind of static caching may be on its way, though, judging from hints in the database schema.) The administration interface is mostly very good, making administrative tasks such as adding new categories much easier than they were with my homegrown config-file-based system.
Pros of WordPress: very hackable (the good way, by the site owner); terrific setup routines; good navigation controls, easy to set up; well-rounded feature set.
Cons: frequently passes HTML through finicky regular expressions; too much use of addslashes() for my taste, including some double applications; a few bugs in 1.0 (though, to be fair, 1.0.1 final is imminent).
Some changes I made to my own copy include:
I bought the upgrade to Apple’s iLife suite, released on Friday. Here’s a gotcha for developers who parse iPhoto’s AlbumData.xml file, though it doesn’t directly affect most users. It affects me, because my own code parses AlbumData.xml to generate my web-based photo albums (such as the England trip pictures I just posted).
Though the overall format of iPhoto’s XML file stays the same (and my script had no trouble reading it), the Comments and Date fields are gone! The Date field is renamed and in a different format, which is no problem to work around because the image file’s embedded EXIF data contains the date as well. The missing Comments field is a different story.
From my quick inspection, the comment data now seems to be stored only in a newly introduced iPhoto.db file, which is in some binary format. The rationale for this is presumably performance, but that doesn’t completely make sense, since the photo title is still stored in the XML file and it may be changed just as often.
In any case, here’s a workaround that uses AppleScript to write a parallel folder structure holding just the comments, one per text file. Paste the following into a Script Editor window and run. Use this anytime you’d like to protect your comments from the vagaries of software or platform transitions or upgrades. (The parallel folder structure helps this; the script could have used iPhoto’s internal IDs and generated all the files in a single folder, but that wouldn’t have been as forward-compatible.) GPL-licensed.
import os

commentCommonBaseDir = os.path.expanduser("~/Pictures/")
commentOrigDir = os.path.join(commentCommonBaseDir, "iPhoto Library")
commentParallelDir = os.path.join(commentCommonBaseDir,
                                  "iPhoto Library - My Comments Cache")
commentFileSuffix = ".comment.txt"

def getCommentForFile(imagePath):
    # Map an image path inside the iPhoto Library to the matching comment
    # file in the parallel cache and return its contents (or '' if none).
    if not imagePath.lower().startswith(commentOrigDir.lower()):
        raise ValueError(('Error: image does not appear to be in iPhoto Library; '
                          'cannot compute comment path. Image: "%s". Library: "%s".')
                         % (imagePath, commentOrigDir))
    commentPath = os.path.join(commentParallelDir,
                               imagePath[len(commentOrigDir)+1:]) + commentFileSuffix
    if os.path.isfile(commentPath):
        print "Read comment for " + imagePath
        return open(commentPath, 'r').read()
    return ''
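A hypothetical call, with a made-up image path inside the library:

    # Example only; the image path is invented for illustration.
    print getCommentForFile(os.path.expanduser(
        "~/Pictures/iPhoto Library/2004/01/10/IMG_0042.JPG"))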
I’m giving WordPress a spin, replacing my own experimental statically-generated weblog publishing tool. The homegrown system worked well, but I wanted to add more dynamic features such as comments and trackbacks, and there’s so much other work going on with weblogging tools that it wasn’t a good use of time to implement those myself.
So I made some changes to WordPress to make it fit my publishing system, all of which are to be contributed back to the project.
To continue on the recent image resizing theme (probably of interest to Python scripters only), I made some changes as a result of upgrading to Panther last week. I wanted to use the new built-in Mac OS X version of Python 2.3 (plus the MacPython Extras from Jack Jansen—thanks, Jack!). But a problem with the initial Package Manager distribution of the Python Imaging Library made me look at a new Panther feature that lets Python scripts use the native Quartz graphics library directly. (The hitch with PIL was that it was built to require a Fink install of libjpeg for full JPEG support. A quick compile of libjpeg and placement of it and its headers into Fink’s preferred locations didn’t work, and either installing Fink or compiling PIL from source would have taken a while.)
That was as good a reason as any to explore Panther’s new Quartz scripting feature. So I read what I could find on Quartz, and modified my photo album code to use Quartz if available. It still uses PIL to gather EXIF and size information, which works even without libjpeg, but then it uses Quartz to manipulate the actual image content.
The results were terrific, mostly. In real-world testing on an 800 MHz PowerBook G4, the PIL-only version spat out 8 JPEGs per minute, and the Quartz version spat out 65 JPEGs per minute. That’s a welcome improvement, especially when you multiply my typical batch of 100 photos by 3 sizes apiece.
The one problem is that I don’t yet know how to set the quality level. There’s a parameter that should contain this number, but as far as I can tell it isn’t documented anywhere. All of the supplied examples save as PNG or PDF, rather than JPEG, and the function isn’t documented along with the rest of Quartz because it’s not a real Quartz function—the release notes say that image export is actually handled through QuickTime. (This will be the first public mention in the history of the world, as far as Google is concerned, of the Core Graphics function that the API summary says it calls: CGBitmapContextWriteToFile. The last parameter, vaguely named “params” and defaulting to a zero-length string, is where a data structure including the quality level would obviously go.)
So for now it’s using a default JPEG quality level, which, whatever it is, is noticeably worse than the quality=90 setting I used with PIL, especially on thumbnails. Though I haven’t done a controlled side-by-side test, the lower quality levels seemed to produce some low-frequency blurriness, which looked much less objectionable than the high-frequency ringing (making macroblock boundaries visible) that PIL tended to show; that ringing looked bad enough that I couldn’t really run PIL with anything below quality=90. And because of the lower quality setting, the file sizes on the Quartz side were half those of the PIL versions.
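For reference, the PIL-only path that Quartz replaced amounts to roughly the following. This is a sketch assuming the standard PIL resize and save calls, not my original code, and the resizeImagesPIL name is made up.

    # Sketch of the PIL-only resize path (requires PIL built with JPEG support).
    import Image   # classic PIL module layout; Pillow would use "from PIL import Image"

    def resizeImagesPIL(origFilename, newImagesInfo):
        # newImagesInfo is a list of (newFilename, newWidth, newHeight) tuples
        origImage = Image.open(origFilename)
        for newFilename, newWidth, newHeight in newImagesInfo:
            resized = origImage.resize((newWidth, newHeight), Image.ANTIALIAS)
            resized.save(newFilename, "JPEG", quality=90)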
Here’s all the code that deals with Quartz in the new photo album. newImagesInfo holds a list of destination file paths and pre-calculated pixel dimensions.
def resizeImagesQuartz(origFilename, newImagesInfo):
    # newImagesInfo is a list of
    # (newFilename, newWidth, newHeight) tuples
    if not newImagesInfo:
        return
    import CoreGraphics
    origImage = CoreGraphics.CGImageCreateWithJPEGDataProvider(
        CoreGraphics.CGDataProviderCreateWithFilename(origFilename),
        [0,1,0,1,0,1], 1, CoreGraphics.kCGRenderingIntentDefault)
    for newFilename, newWidth, newHeight in newImagesInfo:
        print "Resizing image with Quartz: ", newFilename, newWidth, newHeight
        cs = CoreGraphics.CGColorSpaceCreateDeviceRGB()
        c = CoreGraphics.CGBitmapContextCreateWithColor(
            newWidth, newHeight, cs, (0,0,0,0))
        c.setInterpolationQuality(CoreGraphics.kCGInterpolationHigh)
        newRect = CoreGraphics.CGRectMake(0, 0, newWidth, newHeight)
        c.drawImage(newRect, origImage)
        c.writeToFile(newFilename, CoreGraphics.kCGImageFormatJPEG)
        # final params parameter?
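A hypothetical call, with made-up paths, producing a thumbnail and a large version of one photo:

    # Example only; the paths and pixel sizes are invented for illustration.
    resizeImagesQuartz("/Photos/originals/IMG_0042.JPG",
                       [("/Photos/thumbs/IMG_0042.jpg", 160, 120),
                        ("/Photos/large/IMG_0042.jpg", 1280, 960)])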
If you’re on a Panther machine with the Developer Tools installed, you can find the examples I started with in:
/Developer/Examples/Quartz/Python/
Seems obvious where they would be in retrospect. Thanks to the folks on the MacPython channel in iChat for pointing me to them.
My XMLFilter package was mentioned in Uche Ogbuji’s latest Python XML article on xml.com:
XMLFilter is one of those great examples of an unglamorous but extremely valuable program. Based on its description (and I expect to try it out and report on it in this column soon), it is a must-have for anyone building SAX programs. It provides a fallback SAX parser/driver to avoid SAXReaderNotAvailable errors that users encounter on some platforms. It also offers a safety net against the XMLGenerator bug that bit me earlier in this series. Its main feature, however, is a framework for SAX filters. See Andrew Shearer’s announcement.
Thanks, Uche!
A few days ago, I made changes to my photo album software. Now all current and past photo albums have an optional “large” size with double the pixel count, preserving more detail for users with large screens.
(There are also some other minor improvements, such as a photo count for each album, links to the next and previous albums by date, and more links to related sites.)