More about Troy: Some friends tell me they've heard that it sucks. Maybe my appreciation of it is artificially enhanced by the fact that I read the story recently, so any gaping plot holes in the film are ones I would miss. If you're worried about this, or perhaps daunted by the prospect of reading 550 pages of dense prose, you could take a shortcut and read this ultra-condensed version. You lose all of the metaphors and detail, but you at least get an idea of what the real plot is about.
Also, the CSU budget is finally nailed down, and so the final SFSU update for next year is here: Budget update -- May 2004. Some stuff got cut and fees will increase a bit (from "crazy cheap" to "cheap") but my department will remain. I don't know which of my professors will still be around next semester, unfortunately. I hope they all are but I know that's unlikely. Meanwhile, Republicans everywhere are shouting for more tax cuts.
Last semester I read the Iliad (among other Ancient Epic Tales, which was the name of the class). Last night we saw Troy. It's really good! Some changes had to be made to fit the epic poem into a film, and I think they were unavoidable given its length. Also, the beginning and end of the film's plot don't match up exactly with the poem: the poem starts later and ends earlier than the film, leaving out things like the back story of Helen as well as anything following the funeral of Hector (the Trojan Horse is described in the Odyssey, for example, and the escape from the sack of Troy in the Aeneid). Still, the characters from the poem are very well portrayed, the key plot tensions and personal conflicts are present, and the fight choreography was great. Some of the combat scenes were a bit too herky-jerky (for some reason, directors feel it's a good technique to edit so much that you can't see anything or figure out what's going on), but others were very well done. I particularly liked Paris's "Menelaus Cam". I hope that the openings left in the movie for possible Odyssey and Aeneid sequels will be pursued.
In the last month or two I've had to download a lot of stuff, usually for work. This has taken me to a bunch of different web sites, and each of them has a different idea of what the appropriate user experience for downloading should be. Some of them are pretty icky. I shall document the List of Download Crimes here, in the form of an imaginary download experience from hell.
Why are Microsoft and Google so interested in improving local filesystem search? (Google story / Microsoft story) Am I the only one who has learned how to use folders? I can't be; I've seen other people's computers, and they appear to have figured out the concept of organizing files in folders on their hard disks and in their email accounts. Google likes to talk about making that a thing of the past, and that sounds horrible to me: instead of organizing files and emails, just pile them all in a single folder and query for them each time. Argh. I'd rather see e-mail clients add something like the unix hard link, so that an email can be stored in multiple folders without making two copies, but can be removed from one folder without being deleted from all of them (i.e. not like an alias / shortcut / symlink). Folders are a good thing, and IMO the only reason some people have trouble wrapping their brains around a hierarchical file system is that they have never had to think that way before, and they should learn to, because it will help them in many contexts besides computers. I'm all for supplementing a simple hierarchical system (like a filesystem) with metadata, but I think it's a major mistake to abandon the hierarchy and fall back on some tags and full-text searching. I'm doing a research paper this week, and I'm being reminded of just how much that sort of system sucks.
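The hard-link behavior I'd want from an email client can be sketched with ordinary files and Python's os.link (the "folder" and message names here are invented for illustration; a real mail client would need this inside its own message store):

```python
import os
import tempfile

# Hypothetical message store with two "folders".
store = tempfile.mkdtemp()
os.makedirs(os.path.join(store, "inbox"))
os.makedirs(os.path.join(store, "receipts"))

# One message, filed in both folders without copying it.
original = os.path.join(store, "inbox", "msg-001.eml")
with open(original, "w") as f:
    f.write("Subject: Invoice\n\nPlease find attached...\n")
os.link(original, os.path.join(store, "receipts", "msg-001.eml"))

# Removing it from one folder does not delete it from the other,
# unlike a symlink/alias/shortcut, where deleting the target
# would leave a dangling pointer behind.
os.remove(original)
with open(os.path.join(store, "receipts", "msg-001.eml")) as f:
    print(f.read().splitlines()[0])  # → Subject: Invoice
```

The data (the message body) lives once on disk; only the directory entries are duplicated, which is exactly the "file in two folders without two copies" property.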
Speaking of systems that suck, I've been mystified by the "semantic web" crowd for a while. Clay Shirky wrote an interesting article about this. He makes a good point: syllogisms require true/false statements, and since the world is fuzzy, syllogisms are not very good at describing it. A larger problem is that we already have some wimpy structured data in the form of the now-dead META KEYWORDS tag, and I see no reason to believe that porn sites and other search hijackers won't do everything in their power to subvert any new technology, just as they did with that tag. Metadata is great if you can trust it, but on the internet you can't trust all web site authors to behave themselves. Here's a syllogism for you: the Semantic Web will work on systems where all content authors can be trusted; not all authors on the Internet can be trusted; therefore, the Semantic Web will not work on the Internet. Cory Doctorow's Metacrap goes into more detail on why this meta-utopia won't work.
There's an interesting article about the Semantic Web that was written by Tim Berners-Lee, among others. The example that they give of a semantic web in action is unfortunately not very novel. It's similar to all of the other pie-in-the-sky scenarios put forth by anyone promoting a distributed programming environment, who wants you to believe that as soon as you adopt this technology, all computers everywhere will cooperate and make your life oh so easy. The parts that they gloss over are the part where hundreds or thousands of organizations get together and agree on the exact semantics of zillions of interactions between computers (instead of inventing a ton of proprietary ones and then patenting them), and the part where all of these computers are set up with appropriate trust relationships, to the point where people are willing to trust the results of the computer-to-computer communications. The real world is not set up like this; every party in a transaction seeks its own advantage, and some take a shorter- or longer-term view than the rest. Big players will intentionally decline to interoperate with smaller players, so that it's more expensive for software authors to also support the smaller players' interfaces. They may use patents to further complicate things. Less ethical companies may advertise an artificially low price when they know they're being watched by a comparison-shopping agent, and then show the real price when the time comes to complete the transaction ("Sorry, prices are subject to change without notice!"). Getting these kinds of multi-party, ad-hoc transactions to work is extremely hard because it's not a technical problem; it's a business problem that requires people either to give away a potential for profit (by intentionally removing barriers to entry or switching costs) or to find a new way to make money from such transactions.
Still, there is good work happening in the area of standardizing computer-to-computer transaction semantics. WSDL describes the interface that a given service provides, and UDDI provides a central registry so that you can find computers that provide the service that you need. It's kind of icky if you try and read the specs (as is the case with a lot of internet standards published by OASIS and the W3C) but maybe this example from Google will make more sense.
On the topic of XML-based "smarter web" standards that don't really seem very useful to me, I still don't understand why anybody is excited about RSS. I currently have an XML-and-XSLT-based web site publishing system that I cobbled together on my own, which is used to publish the news part of this web site. Yeah, I know there are blogging applications out there, but they're either way too complicated for my needs or proprietary, and in most cases they insist on dynamically generating web pages for every page view even though mine change about once a week or less. I wrote an XSLT stylesheet (126 lines long, including HTML, or 42 lines if you only count lines containing XSLT tags), a DTD (224 lines, ugh), and three one-line shell scripts (validate.sh, transform.sh, and publish.sh), and it works. I might turn the roll-your-own XML document format that I created for it into an RSS-compliant format, and thus make the XSLT stylesheet a poor man's RSS publishing tool. That would be a slight improvement, since I wouldn't have to have my own DTD. But I still don't see what the big deal is. I suppose that if the goal is just to provide a naked, standardized, XML-based format for blog-style news, well, it does that, and it seems to do it well. I just don't understand at the moment why such a thing is so desirable. Yes, I've tried various RSS aggregators, but they're not very interesting, and they inevitably lead me back to a web page containing the whole story each time I decide a headline looks interesting. How is that an improvement over a "daily reads" bookmark page that takes you to the "today's news" page of a bunch of sites? The ability to combine all those headlines into a single window just doesn't seem like a killer feature to me. Am I missing something here?
Are people such news junkies that they're hammering on all their news pages and blogs 100 times a day, and it really matters whether they're getting a stripped-down version with just the content, as opposed to the whole home page with images and ads and HTML layout junk?
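For what it's worth, the roll-your-own-to-RSS conversion mentioned above really would be mostly mechanical. Here's a minimal sketch of emitting an RSS 2.0 feed using only the standard library (the item data structure, titles, and example.org URLs are invented for illustration, not this site's actual format):

```python
import xml.etree.ElementTree as ET

# Hypothetical news items as they might come out of a home-grown format.
items = [
    {"title": "Budget update -- May 2004", "link": "http://example.org/news/2004-05"},
    {"title": "Troy review", "link": "http://example.org/news/troy"},
]

# RSS 2.0 requires a <channel> with title, link, and description.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Site news"
ET.SubElement(channel, "link").text = "http://example.org/"
ET.SubElement(channel, "description").text = "Hand-rolled news feed"

for item in items:
    el = ET.SubElement(channel, "item")
    ET.SubElement(el, "title").text = item["title"]
    ET.SubElement(el, "link").text = item["link"]

feed = ET.tostring(rss, encoding="unicode")
print(feed[:60])
```

The upside of targeting RSS instead of a homemade DTD is exactly the one noted above: the format, and therefore the validation, is somebody else's problem.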
In general I think that some people have been misled by one particular "selling point" of XML that's inaccurate but gets repeated over and over: that XML makes a document machine-readable (whatever that's supposed to mean). Obviously a document on a computer is machine-readable at a primitive level; what a machine can't do is understand what it means. What XML actually does is make it possible to specify a structured document format in such a way that a program can verify that a given document meets the format, without that program being entwined with code that knows what to do with the document's contents, and without the verifier being hard-coded to one particular document format. HTML, by comparison, requires very sophisticated parsers, partly because the syntax has all sorts of exceptions (like the <br> tag), and partly because parsers are trapped in a downward spiral of ever-more-broken documents. Since browsers historically tolerated invalid HTML, people wrote lots of invalid HTML, saw that it looked OK in their favorite browser, and published it. That in turn forced future parser and browser programmers to keep tolerating invalid HTML, and on and on. XHTML fixes that by defining a format based on XML, which can be verified by a fairly simple parser. People still write crappy invalid HTML, so browser writers still have to keep supporting it, but at least going forward the problem can be reduced. Using XML for other document formats makes it possible to agree on a format and verify documents against it without performing a test-run in every application that might read them: if the document is valid but the application breaks, the application is wrong. That is a very different benefit from making the application somehow intelligent enough to know what the document actually means. The application just knows that the document is valid as far as the specification is concerned, and it doesn't need any format-specific verification code built in to know that; it uses a standard XML parser plus the DTD for that document format, and the parser does the ugly low-level validation work.
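That division of labor shows up in miniature with any generic XML parser: it rejects a broken document without knowing anything about what the elements mean. (A caveat on this sketch: Python's standard xml.etree checks only well-formedness, not validity against a DTD; full DTD validation needs a validating parser such as xmllint or lxml. The element names below are made up.)

```python
import xml.etree.ElementTree as ET

good = "<news><item><title>Hello</title></item></news>"
bad = "<news><item><title>Hello</item></news>"  # mismatched </title>

# The parser accepts the first document even though it has no idea
# what <news> or <item> mean; the check is purely structural.
ET.fromstring(good)

# The second document is rejected by the same generic parser, with
# no format-specific code involved.
try:
    ET.fromstring(bad)
except ET.ParseError as e:
    print("rejected:", e)
```

The application-level question of what a valid <item> actually means is left entirely to the code that consumes the parsed tree, which is the point.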
Here's an interesting story: Their beliefs are bonkers, but they are at the heart of power. Well put. Our President is a fundie, but I've never really thought about the fact that dire predictions about the future of the environment, the deficit, and a world full of people who hate you don't really matter if you think that the world is going to end in a few years anyway.
Dokaka is hilarious. He needs more bandwidth, though. His intonation is a bit off, and I don't think he really knows the words to all those songs, but that just makes it funnier.
I'm also very impressed with the progress that has been made with the Demotivators since I last saw them. I didn't realize they were working so hard turning out even more celebrations of despair.
I got a really funny spam the other day. It was an ad for anti-spam software. Well, I guess that's target marketing for ya.
Fiid alerted me to the existence of Bookmarklets, which utterly rule. Try using the "Zap" bookmarklet from this page on an ugly, practically illegible page such as this one. (It's not my fault that the content of that page is so sucky. At least after zapping the stylesheet you have a hope of reading it.)
I wish I had a bookmarklet that would let me eliminate the awful new Accenture ad campaign (with the positively cringeworthy tag line "Go on. Be a Tiger.") from all the magazines that it keeps popping up in. They must have spent a lot of money on this, from the concept to the endorsement to the ad placement, and it's so bad it makes me feel sorry for all of the people who were forced to help roll it out, because they must have known how awful it was.
My high-school friend Allen Slonaker was featured in an article entitled The New Face of the ABC: Alcoholic Beverage Control special agents are now more FBI than FDA. I'm not sure but I think that's his face on the cover (under the camo, and no I don't know why an urban ABC officer is wearing forest camo).