I enjoy using Twitter for several reasons, not the least of which is that it is amusing and very lightweight. I've had an active account for over two years now, and I can count on the fingers of one hand the number of times that it has actually come in handy. I in no way consider it to be a useful communications tool - less so even than traditional instant messengers, which in turn are less useful than IRC.
Twitter's format is very well defined: it is strict UTF-8 text, 140 or fewer characters, with an associated timestamp, user id (and associated user information, interestingly), source program, "favourited" status bit, &c.
The format makes no assumption about the environment in which it is rendered, which is part of why there are so many alternative clients available. The default Twitter web interface augments the base data format but places no artificial restrictions on the data itself.
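The fields described above can be sketched as a small data model. This is purely illustrative, assuming hypothetical field names; it is not Twitter's actual API schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative field names only; not Twitter's actual API schema.
@dataclass
class Tweet:
    text: str             # strict UTF-8, 140 characters or fewer
    created_at: datetime  # the associated timestamp
    user_id: int          # posting user (with associated user info)
    source: str           # the client program that posted it
    favourited: bool      # the "favourited" status bit

    def __post_init__(self):
        # Enforce the one hard restriction the format imposes: length.
        if len(self.text) > 140:
            raise ValueError("tweet exceeds 140 characters")
```

Note that the length check is the only validation the format itself demands; everything else is just metadata carried alongside the text.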
The Twitter service gives as well as it gets.
You imply that Twitter "discourages long, well-written pieces", but I think otherwise: firstly, it does not discourage long pieces, but rather outright forbids them; secondly, this format restriction has the opposite effect of increasing the quality of communication.
This at first may seem quite counterintuitive, but consider the mental model of Twitter as an amusement: if we accept that Twitter is a game where the goal is primarily to amuse yourself and those who follow you, it's a short leap to say that goal is best served by context sensitivity and pithy wit.
Even if we reject this mental model, the fact of the matter is that user accounts quickly get unfollowed by the people who find their tweets useless, unpleasant or otherwise objectionable. Users who do not care about the quality of their posts soon find themselves yelling into the uncaring aether...net.
Consider a very similar medium: the email subject line. It features comparable length restrictions, can be associated with very similar metadata, has a robust ecosystem of clients, and is very well defined by a long history of RFCs and mainstream use. I assert that the quality of the information present in your average email subject line is substantially lower than that in most tweets, and that this is a function of what the medium encourages rather than what it allows or enforces.
We'll return to this point later (see "Laying Blame for Future Crimes" below).
I'd like to borrow some insight from an article over at http://www.maetico.com/everything-and-wa
I scare-quote "very efficient" because efficiency is an indexical statement: it depends on the goal that a person has for a technique or tool (amusement, business communication, revision tracking, threaded conversation, dessert topping + floor wax, &c.), and on the value each person places on the various resources used by that tool (attention span, context switch overhead, tool dependencies, eye strain, &c.). For someone who stays logged into Facebook all day, the messaging system that service offers may be more efficient for quick written communication than email, even though email has enormous advantages in other areas.
I'm going to borrow the term "servent" from peer-to-peer terminology to denote a network node which acts as both a client and a server. As per Google's description, the protocol roughly concerns itself with..:
I have a few reservations about this model and how it is being positioned.
The Google documentation[2, 4, 5] implies but does not explicitly state that the underlying data format of a Wave document is (X)HTML. Editing HTML well is hard, way harder than it should be. If Google had only announced the creation of a pleasant tool which maintains a valid document in a valid state after every edit operation, I would still have been impressed.
I wonder: does Wave sacrifice the correctness of the document format, or its full expressive range? Either would be very bad, and it stretches credulity that it might do neither.
Versioning is difficult. Good versioning is intensely difficult. We're dealing with a protocol that assumes from the ground up that documents are meant to have deltas applied to them, and which by default accepts these deltas from all sources. Each delta generates a new version of the document.
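The delta model can be sketched in a few lines. This is my own toy illustration of the idea, not the actual Wave operation format: each delta is a (position, deletion count, inserted text) triple, and every applied delta yields a new version of the document.

```python
# Toy illustration of delta-based versioning; not the real Wave
# operational-transform format. Each delta is (position, number of
# characters to delete, text to insert at that position).

def apply_delta(doc, delta):
    pos, ndel, text = delta
    return doc[:pos] + text + doc[pos + ndel:]

def build_history(initial, deltas):
    # Every accepted delta produces a new version; nothing is ever
    # edited in place, so the full history accumulates.
    versions = [initial]
    for d in deltas:
        versions.append(apply_delta(versions[-1], d))
    return versions

history = build_history("hello", [(5, 0, " world"), (0, 5, "goodbye")])
# Each keystroke-level edit becomes another entry in `history`.
```

Even this toy version makes the cost visible: a document touched a few thousand times carries a few thousand versions, and catching up means replaying or skimming them.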
The Google demo indicated that versioning in the current client reaches down to the keystroke level. If you thought that keeping up with a newsgroup or wiki page was difficult, keeping up with Wave will be a whole new level of pain... unless one ceases to pay attention to individual edits.
This model works very well for a document-centric realtime concurrent editor (see paragraph 2 of section "Executive Summary" of ), where the goal is to coordinate the efforts of one or more people to create a final finished document. Sifting through hundreds or potentially thousands of edits to get up to speed on a lengthy conversation, though, will become an incredibly difficult task. Wave looks like it will amplify the Wikipedia accountability problem to epic proportions.
The problem with Wave's synchronization model is that an authoritative copy of the document has to exist somewhere. Right now, that somewhere is necessarily one of Google's servers. They've promised that other servers will be honoured at some point, but we're not there yet.
Google Talk was launched on 2005-08-24, but it wasn't until 2006-01-17 that they enabled open server federation. This delay applied to a well-known and widely implemented protocol, while the Wave protocol is entirely new. On the other hand, Google may have a different perspective on this release. On the gripping hand, they've got a pretty strongly vested interest in maintaining control while the ecosystem develops.
The client that Google demonstrated is an impressive web application. I don't object to it on principle, but it remains to be seen how it works in other browsers. I'm not holding out hope that it will work at all outside of Chrome, Firefox, Safari and Internet Explorer.
Since the Wave document format seems to be HTML, there's a very good chance that non-browser clients will be second class participants, if they are able to work at all. It's not fair to criticize Google for work not done by third parties, but it is perhaps more fair to criticize them for requiring this work to be done.
I really did not care for the part of the demo video where the first line of a new Wave was emboldened, as if it were an email subject line.
There is a world of difference between a line of text wrapped in bold markup and an actual, machine-readable Subject: header: the former is mere presentation, while the latter is structured metadata that any client can extract and act on.
I really don't want to have to fire up a graphical browser application just to be able to exchange structured, threaded text with other users. I really don't see Google pouring development hours into ensuring that elinks, lynx, w3m, emacs and others aren't left out in the cold.
I like the idea of robots that interact with Waves as first-class clients. I tend to believe that this kind of augmented interaction should happen during the editing of a document rather than during its publication, but then the entire Wave philosophy requires that these two phases no longer be treated or thought of differently.
A spell checker that takes into account grammatical structure, with some degree of reliability, and that draws on the enormous corpus of structured text and language research that Google can bring to bear on the problem? This is incredibly exciting all on its own. Do want.
These look like trivial shortcuts to add URLs and prevent having to use other browser tabs. Frankly, I don't care about them in the slightest, except that they encourage blind copying and pasting of links. At least they provide an alternative to blindly copying and pasting content.
The chess example was particularly cool because it perfectly matches the protocol model of a [blank] shared state and a series of tiny transactional updates.
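A toy sketch of why chess is such a good fit: a small shared state mutated by tiny, validated transactions. This is my own illustration in Python; Wave gadgets are not actually written this way.

```python
# Toy illustration: a small shared state updated by tiny, validated
# transactions, in the spirit of the chess gadget. Not Wave's actual
# gadget API.

def apply_move(board, move):
    src, dst = move
    if board.get(src) is None:
        # Reject the transaction; the shared state is left unchanged.
        raise ValueError("no piece on source square")
    new_board = dict(board)
    new_board[dst] = new_board.pop(src)
    return new_board  # each accepted move yields a new shared state

board = {"e2": "wP", "e7": "bP"}
board = apply_move(board, ("e2", "e4"))
```

The whole game state fits in a handful of bytes, and each update is self-contained and trivially checkable, which is exactly the shape of data the delta model handles best.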
My main objections to Google Wave are not about what it provides, but about how it is being positioned. The Rasmussens et al. have created a fantastic collaborative concurrent rich text editor, and they have a right to be proud of it; but Wave is being heralded by the press, traditional and otherwise[6, 7, 8, 9], as a replacement for..:
It's a concurrent document editor, and fundamentally differs from all of the above. While it's true that it may be able to replace the most basic parts of some of these tools, doing so will require throwing away the good bits of each that make them powerful and worthwhile.
My worry is that Google will seriously try to position Wave as an alternative to some of these systems, and that by dint of its user / fan base, it might to some extent succeed. I'm getting pretty tired of good systems and protocols being replaced by more popular but purely inferior ones.
Mandating bad behaviour is unquestionably bad. Merely encouraging it is less bad, but still not good. Internet users have enough bad habits to deal with without being encouraged to adopt a huge set of new ones.
I am very worried by Lars Rasmussen's description of how he believes Wave relates to email:
Later on, Lars mentions that a bidirectional email-to-wave gateway might be difficult, but possible. The problem is that email includes a set of well-structured headers and, optionally, an arbitrary number of body parts, each of which may contain highly structured information (e.g. MIME sections containing binary files), semistructured documents (e.g. HTML, Markdown, format=flowed, or other text formats that offer rendering hints without requiring them) or unstructured text; whereas Waves appear to strip away not only the requirement for this extra richness but also the ability to include it.
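The layered structure email provides can be seen directly in Python's standard library, which models exactly this: structured headers plus any number of typed body parts. The addresses and content here are made up for illustration.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# Sketch of the structure email carries: well-defined headers plus
# an arbitrary number of typed body parts. Addresses are invented.
msg = MIMEMultipart("mixed")
msg["Subject"] = "Quarterly report"   # structured header metadata
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"

msg.attach(MIMEText("Plain-text body.", "plain"))          # unstructured text
msg.attach(MIMEText("<p>Rich body.</p>", "html"))          # semistructured HTML
msg.attach(MIMEApplication(b"\x00\x01", Name="data.bin"))  # binary MIME section
```

A gateway rendering this into a Wave has to flatten three differently typed parts (and all the headers) into a single rich-text body; going the other way, there is nothing in the Wave to reconstruct them from. That is the asymmetry at issue.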
Accepting and rendering down rich structure while offering only text with primarily presentational markup makes for an unbalanced flow of information.
This was an interesting read and sort of echoes some of my concerns.
What interests me about it is the divide between what seem to be two doctrines of collaboration (explicit or not, I think they're there).
Real quick: if a computer's environment shapes how we work, and it's a two-way interaction, then we can affect how we work, and how we use our tools affects how we think about them.
The dichotomy here, I think, is: which is better? A series of specialized tools, or one that Does It All?
There are clear advantages and disadvantages to both, but at first thought I'm tempted to side with Ben's implicit choice, one that (forgive me if I misunderstand) espouses a series of specialized and restricted tools that allow some user agency in controlling workflow and keep people from breaking their tools.
Different formats, I think, allow us to mentally organize and prioritize work and fun in interesting ways. Twitter is restricted but perfect for minor, inconsequential things. It's temporal, something you usually need to be aware of in the now; I'm sure everyone has woken up to an exchange and had no idea what was going on. It's a dialogue, but an open one.
IM is atemporal but still lends itself to short form. It allows for private conversations and often results in ideas being developed before being hashed out in an e-mail or doc or what have you.
E-mail is the tool for things you need to be able to remember tomorrow. It's well suited to long term stuff and also pretty much universally used so you don't need to worry about people who haven't bought in at this point.
I could go on about Skype and whatever else, but I think the point is made. A number of tools allows a group to work the way they want.
Unfortunately, this is at the expense of synchronicity. Bill loves Twitter and insists on bit.ly-linking every resource on it, but never answers his IM and e-mail? Forget about it. If people don't work in ways that at least mesh, then you're basically losing productivity to transaction costs (in this case, asymmetric workflows).
But there's probably something to be said about trying to shoehorn everyone into one model. Everyone might work the same, but now it might crush out the individual tricks and hacks that each person uses on a project. Tricky.
I guess what I'm trying to say is that I'm completely ambivalent and that I'd love to try it out.
On October 11th, 2009 09:39 pm (UTC), (Anonymous) commented:
Learning where the permanent bugs and workarounds are inside a phonebook-length API teaches you nothing. It is anti-knowledge. Where your mind could have instead held something lastingly useful or truly beautiful, there is now garbage.