[ClayShirky RefactorOk] What _is_ the motivation? Is it just to save RSS from the Personality Wars? If so, would the ideal solution be to take some flavor of 0.9x and call it PIE 1.0, and then start working from there?

Right now, the conversation looks muddled, because a lot of questions that were asked and answered in the development of RSS itself (it should be 7 bit; it should be represented in XML; _required_ metadata should be kept at a minimum; it should not try to be an input to the SemanticWeb) are coming up again, to no good effect, imo. If the goal is to get something that works like RSS, but is richer, and more extensible, and defended in advance from the new type of standards war of which RSS seems to be an early harbinger, then that probably ought to go in a charter statement somewhere (and the HP would be a better place than here, I think), in order to keep it from becoming a pile-on.

Whenever you see people proposing to base something on OWL, you know the Ted Nelson quotes are not far behind.

[SamRuby RefactorOk] Clay, the answer to the question you are looking for can be found on the top of the RoadMap. The first place I saw this articulated was [WWW]here, and expanded upon [WWW]here. Could these be solved based on an existing RSS? I imagine so (personally, I would start with 2.0 instead of 0.9x). But the impetus for me creating this wiki can be found [WWW]here, specifically, the desire for a bit of [WWW]forward motion. One possible use of this analysis would be to produce a proper usage profile of RSS.

[DannyAyers] Clay, a lot of what Ted Nelson imagined has come to pass (in a modified form!) as the web. OWL and the related SemanticWeb efforts are about making web data richer, more extensible, and more useful because it's easier to machine-process. Current Web Services already go a long way in this direction, there's absolutely nothing to stop them going further. This isn't pipe dream stuff, people are already doing it. It doesn't negate Echo's aim of simplicity to design it so that it will work well with Semantic Web Services. See ExtraInterop.

[ClayShirky RefactorOk] Almost nothing of what Nelson imagined has come to pass. He was wrong about transclusion, wrong about stateful conversations, wrong about how to handle unique IDs, and almost everything that Berners-Lee did right -- limited semantics in the application protocol; made it stateless; and created the 404 error, living embodiment of [WWW]worse is better and savior of scale -- violated Nelson's bankrupt vision. Xanadu was the wrong answer, [WWW]REST (Representational State Transfer) is the right one. If you thought that all that was wrong with RSS 1.0 is that RDF didn't make it confusing enough, attach the twin boat-anchors of OWL and the Semantic Web to Echo and see what happens.

[KevinMarks] The key difference between Nelson's vision and Berners-Lee's implementation was the lack of two-way links in the latter. Because a page did not have to automatically link back to all those that linked to it, it was possible for a true power-law link distribution to form, giving us the globally interlinked web we have now, rather than the small clusters of locally linked pages that we would have if the constraint of linking back applied. Any TrackBack-like features need to consider this point carefully.

[DannyAyers] Nelson talked about having "everything" linked together using hypertext. In most important senses that's just what we're getting with the web. I'm sure he'd disagree (strongly) but things like transclusion IDs etc are merely implementation details (I agree with your points about Berners-Lee's approach being right, btw). All that was right with RSS 1.0 was that it was RDF!

Anyhow, I don't understand how in any sense you can see OWL and the SW as boat-anchors; it's like saying that Web Services are boat-anchors for XML. The SW technologies allow the creation of applications that should be able to do a lot more with the data generated by Echo feeds, and there's potential for generation of new kinds of feed content there too.

I agree absolutely re. visions and vaporware. Material from RSS 2.0 feeds can (fairly easily) be read and then manipulated by RDF(/OWL)-based apps, and even republished as RSS 2.0 feeds again if that's what's required. Probably the biggest problems with RSS 2.0 (and other dialects) in this context are inconsistency, and that the content isn't always clearly separated from the rest of the (meta)data. If Echo is done cleanly, then it could make the use of syndication with SemanticWeb technologies quite a lot easier, at absolutely no cost to the people who only want an electronic version of their diaries/newspapers. But I think it's silly to ignore the visions at a time when the technologies have caught up enough to make many of them a reality.
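To make the point above concrete, here is a minimal sketch of reading an RSS 2.0 item and re-expressing its properties as subject/predicate/object triples, roughly the shape an RDF(/OWL)-based app would consume. The feed text, function name, and the choice of the item's link as the subject URI are all illustrative assumptions, not anything from the RSS or RDF specs.

```python
# Sketch (hypothetical example): flattening RSS 2.0 items into
# RDF-style triples using only the standard library.
import xml.etree.ElementTree as ET

RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Weblog</title>
    <item>
      <title>First post</title>
      <link>http://example.org/2003/first</link>
      <description>Hello, syndication.</description>
    </item>
  </channel>
</rss>"""

def items_as_triples(rss_text):
    """Yield (subject, predicate, object) tuples, one per item property,
    using each item's <link> text as the subject URI."""
    root = ET.fromstring(rss_text)
    for item in root.iter("item"):
        subject = item.findtext("link")
        for child in item:
            if child.tag != "link" and child.text:
                yield (subject, child.tag, child.text)

triples = list(items_as_triples(RSS_SAMPLE))
# e.g. ('http://example.org/2003/first', 'title', 'First post')
```

A real application would map the bare tag names onto proper vocabulary URIs (Dublin Core, etc.), but the round trip itself is as mechanical as this sketch suggests.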

[JoeGregorio] Clay, you say in part: "a lot of questions that were asked and answered in the development of RSS itself are coming up again, to no good effect, imo." But in this Wiki they are coming up again, being discussed, with the discussion archived for posterity, and a decision will be made on each of the areas, with the answers codified in the Echo specification. So while the arguments are old, the outcome will be new.

[JoshJacobs] I'm curious as to whether the motivation for this effort is intended to make authoring easier, or consumption more powerful. The motivation above is very tool/technology centric. I'm curious what the goals are from more of a user standpoint. Will success be measured by adoption of the standard, or by specific user communities that will be empowered by adoption? When the motivation says "Because we want to make it easier to build weblog tools", does this include aggregators?

[GrantCarpenter, RefactorOk] Josh, I think the simple answer to your question is 'yes and yes'. The end output may be more complex for authoring an Echo item (in comparison, say, to a putatively valid RSS 0.91 item). I believe, however, that having one true standard that is well adopted and unfailingly specific about how authors should implement it also has a simplifying effect, making authoring easier in practice. So yes, widespread adoption of the format by tool developers on both ends of the wire would seem to be an important goal, but I don't know that what's being proposed raises the barriers to entry to the extent that it's orders of magnitude more complex than RSS .91/2. On the empowerment side, that's definitely a primary objective as I read things, but I think the motivation is for a simple, concrete core that works well at a baseline and with extension (wherein lies the power).

I don't see making authoring easier and consumption more powerful as mutually exclusive. In essence, how much harder is authoring something like [EchoExample] vs. RSS 0.9x? Somewhat harder, I'll grant you. Harder than RSS 2.0? Hmmm. Too close to call when I consider the effort involved in divining precisely what RSS 2.0 should consist of. Harder than 1.0? I'd say no. So we're really talking about complexity vs. 0.91. It's there, but at the same time I think there are enough voices in the community saying 0.91 isn't rich enough (moreover, if it were, where did the motivation for 1.0 and 2.0 come from?) that some additional complexity isn't a dealbreaker. At the end of the day, if the major tool vendors are supporting the format, it will see adoption--and given that most tools can handle most of the flavors of RSS, I don't know that this is that daunting a hurdle we're talking about.

[JoshJacobs RefactorOk] Grant, no argument from me that this spec is not dramatically harder to generate feeds for. And I think that the basic notion of what a post is and isn't is fairly easy to write content for. My concern is the tradeoff of a smaller core (which primarily benefits blogging platform vendors) vs. a larger required core, which I think would enable much more powerful end-user/content-reader applications. It seems to me that there is a set of people (medium sized) who write content, a set of people who build tools to manage/host that content (a very small number), and a set of people who want to consume the information (a huge group). The question I have is: why not put a larger burden on the folks in the middle in order to allow for richer capabilities in the largest community, content readers? I understand how developers benefit from a small core; I don't get how this is advantageous to users of the technology (with the obvious caveat that if compliance is too onerous and no tool supports Echo, users get nothing).

[GrantCarpenter, RefactorOk] After reading your comments on Sam's site (as well as Sam's), I think I understand your concern. My presumption is that there's still a good bit of distance to travel before we can say 'okay, this is what the core would include'. The fundamental question of what extensions to require over and above the existing cores is going to come up, and I'm interested to see where that goes. I agree that your basic premise does the most good for the most people; at the same time, if you can't get that small group of vendors controlling the majority of the tools space to buy into full adoption of a large core, it's all moot. People are linking up on the [RoadMap] page and that's totally heartening--but how much of that is frustration over the politics of RSS and the lack of good standards before the unifying standard is even fleshed out? (Hopefully this is just something Chicken Little would worry about.) If there is solid buy-in, however, a solid, extensive Echo core loses a lot of its downside. I'm just not personally sure how deep the rabbit hole can go--at some point there's probably a chicken-vs.-egg threshold where, if the core is too extensive, it's too much work for a nascent standard, vs. too little value to be a real long-term solution. So I'll agree to agree.

[ChrisWilper, RefactorOk] Clay says: "He was wrong about transclusion". He was only wrong about what it would be called and how it would be implemented. Ever used an img tag and had the referenced content displayed inline?

[DannyAyers RefactorOk] I don't think compatibility will be a problem in the near future because of the layered architecture approach that is now being taken by TimBL and co (a largely evolved architecture, btw). This gives a lot of flexibility, so for example a transclusion layer could be implemented on top of a static http+html layer.

[IanDavis] I've grown to realise over the past couple of years that the so-called RSS politics aren't about RSS at all. They're not even about XML vs RDF. The argument wasn't over the name. The issues were, and still are, user control and the commercial advantage such control brings. If a business's strategy relies on the exponential growth of lightweight syndication, then it would have to fight any loss of control tooth and nail. Somebody needs to convince me that this project will sidestep these control issues. I can't see how that can happen at the moment. Please show me that I'm shortsighted.

[MarkHershberger, RefactorOk] Ian, totally agreed. The soap opera that is RSS isn't about benefit, XML, RDF, etc. It's about control. Echo will (hopefully) sidestep this by being the cooperative effort of developers (and users) of the various consumers and producers. No one will be able to claim authorship or ownership because it will be obvious that this is a community effort.

[ThiemoMaettig] Don't you think your Motivation is highly inconsistent? It says: "If you want to build a weblog tool [...], you need to know a variety of different formats [...]: RSS 0.9, RSS 0.91, RSS 0.92, RSS 1.0, RSS 2.0 [...]. We'd like to make this just one format [...]." In fact, you are adding another format instead of replacing anything! I'm pretty sure Atom will cause the same disorientation as every other format did before. If it ever becomes widespread, we will need to know "RSS 0.9, RSS 0.91, RSS 0.92, RSS 1.0, RSS 2.0" and Atom! What's wrong with RSS 1.0? I've read almost everything here, but I still don't understand why we can't use RSS 1.0. RSS 1.0 is able to do everything you want to do with Atom (including your absurd EscapedHtmlDiscussion). I would like to understand what's going on here, and I'm pretty sure I'm not the only one who's confused. And please, don't tell me RSS 1.0 is not very human-readable (I think I read that somewhere). The intention of RSS is to be machine-readable.

[DannyAyers] Thiemo - RSS 1.0 (with tweaks) is certainly capable of the sophistication required for the format, but the political history means that this isn't really an option. There's been a lot of FUD spread about the complexity of the RSS 1.0 syntax as well (it's no worse than RSS 2.0 with namespaces). Even if the politics weren't an issue, what RSS 1.0 doesn't cover is the PieApi, which needs bringing into the current century. In the near term it will be a case of RSS X+Y+Z+Atom, but a little further into the future it will be possible for tool developers to use Echo alone.

[ThiemoMaettig] Don't you think most people don't care about politics? I definitely don't. Don't you think most people don't care about complexity and namespaces? I don't. RSS is meant to be machine generated and machine read. Remember? Don't you think you may have missed the point? Why not use RSS 1.0 (and introduce a new [WWW]RSS 1.0 module if you really need an extension) and put a slightly better API on it (based on the BloggerApi, for example)? Advantages: any existing RSS 1.0-capable software (and there's a lot of it) plus any RDF-capable software (hell, it's even a [WWW]W3C recommendation) will keep on running. I don't want to be forced to support another format just because you don't like RSS. Do whatever you want with the EditingApi, but don't introduce a format that [WWW]still exists.

[DannyAyers] Thiemo, alas I fear it is you who is missing the point. By all means use RSS 1.0. No-one is stopping you. It isn't that people care about the politics, the politics messes up the environment in which they are working. I personally like RSS 1.0 a lot, but would rather move to a unified format than stick with it and have to support all the other junk.

I'm not a developer, just a user, but it seems to me that the experience of the larger open source community might apply here. Users don't care about politics. They only care about control to the extent that it impacts them: a closed format that everyone uses is fine, while a closed format that no one uses is useless. Users want a solution that will let them do the stuff they need/want to do, preferably without having to sign onto someone else's agenda as the price of entry. From an outside, interested-observer perspective, it seems that Linux succeeded because it was willing to be agnostic about some of the more highly charged political issues, thereby avoiding much of the Big Closed Source Company-driven fear, uncertainty, and doubt about open source.

CategoryMetadata, CategoryModel, CategoryRss