Moving from one blog system to another can cause major headaches, especially with the problem of broken permalinks. Echo should be able to help with this - but how?
Don comments "I don't think Echo can help much except attempting to define:"
- blog path format standard/guideline
- feed redirection mechanism
- change-of-permalink data format and protocol
BillKearney - this isn't an area where echo can tread without other efforts being applied first. Moving a site from one tool to another without losing the data is one area echo WILL help with. But moving the web links is a much more complex problem. The initial system might not allow a user to alter its site after the fact (as in RadioUserland). Thus importing the data into an entirely different service won't fix the old links. Some sites, when moving, aren't moving from one DNS domain to another; they're only using a service to publish to a domain under their own control. This is an area where some bits of work could be done. But again, it's entirely dependent on the methods the tools involved use to publish the data at online URLs. Just having the data is a huge improvement. Having it automagically untangle the rat's nest of URL migrations would be nice, but getting that might be much more complicated than it would initially seem.
DannyAyers - yep, agreed. OK, there's a layer of migration problems that are pretty much out of reach - domain ownership etc. But it seems to me that even if taking on the whole job would be *hard*, we can still use the opportunity to include guidelines for e.g. blog path format (which could tie in with the URIs used for API calls), which could potentially make things a lot easier further down the path. Maybe.
[RogerBenningfield] This is just a thought off the top of my head, but... from the POV of a hosting provider, if the moving customer were willing to pay a small fee (monthly/annual/one-time/whatever) for their permalinks to automatically redirect to a new destination, it could be a win for everyone involved. I get a little extra cash with minimal additional bandwidth burn, and they get to keep their investment in their links. Making it work might even be relatively simple... all I need from the new hosting provider is a feed where each <item> contains the new permalink of an entry and the old permalink (or unique URI) so I can relate the latter to the former on my end.
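The migration feed Roger describes could be consumed mechanically on the old host's side. A minimal sketch, assuming a feed where each item pairs an old permalink with a new one (the element names here are illustrative, not any agreed Echo format):

```python
import xml.etree.ElementTree as ET

# Hypothetical migration feed published by the NEW hosting provider.
# Element names are made up for illustration only.
MIGRATION_FEED = """
<feed>
  <item>
    <old-permalink>http://oldhost.example/2003/06/entry1</old-permalink>
    <new-permalink>http://newhost.example/posts/entry1</new-permalink>
  </item>
  <item>
    <old-permalink>http://oldhost.example/2003/06/entry2</old-permalink>
    <new-permalink>http://newhost.example/posts/entry2</new-permalink>
  </item>
</feed>
"""

def build_redirect_map(feed_xml):
    """Relate each old permalink to its new destination."""
    root = ET.fromstring(feed_xml)
    return {
        item.findtext("old-permalink"): item.findtext("new-permalink")
        for item in root.findall("item")
    }

redirects = build_redirect_map(MIGRATION_FEED)
# The old host can now answer each request for an old URL with an
# HTTP 301 pointing at redirects[old_url].
```

The old provider only has to serve one permanent redirect per entry, which keeps the bandwidth burn minimal, as Roger suggests.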
[MartinAtkins, RefactorOk] This seems like a good reason to standardize on a layer one above that of the echo data -- the transaction of feed synchronization. With RSS feeds right now we have the situation where each client sucks down the last x entries even if they've seen all of them except the most recent. Could we have a standard way, as a separate layer on top of the echo XML format, to sync feeds while saying "I last synced with you at..." and have the feed dynamically generate the relevant data? The reason I bring this up here is that the feed could then be set to update the permalinks using the unique ID, with the assumption that the reader will notice the unique IDs match something they already have and update the data. (Now being discussed at AggregatorBehaviorRules)
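The "I last synced with you at..." idea can be sketched as a server-side filter: the client sends its last sync time and the server returns only entries modified since then. This is an assumption about how such a layer might work, not any agreed protocol; the entry structure and timestamps are invented:

```python
from datetime import datetime

# Hypothetical entry store on the server, keyed by the unique IDs
# Martin mentions, with a last-modified timestamp per entry.
ENTRIES = [
    {"id": "tag:example.org,2003:1", "modified": "2003-06-25T10:00:00"},
    {"id": "tag:example.org,2003:2", "modified": "2003-06-26T09:30:00"},
    {"id": "tag:example.org,2003:3", "modified": "2003-06-27T14:15:00"},
]

def entries_since(entries, last_synced):
    """Return only entries modified after the client's last sync time."""
    cutoff = datetime.fromisoformat(last_synced)
    return [e for e in entries
            if datetime.fromisoformat(e["modified"]) > cutoff]

fresh = entries_since(ENTRIES, "2003-06-26T00:00:00")
# Only the two entries modified after the cutoff come back; the client
# matches them against stored unique IDs and updates permalinks in place.
```

Because entries carry stable unique IDs, a permalink change just shows up as a "modified" entry, and the reader updates its stored copy rather than treating it as new.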
[DeveloperDude] I can see a tool maker supporting import, but how are we going to get vendors to support export?
-
[RogerBenningfield] By routing around us. The key is to make sure that an Atom syndication feed contains everything that is needed for import. If it can be expressed in a template, then I can't realistically stop my users from exporting their stuff. If the process depends on me to do extra work just to lose money, on the other hand, don't hold your breath. Any import/export process needs to put the technical burden squarely on the import side.
-
[MartinAtkins] Remember that most 'feeds' only contain the last n entries, so this is not really the ideal mechanism for migrating to a new weblog tool. People running their own weblog stuff will probably be able to, with a small amount of effort, write their own script to generate the Atom data. So-called hosted services will make this more difficult. LiveJournal allows users of its protocol to gradually retrieve all entries in a journal, but not pull them all down at once. A tool could be written (independently of LiveJournal's people) to use this mechanism to gradually get the entire history of entries and build an Atom feed locally. I don't know if other host-type services offer a similar ability.
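The gradual-retrieval approach Martin describes amounts to a paging loop: ask for a slice of the archive, append it, and repeat until the service returns nothing. A sketch under assumed names (fetch_page stands in for one request to a hosted service's API; the offset-based paging scheme is hypothetical, modeled loosely on services that forbid pulling everything at once):

```python
def fetch_page(offset, page_size):
    """Stand-in for one request to the hosted service's API.
    Here the 'archive' is faked locally; a real tool would make
    an HTTP request and parse the response."""
    archive = [f"entry-{n}" for n in range(23)]  # pretend full history
    return archive[offset:offset + page_size]

def fetch_all(page_size=10):
    """Accumulate the whole entry history one page at a time."""
    entries, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:  # empty page means we've reached the end
            break
        entries.extend(page)
        offset += len(page)
    return entries

history = fetch_all()
# 'history' now holds the complete archive and could be serialized
# locally as a full Atom feed for import into the new tool.
```

As Martin notes, such a tool could be written independently of the hosting service, as long as the service exposes some way to walk the archive.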