[DiegoDoval AnswerMe] (It seems that my previous comment to this effect was removed without a reply.) In the current spec (0.6), it seems that in the use-case of re-posting an entry after an initial post (an entry that you created locally), the tool will have to:
-
do a get-entry (even though the data was already local, since it was posted originally locally),
-
then 'change' the values on the data obtained through the get (possibly deleting the original local content),
-
then do a re-post of that.
Is this interpretation of the process correct?
-
[JoeGregorio] Yes, that is correct.
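The three-step cycle described above can be sketched as follows, using an in-memory dict to stand in for the server's store (the function names and entry fields are illustrative, not defined by the spec):

```python
# Hypothetical sketch of the GET -> modify -> PUT edit cycle under
# discussion.  A dict stands in for the server's entry store; the
# function names and entry fields are illustrative, not from the API.

server_store = {"/entries/1": {"title": "First post", "content": "original text"}}

def get_entry(uri):
    """Step 1: fetch the server's representation of the entry."""
    return dict(server_store[uri])

def put_entry(uri, entry):
    """Step 3: re-post (PUT) the modified entry, overwriting the old one."""
    server_store[uri] = entry

# Step 2: change values on the data obtained through the GET -- note that
# the client edits the *server's* copy, possibly discarding its own.
entry = get_entry("/entries/1")
entry["content"] = "revised text"
put_entry("/entries/1", entry)
```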
-
[DiegoDoval] Ok. Isn't that process more complex than it should be? Consider that:
-
Reading the value from the server implies server-side filtering, or in other words, that the server will return its own interpretation of what was originally posted. That means that, potentially, the feed is a "dumbed down" version of what was posted (for example, if excerpts are used).
-
[JoeGregorio] The content returned via the Editing API is orthogonal to the content returned in the feed. This is a non-issue.
-
[DiegoDoval] Since the server returns whatever it wants, it's not necessarily orthogonal. The API does not specify that a GET should return exactly the same information passed on creation. The API does not bind the API provider to provide anything in particular. Additionally, realistically speaking, since the same format is used for syndication as for the API (one of the goals of Echo/Atom/etc) I expect that a lot of implementations will simply reuse the feed-creation code for API-return calls. Or would it be expected that they would re-implement that?
-
An entry stored on the client side includes metadata that is not sent to the server. If I have to re-read data from the server, the newly read data must be "re-wrapped" with that metadata, and the old entry on the client side has to be deleted somehow.
-
[JoeGregorio] That is one processing model. An alternate model could assume all the state is stored on the server. In either case, state has to be merged at some point. You are just changing the point at which the merge has to occur. Now what if two people are trying to edit the same post at the same time?
-
[DiegoDoval] If two people are trying to edit the same post at the same time, you have concurrency issues that go way beyond PUT/GET. The API makes no mention of concurrency, and full concurrency management would require server-based locking in some form, or rejecting posts that overlap each other, which would work in either case.
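One way to picture "rejecting posts that overlap each other" is a version token the server checks on each write. The API defines no such mechanism; this is a hedged sketch of how a server could detect the overlap, with all names invented for illustration:

```python
# Hypothetical sketch: the server tags each entry with a version number
# and rejects a PUT carrying a stale version.  Not part of the API spec.

class Conflict(Exception):
    pass

store = {"/entries/1": {"version": 1, "content": "original"}}

def get_entry(uri):
    return dict(store[uri])

def put_entry(uri, entry):
    # Reject the write if someone else changed the entry since it was read.
    if entry["version"] != store[uri]["version"]:
        raise Conflict("entry was modified by someone else")
    entry["version"] += 1
    store[uri] = entry

a = get_entry("/entries/1")     # editor A reads version 1
b = get_entry("/entries/1")     # editor B also reads version 1
a["content"] = "A's edit"
put_entry("/entries/1", a)      # succeeds, version becomes 2
b["content"] = "B's edit"
second_rejected = False
try:
    put_entry("/entries/1", b)  # carries stale version 1
except Conflict:
    second_rejected = True      # overlapping edit is refused
```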
-
The data exists, both "physically" and "conceptually" where it first originated, in this case, in the client, and this process of get-before-repost implies that the data ownership is being passed to the server. This might seem acceptable in a client-server environment, but I have in mind peer-to-peer implementations where this would not be reasonable at all. In fact, the peers might not even store all the data originally posted, and doing a get before the repost would actually mean that data is lost (unless a completely new entry is created, but that would negate the usefulness of the repost).
-
[JoeGregorio] This doesn't make a lot of sense to me, a publishing system that doesn't publish all the information? Could you add more details/examples here?
-
[DiegoDoval] Yes. For example: I have an application that lets you publish a single entry to multiple weblogs. One weblog is my own personal weblog, the other one is a group weblog that contains abstracts. The group weblog is configured to avoid storing full entries and simply store an excerpt and a pointer to the main entry (which is published first). Now, if I change something on the entry (a common occurrence) you can see the complexity of doing multiple GETs then modifying each content, then doing multiple PUTs (in fact, depending on how the extract is done on the group weblog, this might be impossible to do at all), while simply re-posting directly would have none of those problems.
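The multi-weblog scenario above can be sketched as a client that owns the canonical entry and simply re-posts it everywhere on each edit, deriving the group weblog's excerpt locally, with no GETs at all. This is a sketch under the assumptions in the example; the stores, the `repost` function, and the excerpt rule are all illustrative:

```python
# Hypothetical sketch: the client owns the full entry and pushes it to
# every weblog on each edit.  Dicts stand in for the two weblogs; the
# excerpt rule (truncate to 20 chars) is an arbitrary illustration.

def excerpt(text, limit=20):
    return text[:limit] + "..." if len(text) > limit else text

personal_weblog = {}   # stores full entries
group_weblog = {}      # stores only an excerpt plus a pointer

def repost(entry_id, full_text, permalink):
    """Re-post the client's canonical copy to both weblogs; no GET needed."""
    personal_weblog[entry_id] = full_text
    group_weblog[entry_id] = {"excerpt": excerpt(full_text), "link": permalink}

repost("e1", "A long entry that the group weblog will truncate.",
       "http://example.org/e1")
# The author later edits the entry; the client just re-posts everywhere.
repost("e1", "A revised long entry, again truncated downstream.",
       "http://example.org/e1")
```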
-
[KenMacLeod] Doesn't this just shift the burden of merging representations to the server (receiver)? That if they already have the resource state, and you PUT (without first GETting), then the server has to pick out what changed and merge it into what it has?
[DiegoDoval] No, because the server simply has to overwrite whatever it had before. If the owner of the content is the client, then the server simply wipes out the previous entry and "renders" the new one into its DB and into its feeds/pages (which is what happens when you edit an entry on the server side directly anyway). If the server is doing version management, it would still not be affected, since version management systems are explicitly designed for merging content. 'Furthermore', shifting the burden onto the client will really complicate a number of applications. Example: direct phone-blogging. Phone applications should really be small and to the point, and should minimize transfer. The server is already doing content management (if required; as noted before, current applications don't "merge" anything). I can't see a reason why the client should have to shoulder more of the load, in terms of transfer, processing, complexity, and so on, when the server is already doing it.
-
[JoeGregorio] "version management systems are explicitly designed for merging content". I don't feel particularly comfortable forcing that kind of processing model on weblogs. Also, as pointed out above, this type of processing makes trying to catch concurrent edits impossible. It's not really easy with the GET/PUT model, but at least you have a chance.
[DiegoDoval] I said 'if' the server is doing version management. In most cases, the server will simply overwrite the previous entry, and that will be it.
[JoeGregorio] This is a lot of discussion arguing over a single GET. I will be moving this discussion to EchoApiContentOwnership.
[DiegoDoval] Ok. But the discussion is, IMO, not over a "single GET" but over the issue of content ownership, and of who has to bear the burden of dealing with an update, and minimizing network transfer.
[DiegoDoval] I've been waiting for more replies or discussion on this, and it doesn't seem to be happening. Should I assume that the current (and, as far as I can see, inappropriate) re-posting sequence will remain as-is?