Summary:
Abam et al. discussed an algorithm to approximate a drawn stroke in real time. With a possibly unbounded number of points in a stroke and a limited amount of memory, simplifying the stroke as it's being drawn saves the computer storage space.
The algorithm runs in O(1) time for three specific cases: when using the Hausdorff distance, if the path is convex (i.e., every angle on the path is less than or equal to 180 degrees) or if the path is xy-monotone (any horizontal or vertical line intersects the path at most once); and when using the Frechet distance, for general paths. Abam et al. never defined the Hausdorff distance, but the Frechet distance between two paths is, roughly, the smallest possible maximum separation between two walkers traversing the paths from start to end without backtracking (I think...)
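For what it's worth, here is a minimal sketch (my own toy code, not from the paper) of the discrete Frechet distance between two point sequences. The paper works with the continuous Frechet distance, but the discrete version captures the same intuition: two walkers move forward along their paths without backtracking, and the distance is the shortest "leash" that lets both reach the end.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between point sequences P and Q (toy version)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0 and j == 0:
            return dist(P[0], Q[0])
        if i == 0:                      # only the walker on Q can have moved
            return max(d(0, j - 1), dist(P[0], Q[j]))
        if j == 0:                      # only the walker on P can have moved
            return max(d(i - 1, 0), dist(P[i], Q[0]))
        # One walker (or both) took the last step; pick the cheapest history.
        return max(min(d(i - 1, j), d(i, j - 1), d(i - 1, j - 1)),
                   dist(P[i], Q[j]))

    return d(len(P) - 1, len(Q) - 1)

# A straight stroke vs. the same stroke with a bump: distance is the bump height.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 0), (1, 1), (2, 0)]))  # 1.0
```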
The algorithm to simplify a stroke takes the new point to add to the simplification and computes an error for the new addition. The point with the minimum error is then removed from the simplification, and the errors for the remaining segments are recalculated.
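Here is roughly how I picture that loop, as a sketch of my own reading rather than the authors' actual code. The segment_error function below is just a stand-in for whichever error oracle (Hausdorff or Frechet) the paper actually plugs in, and the paper keeps the errors in a priority queue and only updates the removed point's neighbors, instead of recomputing everything the way this toy does.

```python
import math

def segment_error(a, b, c):
    """Stand-in error oracle: perpendicular distance from b to the line through a and c."""
    ax, ay = a; bx, by = b; cx, cy = c
    dx, dy = cx - ax, cy - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0:
        return math.hypot(bx - ax, by - ay)
    return abs(dx * (ay - by) - dy * (ax - bx)) / seg_len

def simplify_stream(points, budget):
    """Keep at most `budget` points; once full, drop the cheapest interior point."""
    kept = []                      # the current simplification, in drawing order
    for p in points:
        kept.append(p)
        if len(kept) > budget:
            # Error of an interior point = cost of removing it, i.e. of merging
            # its two incident segments into one.
            _, cheapest = min((segment_error(kept[i - 1], kept[i], kept[i + 1]), i)
                              for i in range(1, len(kept) - 1))
            del kept[cheapest]     # merge its two incident segments
    return kept

# Example: a 50-point noisy diagonal stroke stored with at most 6 points.
stroke = [(t, t + (0.1 if t % 2 else -0.1)) for t in range(50)]
print(simplify_stream(stroke, budget=6))
```

Memory stays bounded by the budget no matter how long the stroke gets, which, as far as I can tell, is where the storage savings come from.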
Discussion:
I don't really understand how this algorithm works. If the algorithm keeps removing points from the simplification, shouldn't the number of points always be constant? But then how do we build up points to have a simplification in the first place? I need to reread the paper a few more times before I understand everything.
Also, on a personal note, I tend to dislike papers that use words like "simply" and "clearly." Especially when I don't think the paper is clear.
2 comments:
They did seem to gloss over some key areas in this paper without full explanation, and given the density of the material, this is troublesome for the average reader.
I think it is just the Hausdorff approximation for convex paths that gets processed in O(1) time. For xy-monotone paths using Hausdorff and for general paths using Frechet, the error computation takes O(k/sqrt(epsilon)) amortized time.
They say (Theorem 2.1 and Theorem 3.3) that to 'process each point' on arrival takes O(log k) time, in addition to the error time above.
As far as the constant number of points goes:
Assuming you want an approximation of length L, they allow that many points to accrue in the priority queue before removing any, and only then begin removing points to maintain L points in the queue. So, yes, in a way, because they are seeking a constant-size approximation.
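To make that concrete (again just a toy of my own, assuming the simplify_stream sketch from the post above rather than the paper's actual data structure): if you stream points in one at a time and print the size of what's kept, it grows 1, 2, ..., L and then stays pinned at L, since every arrival past that point triggers exactly one removal.

```python
L = 5
stroke = [(t, 0.01 * t * t) for t in range(12)]   # any stroke will do
kept = []
for n, p in enumerate(stroke, start=1):
    kept.append(p)
    if len(kept) > L:
        kept.pop(1)               # stand-in for "remove the min-error point"
    print(n, len(kept))           # 1 1, 2 2, ... 5 5, 6 5, 7 5, ...
```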
That said... I'm not sure everything I said is right, because it wasn't the easiest read.
This was the most difficult paper the class has come across. Since I had to present it, I had to know it too. It turned out that the concept wasn't that hard, but the paper simply wasn't well written to begin with, IMO. Abam's other papers are read in the computational geometry class, and the students there had the same complaints about the writing style.
Okay, now to the paper. Even though the paper claims to have uses that would be beneficial to subjects related to sketch recognition, it clearly had a Geographic Information Systems audience in mind. I'm glad Dr. Hammond was able to explain its uses to us in the context of scribbling, which is an application where I can see Abam's technique having potential. But I'm not sure how beneficial this technique is in storage savings, given the computation required for line simplification of streaming points...