Comments on: "Application Challenges of Computational Geometry (CG)", by CG Impact Task Force (http://www.cs.princeton.edu/~chazelle/taskforce/CGreport.ps.Z)

by D. Avis (avis@cs.mcgill.ca) and K. Fukuda (fukuda@dma.epfl.ch)

SUMMARY

The task force report addresses the question of why Computational Geometry (CG) has had very little impact on practical geometric computing. There are many useful suggestions in the report, but we feel the main cause of CG's lack of impact has been overlooked. The thrust of the report seems to be: the fundamental problems have been solved, we must now do applications. As evidence of this the report cites "some truly remarkable success stories (eg. optimal algorithms for convex hull and Voronoi diagrams, linear-time solutions of LP type problems)."

Our view is: the fundamental problems have not been solved, and we should redo the theory in a realistic setting. In particular, we argue:

(i) The theoretical model used in CG is unrealistic and inappropriate, in that algorithms called "optimal" may be vastly inferior to other methods, both in practice and according to more realistic models of computation.

(ii) The distortions of the theoretical model have led the CG community to produce research which has little or no practical application.

Our recommendations are:

(i) Replace the current theoretical model by one which is realistic, and reconsider all fundamental problems under this new model. As a minimum, complexity analysis under such a model should state explicitly the dependence of constants on the dimension and, where the output size varies greatly, be output sensitive.

(ii) Rigorously enforce the scientific method: theory which does not stand up to empirical scrutiny should be revised or abandoned.

-------------------------------------------------------------------

DETAILS

The view of the task force on the current state of CG is contained on page 2 of the report:

"Any new field needs to build itself a home and a home needs foundations. On that score one may strike a celebratory note. Indeed, computational geometry has met with considerable success in resolving the asymptotic complexity of basic geometric problems (eg. convex hull, Voronoi diagrams, low dimension optimization). Most of the classical problems posed in the first decade have been solved. Many of them entailed building entirely new sub-areas in the field of algorithmic design. For example the twin use of randomization and derandomization has been the main force behind some truly remarkable success stories (eg. optimal algorithms for convex hull and Voronoi diagrams, linear time solutions of LP-type problems)."

We argue that the classical problems cited in the report as solved (and many others in the CG literature) have not been solved in any realistic way.

1. Linear-time solutions of LP-type problems

The result cited is that, in any fixed dimension, an LP may be solved in linear time, which is clearly "optimal" and a big improvement on the simplex method, which runs in (roughly) O(n^{d/2}) time. (Under most models the simplex method is an exponential time algorithm, but under the CG model it is polynomial in each fixed dimension!) The result cited dates to 1983, yet it seems to have had no impact on either the theory or practice of linear programming, which is busy refining interior point methods and trying to find a truly polynomial simplex-like algorithm. The reason is well known: a huge constant depending on the dimension is hidden.
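To give a sense of scale (our own illustration, assuming the 2^{2^d} constant of Megiddo's original deterministic algorithm; later randomized methods reduce it, but it still grows enormously with d):

    C(d) = 2^{2^d}:  C(3) = 2^8 = 256,  C(5) = 2^{32} \approx 4 x 10^9,
    C(10) = 2^{1024} \approx 10^{308}.

Even granting the crude O(n^{d/2}) figure for the simplex method, at d = 10 the "linear-time" bound C(d) n is the smaller of the two only when

    2^{1024} n <= n^5,  i.e.  n >= 2^{256} \approx 10^{77},

an input size no computer will ever see.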
The usual reply to this criticism is: the linear-time method is only intended for "low dimensions". To which we ask: for precisely which dimensions is the proposed method "optimal", and why does the theory not tell us this? Is it in fact demonstrably better (theoretically and empirically) in these dimensions?

2. The Convex Hull Problem

The discussion here is somewhat technical, but it is important as it illustrates some fundamental problems with the current theoretical model in CG.

Suppose we are given a graph on n nodes and asked to list all spanning trees. One could claim that an algorithm that runs in O(n^{n-2}) time (assume each tree is output in unit time) is optimal, and declare the problem solved, because the complete graph has this many spanning trees. How would the proposed algorithm work on a much sparser graph, say a planar graph? The theory is too crude to predict this, and the algorithm may be completely useless in practice. Worse, the theory would reject as inferior an algorithm running in O(nT) time (T = the number of trees), since its worst-case running time is O(n^{n-1}), even though such an algorithm could be vastly superior on almost all problems of interest. For these kinds of problems a good solution is one that runs in time linear in the output size times a small polynomial in n, or at worst polynomial in the input size plus the output size. This model is standard in the combinatorial computing community and seems to produce algorithms that work extremely well in practice. The CG community calls such algorithms "output sensitive". Surely finding such an algorithm (or showing that no such algorithm exists) is essential before declaring the problem solved. (A minimal sketch of such an output-sensitive enumeration is given below.)

The convex hull problem is a problem of a similar nature: the polytope P which is the convex hull of a set S of n points in d dimensions has size varying from linear in n and d to O(n^{\lfloor d/2 \rfloor}). There is apparently some discussion as to exactly what the convex hull problem is. According to Raimund Seidel (email, 96.4.26) there are four such problems, all of which have been solved:

"CH1: Find all faces of all dimensions of P.
CH2: Construct the Hasse diagram of the face lattice of P.
CH3: Find all facets of P.
CH4: Construct a triangulation of the boundary of P.

THEOREM: Let $d$ be a fixed constant. Problems CH1, CH2, CH3, CH4 can be solved in time $O(n\log n + n^{\lfloor d/2 \rfloor})$. Moreover, in the worst case this is asymptotically best possible.

"In other words, the ASYMPTOTIC COMPLEXITY of problems CH1 to CH4 has been RESOLVED (if input size is the only parameter). Note that `resolved' here certainly does not mean `solved again'. No previous algorithms could theoretically achieve such time bounds, let alone were analyzed for that. (Also there is the special case of $d=3$, which is important and highly non-trivial and was not solved before.)" (emphasis in original).

The result stated is deep and extremely difficult to prove (Seidel (1981) for even d, Chazelle (1993)). Nevertheless, it has the same form as the bound described above for the "optimal" spanning tree algorithm. For the convex hull there is theoretical and empirical evidence showing that the proposed algorithm performs significantly worse than other methods regarded as inferior by the theoretical model, in dimensions as low as 8 (see Avis, Bremner, Seidel, ftp://mutt.cs.mcgill.ca/pub/doc/hgch.ps.gz, CGTA, to appear). For example, it may take as much as Omega(n^{d/2}) time to find the convex hulls of polytopes whose size is a small polynomial in n and d.
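The size of this gap is easy to illustrate (the figures below are our own, not taken from the cited paper). At d = 8 and n = 1000 the worst-case optimal bound permits on the order of

    n^{\lfloor d/2 \rfloor} = 1000^4 = 10^{12}

operations, even when the hull being computed has only a few thousand facets, in which case an output-sensitive method running in a small polynomial in n and d times the output size would need perhaps 10^7 or 10^8 steps. The d-dimensional cube is an extreme case: n = 2^d vertices but only 2d = 2 log_2 n facets.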
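To make the spanning tree analogy concrete, here is a minimal sketch (in Python; our own illustration, not an algorithm from the CG literature) of an output-sensitive enumeration by contraction and deletion. An edge is deleted only when the remaining graph stays connected, so every branch of the search yields at least one tree, and the total work is a small polynomial in the input size times the number of trees produced:

    def spanning_trees(n, edges):
        """Yield every spanning tree of the graph on vertices 0..n-1, each
        exactly once, as a tuple of edges.  Contraction/deletion backtracking
        gives an output-sensitive bound, in contrast to a worst-case figure
        stated purely in terms of n, such as n^(n-2)."""
        def connected(comps, es):
            # DFS connectivity test on the contracted multigraph
            adj = {c: [] for c in comps}
            for u, v in es:
                adj[u].append(v)
                adj[v].append(u)
            seen, stack = set(), [next(iter(comps))]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
            return len(seen) == len(comps)

        def rec(chosen, rest, comp):
            comps = set(comp.values())
            if len(comps) == 1:            # all vertices merged: a tree
                yield tuple(chosen)
                return
            if not rest:
                return
            (u, v), tail = rest[0], rest[1:]
            if comp[u] != comp[v]:         # include u-v: contract endpoints
                cu, cv = comp[u], comp[v]
                merged = {w: cu if c == cv else c for w, c in comp.items()}
                yield from rec(chosen + [(u, v)], tail, merged)
            # exclude u-v: allowed only if the rest stays connected
            tail_c = [(comp[a], comp[b]) for a, b in tail if comp[a] != comp[b]]
            if connected(comps, tail_c):
                yield from rec(chosen, tail, comp)

        yield from rec([], list(edges), {v: v for v in range(n)})

    # the triangle has exactly 3 spanning trees:
    print(list(spanning_trees(3, [(0, 1), (1, 2), (0, 2)])))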
CH3 is the convex hull problem that arises most frequently in practice, and it is the only "classical" problem in the list, dating back at least to Fourier (1824). There is no known output-sensitive algorithm for CH3. The literature contains many references to the successful use of the Motzkin method (a.k.a. the double description method) for CH3, and its output can be used to solve problems CH1 and CH2. Pivoting methods (also traceable to Fourier), combined with perturbation, are almost certainly better than the "optimal" algorithm for CH4.

It is worth noting that Motzkin's method has been completely ignored by the CG community, even though it has been implemented successfully many times and is in wide use (a minimal sketch of its basic step is given in the appendix below). We can find no evidence of the methods proposed in the CG literature actually being used, in spite of the fact that they have been widely known for many years.

In summary, even though the convex hull problem is declared "solved" in the CG community, the methods proposed do not solve even quite small practical problems that can be solved (but not well) by other methods. A realistic model of computing reveals this instantly.

3. Voronoi Diagrams

The method intended here is reduction to the convex hull problem (each point p in R^d is lifted to (p, ||p||^2) in R^{d+1}; the lower hull of the lifted points projects to the Delaunay triangulation, whose dual is the Voronoi diagram), and hence it has the same limitations.

EFFECT OF THE MODEL ON THE COMMUNITY

In our view, an inappropriate model has distorted the research done by the community. Papers tend to be selected for the leading symposia and journals based on whether they make progress as measured by the standard model. This has led to the esoteric research that appears in our conferences. If measured by a realistic model of geometric computing, much of this research would not look like progress at all.
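APPENDIX: A MINIMAL DOUBLE DESCRIPTION STEP

For concreteness, here is a bare-bones sketch (in Python; our own illustration, not production code) of the basic step of the double description method, for a cone {x : Ax <= 0} given by integer inequality rows A. It omits the adjacency test that practical implementations use to keep the intermediate ray lists free of redundant rays, so it returns a correct generating set that need not be minimal:

    from itertools import product

    def dd_cone(A):
        """Return integer rays generating the cone {x : a.x <= 0 for a in A}.
        Constraints are added one at a time; rays cut off by a constraint are
        replaced by combinations of positive/negative pairs lying on its
        hyperplane."""
        d = len(A[0])
        # start from generators of all of R^d: +/- the unit vectors
        rays = [tuple(int(i == j) * s for j in range(d))
                for i in range(d) for s in (1, -1)]
        for a in A:
            dot = lambda r: sum(ai * ri for ai, ri in zip(a, r))
            keep  = [r for r in rays if dot(r) <= 0]
            plus  = [r for r in rays if dot(r) > 0]
            minus = [r for r in rays if dot(r) < 0]
            # combine each +/- pair into a ray on the hyperplane a.x = 0
            new = []
            for p, m in product(plus, minus):
                comb = tuple(dot(p) * mi - dot(m) * pi
                             for pi, mi in zip(p, m))
                if any(comb):                       # drop the zero vector
                    new.append(comb)
            rays = list(dict.fromkeys(keep + new))  # drop exact duplicates
        return rays

    # the nonnegative quadrant, written as -x <= 0, -y <= 0:
    print(dd_cone([(-1, 0), (0, -1)]))   # -> [(1, 0), (0, 1)]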