3D-EPUG-Overlay is an algorithm for exactly computing the intersection (it can also compute general overlays) of pairs of 3D triangulated meshes. Each input mesh may represent a single solid or a more complex polygonal subdivision (representing multiple solids, each one identified by a label). 3D-EPUG-Overlay employs a combination of several techniques to ensure its correctness and efficiency:
Multiple precision rational numbers, to exactly represent the coordinates of the objects and completely eliminate roundoff errors during the computations.
Simulation of Simplicity, a symbolic perturbation technique, to ensure that all geometric degeneracies (special cases) are properly handled.
Simple data representations and local information, to simplify the correct processing of the data and make the algorithm more parallelizable.
A uniform grid, to efficiently index the data, and accelerate some of the steps of the algorithm such as testing pairs of triangles for intersection or locating points in the mesh.
An arithmetic filter (implemented using the excellent CGAL interval arithmetic number type), to accelerate the computation with the rationals while still guaranteeing exactness, avoiding expensive multiple-precision operations during the evaluation of most predicates.
Parallel programming, to accelerate the intersection process and better exploit the ubiquitous parallel computing capability of current hardware.
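The interplay between the interval filter and the exact rational fallback can be sketched in a few lines. This is an illustrative Python sketch, not the project's C++ code: the one-ulp interval widening below stands in for CGAL's directed-rounding interval type (`Interval_nt`), and `orient3d` is a hypothetical name for the orientation predicate.

```python
import math
from fractions import Fraction

def _down(x): return math.nextafter(x, -math.inf)
def _up(x):   return math.nextafter(x, math.inf)

class Interval:
    """Toy interval number type (standing in for CGAL's Interval_nt).
    Every operation widens each endpoint by one ulp, a valid enclosure
    because IEEE double arithmetic is correctly rounded."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi
    def __add__(self, o):
        return Interval(_down(self.lo + o.lo), _up(self.hi + o.hi))
    def __sub__(self, o):
        return Interval(_down(self.lo - o.hi), _up(self.hi - o.lo))
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(_down(min(p)), _up(max(p)))
    def sign(self):
        """+1 or -1 when certain, None when the interval straddles zero."""
        if self.lo > 0: return 1
        if self.hi < 0: return -1
        return None

def det3(m):
    """3x3 determinant by cofactor expansion; generic over the number type."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def orient3d(a, b, c, d):
    """Sign of the orientation of d w.r.t. the plane (a,b,c): try the fast
    interval filter first, fall back to exact rationals only when the
    filter cannot certify the sign."""
    m = [[Interval(p[i]) - Interval(d[i]) for i in range(3)] for p in (a, b, c)]
    s = det3(m).sign()
    if s is not None:
        return s
    m = [[Fraction(p[i]) - Fraction(d[i]) for i in range(3)] for p in (a, b, c)]
    x = det3(m)
    return (x > 0) - (x < 0)
```

In the common case the interval pass certifies the sign and the rationals are never touched; only near-degenerate inputs (e.g. a fourth point almost on the plane of the other three) pay for exact arithmetic.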
Experiments showed that it was up to 101 times faster than the algorithm available in LibiGL (the state-of-the-art algorithm for exact mesh intersection) and its performance was comparable to that of a parallel inexact algorithm that was specifically developed to be very fast. Besides being fast, 3D-EPUG-Overlay was more memory efficient than the other evaluated methods. Furthermore, in all test cases the result obtained by 3D-EPUG-Overlay matched the reference solution.
Code: https://github.com/sallesviana/MeshIntersection
This publication list also includes our preceding 2D overlay algorithm.
2017
- W. Randolph Franklin, Salles V. G. de Magalhães, and Marcus V. A. Andrade.
3D-EPUG-Overlay: intersecting very large 3D triangulations in parallel.
In 2017 SIAM conference on industrial and applied geometry. Pittsburgh PA USA, 10–12 July 2017.
(talk).
We present 3D-EPUG-Overlay, an algorithm and preliminary implementation for intersecting (aka overlaying) two 3D triangulations. This operation is important in processing large geometry datasets in additive manufacturing. Much of our algorithm is a sequence of map-reduce operations, and so is easily parallelizable. We completely avoid roundoff errors and the associated topological problems by computing with rational numbers. We will handle geometric degeneracies with Simulation of Simplicity (SoS). Our target architecture is an inexpensive workstation. We have a complete 2D implementation, which has overlaid two 2D maps, with a total of 54,000,000 vertices and 739,000 faces, in only 322 elapsed seconds (excluding I/O) on a dual 8-core 3.4GHz Intel Xeon workstation. That time is using 16 cores, and is 11x faster than using one core. Our 3D implementation is complete except for SoS. It processes two 3D objects with a total of 6,000,000 face triangles and 3,000,000 tetrahedra in 180 elapsed seconds on 16 cores. The parallel speedup for 16 cores is a factor of 13. The execution time is almost linear in the data size, and could process much larger datasets, perhaps billions of triangles. We anticipate being able to process much larger datasets than competing software can, and for datasets small enough for others to process, that we will be much faster and more robust on special cases.
@inproceedings{x3d-epug-siam-2017,
author = "Franklin, W. Randolph and de Magalhães, Salles V. G. and Andrade, Marcus V. A.",
title = "{3D-EPUG-Overlay}: Intersecting very large {3D} triangulations in parallel",
booktitle = "2017 {SIAM} conference on industrial and applied geometry",
year = "2017",
month = "10--12 July",
address = "Pittsburgh PA USA",
note = "(talk)",
href = "\bibhrefat{219-3d-epug-siam-2017-abs}{219-3d-epug-siam-2017.pdf}",
mykey = "overlay parallel marcus salles epug-overlay",
customlinkslides = "https://wrfranklin.org/p/219-3d-epug-siam-2017.pdf"
}
- W. Randolph Franklin, Salles V. G. de Magalhães, and Marcus V. A. Andrade.
An exact and efficient 3D mesh intersection algorithm using only orientation predicates.
In S3PM-2017: International Convention on Shape, Solid, Structure, & Physical Modeling, Shape Modeling International (SMI-2017) Symposium. Berkeley, California, USA, 19–23 June 2017.
(poster).
We present an algorithm to compute the intersection of two 3D triangulated meshes. It has applications in CAD and Additive Manufacturing, and was developed to process the largest datasets quickly and correctly. The speed comes from simple regular data structures that parallelize very well. The correctness comes from using multiple-precision rational arithmetic to prevent roundoff errors and the resulting topological inconsistencies, and symbolic perturbation (simulation of simplicity) to handle special cases (geometric degeneracies). To simplify the symbolic perturbation, the algorithm employs only orientation predicates. This paper focuses on the challenges and solutions of implementing symbolic perturbation.
@inproceedings{mesh-orientation-s3pm-2017,
author = "Franklin, W. Randolph and de Magalhães, Salles V. G. and Andrade, Marcus V. A.",
title = "An exact and efficient {3D} mesh intersection algorithm using only orientation predicates",
booktitle = "{S3PM-2017}: International Convention on Shape, Solid, Structure, \& Physical Modeling, Shape Modeling International {(SMI-2017)} Symposium",
month = "19--23 June",
address = "Berkeley, California, USA",
host = "International Computer Science Institute",
year = "2017",
note = "(poster)",
href = "\bibhrefap{}{220-mesh-orientation-s3pm-2017.pdf}",
mykey = "parallel marcus salles overlay epug-overlay",
customlinkposter = "https://wrfranklin.org/p/220-mesh-orientation-s3pm-2017.pdf"
}
2016
- Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, Wenli Li, and Maurício Gouvêa Gruppi.
Exact intersection of 3D geometric models.
In Geoinfo 2016, XVII Brazilian Symposium on GeoInformatics. Campos do Jordão, SP, Brazil, November 2016. Instituto Nacional de Pesquisas Espaciais (Brasil).
This paper presents 3D-EPUG-OVERLAY, an exact and parallel algorithm for computing the intersection of 3D triangulated meshes, a problem with applications in several fields such as GIS and CAD. 3D-EPUG-OVERLAY presents several innovations: it employs exact arithmetic and, thus, the computations are completely exact, which avoids topological impossibilities that are often created by floating point arithmetic. Also, it uses a uniform grid to index the geometric data and was designed to be easily parallelizable. Furthermore, we propose the use of Simulation of Simplicity to effectively ensure that all the special cases are properly handled by the algorithm.
@inproceedings{salles-xsect-3d-geoinfo-2016,
author = "de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Li, Wenli and Gruppi, Maurício Gouvêa",
title = "Exact intersection of {3D} geometric models",
booktitle = "Geoinfo 2016, {XVII Brazilian} Symposium on GeoInformatics",
year = "2016",
month = "November",
address = "Campos do Jordão, SP, Brazil",
puburl = "\url{http://www.geoinfo.info/geoinfo_series.htm}",
publisher = "Instituto Nacional de Pesquisas Espaciais (Brasil)",
mykey = "wenlili marcus salles parallel overlay epug-overlay",
href = "\bibhrefpt{217-salles-xsect-3d-geoinfo-2016}{217-salles-xsect-3d-geoinfo-2016-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/217-salles-xsect-3d-geoinfo-2016-talk.pdf"
}
Contact: Salles
ParCube finds the pairwise intersections in a set of millions of congruent cubes. This operation is required when computing boolean combinations of meshes or polyhedra in CAD/CAM and additive manufacturing, and in determining close points in a 3D set. ParCube is very compact because it uses a uniform grid with a functional programming API. ParCube is very fast; even single threaded it usually beats CGAL's elapsed time, sometimes by a factor of 3. Also because it is FP, ParCube parallelizes very well. On an Nvidia GPU, processing 10M cubes to find 6M intersections, it took 0.33 elapsed seconds, beating CGAL by a factor of 131. ParCube is independent of the specific parallel architecture, whether shared memory multicore Intel Xeon using either OpenMP or TBB, or Nvidia GPUs with thousands of cores. We expect the principles used in ParCube to apply to other computational geometry problems. Efficiently finding all bipartite intersections would be an easy extension.
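The uniform-grid idea behind ParCube can be sketched compactly. This is an illustrative Python sketch, not the Thrust-based implementation: with the cell size equal to the (common) cube side, each cube's box touches at most 2x2x2 cells, so each cube is scattered into those cells and exact overlap tests run only within a cell.

```python
from collections import defaultdict
from itertools import combinations

def intersecting_pairs(corners, side):
    """corners: list of (x, y, z) min-corners of axis-aligned cubes that all
    have edge length `side`. Returns the set of index pairs whose (closed)
    cubes overlap."""
    # Map phase: scatter each cube into every grid cell its box touches;
    # with cell size == cube side, that's at most 2 cells per axis.
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(corners):
        for cx in {int(x // side), int((x + side) // side)}:
            for cy in {int(y // side), int((y + side) // side)}:
                for cz in {int(z // side), int((z + side) // side)}:
                    cells[(cx, cy, cz)].append(i)
    # Reduce phase: exact test only for cubes sharing a cell; congruent
    # closed cubes overlap iff their corners differ by <= side on every
    # axis. The set dedupes pairs found in several shared cells.
    out = set()
    for ids in cells.values():
        for i, j in combinations(ids, 2):
            if all(abs(corners[i][k] - corners[j][k]) <= side for k in range(3)):
                out.add((min(i, j), max(i, j)))
    return out
```

The scatter and the per-cell tests are both embarrassingly parallel, which is why the same structure maps cleanly onto Thrust primitives (sort by cell key, then process runs of equal keys).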
2017
- W. Randolph Franklin and Salles V. G. de Magalhães.
Parallel intersection detection in massive sets of cubes.
In 27th Fall Workshop on Computational Geometry. Stony Brook University, New York, USA, 3–4 Nov 2017.
(talk).
We present ParCube, a very fast parallel algorithm and implementation for finding the pairwise intersections in a set of millions of congruent cubes. This operation is required when computing boolean combinations of meshes or polyhedra in CAD/CAM and additive manufacturing, and in determining close points in a 3D set. ParCube expresses a 3D uniform grid using functional programming concepts in Nvidia Thrust. The underlying hardware can be either regular single-threaded C++, an Nvidia GPU with CUDA, or a multicore Intel Xeon with OpenMP or TBB. ParCube can intersect 10M i.i.d. cubes with 6M intersections in 0.32 elapsed seconds, which is 131x faster than CGAL. Even running single threaded on the CPU, ParCube ran this case three times as fast as CGAL. ParCube can also find bipartite intersections without expensive preprocessing.
@inproceedings{parcube-fwcg-2017,
author = "Franklin, W. Randolph and de Magalhães, Salles V. G.",
title = "Parallel intersection detection in massive sets of cubes",
booktitle = "27th Fall Workshop on Computational Geometry",
year = "2017",
month = "3--4 Nov",
address = "{Stony Brook University, New York, USA}",
note = "(talk)",
href = "\bibhrefat{225-parcube-fwcg-2017-abs}{225-parcube-fwcg-2017-talk.pdf}",
mykey = "salles parcube parallel",
customlinkslides = "https://wrfranklin.org/p/225-parcube-fwcg-2017-talk.pdf"
}
- W. Randolph Franklin and Salles V. G. de Magalhães.
Parallel intersection detection in massive sets of cubes.
In Proceedings of BigSpatial'17: 6th ACM SIGSPATIAL Workshop on Analytics for Big Geospatial Data. Los Angeles Area, CA, USA, 7-10 Nov 2017.
doi:https://doi.org/10.1145/3150919.3150921.
We present ParCube, which finds the pairwise intersections in a set of millions of congruent cubes. This operation is required when computing boolean combinations of meshes or polyhedra in CAD/CAM and additive manufacturing, and in determining close points in a 3D set. ParCube is very compact because it uses a uniform grid with a functional programming API. ParCube is very fast; even single threaded it usually beats CGAL's elapsed time, sometimes by a factor of 3. Also because it is FP, ParCube parallelizes very well. On an Nvidia GPU, processing 10M cubes to find 6M intersections, it took 0.33 elapsed seconds, beating CGAL by a factor of 131. ParCube is independent of the specific parallel architecture, whether shared memory multicore Intel Xeon using either OpenMP or TBB, or Nvidia GPUs with thousands of cores. We expect the principles used in ParCube to apply to other computational geometry problems. Efficiently finding all bipartite intersections would be an easy extension.
@inproceedings{parcube-bigspatial-2017,
author = "Franklin, W. Randolph and de Magalhães, Salles V. G.",
title = "Parallel intersection detection in massive sets of cubes",
booktitle = "Proceedings of {BigSpatial'17}: 6th {ACM SIGSPATIAL} Workshop on Analytics for Big Geospatial Data",
year = "2017",
month = "7-10 Nov",
address = "{Los Angeles Area, CA, USA}",
doi = "https://doi.org/10.1145/3150919.3150921",
href = "\bibhrefpt{224-parcube-bigspatial-2017}{224-parcube-bigspatial-2017-talk.pdf}",
mykey = "parcube salles parallel",
customlinkslides = "https://wrfranklin.org/p/224-parcube-bigspatial-2017-talk.pdf"
}
Contact: WRF
EMFlow is a very efficient algorithm and implementation to compute the drainage network (i.e. the flow direction and flow accumulation) on huge terrains stored in external memory. Its utility lies in processing the large volume of high resolution terrestrial data newly available, which internal memory algorithms cannot handle efficiently.
EMFlow computes the flow direction using an adaptation of our previous method RWFlood that uses a flooding process to quickly remove internal depressions or basins. Flooding, proceeding inward from the outside of the terrain, works oppositely to the common method of computing downhill flow from the peaks.
To reduce the total number of I/O operations, EMFlow adopts a new strategy to subdivide the terrain into islands that are processed separately. The terrain cells are grouped into blocks that are stored in a special data structure managed as a cache memory.
EMFlow’s execution time was compared against the two most recent and most efficient published methods: TerraFlow and r.watershed.seg. It was, on average, 27 times faster than both methods, and EMFlow could process larger datasets. Processing a 50000x50000 terrain on a machine with 2GB of internal memory took only 3000 seconds, compared to 87000 seconds for TerraFlow, while r.watershed.seg failed on terrains larger than 15000x15000. On very small terrains, say 1000x1000, EMFlow takes under a second, compared to 6 to 20 seconds for the other methods, so it could be a component of a future interactive system where a user could modify the terrain and immediately see the new hydrography.
Code: https://github.com/guipenaufv/EMFlow/blob/master/README.md
2015
- Thiago L. Gomes, Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, and Guilherme C. Pena.
Efficiently computing the drainage network on massive terrains with an external memory flooding process.
Geoinformatica, April 2015.
URL: http://link.springer.com/article/10.1007/s10707-015-0225-y.
doi:https://doi.org/10.1007/s10707-015-0225-y.
@article{gomes-emflow-2015,
author = "Gomes, Thiago L. and de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Pena, Guilherme C.",
title = "Efficiently computing the drainage network on massive terrains with an external memory flooding process",
href = "\bibhref{163-gomes-emflow-2015}",
journal = "Geoinformatica",
doi = "https://doi.org/10.1007/s10707-015-0225-y",
note = "\url{http://link.springer.com/article/10.1007/s10707-015-0225-y}",
mykey = "marcus salles hydro emflow",
month = "April",
year = "2015"
}
2012
- Thiago L. Gomes, Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, and Guilherme C. Pena.
Computing the drainage network on huge grid terrains.
In 1st ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data (BigSpatial-2012). Redondo Beach, CA, 6 Nov 2012.
@inproceedings{gomes-acmgis-2012,
author = "Gomes, Thiago L. and de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Pena, Guilherme C.",
title = "Computing the drainage network on huge grid terrains",
address = "Redondo Beach, CA",
month = "6 Nov",
customlinkslides = "https://wrfranklin.org/p/162-bigspatial2012-drainage-talk.pdf",
href = "\bibhrefpt{162-bigspatial2012-drainage}{162-bigspatial2012-drainage-talk.pdf}",
booktitle = "1st {ACM SIGSPATIAL} International Workshop on Analytics for Big Geospatial Data {(BigSpatial-2012)}",
mykey = "marcus salles hydro emflow",
year = "2012"
}
Contact: Thiago Gomes.
RWFLOOD is a new and faster internal memory method to compute the drainage network, that is, the flow direction and accumulation, on terrains represented by a raster elevation matrix. The main idea is to surround the terrain by water (as an island) and then to raise the outside water level step by step, with depressions filled when the water reaches their boundary. This process avoids the very time-consuming depression-filling step used by most flow routing methods. The execution time of our method is very fast, and linear in the terrain size. Tests have shown that our method can process huge terrains more than 100 times faster than other recent methods.
This won the Best Paper Award (2nd place) at AGILE 2012, the 15th AGILE International Conference on Geographic Information Science.
Code: tbd
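The rising-ocean idea can be sketched in a few lines. This is a simplified illustrative Python sketch, not RWFLOOD itself: it uses a single priority queue, whereas RWFLOOD gets linear time with one queue per elevation level, and it returns only the depression-filled elevations, omitting the flow directions the real algorithm records as each cell is reached.

```python
import heapq

def fill_depressions(elev):
    """Flood a raster terrain from its boundary inward, raising interior
    cells just enough that every cell can drain to the border (the
    'island surrounded by rising water' idea)."""
    rows, cols = len(elev), len(elev[0])
    filled = [row[:] for row in elev]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    # The ocean touches every boundary cell.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                seen[r][c] = True
                heapq.heappush(heap, (filled[r][c], r, c))
    # Raise the water level: always advance through the lowest known cell.
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                # A neighbor below the current water level is a depression
                # cell: the water fills it up to that level.
                filled[nr][nc] = max(filled[nr][nc], level)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled
```

Because water enters through the lowest point of a depression's rim, each pit is raised exactly to its spill level, which is what makes the separate depression-filling pass of other methods unnecessary.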
2012
- Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, and Guilherme C. Pena.
A new method for computing the drainage network based on raising the level of an ocean surrounding the terrain.
In Jérome Gensel, Didier Josselin, and Danny Vandenbroucke, editors, Bridging the Geographic Information Sciences: International AGILE'2012 Conference, pages 391–407.
Springer, Avignon (France), 24–27 April 2012.
URL: http://agile2012.imag.fr/.
@incollection{salles-agile-2012,
author = "de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Pena, Guilherme C.",
editor = "Gensel, Jérome and Josselin, Didier and Vandenbroucke, Danny",
title = "A new method for computing the drainage network based on raising the level of an ocean surrounding the terrain",
booktitle = "Bridging the Geographic Information Sciences: International {AGILE'2012} Conference",
publisher = "Springer",
pages = "391--407",
isbn = "978-3-642-29062-6",
year = "2012",
address = "Avignon (France)",
month = "24--27 April",
note = "Winner of the Best Paper Award (2nd place)",
url = "http://agile2012.imag.fr/",
customlinkslides = "https://wrfranklin.org/p/151-agile2012-rwflood-talk.pdf",
href = "\bibhrefpt{151-agile2012-rwflood}{151-agile2012-rwflood-talk.pdf}",
mykey = "salles marcus hydro rwflood sallesagile2012"
}
Contact: Salles.
(This section, and the code, are preliminary. When/if it is cleaned up, links may change. (2022-10-10) )
Here are bathymetry papers.
Here is some bathymetry code.
TiledVS is a better algorithm and implementation for external memory viewshed computation. It is about four times faster than the most recent and most efficient published methods. Ours is also much simpler. Since processing large datasets can take hours, this improvement is significant.
This algorithm was able to process a terrain with 10 billion points in an hour and a half on a modest computer with only 512 MB of RAM. It is 10 times faster than our previous algorithm (EMViewshed).
To reduce the total number of I/O operations, our method is based on subdividing the terrain into blocks which are stored in a special data structure managed as a cache memory. The viewshed is that region of the terrain that is visible by a fixed observer, who may be on or above the terrain. Its applications range from visual nuisance abatement to radio transmitter siting and surveillance.
Code: https://github.com/chaulio/TiledVS
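The tile-paging scheme described above can be sketched with an LRU cache. This is an illustrative Python sketch, not the TiledVS data structure: `TileCache` and its `load` callback (standing in for reading and decompressing a tile from disk) are hypothetical names.

```python
from collections import OrderedDict

class TileCache:
    """LRU cache of terrain tiles: tiles live on disk and only `capacity`
    of them are held in memory at once. An OrderedDict keeps tiles in
    recency order, so eviction is O(1)."""
    def __init__(self, capacity, load):
        self.capacity = capacity
        self.load = load            # (tile_row, tile_col) -> tile data
        self.tiles = OrderedDict()  # oldest entry first
        self.misses = 0             # each miss costs a disk read
    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)         # mark most recently used
        else:
            self.misses += 1
            if len(self.tiles) >= self.capacity:
                self.tiles.popitem(last=False)  # evict least recently used
            self.tiles[key] = self.load(key)
        return self.tiles[key]
```

A rotating line of sight revisits nearby tiles repeatedly, so as long as the cache holds roughly a row of tiles, almost every access after the first is a hit, which is the effect the paper exploits.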
2016
- Chaulio R. Ferreira, Marcus V. A. Andrade, Salles V. G. de Magalhães, and W. Randolph Franklin.
An efficient external memory algorithm for terrain viewshed computation.
ACM Trans. on Spatial Algorithms and Systems, 2016.
doi:https://doi.org/10.1145/2903206.
This paper presents TiledVS, a fast external algorithm and implementation for computing viewsheds. TiledVS is intended for terrains that are too large for internal memory, even over 100 000 × 100 000 points. It subdivides the terrain into tiles that are stored compressed on disk and then paged into memory with a custom cache data structure and LRU algorithm. If there is sufficient available memory to store a whole row of tiles, which is easy, then this specialized data management is faster than relying on the operating system's virtual memory management. Applications of viewshed computation include siting radio transmitters, surveillance, and visual environmental impact measurement. TiledVS runs a rotating line of sight from the observer to points on the region boundary. For each boundary point, it computes the visibility of all the terrain points close to the line of sight. The running time is linear in the number of points. No terrain tile is read more than twice. TiledVS is very fast, for instance, processing a 104000 x 104000 terrain on a modest computer with only 512MB of RAM took only 1.5 hours. On large datasets, TiledVS was several times faster than competing algorithms such as the ones included in GRASS. The source code of TiledVS is freely available for nonprofit researchers to study, use, and extend. A preliminary version of this algorithm appeared in a 4-page ACM SIGSPATIAL GIS 2012 conference paper “More efficient terrain viewshed computation on massive datasets using external memory”. This more detailed version adds the fast lossless compression stage that reduces the time by 30% to 40%, and many more experiments and comparisons.
@article{chaulio-tiledvs-tsas-2016,
author = "Ferreira, Chaulio R. and Andrade, Marcus V. A. and de Magalhães, Salles V. G. and Franklin, W. Randolph",
title = "An efficient external memory algorithm for terrain viewshed computation",
href = "\bibhref{197-chaulio-tiledvs-tsas-2016}",
journal = "{ACM} Trans. on Spatial Algorithms and Systems",
year = "2016",
volume = "2",
number = "2",
doi = "https://doi.org/10.1145/2903206",
mykey = "salles marcus visviewshedsiting tiledvs visibility"
}
Contact: Chaulio R. Ferreira.
This is the optimization and parallelization of the multiple observer siting program, originally developed by Franklin and Vogt. Siting is a compute-intensive application with a large amount of inherent parallelism. The advantage of parallelization is not only a faster program but also the ability to solve bigger problems. We have parallelized the program using two different techniques: OpenMP, using multi-core CPUs, and CUDA, using a general purpose graphics processing unit (GPGPU). Experiment results show that both techniques are very effective. Using the OpenMP program, we are able to site tens of thousands of observers on a 16385 × 16385 terrain in less than 2 minutes, on our workstation with two CPUs and one GPU. The CUDA program achieves the same in about 30 seconds.
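The greedy selection at the heart of multiple observer siting can be sketched as follows. This is a set-based illustrative Python sketch with hypothetical names (`site_observers`, `viewsheds`); the actual programs represent viewsheds as packed bitmaps, and it is the bitmap unions and candidate scoring that OpenMP and CUDA parallelize.

```python
def site_observers(viewsheds, n_cells, target=0.9):
    """Greedy max-coverage siting: given each candidate observer's viewshed
    (a set of visible cell ids out of n_cells terrain cells), repeatedly
    add the candidate contributing the most new cells until the target
    fraction of the terrain is jointly visible."""
    covered, chosen = set(), []
    while len(covered) / n_cells < target:
        # Score every candidate by its marginal gain over current coverage.
        best = max(range(len(viewsheds)),
                   key=lambda i: len(viewsheds[i] - covered))
        if not (viewsheds[best] - covered):
            break  # no candidate adds anything new; target unreachable
        chosen.append(best)
        covered |= viewsheds[best]
    return chosen, covered
```

Each iteration depends on the previous one's coverage, so the iterations themselves are sequential, but the scoring of all candidates within an iteration is independent, which is exactly the parallelism the OpenMP and CUDA versions exploit.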
2014
- Wenli Li, W. Randolph Franklin, Daniel N. Benedetti, and Salles V. G. de Magalhães.
Parallel multiple observer siting on terrain.
In 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2014). Dallas, Texas, USA, 4–7 Nov 2014.
This paper presents the optimization and parallelization of the multiple observer siting program, originally developed by Franklin and Vogt. Siting is a compute-intensive application with a large amount of inherent parallelism. The advantage of parallelization is not only a faster program but also the ability to solve bigger problems. We have parallelized the program using two different techniques: OpenMP, using multi-core CPUs, and CUDA, using a general purpose graphics processing unit (GPGPU). Experiment results show that both techniques are very effective. Using the OpenMP program, we are able to site tens of thousands of observers on a 16385 × 16385 terrain in less than 2 minutes, on our workstation with two CPUs and one GPU. The CUDA program achieves the same in about 30 seconds.
@inproceedings{li-acmgis-2014,
author = "Li, Wenli and Franklin, W. Randolph and Benedetti, Daniel N. and de Magalhães, Salles V. G.",
title = "Parallel Multiple Observer Siting on Terrain",
booktitle = "22nd {ACM SIGSPATIAL} International Conference on Advances in Geographic Information Systems {(ACM SIGSPATIAL 2014)}",
year = "2014",
month = "4--7 Nov",
href = "\bibhrefpp{182-acmgis2014-par-siting}{182-acmgis2014-par-siting-poster.pdf}",
address = "Dallas, Texas, USA",
customlinkposter = "https://wrfranklin.org/p/182-acmgis2014-par-siting-poster.pdf",
mykey = "benedetti salles wenlili visviewshedsiting gpusiting parallelsiting parallel"
}
2013
- Wenli Li, W. Randolph Franklin, and Daniel Benedetti.
Parallel multiple observer siting on terrain.
In 23rd Fall Workshop on Computational Geometry. City College, New York City, USA, 25–26 Oct 2013.
(extended abstract).
Multiple observer siting has much inherent parallelism. We have parallelized the siting package using OpenMP and CUDA. The results show that both methods are effective for parallelizing the program. Although the overall speedups of the two methods are similar (both about 16), the speedups of the constituent parts of the program are quite different.
@inproceedings{li-fwcg-2013,
author = "Li, Wenli and Franklin, W. Randolph and Benedetti, Daniel",
title = "Parallel Multiple Observer Siting on Terrain",
booktitle = "23rd Fall Workshop on Computational Geometry",
year = "2013",
month = "25--26 Oct",
address = "City College, New York City, USA",
note = "(extended abstract)",
href = "\bibhrefpt{173-fwcg2013-li-parallel-siting}{173-fwcg2013-li-parallel-siting-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/173-fwcg2013-li-parallel-siting-talk.pdf",
mykey = "benedetti wenlili parallelsiting gpusiting parallel"
}
Contact: Wenli.
This is an efficient parallel heuristic for siting observers on raster terrains. More specifically, the goal is to choose the smallest set of points on a terrain such that observers located at these points can see at least a given percentage of the terrain. This problem is NP-hard and has several applications, such as determining the best places to position (site) communication or monitoring towers on a terrain. Since siting observers is a massive operation, even obtaining an approximate solution with a heuristic requires a huge amount of processing time. This is even more evident when processing the high resolution terrains that have become available due to modern data acquisition technologies such as LIDAR and IFSAR.
Our new implementation uses dynamic programming and CUDA to accelerate the swap local search heuristic, which was proposed in previous works. Also, to efficiently use the parallel computing resources of GPUs, we adapted some techniques previously developed for sparse-dense matrix multiplication. Our parallel algorithm is up to 7352 times faster than a sequential algorithm.
The reason for this speedup is not only the use of the GPU, but also the use of techniques originally designed for sparse-dense matrix multiplication.
We compared this new method with previous parallel implementations: the new method is much more efficient, can process much larger terrains (the older methods are restrictive about terrain size), and is faster.
Code: Guilherme Pena.
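The swap local search accelerated here can be sketched as follows. This is a set-based illustrative Python sketch with hypothetical names (`swap_local_search`); the actual implementation evaluates the candidate swaps in parallel on the GPU with dynamic programming and sparse-dense-matrix-style techniques rather than recomputing each union from scratch.

```python
def swap_local_search(viewsheds, solution):
    """Swap heuristic for observer siting: starting from a list of sited
    observers (indices into viewsheds), try replacing each sited observer
    with each unused candidate and keep any swap that enlarges the joint
    coverage, until no swap improves it."""
    def coverage(sol):
        seen = set()
        for i in sol:
            seen |= viewsheds[i]
        return len(seen)
    solution = list(solution)
    best = coverage(solution)
    improved = True
    while improved:
        improved = False
        for pos in range(len(solution)):
            for cand in range(len(viewsheds)):
                if cand in solution:
                    continue
                trial = solution[:pos] + [cand] + solution[pos + 1:]
                cov = coverage(trial)
                if cov > best:          # keep only strictly improving swaps
                    solution, best = trial, cov
                    improved = True
    return solution, best
```

All swaps in one pass are independent of each other, which is what makes the search a good fit for the GPU: the expensive part, recomputing joint coverage for every (sited, candidate) pair, is evaluated in parallel.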
2017
- Wenli Li and W. Randolph Franklin.
GPU–accelerated multiple observer siting.
Photogrammetric Engineering & Remote Sensing, 83(6):439–446, June 2017.
doi:https://doi.org/10.14358/PERS.83.6.439.
We present two fast parallel implementations of the Franklin-Vogt multiple observer siting algorithm, using either OpenMP or CUDA. In this problem, the objective is to site observers on a raster terrain such that the joint area visible by them is maximized. On a terrain with 16385×16385 cells, assuming that observers can see out to a radius-of-interest of 100 cells, finding the approximately 15000 observers that have the greatest coverage takes only 17s in CUDA. That is a factor of 70 speedup from the sequential version. The OpenMP version exhibits a factor of 17 speedup on a 16 core system. Applications for the multiple observer siting problem include radio transmission towers, environmental monitoring sites, and path planning for surveillance drones. The algorithm has four steps: finding the visibility indices of all points, selecting a candidate subset of potential top observers, finding each one's viewshed, and greedily constructing the final solution.
@article{wenli-gpu-siting-2016,
author = "Li, Wenli and Franklin, W. Randolph",
title = "{GPU}--Accelerated Multiple Observer Siting",
journal = "Photogrammetric Engineering \& Remote Sensing",
year = "2017",
month = "June",
volume = "83",
number = "6",
pages = "439--446",
href = "\bibhref{213-wenli-gpu-siting-2016}",
doi = "https://doi.org/10.14358/PERS.83.6.439",
mykey = "wenlili visviewshedsiting gpusiting parallel"
}
2016
- Wenli Li.
GPU-accelerated terrain processing.
PhD thesis, Rensselaer Polytechnic Institute, 2016.
This thesis extends Overdetermined Laplacian Partial Differential Equations (ODETLAP) for spatial data approximation and compression and parallelizes multiple observer siting on terrain, using General-Purpose Computing on Graphics Processing Units (GPGPU). Both ODETLAP compression and multiple observer siting use greedy algorithms that are parallelizable within iterations but sequential between iterations. They also demonstrate terrain-related research and applications that benefit from GPU acceleration and showcase the achievable speedups. ODETLAP approximation approximates a spatial dataset from scattered data points on a regular grid by solving an overdetermined system of linear equations that minimizes the absolute Laplacian of the approximation and the value errors of the data points. We show that ODETLAP approximation is a linear operator and is comparable in accuracy with natural neighbor interpolation and the multiquadric-biharmonic method. Using ODETLAP approximation to approximate spatial datasets, ODETLAP compression compresses a dataset as a set of known points and their values and decompresses it as an approximation from the known points. We implement ODETLAP approximation and compression using the CUSP library and the speedup is 8 times on a GPU. We design multiple algorithms to improve the accuracy of ODETLAP compression using a limited number of known points and use them to compress 2D terrain datasets and 3D MRI datasets. The results show that ODETLAP compression is 30% to 50% better in minimizing the maximum absolute error than JPEG 2000 for the terrain datasets and 40% to 60% better than JP3D for the MRI datasets. To further increase speed, we design a segmented ODETLAP compression algorithm and use it to compress larger 3D atmospheric and MRI datasets. The results show that the compressed size of the dataset is 60% that of JP3D for the same maximum absolute error. 
Multiple observer siting places multiple observer points above a terrain to maximize the total area visible from at least one observer. The algorithm first selects a set of highly visible points as tentative observers, and then iteratively selects observers to maximize the cumulative viewshed. We improve the time and space complexities of the algorithm and parallelize it using CUDA on a GPU and using OpenMP on multi-core CPUs. The speedup is up to 60 times on the GPU and 16 times on two CPUs with 16 cores.
@phdthesis{wenli-phd,
author = "Li, Wenli",
title = "{GPU}-accelerated terrain processing",
school = "Rensselaer Polytechnic Institute",
mykey = "wenlili visviewshedsiting gpusiting parallel theses",
year = "2016"
}
2014
- Wenli Li, W. Randolph Franklin, Daniel N. Benedetti, and Salles V. G. de Magalhães.
Parallel multiple observer siting on terrain.
In 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2014). Dallas, Texas, USA, 4–7 Nov 2014.
This paper presents the optimization and parallelization of the multiple observer siting program, originally developed by Franklin and Vogt. Siting is a compute-intensive application with a large amount of inherent parallelism. The advantage of parallelization is not only a faster program but also the ability to solve bigger problems. We have parallelized the program using two different techniques: OpenMP, using multi-core CPUs, and CUDA, using a general purpose graphics processing unit (GPGPU). Experimental results show that both techniques are very effective. Using the OpenMP program, we are able to site tens of thousands of observers on a 16385 × 16385 terrain in less than 2 minutes, on our workstation with two CPUs and one GPU. The CUDA program achieves the same in about 30 seconds.
@inproceedings{li-acmgis-2014,
author = "Li, Wenli and Franklin, W. Randolph and Benedetti, Daniel N. and de Magalhães, Salles V. G.",
title = "Parallel Multiple Observer Siting on Terrain",
booktitle = "22nd {ACM SIGSPATIAL} International Conference on Advances in Geographic Information Systems {(ACM SIGSPATIAL 2014)}",
year = "2014",
month = "4--7 Nov",
href = "\bibhrefpp{182-acmgis2014-par-siting}{182-acmgis2014-par-siting-poster.pdf}",
address = "Dallas, Texas, USA",
customlinkposter = "https://wrfranklin.org/p/182-acmgis2014-par-siting-poster.pdf",
mykey = "benedetti salles wenlili visviewshedsiting gpusiting parallelsiting parallel"
}
- Guilherme C. Pena, Marcus V.A. Andrade, Salles V.G. de Magalhães, W. R. Franklin, and Chaulio R. Ferreira.
An improved parallel algorithm using GPU for siting observers on terrain.
In 16th International Conference on Enterprise Information Systems (ICEIS 2014), 367–375. Lisbon, 27–30 April 2014.
doi:https://doi.org/10.5220/0004884303670375.
This paper presents an efficient method to determine a set of observers (that is, where to site them) such that a given percentage of a terrain is visually covered. Our method extends the method proposed in (Franklin, 2002), adding a local search heuristic efficiently implemented using dynamic programming and GPU parallel programming. This local search strategy achieves higher coverage using the same number of observers as the original method, and thus a given coverage can be obtained with fewer observers. This can be an important improvement, since an observer can represent an expensive facility such as a telecommunication tower. The proposed method's performance was compared with that of other methods, and the tests showed that it can be more than 1200 times faster than the sequential implementation (with no use of dynamic programming and no GPU parallel programming) and, also, more than 20 times faster than a previous parallel method presented in (de Magalhaes et al., 2011).
@inproceedings{iceis-2014,
author = "Pena, Guilherme C. and Andrade, Marcus V.A. and de Magalhães, Salles V.G. and Franklin, W. R. and Ferreira, Chaulio R.",
title = "An Improved Parallel Algorithm using {GPU} for Siting Observers on Terrain",
booktitle = "16th International Conference on Enterprise Information Systems ({ICEIS} 2014)",
year = "2014",
month = "27--30 April",
address = "Lisbon",
pages = "367--375",
doi = "https://doi.org/10.5220/0004884303670375",
href = "\bibhrefpt{177-ICEIS2014-pena-site-gpu}{177-ICEIS2014-pena-site-gpu-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/177-ICEIS2014-pena-site-gpu-talk.pdf",
mykey = "visviewshedsiting gpusiting marcus salles parallel"
}
2013
- Wenli Li, W. Randolph Franklin, and Daniel Benedetti.
Parallel multiple observer siting on terrain.
In 23rd Fall Workshop on Computational Geometry. City College, New York City, USA, 25–26 Oct 2013.
(extended abstract).
Multiple observer siting has much inherent parallelism. We have parallelized the siting package using OpenMP and CUDA. The results show that both methods are effective for parallelizing the program. Although the overall speedups of the two methods are similar, about 16, the speedups of the constituent parts of the program are quite different.
@inproceedings{li-fwcg-2013,
author = "Li, Wenli and Franklin, W. Randolph and Benedetti, Daniel",
title = "Parallel Multiple Observer Siting on Terrain",
booktitle = "23rd Fall Workshop on Computational Geometry",
year = "2013",
month = "25--26 Oct",
address = "City College, New York City, USA",
note = "(extended abstract)",
href = "\bibhrefpt{173-fwcg2013-li-parallel-siting}{173-fwcg2013-li-parallel-siting-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/173-fwcg2013-li-parallel-siting-talk.pdf",
mykey = "benedetti wenlili parallelsiting gpusiting parallel"
}
Contact: Guilherme Pena
TIN is the current version of my efficient Triangulated Irregular Network computation program, for input points on a grid. Degeneracies, such as included points falling on the edge of the square, or on an existing edge, are handled. TIN uses an incremental greedy method. It stops when a predetermined maximum absolute error is reached, or when a given number of points have been included.
Due to its compact and simple data structures, TIN can process, in memory, a 10800x10800 grid of points on a PC.
In contrast to other TIN implementations, my TIN operates incrementally by repeatedly inserting the worst point. Therefore the ordered list of points that it selects can be used for progressive transmission of the surface.
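As an illustration of the incremental greedy loop, here is a hypothetical 1-D analogue (not the TIN code, which works on triangles): repeatedly insert the sample where the current piecewise-linear approximation is worst, stopping at a maximum-absolute-error target. As with TIN, the insertion order doubles as a progressive-transmission stream.

```python
# 1-D analogue of greedy worst-point insertion with an error cutoff.
def greedy_fit(samples, max_err):
    kept = [0, len(samples) - 1]          # endpoints are always kept
    order = list(kept)
    while True:
        worst_x, worst_e = None, max_err
        ks = sorted(kept)
        for a, b in zip(ks, ks[1:]):      # each current segment
            for x in range(a + 1, b):
                interp = samples[a] + (samples[b] - samples[a]) * (x - a) / (b - a)
                e = abs(samples[x] - interp)
                if e > worst_e:
                    worst_x, worst_e = x, e
        if worst_x is None:               # everywhere within tolerance
            return order
        kept.append(worst_x)
        order.append(worst_x)

order = greedy_fit([0, 1, 2, 3, 10, 3, 2, 1, 0], 0.5)
# the spike at index 4 is inserted first; order is the transmission stream
```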
More info (obsolescent).
Code
Contact: WRF
UPLAN is an efficient algorithm for path planning on road networks with polygonal constraints. UPLAN is based on the A* algorithm and uses a uniform grid to efficiently index the obstacles, allowing it to process very large numbers of polygonal obstacles. UPLAN was the 2nd place finisher in the ACM GISCUP 2015 competition.
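A minimal sketch of the A* core, with blocked grid cells standing in for the polygonal obstacles (hypothetical; UPLAN indexes the actual obstacle polygons with a uniform grid and plans on a road network):

```python
import heapq

# A* on a 4-connected n x n grid with blocked cells as obstacles.
def astar(blocked, start, goal, n):
    def h(p):                              # Manhattan distance, admissible
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best = {start: 0}
    while open_heap:
        f, g, p = heapq.heappop(open_heap)
        if p == goal:
            return g                       # path length in grid steps
        if g > best.get(p, float("inf")):
            continue                       # stale heap entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dx, p[1] + dy)
            if 0 <= q[0] < n and 0 <= q[1] < n and q not in blocked:
                if g + 1 < best.get(q, float("inf")):
                    best[q] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(q), g + 1, q))
    return None                            # goal unreachable

wall = {(2, y) for y in range(5) if y != 3}   # wall with one gap at (2, 3)
dist = astar(wall, (0, 0), (4, 0), 5)         # shortest path detours through the gap
```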
2015
- Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, and Wenli Li.
Fast path planning under polygonal obstacle constraints.
In 4th GIS-focused algorithm competition, GISCUP 2015, co-located with ACM SIGSPATIAL GIS. Bellevue WA USA, 4 Nov 2015.
Winner (2nd place).
@inproceedings{salles-giscup-2015,
author = "de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Li, Wenli",
title = "Fast path planning under polygonal obstacle constraints",
booktitle = "4th {GIS}-focused algorithm competition, {GISCUP} 2015, co-located with {ACM SIGSPATIAL GIS}",
year = "2015",
month = "4 Nov",
address = "Bellevue WA USA",
note = "Winner (2nd place)",
mykey = "wenlili marcus salles pathplan sallesgiscup2015 uplan",
href = "\bibhref{203-salles-giscup-2015}"
}
Contact: Salles.
EPUG-Overlay (Exact Parallel Uniform Grid Overlay) is an algorithm to overlay two maps that is fast and parallel, has no roundoff errors, and is freely available. EPUG-Overlay combines several novel aspects. It represents coordinates with rational numbers, thereby ensuring exact computations with no roundoff errors, and hence none of the ensuing sliver problems and topological impossibilities. For efficiency, EPUG-Overlay performs the map overlay in parallel, thereby utilizing the ubiquitous multicore architecture. Our application goes beyond merely using existing packages, which are inefficient when used in parallel on large problems. Indeed, overlaying two maps with 53,000,000 edges and 730,000 faces took only 322 elapsed seconds (plus 116 seconds for I/O) on a dual 8-core 3.1 GHz Intel Xeon E5-2687 workstation. In contrast, GRASS, executing sequentially and generating roundoff errors, takes 5300 seconds.
The overlay operation combines two input maps (planar graphs) containing faces (polygons) separated by polyline edges (chains), into a new map, each of whose faces is the intersection of one face from each input map. Floating point roundoff errors can cause an edge intersection to be missed or the computed intersection point be in a wrong face, leading to a topological inconsistency. Thus, a program might fail to compute a valid output map at all, using any amount of time. This gets worse when the inputs are bigger or have slivers. Heuristics can ameliorate this problem, but only to an extent.
By representing each coordinate as a vulgar fraction, with multiprecision numerator and denominator, the computation is exact. EPUG-Overlay also executes various useful subproblems very quickly, such as locating a set of points in a planar graph and finding all the intersections among a large set of small edges. EPUG-Overlay is built on our earlier sequential floating-point algorithm that found the areas of the overlay polygons, without finding the polygons themselves.
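A small illustration (not EPUG-Overlay code, which uses C++ multiprecision rationals) of why rational coordinates matter: the 2D orientation predicate, evaluated with Python's `Fraction`, keeps a sign that double arithmetic silently rounds away.

```python
from fractions import Fraction

# Orientation predicate sign((b-a) x (c-a)): exact rationals vs. doubles
# for a point a hair below the diagonal y = x.
def orient2d(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

half, eps = Fraction(1, 2), Fraction(1, 10**17)
a, b = (Fraction(0), Fraction(0)), (Fraction(1), Fraction(1))
c = (half + eps, half)               # strictly to the right of the line
exact = orient2d(a, b, c)            # -eps: certainly negative
# The same point rounded to doubles: half + eps rounds to exactly 0.5,
# so the floating-point predicate wrongly reports collinear.
approx = orient2d((0.0, 0.0), (1.0, 1.0), (float(c[0]), 0.5))
```

A wrong or zero sign like `approx` is exactly the kind of inconsistency that causes missed intersections and topological impossibilities in an overlay.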
2017
- W. Randolph Franklin, Salles V. G. de Magalhães, and Marcus V. A. Andrade.
3D-EPUG-Overlay: intersecting very large 3D triangulations in parallel.
In 2017 SIAM conference on industrial and applied geometry. Pittsburgh PA USA, 10–12 July 2017.
(talk).
We present 3D-EPUG-Overlay, an algorithm and preliminary implementation for intersecting (aka overlaying) two 3D triangulations. This operation is important in processing large geometry datasets in additive manufacturing. Much of our algorithm is a sequence of map-reduce operations, and so is easily parallelizable. We completely avoid roundoff errors and the associated topological problems by computing with rational numbers. We will handle geometric degeneracies with Simulation of Simplicity (SoS). Our target architecture is an inexpensive workstation. We have a complete 2D implementation, which has overlaid two 2D maps, with a total of 54,000,000 vertices and 739,000 faces, in only 322 elapsed seconds (excluding I/O) on a dual 8-core 3.4 GHz Intel Xeon workstation. That time is using 16 cores, and is 11x faster than using one core. Our 3D implementation is complete except for SoS. It processes two 3D objects with a total of 6,000,000 face triangles and 3,000,000 tetrahedra in 180 elapsed seconds on 16 cores. The parallel speedup for 16 cores is a factor of 13. The execution time is almost linear in the data size, and could process much larger datasets, perhaps billions of triangles. We anticipate being able to process much larger datasets than competing SW can, and for datasets small enough for others to process, that we will be much faster and more robust on special cases.
@inproceedings{x3d-epug-siam-2017,
author = "Franklin, W. Randolph and de Magalhães, Salles V. G. and Andrade, Marcus V. A.",
title = "{3D-EPUG-Overlay}: Intersecting very large {3D} triangulations in parallel",
booktitle = "2017 {SIAM} conference on industrial and applied geometry",
year = "2017",
month = "10--12 July",
address = "Pittsburgh PA USA",
note = "(talk)",
href = "\bibhrefat{219-3d-epug-siam-2017-abs}{219-3d-epug-siam-2017.pdf}",
mykey = "overlay parallel marcus salles epug-overlay",
customlinkslides = "https://wrfranklin.org/p/219-3d-epug-siam-2017.pdf"
}
- W. Randolph Franklin, Salles V. G. de Magalhães, and Marcus V. A. Andrade.
An exact and efficient 3D mesh intersection algorithm using only orientation predicates.
In S3PM-2017: International Convention on Shape, Solid, Structure, & Physical Modeling, Shape Modeling International (SMI-2017) Symposium. Berkeley, California, USA, 19–23 June 2017.
(poster).
We present an algorithm to compute the intersection of two 3D triangulated meshes. It has applications in CAD and Additive Manufacturing, and was developed to process the largest datasets quickly and correctly. The speed comes from simple regular data structures that parallelize very well. The correctness comes from using multiple-precision rational arithmetic to prevent roundoff errors and the resulting topological inconsistencies, and symbolic perturbation (simulation of simplicity) to handle special cases (geometric degeneracies). To simplify the symbolic perturbation, the algorithm employs only orientation predicates. This paper focuses on the challenges and solutions of implementing symbolic perturbation.
@inproceedings{mesh-orientation-s3pm-2017,
author = "Franklin, W. Randolph and de Magalhães, Salles V. G. and Andrade, Marcus V. A.",
title = "An exact and efficient {3D} mesh intersection algorithm using only orientation predicates",
booktitle = "{S3PM-2017}: International Convention on Shape, Solid, Structure, \& Physical Modeling, Shape Modeling International {(SMI-2017)} Symposium",
month = "19--23 June",
address = "Berkeley, California, USA",
host = "International Computer Science Institute",
year = "2017",
note = "(poster)",
href = "\bibhrefap{}{220-mesh-orientation-s3pm-2017.pdf}",
mykey = "parallel marcus salles overlay epug-overlay",
customlinkposter = "https://wrfranklin.org/p/220-mesh-orientation-s3pm-2017.pdf"
}
2016
- Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, Wenli Li, and Maurício Gouvêa Gruppi.
Exact intersection of 3D geometric models.
In Geoinfo 2016, XVII Brazilian Symposium on GeoInformatics. Campos do Jordão, SP, Brazil, November 2016. Instituto Nacional de Pesquisas Espaciais (Brasil).
This paper presents 3D-EPUG-OVERLAY, an exact and parallel algorithm for computing the intersection of 3D triangulated meshes, a problem with applications in several fields such as GIS and CAD. 3D-EPUG-OVERLAY presents several innovations: it employs exact arithmetic and, thus, the computations are completely exact, which avoids topological impossibilities that are often created by floating point arithmetic. Also, it uses a uniform grid to index the geometric data and was designed to be easily parallelizable. Furthermore, we propose the use of Simulation of Simplicity to effectively ensure that all the special cases are properly handled by the algorithm.
@inproceedings{salles-xsect-3d-geoinfo-2016,
author = "de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Li, Wenli and Gruppi, Maurício Gouvêa",
title = "Exact intersection of {3D} geometric models",
booktitle = "Geoinfo 2016, {XVII Brazilian} Symposium on GeoInformatics",
year = "2016",
month = "November",
address = "Campos do Jordão, SP, Brazil",
puburl = "\url{http://www.geoinfo.info/geoinfo\_series.htm}",
publisher = "Instituto Nacional de Pesquisas Espaciais (Brasil)",
mykey = "wenlili marcus salles parallel overlay epug-overlay",
href = "\bibhrefpt{217-salles-xsect-3d-geoinfo-2016}{217-salles-xsect-3d-geoinfo-2016-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/217-salles-xsect-3d-geoinfo-2016-talk.pdf"
}
Contact: Salles.
Grid-Gen is an efficient heuristic for map simplification. Grid-Gen deals with a variation of the generalization problem where the idea is to simplify the polylines of a map without changing the topological relationships between these polylines or between the lines and control points. Grid-Gen uses a uniform grid to accelerate the simplification process and can handle a map with more than 3 million polyline points and 10 million control points in 9 seconds on a Lenovo T430s laptop.
It was the fourth fastest algorithm in GISCUP 2014, but the only one to process all datasets without creating any topological errors.
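The uniform-grid filter can be sketched as follows (hypothetical code with made-up points and cell size): bucket the control points by cell, then test a proposed simplification segment only against points in the cells its bounding box covers, instead of against every point.

```python
from collections import defaultdict

# Bucket control points into a uniform grid of square cells.
def build_grid(points, cell):
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

# Candidate control points for a segment a-b: everything in the cells
# spanned by the segment's bounding box (a superset of the true hits).
def candidates(grid, cell, a, b):
    x0, x1 = sorted((a[0], b[0]))
    y0, y1 = sorted((a[1], b[1]))
    out = []
    for cx in range(int(x0 // cell), int(x1 // cell) + 1):
        for cy in range(int(y0 // cell), int(y1 // cell) + 1):
            out.extend(grid.get((cx, cy), []))
    return out

pts = [(0.5, 0.5), (3.2, 3.9), (9.1, 9.7)]
g = build_grid(pts, 2.0)
near = candidates(g, 2.0, (0.0, 0.0), (4.0, 4.0))
# only the two points inside the segment's cell range are returned
```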
2014
- Salles V. G. de Magalhães, W. Randolph Franklin, Wenli Li, and Marcus V. A. Andrade.
Fast map generalization heuristic with a uniform grid.
In 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2014). Dallas, Texas, USA, 4–7 Nov 2014.
@inproceedings{giscup-2014,
author = "de Magalhães, Salles V. G. and Franklin, W. Randolph and Li, Wenli and Andrade, Marcus V. A.",
title = "Fast map generalization heuristic with a uniform grid",
booktitle = "22nd {ACM SIGSPATIAL} International Conference on Advances in Geographic Information Systems {(ACM SIGSPATIAL 2014)}",
year = "2014",
month = "4--7 Nov",
address = "Dallas, Texas, USA",
href = "\bibhrefptp{185-acmgis2014-cup}{185-acmgis2014-cup-talk.ppt}{185-acmgis2014-cup-poster.pdf}",
wrfnote = "GISCup presentation"
}
Contact: Salles.
This 1973 Fortran IV program computes any of the 16 possible boolean combinations of two polygons. The polygons may have multiple separate loops of edges, which may be nested.
The algorithm starts at an intersection point between an edge of polygon A and an edge of polygon B.
It uses a decision table to pick which of the four possible directions to go. At each successive intersection, it also uses a decision table to pick the next direction.
Then it processes edge loops that have no intersections with the other polygon. Each loop is either included in its original direction, included in the other direction, or excluded.
Anotb source. Compiler errors are caused by the Fortran language's incompatible changes since I wrote this.
Contact: WRF
This 1972 program takes a set of edges that form a planar graph.
Consider a planar graph of vertices, edges, and faces. If all that is known is the vertex locations and which vertices are at the ends of each edge, then this program will find the faces.
Tessel source. Compiler errors are caused by the Fortran language's incompatible changes since I wrote this.
Contact: WRF
PNPOLY (1970) tests whether a test point is contained in a given polygon. It is only 8 lines of executable C code.
Code and extensive documentation.
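For illustration, a Python transliteration of the crossing-number test (the original is C): count the polygon edges that straddle the test point's horizontal line and lie to its right; an odd count means inside.

```python
# Point-in-polygon by crossing number, ported from the C routine.
def pnpoly(vertx, verty, testx, testy):
    inside = False
    j = len(vertx) - 1
    for i in range(len(vertx)):
        # Edge (j, i) straddles the ray's y, and its crossing point
        # lies strictly to the right of the test point.
        if ((verty[i] > testy) != (verty[j] > testy)) and \
           (testx < (vertx[j] - vertx[i]) * (testy - verty[i])
                    / (verty[j] - verty[i]) + vertx[i]):
            inside = not inside
        j = i
    return inside

square_x, square_y = [0.0, 2.0, 2.0, 0.0], [0.0, 0.0, 2.0, 2.0]
assert pnpoly(square_x, square_y, 1.0, 1.0)        # inside
assert not pnpoly(square_x, square_y, 3.0, 1.0)    # outside
```

Points exactly on an edge may report either side; the original documentation discusses that behavior in detail.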
Contact: WRF
NearptD is a very fast parallel nearest neighbor algorithm and implementation, which has processed \(10^7\) points in \(E^6\) and \(184\cdot10^6\) points in \(E^3\). It uses 1/5 the space and as little as 1/100 the preprocessing time of FLANN (a well-known approximate nearest neighbor program). Up to \(E^4\), its query time is also faster, by up to a factor of 100. NearptD uses Nvidia Thrust and CUDA in C++ to perform parallel preprocessing and querying of large point cloud data. Nearest neighbor searching is needed by many applications, such as collision detection, computer vision or machine learning. This implementation is an extension of the Nearpt3 algorithm performed in parallel on the GPU for a variable number of dimensions. NearptD shows that a uniform grid can outperform a kd-tree for preprocessing and searching large datasets.
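The uniform-grid query can be sketched in 2D (a hypothetical serial miniature; NearptD works in arbitrary dimension on the GPU): bucket points by cell, then search shells of cells outward from the query cell until the best distance found rules out every unsearched shell.

```python
from collections import defaultdict
import math

# Exact nearest neighbor with a uniform grid: search expanding square
# shells of cells; any point in an unsearched shell at ring r+1 is at
# least ring*cell away, so we can stop once best_d <= ring*cell.
class GridNN:
    def __init__(self, points, cell):
        self.cell, self.grid = cell, defaultdict(list)
        for p in points:
            self.grid[self.key(p)].append(p)

    def key(self, p):
        return tuple(int(c // self.cell) for c in p)

    def query(self, q):
        kq = self.key(q)
        best, best_d = None, float("inf")
        for ring in range(100):
            for kx in range(kq[0] - ring, kq[0] + ring + 1):
                for ky in range(kq[1] - ring, kq[1] + ring + 1):
                    if max(abs(kx - kq[0]), abs(ky - kq[1])) != ring:
                        continue                 # only this ring's shell
                    for p in self.grid.get((kx, ky), ()):
                        d = math.dist(p, q)
                        if d < best_d:
                            best, best_d = p, d
            if best is not None and best_d <= ring * self.cell:
                return best
        return best

nn = GridNN([(0.1, 0.1), (5.0, 5.0), (9.9, 9.9)], 1.0)
```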
2016
- David Hedin and W. Randolph Franklin.
Nearptd: a parallel implementation of exact nearest neighbor search using a uniform grid.
In Canadian Conference on Computational Geometry. Vancouver Canada, August 2016.
@inproceedings{hedin-nearptd-2016,
author = "Hedin, David and Franklin, W. Randolph",
title = "NearptD: A Parallel Implementation of Exact Nearest Neighbor Search using a Uniform Grid",
booktitle = "Canadian Conference on Computational Geometry",
year = "2016",
month = "August",
address = "Vancouver Canada",
href = "\bibhrefpt{211-hedin-nearptd-2016}{211-hedin-nearptd-2016-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/211-hedin-nearptd-2016-talk.pdf",
mykey = "nearpt parallel"
}
2005
- W. Randolph Franklin.
Nearpt3 — Nearest point query on 184M points in $E^3$ with a uniform grid.
In Proceedings of the 17th Canadian Conference on Computational Geometry (CCCG'05), 239–242. Windsor, Ontario, 10-12 August 2005.
@inproceedings{wrf-nearpt3-cccg-2005,
author = "Franklin, W. Randolph",
title = "Nearpt3 --- {Nearest} Point Query on {184M} Points in {$E^3$} with a Uniform Grid",
year = "2005",
booktitle = "Proceedings of the 17th {Canadian} Conference on Computational Geometry {(CCCG'05)}",
address = "Windsor, Ontario",
month = "{10-12 August}",
pages = "239--242",
mykey = "nearpt",
customlinkslides = "https://wrfranklin.org/p/105-nearpt3-talk.pdf",
href = "\bibhrefnpt{paper (current version)}{105-nearpt3}{105-nearpt3-talk.pdf}"
}
- W. Randolph Franklin.
Nearpt3 — Nearest point query on 184M points in $E^3$ with a uniform grid.
https://wrfranklin.org/Research/nearpt3/, 2005.
@misc{wrf-nearpt3-web,
author = "Franklin, W. Randolph",
year = "2005",
title = "Nearpt3 --- {Nearest} Point Query on {184M} Points in {$E^3$} with a Uniform Grid",
howpublished = "\url{https://wrfranklin.org/Research/nearpt3/}",
mykey = "nearpt"
}
2004
- W. Randolph Franklin.
Nearpt3 — Nearest point query in $E^3$ with a uniform grid (extended abstract).
In 14th Annual Fall Workshop on Computational Geometry. MIT, 20 Nov 2004.
@inproceedings{wrf-nearpt3-mit-2004,
author = "Franklin, W. Randolph",
title = "Nearpt3 --- {Nearest} Point Query in {$E^3$} with a Uniform Grid (extended abstract)",
booktitle = "14th Annual Fall Workshop on Computational Geometry",
address = "{MIT}",
month = "20 Nov",
year = "2004",
mykey = "nearpt",
customlinkslides = "https://wrfranklin.org/p/102-fwcg2004-nearpt3-talk.pdf",
href = "\bibhrefpt{102-fwcg2004-nearpt3}{102-fwcg2004-nearpt3-talk.pdf}"
}
PinMesh is a very fast algorithm with implementation to preprocess a polyhedral mesh, also known as a multi-material mesh, in order to perform 3D point location queries. PinMesh combines several innovative components to efficiently handle the largest available meshes. Because of a 2-level uniform grid, the expected preprocessing time is linear in the input size, and the code parallelizes well on a shared memory machine. Querying time is almost independent of the dataset size. PinMesh uses exact arithmetic with rational numbers to prevent roundoff errors, and symbolic perturbation with Simulation of Simplicity (SoS) to handle geometric degeneracies or special cases. PinMesh is intended to be a subroutine in more complex algorithms. It can preprocess a dataset and perform 1 million queries up to 27 times faster than RCT (Relative Closest Triangle), the current fastest algorithm. Preprocessing a sample dataset with 50 million triangles took only 14 elapsed seconds on a 16-core Xeon processor. The mean query time was 0.6 μs.
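The 2-level grid idea can be sketched as follows (a hypothetical 2D miniature with made-up thresholds, not the PinMesh code): bucket items into coarse cells, then re-bucket any overfull cell into a finer subgrid, keeping per-cell lists short without building a deep tree.

```python
from collections import defaultdict

# Build a 2-level uniform grid: coarse cells, refined where dense.
def two_level_grid(items, cell, threshold, sub=4):
    coarse = defaultdict(list)
    for x, y in items:
        coarse[(int(x // cell), int(y // cell))].append((x, y))
    refined = {}
    for key, bucket in coarse.items():
        if len(bucket) <= threshold:
            refined[key] = bucket            # sparse cell: plain list
        else:                                # dense cell: second-level grid
            fine = defaultdict(list)
            f = cell / sub
            for x, y in bucket:
                fine[(int(x // f), int(y // f))].append((x, y))
            refined[key] = dict(fine)
    return refined

pts = [(0.1, 0.1), (0.2, 0.3), (0.3, 0.1), (7.5, 7.5)]
g = two_level_grid(pts, 1.0, threshold=2)
# cell (0, 0) exceeded the threshold and became a sub-grid dict
```

Because each level is a flat array of cells, both construction and lookup are constant expected time per item, which is what keeps preprocessing linear and queries nearly independent of dataset size.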
2016
- Salles V. G. de Magalhães, Marcus V. A. Andrade, W. Randolph Franklin, and Wenli Li.
PinMesh – Fast and exact 3D point location queries using a uniform grid.
Computer & Graphics Journal, special issue on Shape Modeling International 2016, 58:1–11, August 2016.
(online 17 May). Awarded a reproducibility stamp, http://www.reproducibilitystamp.com/.
URL: http://www.sciencedirect.com/science/article/pii/S0097849316300607, doi:https://doi.org/10.1016/j.cag.2016.05.017.
@article{salles-pinmesh-smi-2016,
author = "de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Li, Wenli",
title = "{PinMesh} -- {Fast} and exact {3D} point location queries using a uniform grid",
journal = "{Computer \& Graphics Journal}, special issue on {Shape Modeling International} 2016",
year = "2016",
volume = "58",
pages = "1--11",
month = "August",
doi = "https://doi.org/10.1016/j.cag.2016.05.017",
url = "http://www.sciencedirect.com/science/article/pii/S0097849316300607",
href = "\bibhrefpt{209-salles-smi-pinmesh-2016}{209-salles-smi-pinmesh-2016-talk.pptx}",
note = "(online 17 May). Awarded a reproducibility stamp, \url{http://www.reproducibilitystamp.com/}.",
customlinkslides = "https://wrfranklin.org/p/209-salles-smi-pinmesh-2016-talk.pptx",
mykey = "salles marcus wenlili pinmesh parallel sallespinmeshsmi2016"
}
Reproducibility stamp.
Pinmesh code.
Contact: Salles.
UNION3 computes mass properties of the union of many identical axis-aligned cubes. This implementation has processed 20,000,000 cubes on a dual processor Pentium Xeon workstation in about one elapsed hour. The ideas would extend to the union of general polyhedra. UNION3 works by combining three techniques. The first is a set of local topological formulae for computing polyhedral mass properties. No global topology is used, such as complete face or edge information, or edge loops and face shells. For example, the volume of a multiple component, concave, rectilinear object is \(\sum_i s_i x_i y_i z_i\), summed over the vertices \((x_i, y_i, z_i)\), where each \(s_i\) is a function of the directions of the incident edges. The second technique is a 3-D grid of uniform cells, which is overlaid on the data. Each cell has a list of edges, faces, and cubes that overlap it. Only pairs of objects in the same cell are tested for intersection. Experimentally, the uniform grid is quite tolerant of varying densities in the input. The third technique, building on the second, is the covered-cell concept. When a cell is completely contained within a cube, its overlap list is cleared, and objects in it are not intersected against each other (unless they both also overlap the same other, noncovered, cell). These techniques combine to reduce the expected computation time to linear in the number of cubes, regardless of the number of intersections. In the slowest step of UNION3, which is testing pairs of objects in each cell for intersection, no inter-cell communication is required. Therefore UNION3 parallelizes quite well.
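The local vertex formula can be checked on a single axis-aligned box (illustrative only; UNION3 applies it to the output vertices of the whole union): the signed sum of \(s_i x_i y_i z_i\) over the eight corners reproduces the volume.

```python
from itertools import product

# Volume of the box [x0,x1] x [y0,y1] x [z0,z1] from its corners alone:
# sum of s * x * y * z with s = +1 when an even number of "low"
# coordinates meet at the corner (so the max corner gets +1), else -1.
def box_volume(x0, x1, y0, y1, z0, z1):
    vol = 0.0
    for a, b, c in product((0, 1), repeat=3):
        s = (-1) ** (a + b + c + 1)          # +1 at the max corner
        vol += s * (x0, x1)[a] * (y0, y1)[b] * (z0, z1)[c]
    return vol

v = box_volume(1.0, 2.0, 3.0, 5.0, 0.0, 4.0)   # (2-1)*(5-3)*(4-0) = 8
```

No face loops or shells are consulted: each corner contributes independently, which is what makes the formula local and trivially parallel.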
2013
- W. Randolph Franklin.
Parallel volume computation of massive polyhedron union.
In 23rd Fall Workshop on Computational Geometry. City College, New York City, USA, 25–26 Oct 2013.
(extended abstract).
We present a parallel implementation of UNION3, an algorithm for computing the volume, surface area, center of mass, or similar properties of a boolean combination of many polyhedra. UNION3 has been implemented on the special case of the union of congruent isothetic cubes, and tested on 100,000,000 cubes. The algorithm uses a volume formula that does not require first computing the explicit union. All that is needed is the set of output vertices, including each vertex's position and neighborhood. UNION3's computation does not use randomization or sampling, and its output is exact. The implementation is parallel, using OpenMP. When using 32 threads, UNION3's elapsed time is only a few minutes, and the speedup, compared to using one thread, is a factor of ten. When executed on uniform random i.i.d. input, UNION3's execution time is linear in the number of output vertices. UNION3 is intended as an example of a parallel geometry algorithm whose techniques should be broadly applicable.
@inproceedings{fwcg-2013,
author = "Franklin, W. Randolph",
title = "Parallel Volume Computation of Massive Polyhedron Union",
booktitle = "23rd Fall Workshop on Computational Geometry",
year = "2013",
month = "25--26 Oct",
address = "City College, New York City, USA",
note = "(extended abstract)",
href = "\bibhrefpt{171-fwcg2013-parallel-union}{171-fwcg2013-parallel-union-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/171-fwcg2013-parallel-union-talk.pdf",
mykey = "union3 topology parallel"
}
2005
- Wm. Randolph Franklin.
Mass properties of the union of millions of identical cubes.
In Ravi Janardan, Debashish Dutta, and Michiel Smid, editors, Geometric and Algorithmic Aspects of Computer Aided Design and Manufacturing, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, volume 67, pages 329–345.
American Mathematical Society, 2005.
@incollection{wrf-dimacs2005,
author = "Franklin, Wm. Randolph",
editor = "Janardan, Ravi and Dutta, Debashish and Smid, Michiel",
title = "Mass Properties of the Union of Millions of Identical Cubes",
booktitle = "Geometric and Algorithmic Aspects of Computer Aided Design and Manufacturing, {DIMACS} Series in Discrete Mathematics and Theoretical Computer Science",
publisher = "American Mathematical Society",
volume = "67",
year = "2005",
pages = "329--345",
mykey = "union3 topology",
succeeds = "wrf-dimacs2003",
customlinkslides = "https://wrfranklin.org/p/104-dimacs03-union-talk.pdf",
href = "\bibhrefpt{104-dimacs03-union}{104-dimacs03-union-talk.pdf}"
}
2004
- W. Randolph Franklin.
Analysis of mass properties of the union of millions of polyhedra.
In M. L. Lucian and M. Neamtu, editors, Geometric Modeling and Computing: Seattle 2003, pages 189–202.
Nashboro Press, Brentwood TN, 2004.
@incollection{wrf-union-analysis-2004,
author = "Franklin, W. Randolph",
editor = "Lucian, M. L. and Neamtu, M.",
title = "Analysis of Mass Properties of the Union of Millions of Polyhedra",
year = "2004",
booktitle = "Geometric Modeling and Computing: Seattle 2003",
publisher = "Nashboro Press, Brentwood TN",
pages = "189--202",
isbn = "0-0-9728482-3-1",
mykey = "union3 topology parallel",
href = "\bibhref{101-siam04-union}"
}
Contact: WRF
Connect is a very efficient implementation, running on a laptop computer, of the union-find algorithm for finding connected components when the input is a 3D box of more than 1000x1000x1000 binary voxels. Each voxel may be connected either to its 6 orthogonal neighbors, or to all 26 neighbors. Component properties, such as area and volume, are also computed. This implementation is fast enough to permit experimental studies of random connectivity.
Determining the connected components of a 2D or 3D array of bits is a required operation in domains such as the following:
Investigating the distribution of cracks in concrete as it is stressed to failure (which is what motivated this research),
Extraction of connected components in image processing, and
Determination of porosity of an underground fluid reservoir.
Connect is very space- and time-efficient. The largest test case had a universe of size 1024x1088x1088, which has 1,212,153,856 voxels, about 1/2 of them empty, and found 4,539,562 components. The implementation environment was a 2GHz IBM T43p laptop with SuSE linux and gcc 4.1.0. The virtual memory used, a function of the space preallocated for runs and components (sized for this example), was only 340MB. The program's elapsed time was equal to its CPU time, which was 51 CPU seconds. Smaller examples run proportionately faster, in perhaps 0.1 seconds for a 100x100x100 universe. Very complicated cases would run more slowly.
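A hypothetical 2D miniature of the run-based approach (not the Connect code): encode each row of a bit image as runs of set pixels, run union-find over runs rather than individual pixels, and union runs on adjacent rows whose column spans overlap.

```python
# Union-find over row runs, with path halving in find().
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def connected_components(rows):
    runs = []                            # (row, col_start, col_end)
    for r, row in enumerate(rows):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                runs.append((r, start, c - 1))
            else:
                c += 1
    parent = list(range(len(runs)))
    for i, (r1, s1, e1) in enumerate(runs):
        for j, (r2, s2, e2) in enumerate(runs):
            if r2 == r1 + 1 and s2 <= e1 and s1 <= e2:   # 4-adjacent spans
                parent[find(parent, i)] = find(parent, j)
    return len({find(parent, i) for i in range(len(runs))})

image = [[1, 1, 0, 1],
         [0, 1, 0, 0],
         [0, 0, 0, 1]]
n = connected_components(image)          # 3 components
```

For brevity this sketch compares all run pairs; the real program scans only adjacent rows, which is where the run encoding saves both time and space.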
2021
- W. Randolph Franklin, Salles Viana Gomes de Magalhães, and Eric N. Landis.
Fast 3-D Euclidean connected components.
In John Krumm, editor, 3rd ACM SIGSPATIAL International Workshop on Spatial Gems (SpatialGems 2021).
ACM, 2 Nov 2021.
URL: https://www.spatialgems.net/.
We present an efficient algorithm, with implementations, for computing the connected components within a 3-D cube of voxels, also known as the Euclidean union-find problem. There may be over 10^9 voxels. The components may be 6-connected or 26-connected. Computing connected components has applications ranging from material failure in concrete under increasing stress, to electrical conductivity in complex metal objects, to elasticity in 3-D printed parts. One key to efficiency is representing the voxels by 1-D runs of adjacent voxels. As a special case, 2-D connected components of images may easily be computed.
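The run representation mentioned in the abstract is easiest to illustrate in 2-D, the special case the abstract notes. The sketch below is hypothetical and not the paper's code: each image row is compressed into (start, end) runs, and union-find merges runs in adjacent rows whose column ranges overlap, so the work is proportional to the number of runs rather than the number of voxels.

```python
# Sketch: 4-connected components of a 2-D binary image via 1-D runs.

def rows_to_runs(row):
    """Compress one binary row into half-open (start, end) runs."""
    runs, start = [], None
    for i, bit in enumerate(list(row) + [0]):  # sentinel closes a trailing run
        if bit and start is None:
            start = i
        elif not bit and start is not None:
            runs.append((start, i))
            start = None
    return runs

def components_from_runs(image):
    """Count 4-connected components; image is a list of 0/1 rows."""
    runs = [rows_to_runs(row) for row in image]
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for r, row_runs in enumerate(runs):
        for k in range(len(row_runs)):
            parent[(r, k)] = (r, k)

    # Merge runs in consecutive rows whose column intervals overlap.
    for r in range(1, len(runs)):
        for k, (s, e) in enumerate(runs[r]):
            for j, (ps, pe) in enumerate(runs[r - 1]):
                if s < pe and ps < e:
                    a, b = find((r, k)), find((r - 1, j))
                    if a != b:
                        parent[a] = b
    return len({find(x) for x in parent})

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 0],
       [1, 0, 0, 0]]
n = components_from_runs(img)  # → 3 components
```

In 3-D the same idea applies, with each run compared against overlapping runs in the adjacent rows of the neighboring slices.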
@incollection{connect-gem-2021,
author = "Franklin, W. Randolph and de Magalhães, Salles Viana Gomes and Landis, Eric N",
editor = "Krumm, John",
title = "Fast {3-D} {Euclidean} connected components",
booktitle = "3rd {ACM SIGSPATIAL} International Workshop on Spatial Gems (SpatialGems 2021)",
publisher = "ACM",
year = "2021",
month = "2 Nov",
location = "Beijing, China (virtual)",
mykey = "recent salles landis connect topology 2021",
url = "https://www.spatialgems.net/",
href = "\bibhref{240-connect-gem-2021}"
}
2006
- W. Randolph Franklin and Eric Landis.
Connected components on 1000x1000x1000 datasets.
In 16th Fall Workshop on Computational Geometry. Smith College, Northampton, MA, 10-11 Nov 2006.
(extended abstract).
@inproceedings{wrf-connect-fwcg2006,
author = "Franklin, W. Randolph and Landis, Eric",
title = "Connected components on 1000x1000x1000 datasets",
booktitle = "16th Fall Workshop on Computational Geometry",
address = "Smith College, Northampton, MA",
month = "10-11 Nov",
year = "2006",
mykey = "connect",
note = "(extended abstract)",
customlinkslides = "https://wrfranklin.org/p/110-smith-fwcg06-connect-talk.pdf",
href = "\bibhrefat{110-smith-fwcg06-connect}{110-smith-fwcg06-connect-talk}"
}
- Eric N. Landis, Tong Zhang, Edwin N. Nagy, George Nagy, and W. Randolph Franklin.
Cracking, damage and fracture in four dimensions.
Materials and Structures, online date: 13 July 2006.
Abstract: Concrete and cracking are nearly synonymous despite our best efforts and intentions. Relationships between cracking and the stress states that lead to cracking can be instructive. In an effort to better understand these relationships, X-ray microtomography was used to make high-resolution three-dimensional digital images of small concrete specimens under load. Using 3D image analysis, quantitative measurements of internal crack growth were made that include effects of crack tortuosity, branching and microcracking. Successive images at different levels of cracking and damage provide us with a detailed picture of internal crack progression. When coupled with load-deformation response, bulk material properties such as fracture toughness or damage variables can be quantitatively linked with cracking. Measurements to date have shown distinct fracture regimes linked to crack formation and propagation. In addition, the crack measurements offer a way to provide a physical basis for a scalar damage variable.
@article{landis2006,
author = "Landis, Eric N. and Zhang, Tong and Nagy, Edwin N. and Nagy, George and Franklin, W. Randolph",
title = "Cracking, damage and fracture in four dimensions",
journal = "Materials and Structures",
publisher = "Springer",
year = "2006",
mykey = "connect",
month = "online date: 13 July",
href = "\bibhref{109-matstruct2006-cracking-landis}",
keywords = "Concrete - Fracture - Damage - Tomography",
publisher-url = "http://www.springerlink.com/content/ftgmk3m461716833/"
}
- W. Randolph Franklin.
CONNECT – find 3D connected components.
2006.
URL: https://wrfranklin.org/pmwiki/ConnectedComponents.
@misc{connect-web,
author = "Franklin, W. Randolph",
title = "{CONNECT} -- Find {3D} connected components",
url = "https://wrfranklin.org/pmwiki/ConnectedComponents",
mykey = "connect",
year = "2006"
}
2003
- Edwin Nagy, Tong Zhang, Wm Randolph Franklin, George Nagy, and E Landis.
3D analysis of tomographic images.
In 16th ASCE Engineering Mechanics Conference. U Washington, Seattle, 16-18 July 2003.
electronic proceedings.
@inproceedings{acse2003,
author = "Nagy, Edwin and Zhang, Tong and Franklin, Wm Randolph and Nagy, George and Landis, E",
title = "{3D} Analysis of Tomographic Images",
booktitle = "16th {ASCE} Engineering Mechanics Conference",
year = "2003",
address = "U Washington, Seattle",
month = "16-18 July",
mykey = "connect",
note = "electronic proceedings",
href = "\bibhref{98-acse03-3danalysis-nagy}"
}
2001
- G Nagy, T Zhang, WR Franklin, E Landis, E Nagy, and D Keane.
Volume and surface area distributions of cracks in concrete.
In C. Arcelli, L.P. Cordella, and G. Sanniti di Baja, editors, Visual Form 2001: 4th International Workshop on Visual Form IWVF4, volume 2051/2001 of Lecture Notes in Computer Science.
Springer-Verlag Heidelberg, Capri, Italy, 28-30 May 2001.
@incollection{capri01,
author = "Nagy, G and Zhang, T and Franklin, WR and Landis, E and Nagy, E and Keane, D",
editor = "Arcelli, C. and Cordella, L.P. and di Baja, G. Sanniti",
title = "Volume and Surface Area Distributions of Cracks in Concrete",
booktitle = "Visual Form 2001: 4th International Workshop on Visual Form {IWVF4}",
publisher = "Springer-Verlag Heidelberg",
year = "2001",
mykey = "connect",
volume = "2051/2001",
series = "Lecture Notes in Computer Science",
address = "Capri, Italy",
month = "28-30 May",
href = "\bibhrefpp{95-capri01-paper}{95-capri01-poster.gif}",
customlinkposter = "https://wrfranklin.org/p/95-capri01-poster.gif"
}