Geometry has been my overriding interest since high school in the 1960s. Geometry is the "branch of mathematics that deals with the measurement, properties, and relationships of points, lines, angles, surfaces, and solids" [1]. The Geo in geometry is from the Greek Γη, meaning earth, ground, or land [2]. One major project was Geo*, a DARPA-funded project for representing and operating on terrain, that is, elevation.
My big long-term unsolved problem is to devise a mathematics of terrain, which would respect its physical properties. To date, I've been nibbling around the edges.
More recently, a major theme has been parallel geometry algorithms.
I've applied the same underlying principles in Computational Geometry, producing algorithms useful for large datasets, mostly in 3D, and usually implemented.
Both topics are applications of my long-term theme of emphasizing small, simple, and fast data structures and algorithms. Note that efficiency in both space and time can become more important as machines get faster. This research is applicable to computational cartography, computer graphics, computational geometry, and geographic information science.
Eighteen PhD students (7 currently employed at a college) and 70 master's students have graduated under my advisement.
My research has been externally funded by the National Science Foundation under Grants ENG-7908139, ECS-8021504, ECS-8351942, CCF-9102553, CCF-0306502, DMS-0327634, CMMI-0835762, and IIS-1117277; by DARPA/DSO, via the NGA, under the GeoStar program; by the US Army Topographic Engineering Center; and by IBM, Sun Microsystems, and Schlumberger-Doll Research.
Many of the algorithms have been implemented. The code is available for nonprofit research and education.
The main goal is to overlay two 3D meshes to produce a new mesh, where each output tetrahedron is part of the intersection of two input tetrahedra, one from each input mesh. Secondary goals are to process meshes with tens of millions of tetrahedra with an expected time linear in the input size. We will achieve this by combining five techniques (the uniform grid is sketched after the list):
minimal geometric representations, for simplicity and parallelizability,
uniform grid, for fast intersection detection,
rational numbers, to prevent roundoff errors,
Simulation of Simplicity, to handle degeneracies, and
OpenMP, for parallel speedup.
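As an illustration of the uniform-grid idea (a simplified sketch of mine, not the project's code, which is C++ with OpenMP and exact rational arithmetic), each primitive is bucketed into every grid cell its bounding box overlaps, and only primitives sharing a cell are tested for intersection:

# Simplified uniform-grid sketch (Python) for finding candidate pairs of
# possibly intersecting axis-aligned boxes, standing in for tetrahedron
# bounding boxes.  Illustrative only.
from collections import defaultdict

def grid_candidates(boxes_a, boxes_b, cell):
    """boxes_*: lists of boxes ((xlo, ylo, zlo), (xhi, yhi, zhi)).
    Returns index pairs (i, j) whose boxes share at least one grid cell."""
    grid = defaultdict(lambda: ([], []))          # cell -> (ids from A, ids from B)
    def insert(boxes, side):
        for i, (lo, hi) in enumerate(boxes):
            for cx in range(int(lo[0] // cell), int(hi[0] // cell) + 1):
                for cy in range(int(lo[1] // cell), int(hi[1] // cell) + 1):
                    for cz in range(int(lo[2] // cell), int(hi[2] // cell) + 1):
                        grid[(cx, cy, cz)][side].append(i)
    insert(boxes_a, 0)
    insert(boxes_b, 1)
    pairs = set()
    for ids_a, ids_b in grid.values():
        for i in ids_a:
            for j in ids_b:
                pairs.add((i, j))                 # candidates only; an exact test follows
    return pairs

# Tiny example: one box from each mesh, overlapping.
a = [((0, 0, 0), (1, 1, 1))]
b = [((0.5, 0.5, 0.5), (1.5, 1.5, 1.5))]
print(grid_candidates(a, b, cell=1.0))            # {(0, 0)}

The grid only prunes the quadratic number of pairings; each surviving candidate pair still gets an exact intersection test, done with rational arithmetic in the real code so that roundoff cannot produce a wrong answer.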
We have already overlaid two 2D meshes (a.k.a. embedded planar graphs). Our big example combined US Water Bodies with US Block Boundaries, which total 54,000,000 vertices and 737,000 faces. This took only 149 elapsed seconds (plus 116 s for I/O).
We have also implemented PinMesh, which locates a point in a 3D
mesh, again combining the above five techniques. Its
preprocessing time is linear and query time constant. The
largest test case had 50M faces. Preprocessing took 14 seconds
on a dual 8-core Xeon, while querying averaged 0.6 microseconds
per point. This work won a reproducibility stamp at the International Geometry Summit 2016.
Both programs are freely available to other nonprofit researchers and educators. Both programs scale up and scale down. They could process orders-of-magnitude larger datasets, if those were available. On much smaller datasets, they achieve sub-second performance.
This is Salles Viana Gomes de Magalhães's PhD thesis.
The goal is to lossily compress 3D arrays of environmental data.
For a given maximum error, our compressed file is typically 1/3
the size of that produced by JP3D. The data size is up to
160×256×256. Our ideas also extend to 4D and higher-dimensional datasets.
The original motivation was to interpolate raster terrain
surfaces from elevation contours. Existing techniques had many
problems. The input contours were visible as terraces in the
output surface. Those techniques interpolated a surface between
two adjacent contours, so that nothing encouraged continuity of
slope across a contour.
My solution was ODETLAP, which expresses the surface as the solution of an overdetermined sparse linear system. The contours provide the known points; the surface between them provides the unknown points. For each unknown point, an equation is created making it the average of its four neighbors, as with a Laplacian. Each known point has that same equation (in contrast to a Laplacian) and also a second equation setting it equal to its known value. The two types of equations can be weighted to emphasize either smoothness or accuracy. Then a best fit is found for this overdetermined, inconsistent system.
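In symbols (my notation, not from the papers: z is the surface, h the known values, and R the relative weight just mentioned), the two equation types are

\[ 4\,z_{i,j} = z_{i-1,j} + z_{i+1,j} + z_{i,j-1} + z_{i,j+1} \qquad \text{(every point, known or unknown)} \]
\[ R\,z_{i,j} = R\,h_{i,j} \qquad \text{(known points only)} \]

Raising R emphasizes accuracy at the known points; lowering it emphasizes smoothness.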
Although inspired by a Laplacian, ODETLAP's properties are quite
different. E.g., local extrema are generated inside a set of
nested contours. Inconsistency is a powerful tool that is
underappreciated by other researchers.
The idea extends to higher-dimensional datasets. It exploits the data's autocorrelation in each dimension. The challenge with ODETLAP is that it is compute- and memory-intensive.
The compressed dataset is a subset of the original dataset's
points, selected greedily, and coded compactly. ODETLAP is used
to reconstruct the dataset from them.
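The following is a minimal, self-contained Python sketch of that idea (my simplification; the real ODETLAP codes are far larger, faster, and in later work GPU-accelerated): reconstruct from the current point set, add the points where the reconstruction errs most, and repeat.

# Minimal ODETLAP compression sketch: greedily keep the points whose omission
# hurts the reconstruction most.  Illustrative only.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def odetlap_reconstruct(shape, known, R=1.0):
    """known: dict {(i, j): value}.  Returns the least-squares surface."""
    nr, nc = shape
    idx = lambda i, j: i * nc + j
    A = lil_matrix((nr * nc + len(known), nr * nc))
    b = np.zeros(nr * nc + len(known))
    r = 0
    for i in range(nr):                          # averaging equation at every point
        for j in range(nc):
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            nbrs = [(p, q) for p, q in nbrs if 0 <= p < nr and 0 <= q < nc]
            A[r, idx(i, j)] = len(nbrs)
            for p, q in nbrs:
                A[r, idx(p, q)] = -1.0
            r += 1
    for (i, j), v in known.items():              # accuracy equation at known points
        A[r, idx(i, j)] = R
        b[r] = R * v
        r += 1
    z = lsqr(A.tocsr(), b)[0]
    return z.reshape(shape)

def greedy_compress(data, n_points, batch=10, R=1.0):
    """Greedily pick n_points sample points of `data` by reconstruction error."""
    nr, nc = data.shape
    known = {(0, 0): data[0, 0], (0, nc - 1): data[0, nc - 1],
             (nr - 1, 0): data[nr - 1, 0], (nr - 1, nc - 1): data[nr - 1, nc - 1]}
    while len(known) < n_points:
        recon = odetlap_reconstruct(data.shape, known, R)
        err = np.abs(recon - data)
        for flat in np.argsort(err, axis=None)[::-1]:    # worst-fit points first
            i, j = divmod(int(flat), nc)
            if (i, j) not in known:
                known[(i, j)] = data[i, j]
                if len(known) >= n_points or len(known) % batch == 0:
                    break
    return known

# Tiny example on a synthetic 12 x 12 surface:
y, x = np.mgrid[0:12, 0:12]
surface = np.sin(x / 3.0) + 0.5 * np.cos(y / 4.0)
pts = greedy_compress(surface, n_points=20)
recon = odetlap_reconstruct(surface.shape, pts)
print("kept", len(pts), "points, max abs error", float(np.abs(recon - surface).max()))

Coding the selected points compactly (their coordinates and values) is a separate step not shown here.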
This paper presents a system that sites (finds optimal locations for) thousands of radio transmitter towers on terrains with up to two billion elevation posts. Applications include cellphone towers, camera systems, and even mitigating environmental visual nuisances. The transmitters and receivers may be situated above the terrain. The system has been parallelized with OpenMP to run on a multicore CPU.
@misc{siting-thousands-2020,
author = "Franklin, W. Randolph and de Magalhães, Salles Viana Gomes and Li, Wenli",
title = "Siting thousands of radio transmitter towers on terrains with billions of points",
year = "2020",
eprint = "2006.16783",
archivePrefix = "arXiv",
primaryClass = "eess.SP",
note = "arXiv 2006.16783",
href = "\bibhref{237-siting-thousands-2020}",
mykey = "recent salles parallel visibility visviewshedsiting wenlili"
}
We present two fast parallel implementations of the Franklin-Vogt multiple observer siting algorithm, using either OpenMP or CUDA. In this problem, the objective is to site observers on a raster terrain such that the joint area visible by them is maximized. On a terrain with 16385×16385 cells, assuming that observers can see out to a radius-of-interest of 100 cells, finding the approximately 15000 observers that have the greatest coverage takes only 17s in CUDA. That is a factor of 70 speedup from the sequential version. The OpenMP version exhibits a factor of 17 speedup on a 16 core system. Applications for the multiple observer siting problem include radio transmission towers, environmental monitoring sites, and path planning for surveillance drones. The algorithm has four steps: finding the visibility indices of all points, selecting a candidate subset of potential top observers, finding each one's viewshed, and greedily constructing the final solution.
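To make the final greedy step concrete, here is a minimal sketch (mine, in Python; the Franklin-Vogt implementations operate on packed bitmaps and run under OpenMP or CUDA): repeatedly add the candidate observer whose viewshed covers the most cells not yet covered.

# Greedy final-selection sketch for multiple observer siting (step 4 above).
# Each candidate viewshed is a boolean grid of the cells that observer sees.
import numpy as np

def greedy_site(viewsheds, n_observers):
    """viewsheds: list of boolean arrays, one per candidate observer.
    Returns chosen observer indices and the jointly covered area."""
    covered = np.zeros_like(viewsheds[0], dtype=bool)
    chosen = []
    for _ in range(n_observers):
        gains = [np.count_nonzero(v & ~covered) for v in viewsheds]
        best = int(np.argmax(gains))
        if gains[best] == 0:              # nothing new is visible; stop early
            break
        chosen.append(best)
        covered |= viewsheds[best]
    return chosen, covered

# Tiny example with three overlapping square viewsheds on a 10 x 10 grid:
def square(i0, j0, size, shape=(10, 10)):
    v = np.zeros(shape, dtype=bool)
    v[i0:i0 + size, j0:j0 + size] = True
    return v

cands = [square(0, 0, 6), square(3, 3, 6), square(5, 0, 5)]
chosen, covered = greedy_site(cands, n_observers=2)
print(chosen, covered.sum())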
@article{wenli-gpu-siting-2016,
author = "Li, Wenli and Franklin, W. Randolph",
title = "{GPU}--Accelerated Multiple Observer Siting",
journal = "Photogrammetric Engineering \\& Remote Sensing",
year = "2017",
month = "June",
volume = "83",
number = "6",
pages = "439--446",
href = "\bibhref{213-wenli-gpu-siting-2016}",
doi = "https://doi.org/10.14358/PERS.83.6.439",
mykey = "wenlili visviewshedsiting gpusiting parallel"
}
We use an O(n log n) quadtree-forest algorithm to compute approximate horizons at all points of a DEM, achieving a speedup of more than 30 times on a GPU. The result of the algorithm is very close to that of the brute-force algorithm.
@inproceedings{wenli-gpu-horizons-fwcg-2016,
author = "Li, Wenli and Franklin, W. Randolph and de Magalhães, Salles V. G.",
title = "Computing approximate horizons on a {GPU}",
booktitle = "26th Fall Workshop on Computational Geometry",
year = "2016",
month = "27-28 Oct",
address = "{CUNY} Graduate Center, New York, USA",
note = "(extended abstract)",
href = "\bibhrefpt{216-wenli-gpu-horizons-fwcg-2016}{216-wenli-gpu-horizons-fwcg-2016-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/216-wenli-gpu-horizons-fwcg-2016-talk.pdf",
mykey = "visviewshedsiting wenlili salles"
}
This paper presents TiledVS, a fast external memory algorithm and implementation for computing viewsheds. TiledVS is intended for terrains that are too large for internal memory, even over 100,000 × 100,000 points. It subdivides the terrain into tiles that are stored compressed on disk and then paged into memory with a custom cache data structure and LRU algorithm. If there is sufficient available memory to store a whole row of tiles, which is easy, then this specialized data management is faster than relying on the operating system's virtual memory management. Applications of viewshed computation include siting radio transmitters, surveillance, and visual environmental impact measurement. TiledVS runs a rotating line of sight from the observer to points on the region boundary. For each boundary point, it computes the visibility of all the terrain points close to the line of sight. The running time is linear in the number of points. No terrain tile is read more than twice. TiledVS is very fast: for instance, processing a 104,000 × 104,000 terrain on a modest computer with only 512 MB of RAM took only 1½ hours. On large datasets, TiledVS was several times faster than competing algorithms such as the ones included in GRASS. The source code of TiledVS is freely available for nonprofit researchers to study, use, and extend. A preliminary version of this algorithm appeared in a 4-page ACM SIGSPATIAL GIS 2012 conference paper, “More efficient terrain viewshed computation on massive datasets using external memory”. This more detailed version adds the fast lossless compression stage that reduces the time by 30% to 40%, and many more experiments and comparisons.
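As a sketch of the paging idea (mine; TiledVS's actual cache also keeps the tiles compressed on disk and is written in C++), an LRU tile cache can be as simple as an ordered map keyed by tile coordinates that evicts the least recently used tile when full:

# Minimal LRU tile-cache sketch in the spirit of TiledVS's paging scheme.
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity, load_tile):
        self.capacity = capacity          # max tiles held in memory
        self.load_tile = load_tile        # callback: (row, col) -> tile data
        self.tiles = OrderedDict()        # insertion order == recency order

    def get(self, row, col):
        key = (row, col)
        if key in self.tiles:
            self.tiles.move_to_end(key)   # mark as most recently used
            return self.tiles[key]
        tile = self.load_tile(row, col)   # "disk" read on a miss
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least recently used tile
        return tile

# Tiny example with a fake loader that counts disk reads:
reads = []
cache = TileCache(capacity=2, load_tile=lambda r, c: reads.append((r, c)) or (r, c))
for key in [(0, 0), (0, 1), (0, 0), (0, 2), (0, 1)]:
    cache.get(*key)
print(len(reads))   # 4 reads: (0,1) was evicted and re-read; (0,0) hit once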
@article{chaulio-tiledvs-tsas-2016,
author = "Ferreira, Chaulio R. and Andrade, Marcus V. A. and de Magalhães, Salles V. G. and Franklin, W. Randolph",
title = "An efficient external memory algorithm for terrain viewshed computation",
href = "\bibhref{197-chaulio-tiledvs-tsas-2016}",
journal = "{ACM} Trans. on Spatial Algorithms and Systems",
year = "2016",
volume = "2",
number = "2",
doi = "https://doi.org/10.1145/2903206",
mykey = "salles marcus visviewshedsiting tiledvs visibility"
}
Max J Egenhofer, Keith C Clarke, Song Gao, Teriitutea Quesnot, W. Randolph Franklin, May Yuan, and David Coleman.
Contributions of GIScience over the past twenty years.
In Harlan Onsrud and Werner Kuhn, editors, Advancing Geographic Information Science: The Past and Next Twenty Years, chapter 1, pages 9–34.
GSDI association press, 2016.
This paper summarizes the discussions related to the panel “Contributions of GIScientists (or GIScience) over the past Twenty Years” at the 2015 Vespucci Institute. Reflections about the past not only provide an account of what occurred, but also may serve as a basis for comparison when somehow related scenarios arise in the future. Such histories may be detailed enumerations of chronological events or, more analytically, analyses of interactions that enabled or caused specific developments. The purpose of this paper is to account for some key developments in the academic field of geographic information science over the past twenty years (i.e., since 1995) and to assess some of the impact of these developments. The panel in Bar Harbor, moderated by David Coleman, included two invited presentations (by Max Egenhofer and Keith Clarke), and responses by two early career panelists (Song Gao and Teriitutea Quesnot) and by two senior panelists (Randolph Franklin and May Yuan). Keywords: Emergence of GIScience; short recent history; outlets of GIScience research; publication ranking; selected highlights of GIScience research; contributions to other disciplines; research topics that have disappeared; recently emerging topics.
@incollection{egenhofer-advancing-2016,
author = "Egenhofer, Max J and Clarke, Keith C and Gao, Song and Quesnot, Teriitutea and Franklin, W. Randolph and Yuan, May and Coleman, David",
editor = "Onsrud, Harlan and Kuhn, Werner",
title = "Contributions of {GIScience} over the Past Twenty Years",
booktitle = "Advancing Geographic Information Science: The Past and Next Twenty Years",
publisher = "{GSDI} association press",
year = "2016",
chapter = "1",
pages = "9--34",
isbn = "978-0-9852444-4-6",
mykey = "visviewshedsiting",
href = "\bibhref{206-egenhofer-advancing-maine-2016}"
}
This thesis extends Overdetermined Laplacian Partial Differential Equations (ODETLAP) for spatial data approximation and compression and parallelizes multiple observer siting on terrain, using General-Purpose Computing on Graphics Processing Units (GPGPU). Both ODETLAP compression and multiple observer siting use greedy algorithms that are parallelizable within iterations but sequential between iterations. They also demonstrate terrain-related research and applications that benefit from GPU acceleration and showcase the achievable speedups. ODETLAP approximation approximates a spatial dataset from scattered data points on a regular grid by solving an overdetermined system of linear equations that minimizes the absolute Laplacian of the approximation and the value errors of the data points. We show that ODETLAP approximation is a linear operator and is comparable in accuracy with natural neighbor interpolation and the multiquadric-biharmonic method. Using ODETLAP approximation to approximate spatial datasets, ODETLAP compression compresses a dataset as a set of known points and their values and decompresses it as an approximation from the known points. We implement ODETLAP approximation and compression using the CUSP library and the speedup is 8 times on a GPU. We design multiple algorithms to improve the accuracy of ODETLAP compression using a limited number of known points and use them to compress 2D terrain datasets and 3D MRI datasets. The results show that ODETLAP compression is 30% to 50% better in minimizing the maximum absolute error than JPEG 2000 for the terrain datasets and 40% to 60% better than JP3D for the MRI datasets. To further increase speed, we design a segmented ODETLAP compression algorithm and use it to compress larger 3D atmospheric and MRI datasets. The results show that the compressed size of the dataset is 60% that of JP3D for the same maximum absolute error. Multiple observer siting places multiple observer points above a terrain to maximize the total area visible from at least one observer. The algorithm first selects a set of highly visible points as tentative observers, and then iteratively selects observers to maximize the cumulative viewshed. We improve the time and space complexities of the algorithm and parallelize it using CUDA on a GPU and using OpenMP on multi-core CPUs. The speedup is up to 60 times on the GPU and 16 times on two CPUs with 16 cores.
@phdthesis{wenli-phd,
author = "Li, Wenli",
title = "{GPU}-accelerated terrain processing",
school = "Rensselaer Polytechnic Institute",
mykey = "wenlili visviewshedsiting gpusiting parallel theses",
year = "2016"
}
This paper presents EPLSimp, an algorithm for map generalization that avoids the creation of topological inconsistencies. EPLSimp is based on Visvalingam-Whyatt's (VW) algorithm, in which the least “important” points are removed first. Unlike VW's algorithm, when a point is deleted, a verification is performed to check whether the deletion would create topological inconsistencies. This is done by using arbitrary-precision rational numbers to completely avoid errors caused by floating-point arithmetic. EPLSimp was carefully implemented to be efficient, although using rational numbers adds an overhead to the computation. This efficiency was achieved by using a uniform grid for indexing the geometric data and parallel computing to speed up the process.
@inproceedings{mauricio-rational-geoinfo-2015,
author = "Gruppi, Maurício Gouvêa and de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Li, Wenli",
title = "Using Rational Numbers and Parallel Computing to Efficiently Avoid Round-Off Errors on Map Simplification",
booktitle = "Geoinfo 2015, {XVI Brazilian} Symposium on GeoInformatics",
year = "2015",
month = "29 Nov -- 2 Dec",
address = "Campos do Jordão, SP, Brazil",
mykey = "marcus wenlili salles visviewshedsiting",
href = "\bibhrefpt{198-mauricio-rational-geoinfo-2015}{198-mauricio-rational-geoinfo-2015-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/198-mauricio-rational-geoinfo-2015-talk.pdf"
}
2014
Wenli Li, W. Randolph Franklin, Daniel N. Benedetti, and Salles V. G. de Magalhães.
Parallel multiple observer siting on terrain.
In 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2014). Dallas, Texas, USA, 4–7 Nov 2014.
This paper presents the optimization and parallelization of the multiple observer siting program, originally developed by Franklin and Vogt. Siting is a compute-intensive application with a large amount of inherent parallelism. The advantage of parallelization is not only a faster program but also the ability to solve bigger problems. We have parallelized the program using two different techniques: OpenMP, using multi-core CPUs, and CUDA, using a general purpose graphics processing unit (GPGPU). Experimental results show that both techniques are very effective. Using the OpenMP program, we are able to site tens of thousands of observers on a 16385 × 16385 terrain in less than 2 minutes, on our workstation with two CPUs and one GPU. The CUDA program achieves the same in about 30 seconds.
@inproceedings{li-acmgis-2014,
author = "Li, Wenli and Franklin, W. Randolph and Benedetti, Daniel N. and de Magalhães, Salles V. G.",
title = "Parallel Multiple Observer Siting on Terrain",
booktitle = "22nd {ACM SIGSPATIAL} International Conference on Advances in Geographic Information Systems {(ACM SIGSPATIAL 2014)}",
year = "2014",
month = "4--7 Nov",
href = "\bibhrefpp{182-acmgis2014-par-siting}{182-acmgis2014-par-siting-poster.pdf}",
address = "Dallas, Texas, USA",
customlinkposter = "https://wrfranklin.org/p/182-acmgis2014-par-siting-poster.pdf",
mykey = "benedetti salles wenlili visviewshedsiting gpusiting parallelsiting parallel"
}
This paper proposes an efficient parallel heuristic for siting observers on raster terrains. More specifically, the goal is to choose the smallest set of points on a terrain such that observers located at these points are able to visualize at least a given percentage of the terrain. This problem is NP-hard and has several applications, such as determining the best places to position (site) communication or monitoring towers on a terrain. Since siting observers is a massive operation, it requires a huge amount of processing time even to obtain an approximate solution with a heuristic. This is even more evident when processing the high-resolution terrains that have become available due to modern data acquisition technologies such as LIDAR and IFSAR. Our new implementation uses dynamic programming and CUDA to accelerate the swap local search heuristic, which was proposed in previous works. Also, to efficiently use the parallel computing resources of GPUs, we adapted some techniques previously developed for sparse-dense matrix multiplication. We compared this new method with previous parallel implementations, and the new method is much more efficient than the previous ones. It can process much larger terrains (the older methods are restrictive about terrain size) and it is faster.
@inproceedings{bigspatial-2014,
author = "Pena, Guilherme and de Magalhães, Salles and Andrade, Marcus and Franklin, Randolph and Ferreira, Chaulio and Li, Wenli and Benedetti, Daniel",
title = "An efficient {GPU} multiple-observer siting method based on sparse-matrix multiplication",
booktitle = "3rd {ACM SIGSPATIAL} International Workshop on Analytics for Big Geospatial Data (BigSpatial) 2014",
year = "2014",
month = "4 Nov",
address = "Dallas TX USA",
href = "\bibhrefpt{191-bigspatial-sparse-siting-2014}{191-bigspatial-sparse-siting-2014-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/191-bigspatial-sparse-siting-2014-talk.pdf",
mykey = "visviewshedsiting benedetti marcus wenlili salles parallel"
}
Viewshed (or visibility map) computation is an important component in many GIScience applications and, since huge volumes of terrain data are now available at high resolutions, it is important to develop efficient algorithms to process these data. Since the main improvements in modern processors come from multi-core architectures, parallel programming provides a promising means for developing faster algorithms. In this paper, we describe a new parallel algorithm based on the model proposed by Van Kreveld. Our algorithm uses the shared memory model, which is relatively cheap and supported by most current processors. Experiments have shown that, with 16 parallel cores, our implementation was up to 12 times faster than the serial implementation, and up to 3.9 times faster using four parallel cores, which is almost optimal speedup.
@article{chaulio-jidm-2014,
author = "Ferreira, Chaulio R. and Andrade, Marcus V. A. and de Magalhães, Salles V. G. and Franklin, W. R. and Pena, Guilherme C.",
title = "A parallel algorithm for viewshed computation on grid terrains",
journal = "Journal of information and data management",
year = "2014",
volume = "5",
number = "1",
note = "invited",
href = "\bibhref{180-jidm2014-chaulio-parallel-viewshed}",
mykey = "visviewshedsiting marcus salles parallel"
}
This paper presents an efficient method to determine a set of observers (that is, where to site them) such that a given percentage of a terrain is visually covered. Our method extends the method proposed in (Franklin, 2002) with a local search heuristic efficiently implemented using dynamic programming and GPU parallel programming. This local search strategy achieves higher coverage using the same number of observers as the original method, and thus it is possible to obtain a given coverage using a smaller number of observers. This can be an important improvement, since an observer can represent an expensive facility such as a telecommunication tower. The proposed method's performance was compared with that of other methods, and the tests showed that it can be more than 1200 times faster than the sequential implementation (with no use of dynamic programming and no GPU parallel programming) and, also, more than 20 times faster than a previous parallel method presented in (de Magalhaes et al., 2011).
@inproceedings{iceis-2014,
author = "Pena, Guilherme C. and Andrade, Marcus V.A. and de Magalhães, Salles V.G. and Franklin, W. R. and Ferreira, Chaulio R.",
title = "An Improved Parallel Algorithm using {GPU} for Siting Observers on Terrain",
booktitle = "16th International Conference on Enterprise Information Systems ({ICEIS} 2014)",
year = "2014",
month = "27--30 April",
address = "Lisbon",
pages = "367--375",
doi = "https://doi.org/10.5220/0004884303670375",
href = "\bibhrefpt{177-ICEIS2014-pena-site-gpu}{177-ICEIS2014-pena-site-gpu-talk.pdf}",
customlinkslides = "https://wrfranklin.org/p/177-ICEIS2014-pena-site-gpu-talk.pdf",
mykey = "visviewshedsiting gpusiting marcus salles parallel"
}
2013
Chaulio R. Ferreira, Marcus V. A. Andrade, Salles V. G. de Magalhães, W. R. Franklin, and Guilherme C. Pena.
A parallel sweep line algorithm for visibility computation.
In Geoinfo 2013, XIV Brazilian Symposium on GeoInformatics. Campos do Jordão, SP, Brazil, 24–27 Nov 2013.
Winner of best paper award (http://www.geoinfo.info/geoinfo2013/index.php).
This paper describes a new parallel raster terrain visibility (or viewshed) algorithm, based on the sweep-line model of [Van Kreveld 1996]. Computing the terrain visible from a given observer is required by many GIS applications, ranging from radio tower siting to aesthetics. Processing the newly available higher-resolution terrain data requires faster architectures and algorithms. Since the main improvements in modern processors come from multi-core architectures, parallel programming provides a promising means for developing faster algorithms. Our algorithm uses the economical and widely available shared memory model with OpenMP. Experimentally, our parallel speedup is almost linear. On 16 parallel processors, our algorithm is up to 12 times faster than the serial implementation.
@inproceedings{chaulio-geoinfo-2013,
author = "Ferreira, Chaulio R. and Andrade, Marcus V. A. and de Magalhães, Salles V. G. and Franklin, W. R. and Pena, Guilherme C.",
title = "A Parallel Sweep Line Algorithm for Visibility Computation",
booktitle = "Geoinfo 2013, {XIV Brazilian} Symposium on GeoInformatics",
year = "2013",
month = "24--27 Nov",
address = "Campos do Jordão, SP, Brazil",
href = "\bibhref{174-geoinfo2013-chaulio-sweep}",
note = "Winner of best paper award, \url{http://www.geoinfo.info/geoinfo2013/index.php}",
mykey = "marcus salles visviewshedsiting parallel chauliogeoinfo2013"
}
We present a better algorithm and implementation for external memory viewshed computation. It is about four times faster than the most recent and most efficient published methods. Ours is also much simpler. Since processing large datasets can take hours, this improvement is significant. To reduce the total number of I/O operations, our method is based on subdividing the terrain into blocks which are stored in a special data structure managed as a cache memory. The viewshed is that region of the terrain that is visible by a fixed observer, who may be on or above the terrain. Its applications range from visual nuisance abatement to radio transmitter siting and surveillance.
@inproceedings{ferreira-acmgis-2012,
author = "Ferreira, Chaulio R. and de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph and Pompermayer, André M.",
title = "More efficient terrain viewshed computation on massive datasets using external memory",
booktitle = "20th {ACM} {SIGSPATIAL} International Conference on Advances in Geographic Information Systems ({ACM SIGSPATIAL GIS} 2012)",
year = "2012",
address = "Redondo Beach, CA",
month = "6--9 Nov",
href = "\bibhrefpp{159-acmgis2012-viewshed}{159-acmgis2012-viewshed-poster.pdf}",
customlinkposter = "https://wrfranklin.org/p/159-acmgis2012-viewshed-poster.pdf",
mykey = "visviewshedsiting marcus salles"
}
This paper presents a heuristic method to give an approximate solution to the observer siting problem on high-resolution terrains that are too large to be processed in internal memory. Informally, the problem is to determine an optimal positioning of as few observers as possible so as to observe as many target points as possible. Tests have shown that the proposed heuristic can solve this problem using, on average, fifteen percent fewer observers than another heuristic described in the literature. This will permit more efficient positioning of facilities such as mobile phone towers, fire observation towers, and vigilance systems.
@article{magalhaes-ijcisim-2011,
author = "de Magalhães, Salles V. G. and Andrade, Marcus V. A. and Franklin, W. Randolph",
title = "Multiple Observer Siting in Huge Terrains Stored in External Memory",
journal = "International Journal of Computer Information Systems and Industrial Management (IJCISIM)",
year = "2011",
volume = "3",
issn = "2150-7988",
href = "\bibhref{148-ijcisim11-emsite-magalhaes}",
mykey = "visviewshedsiting marcus salles"
}
The recent availability of detailed geographic data permits terrain applications to process large areas at high resolution. However, the required massive data processing presents significant challenges, demanding algorithms optimized for both data movement and computation. One such application is viewshed computation, that is, determining all the points visible from a given point p. In this paper, we present an efficient algorithm to compute viewsheds on terrain stored in external memory. In the usual case where the observer's radius of interest is smaller than the terrain size, the algorithm complexity is Θ(scan(n²)), where n² is the number of points in an n×n DEM and scan(n²) is the minimum number of I/O operations required to read n² contiguous items from external memory. This is much faster than existing published algorithms.
@article{andrade-geoinfo-ext-viewshed-2008,
author = "Andrade, Marcus V. A. and de Magalhães, Salles V. G. and de Magalhães, Mirella A. and Franklin, W. Randolph and Cutler, Barbara M.",
title = "Efficient viewshed computation on terrain in external memory",
journal = "Geoinformatica",
year = "2010",
href = "\bibhref{126-andrade-geoinfo-ext-viewshed-2008}",
URL = "http://www.springerlink.com/content/p1783648185g1252/",
DOI = "https://doi.org/10.1007/s10707-009-0100-9",
mykey = "visviewshedsiting marcus salles",
note = "(online 26 Nov 2009)"
}
We present the GeoStar project at RPI, which researches various terrain (i.e., elevation) representations and operations thereon. This work is motivated by the large amounts of hi-res data now available. The purpose of each representation is to lossily compress terrain while maintaining important properties. Our ODETLAP representation generalizes a Laplacian partial differential equation by using two inconsistent equations for each known point in the grid, as well as one equation for each unknown point. The surface is reconstructed from a carefully chosen small set of known points. Our second representation segments the terrain into a set of regions, each of which is simply described. Our third representation has the most long-term potential: scooping, which forms the terrain by emulating surface water erosion. Siting hundreds of observers, such as border guards, so that their viewsheds jointly cover the maximum terrain is our first operation. This process allows both observer and target to be above the local terrain, and the observer to have a finite radius of interest. Planning a path so that a smuggler may get from point A to point B while maximally avoiding the border guards is our second operation. The path metric includes path length, distance traveled uphill, and amount of time visible to a guard. The quality of our representations is determined not only by their RMS elevation error, but by how accurately they support these operations.
@inproceedings{acmgis07,
author = "Franklin, W Randolph and Inanc, Metin and Xie, Zhongyi and Tracy, Daniel M. and Cutler, Barbara and Andrade, Marcus V A and Luk, Franklin",
title = "Smugglers and border guards -- the {GeoStar} project at {RPI}",
booktitle = "15th {ACM} International Symposium on Advances in Geographic Information Systems {(ACM GIS 2007)}",
year = "2007",
address = "Seattle, WA, USA",
month = "Nov",
customlinkslides = "https://wrfranklin.org/p/115-seattle07-acmgis-smugglers-talk.ppt",
mykey = "pathplan visviewshedsiting marcus tracy inanc xie",
href = "\bibhrefpt{115-seattle07-acmgis-smugglers}{115-seattle07-acmgis-smugglers-talk.ppt}"
}
Daniel M. Tracy, W. Randolph Franklin, Barbara Cutler, Marcus A Andrade, Franklin T Luk, Metin Inanc, and Zhongyi Xie.
Multiple observer siting and path planning on lossily compressed terrain.
In Proceedings of SPIE Vol. 6697 Advanced Signal Processing Algorithms, Architectures, and Implementations XVII. San Diego CA, 27 August 2007. International Society for Optical Engineering.
paper 6697-16.
We examine a smugglers and border guards scenario. We place observers on a terrain so as to optimize their visible coverage area. Then we compute a path that a smuggler would take so as to avoid detection, while also minimizing the path length. We also examine how our results are affected by using a lossy representation of the terrain instead. We propose three new application-specific error metrics for evaluating terrain compression. Our target terrain applications are the optimal placement of observers on a landscape and the navigation through the terrain by smugglers. Instead of using standard metrics such as average or maximum elevation error, we seek to optimize our compression on the specific real-world application of smugglers and border guards.
@inproceedings{dt-wrf-spie-2007,
author = "Tracy, Daniel M. and Franklin, W. Randolph and Cutler, Barbara and Andrade, Marcus A and Luk, Franklin T and Inanc, Metin and Xie, Zhongyi",
title = "Multiple observer siting and path planning on lossily compressed terrain",
booktitle = "Proceedings of {SPIE} Vol. 6697 Advanced Signal Processing Algorithms, Architectures, and Implementations {XVII}",
year = "2007",
month = "27 August",
organization = "International Society for Optical Engineering",
address = "San Diego CA",
note = "paper 6697-16",
mykey = "inanc tracy marcus xie pathplan visviewshedsiting",
href = "\bibhref{114-spie07-tracy}"
}
@incollection{wrf-sdh2006,
author = "Franklin, W. Randolph and Vogt, Christian",
editor = "Riedl, Andreas and Kainz, Wolfgang and Elmes, Gregory",
title = "Tradeoffs when multiple observer siting on large terrain cells",
booktitle = "Progress in Spatial Data Handling: 12th International Symposium on Spatial Data Handling",
pages = "845--861",
publisher = "Springer",
year = "2006",
note = "ISBN 978-3-540-35588-5",
address = "Vienna",
mykey = "visviewshedsiting",
customlinkslides = "https://wrfranklin.org/p/112-sdh06-vienna-siting-tradeoffs-talk.ppt",
href = "\bibhrefpt{112-sdh06-vienna-siting-tradeoffs}{112-sdh06-vienna-siting-tradeoffs-talk.ppt}"
}
This paper refines our testbed implementation of a suite of programs for fast viewshed computation, fast approximate visibility index determination, and siting multiple observers jointly to cover terrain. We process DEMs up to 2402 × 2402, while executing so quickly that multiple experiments are easily possible. Both the observer and target may be at a given fixed height above the terrain. We conclude that estimating the visibility index using 20-30 random targets per observer is a good compromise between speed and quality. When forcing the selection of top observers to be well spaced out, subdividing the cell into such small blocks that only 2-5 observers are selected per block is best. Applications of multiple observer siting include radio towers, terrain observation, and mitigation of environmental visual nuisances.
@inproceedings{wrf-siting-apr2004,
author = "Franklin, W. Randolph and Vogt, Christian",
title = "Efficient observer siting on large terrain cells (extended abstract)",
booktitle = "{GIScience} 2004: Third International Conference on Geographic Information Science",
year = "2004",
address = "U Maryland College Park",
month = "20--23 Oct",
mykey = "visviewshedsiting",
customlinkslides = "https://wrfranklin.org/p/103-collegepark-giscience2004-siting-talk.pdf",
href = "\bibhrefpt{103-collegepark-giscience2004-siting}{103-collegepark-giscience2004-siting-talk.pdf}"
}
@misc{wrf-siting-web,
author = "Franklin, W. Randolph",
title = "Terrain observer siting research web site",
howpublished = "\url{https://wrfranklin.org/Research/siting/}",
mykey = "visviewshedsiting",
year = "2004"
}
We describe two current projects with our toolkit for siting multiple observers on terrain. (Both observers and targets are at some specified height above ground level. Observers can see targets, when not hidden by the terrain, out to a specified radius of interest.) Siting the observers so that they are intervisible, i.e., so that the visibility graph is a connected set, is the first project. The second project tests the effect, on the optimality of the multiple observer siting (without intervisibility), of reducing the map cell's horizontal or vertical resolution. We lowered the resolution, sited observers optimally, then computed those observers' joint visibility index on the hi-res data. We observed that much less precise vertical resolution is OK, but that reducing the horizontal resolution by even a factor of two leads to an observer siting with a significantly reduced joint visibility index, when evaluated on the hi-res data. Applications of multiple observer siting include siting radio towers and mitigating visual nuisances.
@inproceedings{wrf-cv-siting-isprs,
author = "Franklin, W. Randolph and Vogt, Christian",
title = "Multiple observer siting on terrain with intervisibility or lo-res data",
booktitle = "{XX}th Congress, International Society for Photogrammetry and Remote Sensing",
year = "2004",
address = "Istanbul",
month = "12-23 July",
customlinkposter = "https://wrfranklin.org/p/100-istanbul-isprs04-siting-poster.pdf",
mykey = "visviewshedsiting",
href = "\bibhrefpp{100-istanbul-isprs04-siting}{100-istanbul-isprs04-siting-poster.jpg}"
}
2002
W. Randolph Franklin.
Siting observers on terrain.
In Dianne Richardson and Peter van Oosterom, editors, Advances in Spatial Data Handling: 10th International Symposium on Spatial Data Handling, 109–120. Springer-Verlag, 2002.
This paper presents an experimental study of a new algorithm that synthesizes separate programs, for fast viewshed and for fast approximate visibility index determination, into a working testbed for siting multiple observers jointly to cover terrain from a full level-1 DEM, and to do it so quickly that multiple experiments are easily possible. Both the observer and target may be at a given fixed distance above the terrain. The process operates as follows. (1) An approximate visibility index is calculated for each point in the cell under consideration. (2) A set of tentative observers is selected from the highly visible points. (3) The actual observers are selected from them, so as to cover as much of the cell as possible, using a greedy algorithm. Various experiments with varying parameters were performed on the Lake Champlain West cell, with observations such as the following. (1) Forcing tentative observers to be well spaced was more important than using the most visible tentative observers. (2) Most of the new observers added (because they covered the most unseen points) were already visible to an existing observer. (3) Randomly deleting many tentative observers before final selection didn't reduce the final area covered.
@inproceedings{wrf-site,
author = "Franklin, W. Randolph",
editor = "Richardson, Dianne and van Oosterom, Peter",
title = "Siting observers on terrain",
booktitle = "Advances in Spatial Data Handling: 10th International Symposium on Spatial Data Handling",
publisher = "Springer-Verlag",
pages = "109--120",
year = "2002",
href = "\bibhref{97-ottawa-sdh02-siting}",
isbn = "2-540-43802-5",
mykey = "visviewshedsiting"
}
Wm Randolph Franklin and Clark Ray.
Higher isn't necessarily better: visibility algorithms and experiments.
In Thomas C. Waugh and Richard G. Healey, editors, Advances in GIS Research: Sixth International Symposium on Spatial Data Handling, 751–770. Edinburgh, 5–9 Sept 1994. The International Geographical Union's Commission on Geographical Information Systems and The Association for Geographic Information, Taylor & Francis.
We describe several fast programs to compute viewsheds and weighted visibility indices for observation points in a raster terrain. These programs explore various tradeoffs between speed and accuracy. We have analyzed many cells of data; there is no strong correlation between a point's elevation and its weighted visibility index. However, the very few high-visibility points tend to characterize features of the terrain. This work can form a basis for automatically locating observers jointly to cover a terrain region of interest.
@inproceedings{fr-hinbv-94,
author = "Franklin, Wm Randolph and Ray, Clark",
editor = "Waugh, Thomas C. and Healey, Richard G.",
title = "Higher isn't Necessarily Better: Visibility Algorithms and Experiments",
booktitle = "Advances in {GIS} Research: Sixth International Symposium on Spatial Data Handling",
organization = "The International Geographical Union's Commission on Geographical Information Systems and The Association for Geographic Information",
year = "1994",
month = "5--9 Sept",
address = "Edinburgh",
publisher = "Taylor \\& Francis",
pages = "751--770",
mykey = "visviewshedsiting",
href = "\bibhref{84-edinburgh-sdh94-higher-not-better}"
}
W. Randolph Franklin.
Prism — a prism plotting program.
In Allan H. Schmidt, editor, Mapping Software and Cartographic Data Bases, Harvard Library of Computer Mapping, 1979 collection, pages 75–79.
1979.
@incollection{wrf-prism-hu,
author = "Franklin, W. Randolph",
editor = "Schmidt, Allan H.",
title = "Prism --- A Prism Plotting Program",
booktitle = "Mapping Software and Cartographic Data Bases",
year = "1979",
pages = "75--79",
mykey = "vis3dobj",
series = "Harvard Library of Computer Mapping, 1979 collection"
}
@inproceedings{fl-3gdds-78,
author = "Franklin, Wm Randolph and Lewis, Harry R.",
title = "3-{D} Graphic Display of Discrete Spatial Data By Prism Maps",
booktitle = "Proc. {SIGGRAPH}'78",
journal = "Computer Graphics",
volume = "12(3)",
month = "August",
year = "1978",
pages = "70--75",
keywords = "computer graphics, hidden line/surface elimination",
mykey = "vis3dobj",
href = "\bibhref{5b-sig78-prism}"
}
Wm Randolph Franklin.
Combinatorics of hidden surface algorithms.
Technical Report, Center for Research in Computing Technology, Harvard Univ., Cambridge, MA, June 1978.
@phdthesis{f-chsa-78,
author = "Franklin, Wm Randolph",
title = "Combinatorics of hidden surface algorithms",
type = "Technical {Report}",
number = "TR-12-78",
school = "Center for Research in Computing Technology, Harvard Univ.",
address = "Cambridge, MA",
year = "1978",
month = "June",
oldlabel = "geom-221",
href = "\bibhrefthesis{0-harvard78-thesis}",
comments = "Thesis w/o appendices available as a TR from CRCT. Complete thesis available only on microfilm from Harvard, since Harvard did not then subscribe to UMI.",
mykey = "vis3dobj",
keywords = "computer graphics, hidden line/surface elimination"
}