
Aaron Marburg

Senior Electrical/Computer Engineer

Dr. Marburg's research focuses on the development of robotic platforms for ocean exploration and science, with particular emphasis on perception, situational awareness, and mission planning. He also has a background in remote sensing, photogrammetry, and precision navigation, and a strong interest in human-machine interfaces and in data and metadata management. He has over 15 years of experience in electrical and software design for robotics, scientific instrumentation, and high-performance computing. Dr. Marburg joined APL-UW as a SEED postdoctoral researcher in 2015 after completing his Ph.D. at the University of Canterbury in Christchurch, New Zealand.

Department Affiliation

Ocean Engineering


B.S. Engineering, Swarthmore College, 1998

M.S. Aeronautical & Astronautical Engineering, Stanford University, 2004

Ph.D. Electrical & Computer Engineering, University of Canterbury, 2015


2000-present and while at APL-UW

Automated QA/QC and time series analysis on OOI high-definition video data

Knuth, F., L. Belabassi, L. Garzio, M. Smith, M. Vardaro, and A. Marburg, "Automated QA/QC and time series analysis on OOI high-definition video data," Proc., MTS/IEEE OCEANS Conference, 19-23 September, Monterey, CA, doi:10.1109/OCEANS.2016.7761396 (IEEE, 2016).


1 Dec 2016

The Ocean Observatories Initiative's (OOI) Cabled Array (CA) has been delivering high-definition video data since August 2015 via fiber optic cable from a statically positioned SubC 1Cam HD (CAMHD) video camera, deployed at the Mushroom hydrothermal vent in the Axial Seamount Hydrothermal Expeditions (ASHES) Field off the coast of Oregon (lat 45° 56.0186'N, long 130° 00.8185'W, depth 1,542 m). Over 20 TB of video data have been archived and are publicly available via the OOI raw data repository. The CAMHD runs a 14-minute pan/tilt/zoom routine at eight evenly spaced intervals throughout the day, producing 13 GB of uncompressed HD video each time, focusing on locations of scientific interest across the vent. Given the amount of video data already collected, and the anticipated data volumes over the life of the project, automating analyses of the quality and consistency of these data, as well as developing tools for the automatic generation of value-added data products, is critical. In this paper we present results from automated analysis of CAMHD video data files for quality assurance purposes. Objectives include ensuring consistent file size, duration, and naming convention on the archive, as well as producing time series on frames of interest to analyze change in content or image quality over time. For example, we identified an issue in the video streaming software, since resolved, that truncated ~25% of existing mp4 files on the archive. Analyses such as this allow scientists to rapidly understand the structure and quality of video data on the archive, laying the groundwork to create an array of customized analysis routines that meet a range of scientific needs.
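The file-level checks described above (consistent naming, and sizes consistent with a nominal recording) can be sketched in a few lines. This is an illustrative sketch only: the filename pattern and nominal size below are assumptions for the example, not the actual OOI archive convention.

```python
import re

# Assumed, illustrative naming convention: CAMHDA301-YYYYMMDDTHHMMSSZ.mp4
NAME_RE = re.compile(r"^CAMHDA301-\d{8}T\d{6}Z\.mp4$")

def check_name(filename):
    """Flag files that deviate from the expected naming convention."""
    return bool(NAME_RE.match(filename))

def flag_truncated(sizes_bytes, nominal_gb=13.0, tolerance=0.2):
    """Flag files whose size deviates by more than `tolerance` (fractional)
    from the nominal size of one complete recording; large negative
    deviations are the signature of a truncated transfer."""
    nominal = nominal_gb * 1e9
    return [abs(s - nominal) / nominal > tolerance for s in sizes_bytes]
```

A real QA pipeline would add a duration probe (e.g., via ffprobe) alongside these size and name checks.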

Axial Seamount – wired and restless: A cabled submarine network enables real-time tracking of a Mid-Ocean Ridge eruption and live video of an active hydrothermal system, Juan de Fuca Ridge, NE Pacific

Delaney, J.R., D.S. Kelley, A. Marburg, M. Stoermer, H. Hadaway, K. Juniper, and F. Knuth, "Axial Seamount – wired and restless: A cabled submarine network enables real-time tracking of a Mid-Ocean Ridge eruption and live video of an active hydrothermal system, Juan de Fuca Ridge, NE Pacific," Proc., MTS/IEEE OCEANS, 19-23 September, Monterey, CA, doi:10.1109/OCEANS.2016.7761484 (IEEE, 2016).


1 Dec 2016

The most scientifically diverse and technologically advanced component of the National Science Foundation's $386M investment in the Ocean Observatories Initiative (OOI) involves 900 kilometers of high-power, high-bandwidth electro-optical cable extending from Pacific City, OR, across active portions of the Juan de Fuca tectonic plate and up into the overlying ocean. Completed on time and under budget in October 2014, this mesoscale fiber-optic sensor array enables real-time, high-bandwidth, 2-way communication with seafloor and water-column sensor networks across: 1) a portion of the global Mid-Ocean Ridge (MOR), 2) a section of the Cascadia Subduction Zone, and 3) a cross-section of the California Current, a component of the North Pacific Gyre. Much of the data generated from >130 fiber-linked instruments has become available for scientific, educational, and public user communities over the past 6 to 12 months, via the OOI Cyber-infrastructure (http://oceanobservatories.org/data-portal/). Since the OOI Cabled System has been in use and streaming live data to shore, two major developments have emerged that bear on undersea volcano-hydrothermal systems: 1) The 2015 submarine eruption of Axial Seamount was documented in a unique fashion by 20 remote, hardwired instruments distributed across the floor of the summit caldera. 2) Live, streaming video of an active hydrothermal system within one of the vent fields inside the caldera reveals subtle changes taking place in the Axial system.

Deep learning for benthic fauna identification

Marburg, A., and K. Bigham, "Deep learning for benthic fauna identification," Proc., MTS/IEEE OCEANS Conference, 19-23 September, Monterey, CA, doi:10.1109/OCEANS.2016.7761146 (IEEE, 2016).


1 Dec 2016

This paper describes the application of convolutional neural networks (CNNs) to the identification and classification of ten classes of benthic macrofauna in high-resolution photomosaics captured on the Pacific continental shelf by an ROV. Each photomosaic was previously hand-annotated with the location and classification of each animal, providing a training set for the machine learning algorithms. These annotations are used to extract image patches around each contact, resulting in approximately 5000 image samples, which are supplemented with randomly selected image patches representing the background. The resulting corpus of data is used to train a series of convolutional neural networks in the Nvidia DIGITS and Google Tensorflow environments. Due to the relatively sparse nature of the training data set, a number of data augmentation approaches are used to increase the diversity of training data. The performance of the resulting algorithm is evaluated in three problem scenarios: first, classification of fauna in an image patch known to contain a target; second, classification of a given image patch as either background or non-background; and third, a single-pass combination of the two problems. The presented networks prove highly accurate at background/non-background segmentation with ~96% accuracy. Fauna identification is less reliable at ~89% accuracy, and unified segmentation and identification proves to be the most challenging at ~88% accuracy.
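The patch-extraction and augmentation steps described above can be sketched with NumPy. This is a hedged illustration, not the paper's implementation: function names, the patch size, and the choice of augmentations (right-angle rotations and flips, which are safe for overhead mosaic imagery) are assumptions for the example.

```python
import numpy as np

def extract_patch(mosaic, cx, cy, size=64):
    """Crop a square patch centered on an annotated contact (cx, cy),
    zero-padding where the patch overhangs the mosaic border."""
    h, w = mosaic.shape[:2]
    half = size // 2
    patch = np.zeros((size, size) + mosaic.shape[2:], dtype=mosaic.dtype)
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(cx + half, w), min(cy + half, h)
    # Offsets into the patch account for any border clipping.
    py, px = y0 - (cy - half), x0 - (cx - half)
    patch[py:py + (y1 - y0), px:px + (x1 - x0)] = mosaic[y0:y1, x0:x1]
    return patch

def augment(patch, rng):
    """Content-preserving augmentations: a random rotation by a multiple
    of 90 degrees, plus an optional horizontal flip."""
    patch = np.rot90(patch, k=int(rng.integers(4)))
    if rng.integers(2):
        patch = np.fliplr(patch)
    return patch
```

Applying `augment` several times per annotated contact is one simple way to stretch a sparse (~5000 sample) training set.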

More Publications

Extrinsic calibration of an RGB camera to a 3D imaging sonar

Marburg, A., and A. Stewart, "Extrinsic calibration of an RGB camera to a 3D imaging sonar," Proc., OCEANS 2015, 19-22 October, Washington, D.C. (MTS/IEEE, 2015).


19 Oct 2015

The introduction of low-cost RGB-depth (RGB-D) sensors has led to a diversity of algorithms for robust 3D scene reconstruction under controlled settings, but the underwater realization of such algorithms has been hampered by the constrained performance of most RGB-D sensors in water. We explore the possibility of fusing a point cloud generated from a high-frequency, mechanically scanned 3D imaging sonar with visual data from a camera to create a rich 3D representation of objects in the water column. A state-of-the-art algorithm for depth sensor-to-camera registration utilizing concurrent images of spherical targets is adapted, and the resulting alignment is used to combine sonar and visual imagery.
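Once the spherical targets yield paired 3D center estimates in both frames, the core of any such extrinsic calibration is a least-squares rigid alignment. A minimal sketch using the standard SVD (Kabsch) method is shown below; this illustrates the general alignment step, not the specific algorithm adapted in the paper.

```python
import numpy as np

def rigid_transform(sonar_pts, camera_pts):
    """Least-squares rigid transform (R, t) mapping sonar-frame points
    onto camera-frame points: camera ~= R @ sonar + t.
    Inputs are (N, 3) arrays of corresponding 3D points (e.g., sphere
    centers observed concurrently by both sensors), N >= 3, non-collinear."""
    mu_s = sonar_pts.mean(axis=0)
    mu_c = camera_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (sonar_pts - mu_s).T @ (camera_pts - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_s
    return R, t
```

In practice the sphere centers carry noise, so more than the minimum three correspondences are collected and the same closed form gives the least-squares fit.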

SMARTPIG: Simultaneous mosaicking and resectioning through planar image graphs

Marburg, A., and M.P. Hayes, "SMARTPIG: Simultaneous mosaicking and resectioning through planar image graphs," Proc. IEEE International Conference on Robotics and Automation, 26-30 May, Seattle, WA, 5767-5774, doi:10.1109/ICRA.2015.7140007 (IEEE, 2015).


26 May 2015

This paper describes Smartpig, an algorithm for the iterative mosaicking of images of a planar surface using a unique parameterization that decomposes inter-image projective warps into camera intrinsics, fronto-parallel projections, and inter-image similarities. The constraints resulting from the inter-image alignments within an image set are stored in an undirected graph structure, allowing efficient optimization of image projections on the plane. Camera pose is also directly recoverable from the graph, making Smartpig a feasible solution to the problem of simultaneous localization and mapping (SLAM). Smartpig is demonstrated on a set of 144 high-resolution aerial images and evaluated with a number of metrics against ground control.
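The graph bookkeeping behind such a mosaicker can be sketched simply: pairwise warps live on the edges of an undirected graph, and each image's warp into a chosen anchor frame is obtained by composing warps along a path. The sketch below shows only this propagation step (via BFS), under the assumed convention that the edge (i, j) stores a 3x3 homography taking points in image j to image i; the paper additionally refines these estimates by global optimization over the graph.

```python
import numpy as np
from collections import deque

def compose_to_anchor(edges, anchor):
    """Propagate 3x3 warps from `anchor` through an undirected graph of
    pairwise alignments. `edges` maps (i, j) -> H_ij, where H_ij takes
    points in image j to image i. Returns each reachable image's warp
    into the anchor's frame."""
    adj = {}
    for (i, j), H in edges.items():
        adj.setdefault(i, []).append((j, H))                 # maps j -> i
        adj.setdefault(j, []).append((i, np.linalg.inv(H)))  # maps i -> j
    warps = {anchor: np.eye(3)}
    queue = deque([anchor])
    while queue:
        i = queue.popleft()
        for j, H_ij in adj.get(i, []):
            if j not in warps:
                # (j -> i) composed with (i -> anchor).
                warps[j] = warps[i] @ H_ij
                queue.append(j)
    return warps
```

Because alignments form a graph rather than a chain, any spanning path fixes an initial estimate, and redundant edges (loop closures) supply the constraints that a subsequent global optimization exploits.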
