
Greg Okopal

Principal Engineer

Email: okopal@apl.washington.edu

Phone: 206-616-6775

Education

B.S. Computer Engineering, Villanova University, 2002

M.S. Electrical Engineering, University of Pittsburgh, 2006

Ph.D. Electrical Engineering, University of Pittsburgh, 2009

Publications

2000-present and while at APL-UW

Robust human tracking based on DPM constrained multiple-kernel from a moving camera

Hou, L., W. Wan, K.-H. Lee, J.-N. Hwang, G. Okopal, and J. Pitton, "Robust human tracking based on DPM constrained multiple-kernel from a moving camera," J. Sign. Process. Syst., 86, 27-39, doi:10.1007/s11265-015-1097-y, 2017.


1 Jan 2017

In this paper, we address the challenging task of precise and robust human tracking from a moving camera. We propose a human tracking approach that efficiently integrates the deformable part model (DPM) into multiple-kernel tracking from a moving camera. The proposed approach consists of a two-stage tracking procedure. For each frame, we first iteratively mean-shift several spatially weighted color histograms, called kernels, from the current frame to the next frame; each kernel corresponds to a part model of a DPM-detected human. In the second stage, conditioned on the tracking results of these kernels in the later frame, we iteratively mean-shift the part models on that frame. The part models are represented by histogram of oriented gradients (HOG) features, and the deformation cost of each part model provided by the trained DPM detector is used to constrain the movement of each detected body part from the first stage. The proposed approach benefits not only from the low computational cost of kernel-based tracking but also from the robustness of the DPM detector, without requiring laborious human detection in every frame. Experimental results show that the proposed approach tracks humans robustly and with high accuracy under different scenarios from a moving camera.
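The kernel-tracking step this abstract builds on is classical mean-shift over a spatially weighted color histogram. The sketch below is a minimal NumPy illustration of that single step under stated assumptions (16-bin RGB quantization, Epanechnikov-style spatial weighting, no border handling); it is not the authors' implementation, and the DPM deformation-cost constraint on the part kernels is omitted.

```python
import numpy as np

def color_histogram(patch, bins=16):
    """Spatially weighted color histogram of an H x W x 3 uint8 patch."""
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Epanechnikov-style weights: pixels near the kernel center count more.
    r2 = ((ys - h / 2) / (h / 2)) ** 2 + ((xs - w / 2) / (w / 2)) ** 2
    weights = np.clip(1.0 - r2, 0.0, None)
    idx = patch.astype(int) // (256 // bins)              # quantized color index
    flat = (idx[..., 0] * bins + idx[..., 1]) * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), weights=weights.ravel(), minlength=bins ** 3)
    return hist / (hist.sum() + 1e-12)

def mean_shift_step(frame, center, size, target_hist, bins=16):
    """One mean-shift update of the kernel center toward the target histogram.

    Illustrative only: assumes the kernel window stays inside the frame.
    """
    cy, cx = center
    h, w = size
    patch = frame[cy - h // 2:cy + h // 2, cx - w // 2:cx + w // 2]
    cand = color_histogram(patch, bins)
    # Back-projection weights (square root of histogram ratio, as in classic mean-shift).
    ratio = np.sqrt(target_hist / (cand + 1e-12))
    idx = patch.astype(int) // (256 // bins)
    flat = (idx[..., 0] * bins + idx[..., 1]) * bins + idx[..., 2]
    pix_w = ratio[flat]                                    # per-pixel weight
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    dy = (pix_w * ys).sum() / pix_w.sum() - patch.shape[0] / 2
    dx = (pix_w * xs).sum() / pix_w.sum() - patch.shape[1] / 2
    return int(round(cy + dy)), int(round(cx + dx))
```

In the paper's two-stage scheme, one such kernel would be run per DPM part, with the DPM deformation cost then constraining how far each part may move.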

Ground-moving-platform-based human tracking using visual SLAM and constrained multiple kernels

Lee, K.-H., J.-N. Hwang, G. Okopal, and J. Pitton, "Ground-moving-platform-based human tracking using visual SLAM and constrained multiple kernels," IEEE Trans. Intell. Transp. Syst., 17, 3602-3612, doi:10.1109/TITS.2016.2557763, 2016.


1 Dec 2016

This paper proposes a robust ground-moving-platform-based human tracking system that effectively integrates visual simultaneous localization and mapping (V-SLAM), human detection, ground-plane estimation, and kernel-based tracking techniques. The proposed system detects humans in recorded video frames from a moving camera and tracks them in the V-SLAM-inferred 3-D space via a tracking-by-detection scheme. To efficiently associate the detected humans frame by frame, we propose a novel human tracking framework that combines constrained-multiple-kernel tracking with the estimated 3-D information (depth) to globally optimize the data association between consecutive frames. By taking advantage of both the appearance model and the 3-D information, the proposed system not only achieves high effectiveness but also handles occlusion well during tracking. Experimental results show the favorable performance of the proposed system, which efficiently tracks humans with a camera mounted on a ground-moving platform such as a dash camera or an unmanned ground vehicle.
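To make the globally optimized data-association step concrete, here is a minimal sketch (not the paper's algorithm) that fuses an appearance cost with a 3-D distance cost and solves the frame-to-frame assignment with the Hungarian method; the cosine metric, the equal 0.5 weighting, and the gating threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(track_feats, track_xyz, det_feats, det_xyz, alpha=0.5, max_cost=0.8):
    """Match existing tracks to new detections.

    track_feats, det_feats : (N, D) / (M, D) appearance descriptors (e.g., color histograms)
    track_xyz, det_xyz     : (N, 3) / (M, 3) positions in the SLAM-inferred 3-D space
    Returns a list of (track_index, detection_index) pairs.
    """
    appearance = cdist(track_feats, det_feats, metric="cosine")   # in [0, 2]
    spatial = cdist(track_xyz, det_xyz)                           # 3-D distances
    spatial = spatial / (spatial.max() + 1e-12)                   # scale to [0, 1]
    cost = alpha * appearance + (1.0 - alpha) * spatial
    rows, cols = linear_sum_assignment(cost)                      # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```

The depth term is what lets two visually similar people at different ranges be kept apart, which is the benefit of tracking in the V-SLAM-inferred 3-D space rather than in the image plane alone.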

Speech analysis with the strong uncorrelating transform

Okopal, G., S. Wisdom, and L. Atlas, "Speech analysis with the strong uncorrelating transform," IEEE/ACM Trans. Audio Speech Lang. Process., 23, 1858-1868, doi:10.1109/TASLP.2015.2456426, 2015.


1 Nov 2015

The strong uncorrelating transform (SUT) provides estimates of independent components from linear mixtures using only second-order information, provided that the components have unique circularity coefficients. We propose a processing framework for generating complex-valued subbands from real-valued mixtures of speech and noise where the objective is to control the likely values of the sample circularity coefficients of the underlying speech and noise components in each subband. We show how several processing parameters affect the noncircularity of speech-like and noise components in the subband, ultimately informing parameter choices that allow for estimation of each of the components in a subband using the SUT. Additionally, because the speech and noise components will have unique sample circularity coefficients, this statistic can be used to identify time-frequency regions that contain voiced speech. We give an example of the recovery of the circularity coefficients of a real speech signal from a two-channel noisy mixture at -25 dB SNR, which demonstrates how the estimates of noncircularity can reveal the time-frequency structure of a speech signal in very high levels of noise. Finally, we present the results of a voice activity detection (VAD) experiment showing that two new circularity-based statistics, one of which is derived from the SUT processing, can achieve improved performance over state-of-the-art VADs in real-world recordings of noise.
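For readers unfamiliar with the circularity-coefficient statistic, the sketch below computes sample circularity coefficients for one complex-valued subband using the standard definition underlying the SUT (the singular values of the whitened sample pseudo-covariance). It is a minimal NumPy illustration, not the paper's processing framework; the subband generation and the VAD statistics are omitted, and the example mixture is synthetic.

```python
import numpy as np

def circularity_coefficients(X):
    """X : (channels, samples) complex array for one subband.

    Returns the sample circularity coefficients in [0, 1], sorted descending.
    """
    n = X.shape[1]
    C = X @ X.conj().T / n                 # covariance (Hermitian)
    P = X @ X.T / n                        # pseudo-covariance (complex symmetric)
    # Whitening transform C^(-1/2) via eigendecomposition of C.
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.conj().T
    coherence = W @ P @ W.T                # whitened pseudo-covariance
    return np.linalg.svd(coherence, compute_uv=False)

# Example: a highly noncircular (real-valued) source and a circular complex
# Gaussian source, linearly mixed into two channels. The coefficients are
# invariant to the mixing, which is what makes SUT-based separation possible.
rng = np.random.default_rng(0)
s = np.vstack([
    rng.standard_normal(4096),                                              # circularity ~ 1
    (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2),  # circularity ~ 0
])
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
print(circularity_coefficients(A @ s))     # approximately [1, 0], up to sampling error
```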

