Caren Marzban

Principal Physicist

Lecturer, Statistics

Email

marzban@stat.washington.edu

Phone

206-221-4361

Education

B.S. Physics, Michigan State University, 1981

Ph.D. Theoretical Physics, University of North Carolina, 1988

Publications

2000-present and while at APL-UW

Mixture models for estimating maximum blood flow velocity

Marzban, C., G. Wenxiao, and P.D. Mourad, "Mixture models for estimating maximum blood flow velocity," J. Ultrasound Med., 35, 93-101, doi:10.7863/ultra.14.05069, 2016.

1 Jan 2016



Objectives—A Gaussian mixture model (GMM) was recently developed for estimating the probability density function of blood flow velocity measured with transcranial Doppler ultrasound data. In turn, the quantiles of the probability density function allow one to construct estimators of the “maximum” blood flow velocity. However, GMMs assume Gaussianity, a feature that is not omnipresent in observed data. The objective of this work was to develop mixture models that do not invoke the Gaussian assumption.

Methods—Here, GMMs were extended to a skewed GMM and a non-Gaussian kernel mixture model. All models were developed on data from 59 patients with closed head injuries from multiple hospitals in the United States, with ages ranging from 13 to 81 years and Glasgow Coma Scale scores ranging from 3 to 11. The models were assessed in terms of the log likelihood (a goodness-of-fit measure) and via visual comparison with the underlying spectrograms.

Results—Among the models examined, the skewed GMM showed a significantly (P < .05) higher log likelihood for 56 of the 59 patients and produced maximum flow velocity estimates consistent with the observed spectrograms for all patients. Kernel mixture models are generally less “robust” in that their quality is inconsistent across patients.

Conclusions—Among the models examined, it was found that the skewed GMM provided a better model of the data, both in terms of the quality of the fit and in terms of visual comparison of the underlying spectrogram and the estimated maximum blood flow velocity. Non-Gaussian mixture models have potential for even higher-quality assessment of blood flow, but further development is called for.
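As a rough illustration of the quantile-of-a-mixture idea described above, the sketch below fits a Gaussian mixture to synthetic velocity samples and reads off a high quantile; the data, the two-component choice, and the 95th-percentile level are assumptions for illustration, not the paper's settings.

```python
# Sketch: estimate a "maximum" flow velocity as a high quantile of a Gaussian
# mixture fitted to Doppler velocity samples. Component count and quantile
# level are illustrative choices.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for velocities from one spectrogram column (cm/s).
velocities = np.concatenate([rng.normal(40, 8, 800), rng.normal(80, 12, 200)])

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(velocities.reshape(-1, 1))

# Approximate a quantile of the fitted mixture by sampling from it.
samples, _ = gmm.sample(100_000)
v_max_estimate = np.quantile(samples, 0.95)
print(f"Estimated 'maximum' velocity (95th percentile): {v_max_estimate:.1f} cm/s")
```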

Model tuning with canonical correlation analysis

Marzban, C., S. Sandgathe, and J.D. Doyle, "Model tuning with canonical correlation analysis," Mon. Wea. Rev., 142, 2018-2027, doi:10.1175/MWR-D-13-00245.1, 2014.

1 May 2014

Knowledge of the relationship between model parameters and forecast quantities is useful because it can aid in setting the values of the former for the purpose of having a desired effect on the latter. Here it is proposed that a well-established multivariate statistical method known as canonical correlation analysis can be formulated to gauge the strength of that relationship. The method is applied to several model parameters in the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS) for the purpose of "controlling" three forecast quantities: 1) convective precipitation, 2) stable precipitation, and 3) snow. It is shown that the model parameters employed here can be set to affect the sum, and the difference between convective and stable precipitation, while keeping snow mostly constant; a different combination of model parameters is shown to mostly affect the difference between stable precipitation and snow, with minimal effect on convective precipitation. In short, the proposed method can not only capture the complex relationship between model parameters and forecast quantities, it can also be utilized to optimally control certain combinations of the latter.
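A minimal sketch of how canonical correlation analysis links a matrix of parameter settings to a matrix of forecast quantities is given below; the synthetic data and the scikit-learn implementation are stand-ins, not COAMPS output or the paper's code.

```python
# Sketch: gauge the linear relationship between model parameters and forecast
# quantities with canonical correlation analysis. X and Y are synthetic
# stand-ins for a matrix of parameter settings and a matrix of forecast quantities.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_runs = 200
X = rng.normal(size=(n_runs, 4))                                       # e.g., 4 tunable parameters
Y = X @ rng.normal(size=(4, 3)) + 0.5 * rng.normal(size=(n_runs, 3))   # 3 forecast quantities

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)

# Canonical correlations: correlation between each pair of canonical variates.
canon_corr = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(canon_corr, 3))
# The weights (cca.x_weights_) indicate which parameter combination drives
# which combination of forecast quantities.
```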

Variance-based sensitivity analysis: Preliminary results in COAMPS

Marzban, C., S. Sandgathe, J.D. Doyle, and N.C. Lederer, "Variance-based sensitivity analysis: Preliminary results in COAMPS," Mon. Wea. Rev., 142, 2028-2042, doi:10.1175/MWR-D-13-00195.1, 2014.

1 May 2014

Numerical weather prediction models have a number of parameters whose values are either estimated from empirical data or theoretical calculations. These values are usually then optimized according to some criterion (e.g., minimizing a cost function) in order to obtain superior prediction. To that end, it is useful to know which parameters have an effect on a given forecast quantity, and which do not. Here the authors demonstrate a variance-based sensitivity analysis involving 11 parameters in the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS). Several forecast quantities are examined: 24-h accumulated 1) convective precipitation, 2) stable precipitation, 3) total precipitation, and 4) snow. The analysis is based on 36 days of 24-h forecasts between 1 January and 4 July 2009. Regarding convective precipitation, not surprisingly, the most influential parameter is found to be the fraction of available precipitation in the Kain–Fritsch cumulus parameterization fed back to the grid scale. Stable and total precipitation are most affected by a linear factor that multiplies the surface fluxes; and the parameter that most affects accumulated snow is the microphysics slope intercept parameter for snow. Furthermore, all of the interactions between the parameters are found to be either exceedingly small or have too much variability (across days and/or parameter values) to be of primary concern.
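The sketch below illustrates the first-order, variance-based sensitivity index S_i = Var(E[Y|x_i]) / Var(Y) on a toy function; the binning estimator and the function itself are illustrative assumptions, not the COAMPS analysis.

```python
# Sketch: first-order variance-based sensitivity indices estimated by binning
# parameter values. The "model" is an arbitrary toy function, not COAMPS.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
x = rng.uniform(0, 1, size=(n, 3))                       # three parameters
y = np.sin(2 * np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.01 * rng.normal(size=n)

def first_order_index(xi, y, bins=20):
    """Var of the binwise conditional mean of y given xi, divided by Var(y)."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(x[:, i], y):.2f}")
```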

More Publications

Earth before life

Marzban, C., R. Viswanathan, and U. Yurtsever, "Earth before life," Biol. Direct, 9, doi:10.1186/1745-6150-9-1, 2014.

9 Jan 2014

A recent study argued, based on data on functional genome size of major phyla, that there is evidence life may have originated significantly prior to the formation of the Earth.

Here a more refined regression analysis is performed in which 1) measurement error is systematically taken into account, and 2) interval estimates (e.g., confidence or prediction intervals) are produced. It is shown that such models for which the interval estimate for the time origin of the genome includes the age of the Earth are consistent with observed data.

The appearance of life after the formation of the Earth is consistent with the data set under examination.
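The kind of interval estimate emphasized here can be illustrated with an ordinary regression in statsmodels; the numbers below are hypothetical placeholders, not the genomic data used in the paper.

```python
# Sketch: confidence and prediction intervals from a simple regression, the kind
# of interval estimate the reanalysis emphasizes over a single point estimate.
import numpy as np
import statsmodels.api as sm

x = np.array([0.5, 1.0, 2.0, 3.0, 3.8])    # hypothetical ages (Gyr before present)
y = np.array([5.2, 6.0, 7.1, 8.4, 9.0])    # hypothetical log10 functional genome size

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Predict at new ages and report 95% confidence and prediction intervals.
x_new = sm.add_constant(np.array([4.5, 5.0]))
pred = fit.get_prediction(x_new).summary_frame(alpha=0.05)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper", "obs_ci_lower", "obs_ci_upper"]])
```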

Variance-based sensitivity analysis: An illustration on the Lorenz '63 model

Marzban, C., "Variance-based sensitivity analysis: An illustration on the Lorenz '63 model," Mon. Wea. Rev., 141, 4069-4079, doi:10.1175/MWR-D-13-00032.1, 2013.

1 Nov 2013

Sensitivity analysis (SA) generally refers to an assessment of the sensitivity of the output(s) of some complex model with respect to changes in the input(s). Examples of inputs or outputs include initial state variables, parameters of a numerical model, or state variables at some future time. Sensitivity analysis is useful for data assimilation, model tuning, calibration, and dimensionality reduction; and there exists a wide range of SA techniques for each. This paper discusses one special class of SA techniques, referred to as variance based. As a first step in demonstrating the utility of the method in understanding the relationship between forecasts and parameters of complex numerical models, here the method is applied to the Lorenz '63 model, and the results are compared with an adjoint-based approach to SA. The method has three major components: 1) analysis of variance, 2) emulation of computer data, and 3) experimental (sampling) design. The role of these three topics in variance-based SA is addressed in generality. More specifically, the application to the Lorenz '63 model suggests that the Z state variable is most sensitive to the b and r parameters, and is mostly unaffected by the s parameter. There is also evidence for an interaction between the r and b parameters. It is shown that these conclusions are true for both simple random sampling and Latin hypercube sampling, although the latter leads to slightly more precise estimates for some of the sensitivity measures.
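A minimal sketch of generating the "computer data" for such an analysis, integrating Lorenz '63 over a Latin hypercube sample of (s, r, b), is shown below; the parameter ranges and the choice of output summary are assumptions for illustration.

```python
# Sketch: sample (s, r, b) with a Latin hypercube, integrate Lorenz '63, and
# record a summary of Z as the forecast quantity for a variance-based SA.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import qmc

def lorenz63(t, xyz, s, r, b):
    x, y, z = xyz
    return [s * (y - x), x * (r - z) - y, x * y - b * z]

sampler = qmc.LatinHypercube(d=3, seed=0)
params = qmc.scale(sampler.random(50), [8.0, 25.0, 2.0], [12.0, 30.0, 3.0])  # (s, r, b) ranges

outputs = []
for s, r, b in params:
    sol = solve_ivp(lorenz63, (0, 20), [1.0, 1.0, 1.0], args=(s, r, b))
    outputs.append(sol.y[2, -1])     # Z at final time, an illustrative output summary

outputs = np.asarray(outputs)
# 'params' and 'outputs' would then feed an ANOVA/emulator-based sensitivity analysis.
print(np.round(outputs[:5], 2))
```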

A method for estimating zero-flow pressure and intracranial pressure

Marzban, C., P.R. Illian, D. Morison, A. Moore, M. Kliot, M. Czosnyka, and P.D. Mourad, "A method for estimating zero-flow pressure and intracranial pressure," J. Neurosurg. Anesthesiol., 25, 25-32, doi:10.1097/ANA.0b013e318263c295, 2013.

1 Jan 2013

BACKGROUND: It has been hypothesized that the critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method used extrapolation of arterial blood pressure as a function of blood-flow velocity. The aim of this study was to improve ICP predictions.

METHODS: Two revisions have been considered: (1) the linear model used for extrapolation is extended to a nonlinear equation; and (2) the parameters of the model are estimated by an alternative criterion (not least squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England.

RESULTS: The revisions lead to qualitative (eg, precluding negative ICP) and quantitative improvements in ICP prediction. While moving from the original to the revised method, the ±2 SD of the error is reduced from 33 to 24 mm Hg, and the root-mean-squared error is reduced from 11 to 8.2 mm Hg. The distribution of root-mean-squared error is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared with 5.1 and 18.8 mm Hg for the original method.

CONCLUSIONS: Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed, which may lead to clinically useful results.
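The basic zero-flow pressure extrapolation can be sketched as a simple regression of pressure on velocity over one cardiac cycle, as below; the waveforms are synthetic, and the paper's actual revisions (a nonlinear model and a non-least-squares criterion) are not reproduced.

```python
# Sketch of the basic ZFP idea: regress arterial blood pressure on flow velocity
# over a cardiac cycle and extrapolate to zero flow. Waveforms are synthetic.
import numpy as np

t = np.linspace(0, 1, 200)                             # one hypothetical cardiac cycle (s)
velocity = 60 + 30 * np.sin(2 * np.pi * t)             # cm/s
abp = 80 + 0.5 * (velocity - 60) + np.random.default_rng(3).normal(0, 1.5, t.size)  # mmHg

slope, intercept = np.polyfit(velocity, abp, 1)
zfp = intercept                                        # pressure at extrapolated zero flow
print(f"Estimated ZFP: {zfp:.1f} mmHg")
```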

On the effect of correlations on rank histograms: Reliability of temperature and wind-speed forecasts from fine-scale ensemble reforecasts

Marzban, C., R. Wang, F. Kong, and S. Leyton, "On the effect of correlations on rank histograms: Reliability of temperature and wind-speed forecasts from fine-scale ensemble reforecasts," Mon. Wea. Rev., 139, 295-310, doi:10.1175/2010MWR3129.1, 2011.

1 Jan 2011

The rank histogram (RH) is a visual tool for assessing the reliability of ensemble forecasts (i.e., the degree to which the forecasts and the observations have the same distribution). But it is already known that in certain situations it conveys misleading information. Here, it is shown that a temporal correlation can lead to a misleading RH, but such a correlation contributes only to the sampling variability of the RH, and so it is accounted for by producing a RH that explicitly displays sampling variability. A simulation is employed to show that the variance within each ensemble member (i.e., climatological variance), the correlation between ensemble members, and the correlation between the observations and the forecasts, all have a confounding effect on the RH, making it difficult to use the RH for assessing the climatological component of forecast reliability. It is proposed that a "residual" quantile-quantile plot (denoted R-Q-Q plot) is better suited than the RH for assessing the climatological component of forecast reliability. Then, the RH and R-Q-Q plots for temperature and wind speed forecasts at 90 stations across the continental United States are computed. A wide range of forecast reliability is noted. For some stations, the nonreliability of the forecasts can be attributed to bias and/or under- or overclimatological dispersion. For others, the difference between the distributions can be traced to lighter or heavier tails in the distributions, while for other stations the distributions of the forecasts and the observations appear to be completely different. A spatial signature is also noted and discussed briefly.
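For reference, a rank histogram is computed as sketched below; the synthetic ensemble is deliberately free of the correlations the paper studies.

```python
# Sketch: computing a rank histogram. For each case, the observation's rank
# within the sorted ensemble is tallied; a flat histogram suggests reliability.
import numpy as np

rng = np.random.default_rng(4)
n_cases, n_members = 5000, 10
ensemble = rng.normal(0, 1, size=(n_cases, n_members))
obs = rng.normal(0, 1, size=n_cases)

# Rank of each observation among its ensemble members (0 .. n_members).
ranks = (ensemble < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)
print(np.round(hist / hist.sum(), 3))
```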

Optical flow for verification

Marzban, C., and S. Sandgathe, "Optical flow for verification," Weather Forecast., 25, 1479-1494, doi:10.1175/2010WAF2222351.1, 2010.

1 Oct 2010

Modern numerical weather prediction (NWP) models produce forecasts that are gridded spatial fields. Digital images can also be viewed as gridded spatial fields, and as such, techniques from image analysis can be employed to address the problem of verification of NWP forecasts. One technique for estimating how images change temporally is called optical flow, where it is assumed that temporal changes in images (e.g., in a video) can be represented as a fluid flowing in some manner. Multiple realizations of the general idea have already been employed in verification problems as well as in data assimilation.

Here, a specific formulation of optical flow, called Lucas–Kanade, is reviewed and generalized as a tool for estimating three components of forecast error: intensity and two components of displacement, direction and distance. The method is illustrated first on simulated data, and then on a 418-day series of 24-h forecasts of sea level pressure from one member [the Global Forecast System (GFS)–fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5)] of the University of Washington's Mesoscale Ensemble system.

The simulation study confirms (and quantifies) the expectation that the method correctly assesses forecast errors. The method is also applied to a real dataset consisting of 418 twenty-four-hour forecasts spanning 2 April 2008 – 2 November 2009, demonstrating its value for analyzing NWP model performance. Results reveal a significant intensity bias in the subtropics, especially in the southern California region. They also expose a systematic east-northeast or downstream bias of approximately 50 km over land, possibly due to the treatment of terrain in the coarse-resolution model.
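The core Lucas–Kanade step, estimating a displacement from field gradients by least squares, can be sketched as follows; the blob fields and single-window solve are simplifications, not the paper's implementation.

```python
# Sketch: a single-window, Lucas–Kanade-style displacement estimate between two
# gridded fields, obtained by linear least squares on the field gradients. Real
# verification would repeat this locally over many windows and also estimate an
# intensity (amplitude) error; only the core solve is shown, on synthetic blobs.
import numpy as np

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]

# Observed feature centered at (x=32, y=32); forecast feature shifted to (x=30, y=35).
obs = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 150.0)
fcst = np.exp(-((xx - 30) ** 2 + (yy - 35) ** 2) / 150.0)

gy, gx = np.gradient(fcst)                   # spatial gradients of the forecast
rhs = (obs - fcst).ravel()                   # difference the flow must explain
A = np.column_stack([gx.ravel(), gy.ravel()])
(dx, dy), *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Displacement of the forecast feature relative to the observed one;
# first-order theory gives roughly dx ~ -2, dy ~ +3 for this setup.
print(f"estimated displacement: dx = {dx:.1f}, dy = {dy:.1f} grid points")
```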

Three spatial verification techniques: Cluster analysis, variogram, and optical flow

Marzban, C., S. Sandgathe, H. Lyons, and N. Lederer, "Three spatial verification techniques: Cluster analysis, variogram, and optical flow," Weather Forecast., 24, 1457-1471, 2009.

1 Dec 2009

Three spatial verification techniques are applied to three datasets. The datasets consist of a mixture of real and artificial forecasts, and corresponding observations, designed to aid in better understanding the effects of global (i.e., across the entire field) displacement and intensity errors. The three verification techniques, each based on well-known statistical methods, have little in common and, so, present different facets of forecast quality. It is shown that a verification method based on cluster analysis can identify "objects" in a forecast and an observation field, thereby allowing for object-oriented verification in the sense that it considers displacement, missed forecasts, and false alarms. A second method compares the observed and forecast fields, not in terms of the objects within them, but in terms of the covariance structure of the fields, as summarized by their variogram. The last method addresses the agreement between the two fields by inferring the function that maps one to the other. The map — generally called optical flow — provides a (visual) summary of the "difference" between the two fields. A further summary measure of that map is found to yield useful information on the distortion error in the forecasts.

Using labeled data to evaluate change detectors in a multivariate streaming environment

Kim, A.Y., C. Marzban, D.B. Percival, and W. Stuetzle, "Using labeled data to evaluate change detectors in a multivariate streaming environment," Signal Process., 89, 2529-2536, doi:10.1016/j.sigpro.2009.04.011, 2009.

1 Dec 2009

We consider the problem of detecting changes in a multivariate data stream. A change detector is defined by a detection algorithm and an alarm threshold. A detection algorithm maps the stream of input vectors into a univariate detection stream. The detector signals a change when the detection stream exceeds the chosen alarm threshold. We consider two aspects of the problem: (1) setting the alarm threshold and (2) measuring/comparing the performance of detection algorithms.

We assume we are given a segment of the stream where changes of interest are marked. We present evidence that, without such marked training data, it might not be possible to accurately estimate the false alarm rate for a given alarm threshold. Commonly used approaches assume the data stream consists of independent observations, an implausible assumption given the time series nature of the data. Lack of independence can lead to estimates that are badly biased. Marked training data can also be used for realistic comparison of detection algorithms. We define a version of the receiver operating characteristic curve adapted to the change detection problem and propose a block bootstrap for comparing such curves. We illustrate the proposed methodology using multivariate data derived from an image stream.
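A minimal sketch of scoring a change detector against labeled changes by sweeping the alarm threshold is shown below; the stream and labels are synthetic, and the block bootstrap for comparing curves is not reproduced.

```python
# Sketch: sweep an alarm threshold over a univariate detection stream and score
# it against labeled change points in a marked training segment.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
detection = np.abs(rng.normal(0, 1, n))
change_points = np.array([400, 900, 1500])      # labeled changes in the training segment
detection[change_points] += 4                    # the detector responds at the true changes

is_change = np.zeros(n, dtype=bool)
is_change[change_points] = True

for thr in [2.0, 3.0, 4.0]:
    alarms = detection > thr
    hit_rate = alarms[is_change].mean()
    false_alarm_rate = alarms[~is_change].mean()
    print(f"thr={thr:.1f}  hit rate={hit_rate:.2f}  false alarm rate={false_alarm_rate:.4f}")
```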

Verification with variograms

Marzban, C., and S. Sandgathe, "Verification with variograms," Weather Forecast., 24, 1102-1120, doi: 10.1175/2009WAF2222122.1, 2009.

1 Aug 2009

The verification of a gridded forecast field, for example, one produced by numerical weather prediction (NWP) models, cannot be performed on a gridpoint-by-gridpoint basis; that type of approach would ignore the spatial structures present in both forecast and observation fields, leading to misinformative or noninformative verification results. A variety of methods have been proposed to acknowledge the spatial structure of the fields.

Here, a method is examined that compares the two fields in terms of their variograms. Two types of variograms are examined: one examines correlation on different spatial scales and is a measure of texture; the other type of variogram is additionally sensitive to the size and location of objects in a field and can assess size and location errors. Using these variograms, the forecasts of three NWP model formulations are compared with observations/analysis, on a dataset consisting of 30 days in spring 2005. It is found that within statistical uncertainty the three formulations are comparable with one another in terms of forecasting the spatial structure of observed reflectivity fields. None, however, produce the observed structure across all scales, and all tend to overforecast the spatial extent and also forecast a smoother precipitation (reflectivity) field.

A finer comparison suggests that the University of Oklahoma 2-km resolution Advanced Research Weather Research and Forecasting (WRF-ARW) model and the National Center for Atmospheric Research (NCAR) 4-km resolution WRF-ARW slightly outperform the 4.5-km WRF-Nonhydrostatic Mesoscale Model (NMM), developed by the National Oceanic and Atmospheric Administration/National Centers for Environmental Prediction (NOAA/NCEP), in terms of producing forecasts whose spatial structures are closer to that of the observed field.
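An empirical variogram of a gridded field can be computed as sketched below; the smoothed random field and distance bins are illustrative stand-ins for reflectivity data.

```python
# Sketch: an isotropic empirical variogram of a gridded field,
# gamma(h) = 0.5 * mean[(f(x) - f(x+h))^2] over pairs in distance bin h.
# Comparing forecast and observed curves is the verification step.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
field = gaussian_filter(rng.normal(size=(40, 40)), sigma=3)   # spatially correlated toy field

ny, nx = field.shape
yy, xx = np.mgrid[0:ny, 0:nx]
coords = np.column_stack([yy.ravel(), xx.ravel()])
values = field.ravel()

# Subsample point pairs to keep the pairwise computation cheap.
idx = rng.choice(values.size, size=400, replace=False)
d = np.linalg.norm(coords[idx, None, :] - coords[None, idx, :], axis=-1)
sq = 0.5 * (values[idx, None] - values[None, idx]) ** 2

bins = np.arange(1, 20, 2)
which = np.digitize(d, bins)
gamma = [sq[which == b].mean() for b in range(1, len(bins))]
print(np.round(gamma, 4))
```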

Towards predicting intracranial pressure using transcranial Doppler and arterial blood pressure data

Mourad, P.D., C. Marzban, and M. Kliot, "Towards predicting intracranial pressure using transcranial Doppler and arterial blood pressure data," J. Acoust. Soc. Am., 125, 2514, 2009.

1 Apr 2009

Pressure within the cranium (intracranial pressure, or "ICP") represents a vital clinical variable whose assessment — currently via invasive means — and integration into a clinical exam constitute a necessary step for adequate medical care for those patients with injured brains. In the present work we sought to develop a non-invasive way of predicting this variable and its corollary — cerebral perfusion pressure (CPP), which equals arterial blood pressure (ABP) minus ICP.

We collected transcranial Doppler (TCD), invasive ICP and ABP data from patients at a variety of hospitals. We developed a series of regression-based statistical algorithms for subsets of those patients sorted by etiology with the goal of predicting ICP and CPP. We could discriminate between high and low values of ICP (above/below 20 mmHg) with sensitivities and specificities generally greater than 70 percent, and predict CPP within ±5 percent, for patients with traumatic brain injury. TCD and invasive ABP data can be translated into useful measures of ICP and CPP. Future work will target use of non-invasive ABP data, automation of TCD data acquisition, and improvement in algorithm performance.

An object-oriented verification of three NWP model formulations via cluster analysis: An objective and a subjective analysis

Marzban, C., S. Sandgathe, and H. Lyons, "An object-oriented verification of three NWP model formulations via cluster analysis: An objective and a subjective analysis," Mon. Weather Rev., 136, 3392-3407, 2008.

1 Sep 2008

Recently, an object-oriented verification scheme was developed for assessing errors in forecasts of spatial fields. The main goal of the scheme was to allow the automatic and objective evaluation of a large number of forecasts. However, processing speed was an obstacle. Here, it is shown that the methodology can be revised to increase efficiency, allowing for the evaluation of 32 days of reflectivity forecasts from three different mesoscale numerical weather prediction model formulations. It is demonstrated that the methodology can address not only spatial errors, but also intensity and timing errors. The results of the verification are compared with those performed by a human expert.

For the case when the analysis involves only spatial information (and not intensity), although there exist variations from day to day, it is found that the three model formulations perform comparably, over the 32 days examined and across a wide range of spatial scales. However, the higher-resolution model formulation appears to have a slight edge over the other two; the statistical significance of that conclusion is weak but nontrivial. When intensity is included in the analysis, it is found that these conclusions are generally unaffected. As for timing errors, although for specific dates a model may have different timing errors on different spatial scales, over the 32-day period the three models are mostly "on time." Moreover, although the method is nonsubjective, its results are shown to be consistent with an expert's analysis of the 32 forecasts. This conclusion is tentative because of the focused nature of the data, spanning only one season in one year. But the proposed methodology now allows for the verification of many more forecasts.

Cluster analysis for object-oriented verification of fields: A variation

Marzban, C., and S. Sandgathe, "Cluster analysis for object-oriented verification of fields: A variation," Mon. Weather Rev., 136, 1013-1025, doi:10.1175/2007MWR1994.1, 2008.

1 Mar 2008

In a recent paper, a statistical method referred to as cluster analysis was employed to identify clusters in forecast and observed fields. Further criteria were also proposed for matching the identified clusters in one field with those in the other. As such, the proposed methodology was designed to perform an automated form of what has been called object-oriented verification. Herein, a variation of that methodology is proposed that effectively avoids (or simplifies) the criteria for matching the objects. The basic idea is to perform cluster analysis on the combined set of observations and forecasts, rather than on the individual fields separately. This method will be referred to as combinative cluster analysis (CCA). CCA naturally lends itself to the computation of false alarms, hits, and misses, and therefore, to the critical success index (CSI).

A desirable feature of the previous method—the ability to assess performance on different spatial scales—is maintained. The method is demonstrated on reflectivity data and corresponding forecasts for three dates using three mesoscale numerical weather prediction model formulations—the NCEP/NWS Nonhydrostatic Mesoscale Model (NMM) at 4-km resolution (nmm4), the University of Oklahoma's Center for Analysis and Prediction of Storms (CAPS) Weather Research and Forecasting Model (WRF) at 2-km resolution (arw2), and the NCAR WRF at 4-km resolution (arw4). In the small demonstration sample herein, model forecast quality is efficiently differentiated when performance is assessed in terms of the CSI. In this sample, arw2 appears to outperform the other two model formulations across all scales when the cluster analysis is performed in the space of spatial coordinates and reflectivity. However, when the analysis is performed only on spatial data (i.e., when only the spatial placement of the reflectivity is assessed), the difference is not significant. This result has been verified both visually and using a standard gridpoint verification, and seems to provide a reasonable assessment of model performance. This demonstration of CCA indicates promise in quickly evaluating mesoscale model performance while avoiding the subjectivity and labor intensiveness of human evaluation or the pitfalls of non-object-oriented automated verification.
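The combinative-cluster idea of scoring hits, misses, and false alarms, and hence the CSI, can be sketched as below; k-means, the point clouds, and the cluster count are illustrative substitutes for the paper's procedure.

```python
# Sketch of the combinative-cluster idea: cluster the pooled observation and
# forecast points, then call a cluster containing both sources a hit,
# forecast-only a false alarm, and observation-only a miss; CSI follows.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
obs_pts = rng.normal([10, 10], 1.5, size=(60, 2))
fcst_pts = np.vstack([rng.normal([11, 9], 1.5, size=(60, 2)),     # displaced match
                      rng.normal([25, 25], 1.5, size=(40, 2))])   # spurious feature
pts = np.vstack([obs_pts, fcst_pts])
source = np.array([0] * len(obs_pts) + [1] * len(fcst_pts))       # 0 = obs, 1 = forecast

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts)

hits = misses = false_alarms = 0
for c in np.unique(labels):
    src = source[labels == c]
    if (src == 0).any() and (src == 1).any():
        hits += 1
    elif (src == 1).any():
        false_alarms += 1
    else:
        misses += 1

print("CSI =", hits / (hits + misses + false_alarms))
```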

Ceiling and visibility forecasts via neural networks

Marzban, C., S. Leyton, and B. Colman, "Ceiling and visibility forecasts via neural networks," Wea. Forecasting, 22, 466-479, 2007.

1 Jun 2007

Statistical postprocessing of numerical model output can improve forecast quality, especially when model output is combined with surface observations. In this article, the development of nonlinear postprocessors for the prediction of ceiling and visibility is discussed. The forecast period is approximately 2001–05, involving data from hourly surface observations, and from the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model. The statistical model for mapping these data to ceiling and visibility is a neural network. A total of 39 such neural networks are developed, one for each of 39 terminal aerodrome forecast stations in the northwest United States. These postprocessors are compared with a number of alternatives, including logistic regression, and model output statistics (MOS) derived from the Aviation Model/Global Forecast System. It is found that the performance of the neural networks is generally superior to logistic regression and MOS. Depending on the comparison, different measures of performance are examined, including the Heidke skill statistic, cross-entropy, relative operating characteristic curves, discrimination plots, and attributes diagrams. The extent of the improvement brought about by the neural network depends on the measure of performance, and the specific station.
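In the same spirit, a small neural-network postprocessor can be sketched with scikit-learn as below; the features, class definition, and network size are assumptions, not those of the study.

```python
# Sketch: a neural-network postprocessor mapping NWP output plus surface
# observations to a categorical ceiling/visibility class. Features, labels,
# and network size are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 3000
X = rng.normal(size=(n, 6))                 # e.g., model RH, wind, stability + surface obs
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 1).astype(int)  # 1 = low ceiling/visibility

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("test accuracy:", round(net.score(X_te, y_te), 3))

# Probabilistic output, as needed for reliability/attributes diagrams.
probs = net.predict_proba(X_te)[:, 1]
print("sample probabilities:", np.round(probs[:5], 2))
```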

Bottom-up forcing and the decline of Steller sea lions (Eumetopias jubatus) in Alaska: Assessing the ocean climate hypothesis

Trites, A.W., et al. (including C. Marzban), "Bottom-up forcing and the decline of Steller sea lions (Eumetopias jubatus) in Alaska: Assessing the ocean climate hypothesis," Fish. Oceanogr., 16, 46-67, doi:10.1111/j.1365-2419.2006.00408.x, 2007.

1 Jan 2007

Declines of Steller sea lion (Eumetopias jubatus) populations in the Aleutian Islands and Gulf of Alaska could be a consequence of physical oceanographic changes associated with the 1976–77 climate regime shift. Changes in ocean climate are hypothesized to have affected the quantity, quality, and accessibility of prey, which in turn may have affected the rates of birth and death of sea lions. Recent studies of the spatial and temporal variations in the ocean climate system of the North Pacific support this hypothesis. Ocean climate changes appear to have created adaptive opportunities for various species that are preyed upon by Steller sea lions at mid-trophic levels. The east–west asymmetry of the oceanic response to climate forcing after 1976–77 is consistent with both the temporal aspect (populations decreased after the late 1970s) and the spatial aspect of the decline (western, but not eastern, sea lion populations decreased). These broad-scale climate variations appear to be modulated by regionally sensitive biogeographic structures along the Aleutian Islands and Gulf of Alaska, which include a transition point from coastal to open-ocean conditions at Samalga Pass westward along the Aleutian Islands. These transition points delineate distinct clusterings of different combinations of prey species, which are in turn correlated with differential population sizes and trajectories of Steller sea lions. Archaeological records spanning 4000 yr further indicate that sea lion populations have experienced major shifts in abundance in the past. Shifts in ocean climate are the most parsimonious underlying explanation for the broad suite of ecosystem changes that have been observed in the North Pacific Ocean in recent decades.

Cluster analysis for verification of precipitation fields

Marzban, C., and S. Sandgathe, "Cluster analysis for verification of precipitation fields," Weather Forecast., 21, 824-838, 2006.

1 Oct 2006

A statistical method referred to as cluster analysis is employed to identify features in forecast and observation fields. These features qualify as natural candidates for events or objects in terms of which verification can be performed. The methodology is introduced and illustrated on synthetic and real quantitative precipitation data. First, it is shown that the method correctly identifies clusters that are in agreement with what most experts might interpret as features or objects in the field. Then, it is shown that the verification of the forecasts can be performed within an event-based framework, with the events identified as the clusters. The number of clusters in a field is interpreted as a measure of scale, and the final "product" of the methodology is an "error surface" representing the error in the forecasts as a function of the number of clusters in the forecast and observation fields. This allows for the examination of forecast error as a function of scale.

MOS, Perfect Prog, and reanalysis

Marzban, C., S. Sandgathe, and E. Kalnay, "MOS, Perfect Prog, and reanalysis," Mon. Weather Rev., 134, 657-663, doi:10.1175/MWR3088.1, 2006.

1 Feb 2006

Statistical postprocessing methods have been successful in correcting many defects inherent in numerical weather prediction model forecasts. Among them, model output statistics (MOS) and perfect prog have been most common, each with its own strengths and weaknesses. Here, an alternative method (called RAN) is examined that combines the two, while at the same time utilizing the information in reanalysis data. The three methods are examined from a purely formal/mathematical point of view. The results suggest that whereas MOS is expected to outperform perfect prog and RAN in terms of mean squared error, bias, and error variance, the RAN approach is expected to yield more certain and bias-free forecasts. It is suggested therefore that a real-time RAN-based postprocessor be developed for further testing.

Inventions

System and Methods for Tracking Finger and Hand Movement Using Ultrasound

Record of Invention Number: 47931

John Kucewicz, Brian MacConaghy, Caren Marzban

Disclosure

10 Jan 2017
