Past Webinars


All may be streamed at no charge at any time.

Webinars on our Online Training Center
Stream 'technical' webinars on our Online Training Center for a taste of what is contained in our online courses!
The webinars are listed as "free courses". Each takes about 60 minutes.
Q&A files (as an Excel document) and slides (as a PDF) can also be downloaded from the Training Center.

Webinars connected to our Nondetects And Data Analysis course:
1. Intro to Nondetects and Data Analysis
An introduction to data analysis for variables with nondetects.
Materials from the webinar (including slides and info) can also be downloaded as a zip file:
Download here

2. Fitting Distributions to Data with Nondetects
How to decide which distribution best fits your data. Making the most of small datasets with nondetects.

3. Testing Groups of Data With Multiple DLs
Analogs to Analysis of Variance and the Kruskal-Wallis tests for data with nondetects at multiple detection limits. Also, how to perform multiple comparison tests with nondetects.

4. The Mystery of Nondetects: How Censored Data Methods Work
Substitution of a constant times the reporting/detection limit (for example 1/2 DL) introduces bias into estimates of mean, standard deviation and upper confidence limits. The better alternative is to use methods for censored data. How these methods work is not widely understood by the environmental science community. The most frequent question I am asked about them is "But what number do I put in for the nondetects when I use them?" The answer is "you don't". The reasons why this is so, and how these methods work, will be presented in this webinar.
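As a quick illustration (not taken from the webinar materials), here is a minimal base-R sketch of the substitution problem; the lognormal data and the detection limit are invented for this example.

    # Invented lognormal concentrations with a single hypothetical detection limit
    set.seed(1)
    conc <- rlnorm(1000, meanlog = 0, sdlog = 1)
    DL <- 1
    censored <- conc < DL                      # values that would be reported as <DL

    mean(conc); sd(conc)                       # statistics from the complete data
    subst <- ifelse(censored, DL / 2, conc)    # substitute 1/2 DL for the nondetects
    mean(subst); sd(subst)                     # substitution shifts both estimates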

5. Correlation and Regression for Data with Nondetects
You can do it all, without substituting fabricated values.

6. Trend Analysis for Data with Nondetects
Are concentrations changing over time? Can I tell even when multiple detection limits have been used?
Parametric and nonparametric methods for data with nondetects, including the Seasonal Kendall test for trend.
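For fully detected data, a baseline (non-seasonal) Mann-Kendall test can be run in base R as Kendall's tau of concentration versus time; the webinar presents the censored and seasonal versions that handle nondetects. The data below are invented.

    # Invented 20-year record with a slight downward trend
    set.seed(2)
    year <- 2000:2019
    conc <- 5 - 0.1 * (year - 2000) + rnorm(20, sd = 0.5)
    cor.test(conc, year, method = "kendall")   # tau < 0 and a small p-value suggest a downtrend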

7. Incorporating Greater Than and Less Than Values in Data Analysis
One way of representing censored data in a database is the "interval endpoints" format. Two columns are used: the first holds the low end of possible values for the variable (often 0 for censored chemical data) and the second holds the highest possible value (often the detection limit). One benefit of storing data this way is that it allows 'greater thans' to also be stored in the same two columns. Most censored methods for data analysis can incorporate both 'less thans' and 'greater thans' as interval-censored data and compute everything from means to hypothesis tests and regression. This webinar will give you examples of how to do these types of analyses.
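Here is a small base-R sketch of that two-column layout (the values are invented). Nondetects run from 0 up to their detection limit, detects have identical endpoints, and 'greater thans' run from the reported value upward.

    # Interval-endpoints format: low end and high end of possible values for each observation
    conc_low  <- c(0,   0,   2.3, 4.1, 10 )   # nondetects start at 0; the '>10' starts at 10
    conc_high <- c(0.5, 1.0, 2.3, 4.1, Inf)   # 0.5 and 1.0 are detection limits; detects have low == high
    cens_data <- data.frame(conc_low, conc_high)
    cens_data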

8. Matched Pair Tests for Data with Nondetects
Paired differences such as concentrations Before/After a treatment at a series of sites or Upstream/Downstream of an outfall at several points in time are usually analyzed for change with a paired t-test (parametric) or signed-rank test (nonparametric). But what methods can be used when data include values below detection or quantitation limits without substitution of numbers for nondetects? This video will present what methods are available and demonstrate their use with R software.
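For context, the two familiar tests named above look like this in base R on fully detected paired data (the numbers are invented); the webinar demonstrates their counterparts for data containing nondetects.

    # Invented upstream/downstream pairs measured at six points in time
    upstream   <- c(1.2, 0.8, 2.5, 1.9, 3.1, 0.70)
    downstream <- c(1.9, 1.1, 2.4, 2.8, 3.9, 1.05)
    t.test(downstream, upstream, paired = TRUE)        # paired t-test (parametric)
    wilcox.test(downstream, upstream, paired = TRUE)   # signed-rank test (nonparametric)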

9. NADA2: Everything You Can Do With Nondetects
What statistical analyses can you do today for data with nondetects, without substituting numbers like ½ the detection limit? Essentially every analysis you can do when there are no nondetects. Of course there's estimating means and other descriptive statistics. You may have moved on to computing confidence intervals on those statistics. But there's much more. In our NADA online course right now, and coming in early 2021 as the NADA2 package for R, are routines for drawing boxplots, scatterplots with fitted models, and probability plots to determine how well a standard distribution such as the normal, lognormal or gamma fits the data, all while incorporating the information in nondetects. You can compute prediction and tolerance intervals, or perform hypothesis tests (parametric, nonparametric and permutation varieties). Follow that up with multiple comparison tests to determine which groups differ from others. You can compute correlation coefficients, and build and evaluate regression models using AIC or other statistics to find the best model. You can perform trend analysis such as the Seasonal Kendall test while adjusting for the effects of exogenous variables other than time. You can even run multivariate procedures such as cluster analysis, NMDS plots, PCA (Principal Components Analysis), and multivariate group and trend tests. Take a tour of what this R package can do.


Webinars related to our Applied Environmental Statistics courses:
10. Intro to R
Break down the barrier to getting started with R! Our AES course is also an introduction to using R software.
R is one of the most widely used statistics software packages in the world. Its versatility as a programming language and its interconnectivity with email, web page generation and other computer processes make it a bit daunting for people just starting to use it for data analysis. It need not be that way. This webinar introduces you to R software and its use for data analysis. You'll learn how to type commands, install and load packages, and use the pull-down menus of R Commander (Rcmdr) to compute confidence intervals and a test for whether the mean exceeds a numerical standard.
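As a taste of those commands, here is a minimal sketch (the concentrations and the standard of 10 are invented for illustration):

    install.packages("Rcmdr")   # install the R Commander package (done once)
    library(Rcmdr)              # load it; this opens the pull-down menus

    conc <- c(8.2, 11.5, 9.7, 14.1, 10.3, 12.8, 9.9)   # invented concentrations
    t.test(conc)                                       # 95% confidence interval on the mean
    t.test(conc, mu = 10, alternative = "greater")     # does the mean exceed a standard of 10?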

11. Never Worry About A Normal Distribution Again!
Permutation Tests and Bootstrapping
Traditional parametric tests for differences in means (Analysis of Variance, t-tests and more) as well as t-intervals require data within groups to follow a normal distribution. If this isn't so, p-values may be inflated so that differences in means are not detected, and confidence intervals are often too wide. Permutation tests and bootstrap intervals avoid the normality assumption, returning accurate p-values and interval widths while being distribution-free. These methods are widely used in a variety of applied statistics fields including environmental science, but have not been sufficiently used in water quality, air quality and soils applications. This webinar will describe how these methods work, where you can find them, and demonstrate their benefits over older traditional methods.
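A minimal base-R sketch of both ideas, using invented skewed data:

    # Two non-normal (lognormal) groups
    set.seed(3)
    x <- rlnorm(20, meanlog = 1.0, sdlog = 1)
    y <- rlnorm(20, meanlog = 1.5, sdlog = 1)
    obs_diff <- mean(y) - mean(x)

    # Permutation test: reshuffle the group labels and see how often a difference
    # at least as large as the observed one arises by chance
    pooled <- c(x, y)
    perm_diffs <- replicate(9999, {
      idx <- sample(length(pooled), length(y))
      mean(pooled[idx]) - mean(pooled[-idx])
    })
    p_value <- mean(abs(perm_diffs) >= abs(obs_diff))
    p_value

    # Bootstrap: resample one group with replacement for a percentile interval on its mean
    boot_means <- replicate(9999, mean(sample(x, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))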

12. Which of These Things is Not Like the Others?
How Multiple Comparison Tests Work
Multiple comparison tests determine which groups differ from others. Why are they needed following an ANOVA or Kruskal-Wallis test? How do they work? There are familiar types such as Tukey's test, and a newish version called the False Discovery Rate. Learn why the False Discovery Rate is a method you should probably be using.
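In base R the False Discovery Rate adjustment is available as the Benjamini-Hochberg ("BH") method; here is a minimal sketch with invented data:

    # Adjust a set of raw p-values from pairwise comparisons
    p_raw <- c(0.001, 0.004, 0.019, 0.030, 0.041, 0.230, 0.560)
    p.adjust(p_raw, method = "BH")

    # Or let R run the pairwise tests and the FDR adjustment together
    set.seed(4)
    conc  <- c(rnorm(10, 5), rnorm(10, 7), rnorm(10, 5.5))
    group <- factor(rep(c("A", "B", "C"), each = 10))
    pairwise.t.test(conc, group, p.adjust.method = "BH")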


Less technical videos on environmental statistics are available on our Videos page.
Free to stream and watch.

1. Forty Years of Water Quality Statistics: What's Changed, What Hasn't?
An overview of how methods for interpreting water quality data changed from 1980 to 2019. Some folks are still using methods from the era of rotary-dial phones. You've upgraded your phone. How about updating your statistical methods?

2. How Many Observations Do I Need?
One of the most common questions I am asked is “How many observations do I need to compute a confidence interval or find a difference in a hypothesis test?” To answer this you'll need to know quite a bit of information first. This webinar will go over what information is needed for two-group parametric and nonparametric hypothesis tests (t-test and Wilcoxon rank-sum test). More information is provided in the new Second Edition of Statistical Methods in Water Resources [published by the US Geological Survey and available here].
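As one concrete example of the information required, base R's power.t.test() needs the difference you want to detect, the standard deviation, the significance level, and the desired power (the numbers below are invented):

    # How many observations per group to detect a difference of 2 units,
    # given a standard deviation of 3, alpha = 0.05, and 80% power?
    power.t.test(delta = 2, sd = 3, sig.level = 0.05, power = 0.8)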

3. Seven Perilous Errors in Environmental Statistics
Seven common errors to avoid!
Seven common errors in statistical analysis by environmental scientists all stem from an outdated understanding of statistics. I'll define the seven 'perilous errors' and explain how each can be avoided. They revolve around old ideas about hypothesis tests, p-values, using logarithms of data, evaluating what makes a good regression equation, evaluating outliers, and dealing with nondetects. Understanding why each error is perilous can save the scientist from publishing incorrect statements, using inefficient analysis methods, and wasting scarce financial resources. These errors have persisted through the years. Break the cycle and step into the 21st Century.

Plus VIDS: short (15-20 minute) videos on practical topics.
Most recent: VID#5: How Many Observations Are Censored Data Worth?
Sign up for our newsletter to stay informed of when new webinars and courses are posted.


Past attendees said this about our webinars:
"A great introduction to stuff I need to know, and a great review of things I once knew. You have an uncanny ability to convey things most people avoid into a language people can understand."
-- State agency staff

"Thanks Dennis, we really enjoyed the seminars and feel they will be very helpful in future data analyses."
-- Environmental consultant

