Volume 2 is also intended to help reduce analytical variation in the measurement of soil health indicators. This is important because, as the standardization of NRCS inherent soil property characterization methods has shown, standardized methods make large‐scale data integration and comparison feasible. Without rigorous standardization of soil health methods, variation among laboratories will hinder evaluation of changes over time and space and the development of interpretations for various soil types and climate scenarios. This will, in turn, make regional and national compilations of soil health data very difficult to interpret.
Standardization of methods and protocols, along with appropriate proficiency testing, will facilitate collection of high‐quality, highly interpretable data, which is needed to develop and use regionally appropriate interpretation functions (i.e., scoring algorithms). Those algorithms transform raw laboratory data into unitless (0 to 1) values that show how well a specific soil is performing a production or environmental function. Such ratings can then be used for on‐farm management decision making. Private and public soil testing laboratories that use broadly standardized methods will therefore have the advantage of being able to offer broadly validated soil health testing and interpretation, using functions and recommendations developed from a large dataset achieved through multiorganization public‐private partnership contributions.
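As a concrete illustration of what such a scoring algorithm does, the sketch below maps a raw indicator value onto a unitless 0 to 1 score using a simple logistic ("more is better") curve. The functional form and the parameter values are illustrative assumptions, not published SMAF or CASH coefficients.

```python
# Minimal sketch of a "more is better" scoring curve of the kind used in
# soil health interpretation frameworks. The logistic form and the example
# parameters (midpoint, slope) are hypothetical, chosen only to illustrate
# how a raw measurement is transformed into a 0-1 score.
import math

def score_more_is_better(value, midpoint, slope):
    """Map a raw indicator value onto a unitless 0-1 score.

    midpoint : raw value that scores 0.5
    slope    : steepness of the response around the midpoint
    """
    return 1.0 / (1.0 + math.exp(-slope * (value - midpoint)))

# Example: hypothetical soil organic carbon value (g kg-1) and curve parameters
soc = 18.0
print(round(score_more_is_better(soc, midpoint=15.0, slope=0.4), 2))  # ~0.77
```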
Interpretation of Soil Health Information
Several nationally appropriate tools, including the Revised Universal Soil Loss Equation, Version 2 (RUSLE2), Soil Conditioning Index (SCI), Water Erosion Prediction Project (WEPP), Wind Erosion Prediction System (WEPS), AgroEcosystem Performance Assessment Tool (AEPAT), and Soil Management Assessment Framework (SMAF), have been developed to help interpret soil health related data (USDA‐NRCS, 2019b). RUSLE2 estimates soil loss due to rill and inter‐rill erosion caused by rainfall on cropland (Renard et al., 2011; USDA‐ARS, 2015). The SCI combines information from the Soil Tillage Intensity Rating (STIR), a N‐leaching index, and RUSLE2 to show producers how their management decisions are affecting their soil resources; it is widely used in NRCS conservation planning. AEPAT is a research‐oriented index methodology that ranks agroecosystem performance among management practices for chosen functions and indicators (Liebig et al., 2004; Wienhold et al., 2006). WEPP is a process‐based, distributed‐parameter, continuous‐simulation erosion prediction model for use on personal computers (USDA‐ARS, 2017), and WEPS predicts many forms of soil erosion by wind, including saltation, creep, and suspension (USDA‐ARS, 2018). Without question, wind‐, water‐, and anthropogenic‐induced soil erosion continues to be a global problem (Karlen & Rice, 2015) and must be the first factor mitigated to truly improve soil health, because erosion is an advanced symptom of degradation that reflects loss of soil organism habitat, stable aggregation, and other critical soil functions.
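As background on the factor structure these erosion tools build upon, the sketch below shows the classic USLE product of rainfall erosivity, soil erodibility, slope length and steepness, cover‐management, and support practice factors. RUSLE2 itself is a far more detailed, process‐based simulation, and the factor values used here are purely hypothetical.

```python
# Minimal sketch of the classic USLE factor product that RUSLE2 refines.
# RUSLE2 is a daily, process-based simulation, not this simple product;
# the factor values below are hypothetical and for illustration only.
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A as the product of rainfall erosivity (R),
    soil erodibility (K), slope length-steepness (LS), cover-management (C),
    and support practice (P) factors."""
    return R * K * LS * C * P

# Hypothetical factor values for a single cropland scenario
print(round(usle_soil_loss(R=120, K=0.32, LS=1.5, C=0.2, P=0.5), 2))  # ~5.8 tons/acre/yr
```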
Soil health indicator measurements, when coupled with an available assessment framework, complement soil erosion tools because they can directly and more definitively detect less advanced symptoms of soil health degradation across diverse management systems. Laboratory data without field‐level information can be difficult to interpret or use for management decisions, and should only be used when supplemented with qualitative, in‐field assessments of soil health and an understanding of the past and current management system in use.
Data collected over time from the same field can be used to monitor soil health, but this may take years to become of value to producers or organizations because it requires establishing a baseline and sampling repeatedly. Use of soil health assessment frameworks allows single‐field indicator measurements to be interpreted and used for decision making by leveraging a wealth of research conducted over the last 50 yr and continued targeted data collection. The first such framework, the SMAF, was developed collaboratively between ARS and NRCS (Andrews et al., 2004). Stott et al. (2010) and Wienhold et al. (2009) improved the SMAF by providing additional indicator scoring curves, thus improving its utility for both crop and pasture lands. The SMAF uses broad soil taxonomic groups (suborders) as a foundation for assessment and allows curve modification based on inherent soil suborder characteristics, which is often essential as a contextual basis for indicator interpretation.
By design, SMAF assessments are soil‐ and site‐specific, because they depend on soil, climate, and human values such as intended land use, management goals, and environmental sensitivity. A purported SMAF strength is that all of those factors can be manipulated by the user (primarily researchers), which produces subtle changes in the scoring curves; some argue this is not an advantage because it makes the process too complex for producers and their service providers. The approach taken by the SMAF was subsequently adapted for high‐throughput, public laboratory soil health testing in New York State by Idowu et al. (2008). The resulting Comprehensive Assessment of Soil Health (CASH) was designed to evaluate soil functioning with respect to crop production and environmental impact and to provide producers with a soil health status report similar to the soil fertility reports commonly provided by soil testing labs. Most CASH scores are effectively percentile ratings, comparing a measured value to the known population distribution within a soil textural group. CASH was based on the SMAF but used indicator methods with faster analytical procedures to accommodate a high‐throughput laboratory setting. Furthermore, because CASH was originally developed solely for New York, its scoring functions varied by texture but were not adjusted for other inherent soil characteristics associated with taxonomic classification or climate.
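The sketch below illustrates the percentile‐style scoring idea: a measured indicator value is placed on an assumed normal distribution for its textural group, yielding a 0 to 100 rating. The distribution parameters, the example indicator, and the normality assumption are illustrative only, not published CASH calibration data.

```python
# Minimal sketch of a percentile-style score of the kind CASH reports:
# a measured value is located on an assumed normal distribution of that
# indicator within a soil textural group. The group mean/sd values here
# are hypothetical, not published CASH calibration statistics.
import math

def percentile_score(value, group_mean, group_sd, more_is_better=True):
    """Return a 0-100 score: the cumulative percentile of `value` within
    the (assumed normal) textural-group distribution of the indicator."""
    z = (value - group_mean) / group_sd
    pct = 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pct if more_is_better else 100.0 - pct

# Example: active carbon (mg kg-1) measured on a loam; group statistics are hypothetical
print(round(percentile_score(550.0, group_mean=500.0, group_sd=120.0), 1))  # ~66.2
```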
The framework approach for interpreting measured soil health data is further discussed in Volume 1 (Chapter 5). In summary, both SMAF and CASH provide efficient comparisons of similar soils under diverse management and estimates of how well a particular field is functioning within the overall soil health continuum (van Es & Karlen, 2019). The key to robust interpretations is being able to compare soil samples from both agricultural and non‐agricultural ecosystems, as well as from different soil and crop management practices, using consistent, standard methods.
Utilizing Soil Health Assessments to Inform Soil Management Decisions
It was stated in the Foreword to Doran et al. (1994) that “scientists and lay persons have long recognized that the quality of two great natural resources – air and water – can be degraded by human activity. Unfortunately, few people have considered that the quality of soil can also be affected by differing uses and management practices. Interest in soil quality has heightened during the past 3 yr as a small cadre of soil scientists became more concerned about the role of soils in sustainable production systems and the linkages between soil characteristics and plant‐human health.” This reflects just one early step in the exponential progress made during the past three decades that has led from soil quality being a research niche to broad awareness of the critical importance of healthy soils to agriculture and societies in general.
Soil health considerations are currently being incorporated across the activities of many agriculture‐serving organizations nationally. For example, they have been incorporated into NRCS conservation planning and implementation programs. New soil health resource concerns, or constraints that can be documented by conservation planners, have been published (USDA‐NRCS, 2020a). These are also being embedded into key Conservation Practice