The U.S. government's Generally Accepted Government Auditing Standards offer a useful reference point. They use the concept of data reliability, which is defined as “a state that exists when data is sufficiently complete and error‐free to be convincing for its purpose and context.” Data reliability refers to the accuracy and completeness of data for a specific intended use, but it doesn't mean that the data is error‐free. Errors may be present, but they fall within a tolerable range, have been assessed for risk, and the data is still accurate enough to support the conclusions reached. In this context, reliable data is:

       Complete: Includes all the data elements and records needed

       Accurate: Free from measurement error

       Consistent: Obtained and used in a manner that is clear and can be replicated

       Correct: Reflects the data entered or calculated at the source

       Unaltered: Reflects source and has not been tampered with

      So, don't simply ask “Is the data accurate?” Instead, ask “Are we reasonably confident that the data presents a picture that is not significantly different from reality?”

      Shedding further light on the topic of bias in scientific data and research are some foundations that have made it their mission to improve data integrity and study repeatability. Two such organizations are the Laura and John Arnold Foundation (LJAF) and the Center for Open Science (COS). The LJAF's Research Integrity Initiative seeks to improve the reliability and validity of scientific research in areas ranging from government to philanthropy to individual decision making. The challenge is that people believe that if work is published in a journal, it is scientifically sound. That's not always true, since scientific journals have a bias toward new, novel, and successful research. How often do you read great articles about failed studies?

      LJAF promotes research that is rigorous, transparent, and reproducible. These three tenets apply equally well to reliability studies. Studies should be:

       Rigorous: Randomized and well‐controlled with sufficient sample sizes and durations.

       Transparent: Researchers explain what they intend to study, make the elements of the experiment easily accessible, and publish the findings regardless of whether they confirm the hypothesis.

       Reproducible: The work can be repeated independently, and the outcome is validated as consistent.

      The Center for Open Science also has a mission to increase the openness, integrity, and reproducibility of research. COS makes a great analogy to how a second‐grade student works in science class: observe, test, show your work, and share. These are also shared, universal values in the electronics industry, but things get in the way of living up to them. COS advocates spending more time on experiment design. This involves communicating the hypothesis and design, having an appropriate sample size, and using statistics correctly. Taking the time to do things right the first time prevents others from being led down the wrong path. COS also emphasizes that just because a study doesn't give the desired outcome or answer doesn't make the study worthless. It doesn't even mean that the study is wrong. It may simply mean that the problem being studied is more complicated than can be summed up in a single experiment or two.
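      As a concrete illustration of how an “appropriate sample size” follows from the statistical claim being made rather than from habit, the short Python sketch below uses the standard success‐run (zero‐failure) demonstration relationship, n ≥ ln(1 − C)/ln(R). The reliability and confidence targets are illustrative assumptions, not values taken from the text.

       import math

       def success_run_sample_size(reliability_target, confidence):
           # Minimum sample size for a zero-failure (success-run) demonstration test.
           # From the binomial model with zero observed failures:
           #   reliability_target ** n <= 1 - confidence
           #   n >= ln(1 - confidence) / ln(reliability_target)
           n = math.log(1.0 - confidence) / math.log(reliability_target)
           return math.ceil(n)

       # Illustrative targets only: demonstrate 95% reliability at 90% confidence.
       print(success_run_sample_size(0.95, 0.90))  # 45 units, all of which must pass
       # Tightening the claim to 99% reliability at 90% confidence raises the burden sharply.
       print(success_run_sample_size(0.99, 0.90))  # 230 units

      The required sample size is driven entirely by the reliability and confidence being claimed; a sample chosen for convenience cannot be made to support a stronger claim after the fact.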

      Ultimately, ignoring data and analysis biases can lead to catastrophe. The Harvard Business Review published a paper (Tinsley et al. 2011) with case studies illustrating the harmful impacts of bias. The Toyota case study shows the consequences of outlier bias: ignoring a sharp increase in sudden acceleration complaints, the “near misses,” led to tragedy. The Apple iPhone 4 antenna example illustrates asymmetric attention bias. The problem with signal strength was well known but was ignored because it was an old problem the public had tolerated – until it wasn't. So, now that some of the many biases and de‐biasing techniques out there have been discussed, is your reliability data reliable? How confident are you that it truly reflects reality (Tulkoff 2017)?

      2.4.1 Sources of Reliability Data

      Sources of reliability data include:

       Suppliers

       Repair, warranty, field

       Development tests: compliance and reliability tests

       Production tests

       Modeling and prediction

       Customer feedback, surveys

       Failure analysis reports

       Industry reports

      2.4.2 Reliability Data from Suppliers

      Getting reliability data from suppliers is a great way to gain experience in using and analyzing reliability data. Suppliers are a good source of ideas and practices that are standard for their commercial off‐the‐shelf (COTS) components. Suppliers are also a free or inexpensive source of education on quality and reliability data analysis. Ask for this data, and take advantage of their expertise.

      What kind of data is expected? The specific tests and quantity of data available depend on the industry. Be aware of the relevant industry standards and their requirements; but also be aware that these standards are “least common denominators.” Simply because a part meets a standard doesn't guarantee it will be reliable in every application. The same part may have very different probabilities of failure under different use conditions, as the sketch below illustrates. At a minimum, expect data on the component life at defined environmental and design conditions (specifications).

      What data can be requested? Plenty! Surprisingly few customers ask for data that is not volunteered but is often readily available. Ask for any quality and reliability data available for the parts of interest.

      When and how frequently should data be reviewed? Reliability data should be reviewed at initial part selection; when a part, process, or product changes; when a problem or failure occurs; and on a routine schedule for parts identified as critical to quality (CTQ) or performance.

      Who performs the testing and analysis? Analyses may be supplier performed, user performed (acceptance‐based testing), and/or performed by a third party or independent laboratory. Users should have CTQ suppliers under a scorecard process and should make reliability and quality data part of the supplier‐selection and ‐monitoring process.
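      To make the use‐condition point concrete, here is a minimal sketch assuming a two‐parameter Weibull life model combined with an Arrhenius‐style temperature acceleration factor; the characteristic life, shape factor, activation energy, and temperatures below are illustrative assumptions, not supplier values.

       import math

       def weibull_unreliability(t_hours, eta_hours, beta):
           # Probability of failure by time t for a two-parameter Weibull model:
           # F(t) = 1 - exp(-(t / eta) ** beta)
           return 1.0 - math.exp(-((t_hours / eta_hours) ** beta))

       def arrhenius_af(ea_ev, t_ref_c, t_hot_c, k_boltzmann_ev=8.617e-5):
           # Arrhenius acceleration factor between a reference and a hotter temperature.
           t_ref_k = t_ref_c + 273.15
           t_hot_k = t_hot_c + 273.15
           return math.exp((ea_ev / k_boltzmann_ev) * (1.0 / t_ref_k - 1.0 / t_hot_k))

       # Illustrative parameters: characteristic life 50,000 h at 55 C,
       # wear-out shape factor beta = 2.0, activation energy 0.7 eV.
       beta = 2.0
       eta_55c = 50_000.0
       af = arrhenius_af(ea_ev=0.7, t_ref_c=55.0, t_hot_c=85.0)
       eta_85c = eta_55c / af  # the hotter use condition shortens the characteristic life

       mission_hours = 20_000.0
       print(f"Acceleration factor, 55 C to 85 C: {af:.1f}")
       print(f"P(failure) in 20,000 h at 55 C: {weibull_unreliability(mission_hours, eta_55c, beta):.3f}")
       print(f"P(failure) in 20,000 h at 85 C: {weibull_unreliability(mission_hours, eta_85c, beta):.3f}")

      With these assumed numbers, the same part goes from roughly a 15% chance of failing during the mission at 55 °C to a near certainty at 85 °C, which is why a single pass/fail result against a standard says little about reliability in a specific application.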

      Areas to consider for supplier data and supplier performance monitoring include:

       Statistical process control (SPC) report data

       Outputs from a Continuous Improvement (CI) system

       Abnormal lot control data

       Process change notification (PCN) for all changes

       Process change approval (PCA) on all major changes (custom‐designed parts)

       Yield and in‐line monitor data

       Facility audit results

       ISO 9001 certification (or other relevant certifications or standards)

       Reliability monitor program data

       Storage and handling data

      For silicon die and packaging, examples of reliability data that may be available include data retention bake (DRB), electrostatic discharge (ESD), endurance test (END), highly accelerated stress test (HAST), high‐temperature operating life test (HTOL), latch‐up (LU), steam pressure pot (SPP), and temperature cycle (TC).

      

      Reliability probability and statistics is a complex, diverse area that people spend years mastering, so don't expect to get the analysis right the first time through! Properly applied statistics can help clarify an issue; but if misapplied, they can generate misleading results. So be skeptical and pessimistic regarding reliability data. Reliability statistics require careful, honest interpretation. Statistical probability should