This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS
Demystifying Iddq Data With Process Variation for Automatic Chip Classification
Chia-Ling (Lynn) Chang and Charles H.-P. Wen
Abstract— Iddq testing is an integral component of test suites for the screening of unreliable devices. As the scale of silicon technology continues to shrink, Iddq values and their fluctuations increase. In addition, increased design complexity makes defect-induced leakage currents difficult to differentiate from full-chip currents. Consequently, traditional Iddq methods result in more test escapes and yield loss. This brief proposes a new test method, called σ-Iddq, that provides: 1) Iddq analysis with process-parameter deduction and 2) an algorithm for automatic chip classification, called collective analysis, that does not require manually determined threshold values. We randomly inserted multiple defects into samples of ISCAS'89 and IWLS'05 benchmark circuits. Experimental results demonstrate that the proposed σ-Iddq method achieves higher classification accuracy than single-threshold Iddq testing or ΔIddq in a 45-nm technology. The overall classification accuracy of collective analysis averaged 99.28% and 99.70% on σ-Iddq data from process-parameter deductions with average-case search and multilevel search, respectively, demonstrating that the influence of process variation and design scaling can be significantly reduced to enable better identification of defective chips.
Index Terms— Circuit testing, data mining, Iddq.
I. INTRODUCTION
Iddq (leakage current) testing is an integral component of test suites for the screening of unreliable devices produced using CMOS technology. Traditional Iddq testing uses a single threshold to classify chips. Many variants, such as current signatures, ΔIddq, and current ratios, were later proposed to reduce variations in fault-free Iddq measurement. Although these methods are simple and easy to implement, their screening resolution often depends on the quality of the Iddq measurement. Furthermore, determining an effective threshold value for chip classification can be a daunting task.
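As a minimal sketch of the single-threshold classification described above, consider the following (the function name, data, and threshold value are hypothetical, chosen only for illustration):

```python
# Sketch of single-threshold Iddq screening: a chip passes if every
# per-pattern Iddq measurement stays below one fixed threshold.
# All names and values here are illustrative, not from the brief.

def single_threshold_screen(iddq_per_pattern, threshold_ua):
    """Return 'pass' if all Iddq readings (in microamps) stay below the threshold."""
    return "pass" if max(iddq_per_pattern) < threshold_ua else "fail"

chip_a = [1.2, 1.3, 1.1, 1.4]   # fault-free-looking readings (uA)
chip_b = [1.2, 9.8, 1.1, 1.3]   # one pattern activates a defect (uA)

print(single_threshold_screen(chip_a, 5.0))  # pass
print(single_threshold_screen(chip_b, 5.0))  # fail
```

The difficulty the text points out is precisely the choice of `threshold_ua`: too low and fault-free chips fail (yield loss), too high and defective chips pass (test escapes).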
Many advanced methods, such as wafer-level spatial-correlation methods, graphical Iddq, and CIddq, have been introduced to compensate for the waning effectiveness of Iddq testing.
Spatially based methods require a threshold value to classify dies according to their location on the wafer. The nearest-neighbor method uses neighboring dies as references to provide a basis for Iddq comparison. However, the threshold value is wafer-dependent and difficult to apply to other wafers. Graphical Iddq removes outliers from current signatures to improve classification accuracy; however, this approach requires visual identification by engineers. CIddq uses variation reduction to combat the effect of process variation, but it requires test modification for Iddq patterns.
Iddq testing has been replaced by other test methods, such as voltage testing, due to the difficulty of differentiating the Iddq distributions of good chips from those of bad chips, which results in an increase in the number of test escapes and yield loss. Fig. 1 shows an example of the Iddq distributions of an ISCAS'85 circuit simulated under two different process technologies, where each bad chip is injected with one random defect. As shown in Fig. 1(a), it is impossible to prevent both test escapes and yield loss once a threshold has been decided. In Fig. 1(b), all defective (bad) chips are test escapes that cannot be separated, because the bad-chip Iddq distribution is encompassed entirely within the Iddq distribution of good chips.
Manuscript received October 23, 2013; revised February 28, 2014; accepted May 4, 2014.
The authors are with the Department of Electrical and Computer Engineering, National Chiao Tung University, Hsinchu 300, Taiwan (e-mail: firstname.lastname@example.org; email@example.com).
Digital Object Identifier 10.1109/TVLSI.2014.2326081
Fig. 1. Iddq distributions under 90- and 45-nm processes.
Fig. 2. (a) ΔIddq and (b) current ratio in a 45-nm technology.
Increases in the number of test escapes can result in tremendous losses for circuit designs.
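The overlap problem behind these losses can be sketched numerically: draw two hypothetical, overlapping Iddq populations and count both error types at any chosen cutoff (the means, spreads, and sample sizes below are invented for illustration only):

```python
# Sketch of why a single threshold fails once distributions overlap:
# hypothetical good-chip and bad-chip Iddq populations whose ranges
# overlap, with escapes and yield loss counted at a chosen threshold.
import random

random.seed(0)
good = [random.gauss(10.0, 2.0) for _ in range(1000)]   # fault-free Iddq (uA)
bad = [random.gauss(13.0, 2.0) for _ in range(1000)]    # defective Iddq (uA)

def escapes_and_loss(threshold):
    test_escapes = sum(1 for i in bad if i <= threshold)  # bad chips passed
    yield_loss = sum(1 for i in good if i > threshold)    # good chips failed
    return test_escapes, yield_loss

# Because the two populations overlap, every threshold trades one error
# for the other; no cut eliminates both.
assert all(sum(escapes_and_loss(t)) > 0 for t in range(5, 20))
```

Sweeping the threshold only trades test escapes against yield loss, which is the trade-off Fig. 1 depicts.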
To combat this problem of Iddq testing, we applied ΔIddq and current ratio within the same circuit; the results are shown in Fig. 2. ΔIddq attempts to remove pattern-dependent effects by computing the differences between Iddq values for successive patterns and making decisions accordingly. The current ratio (the ratio of maximum Iddq to minimum Iddq on each chip) is used to derive a regression function for the identification of faulty chips as outliers.
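The two metrics just defined can be sketched directly from a chip's per-pattern Iddq readings (function names and the sample data are our own, for illustration):

```python
# Sketch of the two metrics described above, applied to hypothetical
# per-pattern Iddq readings (in microamps) from one chip.

def delta_iddq(readings):
    """Differences between Iddq values for successive patterns (deltaIddq)."""
    return [b - a for a, b in zip(readings, readings[1:])]

def current_ratio(readings):
    """Ratio of maximum Iddq to minimum Iddq on one chip."""
    return max(readings) / min(readings)

readings = [1.0, 1.2, 6.5, 1.1]   # third pattern activates a defect
print(delta_iddq(readings))       # successive differences, roughly [0.2, 5.3, -5.4]
print(current_ratio(readings))    # 6.5
```

A large jump in `delta_iddq` or an unusually high `current_ratio` flags a candidate defect, but as Fig. 2 shows, process variation blurs both signals in scaled technologies.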
Fig. 2(a) shows the difficulty of selecting a good threshold for ΔIddq, as with other single-threshold approaches. Fig. 2(b) shows the difficulty of differentiating the current ratios of fault-free and faulty chips.
Iddq testing faces two particular challenges: 1) the leakage current resulting from defects becomes increasingly small in scaled designs and 2) leakage in normal cells grows considerably with increasing process variation. Thus, the leakage distributions of a good die and a bad die overlap due to design scaling and process variation. Therefore, the proposed σ-Iddq method leverages several prior studies and combines their techniques to deal with the worsening Iddq problem.
The proposed σ-Iddq was inspired by the notion of residual Iddq (Ĩddq), defined as the difference between the measured Iddq (Iddq) of the die under test and the estimated Iddq (Îddq) computed from a good die. That is

Ĩddq = Iddq − Îddq(x, y, P)    (1)

where x and y represent the coordinate location on a wafer and P comprises the process parameters (and/or patterns). In this brief, we redefine the estimated Iddq in (1) as estimated fault-free