Demystifying Iddq Data With Process Variation for Automatic Chip Classification
by Chia-Ling Lynn Chang, Charles H.-P. Wen

IEEE Transactions on Very Large Scale Integration (VLSI) Systems

About

Year: 2015
DOI: 10.1109/TVLSI.2014.2326081
Subject: Hardware and Architecture / Electrical and Electronic Engineering / Software

Similar

Demystifying the Peer Review Process. Paul R. Lichter, 1993.

A Bank Adopts Automatic Data Processing. R. Hindle, 1960.

Processing of data from an automatic data log. Dag E.S. Campbell, Jan Ekstedt, Inger Eriksson, Lars Elve Larsson, 1970.

Automatic classification of urban pavements using mobile LiDAR data and roughness descriptors. L. Díaz-Vilariño, H. González-Jorge, M. Bueno, P. Arias, I. Puente, 2016.

Text



Demystifying Iddq Data With Process Variation for Automatic Chip Classification

Chia-Ling (Lynn) Chang and Charles H.-P. Wen

Abstract— Iddq testing is an integral component of test suites for the screening of unreliable devices. As the scale of silicon technology continues to shrink, Iddq values and their associated fluctuations increase. In addition, increased design complexity makes defect-induced leakage currents difficult to differentiate from full-chip currents. Consequently, traditional Iddq methods result in more test escapes and yield loss. This brief proposes a new test method, called σ-Iddq, to provide the following: 1) Iddq analysis with process-parameter deduction and 2) an automatic chip-classification algorithm, called collective analysis, that removes the need to manually determine threshold values. We randomly injected multiple defects into samples of ISCAS’89 and IWLS’05 benchmark circuits. Experimental results demonstrate that the proposed σ-Iddq method achieves higher classification accuracy than single-threshold Iddq testing or ΔIddq in a 45-nm technology. The overall classification accuracy of the collective analysis averaged 99.28% and 99.70% on σ-Iddq data from process-parameter deduction with average-case search and multilevel search, respectively, demonstrating that the influence of process variation and design scaling can be significantly reduced to enable better identification of defective chips.

Index Terms— Circuit testing, data mining, Iddq.

I. INTRODUCTION

Iddq (or leakage current) [1] testing is an integral component in test suites for the screening of unreliable devices produced using CMOS technology. Traditional Iddq testing uses a single threshold to classify chips. Many variants, such as current signature [2], ΔIddq [3], [4], and current ratio [5], were later proposed to reduce variations in fault-free Iddq measurement [13]. Although they are simple and easy to implement, the screening resolution of these methods often depends on the quality of Iddq measurement. Furthermore, determining an effective threshold value for chip classification can be a daunting task.
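For reference, the single-threshold scheme can be sketched in a few lines. This is a minimal illustration only; the array layout, current levels, and threshold value below are assumed for the example and are not taken from this brief.

```python
import numpy as np

def single_threshold_classify(iddq, threshold):
    """Flag a chip as defective if its worst-case Iddq exceeds the threshold.

    iddq: array of shape (n_chips, n_patterns) of per-pattern Iddq readings (A).
    Returns a boolean array in which True means the chip fails the Iddq test.
    """
    return iddq.max(axis=1) > threshold

# Synthetic data: five fault-free chips plus one chip with extra defect leakage.
rng = np.random.default_rng(0)
good = rng.normal(2.0e-6, 0.3e-6, size=(5, 10))         # ~2 uA fault-free leakage
bad = rng.normal(2.0e-6, 0.3e-6, size=(1, 10)) + 5e-6   # defect adds ~5 uA
print(single_threshold_classify(np.vstack([good, bad]), threshold=4.0e-6))
```

The difficulty noted above lies entirely in choosing the threshold argument: too low and fault-free chips with high process-induced leakage are rejected (yield loss); too high and small-defect chips escape.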

Many advanced methods, such as wafer-level spatial-correlation methods [6]–[8], graphical Iddq [9], and CIddq [10], have been introduced to compensate for the waning effectiveness of Iddq testing.

Spatially-based methods [6], [7] require a threshold value to classify dies according to their location on the wafer. The nearest-neighbor method [8] uses neighboring dies as references to provide a basis for Iddq comparison. However, the threshold value is wafer-dependent and difficult to apply to other wafers. Graphical Iddq [9] removes outliers from current signatures to improve classification accuracy; however, this approach requires visual identification by engineers. CIddq [10] uses variation reduction to combat the effect of process variation, but it requires test modification for Iddq patterns.
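The nearest-neighbor idea can be illustrated with a minimal sketch (not the exact procedure of [8]): each die is compared against a reference built from the dies surrounding it, so that a smooth spatial leakage trend cancels out. The 8-neighbor window and median statistic used here are assumptions for illustration.

```python
import numpy as np

def neighbor_residuals(wafer_iddq):
    """Subtract, for each die, the median Iddq of its adjacent dies on the wafer.

    wafer_iddq: 2-D array indexed by (row, column) die position.
    A residual far above the fault-free spread suggests a defect rather than
    the smooth spatial gradient caused by process variation.
    """
    rows, cols = wafer_iddq.shape
    residuals = np.zeros_like(wafer_iddq, dtype=float)
    for r in range(rows):
        for c in range(cols):
            neighbors = [wafer_iddq[rr, cc]
                         for rr in range(max(r - 1, 0), min(r + 2, rows))
                         for cc in range(max(c - 1, 0), min(c + 2, cols))
                         if (rr, cc) != (r, c)]
            residuals[r, c] = wafer_iddq[r, c] - np.median(neighbors)
    return residuals
```

Even in this simplified form, the decision threshold applied to the residuals remains wafer-dependent, which is the limitation noted above.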

Iddq testing has been replaced by other testing methods, such as voltage testing [11], because of the difficulty of differentiating the Iddq distributions of good chips and bad chips, which results in an increase in the number of test escapes and yield loss. Fig. 1 shows an example of the Iddq distributions of an ISCAS’85 circuit obtained through simulation with two different process technologies, where each bad chip is injected with one random defect. As shown in Fig. 1(a), it is impossible to prevent test escapes or yield loss once a threshold has been decided.


Fig. 1. Iddq distributions under 90- and 45-nm processes.

Fig. 2. (a) ΔIddq [4] and (b) current ratio [5] in a 45-nm technology.

In Fig. 1(b), all defective (bad) chips are test escapes that cannot be separated because the bad-chip Iddq distribution is encompassed entirely within the Iddq distribution of good chips.

Increases in the number of test escapes can result in tremendous losses for circuit designs.

To combat this problem of Iddq testing, we applied ΔIddq [4] and current ratio [5] to the same circuit, the results of which are shown in Fig. 2. ΔIddq attempts to remove pattern-dependent effects by computing the differences between Iddq values of successive patterns and making decisions accordingly. Current ratio (the ratio of the maximum Iddq to the minimum Iddq on each chip) is used to derive a regression function for identifying faulty chips as outliers.

Fig. 2(a) shows the difficulty of selecting a good threshold for ΔIddq, as with other single-threshold approaches. Fig. 2(b) shows the difficulty of differentiating the current ratios of fault-free and faulty chips.
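For concreteness, the two quantities compared in Fig. 2 can be computed as follows. The readings are made-up numbers, and the downstream decision rules (the ΔIddq threshold and the regression over current ratios) are not reproduced here.

```python
import numpy as np

def delta_iddq(iddq_per_pattern):
    """ΔIddq: differences between Iddq readings of successive test patterns."""
    return np.diff(iddq_per_pattern)

def current_ratio(iddq_per_pattern):
    """Current ratio: maximum Iddq divided by minimum Iddq on one chip."""
    return np.max(iddq_per_pattern) / np.min(iddq_per_pattern)

# Ten hypothetical Iddq readings (in amperes) from one chip.
readings = 1e-6 * np.array([2.1, 2.3, 2.0, 2.4, 2.2, 2.6, 2.1, 2.5, 2.3, 2.2])
print(delta_iddq(readings))      # pattern-to-pattern differences screened by ΔIddq
print(current_ratio(readings))   # max/min ratio fed to the outlier regression of [5]
```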

Iddq testing faces two particular challenges: 1) the leakage current resulting from defects becomes increasingly small in scaled designs and 2) the leakage of normal cells grows considerably with increasing process variation. Thus, the leakage distributions of a good die and a bad die overlap because of design scaling and process variation. Therefore, the proposed σ-Iddq leverages several studies and combines techniques from [12]–[14], [18], [24] to deal with the worsening Iddq problem.

The proposed σ-Iddq was inspired by [12], in which the notion of residual Iddq (Ĩddq) is defined as the difference between the measured Iddq (Iddq) of the die being tested and the estimated Iddq (Îddq) computed from a good die. That is

Ĩddq = Iddq − Îddq(x, y, P)    (1)

where x and y represent the coordinate location on the wafer and P comprises the process parameters (and/or patterns). In this brief, we redefine the estimated Iddq in (1) as estimated fault-free
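As a concrete reading of (1), the following minimal sketch computes a residual given any estimator of fault-free Iddq. The linear spatial estimator and its coefficients are hypothetical placeholders for illustration, not the model used in this brief.

```python
def residual_iddq(measured_iddq, estimate_fn, x, y, P):
    """Residual Iddq from (1): measured Iddq minus the estimated fault-free Iddq.

    estimate_fn(x, y, P) stands for any model of fault-free leakage at wafer
    location (x, y) under process parameters P; its exact form is left open.
    """
    return measured_iddq - estimate_fn(x, y, P)

# Hypothetical estimator: a simple linear spatial model with made-up coefficients.
def linear_estimate(x, y, P):
    base, kx, ky = P
    return base + kx * x + ky * y

# A residual well above the fault-free spread hints at defect-induced leakage.
print(residual_iddq(2.8e-6, linear_estimate, x=3, y=5, P=(2.0e-6, 1.0e-8, 2.0e-8)))
```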