With continued process scaling, device variation and leakage are increasing sharply. As the need for low-power systems grows, the supply voltage (VDD) has been scaled down to reduce both dynamic and leakage power, which makes SRAM operation at low supply voltage very challenging.
The minimum operating voltage, Vmin, must be satisfied; otherwise the array suffers write failure, read disturb failure, access failure, or retention failure. Looking inside the SRAM, each bitcell's Vmin differs across the array because of local variation: Vmin is worst at the top of the array and best at the bottom due to the RBL impact. A lower Vmin is important, but a smaller variation is also important for controllability.
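The array-level consequence of local variation can be sketched with a toy Monte Carlo model: if each bitcell's Vmin is drawn from a distribution, the whole array must operate at the worst (highest) bitcell Vmin, so the variation sigma matters as much as the mean. The function name and all voltage numbers below are illustrative assumptions, not data from either paper.

```python
import random

def array_vmin(n_cells=65536, mu=0.55, sigma=0.03, seed=0):
    """Toy Monte Carlo: each bitcell's Vmin is drawn from a Gaussian
    (mu/sigma in volts are illustrative, not from any foundry data).
    The array's Vmin is the worst (maximum) bitcell Vmin."""
    rng = random.Random(seed)
    return max(rng.gauss(mu, sigma) for _ in range(n_cells))

# A tighter local-variation sigma pulls in the worst-case tail and thus
# lowers the array Vmin, which is why controllability matters.
vmin_wide = array_vmin(sigma=0.03)
vmin_tight = array_vmin(sigma=0.01)
```

With the same random seed, shrinking sigma strictly lowers the worst-case Vmin, illustrating why a narrower distribution is valuable even at the same mean.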
Usually the SRAM Vmin is limited by either write failure or read disturb failure. It is difficult to predict a priori which of the two failure modes dominates, because that depends on many factors, including the bitcell architecture and the technology node. Various read-assist and write-assist circuits have been proposed and implemented: some target BL (bit line) optimization, while others target WL (word line) and Vddc optimization.
ISSCC 2018 included two relevant presentations: a 10nm FinFET SRAM from Intel and a 7nm FinFET SRAM from Samsung. Without diving into the presentation details, here we take a peek at the Vmin results and the assist circuits they propose to lower Vmin.
Intel's SRAM comes in two versions: HDC, optimized for high density, and LVC, optimized for low voltage and therefore low power. The difference comes from different PU/PG/PD fin ratios in the FinFET bitcell.
The Intel presentation proposes S-WL (stepped WL) to improve write Vmin.
Write Vmin is improved by 150mV while read Vmin stays the same.
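The presentation's stepped-WL waveform can be illustrated with a minimal two-level timing model. The levels, step time, and function name below are assumptions for illustration only, not Intel's actual values: the wordline first rises to an intermediate level and then steps to full swing partway through the write cycle.

```python
def stepped_wl(t, t_step=1.0, v_low=0.6, v_high=0.8):
    """Hypothetical stepped-WL waveform (volts vs. normalized time).
    Before t=0 the WL is off; it then sits at an intermediate level
    v_low until t_step, and steps up to full swing v_high afterward.
    All levels and timings are illustrative, not from the paper."""
    if t < 0:
        return 0.0
    return v_low if t < t_step else v_high

# Sample the waveform across the write cycle.
waveform = [stepped_wl(t / 2) for t in range(-2, 6)]
```

The two-level shape is the essence of the technique; the actual circuit implementation and voltage choices are in the presentation itself.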
Comparison of HDC and LVC Vmin:
The Samsung presentation proposes DWD to optimize the voltage loss on the BL (bit line) and shows a dramatic improvement in overall Vmin.
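The source does not detail DWD's mechanism, but the motivation is clear from a generic IR-drop view of the write bit line: resistance between the write driver and a far bitcell means the cell never sees a clean 0 V, eroding write margin. The function name and all current/resistance values below are assumptions for illustration, not Samsung's numbers.

```python
def bl_voltage_at_cell(v_drive=0.0, i_write=50e-6, r_bl=2000.0):
    """Toy IR-drop model of voltage loss on the write bit line
    (all values illustrative, not from the Samsung paper).
    Driving the BL toward 0 V, the far bitcell sees
    v_drive + i_write * r_bl rather than a clean 0 V."""
    return v_drive + i_write * r_bl

# Halving the effective BL resistance halves the loss seen at the cell,
# which is the kind of knob a BL-optimization assist can turn.
loss_far = bl_voltage_at_cell(r_bl=2000.0)
loss_near = bl_voltage_at_cell(r_bl=1000.0)
```

Any assist that reduces this loss gives the far cells the same write margin as the near ones, pulling in the worst-case write Vmin.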