The memory cell is the fundamental building block of computer memory: an electronic circuit that stores one bit of binary information. It must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). The stored value is maintained until it is changed by the set/reset process, and it can be accessed by reading the cell.
The SRAM (static RAM) memory cell is a type of flip-flop circuit, typically implemented using MOSFETs. SRAM cells require very little power to hold a stored value when not being accessed. A second type, DRAM (dynamic RAM), is based around MOS capacitors: charging or discharging a capacitor stores a 1 or a 0 in the cell. Because the charge in the capacitor slowly leaks away, it must be refreshed periodically, which makes DRAM consume more power; in exchange, DRAM achieves greater storage densities.
The memory cell can be implemented using different technologies, such as bipolar, MOS, and other semiconductor devices. It can also be built from magnetic material such as ferrite cores or magnetic bubbles.[1] Regardless of the implementation technology, the purpose of the binary memory cell is always the same: it stores one bit of binary information that can be accessed by reading the cell, and it is set to store a 1 and reset to store a 0.[2]
Significance
Logic circuits without memory cells are called combinational, meaning the output depends only on the present input. Memory, however, is a key element of digital systems. In computers, it allows both programs and data to be stored, and memory cells are also used to hold the output of combinational circuits temporarily for later use by digital systems.
Logic circuits that use memory cells are called
sequential circuits, meaning the output depends not only on the present input, but also on the history of past inputs.
This dependence on the history of past inputs makes these circuits stateful; it is the memory cells that store this state.
These circuits require a timing generator or clock for their operation.[3]
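The distinction between combinational and sequential circuits can be sketched in a few lines of code. The gated D latch below is a hypothetical illustration of a one-bit memory cell, not a circuit taken from the text; the point is only that its output depends on past inputs, not just present ones.

```python
def and_gate(a, b):
    """Combinational: output depends only on the present inputs."""
    return a & b

assert and_gate(1, 1) == 1  # same inputs always give the same output


class DLatch:
    """Sequential: a level-sensitive D latch remembers the last value
    it saw while enabled - the stored bit is its state."""

    def __init__(self):
        self.q = 0  # the stored state (the "memory cell")

    def step(self, d, enable):
        if enable:         # transparent: follow the data input
            self.q = d
        return self.q      # opaque: hold the previously stored value


latch = DLatch()
latch.step(d=1, enable=1)                # store a 1
assert latch.step(d=0, enable=0) == 1    # input d=0, output 1 ...
latch.step(d=0, enable=1)                # store a 0
assert latch.step(d=0, enable=0) == 0    # ... same input, different output
```

The two final assertions feed the latch the same present input (`d=0, enable=0`) yet observe different outputs, which is exactly the history dependence the text describes.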
Computer memory used in most contemporary computer systems is built mainly out of DRAM cells; because the DRAM cell layout is much smaller than SRAM's, it can be packed more densely, yielding cheaper memory with greater capacity. Since the DRAM memory cell stores its value as the charge on a capacitor, and current leakage causes that charge to decay, its value must be constantly rewritten. This is one of the reasons DRAM cells are slower than the larger SRAM cells, whose value is always available. It is also why SRAM memory is used for the on-chip cache included in modern microprocessor chips.[4]
On December 11, 1946, Freddie Williams applied for a patent on his cathode-ray tube (CRT) storage device (the Williams tube), which held 128 40-bit words. It was operational in 1947 and is considered the first practical implementation of random-access memory (RAM).[5] That year, the first patent applications for magnetic-core memory were filed by Frederick Viehe.[6][7] Practical magnetic-core memory was developed by An Wang in 1948 and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialised with the Whirlwind computer in 1953.[8] Ken Olsen also contributed to its development.[9]
Semiconductor memory began in the early 1960s with bipolar memory cells, made of bipolar transistors. While bipolar memory improved performance, it could not compete with the lower price of magnetic-core memory.[10]
SRAM typically has six-
transistor cells, whereas
DRAM (dynamic random-access memory) typically has single-transistor cells.[15][13] In 1965, Toshiba's Toscal BC-1411 electronic calculator used a form of capacitive bipolar DRAM, storing 180 bits of data in discrete memory cells consisting of germanium bipolar transistors and capacitors.[16][17] MOS technology is the basis for modern DRAM. In 1966,
Robert H. Dennard at the
IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was capable of building
capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell.[18] In 1967, Dennard filed a patent for a single-transistor DRAM memory cell, based on MOS technology.[19]
The first commercial bipolar 64-bit SRAM was released by Intel in 1969 with the 3101 Schottky TTL chip. One year later, Intel released the first DRAM integrated circuit chip, the Intel 1103, based on MOS technology. By 1972, it had broken previous semiconductor memory sales records.[20] DRAM chips during the early 1970s had three-transistor cells, before single-transistor cells became standard in the mid-1970s.[15][13]
CMOS memory was commercialized by
RCA, which launched a 288-bit CMOS SRAM memory chip in 1968.[21] CMOS memory was initially slower than
NMOS memory, which was more widely used by computers in the 1970s.[22] In 1978,
Hitachi introduced the twin-well CMOS process with its HM6147 (4 kb SRAM) memory chip, manufactured with a 3 µm process. The HM6147 matched the performance of the fastest NMOS memory chip of the time while consuming significantly less power. With comparable performance at a fraction of the power, the twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computer memory in the 1980s.[22]
The two most common types of DRAM memory cells since the 1980s have been trench-capacitor cells and stacked-capacitor cells.[23] In trench-capacitor cells, holes (trenches) are etched into a silicon substrate and their side walls are used as the memory cell, whereas stacked-capacitor cells are the earliest form of three-dimensional memory (3D memory), stacking memory cells vertically in a three-dimensional cell structure.[24] Both debuted in 1984, when Hitachi introduced trench-capacitor memory and Fujitsu introduced stacked-capacitor memory.[23]
The following schematics detail the three most used implementations for memory cells:
The dynamic random access memory cell (DRAM);
The static random access memory cell (SRAM);
Flip-flops like the J/K shown below, using only
logic gates.
Operation
DRAM memory cell
Storage
The storage element of the
DRAM memory cell is the
capacitor labeled (4) in the diagram above. The charge stored in the capacitor degrades over time, so its value must be refreshed (read and rewritten) periodically. The nMOS transistor (3) acts as a gate: when open, it allows reading or writing; when closed, it isolates the capacitor so the value is stored.[35]
Reading
To read, the word line (2) drives a logic 1 (high voltage) into the gate of the nMOS transistor (3), making it conductive, so the charge stored in the capacitor (4) is transferred to the bit line (1). The bit line has a parasitic capacitance (5) that drains part of the charge and slows the reading process. The capacitance of the bit line determines the required size of the storage capacitor (4); it is a trade-off. If the storage capacitor is too small, the bit-line voltage takes too long to rise, or never rises above the threshold needed by the amplifiers at the end of the bit line. Since the reading process degrades the charge in the storage capacitor (4), its value is rewritten after each read.[36]
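The trade-off between the storage capacitor (4) and the bit-line parasitic capacitance (5) follows from charge conservation: the stored charge is shared between the two capacitances, so the final bit-line voltage is V = C_storage · V_cell / (C_storage + C_bitline) when the bit line is precharged to ground. The calculation below is a back-of-the-envelope sketch; the capacitance and voltage values are illustrative assumptions, not figures from any specific DRAM process.

```python
def bitline_swing(v_cell, c_storage, c_bitline, v_precharge=0.0):
    """Final bit-line voltage after charge sharing (charge conservation)."""
    q_total = c_storage * v_cell + c_bitline * v_precharge
    return q_total / (c_storage + c_bitline)

C_BITLINE = 100e-15  # 100 fF of bit-line parasitic capacitance (assumed)
VDD = 1.2            # cell voltage for a stored logic 1 (assumed)

# A reasonably sized storage capacitor leaves a usable swing for the
# sense amplifiers at the end of the bit line...
print(bitline_swing(VDD, 30e-15, C_BITLINE))  # about 0.277 V

# ...while a capacitor that is too small leaves almost nothing to sense.
print(bitline_swing(VDD, 2e-15, C_BITLINE))   # about 0.024 V
```

Note also that the computed voltage is what remains on *both* capacitances after sharing: the cell's original full-rail charge is gone, which is why the value must be rewritten after every read.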
Writing
The writing process is the simplest: the desired value, logic 1 (high voltage) or logic 0 (low voltage), is driven onto the bit line. The word line activates the nMOS transistor (3), connecting the bit line to the storage capacitor (4). The only requirement is to keep the transistor open long enough to ensure that the capacitor is fully charged or discharged before it turns off.[36]
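The read, write, refresh, and leakage behaviour described above can be pulled together in a toy model. The leak rate and sense threshold below are arbitrary assumptions chosen to make the decay visible in a few steps; real cells leak over milliseconds and are refreshed by dedicated circuitry.

```python
class DramCell:
    """Toy DRAM cell: a leaky capacitor behind an access transistor."""

    LEAK_PER_TICK = 0.1    # fraction of charge lost per time step (assumed)
    SENSE_THRESHOLD = 0.5  # minimum charge sensed as logic 1 (assumed)

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        # Word line held open long enough to fully charge or discharge
        # the storage capacitor.
        self.charge = 1.0 if bit else 0.0

    def read(self):
        bit = 1 if self.charge >= self.SENSE_THRESHOLD else 0
        self.write(bit)    # destructive read: rewrite the sensed value
        return bit

    def tick(self):
        self.charge *= 1.0 - self.LEAK_PER_TICK  # charge leaks away


cell = DramCell()
cell.write(1)
for _ in range(6):
    cell.tick()            # charge decays to ~0.53, still above threshold
assert cell.read() == 1    # the read also refreshes the cell to full charge
for _ in range(10):
    cell.tick()            # unrefreshed, only ~0.35 of the charge remains
assert cell.read() == 0    # too much leakage: the stored 1 has been lost
```

The final assertion shows why periodic refresh is mandatory: a read (or refresh) that arrives too late finds the charge already below the sense threshold.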
SRAM memory cell
Storage
The working principle of the SRAM memory cell is easier to understand if transistors M1 through M4 are drawn as logic gates. Then it becomes clear that, at its heart, the cell's storage is built from two cross-coupled inverters. This simple loop creates a bi-stable circuit: a logic 1 at the input of the first inverter becomes a 0 at its output, which the second inverter transforms back into a logic 1, feeding the same value back to the input of the first inverter. That creates a stable state that does not change over time. Similarly, the other stable state has a logic 0 at the input of the first inverter; after being inverted twice, it also feeds back the same value.[37]
Therefore there are only two stable states that the circuit can be in:
Q = 0 and Q̄ = 1
Q = 1 and Q̄ = 0
Reading
To read the contents of the memory cell stored in the loop, transistors M5 and M6 must be turned on. When they receive voltage on their gates from the word line (WL), they become conductive, and the Q and Q̄ values are transmitted to the bit line (BL) and to its complement (BL̄).[37] Finally, these values are amplified at the end of the bit lines.[37]
Writing
The writing process is similar, except that the new value to be stored in the memory cell is driven onto the bit line (BL) and its inverse onto the complement (BL̄). Transistors M5 and M6 are then opened by driving a logic 1 (high voltage) into the word line (WL). This effectively connects the bit lines to the bi-stable inverter loop. There are two possible cases:
If the value of the loop is the same as the new value driven, there is no change.
If the value of the loop differs from the new value, there are two conflicting values. For the voltage on the bit lines to overwrite the output of the inverters, the M5 and M6 transistors must be larger than the M1-M4 transistors. This allows more current to flow through the former, tipping the voltage toward the new value; at some point, the loop amplifies this intermediate value to full rail.[37]
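At the logic level, the cross-coupled inverter pair is easy to verify: going once around the loop reproduces whatever value is there, which is exactly what makes the two states stable. The sketch below abstracts away the transistor sizing and voltages and keeps only the gate-level feedback.

```python
def inverter(x):
    """Ideal logic inverter (one of the two cross-coupled inverters)."""
    return 1 - x

def around_the_loop(q):
    """One trip around the cross-coupled pair: q -> NOT -> NOT -> q."""
    return inverter(inverter(q))

# Both states are stable: feeding either value around the loop
# reproduces it, so the cell holds its bit indefinitely (while powered).
assert around_the_loop(0) == 0
assert around_the_loop(1) == 1

# The two stable states, as in the text: (Q, Q̄) pairs.
for q in (0, 1):
    assert inverter(q) == 1 - q   # Q and its complement are always opposite
```

What this ideal model cannot show is the write conflict described above: overpowering the loop with a new value is an analog effect that depends on M5/M6 being stronger than M1-M4, not a purely logical operation.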
The flip-flop has many different implementations. Its storage element is usually a latch consisting of a NAND-gate loop or a NOR-gate loop, with additional gates used to implement clocking. Its value is always available for reading as an output, and it remains stored until it is changed through the set or reset process. Flip-flops are typically implemented using MOSFETs.
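The NAND-gate loop mentioned above can be sketched at gate level. This is a minimal SR latch with active-low inputs (S̄, R̄); iterating the loop a few times stands in for the feedback settling, and the clocking gates of a full flip-flop are deliberately omitted.

```python
def nand(a, b):
    """Two-input NAND gate."""
    return 0 if (a and b) else 1

def sr_latch(s_n, r_n, q, q_n):
    """Cross-coupled NAND loop with active-low set/reset inputs.

    Iterates the feedback a few times until the outputs settle,
    starting from the previously stored state (q, q_n).
    """
    for _ in range(4):
        q, q_n = nand(s_n, q_n), nand(r_n, q)
    return q, q_n


q, q_n = sr_latch(0, 1, 0, 1)    # assert S̄ (set): store a 1
assert (q, q_n) == (1, 0)

q, q_n = sr_latch(1, 1, q, q_n)  # both inputs idle: value remains stored
assert (q, q_n) == (1, 0)

q, q_n = sr_latch(1, 0, q, q_n)  # assert R̄ (reset): store a 0
assert (q, q_n) == (0, 1)
```

The middle case is the "memory": with both inputs deasserted, the loop simply regenerates whatever state it already holds, matching the text's point that the value remains stored until set or reset.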
A floating-gate memory cell is basically a MOS transistor with a gate completely surrounded by dielectrics, the floating gate (FG), electrically governed by a capacitively coupled control gate (CG). Being electrically isolated, the FG acts as the storing electrode for the cell device. Charge injected into the FG is retained there, allowing modulation of the 'apparent' threshold voltage (i.e. the VT seen from the CG) of the cell transistor.[27]
^ Kahng, D.; Sze, S. M. (1967). "A floating-gate and its application to memory devices". The Bell System Technical Journal. 46 (6): 1288–95. doi:10.1002/j.1538-7305.1967.tb01738.x.
^ Masuoka, F.; Momodomi, M.; Iwata, Y.; Shirota, R. (1987). "New ultra high density EPROM and flash EEPROM with NAND structure cell". Electron Devices Meeting, 1987 International (IEDM 1987). IEEE. doi:10.1109/IEDM.1987.191485.