Chapter 3 Boolean Algebra and Digital Logic

Objectives
• Understand the relationship between Boolean logic and digital computer circuits.
• Learn how to design simple logic circuits.
• Understand how digital circuits work together to form complex computer systems.

3.1 Introduction (1 of 2)
• In the latter part of the nineteenth century, George Boole incensed philosophers and mathematicians alike when he suggested that logical thought could be represented through mathematical equations:
– How dare anyone suggest that human thought could be encapsulated and manipulated like an algebraic formula?
• Computers, as we know them today, are implementations of Boole's Laws of Thought:
– John Atanasoff and Claude Shannon were among the first to see this connection.

3.1 Introduction (2 of 2)
• In the middle of the twentieth century, computers were commonly known as "thinking machines" and "electronic brains."
– Many people were fearful of them.
• Nowadays, we rarely ponder the relationship between electronic digital computers and human logic. Computers are accepted as part of our lives.
• In this chapter, you will learn the simplicity that constitutes the essence of the machine.

3.2 Boolean Algebra (1 of 17)
• Boolean algebra is a mathematical system for the manipulation of variables that can have one of two values.
– In formal logic, these values are "true" and "false."
– In digital systems, these values are "on" and "off," 1 and 0, or "high" and "low."
• Boolean expressions are created by performing operations on Boolean variables.
– Common Boolean operators include AND, OR, and NOT.

3.2 Boolean Algebra (2 of 17)
• A Boolean operator can be completely described using a truth table.
• The truth tables for the Boolean operators AND and OR are shown at the right.
• The AND operator is also known as the Boolean product; the OR operator is the Boolean sum.

3.2 Boolean Algebra (3 of 17)
• The truth table for the Boolean NOT operator is shown at the right.
• The NOT operation is most often designated by a prime mark (X'). It is sometimes indicated by an overbar (X̄) or an "elbow" (¬X).

3.2 Boolean Algebra (4 of 17)
• A Boolean function has:
– at least one Boolean variable,
– at least one Boolean operator, and
– at least one input from the set {0,1}.
• It produces an output that is also a member of the set {0,1}.
Now you know why the binary numbering system is so handy in digital systems.

3.2 Boolean Algebra (5 of 17)
• The truth table for the Boolean function is shown at the right.
• To make evaluation of the Boolean function easier, the truth table contains extra (shaded) columns to hold evaluations of subparts of the function.

3.2 Boolean Algebra (6 of 17)
• As with common arithmetic, Boolean operations have rules of precedence.
• The NOT operator has highest priority, followed by AND and then OR.
• This is how we chose the (shaded) function subparts in our table.

3.2 Boolean Algebra – Majority Function
• M = x'yz + xy'z + xyz' + xyz = xy + xz + yz

x  y  z  |  M
0  0  0  |  0
0  0  1  |  0
0  1  0  |  0
0  1  1  |  1
1  0  0  |  0
1  0  1  |  1
1  1  0  |  1
1  1  1  |  1

3.2 Boolean Algebra (7 of 17)
• Digital computers contain circuits that implement Boolean functions.
• The simpler that we can make a Boolean function, the smaller the circuit that will result.
– Simpler circuits are cheaper to build, consume less power, and run faster than complex circuits.
• With this in mind, we always want to reduce our Boolean functions to their simplest form.
• There are a number of Boolean identities that help us to do this.
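The majority function above is a convenient first example of a Boolean function and its truth table. The following minimal Python sketch (the function names are illustrative, not from the slides) builds the truth table for the canonical form and confirms, row by row, that the simplified form xy + xz + yz has an identical truth table:

```python
from itertools import product

def majority_canonical(x, y, z):
    # M = x'yz + xy'z + xyz' + xyz
    return ((1-x) & y & z) | (x & (1-y) & z) | (x & y & (1-z)) | (x & y & z)

def majority_simplified(x, y, z):
    # M = xy + xz + yz
    return (x & y) | (x & z) | (y & z)

print("x y z   M")
for x, y, z in product((0, 1), repeat=3):
    m = majority_canonical(x, y, z)
    # Identical truth tables mean the two expressions are logically equivalent.
    assert m == majority_simplified(x, y, z)
    print(x, y, z, " ", m)
```

Exhaustive checking like this is practical because a function of n variables has only 2^n truth-table rows.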
3.2 Boolean Algebra (8 of 17)
• Most Boolean identities have an AND (product) form as well as an OR (sum) form. We give our identities using both forms. Our first group is rather intuitive:

3.2 Boolean Algebra (9 of 17)
• Our second group of Boolean identities should be familiar to you from your study of algebra:

3.2 Boolean Algebra (10 of 17)
• Our last group of Boolean identities are perhaps the most useful.
• If you have studied set theory or formal logic, these laws are also familiar to you.

3.2 Boolean Algebra (11 of 17)
• We can use Boolean identities to simplify: F(x,y,z) = xy + x'z + yz

3.2 Boolean Algebra (12 of 17)
• Sometimes it is more economical to build a circuit using the complement of a function (and complementing its result) than it is to implement the function directly.
• DeMorgan's law provides an easy way of finding the complement of a Boolean function.
• Recall DeMorgan's law states: (xy)' = x' + y' and (x + y)' = x'y'

3.2 Boolean Algebra (14 of 17)
• Through our exercises in simplifying Boolean expressions, we see that there are numerous ways of stating the same Boolean expression.
– These "synonymous" forms are logically equivalent.
– Logically equivalent expressions have identical truth tables.
• In order to eliminate as much confusion as possible, designers express Boolean functions in standardized or canonical form.

3.2 Boolean Algebra (15 of 17)
• There are two canonical forms for Boolean expressions: sum-of-products and product-of-sums.
– Recall the Boolean product is the AND operation and the Boolean sum is the OR operation.
• In the sum-of-products form, ANDed variables are ORed together.
– For example: F(x,y,z) = xy + xz + yz
• In the product-of-sums form, ORed variables are ANDed together.
– For example: F(x,y,z) = (x + y)(x + z)(y + z)

3.2 Boolean Algebra (16 of 17)
• It is easy to convert a function to sum-of-products form using its truth table.
• We are interested in the values of the variables that make the function true (= 1).
• Using the truth table, we list the values of the variables that result in a true function value.
• Each group of variables is then ORed together.

3.2 Boolean Algebra (17 of 17)
• The sum-of-products form for our function is shown at the right.
• We note that this function is not in simplest terms. Our aim is only to rewrite our function in canonical sum-of-products form.

3.3 Logic Gates (1 of 6)
• We have looked at Boolean functions in abstract terms.
• In this section, we see that Boolean functions are implemented in digital computer circuits called gates.
• A gate is an electronic device that produces a result based on two or more input values.
– In reality, gates consist of one to six transistors, but digital designers think of them as a single unit.
– Integrated circuits contain collections of gates suited to a particular purpose.

3.3 Logic Gates (2 of 6)
• The three simplest gates are the AND, OR, and NOT gates.
• They correspond directly to their respective Boolean operations, as you can see by their truth tables.

Majority Function
• M = x'yz + xy'z + xyz' + xyz = xy + xz + yz

3.3 Logic Gates (3 of 6)
• Another very useful gate is the exclusive OR (XOR) gate.
• The output of the XOR operation is true only when the values of the inputs differ.
Note the special symbol ⊕ for the XOR operation.

3.3 Logic Gates (4 of 6)
• NAND and NOR are two very important gates. Their symbols and truth tables are shown at the right.

3.3 Logic Gates (6 of 6)
• Gates can have multiple inputs and more than one output.
– A second output can be provided for the complement of the operation.
– We'll see more of this later.
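The identities, DeMorgan's law, and the canonical sum-of-products form can all be checked or generated mechanically from truth tables. The hedged Python sketch below (the helper name to_sum_of_products is made up for illustration) verifies DeMorgan's law exhaustively and then lists the minterms of a function directly from its truth table, which is exactly the conversion described in slides (16 of 17) and (17 of 17):

```python
from itertools import product

# DeMorgan's law: (xy)' = x' + y'  and  (x + y)' = x'y'
for x, y in product((0, 1), repeat=2):
    assert 1 - (x & y) == (1 - x) | (1 - y)
    assert 1 - (x | y) == (1 - x) & (1 - y)

def to_sum_of_products(f, names=("x", "y", "z")):
    """Build the canonical sum-of-products form of f by emitting one minterm
    for every truth-table row in which f evaluates to 1."""
    minterms = []
    for values in product((0, 1), repeat=len(names)):
        if f(*values):
            minterms.append("".join(n if v else n + "'" for n, v in zip(names, values)))
    return " + ".join(minterms)

majority = lambda x, y, z: (x & y) | (x & z) | (y & z)
print(to_sum_of_products(majority))   # x'yz + xy'z + xyz' + xyz
```

As the slides note, the result is canonical but not minimal; simplification (by identities or a Kmap) is a separate step.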
3.4 Karnaugh Maps
• Simplification of Boolean functions leads to simpler (and usually faster) digital circuits.
• Simplifying Boolean functions using identities is time-consuming and error-prone.
• This special section presents an easy, systematic method for reducing Boolean expressions.

3.4.1 Introduction
• In 1953, Maurice Karnaugh was a telecommunications engineer at Bell Labs.
• While exploring the new field of digital logic and its application to the design of telephone circuits, he invented a graphical way of visualizing and then simplifying Boolean expressions.
• This graphical representation, now known as a Karnaugh map, or Kmap, is named in his honor.

3.4.2 Description of Kmaps and Terminology (1 of 5)
• A Kmap is a matrix consisting of rows and columns that represent the output values of a Boolean function.
• The output values placed in each cell are derived from the minterms of a Boolean function.
• A minterm is a product term that contains all of the function's variables exactly once, either complemented or not complemented.

3.4.2 Description of Kmaps and Terminology (2 of 5)
• For example, the minterms for a function having the inputs x and y are x'y', x'y, xy', and xy.
• Consider the Boolean function F(x,y) = xy + xy'.
• Its minterms are: xy and xy'.

3.4.2 Description of Kmaps and Terminology (3 of 5)
• Similarly, a function having three inputs has the minterms that are shown in this diagram.

3.4.2 Description of Kmaps and Terminology (4 of 5)
• A Kmap has a cell for each minterm.
• This means that it has a cell for each line of the truth table of a function.
• The truth table for the function F(x,y) = xy is shown at the right along with its corresponding Kmap.

3.4.3 Kmap Simplification for Two Variables (3 of 3)
• The rules of Kmap simplification are:
– Groupings can contain only 1s; no 0s.
– Groups can be formed only at right angles; diagonal groups are not allowed.
– The number of 1s in a group must be a power of 2, even if it contains a single 1.
– The groups must be made as large as possible.
– Groups can overlap and wrap around the sides of the Kmap.

3.4.4 Kmap Simplification for Three Variables (1 of 7)
• A Kmap for three variables is constructed as shown in the diagram below.
• We have placed each minterm in the cell that will hold its value.
– Notice that the values for the yz combination at the top of the matrix form a pattern that is not a normal binary sequence.

3.4.4 Kmap Simplification for Three Variables (2 of 7)
• Thus, the first row of the Kmap contains all minterms where x has a value of zero.
• The first column contains all minterms where y and z both have a value of zero.

3.4.4 Kmap Simplification for Three Variables (3 of 7)
• Consider the function: F(X,Y,Z) = X'Y'Z + X'YZ + XY'Z + XYZ
• Its Kmap is given below.
– What is the largest group of 1s that is a power of 2?

3.4.4 Kmap Simplification for Three Variables (4 of 7)
• This grouping tells us that changes in the variables x and y have no influence upon the value of the function: they are irrelevant.
• This means that the function F(X,Y,Z) = X'Y'Z + X'YZ + XY'Z + XYZ reduces to F(X,Y,Z) = Z.
You could verify this reduction with identities or a truth table.

3.4.4 Kmap Simplification for Three Variables (5 of 7)
• Now for a more complicated Kmap. Consider the function: F(X,Y,Z) = X'Y'Z' + X'Y'Z + X'YZ + X'YZ' + XY'Z' + XYZ'
• Its Kmap is shown below. There are (only) two groupings of 1s.
– Can you find them?
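For the six-minterm function just given, it can help to see the Kmap cells filled in before hunting for the groups. This is a small hedged Python sketch (the layout helpers are illustrative) that evaluates the function over the Gray-code column ordering yz = 00, 01, 11, 10 used by the slides and prints each row of the map:

```python
# F = X'Y'Z' + X'Y'Z + X'YZ + X'YZ' + XY'Z' + XYZ'
def f(x, y, z):
    return (((1-x) & (1-y) & (1-z)) | ((1-x) & (1-y) & z) | ((1-x) & y & z)
            | ((1-x) & y & (1-z)) | (x & (1-y) & (1-z)) | (x & y & (1-z)))

gray_columns = [(0, 0), (0, 1), (1, 1), (1, 0)]   # yz in Gray-code order, not binary order

print("columns: yz = 00, 01, 11, 10")
for x in (0, 1):
    print("x =", x, [f(x, y, z) for (y, z) in gray_columns])
```

The printed map has an all-1s top row and 1s in the two outermost columns (which wrap around the sides); those are the two groupings that the next slides reduce.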
3.4.4 Kmap Simplification for Three Variables (6 of 7)
• In this Kmap, we see an example of a group that wraps around the sides of a Kmap.
• This group tells us that the values of x and y are not relevant to the term of the function that is encompassed by the group.
– What does this tell us about this term of the function? What about the green group in the top row?

3.4.4 Kmap Simplification for Three Variables (7 of 7)
• The green group in the top row tells us that only the value of x is significant in that group.
• We see that it is complemented in that row, so the other term of the reduced function is X'.
• Our reduced function is F(X,Y,Z) = X' + Z'.
Recall that we had six minterms in our original function!

3.4.5 Kmap Simplification for Four Variables (1 of 4)
• Our model can be extended to accommodate the 16 minterms that are produced by a four-input function.
• This is the format for a 16-minterm Kmap:

3.4.5 Kmap Simplification for Four Variables (2 of 4)
• We have populated the Kmap shown below with the nonzero minterms from the function: F(W,X,Y,Z) = W'X'Y'Z' + W'X'Y'Z + W'X'YZ' + W'XYZ' + WX'Y'Z' + WX'Y'Z + WX'YZ'
– Can you identify (only) three groups in this Kmap? Recall that groups can overlap.

3.4.5 Kmap Simplification for Four Variables (3 of 4)
• Our three groups consist of:
– A purple group entirely within the Kmap at the right.
– A pink group that wraps the top and bottom.
– A green group that spans the corners.
• Thus we have three terms in our final function: F(W,X,Y,Z) = X'Y' + X'Z' + W'YZ'

3.4.5 Kmap Simplification for Four Variables (4 of 4)
• It is possible to have a choice as to how to pick groups within a Kmap, while keeping the groups as large as possible.
• The (different) functions that result from the groupings below are logically equivalent.

3.4.7 Summary (1 of 2)
• Kmaps provide an easy graphical method of simplifying Boolean expressions.
• A Kmap is a matrix consisting of the outputs of the minterms of a Boolean function.
• In this section, we have discussed 2-, 3-, and 4-input Kmaps. This method can be extended to any number of inputs through the use of multiple tables.

3.4.7 Summary (2 of 2)
• Recapping the rules of Kmap simplification:
– Groupings can contain only 1s; no 0s.
– Groups can be formed only at right angles; diagonal groups are not allowed.
– The number of 1s in a group must be a power of 2, even if it contains a single 1.
– The groups must be made as large as possible.
– Groups can overlap and wrap around the sides of the Kmap.

3.5 Digital Components (1 of 8)
• The main thing to remember is that combinations of gates implement Boolean functions.
• The circuit below implements the Boolean function F(x,y,z) = x + y'z.
We simplify our Boolean expressions so that we can create simpler circuits.

3.6 Combinational Circuits (2 of 12)
• Combinational logic circuits give us many useful devices.
• One of the simplest is the half adder, which finds the sum of two bits.
• We can gain some insight as to the construction of a half adder by looking at its truth table, shown at the right.

3.6 Combinational Circuits (3 of 12)
• As we see, the sum can be found using the XOR operation and the carry using the AND operation.

3.6 Combinational Circuits (4 of 12)
• We can change our half adder into a full adder by including gates for processing the carry bit.
• The truth table for a full adder is shown at the right.

3.6 Combinational Circuits (5 of 12)
• How can we change the half adder shown below to make it a full adder?
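Since the sum bit of a half adder is an XOR and the carry is an AND, the half adder fits in two lines of code; one common way to answer the question above is to build the full adder from two half adders plus an OR gate that combines their carries. A minimal hedged Python sketch of that construction (function names are illustrative):

```python
def half_adder(a, b):
    # Sum is the XOR of the inputs; carry is their AND.
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    # Two half adders in series; either half adder's carry raises the carry-out.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

# Reproduce the full-adder truth table.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```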
3.6 Combinational Circuits (6 of 12)
• Here's our completed full adder.

3.6 Combinational Circuits (7 of 12)
• Just as we combined half adders to make a full adder, full adders can be connected in series.
• The carry bit "ripples" from one adder to the next; hence, this configuration is called a ripple-carry adder.
Today's systems employ more efficient adders.

3.6 Combinational Circuits (8 of 12)
• Decoders are another important type of combinational circuit.
• Among other things, they are useful in selecting a memory location according to a binary value placed on the address lines of a memory bus.
• Address decoders with n inputs can select any of 2^n locations.
This is a block diagram for a decoder.

3.6 Combinational Circuits (9 of 12)
• This is what a 2-to-4 decoder looks like on the inside.
If x = 0 and y = 1, which output line is enabled?

3.6 Combinational Circuits (10 of 12)
• A multiplexer does just the opposite of a decoder.
• It selects a single output from several inputs.
• The particular input chosen for output is determined by the value of the multiplexer's control lines.
• To be able to select among n inputs, log₂ n control lines are needed.
• This is a block diagram for a multiplexer.

3.6 Combinational Circuits (11 of 12)
• This is what a 4-to-1 multiplexer looks like on the inside.
If S0 = 1 and S1 = 0, which input is transferred to the output?

3.6 Combinational Circuits (12 of 12)
• This shifter moves the bits of a nibble one position to the left or right.
If S = 0, in which direction do the input bits shift?

1-Bit ALU
• The function-select inputs F0 and F1 determine the operation, as the truth table below shows (x = don't care):

F0 F1 C_in  A B  |  Output C_out
0  0  x     0 0  |  0      x
0  0  x     0 1  |  0      x
0  0  x     1 0  |  0      x
0  0  x     1 1  |  1      x
0  1  x     0 0  |  0      x
0  1  x     0 1  |  1      x
0  1  x     1 0  |  1      x
0  1  x     1 1  |  1      x
1  0  x     x 0  |  1      x
1  0  x     x 1  |  0      x
1  1  0     0 0  |  0      0
1  1  0     0 1  |  1      0
1  1  0     1 0  |  1      0
1  1  0     1 1  |  0      1
1  1  1     0 0  |  1      0
1  1  1     0 1  |  0      1
1  1  1     1 0  |  0      1
1  1  1     1 1  |  1      1

Parity Checker (Error Detector)

2-Bit ALU
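The decoder and multiplexer slides above each end with a question about which line is active. The hedged Python sketch below models both devices; the bit ordering (x and S1 treated as the high-order bit) is an assumption made here for illustration and may differ from the ordering in the original circuit figures:

```python
def decoder_2to4(x, y):
    # 2-to-4 decoder: exactly one output line is 1.
    # Assumes x is the high-order address bit, so the enabled line is 2*x + y.
    return [int(line == 2 * x + y) for line in range(4)]

def mux_4to1(inputs, s1, s0):
    # 4-to-1 multiplexer: the select lines choose which input reaches the output.
    # Assumes S1 is the high-order select bit.
    return inputs[2 * s1 + s0]

print(decoder_2to4(0, 1))                               # [0, 1, 0, 0] -> line 1 enabled
print(mux_4to1(["I0", "I1", "I2", "I3"], s1=0, s0=1))   # 'I1' under this bit ordering
```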
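The 1-bit ALU table can likewise be expressed as a small function. This hedged sketch (the select-line decoding is written out explicitly to match the table; names are illustrative) returns the output and carry-out for each (F0, F1) setting, treating the carry-out as 0 wherever the table marks it as a don't care:

```python
def alu_1bit(f0, f1, a, b, carry_in=0):
    if (f0, f1) == (0, 0):
        return a & b, 0                      # AND; carry-out is a don't care
    if (f0, f1) == (0, 1):
        return a | b, 0                      # OR
    if (f0, f1) == (1, 0):
        return 1 - b, 0                      # NOT B
    s = a ^ b ^ carry_in                     # F0 = F1 = 1: full addition
    c_out = (a & b) | (carry_in & (a ^ b))
    return s, c_out

# Last row of the table: adding 1 + 1 with a carry-in of 1 gives output 1, carry-out 1.
print(alu_1bit(1, 1, 1, 1, carry_in=1))      # (1, 1)
```

Chaining several one-bit slices, carry-out to carry-in, builds a multi-bit ALU in the same way full adders form a ripple-carry adder.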
3.7 Sequential Circuits (1 of 30)
• Combinational logic circuits are perfect for situations when we require the immediate application of a Boolean function to a set of inputs.
• There are other times, however, when we need a circuit to change its value with consideration to its current state as well as its inputs.
– These circuits have to "remember" their current state.
• Sequential logic circuits provide this functionality for us.

3.7 Sequential Circuits (2 of 30)
• As the name implies, sequential logic circuits require a means by which events can be sequenced.
• State changes are controlled by clocks.
– A "clock" is a special circuit that sends electrical pulses through a circuit.
• Clocks produce electrical waveforms such as the one shown below.

3.7 Sequential Circuits (3 of 30)
• State changes occur in sequential circuits only when the clock ticks.
• Circuits can change state on the rising edge, falling edge, or when the clock pulse reaches its highest voltage.

3.7 Sequential Circuits (4 of 30)
• Circuits that change state on the rising edge or falling edge of the clock pulse are called edge-triggered.
• Level-triggered circuits change state when the clock voltage reaches its highest or lowest level.

3.7 Sequential Circuits (5 of 30)
• To retain their state values, sequential circuits rely on feedback.
• Feedback in digital circuits occurs when an output is looped back to the input.
• A simple example of this concept is shown below.
– If Q is 0 it will always be 0; if it is 1, it will always be 1. Why?

3.7 Sequential Circuits (6 of 30)
• You can see how feedback works by examining the most basic sequential logic component, the SR flip-flop.
– The "SR" stands for set/reset.
• The internals of an SR flip-flop are shown below, along with its block diagram.

3.7 Sequential Circuits (7 of 30)
• The behavior of an SR flip-flop is described by a characteristic table.
• Q(t) means the value of the output at time t. Q(t+1) is the value of Q after the next clock pulse.

3.7 Sequential Circuits (8 of 30)
• The SR flip-flop actually has three inputs: S, R, and its current output, Q.
• Thus, we can construct a truth table for this circuit, as shown at the right.
• Notice the two undefined values. When both S and R are 1, the SR flip-flop is unstable.

3.7 Sequential Circuits (9 of 30)
• If we can be sure that the inputs to an SR flip-flop will never both be 1, we will never have an unstable circuit. This may not always be the case.
• The SR flip-flop can be modified to provide a stable state when both inputs are 1.
• This modified flip-flop is called a JK flip-flop, shown at the right.

3.7 Sequential Circuits (10 of 30)
• At the right, we see how an SR flip-flop can be modified to create a JK flip-flop.
• The characteristic table indicates that the flip-flop is stable for all inputs.

3.7 Sequential Circuits (11 of 30)
• Another modification of the SR flip-flop is the D flip-flop, shown below with its characteristic table.
• You will notice that the output of the flip-flop remains the same during subsequent clock pulses. The output changes only when the value of D changes.

3.7 Sequential Circuits (12 of 30)
• The D flip-flop is the fundamental circuit of computer memory.
– D flip-flops are usually illustrated using the block diagram shown below.
• The characteristic table for the D flip-flop is shown at the right.

3.7 Sequential Circuits (13 of 30)
• The behavior of sequential circuits can be expressed using characteristic tables or finite state machines (FSMs).
– FSMs consist of a set of nodes that hold the states of the machine and a set of arcs that connect the states.
• Moore and Mealy machines are two types of FSMs that are equivalent.
– They differ only in how they express the outputs of the machine.
• Moore machines place outputs on each node, while Mealy machines present their outputs on the transitions.

3.7 Sequential Circuits (21 of 30)
• This illustration shows a 4-bit register consisting of D flip-flops. You will usually see its block diagram (below) instead.
A larger memory configuration is shown on the next slide.

3.7 Sequential Circuits (22 of 30)
Example Question: For the 4 x 3 memory unit, the status of this memory is given as:
Word0 = {000}, Word1 = {101}, Word2 = {011}, Word3 = {111}
If Write Enable = 1, S0 = 1, S1 = 0, and Input: In0 = 0, In1 = 1, In2 = 0, what is the status of this memory unit?
If Write Enable = 0, S0 = 0, S1 = 1, what will the outputs be: Out0 = __, Out1 = __, Out2 = __?

Example Question (Answer): For the 4 x 3 memory unit, the status of this memory is given as:
Word0 = {000}, Word1 = {101}, Word2 = {011}, Word3 = {111}
If Write Enable = 1, S0 = 1, S1 = 0, and Input: In0 = 0, In1 = 1, In2 = 0, what is the status of this memory unit?
Word1 is selected, so Word1 = {010}; W0, W2, and W3 do not change.
If Write Enable = 0, S0 = 0, S1 = 1, what will the outputs be: Out0 = 0, Out1 = 1, Out2 = 1.
Word2 is selected, so Word2 goes to the output.
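The worked 4 x 3 memory example can be reproduced with a short simulation. This is a hedged Python sketch (the class and method names are illustrative); it follows the addressing convention implied by the answer above, with S1 as the high-order select bit:

```python
class Memory4x3:
    """A 4-word by 3-bit memory, conceptually a grid of D flip-flops."""
    def __init__(self, words):
        self.words = [list(w) for w in words]

    def clock(self, write_enable, s1, s0, data=None):
        addr = 2 * s1 + s0                  # S1 assumed to be the high-order select bit
        if write_enable:
            self.words[addr] = list(data)   # the selected word latches the inputs
            return None
        return self.words[addr]             # otherwise the selected word drives the outputs

mem = Memory4x3([[0, 0, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
mem.clock(write_enable=1, s1=0, s0=1, data=[0, 1, 0])   # Word1 becomes {010}
print(mem.words)                                        # [[0,0,0], [0,1,0], [0,1,1], [1,1,1]]
print(mem.clock(write_enable=0, s1=1, s0=0))            # Word2 -> [0, 1, 1]
```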
3.7 Sequential Circuits (23 of 30)
• A binary counter is another example of a sequential circuit.
• The low-order bit is complemented at each clock pulse.
• Whenever it changes from 0 to 1, the next bit is complemented, and so on through the other flip-flops.

3.7 Sequential Circuits (24 of 30)
• Convolutional coding and decoding require sequential circuits.
• One important convolutional code is the (2,1) convolutional code that underlies the PRML code that is briefly described at the end of Chapter 2.
• A (2,1) convolutional code is so named because two symbols are output for every one symbol input.
• A convolutional encoder for PRML with its characteristic table is shown on the next slide.

3.8 Designing Circuits (1 of 3)
• We have seen digital circuits from two points of view: digital analysis and digital synthesis.
– Digital analysis explores the relationship between a circuit's inputs and its outputs.
– Digital synthesis creates logic diagrams using the values specified in a truth table.
• Digital systems designers must also be mindful of the physical behaviors of circuits, including the minute propagation delays that occur between the time when a circuit's inputs are energized and when the output is accurate and stable.

3.8 Designing Circuits (2 of 3)
• Digital designers rely on specialized hardware description languages, such as VHDL and Verilog, to create efficient circuits.
– Thus, software is an enabler for the construction of better hardware.
• Of course, software is in reality a collection of algorithms that could just as well be implemented in hardware.
– Recall the Principle of Equivalence of Hardware and Software.

3.8 Designing Circuits (3 of 3)
• When we need to implement a simple, specialized algorithm and its execution speed must be as fast as possible, a hardware solution is often preferred.
• This is the idea behind embedded systems, which are small special-purpose computers that we find in many everyday things.
• Embedded systems require special programming that demands an understanding of the operation of digital circuits, the basics of which you have learned in this chapter.

Conclusion (1 of 2)
• Computers are implementations of Boolean logic.
• Boolean functions are completely described by truth tables.
• Logic gates are small circuits that implement Boolean operators.
• The basic gates are AND, OR, and NOT.
– The XOR gate is very useful in parity checkers and adders.
• The "universal gates" are NAND and NOR.

Conclusion (2 of 2)
• Computer circuits consist of combinational logic circuits and sequential logic circuits.
• Combinational circuits produce outputs (almost) immediately when their inputs change.
• Sequential circuits require clocks to control their changes of state.
• The basic sequential circuit unit is the flip-flop: the behaviors of the SR, JK, and D flip-flops are important to know.

Please show the procedures for how you arrived at each answer.
1. (10)
2. (15) Write a code segment in MARIE assembly language to do the following: If X