You may cite the text (see the end of each chapter) for applications, or you may cite other textbooks.

The report must be 3 typed pages (12-point font), double spaced, and it must include a bibliography.

You must have three sources, two of which must be book sources.

Index of Applications

BIOLOGY AND LIFE SCIENCES

Age distribution vector, 378, 391, 392, 395

Age progression software, 180

Age transition matrix, 378, 391, 392, 395

Agriculture, 37, 50

Cosmetic surgery results simulation, 180

Duchenne muscular dystrophy, 365

Galloping speeds of animals, 276

Genetics, 365

Health care expenditures, 146

Heart rhythm analysis, 255

Hemophilia A, 365

Hereditary baldness, 365

Nutrition, 11

Population

of deer, 37

of laboratory mice, 91

of rabbits, 379

of sharks, 396

of small fish, 396

Population age and growth over time, 331

Population genetics, 365

Population growth, 378, 379, 391, 392, 395, 396, 398

Predator-prey relationship, 396

Red-green color blindness, 365

Reproduction rates of deer, 103

Sex-linked inheritance, 365

Spread of a virus, 91, 93

Vitamin C content, 11

Wound healing simulation, 180

X-linked inheritance, 365

BUSINESS AND ECONOMICS

Airplane allocation, 91

Borrowing money, 23

Demand, for a rechargeable power drill, 103

Demand matrix, external, 98

Economic system, 97, 98

of a small community, 103

Finance, 23

Fundraising, 92

Gasoline sales, 105

Industrial system, 102, 107

Input-output matrix, 97

Leontief input-output model(s), 97, 98, 103

Major League Baseball salaries, 107

Manufacturing

labor and material costs, 105

models and prices, 150

production levels, 51, 105

Net profit, Microsoft, 32

Output matrix, 98

Petroleum production, 292

Profit, from crops, 50

Purchase of a product, 91

Revenue

fast-food stand, 242

General Dynamics Corporation, 266, 276

Google, Inc., 291

telecommunications company, 242

software publishers, 143

Sales, 37

concession area, 42

stocks, 92

Wal-Mart, 32

Sales promotion, 106

Satellite television service, 85, 86, 147

Software publishing, 143

ENGINEERING AND TECHNOLOGY

Aircraft design, 79

Circuit design, 322

Computer graphics, 338

Computer monitors, 190

Control system, 314

Controllability matrix, 314

Cryptography, 94–96, 102, 107

Data encryption, 94

Decoding a message, 96, 102, 107

Digital signal processing, 172

Electrical network analysis, 30, 31, 34, 37, 150

Electronic equipment, 190

Encoding a message, 95, 102, 107

Encryption key, 94

Engineering and control, 130

Error checking

digit, 200

matrix, 200

Feed horn, 223

Global Positioning System, 16

Google’s Page Rank algorithm, 86

Image morphing and warping, 180

Information retrieval, 58

Internet search engine, 58

Ladder network, 322

Locating lost vessels at sea, 16

Movie special effects, 180

Network analysis, 29–34, 37

Radar, 172

Sampling, 172

Satellite dish, 223

Smart phones, 190

Televisions, 190

Wireless communications, 172

MATHEMATICS AND GEOMETRY

Adjoint of a matrix, 134, 135, 142, 146, 150

Collinear points in the xy-plane, 139, 143

Conic section(s), 226, 229

general equation, 141

rotation of axes, 221–224, 226, 229, 383–385, 392, 395

Constrained optimization, 389, 390, 392, 395

Contraction in R2, 337, 341, 342

Coplanar points in space, 140, 143

Cramer’s Rule, 130, 136, 137, 142, 143, 146

Cross product of two vectors, 277–280, 288, 289, 294

Differential equation(s)

linear, 218, 225, 226, 229

second order, 164

system of first order, 354, 380, 381, 391, 392, 395, 396, 398

Expansion in R2, 337, 341, 342, 345

Fibonacci sequence, 396

Fourier approximation(s), 285–287, 289, 292

Geometry of linear transformations in R2, 336–338, 341, 342, 345

Hessian matrix, 375

Jacobian, 145

Lagrange multiplier, 34

Laplace transform, 130

Least squares approximation(s), 281–284, 289

linear, 282, 289, 292

quadratic, 283, 289, 292

Linear programming, 47

Magnification in R2, 341, 342

Mathematical modeling, 273, 274, 276

Parabola passing through three points, 150

Partial fraction decomposition, 34, 37

Polynomial curve fitting, 25–28, 32, 34, 37

Quadratic form(s), 382–388, 392, 395, 398

Quadric surface, rotation of, 388, 392

Reflection in R2, 336, 341, 342, 345, 346

Relative maxima and minima, 375

Rotation

in R2, 303, 343, 393, 397

in R3, 339, 340, 342, 345

Second Partials Test for relative extrema, 375

Shear in R2, 337, 338, 341, 342, 345

Taylor polynomial of degree 1, 282

Three-point form of the equation of a plane, 141, 143, 146

Translation in R2, 308, 343

Triple scalar product, 288

Two-point form of the equation of a line, 139, 143, 146, 150

Unit circle, 253

Wronskian, 219, 225, 226, 229

Copyright 2017 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).

Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.

PHYSICAL SCIENCES

Acoustical noise levels, 28

Airplane speed, 11

Area

of a parallelogram using cross product, 279, 280, 288, 294

of a triangle

using cross product, 289

using determinants, 138, 142, 146, 150

Astronomy, 27, 274

Balancing a chemical equation, 4

Beam deflection, 64, 72

Chemical

changing state, 91

mixture, 37

reaction, 4

Comet landing, 141

Computational fluid dynamics, 79

Crystallography, 213

Degree of freedom, 164

Diffusion, 354

Dynamical systems, 396

Earthquake monitoring, 16

Electric and magnetic flux, 240

Flexibility matrix, 64, 72

Force

matrix, 72

to pull an object up a ramp, 157

Geophysics, 172

Grayscale, 190

Hooke’s Law, 64

Kepler’s First Law of Planetary Motion, 141

Kirchhoff’s Laws, 30, 322

Lattice of a crystal, 213

Mass-spring system, 164, 167

Mean distance from the sun, 27, 274

Natural frequency, 164

Newton’s Second Law of Motion, 164

Ohm’s Law, 322

Pendulum, 225

Planetary periods, 27, 274

Primary additive colors, 190

RGB color model, 190

Stiffness matrix, 64, 72

Temperature, 34

Torque, 277

Traffic flow, 28, 33

Undamped system, 164

Unit cell, 213

end-centered monoclinic, 213

Vertical motion, 37

Volume

of a parallelepiped, 288, 289, 292

of a tetrahedron, 114, 140, 143

Water flow, 33

Wind energy consumption, 103

Work, 248

SOCIAL SCIENCES AND DEMOGRAPHICS

Caribbean Cruise, 106

Cellular phone subscribers, 107

Consumer preference model, 85, 86, 92, 147

Final grades, 105

Grade distribution, 92

Master’s degrees awarded, 276

Politics, voting apportionment, 51

Population

of consumers, 91

regions of the United States, 51

of smokers and nonsmokers, 91

United States, 32

world, 273

Population migration, 106

Smokers and nonsmokers, 91

Sports

activities, 91

Super Bowl I, 36

Television watching, 91

Test scores, 108

STATISTICS AND PROBABILITY

Canonical regression analysis, 304

Least squares regression

analysis, 99–101, 103, 107, 265, 271–276

cubic polynomial, 276

line, 100, 103, 107, 271, 274, 276, 296

quadratic polynomial, 273, 276

Leslie matrix, 331, 378

Markov chain, 85, 86, 92, 93, 106

absorbing, 89, 90, 92, 93, 106

Multiple regression analysis, 304

Multivariate statistics, 304

State matrix, 85, 106, 147, 331

Steady state probability vector, 386

Stochastic matrices, 84–86, 91–93, 106, 331

MISCELLANEOUS

Architecture, 388

Catedral Metropolitana Nossa Senhora Aparecida, 388

Chess tournament, 93

Classified documents, 106

Determining directions, 16

Dominoes, A2

Flight crew scheduling, 47

Sudoku, 120

Tips, 23

U.S. Postal Service, 200

ZIP + 4 barcode, 200


Elementary Linear Algebra


Elementary Linear Algebra

8e

Ron Larson

The Pennsylvania State University

The Behrend College

Australia • Brazil • Mexico • Singapore • United Kingdom • United States


This is an electronic version of the print textbook. Due to electronic rights restrictions, some third party content may be suppressed. Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. The publisher reserves the right to remove content from this title at any time if subsequent rights restrictions require it. For valuable information on pricing, previous editions, changes to current editions, and alternate formats, please visit www.cengage.com/highered to search by ISBN#, author, title, or keyword for materials in your areas of interest.


Elementary Linear Algebra
Eighth Edition
Ron Larson

Product Director: Terry Boyle
Product Manager: Richard Stratton
Content Developer: Spencer Arritt
Product Assistant: Kathryn Schrumpf
Marketing Manager: Ana Albinson
Content Project Manager: Jennifer Risden
Manufacturing Planner: Doug Bertke
Production Service: Larson Texts, Inc.
Photo Researcher: Lumina Datamatics
Text Researcher: Lumina Datamatics
Text Designer: Larson Texts, Inc.
Cover Designer: Larson Texts, Inc.
Cover Image: Keo/Shutterstock.com
Compositor: Larson Texts, Inc.

© 2017, 2013, 2009 Cengage Learning

WCN: 02-200-203

ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.

For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706. For permission to use material from this text or product, submit all requests online at www.cengage.com/permissions. Further permissions questions can be e-mailed to permissionrequest@cengage.com.

Library of Congress Control Number: 2015944033

Student Edition ISBN: 978-1-305-65800-4
Loose-leaf Edition ISBN: 978-1-305-95320-8

Cengage Learning
20 Channel Center Street
Boston, MA 02210
USA

Cengage Learning is a leading provider of customized learning solutions with employees residing in nearly 40 different countries and sales in more than 125 countries around the world. Find your local representative at www.cengage.com.

Cengage Learning products are represented in Canada by Nelson Education, Ltd.

To learn more about Cengage Learning Solutions, visit www.cengage.com. Purchase any of our products at your local college store or at our preferred online store www.cengagebrain.com.

Printed in the United States of America
Print Number: 01 Print Year: 2015


Contents

1 Systems of Linear Equations  1
1.1 Introduction to Systems of Linear Equations  2
1.2 Gaussian Elimination and Gauss-Jordan Elimination  13
1.3 Applications of Systems of Linear Equations  25
Review Exercises  35
Project 1 Graphing Linear Equations  38
Project 2 Underdetermined and Overdetermined Systems  38

2 Matrices  39
2.1 Operations with Matrices  40
2.2 Properties of Matrix Operations  52
2.3 The Inverse of a Matrix  62
2.4 Elementary Matrices  74
2.5 Markov Chains  84
2.6 More Applications of Matrix Operations  94
Review Exercises  104
Project 1 Exploring Matrix Multiplication  108
Project 2 Nilpotent Matrices  108

3 Determinants  109
3.1 The Determinant of a Matrix  110
3.2 Determinants and Elementary Operations  118
3.3 Properties of Determinants  126
3.4 Applications of Determinants  134
Review Exercises  144
Project 1 Stochastic Matrices  147
Project 2 The Cayley-Hamilton Theorem  147
Cumulative Test for Chapters 1–3  149

4 Vector Spaces  151
4.1 Vectors in Rn  152
4.2 Vector Spaces  161
4.3 Subspaces of Vector Spaces  168
4.4 Spanning Sets and Linear Independence  175
4.5 Basis and Dimension  186
4.6 Rank of a Matrix and Systems of Linear Equations  195
4.7 Coordinates and Change of Basis  208
4.8 Applications of Vector Spaces  218
Review Exercises  227
Project 1 Solutions of Linear Systems  230
Project 2 Direct Sum  230


5 Inner Product Spaces  231
5.1 Length and Dot Product in Rn  232
5.2 Inner Product Spaces  243
5.3 Orthonormal Bases: Gram-Schmidt Process  254
5.4 Mathematical Models and Least Squares Analysis  265
5.5 Applications of Inner Product Spaces  277
Review Exercises  290
Project 1 The QR-Factorization  293
Project 2 Orthogonal Matrices and Change of Basis  294
Cumulative Test for Chapters 4 and 5  295

6 Linear Transformations  297
6.1 Introduction to Linear Transformations  298
6.2 The Kernel and Range of a Linear Transformation  309
6.3 Matrices for Linear Transformations  320
6.4 Transition Matrices and Similarity  330
6.5 Applications of Linear Transformations  336
Review Exercises  343
Project 1 Reflections in R2 (I)  346
Project 2 Reflections in R2 (II)  346

7 Eigenvalues and Eigenvectors  347
7.1 Eigenvalues and Eigenvectors  348
7.2 Diagonalization  359
7.3 Symmetric Matrices and Orthogonal Diagonalization  368
7.4 Applications of Eigenvalues and Eigenvectors  378
Review Exercises  393
Project 1 Population Growth and Dynamical Systems (I)  396
Project 2 The Fibonacci Sequence  396
Cumulative Test for Chapters 6 and 7  397

8 Complex Vector Spaces (online)*
8.1 Complex Numbers
8.2 Conjugates and Division of Complex Numbers
8.3 Polar Form and DeMoivre's Theorem
8.4 Complex Vector Spaces and Inner Products
8.5 Unitary and Hermitian Matrices
Review Exercises
Project 1 The Mandelbrot Set
Project 2 Population Growth and Dynamical Systems (II)


9 Linear Programming (online)*
9.1 Systems of Linear Inequalities
9.2 Linear Programming Involving Two Variables
9.3 The Simplex Method: Maximization
9.4 The Simplex Method: Minimization
9.5 The Simplex Method: Mixed Constraints
Review Exercises
Project 1 Beach Sand Replenishment (I)
Project 2 Beach Sand Replenishment (II)

10 Numerical Methods (online)*
10.1 Gaussian Elimination with Partial Pivoting
10.2 Iterative Methods for Solving Linear Systems
10.3 Power Method for Approximating Eigenvalues
10.4 Applications of Numerical Methods
Review Exercises
Project 1 The Successive Over-Relaxation (SOR) Method
Project 2 United States Population

Appendix  A1
Mathematical Induction and Other Forms of Proofs

Answers to Odd-Numbered Exercises and Tests  A7

Index  A41

Technology Guide*

*Available online at CengageBrain.com.

Preface

Welcome to Elementary Linear Algebra, Eighth Edition. I am proud to present to you this new edition. As with all editions, I have been able to incorporate many useful comments from you, our user. And while much has changed in this revision, you will still find what you expect—a pedagogically sound, mathematically precise, and comprehensive textbook. Additionally, I am pleased and excited to offer you something brand new—a companion website at LarsonLinearAlgebra.com. My goal for every edition of this textbook is to provide students with the tools that they need to master linear algebra. I hope you find that the changes in this edition, together with LarsonLinearAlgebra.com, will help accomplish just that.

New To This Edition

NEW LarsonLinearAlgebra.com

This companion website offers multiple tools and resources to supplement your learning. Access to these features is free. Watch videos explaining concepts from the book, explore examples, download data sets, and much more.

5.2 Exercises

True or False? In Exercises 85 and 86, determine whether each statement is true or false. If a statement is true, give a reason or cite an appropriate statement from the text. If a statement is false, provide an example that shows the statement is not true in all cases or cite an appropriate statement from the text.

85. (a) The dot product is the only inner product that can be defined in Rn.

(b) A nonzero vector in an inner product space can have a norm of zero.

86. (a) The norm of the vector u is the angle between u and

the positive x-axis.

(b) The angle θ between a vector v and the projection

of u onto v is obtuse when the scalar a < 0 and
acute when a > 0, where av = projvu.

87. Let u = (4, 2) and v = (2, −2) be vectors in R2 with

the inner product 〈u, v〉 = u1v1 + 2u2v2.

(a) Show that u and v are orthogonal.

(b) Sketch u and v. Are they orthogonal in the Euclidean

sense?
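Exercise 87's computation is easy to check numerically. A small sketch (the helper name `weighted_inner` is mine, not the text's): under the weighted inner product the vectors are orthogonal, while their Euclidean dot product is not zero.

```python
# Numeric check for Exercise 87: u and v are orthogonal under the
# weighted inner product <u, v> = u1*v1 + 2*u2*v2, but not under the
# Euclidean dot product. (Helper name is illustrative, not the text's.)

def weighted_inner(u, v, weights):
    """Inner product <u, v> = sum of c_i * u_i * v_i, with each c_i > 0."""
    return sum(c * a * b for c, a, b in zip(weights, u, v))

u, v = (4, 2), (2, -2)

print(weighted_inner(u, v, (1, 2)))   # 4*2 + 2*2*(-2) = 0, so orthogonal
print(weighted_inner(u, v, (1, 1)))   # Euclidean: 8 - 4 = 4, not orthogonal
```

This illustrates part (b) of the exercise: orthogonality depends on which inner product is chosen.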

88. Proof Prove that ‖u + v‖² + ‖u − v‖² = 2‖u‖² + 2‖v‖² for any vectors u and v in an inner product space V.

89. Proof Prove that the function 〈u, v〉 = c1u1v1 + c2u2v2 + . . . + cnunvn, with each ci > 0, is an inner product on Rn.

90. Proof Let u and v be nonzero vectors in an inner

product space V. Prove that u − projvu is orthogonal

to v.

91. Proof Prove Property 2 of Theorem 5.7: If u, v,

and w are vectors in an inner product space V, then

〈u + v, w〉 = 〈u, w〉 + 〈v, w〉.

92. Proof Prove Property 3 of Theorem 5.7: If u and v

are vectors in an inner product space V and c is any real

number, then 〈u, cv〉 = c〈u, v〉.

93. Guided Proof Let W be a subspace of the inner

product space V. Prove that the set

W⊥ = { v ∈ V: 〈v, w〉 = 0 for all w ∈ W }

is a subspace of V.

Getting Started: To prove that W⊥ is a subspace of

V, you must show that W⊥ is nonempty and that the

closure conditions for a subspace hold (Theorem 4.5).

(i) Find a vector in W⊥ to conclude that it is nonempty.

(ii) To show the closure of W⊥ under addition, you

need to show that 〈v1 + v2, w〉 = 0 for all w ∈ W

and for any v1, v2 ∈ W⊥. Use the properties of

inner products and the fact that 〈v1, w〉 and 〈v2, w〉

are both zero to show this.

(iii) To show closure under multiplication by a scalar,

proceed as in part (ii). Use the properties of inner

products and the condition of belonging to W⊥.


94. Use the result of Exercise 93 to find W⊥ when W is the

span of (1, 2, 3) in V = R3.
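Exercise 94's answer can be sanity-checked in a few lines. The basis chosen below for W⊥ (the plane x + 2y + 3z = 0) is one valid choice, an assumption for this check rather than the text's stated answer:

```python
# Numeric check for Exercise 94: W = span{(1, 2, 3)} in R^3, so W-perp
# should be the plane x + 2y + 3z = 0. The basis below is one valid
# choice (an assumption for this sketch, not taken from the text).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = (1, 2, 3)
basis_w_perp = [(-2, 1, 0), (-3, 0, 1)]

# Every basis vector of W-perp must be orthogonal to w.
for b in basis_w_perp:
    assert dot(b, w) == 0

# The two basis vectors are not scalar multiples of each other, so they
# are independent: dim(W) + dim(W-perp) = 1 + 2 = 3, as expected in R^3.
print("W-perp check passed")
```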

95. Guided Proof Let 〈u, v〉 be the Euclidean inner product on Rn. Use the fact that 〈u, v〉 = uᵀv to prove that for any n × n matrix A,

(a) 〈AᵀAu, v〉 = 〈u, AᵀAv〉

and

(b) 〈AᵀAu, u〉 = ‖Au‖².

Getting Started: To prove (a) and (b), make use of both

the properties of transposes (Theorem 2.6) and the

properties of the dot product (Theorem 5.3).

(i) To prove part (a), make repeated use of the property

〈u, v〉 = uTv and Property 4 of Theorem 2.6.

(ii) To prove part (b), make use of the property

〈u, v〉 = uTv, Property 4 of Theorem 2.6, and

Property 4 of Theorem 5.3.

96. CAPSTONE

(a) Explain how to determine whether a function

defines an inner product.

(b) Let u and v be vectors in an inner product space V,

such that v ≠ 0. Explain how to find the orthogonal

projection of u onto v.

Finding Inner Product Weights In Exercises 97–100, find c1 and c2 for the inner product of R2,

〈u, v〉 = c1u1v1 + c2u2v2

such that the graph represents a unit circle as shown.

[Figures for Exercises 97–100: each shows a curve labeled ‖u‖ = 1 in the xy-plane; the axis scales are not cleanly recoverable from this copy.]

101. Consider the vectors

u = (6, 2, 4) and v = (1, 2, 0)

from Example 10. Without using Theorem 5.9, show

that among all the scalar multiples cv of the vector

v, the projection of u onto v is the vector closest to

u—that is, show that d(u, projvu) is a minimum.
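Exercise 101 can also be explored numerically: scanning many scalar multiples cv confirms that none is closer to u than the projection. Helper names are illustrative:

```python
# Numeric exploration of Exercise 101: among all scalar multiples c*v,
# the projection of u onto v is the multiple closest to u.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dist(u, w):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, w)))

u, v = (6, 2, 4), (1, 2, 0)

# proj_v(u) = (u.v / v.v) v ; here u.v = 10 and v.v = 5, so c* = 2.
c_star = dot(u, v) / dot(v, v)
proj = tuple(c_star * x for x in v)

# Scan a grid of c values: none beats the projection coefficient.
candidates = [c / 100 for c in range(-500, 501)]
best_c = min(candidates, key=lambda c: dist(u, tuple(c * x for x in v)))

print(c_star, proj)   # 2.0 (2.0, 4.0, 0.0)
print(best_c)         # 2.0
```

The distance d(u, cv) is a quadratic in c, so the grid search finds its unique minimum at the projection coefficient.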

REVISED Exercise Sets

The exercise sets have been carefully and extensively examined to ensure they are rigorous, relevant, and cover all the topics necessary to understand the fundamentals of linear algebra. The exercises are ordered and titled so you can see the connections between examples and exercises. Many new skill-building, challenging, and application exercises have been added. As in earlier editions, the following pedagogically proven types of exercises are included.

• True or False Exercises

• Proofs

• Guided Proofs

• Writing Exercises

• Technology Exercises (indicated throughout the text with an icon)

Exercises utilizing electronic data sets are indicated by an icon and found at CengageBrain.com.



Table of Contents Changes

Based on market research and feedback from users,

Section 2.5 in the previous edition (Applications of

Matrix Operations) has been expanded from one section

to two sections to include content on Markov chains.

So now, Chapter 2 has two application sections:

Section 2.5 (Markov Chains) and Section 2.6 (More

Applications of Matrix Operations). In addition,

Section 7.4 (Applications of Eigenvalues and

Eigenvectors) has been expanded to include content

on constrained optimization.


Trusted Features

[Sample Chapter 2 contents and opener applications: Operations with Matrices, Properties of Matrix Operations, The Inverse of a Matrix, Elementary Matrices, Markov Chains, More Applications of Matrix Operations; Data Encryption (p. 94), Computational Fluid Dynamics (p. 79), Beam Deflection (p. 64), Information Retrieval (p. 58), Flight Crew Scheduling (p. 47)]

For the past several years, an independent website—CalcChat.com—has provided free solutions to all odd-numbered problems in the text. Thousands of students have visited the site for practice and help with their homework from live tutors. You can also use your smartphone's QR Code® reader to scan the icon at the beginning of each exercise set to access the solutions.

Chapter Openers

Each Chapter Opener highlights five real-life applications of linear algebra found throughout the chapter. Many of the applications reference the Linear Algebra Applied feature (discussed on the next page). You can find a full list of the applications in the Index of Applications on the inside front cover.

Section Objectives

A bulleted list of learning objectives, located at the beginning of each section, provides you the opportunity to preview what will be presented in the upcoming section.

Theorems, Definitions, and Properties

Presented in clear and mathematically precise language, all theorems, definitions, and properties are highlighted for emphasis and easy reference.

Proofs in Outline Form

In addition to proofs in the exercises, some proofs are presented in outline form. This omits the need for burdensome calculations.

[Sample page, Section 2.3 The Inverse of a Matrix. Objectives: find the inverse of a matrix (if it exists); use properties of inverse matrices; use an inverse matrix to solve a system of linear equations.]

Matrices and Their Inverses

Section 2.2 discussed some of the similarities between the algebra of real numbers and the algebra of matrices. This section further develops the algebra of matrices to include the solutions of matrix equations involving matrix multiplication. To begin, consider the real number equation ax = b. To solve this equation for x, multiply both sides of the equation by a−1 (provided a ≠ 0).

ax = b
(a−1a)x = a−1b
(1)x = a−1b
x = a−1b

The number a−1 is the multiplicative inverse of a because a−1a = 1 (the identity element for multiplication). The definition of the multiplicative inverse of a matrix is similar.

Definition of the Inverse of a Matrix: An n × n matrix A is invertible (or nonsingular) when there exists an n × n matrix B such that AB = BA = In, where In is the identity matrix of order n. The matrix B is the (multiplicative) inverse of A. A matrix that does not have an inverse is noninvertible (or singular).

Nonsquare matrices do not have inverses. To see this, note that if A is of size m × n and B is of size n × m (where m ≠ n), then the products AB and BA are of different sizes and cannot be equal to each other. Not all square matrices have inverses. (See Example 4.) The next theorem, however, states that if a matrix does have an inverse, then that inverse is unique.

Theorem 2.7 (Uniqueness of an Inverse Matrix): If A is an invertible matrix, then its inverse is unique. The inverse of A is denoted by A−1.

Proof: If A is invertible, then it has at least one inverse B such that AB = I = BA. Assume that A has another inverse C such that AC = I = CA. Demonstrate that B and C are equal, as shown on the next page.

QR Code is a registered trademark of Denso Wave Incorporated.
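The inverse described in the sample section can be computed by Gauss-Jordan elimination on the augmented matrix [A I], the approach Section 2.3 develops. A minimal pure-Python sketch (function name and pivot tolerance are my own, not the text's):

```python
# Sketch of matrix inversion by Gauss-Jordan elimination: reduce the
# augmented matrix [A | I] to [I | A^-1]. Illustrative code, not taken
# from the text; raises ValueError for a singular (noninvertible) matrix.

def inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: use the largest entry in this column as pivot.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]          # scale pivot row to 1
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                  # right half is A^-1

A = [[1, 2], [3, 4]]
Ainv = inverse(A)
# A times A^-1 should reproduce the identity (up to rounding):
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(Ainv)   # approximately [[-2.0, 1.0], [1.5, -0.5]]
print(prod)   # approximately [[1.0, 0.0], [0.0, 1.0]]
```

Once A−1 is in hand, a system Ax = b is solved by x = A−1b, mirroring the scalar case x = a−1b above.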


Discovery

Using the Discovery feature helps you develop an intuitive understanding of mathematical concepts and relationships.

Technology

Many graphing utilities and software programs can find the determinant of a square matrix. If you use a graphing utility, then you may see something similar to the screen below for Example 4. The Technology Guide at CengageBrain.com can help you use technology to find a determinant.

[Sample page fragments: the cofactor expansion of a determinant, det(A) = |A| = Σⱼ aijCij (expansion along the ith row) or det(A) = |A| = Σᵢ aijCij (expansion along the jth column); a sample matrix A and its cofactor C13; and a margin note on using Gauss-Jordan elimination to obtain the transition matrix P−1 when the change of basis is from a nonstandard basis to a standard basis. The matrix entries are not cleanly recoverable from this copy.]
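The cofactor expansion of a determinant shown in the sample pages translates directly into a short recursive program. A pure-Python illustration (the function name and the 3 × 3 example are mine, not the text's):

```python
# Sketch of det(A) by cofactor expansion along the first row:
# det(A) = sum over j of a_1j * C_1j, where C_1j = (-1)^(1+j) * M_1j
# and M_1j is the minor from deleting row 1 and column j.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        cofactor = (-1) ** j * det(minor)
        total += A[0][j] * cofactor
    return total

# 3 x 3 example: 1*(1*1 - 4*6) - 2*(0*1 - 4*5) + 3*(0*6 - 1*5)
#              = -23 + 40 - 15 = 2
A = [[1, 2, 3], [0, 1, 4], [5, 6, 1]]
print(det(A))  # 2
```

This recursive form costs O(n!) operations, which is why software uses row reduction instead for large matrices; the expansion is mainly a definition and a hand-computation tool.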

[Sample pages: an example, Finding a Transition Matrix, concluding "From this, you can conclude that the transition matrix from B to B′ is P−1"; and a solution that first shows three vectors are mutually orthogonal, v1 ∙ v2 = 0, v1 ∙ v3 = 0, and v2 ∙ v3 = 0. The displayed matrices and vectors are not cleanly recoverable from this copy.]

3

−2

C13 = (0)(−1)2+1

matrices in R3 to designate the locations of atoms in a

unit cell. For example, the figure below shows the unit

an Orthonormal

Basis for P3

cell known as end-centered

monoclinic.

Delete 1st row and 3rd column.

In P3 , with the inner product

∣

〈 p, q〉 = a0b0 + a1b1 + a2b2 + a3b3

the standard basis B = { 1, x, x2, x3 } is orthonormal. The verification of this is left as an

exercise. (See Exercise 17.)

2

−1

+ (2)(−1)2+2

−2

3

= 0 + 2(1)(−4) + 3(−1)(−7)

= 13.

called a lattice. The simplest

repeating unit in a lattice is a

3

so you know that they span R . By Theorem 4.12, they form a (nonstandard)

applied5.11),

unitbasis

cell.for

Crystallographers

can use bases and coordinate

orthonormal

R3.

Simplify.

∣ ∣

1

4

)

(

∣

∣

2

−1

+ (3)(−1)2+3

−2

3

∣

1

4

One possible coordinate matrix for the top end-centered

Time-frequency

analysis

of irregular physiological signals,

T

linear

(blue) atom such

is [xas

]B′ beat-to-beat

= [12 12 1]cardiac

.

rhythm variations (also known

algeBra as heart rate variability or HRV), canBrazhnykov

be difficult.

This is

Andriy/Shutterstock.com

applied

because the structure of a signal can include multiple

periodic, nonperiodic, and pseudo-periodic components.

Researchers have proposed and validated a simplified HRV

analysis method called orthonormal-basis partitioning and

time-frequency representation (OPTR). This method can

detect both abrupt and slow changes in the HRV signal’s

structure, divide a nonstationary HRV signal into segments

8/18/15

11:58 AM

that are “less nonstationary,” and determine patterns

in the

HRV. The researchers found that although it had poor time

resolution with signals that changed gradually, the OPTR

method accurately represented multicomponent and abrupt

changes in both real-life and simulated HRV signals.

You obtain

∣A∣ = 3(13)

9781305658004_0407.indd 213

= 39.

Chapter 2

]

Show that the set is an orthonormal basis for R .

Then form the matrix

[B′ B] and use Gauss-Jordan elimination to rewrite [B′ B] as

1

1

√2 √2 2√2

2 2 1

[I3 P−1].

,

,

S = {v , v , v } =

,

,0 , −

, ,− ,

Figure 5.11

Expanding by cofactors in the second row yields

0 ]

2 ]

3 ]

-2]]

39

2

⋮

2 , 2of

9

9

9

, 2 2

−some

Notice that three of the entries in the third column are zeros. So, ,to− eliminate

,

3

6 6

3 3 3

Multiply P−1 byNow,

the coordinate

of 1x because

= [1 2 −1]T to see that the result is the

k

the work in the expansion, use the third column.

each vector ismatrix

of length

v2

same

as

that

obtained

in

Example

3.

v3

v1 = √v1 ∙ v1 = √12 + 12 + 0 = 1

∣A∣ = 3(C13) + 0(C23) + 0(C333) + 0(C43)

i

1

1

v1 j

v2 = √v2 ∙ v2 = √18

+ 18

+ 89 = 1

The cofactors C23, C33, and C43 have zero coefficients, so you need only find the

4

4

1

y

v3Crystallography

= √v3 ∙ v3 = √9 is

+ the

= 1. of atomic and molecular

9 + 9science

1 , linear

1the

cofactor C13. To do this, delete the first row and third columnx of A and evaluate

,0

structure. In a crystal, atoms are in a repeating pattern

2

2

So, S is an orthonormal set. The three vectors do not lie in the same plane (see Figure

determinant of the resulting matrix.

algeBra

det A

108

⋮

Find the transition matrix from B to B′ for the bases for R3 below.

soluTion

-2

1

2

4

1

⋮

jth column

See LarsonLinearAlgebra.com for an interactive version of this type of example.

expansion

. . .+a C .

ij = a1jC1j + a2jC2j +

nj nj

Technology notes show how you can use

D I S C O V E RY

graphing utilities and software programs

appropriately in the problem-solving process.

and B′ = {(1, 0), (0, 1)}.

the matrix

Many of the Technology notes reference the The Determinant Form

of order 4

[of

B′ aBmatrix

].

2.

Make

a conjecture

Technology Guide at CengageBrain.com.

Find the determinant of

about the necessity of

[[1

[-1

[0

[3

. . .

c12 . . . c1n

c22 . . . c2n

When expanding by cofactors, you do not need to find cofactors of zero entries,

B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and B′ = {(1, 0, 1), (0, −1, 2), (2, 3, −5)}

because zero times its cofactor is zero.

solution

5.3 Orthonormal Bases: Gram-Schmidt Process

255

aijCij = (0)Cij

First use the vectors in the two bases to form the matrices B and B′.

=0

Example 1 describes another nonstandard orthonormal basis for R3.

1

0

0

1

0

2

The row (or column) containing the most zeros is usually the best choice for expansion

B= 0

1

0 and B′ = 0 −1

3

a nonstandard Orthonormal Basis for R 3

Let B = {(1, 0), (1, 2)}

by cofactors. The next example demonstrates1.

this.

0

0

1

1

2 −5

Technology Notes

A

0

c11

c21

ith row

expansion

aijCij = ai1Ci1 + ai2Ci2 + . . . + ainCin

n

0

⋮ ⋮

0

0

In the next example, you will apply this procedure to the change of basis problem

from Example 3.

Let A be a square matrix of order n. Then the determinant of A is

n

. . .

. . .

Preface

Theorem 3.1 expansion by Cofactors

det(A) = ∣A∣ =

0

1

By 113

the lemma following Theorem 4.20, however, the right-hand side of this matrix

is Q = P−1, which implies that the matrix has the form [I P−1], which proves the

theorem.

3.1 The Determinant of a Matrix

Discovery

1

0

(Source: Orthonormal-Basis Partitioning and Time-Frequency

Representation of Cardiac Rhythm Dynamics, Aysin, Benhur, et al,

IEEE Transactions on Biomedical Engineering, 52, no. 5)

Matrices

Sebastian Kaulitzki/Shutterstock.com

9781305658004_0301.indd 113

8/18/15 2:14 PM
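The cofactor expansion of Theorem 3.1 translates directly into a short recursive routine. The sketch below is illustrative only (the `det` helper is a hypothetical name, not code from the text); it expands along the first row, skips zero entries just as the text recommends, and reproduces ∣A∣ = 39 for the order-4 example:

```python
# Determinant by cofactor expansion (Theorem 3.1), expanding along row 1.
# A minimal pure-Python sketch; its O(n!) running time makes it practical
# only for the small, hand-sized matrices used in the examples.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero times its cofactor is zero, so skip it
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, column j+1
        cofactor = (-1) ** j * det(minor)                 # sign (-1)^(1 + (j+1))
        total += A[0][j] * cofactor
    return total

A = [[1, -2, 3, 0],
     [-1, 1, 0, 2],
     [0, 2, 0, 3],
     [3, 4, 0, -2]]
print(det(A))  # 39
```

A production routine would use row reduction instead, but this version mirrors the theorem term by term.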

Projects

1 Exploring Matrix Multiplication

         Test 1   Test 2
Anna       84       96
Bruce      56       72
Chris      78       83
David      82       91

The table shows the first two test scores for Anna, Bruce, Chris, and David. Use the

table to create a matrix M to represent the data. Input M into a software program or

a graphing utility and use it to answer the questions below.

1. Which test was more difficult? Which was easier? Explain.

2. How would you rank the performances of the four students?

3. Describe the meanings of the matrix products M[1 0]T and M[0 1]T.


4. Describe the meanings of the matrix products [1 0 0 0]M and [0 0 1 0]M.
5. Describe the meanings of the matrix products M[1 1]T and (1/2)M[1 1]T.
6. Describe the meanings of the matrix products [1 1 1 1]M and (1/4)[1 1 1 1]M.
7. Describe the meaning of the matrix product [1 1 1 1]M[1 1]T.
8. Use matrix multiplication to find the combined overall average score on both tests.
9. How could you use matrix multiplication to scale the scores on test 1 by a factor of 1.1?
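For readers who want to see the products in questions 3–8 computed explicitly, here is a minimal sketch in plain Python (the `matmul` helper is a hypothetical name; the project itself expects a software program or graphing utility):

```python
# Exploring the matrix products in Project 1.
# M is 4 x 2: rows are students (Anna, Bruce, Chris, David), columns are tests.

M = [[84, 96],
     [56, 72],
     [78, 83],
     [82, 91]]

def matmul(A, B):
    # Product of an m x n and an n x p matrix stored as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# M [1 0]T picks out the Test 1 column; [1 1 1 1] M sums each test's scores.
test1_scores = matmul(M, [[1], [0]])      # [[84], [56], [78], [82]]
column_sums = matmul([[1, 1, 1, 1]], M)   # [[300, 342]]

# Question 8: combined overall average = (1/8) [1 1 1 1] M [1 1]T.
total = matmul([[1, 1, 1, 1]], matmul(M, [[1], [1]]))[0][0]
print(total / 8)  # 80.25
```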

Capstone Exercises

The Capstone is a conceptual problem that synthesizes key topics to check students' understanding of the section concepts. I recommend it.

2 Nilpotent Matrices

Let A be a nonzero square matrix. Is it possible that a positive integer k exists such that Ak = O? For example, find A3 for the matrix

A = [ 0  1  2
      0  0  1
      0  0  0 ].

A square matrix A is nilpotent of index k when A ≠ O, A2 ≠ O, . . . , Ak−1 ≠ O, but Ak = O. In this project you will explore nilpotent matrices.

1. The matrix in the example above is nilpotent. What is its index?
2. Use a software program or a graphing utility to determine which matrices below are nilpotent and find their indices.

(a) [ 0 1 ; 0 0 ]   (b) [ 0 1 ; 1 0 ]   (c) [ 0 0 ; 1 0 ]   (d) [ 1 1 ; 0 0 ]
(e) [ 0 0 1 ; 0 0 0 ; 0 0 0 ]   (f) [ 0 0 0 ; 1 0 0 ; 1 1 0 ]

3. Find 3 × 3 nilpotent matrices of indices 2 and 3.
4. Find 4 × 4 nilpotent matrices of indices 2, 3, and 4.
5. Find a nilpotent matrix of index 5.
6. Are nilpotent matrices invertible? Prove your answer.
7. When A is nilpotent, what can you say about AT? Prove your answer.
8. Show that if A is nilpotent, then I − A is invertible.

Chapter Projects
Two per chapter, these offer the opportunity for group activities or more extensive homework assignments, and are focused on theoretical concepts or applications. Many encourage the use of technology.
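As a companion to question 2, a short script can test nilpotency by computing successive powers. The `nilpotent_index` helper below is an illustrative sketch, not part of the project; it uses the fact that a nilpotent n × n matrix has index at most n, so only n powers ever need to be checked:

```python
# Checking nilpotency numerically for the 3 x 3 example in the project.
# nilpotent_index returns the smallest k with A^k = O, or None if no such
# k <= n exists (for an n x n matrix, a nilpotent index can never exceed n).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def nilpotent_index(A):
    n = len(A)
    power = A  # A^1
    for k in range(1, n + 1):
        if all(entry == 0 for row in power for entry in row):
            return k
        power = matmul(power, A)  # advance to A^(k+1)
    return None

A = [[0, 1, 2],
     [0, 0, 1],
     [0, 0, 0]]
print(nilpotent_index(A))  # 3
```

Here A2 ≠ O but A3 = O, so the example matrix has index 3, answering question 1.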


Instructor Resources

Media

Instructor’s Solutions Manual

The Instructor’s Solutions Manual provides worked-out solutions for all even-numbered

exercises in the text.

Cengage Learning Testing Powered by Cognero (ISBN: 978-1-305-65806-6)

is a flexible, online system that allows you to author, edit, and manage test bank

content, create multiple test versions in an instant, and deliver tests from your LMS,

your classroom, or wherever you want. This is available online at cengage.com/login.

Turn the Light On with MindTap for Larson’s Elementary Linear Algebra

Through personalized paths of dynamic assignments and applications, MindTap is a

digital learning solution and representation of your course that turns cookie cutter into

cutting edge, apathy into engagement, and memorizers into higher-level thinkers.

The Right Content: With MindTap’s carefully curated material, you get the

precise content and groundbreaking tools you need for every course you teach.

Personalization: Customize every element of your course—from rearranging

the Learning Path to inserting videos and activities.

Improved Workflow: Save time when planning lessons with all of the trusted,

most current content you need in one place in MindTap.

Tracking Students’ Progress in Real Time: Promote positive outcomes by

tracking students in real time and tailoring your course as needed based on

the analytics.

Learn more at cengage.com/mindtap.


Student Resources

Print

Student Solutions Manual

ISBN-13: 978-1-305-87658-3

The Student Solutions Manual provides complete worked-out solutions to all

odd-numbered exercises in the text. Also included are the solutions to all

Cumulative Test problems.

Media

MindTap for Larson’s Elementary Linear Algebra

MindTap is a digital representation of your course that provides you with the tools

you need to better manage your limited time, stay organized and be successful.

You can complete assignments whenever and wherever you are ready to learn with

course material specially customized for you by your instructor and streamlined in

one proven, easy-to-use interface. With an array of study tools, you’ll get a true

understanding of course concepts, achieve better grades and set the groundwork

for your future courses.

Learn more at cengage.com/mindtap.

CengageBrain.com

To access additional course materials and companion resources, please visit

CengageBrain.com. At the CengageBrain.com home page, search for the ISBN

of your title (from the back cover of your book) using the search box at the top of

the page. This will take you to the product page where free companion resources

can be found.


Acknowledgements

I would like to thank the many people who have helped me during various stages

of writing this new edition. In particular, I appreciate the feedback from the dozens

of instructors who took part in a detailed survey about how they teach linear algebra.

I also appreciate the efforts of the following colleagues who have provided valuable

suggestions throughout the life of this text:

Michael Brown, San Diego Mesa College

Nasser Dastrange, Buena Vista University

Mike Daven, Mount Saint Mary College

David Hemmer, University of Buffalo, SUNY

Wai Lau, Seattle Pacific University

Jorge Sarmiento, County College of Morris.

I would like to thank Bruce H. Edwards, University of Florida, and

David C. Falvo, The Pennsylvania State University, The Behrend College, for

their contributions to previous editions of Elementary Linear Algebra.

On a personal level, I am grateful to my spouse, Deanna Gilbert Larson, for

her love, patience, and support. Also, a special thanks goes to R. Scott O’Neil.

Ron Larson, Ph.D.

Professor of Mathematics

Penn State University

www.RonLarson.com


1 Systems of Linear Equations

1.1 Introduction to Systems of Linear Equations
1.2 Gaussian Elimination and Gauss-Jordan Elimination
1.3 Applications of Systems of Linear Equations

Traffic Flow (p. 28)
Electrical Network Analysis (p. 30)
Global Positioning System (p. 16)
Airspeed of a Plane (p. 11)
Balancing Chemical Equations (p. 4)


1.1 Introduction to Systems of Linear Equations

 Recognize a linear equation in n variables.
 Find a parametric representation of a solution set.
 Determine whether a system of linear equations is consistent or inconsistent.
 Use back-substitution and Gaussian elimination to solve a system of linear equations.

Linear Equations in n Variables

The study of linear algebra demands familiarity with algebra, analytic geometry,

and trigonometry. Occasionally, you will find examples and exercises requiring a

knowledge of calculus, and these are marked in the text.

Early in your study of linear algebra, you will discover that many of the solution

methods involve multiple arithmetic steps, so it is essential that you check your work. Use

software or a calculator to check your work and perform routine computations.

Although you will be familiar with some material in this chapter, you should

carefully study the methods presented. This will cultivate and clarify your intuition for

the more abstract material that follows.

Recall from analytic geometry that the equation of a line in two-dimensional space

has the form

a1x + a2y = b, where a1, a2, and b are constants.

This is a linear equation in two variables x and y. Similarly, the equation of a plane

in three-dimensional space has the form

a1x + a2y + a3z = b, where a1, a2, a3, and b are constants.

This is a linear equation in three variables x, y, and z. A linear equation in n variables

is defined below.

Definition of a Linear Equation in n Variables

A linear equation in n variables x1, x2, x3, . . . , xn has the form

a1x1 + a2 x2 + a3 x3 + . . . + an xn = b.

The coefficients a1, a2, a3, . . . , an are real numbers, and the constant term b

is a real number. The number a1 is the leading coefficient, and x1 is the

leading variable.

Linear equations have no products or roots of variables and no variables involved

in trigonometric, exponential, or logarithmic functions. Variables appear only to the

first power.

Linear and Nonlinear Equations

Each equation is linear.

a. 3x + 2y = 7
b. (1/2)x + y − πz = √2
c. (sin π)x1 − 4x2 = e^2

Each equation is not linear.

a. xy + z = 2
b. e^x − 2y = 4
c. sin x1 + 2x2 − 3x3 = 0


Solutions and Solution Sets

A solution of a linear equation in n variables is a sequence of n real numbers s1, s2,

s3, . . . , sn that satisfy the equation when you substitute the values

x1 = s1, x2 = s2, x3 = s3, . . . , xn = sn

into the equation. For example, x1 = 2 and x2 = 1 satisfy the equation x1 + 2x2 = 4. Some other solutions are x1 = −4 and x2 = 4, x1 = 0 and x2 = 2, and x1 = −2 and x2 = 3.

The set of all solutions of a linear equation is its solution set, and when you have

found this set, you have solved the equation. To describe the entire solution set of a

linear equation, use a parametric representation, as illustrated in Examples 2 and 3.

Parametric Representation of a Solution Set

Solve the linear equation x1 + 2x2 = 4.

SOLUTION

To find the solution set of an equation involving two variables, solve for one of the

variables in terms of the other variable. Solving for x1 in terms of x2, you obtain

x1 = 4 − 2x2.

In this form, the variable x2 is free, which means that it can take on any real value. The

variable x1 is not free because its value depends on the value assigned to x2. To represent

the infinitely many solutions of this equation, it is convenient to introduce a third variable

t called a parameter. By letting x2 = t, you can represent the solution set as

x1 = 4 − 2t, x2 = t, t is any real number.

To obtain particular solutions, assign values to the parameter t. For instance, t = 1

yields the solution x1 = 2 and x2 = 1, and t = 4 yields the solution x1 = −4

and x2 = 4.

To parametrically represent the solution set of the linear equation in Example 2

another way, you could have chosen x1 to be the free variable. The parametric

representation of the solution set would then have taken the form

x1 = s, x2 = 2 − (1/2)s, s is any real number.

For convenience, when an equation has more than one free variable, choose the

variables that occur last in the equation to be the free variables.

Parametric Representation of a Solution Set

Solve the linear equation 3x + 2y − z = 3.

SOLUTION

Choosing y and z to be the free variables, solve for x to obtain

3x = 3 − 2y + z

x = 1 − (2/3)y + (1/3)z.

Letting y = s and z = t, you obtain the parametric representation

x = 1 − (2/3)s + (1/3)t, y = s, z = t

where s and t are any real numbers. Two particular solutions are

x = 1, y = 0, z = 0 and x = 1, y = 1, z = 2.
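The parametric representation in Example 3 can be spot-checked numerically. The sketch below (an illustrative `solution` helper, not from the text) substitutes x = 1 − (2/3)s + (1/3)t, y = s, z = t back into 3x + 2y − z and confirms the result is always 3:

```python
# Spot-checking Example 3's parametric solution with exact arithmetic.
from fractions import Fraction

def solution(s, t):
    # x = 1 - (2/3)s + (1/3)t, y = s, z = t
    s, t = Fraction(s), Fraction(t)
    return 1 - Fraction(2, 3) * s + Fraction(1, 3) * t, s, t

# Every choice of the parameters s and t must satisfy 3x + 2y - z = 3.
for s in range(-2, 3):
    for t in range(-2, 3):
        x, y, z = solution(s, t)
        assert 3 * x + 2 * y - z == 3

x, y, z = solution(1, 2)
print(x, y, z)  # 1 1 2, the particular solution x = 1, y = 1, z = 2
```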


Systems of Linear Equations

A system of m linear equations in n variables is a set of m equations, each of which

is linear in the same n variables:

REMARK

The double-subscript notation

indicates aij is the coefficient

of xj in the ith equation.

a11x1 + a12x2 + a13x3 + . . . + a1n xn = b1

a21x1 + a22x2 + a23x3 + . . . + a2n xn = b2

a31x1 + a32x2 + a33x3 + . . . + a3n xn = b3

⋮

am1x1 + am2x2 + am3x3 + . . . + amn xn = bm.

A system of linear equations is also called a linear system. A solution of a linear

system is a sequence of numbers s1, s2, s3, . . . , sn that is a solution of each equation

in the system. For example, the system

3x1 + 2x2 = 3

−x1 + x2 = 4

has x1 = −1 and x2 = 3 as a solution because x1 = −1 and x2 = 3 satisfy both

equations. On the other hand, x1 = 1 and x2 = 0 is not a solution of the system because

these values satisfy only the first equation in the system.

DISCOVERY

1. Graph the two lines

3x − y = 1
2x − y = 0

in the xy-plane. Where do they intersect? How many solutions does this system of linear equations have?

2. Repeat this analysis for the pair of lines

3x − y = 1
3x − y = 0

and for the pair

3x − y = 1
6x − 2y = 2.

3. What basic types of solution sets are possible for a system of two linear equations in two variables?

See LarsonLinearAlgebra.com for an interactive version of this type of exercise.

LINEAR ALGEBRA APPLIED

In a chemical reaction, atoms reorganize in one or more

substances. For example, when methane gas (CH4 )

combines with oxygen (O2) and burns, carbon dioxide

(CO2 ) and water (H2O) form. Chemists represent this

process by a chemical equation of the form

(x1)CH4 + (x2)O2 → (x3)CO2 + (x4)H2O.

A chemical reaction can neither create nor destroy atoms.

So, all of the atoms represented on the left side of the

arrow must also be on the right side of the arrow. This

is called balancing the chemical equation. In the above

example, chemists can use a system of linear equations

to find values of x1, x2, x3, and x4 that will balance the

chemical equation.
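To make the sidebar concrete: conservation of carbon, hydrogen, and oxygen atoms gives the linear system x1 = x3, 4x1 = 2x4, and 2x2 = 2x3 + x4. The sketch below is illustrative only (it hand-codes the substitution for methane rather than solving a general system); it fixes the free variable x1 = 1 and back-substitutes:

```python
# Balancing (x1)CH4 + (x2)O2 -> (x3)CO2 + (x4)H2O.
# Element conservation gives:
#   C: x1 = x3        H: 4*x1 = 2*x4        O: 2*x2 = 2*x3 + x4
# The system has infinitely many solutions (any positive multiple also
# balances), so fix the parameter x1 = 1 and substitute.

from fractions import Fraction

x1 = Fraction(1)
x3 = x1                  # carbon balance
x4 = 2 * x1              # hydrogen balance: 4*x1 = 2*x4
x2 = (2 * x3 + x4) / 2   # oxygen balance

print(x1, x2, x3, x4)  # 1 2 1 2, i.e. CH4 + 2 O2 -> CO2 + 2 H2O
```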



It is possible for a system of linear equations to have exactly one solution,

infinitely many solutions, or no solution. A system of linear equations is consistent

when it has at least one solution and inconsistent when it has no solution.

Systems of Two Equations in Two Variables

Solve and graph each system of linear equations.

a. x + y = 3
   x − y = −1

b. x + y = 3
   2x + 2y = 6

c. x + y = 3
   x + y = 1

SOLUTION

a. This system has exactly one solution, x = 1 and y = 2. One way to obtain the solution is to add the two equations to give 2x = 2, which implies x = 1 and so y = 2. The graph of this system is two intersecting lines, as shown in Figure 1.1(a).

b. This system has infinitely many solutions because the second equation is the result of multiplying both sides of the first equation by 2. A parametric representation of the solution set is

x = 3 − t, y = t, t is any real number.

The graph of this system is two coincident lines, as shown in Figure 1.1(b).

c. This system has no solution because the sum of two numbers cannot be 3 and 1 simultaneously. The graph of this system is two parallel lines, as shown in Figure 1.1(c).

Figure 1.1
a. Two intersecting lines: x + y = 3, x − y = −1
b. Two coincident lines: x + y = 3, 2x + 2y = 6
c. Two parallel lines: x + y = 3, x + y = 1

Example 4 illustrates the three basic types of solution sets that are possible for a

system of linear equations. This result is stated here without proof. (The proof is

provided later in Theorem 2.5.)

Number of Solutions of a System of Linear Equations

For a system of linear equations, precisely one of the statements below is true.

1. The system has exactly one solution (consistent system).

2. The system has infinitely many solutions (consistent system).

3. The system has no solution (inconsistent system).


Solving a System of Linear Equations

Which system is easier to solve algebraically?

x − 2y + 3z = 9
−x + 3y = −4
2x − 5y + 5z = 17

x − 2y + 3z = 9
y + 3z = 5
z = 2

The system on the right is clearly easier to solve. This system is in row-echelon form,

such a system, use back-substitution.

Using Back-Substitution in Row-Echelon Form

Use back-substitution to solve the system.

x − 2y = 5      Equation 1
y = −2          Equation 2

SOLUTION

From Equation 2, you know that y = −2. By substituting this value of y into Equation 1, you obtain

x − 2(−2) = 5      Substitute −2 for y.
x = 1.             Solve for x.

The system has exactly one solution: x = 1 and y = −2.

The term back-substitution implies that you work backwards. For instance,

in Example 5, the second equation gives you the value of y. Then you substitute

that value into the first equation to solve for x. Example 6 further demonstrates this

procedure.

Using Back-Substitution in Row-Echelon Form

Solve the system.

x − 2y + 3z = 9      Equation 1
y + 3z = 5           Equation 2
z = 2                Equation 3

SOLUTION

From Equation 3, you know the value of z. To solve for y, substitute z = 2 into Equation 2 to obtain

y + 3(2) = 5      Substitute 2 for z.
y = −1.           Solve for y.

Then, substitute y = −1 and z = 2 in Equation 1 to obtain

x − 2(−1) + 3(2) = 9      Substitute −1 for y and 2 for z.
x = 1.                    Solve for x.

The solution is x = 1, y = −1, and z = 2.
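The back-substitution loop of Examples 5 and 6 is mechanical enough to write down in a few lines. The `back_substitute` helper below is an illustrative sketch (its name and shape are not from the text); it assumes a square system already in row-echelon form with leading coefficients of 1, as in both examples:

```python
# Back-substitution for a square system in row-echelon form whose leading
# coefficients are all 1 (so no division is needed). U holds the coefficient
# rows and b the right-hand sides.

def back_substitute(U, b):
    n = len(b)
    x = [0] * n
    for i in range(n - 1, -1, -1):  # work backwards, from the last equation up
        x[i] = b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x

# Example 6:  x - 2y + 3z = 9,  y + 3z = 5,  z = 2
U = [[1, -2, 3],
     [0, 1, 3],
     [0, 0, 1]]
b = [9, 5, 2]
print(back_substitute(U, b))  # [1, -1, 2]
```

Running the same helper on Example 5's system, with U = [[1, -2], [0, 1]] and b = [5, -2], gives [1, -2], matching x = 1 and y = −2.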

Two systems of linear equations are equivalent when they have the same solution

set. To solve a system that is not in row-echelon form, first rewrite it as an equivalent

system that is in row-echelon form using the operations listed on the next page.


Operations That Produce Equivalent Systems

Each of these operations on a system of linear equations produces an equivalent

system.

1. Interchange two equations.

2. Multiply an equation by a nonzero constant.

3. Add a multiple of an equation to another equation.

Rewriting a system of linear equations in row-echelon form usually involves

a chain of equivalent systems, using one of the three basic operations to obtain

each system. This process is called Gaussian elimination, after the German

mathematician Carl Friedrich Gauss (1777–1855).

Carl Friedrich Gauss

(1777–1855)

German mathematician

Carl Friedrich Gauss is

recognized, with Newton

and Archimedes, as one

of the three greatest

mathematicians in history.

Gauss used a form of what

is now known as Gaussian

elimination in his research.

Although this method was

named in his honor, the

Chinese used an

almost identical

method some

2000 years prior

to Gauss.

Figure 1.2 The three planes x − 2y + 3z = 9, −x + 3y = −4, and 2x − 5y + 5z = 17 intersect in the single point (1, −1, 2).

Using Elimination to Rewrite a System in Row-Echelon Form

See LarsonLinearAlgebra.com for an interactive version of this type of example.

Solve the system.

x − 2y + 3z = 9
−x + 3y = −4
2x − 5y + 5z = 17

SOLUTION

Although there are several ways to begin, you want to use a systematic procedure

that can be applied to larger systems. Work from the upper left corner of the

system, saving the x at the upper left and eliminating the other x-terms from the

first column.

x − 2y + 3z = 9

y + 3z = 5

2x − 5y + 5z = 17

Adding the first equation to the second equation produces a new second equation.

x − 2y + 3z = 9

y + 3z = 5

−y − z = −1

Adding −2 times the first equation to the third equation produces a new third equation.

Now that you have eliminated all but the first x from the first column, work on the

second column.

x − 2y + 3z = 9

y + 3z = 5

2z = 4

Adding the second equation to the third equation produces a new third equation.

x − 2y + 3z = 9

y + 3z = 5

z=2

Multiplying the third equation by 1/2 produces a new third equation.

This is the same system you solved in Example 6, and, as in that example, the solution is

x = 1, y = −1, z = 2.

Each of the three equations in Example 7 represents a plane in a three-dimensional

coordinate system. The unique solution of the system is the point (x, y, z) = (1, −1, 2),

so the three planes intersect at this point, as shown in Figure 1.2.
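The systematic procedure of Example 7 — eliminate down the first column, then the second, then back-substitute — can be sketched in a few lines. The `gaussian_solve` helper below is illustrative only: it assumes a square system with exactly one solution, uses exact Fraction arithmetic, and does not handle the zero pivots or inconsistent systems discussed later:

```python
# Gaussian elimination with back-substitution, under the assumptions above.
from fractions import Fraction

def gaussian_solve(aug):
    n = len(aug)
    A = [[Fraction(v) for v in row] for row in aug]  # augmented matrix, n x (n+1)
    for i in range(n):
        A[i] = [v / A[i][i] for v in A[i]]  # make the leading coefficient 1
        for r in range(i + 1, n):
            factor = A[r][i]
            # add -factor times row i to row r, eliminating column i below the pivot
            A[r] = [v - factor * w for v, w in zip(A[r], A[i])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):  # back-substitution
        x[i] = A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    return x

# Example 7:  x - 2y + 3z = 9,  -x + 3y = -4,  2x - 5y + 5z = 17
aug = [[1, -2, 3, 9],
       [-1, 3, 0, -4],
       [2, -5, 5, 17]]
print(gaussian_solve(aug))  # x = 1, y = -1, z = 2 (as Fractions)
```

The intermediate rows this produces match the hand computation above: after the first column is cleared, the second and third equations read y + 3z = 5 and −y − z = −1.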


Many steps are often required to solve a system of linear equations, so it is

very easy to make arithmetic errors. You should develop the habit of checking your

solution by substituting it into each equation in the original system. For instance,

in Example 7, check the solution x = 1, y = −1, and z = 2 as shown below.

Equation 1: (1) − 2(−1) + 3(2) = 9
Equation 2: −(1) + 3(−1) = −4
Equation 3: 2(1) − 5(−1) + 5(2) = 17

Substitute the solution into each equation of the original system.

The next example involves an inconsistent system—one that has no solution.

The key to recognizing an inconsistent system is that at some stage of the Gaussian

elimination process, you obtain a false statement such as 0 = −2.

An Inconsistent System

Solve the system.

x1 − 3x2 + x3 = 1
2x1 − x2 − 2x3 = 2
x1 + 2x2 − 3x3 = −1

SOLUTION

x1 − 3x2 + x3 = 1
5x2 − 4x3 = 0
x1 + 2x2 − 3x3 = −1
Adding −2 times the first equation to the second equation produces a new second equation.

x1 − 3x2 + x3 = 1
5x2 − 4x3 = 0
5x2 − 4x3 = −2
Adding −1 times the first equation to the third equation produces a new third equation.

(Another way of describing this operation is to say that you subtracted the first equation from the third equation to produce a new third equation.)

x1 − 3x2 + x3 = 1
    5x2 − 4x3 = 0
            0 = −2
Subtracting the second equation from the third equation produces a new third equation.

The statement 0 = −2 is false, so this system has no solution. Moreover, this system

is equivalent to the original system, so the original system also has no solution.
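The eliminations of Example 8 can be replayed on coefficient lists to watch the false statement appear. This is an illustrative sketch (the helper name `add_multiple` is not from the text); each row stores the coefficients of x1, x2, x3 followed by the right-hand side, and exact `Fraction` arithmetic avoids rounding.

```python
from fractions import Fraction

def add_multiple(rows, src, dst, k):
    """Replace rows[dst] with rows[dst] + k * rows[src]."""
    rows[dst] = [d + Fraction(k) * s for d, s in zip(rows[dst], rows[src])]

rows = [[Fraction(v) for v in eq] for eq in
        [[1, -3, 1, 1],    # x1 − 3x2 + x3 = 1
         [2, -1, -2, 2],   # 2x1 − x2 − 2x3 = 2
         [1, 2, -3, -1]]]  # x1 + 2x2 − 3x3 = −1

add_multiple(rows, 0, 1, -2)  # new second equation
add_multiple(rows, 0, 2, -1)  # new third equation
add_multiple(rows, 1, 2, -1)  # subtract the second equation from the third

print(rows[2])  # all coefficients zero, right-hand side −2: the false statement 0 = −2
```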

As in Example 7, the three equations in Example 8 represent planes in a three-dimensional coordinate system. In this example, however, the system is inconsistent. So, the planes do not have a point in common, as shown at the right.

[Figure: the planes x1 − 3x2 + x3 = 1, 2x1 − x2 − 2x3 = 2, and x1 + 2x2 − 3x3 = −1 have no common point of intersection.]

Copyright 2017 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).

Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.

1.1

Introduction to Systems of Linear Equations

9

This section ends with an example of a system of linear equations that has infinitely

many solutions. You can represent the solution set for such a system in parametric

form, as you did in Examples 2 and 3.

A System with Infinitely Many Solutions

Solve the system.

      x2 −  x3 =  0
x1       − 3x3 = −1
−x1 + 3x2      =  1

SOLUTION
Begin by rewriting the system in row-echelon form, as shown below.

x1       − 3x3 = −1
      x2 −  x3 =  0
−x1 + 3x2      =  1
Interchange the first two equations.

x1       − 3x3 = −1
      x2 −  x3 =  0
     3x2 − 3x3 =  0
Adding the first equation to the third equation produces a new third equation.

x1       − 3x3 = −1
      x2 −  x3 =  0
             0 =  0
Adding −3 times the second equation to the third equation eliminates the third equation.

The third equation is unnecessary, so omit it to obtain the system shown below.

x1       − 3x3 = −1
      x2 −  x3 =  0

To represent the solutions, choose x3 to be the free variable and represent it by the parameter t. Because x2 = x3 and x1 = 3x3 − 1, you can describe the solution set as

x1 = 3t − 1,

x2 = t,

x3 = t, t is any real number.
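Because t is free, every choice of t yields a solution. The spot-check below (illustrative only; the helper name `satisfies` is not from the text) confirms that the triple (3t − 1, t, t) satisfies all three original equations for several values of t.

```python
def satisfies(t):
    """True if (x1, x2, x3) = (3t − 1, t, t) solves the original system."""
    x1, x2, x3 = 3 * t - 1, t, t
    return (x2 - x3 == 0 and
            x1 - 3 * x3 == -1 and
            -x1 + 3 * x2 == 1)

results = [satisfies(t) for t in (-2, 0, 1, 5.5)]
print(results)  # prints [True, True, True, True]
```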

DISCOVERY
1. Graph the two lines represented by the system of equations.
   x − 2y = 1
   −2x + 3y = −3
2. Use Gaussian elimination to solve this system as shown below.
   x − 2y = 1
       −y = −1

   x − 2y = 1
        y = 1

   x = 3
   y = 1
   Graph the system of equations you obtain at each step of this process. What do you observe about the lines?

REMARK
You are asked to repeat this graphical analysis for other systems in Exercises 91 and 92.

See LarsonLinearAlgebra.com for an interactive version of this type of exercise.


1.1 Exercises

See CalcChat.com for worked-out solutions to odd-numbered exercises.

Linear Equations In Exercises 1–6, determine whether

the equation is linear in the variables x and y.

1. 2x − 3y = 4
2. 3x − 4xy = 0
3. 3/y + 2/x − 1 = 0
4. x² + y² = 4
5. 2 sin x − y = 14
6. (cos 3)x + y = −16

Parametric Representation In Exercises 7–10, find a parametric representation of the solution set of the linear equation.
7. 2x − 4y = 0
8. 3x − (1/2)y = 9
9. x + y + z = 1
10. 12x1 + 24x2 − 36x3 = 12

Graphical Analysis In Exercises 11–24, graph the system of linear equations. Solve the system and interpret your answer.
11. 2x + y = 4
    x − y = 2
12. x + 3y = 2
    −x + 2y = 3
13. −x + y = 1
    3x − 3y = 4

14. (1/2)x − (1/3)y = 1
    −2x + (4/3)y = −4
15. 3x − 5y = 7
    2x + y = 9
16. −x + 3y = 17
    4x + 3y = 7
17. 2x − y = 5
    5x − y = 11
19. (x + 3)/4 + (y − 1)/3 = 1
    2x − y = 12
21. 0.05x − 0.03y = 0.07
    0.07x + 0.02y = 0.16
23. x/4 + y/6 = 1
    x − y = 3

32. 4x − 5y = 3
    −8x + 10y = 14
33. 2x − 8y = 3
    (1/2)x + y = 0
34. 9x − 4y = 5
    (1/2)x + (1/3)y = 0
35. 4x − 8y = 9
    0.8x − 1.6y = 1.8
36. −14.7x + 2.1y = 1.05
    44.1x − 6.3y = −3.15

System of Linear Equations In Exercises 37–56, solve the system of linear equations.
37. x1 − x2 = 0
    3x1 − 2x2 = −1
38. 3x + 2y = 2
    6x + 4y = 14
40. x − 5y = 21
    6x + 5y = 21

41. 9x − 3y = −1
    (1/5)x + (2/5)y = −(1/3)
42. (2/3)x1 + (1/6)x2 = 0
    4x1 + x2 = 0
20. (x − 1)/2 + (y + 2)/3 = 4
    x − 2y = 5

43. (2x)/3 + y/6 = 2/3
    4x + y = 4

In Exercises 25–30, use backsubstitution to solve the system.

26. 2×1 − 4×2 = 6

25. x1 − x2 = 2

x2 = 3

3×2 = 9

29. 5×1 + 2×2 + x3 = 0

2×1 + x2

=0

31. −3x − y = 3
    6x + 2y = 1
27. −x + y − z = 0
    2y + z = 3
    (1/2)z = 0

(a) Use a graphing utility to graph the system.
(b) Use the graph to determine whether the system is consistent or inconsistent.
(c) If the system is consistent, approximate the solution.
(d) Solve the system algebraically.
(e) Compare the solution in part (d) with the approximation in part (c). What can you conclude?

39. 3u + v = 240
    u + 3v = 240
22. 0.2x − 0.5y = −27.8
    0.3x − 0.4y = 68.7
Graphical Analysis In Exercises 31–36, complete parts (a)–(e) for the system of equations.

28. x − y = 5
    3y + z = 11
    4z = 8
30. x1 + x2 + x3 = 0
    x2 = 0

44. x1 − 2x2 = 0
    6x1 + 2x2 = 0
(x − 2)/4 + (y − 1)/3 = 2
x − 3y = 20
(x1 + 4)/3 + (x2 + 1)/2 = 1
3x1 − x2 = −2

45. 0.02x1 − 0.05x2 = −0.19
    0.03x1 + 0.04x2 = 0.52
46. 0.05x1 − 0.03x2 = 0.21
    0.07x1 + 0.02x2 = 0.17
47. x − y − z = 0
    x + 2y − z = 6
    2x − z = 5
48. x + y + z = 2
    −x + 3y + 2z = 8
    4x + y = 4
49. 3x1 − 2x2 + 4x3 = 1
    x1 + x2 − 2x3 = 3
    2x1 − 3x2 + 6x3 = 8

The symbol indicates an exercise in which you are instructed to use a

graphing utility or software program.


50. 5x1 − 3x2 + 2x3 = 3
    2x1 + 4x2 − x3 = 7
    x1 − 11x2 + 4x3 = 3
51. 2x1 + x2 − 3x3 = 4
    4x1 + 2x3 = 10
    −2x1 + 3x2 − 13x3 = −8
52. x1 + 4x3 = 13
    4x1 − 2x2 + x3 = 7
    2x1 − 2x2 − 7x3 = −19
53. x − 3y + 2z = 18
    5x − 15y + 10z = 18
54. x1 − 2x2 + 5x3 = 2
    3x1 + 2x2 − x3 = −2
55. x + y + z + w = 6
    2x + 3y − w = 0
    −3x + 4y + z + 2w = 4
    x + 2y − z + w = 0
56. −x1 + 2x4 = 1
    4x2 − x3 − x4 = 2
    x2 − x4 = 0
    3x1 − 2x2 + 3x3 = 4

System of Linear Equations In Exercises 57–62, use

a software program or a graphing utility to solve the

system of linear equations.

57. 123.5x + 61.3y − 32.4z = −262.74
    54.7x − 45.6y + 98.2z = 197.4
    42.4x − 89.3y + 12.9z = 33.66
58. 120.2x + 62.4y − 36.5z = 258.64
    56.8x − 42.8y + 27.3z = −71.44
    88.1x + 72.5y − 28.5z = 225.88

59. x1 + 0.5x2 + 0.33x3 + 0.25x4 = 1.1
    0.5x1 + 0.33x2 + 0.25x3 + 0.21x4 = 1.2
    0.33x1 + 0.25x2 + 0.2x3 + 0.17x4 = 1.3
    0.25x1 + 0.2x2 + 0.17x3 + 0.14x4 = 1.4
60. 0.1x − 2.5y + 1.2z − 0.75w = 108
    2.4x + 1.5y − 1.8z + 0.25w = −81
    0.4x − 3.2y + 1.6z − 1.4w = 148.8
    1.6x + 1.2y − 3.2z + 0.6w = −143.2
61. (1/2)x1 − (3/7)x2 + (2/9)x3 = 349/630
    (2/3)x1 + (4/9)x2 − (2/5)x3 = −19/45
    (4/5)x1 − (1/8)x2 + (4/3)x3 = 139/150
62. (1/8)x − (1/7)y + (1/6)z − (1/5)w = 1
    (1/7)x + (1/6)y − (1/5)z + (1/4)w = 1
    (1/6)x − (1/5)y + (1/4)z − (1/3)w = 1
    (1/5)x + (1/4)y − (1/3)z + (1/2)w = 1


Number of Solutions In Exercises 63–66, state why the system of equations must have at least one solution. Then solve the system and determine whether it has exactly one solution or infinitely many solutions.
63. 4x + 3y + 17z = 0
    5x + 4y + 22z = 0
    4x + 2y + 19z = 0
64. 2x + 3y = 0
    4x + 3y − z = 0
    8x + 3y + 3z = 0
65. 5x + 5y − z = 0
    10x + 5y + 2z = 0
    5x + 15y − 9z = 0
66. 16x + 3y + z = 0
    16x + 2y − z = 0

67. Nutrition One eight-ounce glass of apple juice and one eight-ounce glass of orange juice contain a total of 227 milligrams of vitamin C. Two eight-ounce glasses of apple juice and three eight-ounce glasses of orange juice contain a total of 578 milligrams of vitamin C. How much vitamin C is in an eight-ounce glass of each type of juice?
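A problem like Exercise 67 translates into a 2 × 2 linear system. The sketch below (illustrative only; the helper name `solve_2x2` and the variable names are assumptions, not from the text) sets the system up and solves it with Cramer's rule, one of several routes to the answer.

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# a = mg of vitamin C per glass of apple juice, o = mg per glass of orange juice.
apple, orange = solve_2x2(1, 1, 227,   # a + o = 227
                          2, 3, 578)   # 2a + 3o = 578
print(apple, orange)
```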

68. Airplane Speed Two planes start from Los Angeles International Airport and fly in opposite directions. The second plane starts 1/2 hour after the first plane, but its speed is 80 kilometers per hour faster. Two hours after the first plane departs, the planes are 3200 kilometers apart. Find the airspeed of each plane.

True or False? In Exercises 69 and 70, determine

whether each statement is true or false. If a statement

is true, give a reason or cite an appropriate statement

from the text. If a statement is false, provide an example

that shows the statement is not true in all cases or cite an

appropriate statement from the text.

69. (a) A system of one linear equation in two variables is

always consistent.

(b) A system of two linear equations in three variables

is always consistent.

(c) If a linear system is consistent, then it has infinitely

many solutions.

70. (a) A linear system can have exactly two solutions.

(b)

Two systems of linear equations are equivalent

when they have the same solution set.

(c) A system of three linear equations in two variables

is always inconsistent.

71. Find a system of two equations in two variables, x1 and x2, that has the solution set given by the parametric representation x1 = t and x2 = 3t − 4, where t is any real number. Then show that the solutions to the system can also be written as
    x1 = t/3 + 4/3 and x2 = t.

The symbol

indicates that electronic data sets for these exercises are available

at LarsonLinearAlgebra.com. The data sets are compatible with MATLAB,

Mathematica, Maple, TI-83 Plus, TI-84 Plus, TI-89, and Voyage 200.


72. Find a system of two equations in three variables, x1, x2, and x3, that has the solution set given by the parametric representation
    x1 = t, x2 = s, and x3 = 3 + s − t
    where s and t are any real numbers. Then show that the solutions to the system can also be written as
    x1 = 3 + s − t, x2 = s, and x3 = t.
86. CAPSTONE Find values of a, b, and c such that the system of linear equations has (a) exactly one solution, (b) infinitely many solutions, and (c) no solution. Explain.
    x + 5y + z = 0
    x + 6y − z = 0
    2x + ay + bz = c

Substitution In Exercises 73–76, solve the system of equations by first letting A = 1/x, B = 1/y, and C = 1/z.
73. 12/x − 12/y = 7
    3/x + 4/y = 0
74. 3/x + 2/y = −1
    2/x − 3/y = −17/6
75. 2/x + 1/y − 3/z = 4
    4/x + 2/z = 10
    −2/x + 3/y − 13/z = −8
76. 2/x + 1/y − 2/z = 5
    3/x − 4/y = −1
    2/x + 1/y + 3/z = 0

Trigonometric Coefficients In Exercises 77 and 78,

solve the system of linear equations for x and y.

77. (cos θ )x + (sin θ )y = 1

(−sin θ )x + (cos θ )y = 0

78. (cos θ )x + (sin θ )y = 1

(−sin θ )x + (cos θ )y = 1

Coefficient Design In Exercises 79–84, determine the value(s) of k such that the system of linear equations has the indicated number of solutions.
79. No solution
    x + ky = 2
    kx + y = 4
80. Exactly one solution
    x + ky = 0
    kx + y = 0
81. Exactly one solution
    x + 2y + kz = 6
    3x + 6y + 8z = 4
82. No solution
    kx + 2ky + 3kz = 4k
    x + y + z = 0
    2x − y + z = 1
83. Infinitely many solutions
    4x + ky = 6
    kx + y = −3
84. Infinitely many solutions

87. Writing Consider the system of linear equations in x and y.
    a1x + b1y = c1
    a2x + b2y = c2
    a3x + b3y = c3
    Describe the graphs of these three equations in the xy-plane when the system has (a) exactly one solution, (b) infinitely many solutions, and (c) no solution.
88. Writing Explain why the system of linear equations in Exercise 87 must be consistent when the constant terms c1, c2, and c3 are all zero.

89. Show that if ax² + bx + c = 0 for all x, then a = b = c = 0.

90. Consider the system of linear equations in x and y.

ax + by = e

cx + dy = f

Under what conditions will the system have exactly one

solution?

Discovery In Exercises 91 and 92, sketch the lines

represented by the system of equations. Then use

Gaussian elimination to solve the system. At each step of

the elimination process, sketch the corresponding lines.

What do you observe about the lines?

91. x − 4y = −3
    5x − 6y = 13
92. 2x − 3y = 7
    −4x + 6y = −14

Writing In Exercises 93 and 94, the graphs of the two equations appear to be parallel. Solve the system of equations algebraically. Explain why the graphs are misleading.
93. 100y − x = 200
    99y − x = −198
94. 21x − 20y = 0
    13x − 12y = 120

kx + y = 16
3x − 4y = −64

85. Determine the values of k such that the system of linear

equations does not have a unique solution.

x + y + kz = 3

x + ky + z = 2

kx + y + z = 1

[Graphs for Exercises 93 and 94 are not shown.]


1.2 Gaussian Elimination and Gauss-Jordan Elimination

Determine the size of a matrix and write an augmented or coefficient matrix from a system of linear equations.
Use matrices and Gaussian elimination with back-substitution to solve a system of linear equations.
Use matrices and Gauss-Jordan elimination to solve a system of linear equations.
Solve a homogeneous system of linear equations.

Matrices

Section 1.1 introduced Gaussian elimination as a procedure for solving a system of

linear equations. In this section, you will study this procedure more thoroughly,

beginning with some definitions. The first is the definition of a matrix.

REMARK

The plural of matrix is matrices.

When each entry of a matrix is

a real number, the matrix

is a real matrix. Unless stated

otherwise, assume all matrices

in this text are real matrices.

Definition of a Matrix

If m and n are positive integers, then an m × n (read "m by n") matrix is a rectangular array

          Column 1  Column 2  Column 3  . . .  Column n
Row 1   [   a11       a12       a13     . . .    a1n   ]
Row 2   [   a21       a22       a23     . . .    a2n   ]
Row 3   [   a31       a32       a33     . . .    a3n   ]
  ⋮          ⋮         ⋮         ⋮                ⋮
Row m   [   am1       am2       am3     . . .    amn   ]

in which each entry, aij, of the matrix is a number. An m × n matrix has m rows and n columns. Matrices are usually denoted by capital letters.

The entry aij is located in the ith row and the jth column. The index i is called the

row subscript because it identifies the row in which the entry lies, and the index j is

called the column subscript because it identifies the column in which the entry lies.

A matrix with m rows and n columns is of size m × n. When m = n, the matrix is

square of order n and the entries a11, a22, a33, . . . , ann are the main diagonal entries.
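The definition can be phrased in code. In the sketch below (illustrative only, not from the text), a matrix is stored as a list of rows; note that the text's subscripts are 1-based while Python indexing is 0-based, so entry a23 lives at index [1][2].

```python
A = [[1, -4, 3],
     [-1, 3, -1],
     [2, 0, -4]]

m = len(A)        # number of rows
n = len(A[0])     # number of columns
a_23 = A[1][2]    # entry a23 in the text's notation: row 2, column 3

print((m, n), a_23)  # prints (3, 3) -1
```

Since m = n here, this matrix is square of order 3, and its main diagonal entries are A[0][0], A[1][1], A[2][2].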

Sizes of Matrices
Each matrix has the indicated size.

a. [2]                 Size: 1 × 1

b. [ 0  0 ]            Size: 2 × 2
   [ 0  0 ]

c. [ π   2  √2 ]       Size: 2 × 3
   [ e  −7   4 ]

REMARK
Begin by aligning the variables in the equations vertically. Use 0 to show coefficients of zero in the matrix. Note the fourth column of constant terms in the augmented matrix.

One common use of matrices is to represent systems of linear equations. The

matrix derived from the coefficients and constant terms of a system of linear equations

is the augmented matrix of the system. The matrix containing only the coefficients of

the system is the coefficient matrix of the system. Here is an example.

System
x − 4y + 3z = 5
−x + 3y − z = −3
2x − 4z = 6

Augmented Matrix
[  1  −4   3   5 ]
[ −1   3  −1  −3 ]
[  2   0  −4   6 ]

Coefficient Matrix
[  1  −4   3 ]
[ −1   3  −1 ]
[  2   0  −4 ]
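The two matrices differ only in the column of constant terms. The sketch below (illustrative only; variable names are assumptions, not from the text) builds the augmented matrix of the system above by appending each constant term, and recovers the coefficient matrix by dropping the last column.

```python
coefficients = [[1, -4, 3],
                [-1, 3, -1],
                [2, 0, -4]]
constants = [5, -3, 6]

# Append each constant term as a fourth column to form the augmented matrix.
augmented = [row + [b] for row, b in zip(coefficients, constants)]

# Recover the coefficient matrix by dropping the last column of each row.
recovered = [row[:-1] for row in augmented]

print(augmented[1])  # prints [-1, 3, -1, -3]
```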


Elementary Row Operations

In the previous section, you studied three operations that produce equivalent systems

of linear equations.

1. Interchange two equations.

2. Multiply an equation by a nonzero constant.

3. Add a multiple of an equation to another equation.

In matrix terminology, these three operations correspond to elementary row operations.

An elementary row operation on an augmented matrix produces a new augmented matrix

corresponding to a new (but equivalent) system of linear equations. Two matrices are

row-equivalent when one can be

obtained from the other by a finite sequence of

elementary row operations.

Elementary Row Operations

1. Interchange two rows.

2. Multiply a row by a nonzero constant.

3. Add a multiple of a row to another row.
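The three operations are easy to express as functions on a list-of-rows matrix. This is an illustrative sketch (function names are assumptions, not from the text); rows are numbered from 0 here, unlike the text's R1, R2, . . . notation.

```python
def interchange(A, i, j):
    """Elementary row operation 1: interchange rows i and j."""
    A[i], A[j] = A[j], A[i]

def scale(A, i, c):
    """Elementary row operation 2: multiply row i by a nonzero constant c."""
    assert c != 0, "the constant must be nonzero"
    A[i] = [c * x for x in A[i]]

def add_multiple(A, src, dst, k):
    """Elementary row operation 3: add k times row src to row dst."""
    A[dst] = [d + k * s for d, s in zip(A[dst], A[src])]

# Reproducing Example 2(c): add −2 times row 1 to row 3 (indices 0 and 2 here).
A = [[1, 2, -4, 3],
     [0, 3, -2, -1],
     [2, 1, 5, -2]]
add_multiple(A, 0, 2, -2)
print(A[2])  # prints [0, -3, 13, -8]
```

Note that `scale` insists on a nonzero constant: multiplying a row by 0 would destroy information and does not produce a row-equivalent matrix.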

TECHNOLOGY
Many graphing utilities and software programs can perform elementary row operations on matrices. If you use a graphing utility, you may see something similar to the screen below for Example 2(c). The Technology Guide at CengageBrain.com can help you use technology to perform elementary row operations.

A
[[1  2 -4  3 ]
 [0  3 -2 -1 ]
 [2  1  5 -2 ]]
mRAdd(-2, A, 1, 3)
[[1  2 -4  3 ]
 [0  3 -2 -1 ]
 [0 -3 13 -8 ]]

Although elementary row operations are relatively simple to perform, they can

involve a lot of arithmetic, so it is easy to make a mistake. Noting the elementary row

operations performed in each step can make checking your work easier.

Solving some systems involves many steps, so it is helpful to use a shorthand

method of notation to keep track of each elementary row operation you perform. The

next example introduces this notation.

Elementary Row Operations

a. Interchange the first and second rows.

Original Matrix        New Row-Equivalent Matrix      Notation
[  0   1  3  4 ]       [ −1   2  0  3 ]               R1 ↔ R2
[ −1   2  0  3 ]       [  0   1  3  4 ]
[  2  −3  4  1 ]       [  2  −3  4  1 ]

b. Multiply the first row by 1/2 to produce a new first row.

Original Matrix        New Row-Equivalent Matrix      Notation
[ 2  −4   6  −2 ]      [ 1  −2   3  −1 ]              (1/2)R1 → R1
[ 1   3  −3   0 ]      [ 1   3  −3   0 ]
[ 5  −2   1   2 ]      [ 5  −2   1   2 ]

c. Add −2 times the first row to the third row to produce a new third row.

Original Matrix        New Row-Equivalent Matrix      Notation
[ 1  2  −4   3 ]       [ 1   2  −4   3 ]              R3 + (−2)R1 → R3
[ 0  3  −2  −1 ]       [ 0   3  −2  −1 ]
[ 2  1   5  −2 ]       [ 0  −3  13  −8 ]

Notice that adding −2 times row 1 to row 3 does not change row 1.


In Example 7 in Section 1.1, you used Gaussian elimination with back‑substitution

to solve a system of linear equations. The next example demonstrates the matrix

version of Gaussian elimination. The two methods are essentially the same. The basic

difference is that with matrices you do not need to keep writing the variables.

Using Elementary Row Operations to Solve a System

Linear System                        Associated Augmented Matrix

x − 2y + 3z = 9                      [  1  −2  3    9 ]
−x + 3y      = −4                    [ −1   3  0   −4 ]
2x − 5y + 5z = 17                    [  2  −5  5   17 ]

Add the first equation to the        Add the first row to the second row to
second equation.                     produce a new second row.

x − 2y + 3z = 9                      [ 1  −2  3   9 ]
     y + 3z = 5                      [ 0   1  3   5 ]    R2 + R1 → R2
2x − 5y + 5z = 17                    [ 2  −5  5  17 ]

Add −2 times the first equation      Add −2 times the first row to the third
to the third equation.               row to produce a new third row.

x − 2y + 3z = 9                      [ 1  −2   3   9 ]
     y + 3z = 5                      [ 0   1   3   5 ]
    −y −  z = −1                     [ 0  −1  −1  −1 ]   R3 + (−2)R1 → R3

Add the second equation to the       Add the second row to the third row to
third equation.                      produce a new third row.

x − 2y + 3z = 9                      [ 1  −2  3  9 ]
     y + 3z = 5                      [ 0   1  3  5 ]
         2z = 4                      [ 0   0  2  4 ]     R3 + R2 → R3

Multiply the third equation          Multiply the third row by 1/2 to produce
by 1/2.                              a new third row.

x − 2y + 3z = 9                      [ 1  −2  3  9 ]
     y + 3z = 5                      [ 0   1  3  5 ]
          z = 2                      [ 0   0  1  2 ]     (1/2)R3 → R3

REMARK
The term echelon refers to the stair-step pattern formed by the nonzero elements of the matrix.

Use back‑substitution to find the solution, as in Example 6 in Section 1.1. The solution

is x = 1, y = −1, and z = 2.

The last matrix in Example 3 is in row-echelon form. To be in this form, a matrix

must have the properties listed below.

Row-Echelon Form and Reduced Row-Echelon Form

A matrix in row-echelon form has the properties below.

1. Any rows consisting entirely of zeros occur at the bottom of the matrix.

2. For each row that does not consist entirely of zeros, the first nonzero entry

is 1 (called a leading 1).

3. For two successive (nonzero) rows, the leading 1 in the higher row is farther

to the left than the leading 1 in the lower row.

A matrix in row-echelon form is in reduced row-echelon form when every column

that has a leading 1 has zeros in every position above and below its leading 1.


Row-Echelon Form

TECHNOLOGY
Use a graphing utility or a software program to find the row-echelon forms of the matrices in Examples 4(b) and 4(e) and the reduced row-echelon forms of the matrices in Examples 4(a), 4(b), 4(c), and 4(e). The Technology Guide at CengageBrain.com can help you use technology to find the row-echelon and reduced row-echelon forms of a matrix. Similar exercises and projects are also available on the website.

Determine whether each matrix is in row-echelon form. If it is, determine whether the matrix is also in reduced row-echelon form.

a. [ 1  2  −1   4 ]        b. [ 1  2  −1   2 ]
   [ 0  1   0   3 ]           [ 0  0   0   0 ]
   [ 0  0   1  −2 ]           [ 0  1   2  −4 ]

c. [ 1  −5  2  −1   3 ]    d. [ 1  0  0  −1 ]
   [ 0   0  1   3  −2 ]       [ 0  1  0   2 ]
   [ 0   0  0   1   4 ]       [ 0  0  1   3 ]
   [ 0   0  0   0   1 ]

e. [ 1  2  −3   4 ]        f. [ 0  1  0  5 ]
   [ 0  2   1  −1 ]           [ 0  0  1  3 ]
   [ 0  0   1  −3 ]           [ 0  0  0  0 ]

SOLUTION
The matrices in (a), (c), (d), and (f) are in row-echelon form. The matrices in (d) and (f) are in reduced row-echelon form because every column that has a leading 1 has zeros in every position above and below its leading 1. The matrix in (b) is not in row-echelon form because the row of all zeros does not occur at the bottom of the matrix. The matrix in (e) is not in row-echelon form because the first nonzero entry in Row 2 is not 1.

Every matrix is row-equivalent to a matrix in row-echelon form. For instance, in Example 4(e), multiplying the second row in the matrix by 1/2 changes the matrix to row-echelon form.

The procedure for using Gaussian elimination with back-substitution is

summarized below.

Gaussian Elimination with Back-substitution

1. Write the augmented matrix of the system of linear equations.

2. Use elementary row operations to rewrite the matrix in row-echelon form.

3. Write the system of linear equations corresponding to the matrix in

row-echelon form, and use back-substitution to find the solution.

Gaussian elimination with back-substitution works well for solving systems of linear

equations by hand or with a computer. For this algorithm, the order in which you perform

the elementary row operations is important. Operate from left to right by columns, using

elementary row operations to obtain zeros in all entries directly below the leading 1’s.
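The three-step procedure above can be sketched in code. This is an illustrative implementation (function names are assumptions, not from the text); it uses exact `Fraction` arithmetic and assumes the system has exactly one solution, so it does no handling of inconsistent or dependent systems beyond a simple row swap to find a nonzero pivot.

```python
from fractions import Fraction

def gaussian_eliminate(aug):
    """Reduce an augmented matrix to row-echelon form, then back-substitute."""
    A = [[Fraction(x) for x in row] for row in aug]
    n = len(A)
    for col in range(n):
        # Find a row with a nonzero pivot and move it up (a row interchange).
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        A[col] = [x / A[col][col] for x in A[col]]   # make the leading 1
        for r in range(col + 1, n):                  # zeros directly below it
            A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    # Back-substitution, from the last equation upward.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))
    return x

# The system of Example 3: x − 2y + 3z = 9, −x + 3y = −4, 2x − 5y + 5z = 17.
solution = gaussian_eliminate([[1, -2, 3, 9], [-1, 3, 0, -4], [2, -5, 5, 17]])
print(solution == [1, -1, 2])  # prints True
```

The left-to-right column order matters, just as the text says: each column is put in its final echelon shape before the next one is touched.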

LINEAR ALGEBRA APPLIED

The Global Positioning System (GPS) is a network of

24 satellites originally developed by the U.S. military as a

navigational tool. Today, GPS technology is used in a wide

variety of civilian applications, such as package delivery,

farming, mining, surveying, construction, banking, weather

forecasting, and disaster relief. A GPS receiver works by using

satellite readings to calculate its location. In three dimensions,

the receiver uses signals from at least four satellites to

“trilaterate” its position. In a simplified mathematical model,

a system of three linear equations in four unknowns (three

dimensions and time) is used to determine the coordinates

of the receiver as functions of time.


Gaussian Elimination with Back-Substitution

Solve the system.
      x2 +  x3 − 2x4 =  −3
x1 + 2x2 −  x3       =   2
2x1 + 4x2 +  x3 − 3x4 =  −2
x1 − 4x2 − 7x3 −  x4 = −19

SOLUTION
The augmented matrix for this system is

[ 0   1   1  −2   −3 ]
[ 1   2  −1   0    2 ]
[ 2   4   1  −3   −2 ]
[ 1  −4  −7  −1  −19 ].

Obtain a leading 1 in the upper left corner and zeros elsewhere in the first column.

[ 1   2  −1   0    2 ]
[ 0   1   1  −2   −3 ]    Interchange the first two rows.
[ 2   4   1  −3   −2 ]    R1 ↔ R2
[ 1  −4  −7  −1  −19 ]

[ 1   2  −1   0    2 ]
[ 0   1   1  −2   −3 ]    Adding −2 times the first row to the third row
[ 0   0   3  −3   −6 ]    produces a new third row.
[ 1  −4  −7  −1  −19 ]    R3 + (−2)R1 → R3

[ 1   2  −1   0    2 ]
[ 0   1   1  −2   −3 ]    Adding −1 times the first row to the fourth row
[ 0   0   3  −3   −6 ]    produces a new fourth row.
[ 0  −6  −6  −1  −21 ]    R4 + (−1)R1 → R4

Now that the first column is in the desired form, change the second column as shown below.

[ 1  2  −1    0    2 ]
[ 0  1   1   −2   −3 ]    Adding 6 times the second row to the fourth row
[ 0  0   3   −3   −6 ]    produces a new fourth row.
[ 0  0   0  −13  −39 ]    R4 + (6)R2 → R4

To write the third and fourth columns in proper form, multiply the third row by 1/3 and the fourth row by −1/13.

[ 1  2  −1   0   2 ]
[ 0  1   1  −2  −3 ]    Multiplying the third row by 1/3 and the fourth
[ 0  0   1  −1  −2 ]    row by −1/13 produces new third and fourth rows.
[ 0  0   0   1   3 ]    (1/3)R3 → R3, (−1/13)R4 → R4

The matrix is now in row-echelon form, and the corresponding system is shown below.

x1 + 2x2 −  x3       =  2
      x2 +  x3 − 2x4 = −3
            x3 −  x4 = −2
                  x4 =  3

Use back-substitution to find that the solution is x1 = −1, x2 = 2, x3 = 1, and x4 = 3.


When solving a system of linear equations, remember that it is possible for the

system to have no solution. If, in the elimination process, you obtain a row of all zeros

except for the last entry, then it is unnecessary to continue the process. Simply conclude

that the system has no solution, or is inconsistent.

A System with No Solution

Solve the system.
x1 −  x2 + 2x3 = 4
x1       +  x3 = 6
2x1 − 3x2 + 5x3 = 4
3x1 + 2x2 −  x3 = 1

SOLUTION
The augmented matrix for this system is

[ 1  −1   2  4 ]
[ 1   0   1  6 ]
[ 2  −3   5  4 ]
[ 3   2  −1  1 ].

Apply Gaussian elimination to the augmented matrix.

[ 1  −1   2    4 ]
[ 0   1  −1    2 ]    R2 + (−1)R1 → R2
[ 2  −3   5    4 ]
[ 3   2  −1    1 ]

[ 1  −1   2    4 ]
[ 0   1  −1    2 ]
[ 0  −1   1   −4 ]    R3 + (−2)R1 → R3
[ 3   2  −1    1 ]

[ 1  −1   2    4 ]
[ 0   1  −1    2 ]
[ 0  −1   1   −4 ]
[ 0   5  −7  −11 ]    R4 + (−3)R1 → R4

[ 1  −1   2    4 ]
[ 0   1  −1    2 ]
[ 0   0   0   −2 ]    R3 + R2 → R3
[ 0   5  −7  −11 ]

Note that the third row of this matrix consists entirely of zeros except for the last entry.

This means that the original system of linear equations is inconsistent. To see why this

is true, convert back to a system of linear equations.

x1 −  x2 + 2x3 =   4
       x2 −  x3 =   2
              0 =  −2
     5x2 − 7x3 = −11

The third equation is not possible, so the system has no solution.
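The stopping test described above — a row whose coefficients are all zero but whose last entry is nonzero — is easy to automate. The sketch below (illustrative only; the function name is an assumption, not from the text) runs forward elimination with exact arithmetic and reports whether such a contradictory row appears, flagging the system of Example 6 as inconsistent.

```python
from fractions import Fraction

def has_contradiction(aug):
    """Forward-eliminate and report whether a row [0 ... 0 | c], c != 0, appears."""
    A = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols - 1):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue            # no pivot in this column; move on
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            factor = A[i][c] / A[r][c]
            A[i] = [x - factor * y for x, y in zip(A[i], A[r])]
        r += 1
    return any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in A)

# The augmented matrix of Example 6 is flagged as inconsistent.
print(has_contradiction([[1, -1, 2, 4],
                         [1, 0, 1, 6],
                         [2, -3, 5, 4],
                         [3, 2, -1, 1]]))  # prints True
```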


Gauss-Jordan Elimination

With Gaussian elimination, you apply elementary row operations to a matrix to

obtain a (row-equivalent) row-echelon form. A second method of elimination, called

Gauss-Jordan elimination after Carl Friedrich Gauss and Wilhelm Jordan (1842–1899),

continues the reduction process until a reduced row-echelon form is obtained.

Example 7 demonstrates this procedure.

Gauss-Jordan Elimination

See LarsonLinearAlgebra.com for an interactive version of this type of example.

Use Gauss-Jordan elimination to solve the system.

x − 2y + 3z = 9
−x + 3y      = −4
2x − 5y + 5z = 17

SOLUTION
In Example 3, you used Gaussian elimination to obtain the row-echelon form

[ 1  −2  3  9 ]
[ 0   1  3  5 ]
[ 0   0  1  2 ].

Now, apply elementary row operations until you obtain zeros above each of the leading

1’s, as shown below.

[ 1   0   9   19 ]   R1 + (2)R2 → R1
[ 0   1   3    5 ]
[ 0   0   1    2 ]

[ 1   0   9   19 ]
[ 0   1   0   −1 ]   R2 + (−3)R3 → R2
[ 0   0   1    2 ]

[ 1   0   0    1 ]   R1 + (−9)R3 → R1
[ 0   1   0   −1 ]
[ 0   0   1    2 ]

The matrix is now in reduced row-echelon form. Converting back to a system of linear

equations, you have

x = 1

y = −1

z = 2.
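As a quick sanity check (not part of the text), the solution can be substituted back into the original system in a few lines of Python:

```python
# coefficient rows and constants of the system in Example 7
equations = [((1, -2, 3), 9),
             ((-1, 3, 0), -4),
             ((2, -5, 5), 17)]
solution = (1, -1, 2)   # (x, y, z) read off the reduced row-echelon form

for coeffs, rhs in equations:
    lhs = sum(c * v for c, v in zip(coeffs, solution))
    print(lhs == rhs)   # → True for every equation
```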

The elimination procedures described in this section can sometimes result in

fractional coefficients. For example, in the elimination procedure for the system

2x − 5y + 5z = 14
3x − 2y + 3z = 9
−3x + 4y = −18

you may be inclined to first multiply Row 1 by 1/2 to produce a leading 1, which will

result in working with fractional coefficients. Sometimes, judiciously choosing which

elementary row operations you apply, and the order in which you apply them, enables

you to avoid fractions.

REMARK

No matter which elementary row operations or order you use, the reduced row-echelon

form of a matrix is the same.



DISCOVERY

1. Without performing any row operations, explain why the system of linear equations below is consistent.

2x1 + 3x2 + 5x3 = 0
−5x1 + 6x2 − 17x3 = 0
7x1 − 4x2 + 3x3 = 0

2. The system below has more variables than equations. Why does it have an infinite number of solutions?

2x1 + 3x2 + 5x3 + 2x4 = 0
−5x1 + 6x2 − 17x3 − 3x4 = 0
7x1 − 4x2 + 3x3 + 13x4 = 0

The next example demonstrates how Gauss-Jordan elimination can be used to

solve a system with infinitely many solutions.

Example 8: A System with Infinitely Many Solutions

Solve the system of linear equations.

2x1 + 4x2 − 2x3 = 0
3x1 + 5x2 = 1

Solution

The augmented matrix for this system is

[ 2   4  −2   0 ]
[ 3   5   0   1 ].

Using a graphing utility, a software program, or Gauss-Jordan elimination, verify that

the reduced row-echelon form of the matrix is

[ 1   0   5    2 ]
[ 0   1  −3   −1 ].

The corresponding system of equations is

x1 + 5x3 = 2
x2 − 3x3 = −1.

Now, using the parameter t to represent x3, you have

x1 = 2 − 5t,

x2 = −1 + 3t,

x3 = t, t is any real number.

Note in Example 8 that the arbitrary parameter t represents the nonleading

variable x3. The variables x1 and x2 are written as functions of t.
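The parametric description can be spot-checked in Python (a sketch for illustration, not from the text): for any value of t, the point (2 − 5t, −1 + 3t, t) should satisfy both original equations of Example 8.

```python
def satisfies(t):
    """Check the point (2 - 5t, -1 + 3t, t) against both equations of Example 8."""
    x1, x2, x3 = 2 - 5 * t, -1 + 3 * t, t
    return (2 * x1 + 4 * x2 - 2 * x3 == 0) and (3 * x1 + 5 * x2 == 1)

print(all(satisfies(t) for t in [-2, -1, 0, 1, 7]))   # → True
```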

You have looked at two elimination methods for solving a system of linear

equations. Which is better? To some degree the answer depends on personal preference.

In real-life applications of linear algebra, systems of linear equations are usually

solved by computer. Most software uses a form of Gaussian elimination, with

special emphasis on ways to reduce rounding errors and minimize storage of data. The

examples and exercises in this text focus on the underlying concepts, so you should

know both elimination methods.
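To give a flavor of the kind of routine such software builds on, here is a minimal Gaussian-elimination solver with partial pivoting, written in Python. Swapping the largest available pivot into place is one standard safeguard against rounding error mentioned above; this is an illustrative sketch, not the algorithm of any particular package.

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    followed by back-substitution. Assumes A is square and invertible."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        # partial pivoting: bring the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        # eliminate the entries below the pivot
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            M[r] = [a - factor * c for a, c in zip(M[r], M[k])]
    # back-substitution from the last row upward
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

# the system from Example 7: x = 1, y = -1, z = 2
print(solve([[1, -2, 3], [-1, 3, 0], [2, -5, 5]], [9, -4, 17]))
# → [1.0, -1.0, 2.0]
```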



Homogeneous Systems of Linear Equations

Systems of linear equations in which each of the constant terms is zero are called

homogeneous. A homogeneous system of m equations in n variables has the form

REMARK

A homogeneous system of

three equations in the three

variables x1, x2, and x3 has the

trivial solution x1 = 0, x2 = 0,

and x3 = 0.

a11x1 + a12x2 + a13x3 + . . . + a1nxn = 0

a21x1 + a22x2 + a23x3 + . . . + a2nxn = 0

⋮

am1x1 + am2x2 + am3x3 + . . . + amnxn = 0.

A homogeneous system must have at least one solution. Specifically, if all variables in

a homogeneous system have the value zero, then each of the equations is satisfied. Such

a solution is trivial (or obvious).
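That the zero vector satisfies every homogeneous system is easy to verify mechanically; this small Python sketch (an illustration, not from the text) checks it for the coefficient matrix used in the next example:

```python
def is_solution(rows, x):
    """Check that x satisfies every equation a1*x1 + ... + an*xn = 0
    of a homogeneous system with the given coefficient rows."""
    return all(sum(a * v for a, v in zip(row, x)) == 0 for row in rows)

# coefficient matrix of the homogeneous system solved in the next example
coefficients = [[1, -1, 3],
                [2,  1, 3]]
print(is_solution(coefficients, [0, 0, 0]))   # → True: the trivial solution
```

A homogeneous system may of course have nontrivial solutions as well, as the next example shows.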

Solving a Homogeneous System of Linear Equations

Solve the system of linear equations.

x1 − x2 + 3x3 = 0
2x1 + x2 + 3x3 = 0

Solution

Applying Gauss-Jordan elimination to the augmented matrix

[ 1  −1   3   0 ]
[ 2   1   3   0 ]

yields the matrices shown below.

[ 1  −1   3   0 ]
[ 0   3  −3   0 ]   R2 + (−2)R1 → R2

[ 1  −1   3   0 ]
[ 0   1  −1   0 ]   (1/3)R2 → R2

[ 1   0   2   0 ]
[ 0   1  −1   0 ]   R1 + R2 → R1

The system of equations corresponding to this matrix is

x1 + 2x3 = 0
x2 − x3 = 0.

Using…