Lean Back Into the Future
Consider a tale of two commutes:
You pop out to your garage two minutes before it’s time to go and start your car to warm up the engine. You shovel in the last bites of breakfast, then grab your bag and phone, which you’ll spend the next 40 minutes fighting the urge to check. You pull out of your driveway and soon join the masses on the freeway. You see rows of brake lights ahead and hit a slowdown; your phone tells you an accident three miles away will set you back at least 17 minutes. Your blood pressure shoots sky-high as you think about that morning meeting you’ll now walk into late. Eventually you get downtown, drive past dozens of full (and expensive) parking lots, finally squeeze into a spot, and book it into the office.
Alternatively, two minutes before it’s time to go, you open an app and press a button. A buzz soon tells you that a car’s waiting out front for you. You leave via the front door: you converted your garage into another bedroom long ago. You climb into a driverless car, where you’ll spend the next 20 minutes enjoying that breakfast, getting a head start on work emails, or watching a replay of last night’s game on the built-in monitor. Traffic is smooth: since many people are sharing rides, there are fewer cars to clog the road. Others skip the road altogether, catching a self-flying (yep) taxi. Downtown you pass parks and new construction where vast asphalt lots used to be. Your ride drops you off at the front door of your office building, and you sail in calm and on time.
Dreamland? Today, maybe. But according to BYU experts who are part of the race toward an autonomous future, a driverless commute is closer than you might think. Self-driving cars, they say, will fundamentally change transportation and personal mobility.
From highway pile-ups and DUIs to noxious fumes and road rage, the way we get around now is problematic. But the autonomous-vehicle industry promises to disrupt all of that—and to do it quickly. Already we’ve seen tentative steps forward—from the Roomba vacuum cleaner to Waymo cars navigating the streets of Silicon Valley. But that’s nothing compared to what’s coming, says BYU electrical engineering professor Randal W. Beard. He predicts that within 15 years self-driving taxis will be zipping around every street in every town. As a result, more and more people will choose not to purchase a car.
It’s a shift that will open a host of opportunities within transportation, says Laith R. Sahawneh (PhD ’16). He’s an automated-driving researcher and engineer at Aptiv, which, in conjunction with Lyft, recently launched a fleet of self-driving BMWs in Las Vegas. If the projected use of self-driving vehicles holds, he says, we can expect increased road safety, less traffic congestion, reduced air pollution, more productive use of resources, and extensive transportation options for people whose mobility has traditionally been limited.
“There is no question that we are in the middle of a revolution in autonomous vehicles and intelligent machines in general,” says Beard. And BYU experts—professors and alumni—are deep in the fray.
Their timing couldn’t have been better. When Timothy W. McLain (BS ’86, MS ’87) joined BYU’s mechanical-engineering faculty in 1995, he’d been studying underwater robots during his PhD at Stanford. And Beard, who came to BYU the following year, had been exploring robotics and artificial intelligence while pursuing his PhD in New York.
Both came of age in automation just as a “perfect storm” of innovation came online—think GPS, embedded computing, and advanced battery technologies. “We were lucky,” says Beard.
The two began collaborating on robotics projects, and in 1997 they cofounded BYU’s Multiple Agent Intelligent Coordination and Control (MAGICC) Lab, where they’ve spent nearly two decades playing with unmanned aerial vehicles (UAVs). They’ve sent their planes and copters through campus hallways, down canyons, and into cramped tunnels—always developing more coordinated, more precise, more autonomous movements. All in a day’s work. Magic indeed.
With the hundred-plus students they’ve mentored, the duo has published in top-ranked journals on autonomous control of UAVs and received funding from the U.S. Air Force, NASA, the National Science Foundation (NSF), and the Defense Advanced Research Projects Agency. They helped develop the Kestrel autopilot system, now sold by Lockheed Martin. And McLain directs the NSF-sponsored Center for Unmanned Aircraft Systems.
Though McLain and Beard are focused on self-flying vehicles, their work overlaps plenty with earthbound vehicles. MAGICC Lab alumni are now leaders throughout the autonomous-vehicle industry at companies like Uber, Aptiv, and Near Earth Autonomy.
McLain says progress “has happened surprisingly fast,” and Beard anticipates that within five years some large international cities will feature limited self-flying-taxi service. Within 10 years advanced autonomy will likely be standard in many new cars, and some self-flying cars will become commercially available. Within 15 years, he predicts, car ownership will decrease as self-driving and self-flying taxis become common.
But first, Beard notes, there are critical questions that still need addressing. Questions of responsibility: “Who is responsible when there is a serious accident? The passenger? The owner of the vehicle? The manufacturer? The software engineer?” And ethics: “If the brakes go out in a self-driving car and there is a choice to either run into a barrier that will likely kill the driver or turn into pedestrian traffic, potentially killing many pedestrians, how should that choice be made?” And if the programming favors pedestrians over passengers, would you be willing to ride?
While McLain and Beard say such questions shouldn’t halt further testing and implementation, McLain believes there may be sense in tapping the brakes, if only to keep pace with the legislative and ethical discussions. For the public to fully trust the technology, he says, “the current approach to self-driving technology development may need to be reined in slightly to provide greater oversight and more gradual transition of technology.”
Even so, for two longtime roboticists, it’s hard not to be pumped as they look down the road ahead. “The future in this area is exciting,” says Beard.
As Sterling J. Anderson (BS ’07) prepared to begin his senior year at BYU, a distracted driver hit his 15-year-old brother, throwing him 30 feet into a ditch and breaking his neck. His brother survived, but the memory still motivates Anderson. “The accident left a lasting impression on me and directly influenced the course of my professional career.”
The course had been percolating for years. A mechanically minded boy, Anderson was tasked one summer by his grandfather with driving a tractor pulling two trailers loaded with hay up a long stretch of road in rural Utah. As the road steepened, the 11-year-old geared down and down again, trying to keep moving. Then, the stuff of nightmares: the tractor stalled and rolled backward, the trailers jackknifing. Anderson was uninjured, but his mental gears kept turning: even at 11, he wondered why the tractor couldn’t be smarter and handle the situation.
Fast forward a few decades, and Anderson is now at the forefront of the autonomous-car industry. After creating MIT’s Intelligent Co-Pilot and leading the Model X and Autopilot programs at Tesla, Anderson cofounded Aurora with the former leaders of Google’s and Uber’s respective self-driving programs. The company, which Wired called “America’s hottest self-driving startup,” has partnered with heavyweights Volkswagen and Hyundai to bring the technology to market at scale.
Driving a vehicle, says Anderson, is a “deceptively challenging problem.” There’s plenty that’s straightforward and simple: stop on red, go on green, stay between the lines, hang back x seconds from the car in front of you—child’s play for computers. But when you add “the cognitive processes that happen almost subconsciously for a human driver after years of learning the behaviors, actions, and future intent of others,” machines are at a distinct disadvantage.
To demonstrate, Anderson sets a scene: you approach an intersection and see a person standing on a corner. Based on years of observations of your fellow humans, you infer the person’s intent and her likely future direction and speed. If it’s a child, you’ll instinctively provide more room, knowing that kids are less predictable.
But intersections are usually much trickier, filled with pedestrians, vehicles, and bicyclists. “Not only do you have to perceive what’s happening around you,” says Anderson, “you have to infer intent, you have to project future motion, you have to understand how that future motion will be affected by the future motion of others, and you have to understand how your actions will change how that intersection evolves and how people move through it.”
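The loop Anderson describes—perceive, infer intent, predict motion, then act—can be illustrated with a deliberately toy sketch. Everything here is illustrative, not drawn from Aurora’s or anyone’s actual stack: the names (`Agent`, `should_yield`), the constant-velocity prediction, and the wider margin for children are all simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    x: float        # position along the road (meters)
    v: float        # speed (meters per second)
    is_child: bool  # kids are less predictable, so they get a wider margin

def predict_position(agent: Agent, horizon_s: float) -> float:
    """Naive constant-velocity prediction; real systems also model intent."""
    return agent.x + agent.v * horizon_s

def should_yield(ego: Agent, others: list[Agent],
                 horizon_s: float = 3.0, margin_m: float = 5.0) -> bool:
    """Yield if any agent's predicted position falls within our safety margin."""
    ego_future = predict_position(ego, horizon_s)
    for other in others:
        # Double the buffer around children, echoing the instinct described above.
        buffer = margin_m * (2.0 if other.is_child else 1.0)
        if abs(predict_position(other, horizon_s) - ego_future) < buffer:
            return True
    return False
```

A real planner replaces each of these one-liners with heavy machinery (sensor fusion for perception, learned models for intent, joint prediction of interacting agents), but the shape of the problem is the same.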
These are just the sorts of problems that BYU computer-science professor David Wingate (BS ’02, MS ’04) and his students wrestle with in BYU’s Perception, Control, and Cognition Lab. Wingate, a winner of the NSF’s top award for junior faculty, is working on a wide array of projects to help develop artificial intelligence that can perform complex tasks like a human.
Why is it so hard to teach a computer to think like a human? There is just so much to know, says Wingate. “We [humans] know that up is the opposite of down and the sun rises in the morning and red is a color and a fish is an animal. We know that if a door is closed, we need to open it before we go through it, and if we can’t open it, we know it’s locked and we need a key to unlock it—unless it’s a passcode,” says Wingate. “This sort of common-sense reasoning about the world, all of that has to be folded into the car algorithms.”
With scores of cities worldwide already hosting automated-vehicle tests, engineers and computer scientists are gathering mountains of data to feed to cars’ computers, preparing them for every potential roadside scenario.
“The biggest unsolved problem is the nature of intelligence,” says Wingate. “And if we can make progress on that, it’ll make the world a better place.”
Toward Safer Streets
“For me, it was a revelation,” remembers D. Blake Barber (BS ’05, MS ’07). On a clear spring afternoon in 2003, the BYU sophomore stood in Rock Canyon Park, Squaw Peak looming to the east and Utah Valley sprawling out to the west. Another student pulled out a plane—a 4-foot-wide flying wing with not much more than a battery, motor, and propeller strapped on—and chucked it into the air. “It wobbled a bit, then stabilized itself and gracefully climbed into the sky, flying circle after circle all with no human intervention,” says Barber.
Barber’s mechanical-engineering professor, McLain, had invited him to join a few students in testing a new autopilot for UAVs. Hooked, Barber started attending McLain’s lab meetings the next week. In his remaining years at BYU, Barber worked alongside other students to teach aircraft to navigate through canyons, follow ground objects, and avoid obstacles (including “an exciting near miss of the Kimball Tower”).
Now a roboticist at Uber’s Advanced Technology Group, Barber is still having fun, still in awe of the technology, still teaching his vehicles to avoid collisions. “Of all of [the] positive benefits [of self-driving vehicles],” he says, “most exciting are the safety benefits.”
Globally, it’s estimated that as many as 50 million people are injured annually in traffic accidents; another 1.3 million die. That’s the equivalent of seven fully loaded 747s going down every single day, Barber notes. “We would obviously never accept this outcome when it comes to air transportation,” he says, “and we shouldn’t have to accept it with respect to ground transportation either.”
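Barber’s comparison holds up to rough arithmetic. The seat count is an assumption (a full 747 carries roughly 400–500 passengers, depending on configuration):

```python
deaths_per_year = 1_300_000
deaths_per_day = deaths_per_year / 365           # about 3,560 deaths every day
seats_per_747 = 500                              # rough full-load capacity
planes_per_day = deaths_per_day / seats_per_747  # about 7 planes' worth
```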
Developers in the industry, Barber says, “are fiercely dedicated to delivering on the incredible safety benefits this technology promises.” They imagine a day when accidents attributed to human error—94 percent of all accidents today—become a distant memory. Without human error, how much property loss could be avoided? How many injuries would never happen? How many lives could be saved?
Barber and the other BYU experts acknowledge that self-driving vehicles won’t mean the end of accidents or fatalities. In fact, there have already been four fatalities involving self-driving cars, including one in March in which a pedestrian was hit by a self-driving Uber. Following that accident, Uber put its testing program on hold and is now working with the National Transportation Safety Board and reviewing its own internal safety processes.
“When each death is a tragedy rather than a statistic,” says Barber, “then each of these events gets the attention it deserves in terms of making sure that it never happens again. When we have the ability to systematically review every safety-related event and to systematically put improvements in place to eliminate these events, the safety of our roadways can improve at an incredible pace.”
In a recent report, the RAND Corporation, a nonprofit think tank, argues that autonomous vehicles should have to be only moderately better—by around 10 percent—than human drivers before being widely used on U.S. roadways. “If we wait until these vehicles are nearly perfect, our research suggests the cost will be many thousands of needless vehicle-crash deaths caused by human mistakes,” says Nidhi Kalra, a study coauthor. “It’s the very definition of perfect being the enemy of good.”
Wingate compares fears over self-driving cars to worries about air bags decades ago: adding air bags into cars significantly reduced traffic fatalities, but air bags have also caused some deaths. And those deaths, though far lower in number than the lives saved, have still caused outcry. “Self-driving cars are a bit the same right now,” he says. “It’s true there have been accidents. And of course we hope they wouldn’t happen. But there’s the promise that ultimately it will be safer and it will be more efficient and it will be a benefit to society.”
The question of safety became even more personal for Barber when he and his wife welcomed twins last year. He’s hoping they’ll someday take to the roads in an environment far safer than the one we have now. Ultimately, he says, “we have a real opportunity to make an immense difference here.”
Automation ranges from level 0 (driver does it all) to level 5 (car does it all). Most automated cars today are at a 3: the car does most things but notifies the driver when help is needed. Next up is level 4: the vehicle can do it all under most conditions, though the driver can still take over.
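The 0-to-5 scale described above follows the SAE J3016 driving-automation levels. A minimal sketch of that taxonomy (the comments paraphrase the standard; the `driver_required` helper is an illustrative simplification, not part of the spec):

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased."""
    NO_AUTOMATION = 0      # driver does it all
    DRIVER_ASSISTANCE = 1  # a single assist feature, e.g. adaptive cruise
    PARTIAL = 2            # steering and speed assist; driver must monitor
    CONDITIONAL = 3        # car drives, but hands control back when needed
    HIGH = 4               # car does it all under most conditions
    FULL = 5               # car does it all, everywhere

def driver_required(level: AutomationLevel) -> bool:
    """At level 3 and below, a human must be ready to take over."""
    return level <= AutomationLevel.CONDITIONAL
```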
Not being in control of something we’ve been used to controlling is foreign and uncomfortable to most of us, says Beard. “Even if someone knows intellectually that they are safer in a self-driving car, they may have a negative emotional reaction to riding in a car without a driver,” he notes. “It is hard to reason with those negative emotions. It will . . . take some time and experience for people to feel comfortable with this technology.”
As the technology progresses and trust increases, the BYU experts anticipate a day when transportation as we know it is disrupted, for the benefit of both communities and individuals. “Even if we can save only one life, prevent only one injury, provide access to only one person, or return only one city to the people, our work will have been worth it,” says Anderson.
Wingate calls a self-driving future “a going-to-the-moon thing. It’s a goal, a dream of a way we can make the world a better place through technology. And if we all work together, we can realize that. It’s inspiring, it’s aspirational. Transportation is such a fundamental part of our society: to think that we could redefine it in a better way, it’s mind-boggling and exhilarating.”
Will my autonomous vehicle be hack-proof?
Blake Barber: “Early on [at Uber] we brought in leading experts in the field to put in place comprehensive and multitiered security measures encompassing hardware, software, and networking stacks. I believe self-driving vehicles will actually be among the most secure vehicles on the road.”
Will we still be able to drive for pleasure?
Sterling Anderson: “As I’m sure an early automobile developer must once have explained to a horse enthusiast, yes. . . . Driving through gridlock is not fun; racing at a track is. The latter will remain.”
Who’s to blame if there’s an accident?
Randal Beard: “It’s a great question that does not currently have a good answer. Do you blame the car owner, the person sitting in the car, the car manufacturer, the designers, etc.? The answers will probably be decided in the courts over time.”
Sterling Anderson: “Self-driving vehicles provide a unique degree of visibility into what happened in an accident, allowing us to objectively recreate the circumstances as they happened. I expect that [autonomous vehicles] will ultimately be underwritten by insurers who will pass the premium on in the form of usage fees baked into the cost of your hailed ride.”
Will an autonomous-vehicle economy save me money?
Blake Barber: “Self-driving vehicles will lead to greatly improved vehicle utilization and allow us to reclaim wasted resources and put them to far more productive uses. This will . . . make both individuals and communities meaningfully richer.”
Randal Beard: “It will probably save many people the expense of car ownership. Car ownership will become a luxury as opposed to a necessity.”
What remains to be done for these cars to be roadworthy everywhere?
Blake Barber: “There are so many challenges remaining before these vehicles will be able to operate anywhere and in any conditions. . . . [They] will first deploy in relatively simple operational areas, operating only in a very specific mapped region, perhaps only during nice weather or even only during certain hours. . . . We’ll then see these operational areas continuously expand as we gain experience and grow in capability.”
Feedback: Send comments on this article to email@example.com.