As a chief computer architect at Hewlett-Packard in the 1980s, Ruby Lee was a leader in changing the way computers are built, simplifying their core instructions so they could do more.
And she revolutionized the way computers use multimedia. If you’ve watched a video or streamed music on your computer or smart phone, Ruby had a lot to do with making that possible. In more recent years here at Princeton, her research has focused on security in computer architecture without sacrificing performance, which is what we’ll talk about today. And she’ll discuss why, even though it’s possible to build more secure devices, the marketplace doesn’t demand it. Ruby Lee is the Forest G. Hamrick Professor in Engineering, and Professor of Electrical and Computer Engineering.
Links:
“‘Awesome professor’ and ‘dream job’ in industry were pivotal for computer architect,” Princeton Engineering news, May 31, 2021.
“Smartphone sensors can identify people by their behavior,” Princeton Engineering news, September 7, 2018.
Transcript:
Aaron Nathans:
From the Princeton University School of Engineering and Applied Science, this is Cookies, a podcast about technology, privacy and security. I’m Aaron Nathans. On this podcast, we’ll discuss how technology has transformed our lives, from the way we connect with each other, to the way we shop, work and consume entertainment. We’ll discuss some of the hidden trade-offs we make as we take advantage of these new tools. Cookies, as you know, can be a tasty snack, but they can also be something that takes your data. On today’s episode, we’ll talk with Ruby Lee.
Ruby is the Forest G. Hamrick professor in engineering and a professor of electrical and computer engineering here at Princeton. As a chief computer architect at Hewlett-Packard in the 1980s, she was a leader in changing the way computers are built, simplifying their core instructions so they could do more. She revolutionized the way computers use multimedia. If you’ve watched a video or streamed music on your computer or smartphone, Ruby had a lot to do with making that possible. In more recent years here at Princeton, her research has focused on security in computer architecture without sacrificing performance, which is what we’ll talk about today. For all her pioneering work, last year, she was inducted into the American Academy of Arts and Sciences. Let’s get started. Ruby, welcome to the podcast.
Ruby Lee:
Thank you for inviting me, Aaron.
Aaron Nathans:
All right, well, so you once said that computer designers should work to build secure trustworthy computers without sacrificing performance. Is this really possible and has security kept up with performance?
Ruby Lee:
So is it possible to design secure and trustworthy computers without sacrificing performance? Indeed, this is not obvious since security requires a lot of monitoring and checking and that likely would degrade performance. In fact, conventional wisdom tells us that if you want security, you have to sacrifice performance. And if you want performance, you will have to sacrifice security. So if we look at past history, whenever the computer industry tried to produce a secure computer without considering performance, these computers did not sell well. Customers tended to choose computers that had higher performance over computers that might be more secure.
Aaron Nathans:
Why do you think that is?
Ruby Lee:
So I think customers are attracted by performance, and security is something that they did not think about for a very long time. And security is something where, when it works, you don’t see anything exciting happen; something exciting happens only when security does not work and something goes bad. So customers are more used to things that they can see and feel, whereas security is something you notice only in its absence, when something bad happens. So I think it’s very important for computer architects to consider security features without degrading performance.
Actually the grand challenge which I give to my students, my Ph.D. students, is in fact to improve both security and performance at the same time. So this seems like really a stretch goal, but we have actually shown that it is possible: if you consider novel computer architecture design techniques, you can indeed improve both security and performance at the same time. However, a lot more work needs to be done before this kind of computer will make it to the market.
So your second question: has security kept up with performance? The short answer is no, because the customer keeps demanding more performance, and so performance is continuously being improved. While security in our computers has improved, it has not improved at the same rate. And furthermore, some security features that have been introduced have not been used by customers, or have not been properly used.
Aaron Nathans:
Do consumers have options in terms of what kind of levels of security we can purchase in our electronic devices?
Ruby Lee:
Today consumers do not really have options to purchase a more secure version of a computer, unlike with performance, where they can buy a computer model with higher performance or lower performance. Consumers today get whatever default security is implemented in the computer.
Aaron Nathans:
So is it the hardware or the software that makes a computer secure?
Ruby Lee:
Actually you need security in both the hardware and the software. So there are many layers of software and many layers of hardware, and you actually need security in all the software layers and all the hardware layers. So security is like a top-to-bottom property: each layer of software and hardware has to be secure, and so do the interfaces between layers. Now, when you purchase a computer, you’re usually just purchasing the hardware and the operating system, which is the first layer of software installed on the hardware. So that’s what you purchase. And today you don’t have options to purchase a more secure or less secure hardware option for the same computer model. But different vendors can supply more secure computers, and if customers demand it, then more and more computer vendors will supply more security features as required by the customers.
Aaron Nathans:
Well, over the fall, you did something really interesting. You teach a class on computer and smartphone architecture, and with your students being taught remotely, you managed to get funding so every student would be issued a Samsung smartphone. You then gave the students a hands-on look at the phone’s various features such as GPS and video and sensors and how they could be made more secure. Can you speak a little bit about what you taught your students?
Ruby Lee:
Yes, indeed. The smartphone is a very remarkable little computer. It combines a full-function computer with a phone for communications. It is also an entertainment device. It has one or more high-quality cameras. It’s a storage device, and all in a very small form factor. However, the architecture of a smartphone is not well understood. So what I wanted to teach the students was: what is the architecture of a smartphone? I wanted them to be able to look under the hood of a smartphone and see what’s there. So I taught them what the subsystems inside the hardware of a smartphone are.
Aaron Nathans:
Did you literally open one up?
Ruby Lee:
We opened one up virtually, so we could look inside with a video camera. We didn’t encourage them to open the smartphones they were given; this is a bit dangerous. So I wanted the students to understand that there is a multiprocessor computer inside the smartphone. The smartphone may have eight computer processors in it. Then in addition, it has a lot of networks integrated, not just one or two. So it has the wifi, it has the cellular telephony network, it has Bluetooth for nearby devices, it has the global positioning system or GPS for location determination. It has NFC, or near-field communications, for the smartphone acting like a smart card or credit card. And then it has the multimedia subsystem, which is audio and video and fancy cameras. And it has a display that’s not only an output device but an input device as well, because it’s a touch screen. It has a storage system, and it has what’s known as a sensor subsystem.
So I call this the sensor subsystem, which is probably one of the most unusual features of a smartphone compared to a typical personal computer. The smartphone has maybe 10 or more sensors embedded in it that are useful for characterizing both a user and his environment. So for example, I wanted the students to learn about this sensor subsystem and two of its most common sensors, the accelerometer and the gyroscope. These can measure how a user walks or how he holds the smartphone when he’s talking on the phone or surfing the web or doing some other thing on the smartphone.
And so we had the students take these Samsung smartphones and collect these sensor measurements while they were doing different activities. Then they could see how the sensor measurements change, whether they’re walking or running or surfing the web or sitting still or whatever. And then they could chart these measurements versus time and get an understanding of what these sensor measurements do. And later on in the semester, for the term project, if they chose to do this project, they could use some deep learning software that we provided and analyze these sensor measurements to see how different they are from those of the other students who collected the same sensor measurements.
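The classroom exercise described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the course’s actual software; the sample readings and the variance-over-a-window heuristic are made up for the example.

```python
import math

def magnitude(ax, ay, az):
    """Combine the accelerometer's X, Y, and Z axes into one overall value."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def window_variance(samples, start, size):
    """Variance of the magnitude over a window of samples: it is higher
    while walking or running, and near zero while sitting still."""
    window = [magnitude(*s) for s in samples[start:start + size]]
    mean = sum(window) / len(window)
    return sum((m - mean) ** 2 for m in window) / len(window)

# Hypothetical readings in m/s^2: first sitting still (gravity only, on the
# Z axis), then walking (the magnitude oscillates with each stride).
sitting = [(0.0, 0.0, 9.81)] * 4
walking = [(0.5, 0.2, 9.0), (1.5, -0.3, 10.5), (-0.8, 0.4, 9.2), (2.0, 0.1, 11.0)]
samples = sitting + walking

print(f"sitting variance: {window_variance(samples, 0, 4):.3f}")
print(f"walking variance: {window_variance(samples, 4, 4):.3f}")
```

Charting the per-window variance against time gives exactly the kind of activity plot the students produced: flat while still, spiky while moving.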
So this was actually quite interesting, because it turns out that you can quite clearly distinguish one person from another just by his sensor measurements, like using the accelerometer and gyroscope sensors. So I wanted the students to understand smartphone architecture and experiment with its hardware subsystems, especially the sensor subsystem. I think they generally had fun doing that. I also wanted them to think about how you could improve the security of the smartphone using some of these new features. So for example, we can determine whether or not the real user is using the smartphone, just by these sensor measurements. That’s actually a research project that my Ph.D. students and I had previously done.
So, like I said, the smartphone actually has quite a few interesting features that can be used to improve security, but they also provide extra security and privacy risks, so it’s always two sides of the same coin and students need to understand this.
Aaron Nathans:
Well on that note, so your research team did some work on how sensors in a smartphone can be used to detect and fight back against imposters or people who would try and commandeer a person’s phone to steal all kinds of their personal information, including access to their money. Can you tell us how you and the team did that?
Ruby Lee:
Yes. So, as I was mentioning, there are these sensors on the smartphone that can be used to measure a user’s motion. So for example, the accelerometer can measure the large motions, like how a person walks or raises the arm and so forth.
Aaron Nathans:
Now, Ruby, when you say a sensor, is that something that we could see from the outside? I’m holding my smartphone right now, where would the sensor be on the phone?
Ruby Lee:
It’s embedded inside, you can’t see it. Because all you can see is the display and the camera, right?
Aaron Nathans:
Yes.
Ruby Lee:
Yeah. So it’s embedded inside every smartphone. Every smartphone has an accelerometer and a gyroscope. Okay?
Aaron Nathans:
So it’s not like a camera? It’s not something that’s peering out, it’s something-
Ruby Lee:
It’s not like the camera, you can’t see it. You don’t know it’s there. But for example, in your health apps, when you see how many steps you’ve walked in a day, that’s using your accelerometer. So it’s in every smartphone, and it’s collecting sensor data all the time. So your accelerometer has X, Y, and Z axes that collect sensor data on how you walk, and how many steps you take and so forth can be calculated. And the gyroscope deals with your fine motor actions, like how you hold the smartphone and rotate your wrist and so forth as you are typing into the smartphone or talking on the phone.
So what we tried to show was that these common sensor measurements can be used to characterize a user. So if we learn the normal patterns of the legitimate user of the smartphone, then if someone other than the legitimate user uses the smartphone, we would be able to detect this. Now, what this means is that even if someone happens to know your password or PIN and has somehow gotten into your smartphone, instead of being able to access all your data in your smartphone and even beyond the smartphone, the smartphone itself, by this mechanism, would detect that this doesn’t look like the normal user, because the sensor measurements don’t line up.
Aaron Nathans:
Literally because they’re walking differently?
Ruby Lee:
Yes. Because there are differences in the way they walk and the way their hands move when they use the smartphone. So we call these behavioral biometrics, as opposed to physiological biometrics like your fingerprint or your face features or the iris in your eye and so forth, which are the normal biometrics that we now call physiological biometrics. Now, behavioral biometrics, like your gait and so forth, are as much a part of you as your physiological biometrics. So if we have this kind of implicit imposter detection, then we can detect all the time whether someone like an imposter is using your smartphone. That would provide a lot of additional security. Because it’s not the smartphone that is malicious, it’s the person using the smartphone that can be malicious. That is the attacker. So that’s what we did.
And you can’t do this just by looking at the sensor data directly. What we did is, we trained a deep learning algorithm to detect these differences. So with these artificial intelligence mechanisms, like deep learning, we were able to detect with very high accuracy, over 98 percent accuracy, whether this is the real legitimate user or an imposter using the smartphone. In addition, we showed that if the user is worried about his sensor data getting lost or hacked by someone, he or she may not want the sensor data to leave the smartphone.
So for that, we developed different kinds of deep learning algorithms that keep all the data in the smartphone and detect whether it is the real user, or something anomalous compared to the real user, which would then be classified as an imposter. This gets slightly lower accuracy in detection, but still very good accuracy, somewhere in the high eighties to nineties percent. So this would be a huge benefit to security, since one of the most problematic scenarios is that the password gets breached and someone else, the attacker, uses the smartphone, or somehow, after the legitimate user has entered his password, he gets knocked out and the attacker takes the smartphone and uses it. So this would prevent all those kinds of things.
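The on-device detection idea described above can be sketched with a much simpler stand-in for the deep learning models: learn a profile of the legitimate user’s sensor-derived features during enrollment, then flag any session whose features sit too far from that profile. The feature values and the distance threshold here are invented for illustration.

```python
def profile(sessions):
    """Average each feature across the enrollment sessions to form the
    legitimate user's 'normal' profile."""
    n = len(sessions)
    dims = len(sessions[0])
    return [sum(s[i] for s in sessions) / n for i in range(dims)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def is_imposter(session, user_profile, threshold):
    """Flag a session whose features are too far from the user's profile."""
    return distance(session, user_profile) > threshold

# Hypothetical features per session, e.g. (stride interval in seconds,
# wrist-rotation energy). All of this data stays on the device.
owner_sessions = [[1.0, 0.30], [1.1, 0.28], [0.9, 0.33]]
p = profile(owner_sessions)

print(is_imposter([1.05, 0.31], p, threshold=0.5))  # owner-like gait
print(is_imposter([1.90, 0.80], p, threshold=0.5))  # very different gait
```

The real systems use trained anomaly-detection models rather than a fixed threshold, but the structure is the same: nothing leaves the phone, and detection runs continuously in the background.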
In addition, I would remark that this method of implicitly detecting smartphone imposters is also an example of improving security without degrading performance, because the user can continue using the smartphone as before. His applications will run like normal; the sensor data is automatically collected all the time in the smartphone anyway. And its use is done in the background, with these deep learning algorithms detecting the imposter, and the amount of computation needed for that is not very great at all. So you can have the user using the smartphone and the detection going on in the background, without really affecting the convenience of the user or the performance of the machine.
Aaron Nathans:
So what kind of security is embedded in your typical smartphone and other than what you’ve just discussed, where are the gaps that need to be addressed?
Ruby Lee:
Yes, that’s a good question. Actually, one of the subsystems that I define for a smartphone architecture is the security subsystem. In fact, smartphones often have better security than your conventional personal computers. So there’s not only hardware built in like a special security processor with its own memory and so forth, but also of course things you notice like the fingerprint recognition and the face recognition features, but also together with the operating system, smartphones have built up quite a bit of security. So let’s take the iPhone as an example, okay? So in the iPhone, it boots up securely so it has good system security, operating system security. It makes sure that the operating system that boots up when you power on your smartphone is a good one and has not been tampered with.
And then it encrypts your data. So that provides confidentiality of data for the consumers. And then the Apple marketplace will vet all applications before putting them on the marketplace to make sure they don’t have well known vulnerabilities or embed malware in them. They also implement all these network security protocols that have been defined so far. All applications have to be signed and verified right before they’re installed. Then there are things like secure payment protocols, which are also defined using the NFC, Near Field Communications, to check your credit card and so forth for payments. And there are even features like the wipe feature. So if your smartphone is lost or stolen, you could remotely wipe out all the data on the smartphone.
In addition, there are privacy controls for the user, such as whether a particular application is allowed to use your GPS to determine your accurate location, and whether it can use your camera, for example. So these are known to be privacy-sensitive mechanisms on the smartphone. But of course, there are still a lot of gaps in these security mechanisms on the smartphone. So the first example was this logging in once and getting access to everything business that we discussed earlier, where our example of implicit and continuous detection of imposters, or in other words implicit and continuous verification that it is the legitimate user using the smartphone, can go a long way toward improving this security.
Another example is what are called transient attacks. These are attacks that happen during the runtime of the application. They happen silently and quickly during runtime, and they don’t leave any forensic evidence, so that later on you couldn’t even really tell if the attack happened or not. Now, this is a huge security gap. A lot of these attacks target the hardware today, and that is a huge security gap that remains an unsolved problem.
Aaron Nathans:
What is the biggest mistake ordinary consumers make when it comes to securing their computers?
Ruby Lee:
Very good question.
Aaron Nathans:
We can throw into digital devices too.
Ruby Lee:
Okay, so typically one would say not putting in a strong password, or even not putting in a PIN and all that, but I would say the biggest mistake consumers make is not demanding more security from the hardware and systems software vendors. If consumers demanded more security, the hardware and software vendors would surely provide it, okay? If consumers are willing to pay a little bit more for a computer or a smartphone that has better security features, you can be sure that they will be provided. If the consumer would often choose a computer or a smartphone that has better security over another one that perhaps might have better gee-whiz features, like faster gaming performance, et cetera, then you could be sure that the vendors would provide more secure computers and smartphones.
So I would say the biggest mistake is consumers not being aware of the dangers of security breaches and not demanding that the smart phones and computers have these security protections.
Aaron Nathans:
How would they do that? I mean, if they don’t have the option of walking with their feet and buying something else, how do they make their voices heard?
Ruby Lee:
They just have to ask for it. And the people in corporations that buy computers in large quantities must demand it. And the vendors must show it, must be able to demonstrate what security their machines provide. The government can request that no purchases be made without these features.
Aaron Nathans:
You’re listening to Cookies, a podcast about technology, security and privacy. We’re speaking with Ruby Lee. Ruby is the Forest G. Hamrick professor in engineering and a professor of electrical and computer engineering here at Princeton. On next week’s episode, we’ll talk with Orestis Papakyriakopoulos and Arwa Michelle Mboya. Orestis is a postdoctoral research associate at Princeton’s Center for Information Technology Policy. Arwa is a research assistant at the MIT Media Lab. They’ll discuss why the Google search engine tends to perpetuate some tired old stereotypes and what we can do about it.
Aaron Nathans:
It’s the 100th anniversary of Princeton’s School of Engineering and Applied Science. To celebrate, we’re providing 100 facts about our past, our present and our future, including some quiz questions to test your knowledge about the people, places and discoveries that have made us who we are. Join the conversation by following us on Instagram at EPrinceton. That’s the letter E, Princeton. But for now, back to our conversation with Ruby Lee.
Aaron Nathans:
There’s been quite a bit of press about attacks on computer hardware in the last couple of years. Spectre and Meltdown come to mind. Your team recently did some work on how to try and keep up with how these attacks evolve and how to try to fight them. Can you speak to that?
Ruby Lee:
Yes indeed. These Spectre and Meltdown attacks are very serious and damaging attacks. So for a little bit of context: for decades now, attackers have been attacking the software and attacking the networks. These are relatively lower-hanging fruit compared to the hardware, which is significantly more difficult to attack. However, there have been more and more protections for software and networks, and attackers are now turning to attacking the hardware very seriously. So hardware is, of course, the foundation of all computers, and attacks on hardware are not only very hard to detect, but they are very hard to fix, especially very hard to fix without significantly degrading the performance.
So what are these Spectre and Meltdown attacks? Well, the attackers are very perverse. They are now attacking the performance optimization features that the hardware provides to make the computer a higher-performing machine. Okay? So the hardware performance optimization feature that they are now attacking is called speculative execution. This is a means of speeding up the computation by looking ahead and predicting what will happen, which is all done in the hardware, in the processor. The attackers use this feature in a very unusual and unexpected way, not only to access secrets that the application was not supposed to access, but also to leak out these secrets to the outside world.
So this is of course very dangerous, because they can leak out any secrets. An earlier class of attacks were just plain side-channel attacks, which leaked out the cryptographic key of encryption algorithms, but now the speculative execution attacks can leak out anything in the memory. So this is much more dangerous and more general. So what happened was, there was a hue and cry in the industry as to what to do to fix this kind of serious security breach. A lot of software solutions were proposed, some of which were essentially pretty draconian and just turned off these performance optimization features. But that of course caused a lot of performance degradation for some companies.
So rumor has it that, for example, Netflix was degraded by eight times: it was eight times slower to stream your movie if one of these software solutions was implemented. So not only was this sad and resulted in a lot of lawsuits and so forth, but in subsequent months, more and more of these kinds of attacks showed up. So since January 2018, about 20 or more such similar speculative execution attacks have emerged, each one different, and each one such that the previously proposed solution could not defend against the new attack. So we thought this was not the proper way to deal with new attacks, and being computer architects and hardware people, we wanted a better solution that was more general.
So we didn’t want a new countermeasure for each new attack. We wanted to find out what the root causes of this big class of attacks were. So what we did was, we analyzed all the attacks and we identified the minimum number of critical steps that each attack must go through in order to be a successful attack. And we designed a new attack graph that would cover all the attacks. So this shed light on what some of the pain points and critical steps in the attacks were. And what this says is: if you don’t want one of these attacks to succeed, even a new attack of this kind, you can prevent one of these critical attack steps from happening. And if you do that, then the attack will not succeed.
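The logic of the attack-graph result described above can be sketched as follows. The step names are illustrative, not the team’s actual taxonomy; the point is that an attack must complete every critical step, so a defense that blocks any one step defeats every attack variant that passes through it.

```python
# Critical steps that a speculative-execution attack of this kind must
# complete, in order. (Names are hypothetical, for illustration only.)
SPECTRE_LIKE_STEPS = [
    "mistrain_predictor",     # steer speculation down the wrong path
    "speculative_access",     # transiently read the secret
    "covert_channel_send",    # encode the secret in microarchitectural state
    "covert_channel_receive", # recover the secret from that state
]

def attack_succeeds(required_steps, blocked_steps):
    """The attack succeeds only if none of its required steps is blocked."""
    return not any(step in blocked_steps for step in required_steps)

# With no defenses deployed, the attack goes through.
print(attack_succeeds(SPECTRE_LIKE_STEPS, set()))
# A defense strategy that blocks just one critical step stops the whole class.
print(attack_succeeds(SPECTRE_LIKE_STEPS, {"covert_channel_send"}))
```

This is why classifying defenses by which critical step they prevent covers new attack variants too: any variant still has to pass through one of the same chokepoints.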
So this then resulted in defining a set of defense strategies that could be used to stop this whole class of attacks. So then we looked at all the many solutions that had been proposed in the computer architecture research community by hardware designers, and we saw that every one of the specific solutions they proposed could be classified under one of our defense strategies. So we think this is a very general understanding of this class of attacks, as well as of how to defend against them.
Aaron Nathans:
Why do you think cyber attackers have the upper hand against consumers a lot of the time? Even savvy consumers with every tool at their disposal?
Ruby Lee:
The issue with cyber attackers is that they only need to find one attack path into the computer, one set of vulnerabilities by which they could get inside the computer and do something bad. Whereas defenders in general have to defend on all fronts. This is quite a bit harder, because you don’t know where the attacker will strike; it is necessary to defend on all fronts, and of course it’s quite expensive and time consuming to check all fronts all the time. That’s why attackers have the upper hand.
In addition, today our computers and smartphones have only a few major hardware and operating system vendors. So when an attacker finds a path into the system, he or she can use that attack path on many systems, since they all tend to have the same hardware and the same operating system software: the hardware from Intel or AMD or Arm, and the software from Microsoft or Apple or Google.
Aaron Nathans:
So you once said that improving cybersecurity is a little like improving the environment. Although no one single entity is ultimately responsible, everyone should be aware of the consequences and do their part. Whose responsibility is cybersecurity? If not one entity, then what responsibility falls upon individuals, corporations, the government, academia, or others?
Ruby Lee:
Yeah, this is a very good question. Basically, everyone is responsible. So we believe that consumers should be protected, even if they’re not very savvy about cybersecurity. And so the first responsibility, we feel, belongs to the companies that supply the hardware, the operating systems and the networking services. So these hardware vendors, software vendors, especially system software vendors, and internet service providers should provide the best security that they can. And furthermore, they should try to check that their security systems work with the other vendors’ systems, so the software people’s systems work with the hardware people’s systems and with the networking people’s security systems. Because security is not only a top-to-bottom feature, but it’s also an end-to-end feature. So you need the basic computing and networking and software infrastructure to be secure.
Well then after that come the application programs, which run on top of the operating systems. So all these third-party application developers and software writers should try to write secure programs. Now, of course, not all of them are very security savvy. So companies like Apple and Google and Microsoft should provide good software development environments so that the best security practices can be easily integrated into the software applications.
Then we have the marketplace people, like the Apple marketplace or the Android marketplace, and these marketplaces where applications can be bought and downloaded from should vet all the applications and make sure that they don’t have malware or well-known vulnerabilities in them before they allow these applications to be installed on smartphones and other devices. But this comes at quite a cost when you want to experiment and install your own software on the smartphone. We, in our class, couldn’t use the iPhone because of its closed marketplace and the difficulty of installing your own software.
So we had to use Android smartphones, which allow an open installation mechanism. So there is a price for this kind of vetting. And then, what about the government? Well, basically one hopes that the open market will solve the security problem, in the sense that if consumers demand security, manufacturers will produce more secure computers. But if this kind of approach does not work, then government may have to come in to establish policies or laws. An example may be the use of seat belts and car seats for children: you had to have a law that says manufacturers must provide seat belts that hold car seats and so forth, and consumers driving cars with young children must put their children in car seats if they’re under a certain weight or a certain age. So that’s where government could come in.
And then with all these large corporations, including government, military and business enterprises: when they buy computers, they should implement best practices, buy secure computers, and also implement the security correctly. And then there’s this challenge of providing better security, because as we provide better security, the attackers get smarter and learn to bypass our new security mechanisms. So it’s a continual cat-and-mouse game, and you need the researchers in academia and in the research institutions to come up with better security mechanisms, mechanisms, as I have said, that would improve security and improve performance at the same time. These are huge research challenges.
And then finally the average consumer also has some power to request and buy only those computers that have the necessary security features rather than going only for better performance and better whizzy features. So I think in that sense, everybody has a role to play in improving cyber security, just as everybody has a role to play in improving the environment.
Aaron Nathans:
So just a note before we go, you were the second woman to achieve tenure at the School of Engineering and Applied Science back in 1998. You are also the first woman at our school to have been given an endowed chair. What was that experience like, entering a largely male institution and how have things changed both here and in the field of engineering in general?
Ruby Lee:
Okay, that’s an interesting question, Aaron. So I think things have improved, and there are now more women faculty and women students in engineering. But the rate of improvement is not as good as one would like. So there are definitely more undergraduate women engineering students here at Princeton, so you will see at least a few women in each engineering class rather than all men, but the pipeline is still very leaky.
So there are far fewer graduate engineering students who are women, fewer still women assistant professors, and fewer yet women associate or full professors who have been tenured as engineering faculty. So this is true all over the country, and it’s improving, but there’s still a long way to go.
Aaron Nathans:
What made you endure, what made you persist?
Ruby Lee:
I actually just never thought about these matters very much. I basically always did what I wanted to do. And in Silicon Valley, at Hewlett-Packard, where there was a very good cultural environment, I didn’t really feel any bias against me as a woman engineer or, in fact, as a leader of many groups. I actually felt it more when I came to the East Coast and back into academia. So I think there are a couple of things we could do. One is being done these days, which is to cut out both the intentional and unintentional biases against women in engineering. This can be done in a positive way by emphasizing the benefits of diversity to solutions, et cetera. If you get diverse people to look at a problem, they come up with better solutions.
And a second thing I might suggest is that we should look at those professions that have done very well in increasing the numbers of women. For example, the medical profession: a few decades ago, there may have been very few women doctors, but today there are a lot of women doctors, and they’re very successful, and a lot of patients particularly like women doctors. So I think we can learn from them and see what they did that was good. So I think there’s a lot of hope for optimism in this area. Women who want to do engineering should just go into it and not feel constrained in any way. They should know that they would be doing a lot to help improve society and humanity, especially if they go into an area like cybersecurity.
Aaron Nathans:
Well, I want to thank you. This has been very interesting and quite enlightening.
Ruby Lee:
Great. Thank you, Aaron. It’s been fun.
Aaron Nathans:
Well, we’ve been speaking with Ruby Lee. Ruby is the Forest G. Hamrick Professor in Engineering and a professor of electrical and computer engineering here at Princeton. I want to thank Ruby as well as our recording engineer, Dan Kearns. Thanks as well to Emily Lawrence, Molly Sharlach, Steve Schultz and Neil Adelantar. Cookies is a production of the Princeton University School of Engineering and Applied Science. This podcast is available on iTunes, Spotify, Stitcher and other platforms. Show notes and an audio recording of this podcast are available at our website, engineering.princeton.edu. If you get a chance, please leave a review, it helps.
The views expressed on this podcast do not necessarily reflect those of Princeton University. I’m Aaron Nathans, digital media editor at Princeton Engineering. Watch your feed for another episode of Cookies soon. Peace.