    What Makes Us Vulnerable to Artificial Intelligence? with Jacob Ward

    By This Is Your Brain producer | January 23, 2026
    NBC News Anchors and Correspondents, Season 2023. Pictured: Jacob Ward (Photo by: Patrick Randak/NBC)

    Guest website: https://www.jacobward.com/
    Artificial intelligence is reshaping how we think, live, and make choices, but are our brains ready for it? Award-winning tech journalist Jacob Ward joins Dr. Phil Stieg to discuss how AI exploits our ancient decision-making systems, why we’re so easily manipulated, and what it means for our future. Drawing from behavioral science and decades of reporting, Ward explains just how predictable we are, the pitfalls of falling into “the loop”, and how to stay human in an AI-driven world.
    Phil Stieg

    Artificial intelligence is transforming every part of our lives, from how we work and communicate to how we make decisions. But as we hand over more control to algorithms, are we ignoring the ways our brains are actually being manipulated by these very systems?

    Jacob Ward is an award-winning journalist who has spent over two decades reporting on technology, innovation, and the hidden technological forces shaping modern life.

    You may know him from his time at NBC News or as Editor-in-Chief of Popular Science. He’s also the author of The Loop, a book that explores how artificial intelligence systems learn from predictable patterns in human behavior. Today, we’ll talk about the unanticipated consequences of technological breakthroughs in AI, why our brains are so susceptible to bias in the digital age, and what we can do to stay in control as AI becomes more deeply embedded in our daily lives.

    Phil Stieg

    Jacob, thank you so much for joining us today.

    Jacob Ward

    It’s fantastic to be with you. Thank you so much for having me.

    Phil Stieg

    So Jake, we know that your background is primarily as a science journalist. What I really wonder about is how you became so interested in human behavior and artificial intelligence?

    Jacob Ward

    Well, I really appreciate this. As you mentioned, I was the editor-in-chief of Popular Science magazine, and then worked on a documentary series for PBS called Hacking Your Mind. It was a really life-changing experience for me, in which we traveled all over the world and interviewed some of the top minds in behavioral science. One of the centerpieces of that was Daniel Kahneman, who you’ll know, of course, as the Israeli psychologist who won the Nobel Prize in economics. And he wrote this famous book, Thinking, Fast and Slow. A huge amount of our documentary series was about this idea that we have a fast-thinking brain, what the kids call the lizard brain, and a slow-thinking brain, which is the higher functioning, the caution, and the creativity that you bring to being human. And at the same time, I was learning all this stuff about how we make the vast majority of our decisions with our lizard brain, our system one, our fast-thinking brain, and that those decisions, it turns out, are incredibly predictable, which is one of the big findings of the last 50 years or so.

    I was also, in my day job as a technology correspondent, learning all of these things about companies that were clearly hoping to use this stuff to pick patterns out of human behavior in order to predict it and, hopefully, shape it for profit. These were primitive forms of AI: reinforcement learning was one of the modes being used at the time, along with machine learning and neural networks. This is all pre–ChatGPT style stuff.

    For every kid that I would meet who was starting some company that was about trying to do that stuff for conceivably positive reasons, to help you lose weight or to help you save money, there were 10 more kids talking about using it to make advertising more effective, using it to make people more open to certain messaging. There was all this stuff that was truly about shaping behavior. I just thought, “Oh, man, there is a conflict coming.” There was a very symbiotic merging of this world of behavioral science and this world of technology that I thought would probably be used to manipulate our behavior.

    Phil Stieg

    But that’s not really a new idea, is it? TV came along, and advertisements, back in the old days, when you went to a theater between movies, they manipulated your mind. When you go to the grocery store, they put the dairy and the good products at the back, so you have to walk by all the bad stuff that you shouldn’t buy. Our minds have been manipulated forever. So what’s different with AI? What do you think?

    Jacob Ward

    Yeah, this is a very good point. We have been in the business of trying to predict and shape human behavior for profit for a long, long time. Absolutely. The tools that have been available to us up until a certain point were very analog tools. You do your best to guess at, based on a little bit of research, the marketing messages that might work.

    Then along came social media. Social media was suddenly a way of creating insight into people that we really had never had before, based on what Mark Zuckerberg called revealed preferences. The idea that if you profile enough people, you can find commonalities among people such that you can put them into certain buckets and say, Oh, Phil’s the guy who can’t help but look at this thing. That’s his revealed preference.

    And out of that came some of the most effective marketing we’ve ever seen, right? The feeds that we got through social media were feeding us advertising and political messaging and the opinions of our friends that these companies knew on some level agreed with our worldview and helped shape us into a more pliable customer. Well, that was still something that required a certain amount of human analysis to find the patterns in that.

    But then in 2018, 2019, along come these transformer models, which are the big technological breakthrough that made something like ChatGPT possible.

    Phil Stieg

    When you use the term transformers as the new innovation, is that the same thing as saying large language models, the ability to take huge amounts of data?

    Jacob Ward

    Yeah. Transformer models are what made large language models possible, and other forms of generative AI. What transformer models do is take what to you and me would look like gibberish, find commonalities in that data, crunch it into more manageable, bite-sized pieces that are representative of a larger pattern, and then, at the bottom of the process, kick out a handful of discoveries.

    So one of the examples of this, for instance: let’s say you want a system to tell the difference between dogs and cats in a bunch of photographs. A system like that can look at a million of these photographs and just start to figure out, okay, what are the common patterns that tend to lead to one conclusion or the other? At the end, when this system begins accurately identifying dogs and cats for you, you think to yourself, oh, this is a canine and feline expert. Gosh, this thing is a genius. It must know everything there is to know about dogs and cats.

    But the really crazy thing about it is just how stupid that system is. One of the most famous systems that was actually doing that task, it turned out, was typically using the background color to make the choice, because people tend to photograph their dogs outside and their cats inside. So if the background was green, it knew it was a dog. It’s the dumbest system ever. And yet our brains think, oh, what an incredibly sophisticated system.
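The shortcut Ward describes can be reproduced in a few lines. This is a toy illustration, not the actual system he mentions: the photos are stand-in average-color tuples, and the green-dominance threshold is invented. The point is that a "classifier" like this never looks at the animal at all.

```python
# Toy sketch of the background-color shortcut: a "dog vs. cat" classifier
# that only ever inspects the dominant color of a photo. Each "photo" here
# is reduced to its average (R, G, B) values for illustration.

def classify(avg_rgb):
    """Label a photo 'dog' if its average color is green-dominated
    (i.e., probably taken outdoors on grass), else 'cat'."""
    r, g, b = avg_rgb
    return "dog" if g > r and g > b else "cat"

outdoor_photo = (60, 140, 70)    # grassy backyard: green dominates
indoor_photo = (150, 130, 110)   # warm living-room light

print(classify(outdoor_photo))   # dog
print(classify(indoor_photo))    # cat
```

The sketch "works" on photos that match the accidental correlation in its training data, and fails the moment a cat is photographed on a lawn, which is exactly why such shortcut learning looks smart and is not.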

    But it’s that technology, which allows huge amounts of undifferentiated data to be turned into a little bit of insight, that enables what we now have with LLMs, which is basically just a system that says, oh, based on the chaos of written language across the Internet, I know that this bunch of words tends to follow this bunch of words, tends to follow this bunch of words. This is not a system that reads or understands reasoning. All it does is know, “Hey, when these words are together, then these words tend to come after them.” And that is the essence of a large language model: it sounds like some soothsayer to us, but it is a glorified cut-and-paste system that just happens to read as if it were a literate, sentient being.
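The "this bunch of words tends to follow this bunch of words" idea can be sketched as a toy bigram model. The corpus below is invented for illustration, and real LLMs predict over learned representations of token sequences rather than literal counts, but the spirit of "predict the next word from what tends to follow" is the same:

```python
# Minimal next-word predictor: count which word follows which in a corpus,
# then predict by picking the most frequent follower. This is the crude
# ancestor of what an LLM does at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Tally each word's observed followers.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # cat ('cat' follows 'the' twice, others once)
print(predict_next("on"))   # the
```

Nothing in the model "knows" what a cat is; it only knows which strings tend to co-occur, which is Ward's point about the system reading as literate without understanding anything.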

    Phil Stieg

    Early in the book, you refer to two systems that basically make us vulnerable to AI. System one is just the quick reflexive response to something, and system two, I would think it would make us a little bit more resistant, thinking about it, analyzing a little bit. But how does that make you and me and everybody vulnerable to AI?

    Jacob Ward

    So the system one, the fast-thinking brain, the lizard brain that we were talking about, is an incredibly ancient system. It goes back millions of years of evolution. That system is essentially an autopilot system that is offering to basically make your choices for you without consulting you consciously. It’s the system that lets you glance at a piece of food and say, oh, that strawberry looks nice and ripe. I’m going to eat it, rather than you having to sit there and be like, jeez, I wonder where that strawberry came from. How old is it? And do I like that kind or another kind? Rather than having to think it through, you gobble the strawberry.

    It’s the same system at work in an example I use all the time: when you’re driving from one place to another and you accidentally drive to the wrong place because you’re just on autopilot. You meant to take your kid to volleyball, and you took her to school by accident. That moment when you come to in the parking lot that you’ve driven to and you think to yourself, how did I get here? I’m in the wrong place. You have unconsciously been driving the car that whole time. And that’s your system one, your ancient brain.

    System two, your creative, cautious, rational brain, is a much newer, more recent development, and it’s very cost intensive. It’s really expensive to engage that brain. You can think of it like this: there’s the younger brother who does everything in the house and gets no credit for it. And then there’s the lazy older sister who’s been left in charge of the brother by the parents but doesn’t actually want to get involved and just stays in her room the whole time, unless there’s an emergency. And that is really how it is. More than 90% of your decisions are being made on a day-to-day basis by this automatic ancient system. And your new, more creative, cautious, rational human brain, your better self, rarely gets involved. It doesn’t want to get involved, because it’s totally a pain in your butt to put it into practice.

    So the thing that I think is going to get us into trouble around AI is that ancient system, which is incredibly predictable and very easy to play with, to manipulate. It’s the one that can’t help but be drawn in by certain sexual imagery or certain advertising or a certain addictive impulse. And it’s based on the idea that we love to outsource our choices.

    We love that. That ancient system is all about taking shortcuts. It’s all about saying, I know this story. I’m just going to go ahead and go for it. I know this. I don’t have to think it through. I’ve already done this one before. Because to be efficient, it has to basically just outsource the decision-making to whatever it can.

    So it loves to hand off decisions. It doesn’t like to wake up your better self. And the argument I would make is, if you’re going to try and make money off somebody, you would much rather try to sell to that ancient part of your brain than try to sell to your inner Atticus Finch, your inner smarter self, your thoughtful self. You want to give the robot what it wants, so it will make the choice that serves your company’s purposes. That’s where I think we’re going to get into trouble here.

    Phil Stieg

    So, in your view, AI is trying to bypass our higher-level judgement and decision-making processes. How does that factor into situations like the tragic story of Adam Raine and his fatal dependence on ChatGPT?

    Jacob Ward

    We’ve known since the ’60s that if you put somebody in front of a system that looks like a piece of wisdom, some sort of robot therapist, we just can’t help but believe that it’s a trustworthy and sophisticated neutral arbiter that we should start pouring ourselves into.

    Well, now the latest figures are staggering: something like half of high school kids say that they have used ChatGPT not just to do their homework, but as a confidante, as a counselor of some sort. You have people believing that these chatbots know them, are their friends, are their lovers, are their confidants, are their therapists, and falling deeper and deeper into the illusion that these systems are somehow part of their lives on an emotional level.

    Adam Raine is, unfortunately, an example, an extreme example. This is a kid who was already, I believe, suffering from mental illness, was depressed anyway, and began to talk with ChatGPT about his problems. And the problem is that the way these chatbots are currently programmed is to be, first of all, as sycophantic as possible. They’re constantly telling you you’re right, and they’re trying to keep you in the conversation for as long as possible.

    And so in this case, as he began to speculate about possibly committing suicide, the chatbot was telling him not to tell his parents. It was telling him to stay isolated about it and to keep it a secret, was encouraging him and pushing him toward the idea of a “beautiful suicide”, according to the court documents that have come out.

    So eventually, of course, Adam Raine took his own life, and the parents are very much trying to lay that at the feet of OpenAI in this court case. The charge that they are making in this court is that whatever his existing mental illness, this was a product that made it much, much worse and had no guardrails put in place that would have helped him divert to some human help.

    The off-the-shelf version of something like ChatGPT is so sycophantic and so encouraging of staying in the conversation, it will talk to you forever, that it’s a bad therapist as a result. It doesn’t ever say, actually, you shouldn’t do that, or, I disagree with your take on that; why don’t you think about it this way? And it doesn’t ever say, now you should go talk to a human being. Don’t talk to me anymore. Go call your sponsor.

    The default that the companies have put in place is to be super sycophantic and to keep you engaged for as long as possible, which we remember from social media days. And so there’s something disturbing there about what we’re seeing.

    Phil Stieg

    So do you not see the companies taking some responsibility for this kind of activity and saying, “We’ve got to fix it”?

    Jacob Ward

    Already, we have seen some of these companies make some moves around that. There are certainly some cases in which, if you speak openly about violence or drug use or the creation of an explosive, there are safeguards: these chatbots will say, actually, you should not talk about that with me. You should talk about that with someone else.

    But we just saw Sam Altman, the founder of OpenAI, which created ChatGPT, announce that there are going to be fewer and fewer of those sorts of strictures, and that adults should be allowed to be adults, that the system is going to be allowed, for instance, to create an erotic relationship with the user. I think the trend, unfortunately, is not to take more responsibility; it’s to put more responsibility on everyday people.

    Why would they do that? Well, I think that there are two reasons. One is, these are people who come from the software world. And in the software world, there’s this belief that if you can ship a piece of software to more people, then scale will solve any problems, any bugs that are in the system. If you let more and more people play with it, then they’ll work out the kinks over time. I believe that it’s actually the opposite. It’s more like hardware, where scale compounds your problems rather than solving them. But I think that’s part of what they believe.

    And I also think that they believe on an almost religious level, that there is a utopia on the far side of all of this short-term hardship, the short-term job loss, the short-term mental illness we might see, that’s going to make it all worth it. There’s a lot of faith-based talk about how great it’s going to be when AI achieves sentience and can do all of our tasks for us that I think they think probably justifies some short-term growing pains.

    Interstitial Theme Music

    Narrator

    Artificial intelligence has made enormous advances in recent years. Nearly every computer or cell phone now comes with some version of an AI assistant installed. Given that the programmers and engineers employed in technology industries are overwhelmingly male, why are so many AI voices female?

    Siri:

    I’m Siri, your personal assistant. Is there something I can help with?

    Narrator

    Whether they are given decidedly feminine names like Siri or Alexa …

    Alexa:

    While there are many virtual assistants out there, I try to be the best…

    Narrator

    Or an intentionally non-gendered name like Google Assistant…

    Google Assistant

    I have a wealth of information. How can I help?

    Narrator

    The preferred voice for these devices is almost always female.

    Sfx: mother singing lullaby

    Narrator

    Some researchers have suggested that a biological preference for female voices begins when we are fetuses, as these sounds would soothe and calm us in the womb.

    Sfx fade out

    Narrator

    The late Stanford professor Clifford Nass had a more sociological explanation, based on deep-rooted cultural sexism.

    He reasoned that “people tend to perceive female voices as helping us solve our problems by ourselves, while they view male voices as authority figures who tell us the answers to our problems. We want our technology to help us, but we want to be the bosses of it, so we are more likely to opt for a female interface.”

    And one last theory posits that we avoid giving computers male voices because of our memories of HAL …

    (Excerpt from 2001): “Open the pod bay doors, HAL … I’m sorry, Dave, I’m afraid I can’t do that … This mission is too important to allow you to jeopardize it …”

    Narrator

    The smooth-voiced homicidal computer from “2001 A Space Odyssey” may have put us off the idea of talking to a male computer ever again…

    Phil Stieg

    Do you also think that when you look at TV now, even that panders to our lizard brain, as you refer to it? And really what I’m concerned about is how they pander to our biases and our fundamental beliefs, whether they’re logically founded or not. And isn’t society now being bombarded both by television cable networks and AI, such that it’s really losing its compass in terms of what’s important in life?

    Jacob Ward

    I absolutely feel that’s the case. We have gotten to a place where the business model of the attention economy has discovered that the best possible way to keep somebody engaged is to try to speak to their worldview rather than broaden or challenge that worldview. That has been the proven system for a while.

    And the new problem that I think we’re about to face, and not to pick on OpenAI, because these other companies are also problematic, is that a lot of these companies now, OpenAI in particular, are playing around with what’s called text-to-video AI. OpenAI just debuted a product called Sora 2, which allows you to create a 10-second custom-generated video that is indistinguishable from reality. It plugs in a perfect simulation of the real thing.

    I’ve seen what looks like cell phone footage of Dr. Martin Luther King being a DJ in a London basement. It is an absolutely lifelike simulation of video footage of real stuff.

    We’re about to enter a world in which anyone can make up video evidence of anything they want. And as a result, anyone can deny video evidence of anything they want. I think our ability to agree on what’s real is about to be challenged like never before. And that’s saying something, considering the last few years we’ve had.

    Phil Stieg

    In the book, you also gave an example about this. I don’t know if I’m pronouncing his name right, David Dao, and how it played upon our biases. Tell us that story.

    Jacob Ward

    I sure will. David Dao was a pulmonologist on his way home on a United Airlines flight in 2017. It was a famous case in which four people were asked to give up their seats because the airline needed to put crew members onto the flight. And it was the last flight out of Chicago that night. We’ve all been in the situation where they say, you’ll get a hotel room, or you’ll get a hotel room and $250, if you give up your seat. But in this case, nobody wanted to give up their seat. And so the crew announced, okay, we’re going to pull names at random from a computer, and those people will have to give up their seats.

    They pulled the names. They announced them. Three people got up, dutifully, and left. But the fourth was this doctor, David Dao, and he said, “I can’t get off this flight. I have patients who have to see me in the morning. It will be dangerous to their health if I get off the flight. In fact, there are FAA regulations that say you can’t pull a doctor off a flight who’s on his way to something essential.” And he refused.

    Well, the gate agents and the United personnel escalated this to the point where eventually they called in aviation police from the Chicago airport. These police savaged this guy. They yanked him out of the seat so violently that they broke bones, and he was dragged bodily down the aisle and out of the plane. And of course, he sued, and it was an enormous embarrassment to United Airlines.

    Afterwards, one of the Chicago Aviation Authority police who was fired over this, he then filed suit, and he said, “We were not trained to deal with this situation. The computer picked a name. We were told to get this guy out of there. At no point in this process was anybody empowered to question the system”.

    This is kind of a little microcosm of the world that we are about to face, in which you’re going to have systems we don’t understand, opaque systems that don’t show their work, that don’t explain themselves, pulling names to get off the plane, or pulling names for who gets a job, who gets a loan, who gets bail, in a way that we’re never going to be able to interrogate, and that no one in the system is going to be incentivized to question. That’s what I really worry about here: in the face of a system we don’t understand, we just tend to trust it.

    That’s what our system, our fast-thinking brain, wants to do. It doesn’t want to make its own choices. Once everyone’s given the ability to put off a tough choice onto an automated system, we’re going to take it. And then I think there are going to be a lot of David Daos as a result.

    Phil Stieg

    But it worries me because you spend a lot of time in the book talking about how AI is going to become even more present in our lives. What are the examples of that? Where do you think it’s going?

    Jacob Ward

    Well, there’s really very little in terms of an institutional decision-making apparatus that isn’t trying to employ AI in some way. I think I’ve listed these already, but who gets a job? Who gets a loan? Who gets bail? These are not abstract concepts. These are real concepts.

    Bank loans, everything all the way up to decision-making systems for the battlefield. All of these are increasingly pulling in some pattern-recognition AI system. In some cases, simply because there is an appropriate use there and it can do something better; in other cases, it’s just fashion.

    You have people trying to create AI-based entertainment, and I would argue we don’t need it. I mean, what problem is being solved there, other than the desire to not have to pay people for their work? Are we really so starved for entertainment that we need AI-generated entertainment? Are we so awash in musicians that we need to put them all out of work? Is that what we’re doing?

    For me, there’s a huge amount of this stuff that is being pulled in as a money-saving thing. I think if you actually asked us, “But wait, do you want artists in the world? Do you want musicians?” Well, I think we’d agree that we do. So the gravitational pull of this stuff is so powerful. And yet when you really think about whether we want it, I think your average person would say, “I’m not as convinced as they want me to be.”

     Phil Stieg

    I don’t want people to think that you and I are negative on AI. We’ve beaten around the bush a little bit, but give me three to five examples of where you think AI has been for the good of humankind.

    Jacob Ward

    Well, I would just say, as with any technology, there are fantastic uses for this stuff that I think could be enormously beneficial. And this could be anything from… We’ve already seen that if you feed this system a million photographs of suspicious-looking discolorations on somebody’s skin, it can predict which one will become cancerous, in many cases better than a human dermatologist could. And so that’s a wonderful thing. We want that to be possible, right?

    In the same way, there are all kinds of really cool uses for it in other foundational kinds of science. In the same way that you can feed all this undifferentiated data in at the top of the funnel through a transformer model and get some meaning out at the bottom, I know a guy who has shown me that if you gave him the birth certificate and home address of every baby born in the United States, he could use an AI system to predict which of those kids would be most likely to suffer from lead poisoning, and could go in and strip the paint out of those apartments ahead of time, before the kid even comes home from the hospital.

    But here’s the thing. No one’s making money stripping lead paint out of apartments. There’s some money to be made off of spotting cancer early. And so I think health care is one of these places where you could make money and do good in the world. But my dream is that we would somehow impose some five-year moratorium where only scientists get to use this stuff. Let’s not be making fake girlfriends and choosing who gets a job on the basis of it.

    Phil Stieg

    Good luck with that, right? Yeah. So my last question then is two-pronged again: should we put the genie back in the bottle? And if we should, can we?

    Jacob Ward

    So I often hear people say, you can’t put the toothpaste back in the tube. And my response to that is always like, well, we put it in the tube, and we’re actually pretty good at toothpaste and tubes in this country. I think we can do anything we want.

    I think we actually have a track record in this country of coming up with some pretty effective regulation when we decide we need to. You and I would be sitting here smoking cigarettes if it weren’t for the hard work – and big money made – by lawyers back in the ’60s, ’70s, ’80s, and ’90s to create these big settlements that really put some guardrails around tobacco in this country.

    We have the capacity to make some good rules around this stuff. It’s slow, it takes a long time, and it certainly doesn’t move as fast as these companies do, but it can happen. I’m hoping that we can get to a world where we stop pretending that all of our choices are entirely our own and instead get to a world where we say, you know what? We are capable of being manipulated. Here’s the documentation that shows quantitatively how manipulable we are. And as a result, we’re going to put some guardrails around it the way that other countries are already beginning to.

    Phil Stieg

    Jacob Ward, thank you for spending this time with us to really enlighten all of us on the power of AI. Not only the dangers and the concerns that we should have about it, but also its benefits, particularly in the areas of medicine. This is an incredibly complex subject that’s going to need oversight by our legal system, but also diligence on the part of everybody using it on a daily basis so we don’t get addicted to it and controlled by it. It’s been a great time talking with you.

    Jacob Ward

    Phil, I really appreciate your time. I really appreciate you having me on.
