Nvidia’s New Chips, With a Side of Valuation

Jensen Huang sees a path to $1 trillion in AI infrastructure. Is Wall Street buying it?
In this podcast, Motley Fool analyst Asit Sharma and host Mary Long discuss Nvidia's “Super Bowl of AI,” plus:
- The coming generation of chips.
- Increased competition from hyperscalers.
- Partnerships in fast food, autonomous driving, and robotics.
Then, we answer some listener questions about early stock analysis, how healthcare companies are using AI, and how to factor in customer experience to investment decisions.
Got a question for the show? Email [email protected].
To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. When you’re ready to invest, check out this top 10 list of stocks to buy.
A full transcript is below.
This video was recorded on March 19, 2025
Mary Long: There’s a new chip on the block. You’re listening to Motley Fool Money. I’m Mary Long joined on this fine Wednesday morning with a Mr. Asit Sharma. Asit, thanks for being here.
Asit Sharma: Mary, thank you for having me.
Mary Long: Of course, we've got some big news happening after we record, but before the show publishes this morning. That is, of course, that Mr. Jerome Powell will be making some announcements later today. We're not going to hit that on today's show; Ricky and Nick Sciple will cover that tomorrow. Instead, Asit, you and I have another big name dropping another announcement, or multiple announcements. That big name is Jensen Huang. NVIDIA is holding GTC, its GPU Technology Conference, which runs throughout this week, and yesterday, Mr. Huang delivered the keynote speech.
Asit, this is an event that used to be catered predominantly toward academics, and now it's turned into what the New York Times dubs the Super Bowl of AI. We've got a number of things coming out of that keynote that you and I will hit on today's show. But we'll start with this: in late 2026, NVIDIA will release its next generation of GPUs. They're calling this generation Vera Rubin. What do investors need to know about this upcoming generation of chips and how it's different from, shall we call it, its predecessor, the Grace?
Asit Sharma: Mary, I'm going to call it Vera Rubin, for a fun shift in pronunciation. It's probably Vera, but anyway. So what is Vera Rubin? Vera Rubin is an improvement on the Blackwell architecture, NVIDIA's current biggest and baddest GPU accelerator technology. Vera Rubin does improve on Grace. You're referring to the CPU, the chip that goes in the Blackwell system; Blackwell has a GPU unit and a CPU unit. Vera Rubin replaces that Grace chip with something called New Grace, which is supposed to have two times the performance. Overall, this GPU system has a lot of compute power that improves on Blackwell.
For example, it's got what is called NVLink scaling. NVLink is how GPUs in a system communicate with each other, sending information back and forth, and Vera Rubin doubles that ability to scale between GPUs over Blackwell. It has something called HBM4 memory. If you've been listening to Mary or myself and Ricky over the last year or two, you've probably heard us talk about something called HBM3, a type of memory within GPU structures; this is the latest version of that. It just has much more compute power, almost 10 times the aggregate compute power of the Blackwell platform. In many ways, Vera Rubin represents a really big leap, and then it's going to be followed up, Mary, by something called Vera Rubin Ultra, which is a little bit funny. I'm digging here a little at something that shouldn't be criticized too much. It's going to be an advancement over an advancement, the next-next generation of the company's platform. But, come on, this reminds me of new and improved from the grocery store when we were growing up, Mary. Like, after a while, how do you communicate that something's even better? Not Vera Rubin Plus, but Vera Rubin Ultra sounds about right.
I want to lay a criticism here on NVIDIA, which is a company I admire. The CPU of this Vera Rubin system, which they call New Grace, is complemented by a new chip in the system called the CX9. Again, NVIDIA may be running out of inspiration when you start naming your chips after Mazda SUVs.
Mary Long: Asit, you and I have a tendency to really harp on what's in a name and why a company decides to name a certain project X, Y, or Z. We were talking a bit before the show about how it's interesting to see NVIDIA in particular (we'll pick on them right now) give these names to all these different products. I appreciate why they're taking the time to name something, to name a GPU Vera Rubin rather than HBM or a series of numbers and letters. I do appreciate that. But it still can be very hard for the layman to differentiate between all these different names, between CUDA, which is more of a software platform, and the Grace, the Rubin, and Blackwell, and how these pieces fit together. Thank you for giving us the lay of the land a little bit and an insight into what all these new announcements actually mean.
When it comes to what all these new announcements actually mean, some news that came out last week was that Meta is planning to test its own in-house AI chips. Amazon and Google are already in the process of doing this. All these hyperscalers are developing their own AI chips in the hopes that they can ultimately reduce their reliance on NVIDIA. Help us understand what NVIDIA's head start looks like, because they are the go-to player in this game right now. Realistically, what do Meta, Amazon, Google, and anyone else that's interested in building their own AI chips actually have to do to build in-house chips that genuinely rival what NVIDIA is putting out there?
Asit Sharma: Mary, one thing they have to do is to make simpler chips that have custom purposes. When you train a model, or when you get inference out of that model, not all of it has to be done on the latest and greatest GPUs. A company like Amazon, with its Trainium chip, now approaching its third generation; a company like Meta, which is just getting into the game; or even Microsoft, which is later to the game with a chip called Maia: what they're trying to do is cut costs dramatically.
When you or I use a large language model on their Cloud platforms, it costs less for us, the end user. We're paying in some way, and enterprise businesses are paying directly to these companies. They have a lot to gain, because they really don't need the overkill of awesome NVIDIA GPUs for many of these use cases, and that's why they're investing in this. They're working with great design companies. They take designs, send them over to TSMC, a great foundry, get back prototypes, and then gradually (I think Amazon is furthest along in this race) put them into the data centers and show a cost savings. On Amazon's last conference call, CEO Andy Jassy said, Look, we're reducing compute costs by 30% in some cases.
Now, NVIDIA is looking at this and smugly smirking, because we want ever more powerful compute, and the world is moving in the direction of reasoning models, where we're asking the models to think in steps, to go out and research on the web, and to come back to us with answers. Really, the chips best designed for that are the GPU/CPU complexes that NVIDIA builds. It believes it still has a really major role to play in this world going forward. I think maybe the true story is somewhere in the middle, and time is going to tell how it really shakes out.
Mary Long: Patrick Moorhead, the founder of the tech research firm Moor Insights & Strategy, told the New York Times, in an article about this conference, the Super Bowl of AI, that “The gravy train comes to a screeching halt if Cloud companies stop spending.” That's the gravy train that NVIDIA is currently reaping the benefits of. What can NVIDIA do today to freeze some of that gravy for the future if there is a potential hibernation, a winter period, ahead?
Asit Sharma: Well, one thing they can do is focus on replenishment cycles. Even if new data center buildouts get crimped at the margins, you have all these existing data centers that need to be upgraded in terms of their capacity, server capacity, memory capacity, etc., to meet the demands of our compute. Just because AI might become less expensive doesn't mean we're going to stop using it. NVIDIA can really think forward in terms of going to these customers and working on upgrades to existing data centers if they pull back on new buildouts. I don't think that'll necessarily happen as the cliff moment some project.
If you look back before what happened with generative AI, we had a slow, steady, but almost inexorable march to build data centers just to handle enterprises migrating their stuff off-premises into the Cloud. Those percentages, if you look at them, still have a long way to go. People have forgotten that even that movement is still in motion. Over time, I think what we're going to see is, yeah, there could be some valleys, some unexpected slowdowns in data center builds, but the direction of the future, until we can make these things much more efficient from both a consumption standpoint and an energy standpoint, is to keep building, and NVIDIA stands to gain from that.
Mary Long: The direction of the future that Jensen Huang sees is $1 trillion in AI infrastructure buildout. That was a number that really stuck out to me. He mentioned that the company has already built out about $150 billion of AI infrastructure. But again, that he sees a roadmap to $1 trillion sooner rather than later. What did you make of that number, that statement, that vague timeline?
Asit Sharma: Not much. It's not too different from a number that he dropped three years ago. It's just that no one could see that future. He was talking about hundreds of billions within a few years' time, and it just seemed like, Well, I guess if you're talking about extremes and everything falls into place, and everyone would have to be using this stuff, everyone would have to be using ChatGPT for that to happen. I guess so, Jensen. If we go back and extrapolate his original projections, guess what? Right on the mark at $1 trillion.
Mary Long: That's interesting context, too, because we've seen NVIDIA's stock go down about 15% thus far this year. It was down over 3% yesterday alone, in spite of all these big advancements that Jensen Huang is touting. NVIDIA's PEG ratio today is just under 0.3. Despite all of the advancements that they're rolling out, talking about, and hyping up, the market seems to think that this company's growth is going to stall out. Did you hear or see anything during yesterday's keynote or throughout the span of this conference that made you think, Maybe the market's got it right? Or do you think this is another situation like where we were three years ago, where Jensen Huang is saying, This is the future, and folks just aren't buying it because they can't see it?
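For reference, the PEG ratio Mary mentions divides a stock's P/E multiple by its expected annual earnings growth rate. A minimal sketch with purely illustrative numbers, not Nvidia's actual figures:

```python
# PEG = (price / earnings-per-share) / expected annual EPS growth rate (in %).
# A PEG well below 1.0 is conventionally read as cheap relative to growth.
def peg_ratio(price: float, eps: float, growth_rate_pct: float) -> float:
    pe = price / eps
    return pe / growth_rate_pct

# Illustrative only: a $120 stock earning $3/share (P/E of 40),
# expected to grow earnings 50% per year.
print(peg_ratio(120.0, 3.0, 50.0))  # 0.8
```

By that convention, a PEG near 0.3 implies the market is pricing in far less growth than analysts project, which is the tension the hosts discuss next.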
Asit Sharma: I think we should bring a lot of skepticism to bear to NVIDIA’s growth just because now everyone wants to disrupt them on every level. Companies that didn’t have to compete with NVIDIA yesterday, let’s look at Arista Networks in the networking space, suddenly find it as a competitor, and they have to shift their models. Arista Networks is an amazing company, by the way, so it’s not like they’re going to completely reinvent their business, but now they’re paying attention to what could be some competition for them. I think when you think about this, you think about companies that are trying to figure out ways to make compute more efficient.
You have small competitors that are questioning even the way we build GPUs. I think we really have to be skeptical on the growth thesis, but you bring up something interesting, Mary. What you're basically saying is, Hey, that PEG ratio, what if that means that it really is undervalued in relation to its potential to earn out in the future? I saw lots of stuff in that presentation that told me this could happen. Just take one thing. You mentioned CUDA, which is a collection of software acceleration libraries.
We've talked about this before, Mary, and I pointed out that, look, CUDA is lots of different libraries. There's one for high-precision math. There's one for aeronautics. There's almost one for every really cool use case you can think of, across so many industries. We saw another one introduced yesterday, a version of something that those who know the programming language Python are very familiar with. There's a library called Pandas, and it has to do with something called data frames. It's essentially how we manipulate data.
What NVIDIA is doing now is taking the concepts behind data frames and extrapolating them into massively parallel computing use cases. That points to a future in which the assumptions we had about how we use big data, and how fast we can wring insight out of it, may be off. If we take the same concepts from Python but throw them into these great acceleration libraries, running on GPU sets that are much more powerful than what is on the ground today, then lots of things we do with computation today can be done in much less time, and there will potentially be some very interesting advancements in science, in big data, etc.
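The data-frame workflow Asit is describing is the one Python's pandas library popularized. A rough sketch of the idea (per NVIDIA's RAPIDS documentation, its cuDF library exposes a largely pandas-compatible API, so code in this style can be moved to a GPU via the cudf.pandas accelerator with little or no change; the segment figures below are made up):

```python
import pandas as pd

# Toy revenue table: group rows by segment and aggregate, the classic
# data-frame operation that GPU-accelerated libraries parallelize.
df = pd.DataFrame({
    "segment": ["data center", "gaming", "data center", "auto"],
    "revenue": [30.0, 3.0, 35.0, 0.5],
})
by_segment = df.groupby("segment")["revenue"].sum().sort_values(ascending=False)
print(by_segment)
```

The appeal is that the same high-level operations (groupby, join, filter) can run over data sets far too large for a single CPU to chew through quickly.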
That's just one little thing I saw that indicated to me that they keep innovating and creating future uses for their technology. I wouldn't bet against the company. I think right now it's right to be very skeptical, but don't discount the possibility that 3-5 years from today, you and I could be talking about some ho-hum, amazing double-digit growth rate that NVIDIA holds.
Mary Long: One of the central themes of yesterday's keynote was that AI has use cases in every industry and that it's NVIDIA that's really laying that foundation across all those industries. As part of that thesis, we got many announcements about partnerships between NVIDIA and different companies. I'm going to call out three partnerships that stuck out to me, and then you let me know which one's most interesting to you, and we can go from there and give a few more details. We've got GM and NVIDIA teaming up to build out GM's fleet of fully self-driving vehicles. We've got Walt Disney, Google's DeepMind, and NVIDIA building a platform that will “supercharge humanoid robot development.” We've also got NVIDIA partnering with Yum Brands, the parent company of Taco Bell, KFC, and Pizza Hut, to roll out AI order-taking. Which one of these do you want to take, Asit?
Asit Sharma: Let's do the lightning round, all three, because they're all fun. Let's start with the drive-through.
Mary Long: We'll start with the drive-through. First of all, my question here is, do you want AI ordering? I've seen a ton of fast food companies roll this type of stuff out. McDonald's rolled this out and then rolled it back; this past summer, it ended its partnership with IBM when it came to AI ordering. I understand why companies are pursuing this idea of efficiency at all costs at the drive-through, but it also runs counter to a story that we've seen play out at Starbucks, where, hey, you lean too hard into efficiency, and that actually pushes some consumers away. What do you think the future of this Yum Brands/NVIDIA partnership could look like?
Asit Sharma: The AI is getting better and better. At the end of the day, what I want is really the human interaction. But you know what I'll take, Mary? An AI that's smart enough to get something wrong in the order, so I have to correct it, just to fool me. You wanted large fries with that? No, ma'am, I said medium fries. That's the AI interjecting something to make me think it's human. I'll take it in that instance.
Mary Long: Then we've also got this self-driving feature. What's interesting to me is that GM has struggled in its attempts to build out fully autonomous vehicles. Last year, it pulled funding for its Cruise robotaxi unit. Were you at all surprised to see GM seemingly giving full self-driving another go?
Asit Sharma: I guess I was, but obviously I wasn't paying enough attention, because GM signaled this. We talked about it, Mary. The message was: we're not going to focus on trying to have a full fleet of autonomous vehicles; we're going to focus on the software side, getting that into our vehicles, working with simulations so that the driving software in our vehicles is second to none. It's driver technology, and that's what's interesting.
Mary Long: Last but not least, we’ve got this robot situation. I mentioned a partnership of three between Walt Disney, Google’s DeepMind, and Nvidia. I get Google. I get Nvidia. Walk me through how Walt Disney fits into that trio.
Asit Sharma: Walt Disney actually has a long history with robotics. Think of the animatronics at their theme parks; they're very good at robotic motion and making it humanlike. They also have research labs out near Burbank, near their studios, which have been working for a long time on robotics: the science, the physics of how these robots work, and the expression of that. Then you think of the animation and motion expertise that divisions like Pixar bring. When you put that puzzle together with the gurus from DeepMind, plus Nvidia's very strong research into the robotics field and its forte in simulation and virtualization, you actually have a combination that makes sense. I was surprised at first, but then thought to myself, that makes sense.
Mary Long: We've got Quantum Day coming up tomorrow. I know you said that you're really excited for that. Anything in particular that you'll be keeping an eye or ear out for?
Asit Sharma: Nvidia already said that it's working on some CUDA libraries for quantum, and there are some really interesting problems that quantum computing needs to solve before it becomes big time. One of those is correcting for errors in the computation. Without getting into the weeds, because we haven't got time, I'd be very interested to hear how their software might help, on the quantum level, cut down the error rates when we ask these particles to do their thing, measure them, and try to get a mathematical result out of that. That's what I'll be looking for tomorrow.
Mary Long: Asit Sharma, always a pleasure. Thanks so much for talking Nvidia with me today and for breaking down these often complex topics.
Asit Sharma: Lots of fun, Mary. Thank you so much.
Mary Long: You got questions? We got answers. Up next, we continue with Mailbag Week and turn to a number of Motley Fool analysts to answer your questions about fundamental analysis, AI in healthcare, portfolio management, and how customer service experiences ought to affect your investment decisions. Let’s go.
Advertisement: Trading at Schwab is now powered by Ameritrade, giving you even more specialized support than ever before, like access to the trade desk, our team of passionate traders ready to tackle anything from the most complex trading questions to a simple strategy gut check. Need assistance? No problem. Get 24/7 professional answers and live help, and access support by phone, email, and in-platform chat. That's how Schwab is here for you, to help you trade brilliantly. Learn more at schwab.com/trading.
Mary Long: Every now and then, we like to turn to our listener mailbag to see what kind of questions are on your mind. We noticed a lot of questions that were roughly about how to get started investing, so for today's show, we rounded up a few analysts to answer questions geared toward the beginner or intermediate investor. To kick us off, we've got listener Cody King, who wrote in: Fools, for new investors, what are some fundamentals to look at when considering investing in a stock? Is it the PE ratio, the most recent quarter's earnings, or news headlines? Are any metrics overrated? What is the key info to dive into to determine if a stock is a winning investment? For the answer, we turned to none other than Fool analyst Asit Sharma.
Asit Sharma: Cody, I love your question. For me, there are two things that go on when I'm thinking about stocks or selecting stocks. One is to screen for new ideas, or turn over stones, if you will. In that sense, different metrics like the PE ratio can be very useful in just comparing companies and seeing what might be a really high PE. Sometimes that's a signal for a good stock, because you're looking at a company that must be growing its revenues or has the potential to do so. But let's take a step back.
Because your question really asked, what is the key info to dive into to determine if a stock is a winning investment, and that's quite a different thing. For me, the most important thing is the story, the narrative behind that company. If you can grasp that, then you can overlay everything else: you can crunch the financials, you can look at ratios, you can compare it to other companies. One of the places I like to start is the MD&A section of a quarterly or annual report. You can go to EDGAR, at sec.gov, and search these reports.
They come labeled as 10-K for annual and 10-Q for quarterly reports. When you go to the MD&A section, what you're looking at is a required explanation of what a company does and how its recent results have played out. It's a required disclosure from the SEC, part of regular reporting, and that's a great place to understand exactly what management thinks about its company, how it presents its products or services, and where it sees the business going; that section also discusses recent results. I like to start from there, and if I can grasp that and really feel that I'm starting to understand what a company does and what its potential might be, then the other things you mentioned, like listening to earnings calls, looking at the most recent quarter's earnings, and thinking about the news headlines, become really key to getting a bead on whether this is a company you want to put into your portfolio or take a pass on.
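As a side note for anyone who wants to pull these filings programmatically: the SEC serves each company's filing index as JSON at a documented URL keyed by its zero-padded, 10-digit CIK number. A minimal sketch (the CIK below is Apple's, used purely as an example):

```python
# Build the SEC EDGAR submissions URL for a company, given its CIK number.
# Fetching it (with a descriptive User-Agent header, per SEC fair-access
# guidance) returns recent filings, including 10-K and 10-Q accession numbers.
def edgar_submissions_url(cik: int) -> str:
    return f"https://data.sec.gov/submissions/CIK{cik:010d}.json"

print(edgar_submissions_url(320193))  # Apple Inc.
```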
Mary Long: We talk about a lot of stocks on the show. Some of them are genuine investment ideas. Others are just public companies that we find interesting or newsworthy, but don’t necessarily think of as strong stock picks. One listener, Sumit Maru asked about how listeners ought to think about what we call radar stocks. They write, I listen to the show religiously every day. One of the questions I have is about the radar stocks. Every Friday, the team picks two radar stocks. I love what JMo, Mattie, and Emily bring to this section, but I’m not sure what everyday investors like me should do with those picks. For the answer, we turn to the host of our Friday show, Dylan Lewis.
Dylan Lewis: I'm glad someone asked, because I love the radar stock segment, and I think sometimes it does go on without explanation, Mary. For me, the radar stock segment puts us in a position to talk a little bit about stuff that's going on in the market that our analysts really want to discuss, or that they feel will be popping up in the news at some point in the next week or two.
Very often, you’ll have our analysts previewing a company that’s reporting earnings. Sometimes they’ll be talking about a company that they’re not particularly excited about, but they feel like it’s something that they should bring up for listeners to the show if it’s a company that’s in their portfolio. I joke sometimes that people bring radar stocks on for good reasons because they’re excited and they’re following this business. Maybe it’s a future watch list stock for them. Sometimes it’s a radar stock for a bad reason, and it’s more of a moment for us to give people a check and make sure they’re paying attention to some of the things that could affect companies in their portfolio.
Mary Long: Speaking of investment ideas you get from the news, it’s no secret that excitement about artificial intelligence is everywhere. Dana in Cincinnati wonders how healthcare companies, in particular, are using AI and whether she might find any intriguing stock picks herself from looking at the intersection of those interests.
Asit Sharma: In a multitude of ways, and that’s probably why it’s something we can talk about at greater length, but just to pick an interesting area has to do with how you design drugs. Some people may know that the Nobel Prize in chemistry in 2024 went to the developers of AlphaFold, which is a program that can predict protein structure just from the amino acid sequence. That’s actually not a brand new thing, but using some AI tools, they were able to make it better than it’s been in the past. The upshot of that is that you can actually visualize in three dimensions what target you’re designing a drug against with a lot more ease than you could do in the past.
Once you get to that point, you can start to use digital tools to help make molecules that fit like a hand in a glove into those targets you're looking for. These are all approaches that have been used for years; it's often called rational drug design, and it's something most companies do to some extent, but you're seeing it step up in its complexity. It remains to be seen how much this bends the curve when it comes to speeding anything up or making anything better than we have in the past, but you're seeing some interesting programs advance into clinical trials. Particularly interesting are companies going after targets that were once considered “undruggable”: targets where the pocket you're trying to bind is very hard to fit without basically making a key that opens everything, and a key that opens everything means a ton of toxicity. You need it to bind to just that target. If companies can do that and show that some of these work, I think you're really going to see even more emphasis on this area.
Mary Long: You can learn a lot about companies not just through the news, but also through your own experience with them as a consumer. Yuan Lu wrote in with a question for Fool analyst Jason Moser, which we'll paraphrase here: I know you like PayPal, and I want to ask, why is its customer service aggressively mediocre? It used to be that you could just pick up the phone and talk to someone, or leave a private message on a message board and someone would get back to you. In the past few years, however, all I can do is initiate a chat with absolutely no ticket ID, no queue name, nothing to allow the next agent who happens to be on the thread to understand what my question was or what information has already been exchanged. How much weight do I put on this experience when it comes to making a decision about buying or selling a stock?
Jason Moser: I think that's a good question, and one of the reasons is that there's not one definitive answer. This is something each investor has to answer for themselves. For example, one of the reasons why I like PayPal is just because it always works. I honestly don't think I've ever had to actually contact PayPal customer service. Based on the question, I'm considering myself lucky at this point, because it stinks when you get bad customer service. I suspect that if I had to interact with a company and continually got bad customer service, that would absolutely make me question whether or not this was a business I really, ultimately wanted to own. Now, you see these stories play out all over the market.
I think Comcast stands out as an example of a company that really owns its space in a lot of ways; it's certainly one of just a handful of very big players in what it does, and yet it has perpetually had a reputation for just awful customer service. Now, again, I'm not a Comcast subscriber. We don't have it here; I don't use it; I don't have that experience. But based on everything I've heard, that starts to make you wonder, well, do I feel good about owning this? I think every investor has to weigh that a little bit and ask, Well, is this a place where the company could improve, versus, are there a lot of positive qualities that this company already has? None of these investments is perfect, and I think that's always something to remember.
We try to identify the areas of weakness where they can improve, try to identify the areas of strength, and then you have to weigh those against each other. I guess for me, with PayPal, I still own shares in the company; I've owned them for many years. It just strikes me as one of the companies that's really leading the way in this digital movement of money. Are there things that they could be doing better aside from customer service? Absolutely. I'm hopeful about the new leadership here; Alex Chriss is really spearheading that. He seems like he has some really neat strategies, and I think the market is starting to take note of that. But ultimately, yes, this is a question that each investor has to answer for themselves. I think that customer experience is something you definitely need to weigh. That's a valid concern that every investor should take into consideration.
Mary Long: Once you found the stocks you want to buy, you’ll have to figure out how big a role you want each of those positions to play in your portfolio. Over on X, Jorso asked us, for a beginner, middle experienced investor, are you better off buying small shares of multiple companies or saving up and buying a larger share amount in fewer companies? For the answer, we look to Asit, again.
Asit Sharma: Jorso, I think for beginner, middle, and even advanced or very experienced investors, we’re all better off buying small shares of multiple companies at the start of the game. Now, yes, I’m being partly facetious here, because if you are an extremely experienced investor, you may have this actually already in practice, but in a different way because you’re building up a lot of conviction and then putting your money in and putting serious capital to work. But the principle is that we are taking small stakes at the beginning to learn about businesses as they grow, and we’re going to keep adding money to the businesses we come to understand better and that keep performing as time goes on.
That's a really great way to make money. It's actually probably a better probabilistic strategy than identifying at the outset a few companies where we're going to allocate a lot of capital. That approach implies that you've got a really good edge on the house in this game. Ricky Mulvey and I had a great conversation likening investing a bit to probability and playing cards. For most investors, the opposite is the strategy to use: you're going to find out over time where you want to concentrate your capital. Don't do it at the beginning.
I will say, over time, even though Warren Buffett is associated with taking big and concentrated bets, we've seen him scale up in some businesses over time. Of course, his billions have a lot more impact than maybe our hundreds or thousands. Nonetheless, this is a really fun strategy to employ; it also allows you to learn about a lot of companies and to extend your investing chops. I'm all for going with more companies and smaller positions, taking your time, then concentrating further investments on those winners, and also adding new ideas to your portfolio as the world changes.
Mary Long: We love getting listener questions. If you've got a question that you'd like to hear an answer to, write to us on X or shoot us an email at [email protected]. That's podcasts, with an s, at fool.com. We'll be sharing a few more of these with you on tomorrow's show. See you then, Fools.
As always, people on the program may have an interest in the stocks they talk about, and the Motley Fool may have formal recommendations for or against, so don't buy or sell stocks based solely on what you hear. For the Motley Fool team, I'm Mary Long. Thanks for listening, Fools. We'll see you tomorrow.