Rajen Sheth: Where Will AI Take Us Next?
Since it launched in November 2022, the artificial intelligence bot known as ChatGPT has generated a lot of both excitement and controversy.
The conversation around ChatGPT invites larger questions around the role of artificial intelligence in our lives: where and how should we set limits? How can we employ it in a way that allows us to advance while minimizing collateral damage? And can computers ever attain the ability to demonstrate empathy?
In this episode, Eric speaks to Rajen Sheth, one of the leading experts in the field of AI, to help shine light on some of these profound and complicated questions.
Rajen spent nearly two decades at Google, where he helped create the company’s ubiquitous suite of apps, and eventually served as the VP of AI for Google Cloud. He currently serves as the CEO and Founder of Kyron Learning, a company focused on applying AI to the education system.
Listen along as Rajen helps us make sense of the evolution of AI, its limits and where he sees it headed next.
ABOUT RAJEN SHETH Rajen Sheth is CEO and Founder of Kyron Learning, a public benefit company focused on giving all students equitable access to high-quality one-on-one teaching using AI. Previously, Rajen spent 17 years at Google, where he eventually became the VP of Google Cloud AI and Industry Solutions. At Google, he focused on building products that enable enterprises to transform themselves through AI, and on building transformative products for Google Cloud’s key industries: healthcare, retail, financial services, media/entertainment, and manufacturing. Earlier on, Rajen led the development of Android and Chrome for business and education, including the Android for Work products, the Chromebooks for Education product line, and Chromebooks and the Chrome browser for work. Rajen also helped create and lead product development for Google Apps for Work and Education (now known as G Suite), a full suite of communication and collaboration products now used by over 5 million businesses.
Eric Jaffe: We make decisions every day. While some of them are small, others can have a huge impact on our own lives and those around us. But how often do we stop to think about how we make decisions? Welcome to Deciding Factors, a podcast from GLG. I’m your host, Eric Jaffe. In each episode, I’ll talk to world-class experts and leaders in government, medicine, business, and beyond, who can share their firsthand experiences and explain how they make some of their biggest decisions. We’ll give you fresh insights to help you tackle the tough decisions in your professional life.
Since launching in November 2022, the artificial intelligence bot known as ChatGPT has generated a lot of both excitement and controversy. The chatbot is an example of generative artificial intelligence. In other words, it utilizes existing data to formulate human-sounding answers to questions. Many users of ChatGPT simply seek to amuse themselves. For example, the bot can adopt the tone of various historical and other cultural figures, but the text generator is so good at imitating humans that some of its more obvious use cases have ominous implications. For example, it can write a student’s essay or even a journalist’s news report. The conversation around ChatGPT invites larger questions around the role of artificial intelligence in our lives. Where and how should we set limits? How can we employ it in a way that allows us to advance while minimizing collateral damage? And can computers ever attain the ability to demonstrate empathy?
My guest today is a leading expert in the field of artificial intelligence and is well-positioned to help us make sense of this murky and emerging field. Rajen Sheth spent nearly two decades at Google where he helped create the company’s ubiquitous suite of apps, and eventually, he served as the VP of AI for Google Cloud. Rajen is currently the CEO and founder of Kyron Learning, a company focused on applying AI to the education system. Listen in as Rajen helps us make sense of the evolution of AI, its limits, and where he sees it headed next.
Rajen, welcome to the show.
Thank you. Thank you for having me.
Rajen, maybe we could start with you telling us a little bit about what you worked on at Google (you departed in 2022) and your experience with artificial intelligence.
Absolutely. So I was at Google for a little over 17 years, and in my time there I focused on Google’s enterprise business and education business. I joined Google when we first started Google Enterprise, which is what became Google Cloud, and started to think about how we could use a lot of the technologies we were building there to help enterprises and schools in interesting ways. I worked on things like Google Apps for education and enterprise, Chromebooks, Android, and things like that. And then the last four years I spent leading the product team for AI for Google Cloud. It was a very interesting area and it leads well into the kinds of things that are happening right now, because the whole goal of that role was to figure out how we could apply AI in interesting, safe ways to help companies use AI to create real value.
It was everything from figuring out how we can help companies build on top of AI, all the way to building solutions for companies in areas like financial services, customer service, or healthcare, and various other things like that, to be able to use AI in an enriching way for those companies.
Obviously, with the introduction of ChatGPT, there’s an enormous amount of conversation going on out there in the world about AI, and operating companies in particular are thinking through different ways to leverage it. So maybe you could just start by saying: if you were at a company, how would you look at the risks and the benefits of deploying artificial intelligence technology, and how should companies be thinking about deploying this technology in the future?
There’s been a lot of talk about ChatGPT over the course of the last couple of months, and what really reminds me of is when we first saw the internet browser. If you think about it back in 1993 when we all first saw the internet browser, the internet had been around for nearly 25 years before that moment, but it was the first time that everybody was able to see the power of what the internet could do. And I think that’s the same thing that’s happening right now with ChatGPT. With that, I think that companies need to be thinking about a few things, that ChatGPT isn’t the only kind of AI that’s there, there are three different types of AI technologies that are out there. At one level, there’s essentially things that can help you support decisions and predict and you can be able to put in structured data to be able to get a prediction of what might be the best decision for you to make.
There’s a second layer of technologies that works with unstructured data and is able to either mix structure out of that unstructured data, so for example, understand the document, let’s say, or understand the picture and also classify what that might be. So for example, in manufacturing, this picture is a picture of a damaged part, for example, or this is a picture of this particular kind of product, those kinds of things. Those are things that are more mature. Those have been used in enterprises for quite a while now and have been able to create value in enterprises.
The new area that’s coming into play right now is generative technologies, and that’s really where ChatGPT and others like that come in, and that is where you use what’s called a transformer model to be able to create information. And that could be creating text information, could be creating videos, images, whatever it might be.
What I think companies should really note is that that last layer is incredibly powerful but also fairly nascent. We all need to be careful with how and where we apply it. We can apply it in areas, especially around content generation, where it can actually make a big difference right now. So for example, using it to help you create content for your website, create a particular document, or even help you write an email or a proposal. Because it’s giving you a suggestion and then you’re editing it, those are areas where it’s really good right now. Where I would not put it into place right now is anywhere it goes directly to the end user, particularly in areas that are incredibly sensitive or truly mission-critical, where accuracy is incredibly important.
I think that ChatGPT and GPT technologies are going to go through multiple phases over the course of the next few years, but right now where it’s really solid is content generation. If you’re looking for accuracy, judgment, or empathy, like you might need, for example, in a customer service call, those aren’t areas it’s good at yet; those are things it might be good at down the line.
For the layperson like myself, could you explain what a transformer model is and just kind of at a very high level how things like ChatGPT and other generative technologies work?
Yeah, absolutely. A transformer model is essentially a fill-in-the-blank type of model. If I said, okay, my phrase is “once upon a,” and I wanted you to pick the next word that would come, naturally you would probably pick the word “time.” That’s probably the thing that would come to mind. And that is what a transformer model is doing. It takes a phrase or a set of phrases that you give it and then tries to figure out what’s the next logical word that should come after that.
And it does that based on probabilities to figure out what’s the most probable word that would come next. And then given that word, what’s the next most probable word that would come into play and things like that until it generates a good amount of content based on that. And so what it’s doing is it’s taking in a lot of information. In the case of things like ChatGPT, it’s taking in billions and billions and billions of pieces of data and it’s using that to figure out what is that next word and the word after that and the word after that.
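As a toy illustration of that next-word idea, the sketch below predicts the next word purely from bigram counts over a tiny made-up corpus. Real transformer models like the one behind ChatGPT use attention over the entire context and billions of learned parameters, so this is only a sketch of the “most probable next word” concept, not of the architecture itself; the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on billions of documents.
corpus = "once upon a time there was a king . once upon a time there was a queen ."

# Count how often each word follows each other word (bigram counts).
next_counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word seen after `word`, or None."""
    counts = next_counts[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, n=4):
    """Greedily extend `start` by repeatedly picking the top next word."""
    out = [start]
    for _ in range(n):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(predict_next("a"))   # "time" — the most common follower of "a"
print(generate("once"))    # "once upon a time there"
```

The greedy loop in `generate` mirrors the description above: pick the most probable word, then, given that word, pick the next most probable one, and so on.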
Very, very helpful. So we’ll certainly come back to ChatGPT and spend a lot of time on that. I’m curious though, you mentioned the three different types of AI and it strikes me that the first type you mentioned where you could put in structured data to help companies or organizations make better decisions, it’s a very practical, actionable type essentially. So could you point to a couple of different specific companies, technologies that are doing that kind of work?
That layer is something that all of us experience in our everyday lives. So for example, where it’s used right now is that anytime, for example, you get locked out of your credit card because of potential fraud, that’s where that’s coming from, is it is an algorithm that’s looking at the various activities that are there and determining is this fraud or is it not. Those are the kinds of things where it’s being used right now. But there’s more than that. So for example, recommendation technologies. When you go to an e-commerce site and it recommends the next thing you should buy, those are technologies that are taking in structured data, in that case, the kinds of things that you’re buying, the kinds of things that others like you are buying and figuring out the next thing you want to buy. So those kinds of things are very widely used.
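In the spirit of the fraud check described above, here is a minimal sketch of structured-data decision support. The features, weights, and threshold are invented for illustration; a real system would learn them from millions of labeled transactions rather than hand-code them.

```python
# Toy fraud scorer over structured transaction data.
# All thresholds/weights below are made up for illustration only.

def fraud_score(txn):
    """Score a transaction dict; higher means more suspicious."""
    score = 0.0
    if txn["amount"] > 1000:                    # unusually large purchase
        score += 0.4
    if txn["country"] != txn["home_country"]:   # unfamiliar location
        score += 0.3
    if txn["txns_last_hour"] > 5:               # rapid-fire activity
        score += 0.3
    return score

def is_flagged(txn, threshold=0.5):
    """Decide whether to block the card pending review."""
    return fraud_score(txn) >= threshold

normal = {"amount": 40, "country": "US", "home_country": "US", "txns_last_hour": 1}
odd = {"amount": 2500, "country": "FR", "home_country": "US", "txns_last_hour": 8}
print(is_flagged(normal), is_flagged(odd))  # False True
```

The key property is the one Rajen describes: structured inputs go in, and a prediction (fraud or not) comes out to support a decision.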
Some things that most people have seen on that kind of second layer in terms of the classification of unstructured data, if you use things like Google Photos and you search for every picture of your family on a beach, it’s recognizing the beach, the picture of the beach, and it’s showing you those images. Those are now used pretty commonly. Even in things like healthcare where you’re looking at a radiology image and you’re trying to figure out is there something that a doctor should take a look at in this or examine more deeply, those are kinds of things where those technologies are being used. Generative technologies, I think we are still at the very beginning and I think we’re going to see over the course of the next year the kinds of places where those will be very helpful.
How should companies be thinking about how they can increasingly use AI?
I think what is going to be really important for companies to think about is what is the right tool at the right time to give their customer, so that they’re using AI in an efficient way. So for example, if I’m buying something, I don’t necessarily want something to start talking to me and say: Hey, would you want to buy this other thing over here? I would want the page or the app I’m looking at to surface more suggestions, or let me find more suggestions, for something. There are other times, though, when you’d want to have an open-ended conversation. And in those cases, I do think that generative technologies and the concept of conversational AI will really be helpful.
You see this already, and I think one of the best places you start to see this is with customer service. You see more and more places are using voice-based bots to be able to handle customer service. And so what we were finding for example, is that the fact that a consumer could get immediate service and get accurate service for common issues via a voice-based bot or a text-based bot was the thing that was really driving the concept of a voice assistant or a chat-based assistant. Those are going to get better and better as a result of this.
However, I would be very cautious about that because going back to those four stages, what this AI is very good at right now is generating content, but then it’s not going to be yet good at the accuracy, the judgment, or the empathy. And so for example, if you have an angry customer on the call, you’re not going to want to have that customer talk to a generative AI bot right now because you’re going to want a real person that’s going to be able to have the empathy to be able to handle the issue and have the judgment to be able to take the issue where it needs to go. Or if it’s a more common issue where you could put a bot in place, you want it to be really, really accurate.
How would you suggest companies think about these kinds of technologies replacing versus supplementing humans and other capabilities?
I think where you’d want to start with this is places where it can supplement and supercharge humans. And so looking at that first area of content generation, that’s where this is going to be really good right now. We’re seeing a lot of people start to use this, for example, to go and create content for their website. It’s something that typically would require a lot of thought and a lot of time, and these technologies like ChatGPT can do that in a very short amount of time.
What are other industries and or roles where you think ChatGPT and AI could most dramatically affect in the near term? And I wonder also if you have thoughts about journalism in particular.
There are definitely a lot of areas. Somewhere you want to create the first draft of a contract, let’s say, or a policy that you may have, those are things where it’s good. And then also just productivity use cases: if I wanted it to help me compose an email, those are things where I think it can be really, really strong right now. I do think that there is a strong use for this in journalism, where you can have it generate the first draft of something that you put out, say an article or a blog post or whatever it might be. But then I think there are things for people to think about with that. The information that comes out is not always accurate, so people are going to have to get very good at figuring out what is correct and what’s not correct in there. People will have to be strong editors, basically. And I think it’s incumbent on people to really make it their own and use this as a first draft as opposed to the final draft of what comes out.
Rajen, you are one of the leading thought leaders when it comes to artificial intelligence, so I want to make sure I ask you a few questions about how you expect AI to impact the broader culture and society, because as we know, it is a bit controversial. So maybe we’ll start with this concept of artificial general intelligence, commonly referred to as AGI. That is, I believe, defined as the point at which we’ll have created an AI system with at least human-equivalent capabilities. It’s actually been talked about and has appeared in science fiction for many years. How scared should we be about artificial general intelligence, and about AI overall?
We need to be very careful about the ethics of how AI is used. I do think that we are far from artificial general intelligence. One thing that I think people are kind of conflating here is that things like, for example, ChatGPT feel like artificial general intelligence, it feels like you’re talking to a human being, but really what it’s doing is it’s just taking the words from a set of human beings and bringing it together in a way that might make sense as a response. So it’s not actually thinking, and I think that’s a really key thing for people to realize. It doesn’t yet have that, again, accuracy, empathy, and judgment that you would expect out of a human. These are things that will get better and better and better over time. And I think it’s going to be incumbent upon us to figure out where are the ways that it can supplement what we are doing and what opportunities does it open up for us to be more productive. And I think that’s going to be a really interesting thing over the next 10 years as this technology becomes more and more powerful.
So you’ve keyed in on this point about empathy. Is it possible in your view for AI to have empathy or would it have sort of an intelligence that simulated or replicated empathy? What’s the right way to think about that?
I think it’s possible, but if you think about the cognitive load of what it takes for a human to be empathetic, there’s a lot of thinking that goes into that, and I don’t think a computer has that at this point. And I think you see that with some of the things that you’ve seen with GPT technologies that sometimes it creates content that is really, really good, sometimes it creates content that is not appropriate. And so I think that that’s something that will take a lot more research. We’re seeing this in my new company in Kyron Learning, as we’re thinking about how this can apply to education, we don’t believe that AI can ever replace an educator. And we think that AI working in coordination with an educator who has that judgment, who has that empathy, and knows how to work with the student is the best way to implement AI.
Obviously, there’s been a lot of fear across the academic space, middle schools, high schools, educators in general, the very real fear that students are going to use ChatGPT to essentially cheat on homework assignments to write papers for themselves. Some have said that the introduction of this technology is kind of like the introduction of the calculator into math class. It didn’t ruin math, it eventually kind of built itself into the culture so that students would learn how to use a calculator and in some ways enable a richer math education. So is that the right analogy here?
I don’t think we yet know. And I think that this is where we need to think about how do these technologies enhance the education process and how do they help educators, because ultimately the goal for us is to figure out how do we drive students to mastery of a concept. And whichever way they get there, how do we actually make it such that they’re able to understand this concept on their own once they leave a classroom? And so I think AI can play a really interesting part in there. So for example, the work that we are doing at Kyron Learning is figuring out how do we scale and extend teachers to make it so that they can bring their teachings to many more people using AI. And so in those kinds of cases, you’re making it such that students out there have access to many, many more teachers as a result of the combining of really great teachers and AI.
On what you’re talking about, I think that there are going to be some really interesting ways that this could be used. So for example, I was talking to some educators a couple of weeks ago about this, and one thing that they were pointing out is finding flaws in the wrong answer actually involves more thinking than actually creating the right answer. And so what you may be able to do is have this be something where it’s able to generate some content and then students can have the judgment over that content to figure out what if that content is correct, what’s not correct, how would they save that kind of thing in a better way, and that will end up helping them drive towards mastery.
But then I think we’re going to really need to think carefully about where this can be inserted to help scale the education process, make it so that more students have access to great education and are able to really help. So for example, one thing that we are seeing in earlier grades is this could be a great way for students to write a story together with something like ChatGPT, where they’re able to have a dialogue back and forth to go and create a story. And that actually drives engagement with the student, but then also drives them to go be able to and create content.
How should companies be thinking about the risks associated with using this technology, especially to their brand? You could imagine, for example, there being an outcry if a company was using AI or ChatGPT and not disclosing it. What’s the right way to think about that?
I think there are a couple of things that companies need to think about, and this is something we had to think about a lot at Google as we considered how to apply this kind of technology to industry use cases. The first is to have a strong set of principles about what you will and won’t do with AI technologies, and to really commit to those principles. That becomes tough, because there might be scenarios where you risk revenue by not using it in a particular way, but it is the right thing to do not to use the technology.
And the second thing is to be really transparent with users. For example, one thing that we really discovered is that we wanted to be very transparent with a user about when they’re actually working with something that’s AI-generated as opposed to a person. And that’s really, really important, because the expectations need to be set correctly. Another part of that transparency is also the why. Why is it telling you what it’s telling you? What’s the information behind that? How did it come up with that answer? So the concept of explainability is really, really crucial.
A third thing that I think is extremely crucial is bias. Remember that AI is trained based on data that you put into it. And if you’re putting biased data into the AI model, it’s going to create biased results. And so you need to be able to look at the model, look at the data, and understand the biases in that model, and then go correct for them. Because what you may end up doing is you may end up propagating a lot of bias that people may not know is there in the system.
Rajen Sheth, thank you so much. This was a fascinating conversation. We really appreciate your insights. Thanks so much.
Great. Well, thank you. I really appreciate it.
That was Rajen Sheth, the CEO of Kyron Learning and the former VP of AI for Google Cloud. Our conversation opened my mind to the powerful role AI is highly likely to play in our future. Given this, it seems increasingly necessary that thoughtful people play a role in deciding how to deploy it in a way that minimizes harm and maximizes good.
We hope you’ll join us next time for a brand new episode of Deciding Factors featuring another one of GLG’s network members. Every day, GLG facilitates conversations with experts across nearly every industry and geography, helping our clients with insight that leads to true clarity. Feel free to leave us a review on Apple Podcasts. We’d love to hear from you. Or email us at firstname.lastname@example.org if you have feedback or ideas for future show topics. For Deciding Factors and GLG, I’m Eric Jaffe. Thanks for listening.