
The dramatic rise of ChatGPT and similar forms of AI is being described as a watershed moment and has some wondering about the future of virtually everything. Kim Parlee discusses the benefits and risks with Phil Davis, founder of PhilStockWorld.com.
[AUDIO LOGO]
* The dramatic rise of ChatGPT and similar forms of AI have some wondering about the future of virtually everything, including how we consume news, how we consume information, as well as the risks and benefits of this new technology and what it means from an investment perspective. Who better to talk to about all of this than one of our favorite guests, Phil Davis, founder of philstockworld.com, who's going to join us for the entire show.
* Phil, it's always great to have you with us. How are you?
* Hi, Kim. I just want a quick disclaimer that I'm not a real person.
* [LAUGHS] You're real to me, Phil. And that's all that really matters, real enough. Listen, let's just start with chat-- AI's been around for a while. But with the introduction of ChatGPT out there, is this a watershed moment? Why or why not?
* Why or why not? Well, look, this is a very big breakthrough in technology. I mean, we had the Watson thing on Jeopardy six, seven years ago. And that was the first real glimpse of what AI's capable of. But now you've got something much, much more powerful that's at your fingertips.
* Already there are 4 billion people who use Google. There are 150 million people so far using ChatGPT. And you can already see how it's basically become so much of a focus of what people are paying attention to these days. And it's going to grow very fast from here as Microsoft adopts it.
* Let's talk a bit about the benefits. I mean, I actually know-- and I was telling you before we started, I have a friend who's using it to write proposals. And he's been able to quadruple his business because he can write that many more. There are tons of ways this can boost productivity. So what do you see as the benefits?
* I can see it taking away the tedium of work. I mean, there's so much that it can do. It's basically going to be able to gather mountains of data and conduct research in seconds. Things that would have taken somebody weeks of work are now going to get done and put aside so you can move on to the next project. Productivity is going to receive a huge boost from this.
* What are the top risks?
* [LAUGHS] Well, the risks are kind of the same, aren't they? I mean, you've got this real easy way to get work done. But it's not 100%. People can act on junk information. In fact, just this morning, I had the chatbot read the news headlines and tell me which ones were going to be important, right? And I had to break it up into two sections.
* First section, it gives me all the headlines. And it gives me notes on the headlines of the news of the day. And I'm looking it over. Then I ask it to do section two. And then I'm looking at section two and what it wrote, and it was junk. It was just nonsense.
* Section one was great. I could have proofread it and said that was a fantastic job. Section two, junk, it was amazing.
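As a rough illustration of the headline-triage request Phil describes above, here is a minimal sketch using the OpenAI Python client. The model name, the sample headlines, and the prompt wording are assumptions for the example, not anything from the interview.

```python
# Minimal sketch of "read the headlines and tell me which ones matter".
# Assumptions: the OpenAI Python client (openai >= 1.0), an OPENAI_API_KEY
# set in the environment, and a placeholder model name.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Hypothetical headlines purely for illustration.
headlines = [
    "Central bank signals rates may stay higher for longer",
    "Chipmaker beats earnings estimates on AI demand",
    "Retail sales dip slightly in February",
]

prompt = (
    "Here are today's news headlines:\n"
    + "\n".join(f"- {h}" for h in headlines)
    + "\n\nFor each headline, add a short note and say whether it is likely "
    "to matter to markets today. Then rank them by importance."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

As Phil's experience suggests, the output still needs proofreading; nothing guarantees the second half of a long response is as good as the first.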
* Yeah, and I'm assuming that over time there'll be less junk and more benefit, and that's just the training that has to happen. But part of it's just the volume of stuff, because I know that you were talking with one of the producers earlier about books, for example. I mean, how many books have been written in the history of time? And then did you try to write a book using this? Is this a thing?
* I did, actually. It was very interesting last week. Here's what I think the biggest underlying problem is. In all of human history, there have been 105 billion people on the Earth. And those people have written about 130 million books. So effectively, that's about one book for every 1,000 people who have ever lived. And realistically, an author writes something like 10 books, so only about one out of 10,000 people has ever written a book.
* But now we have this thing. And I proved this two weeks ago in our chat. I was giving an example. And I said, look, watch this. I said--
* This is all I said: how do I get my dog to pee outside? That was number one. It gives me a chapter on exactly how to train your dog to go outside.
* So that was so fast, I said, holy cow, this could be a chapter of a book. Then I said, give me 20 chapter titles for what should be in a book on how to train a dog. And it gives me 20 chapter titles. And all I have to do then is just ask it the 20 questions from each chapter title, put them together, throw some pictures on, which I can do with another AI. And I wrote a book.
* Wow.
* Not a great book but a book.
* No offense, I'm not reading the book. I'm just saying, I'm sure it's great. But I'm not reading it.
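As a rough sketch of the book-drafting workflow Phil describes (ask for 20 chapter titles, then ask for each chapter in turn, then stitch the answers together), a loop like the one below could do the job. The model name, the prompts, and the `ask` helper are assumptions for illustration, not something shown in the interview.

```python
# Rough sketch of the chapter-titles-then-chapters workflow described above.
# Assumptions: the OpenAI Python client (openai >= 1.0), OPENAI_API_KEY set,
# and a placeholder model name.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply as plain text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: ask for 20 chapter titles for a dog-training book.
titles_text = ask(
    "Give me 20 chapter titles for a book on how to train a dog. "
    "Return them as a numbered list, one per line."
)
titles = [
    line.split(".", 1)[-1].strip()
    for line in titles_text.splitlines()
    if line.strip()
]

# Step 2: ask for a chapter on each title and stitch the answers together.
chapters = []
for title in titles:
    body = ask(f"Write a book chapter titled '{title}' about training a dog.")
    chapters.append(f"{title}\n\n{body}")

book = "\n\n".join(chapters)
print(book[:500])  # preview the opening of the draft
```

Not a great book, as Phil says, but a book; pictures would have to come from a separate image-generation tool.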
* Just to go back to what you said about the junk that came out the first time around. When you think about how AI should be regulated, or how it gets monitored, is that something regulation could solve?
* I think not. Any more than you would have regulated a typewriter or a word processor to stop people from overproducing at the time, I don't see that this is a thing you want to be reining in. And frankly, I think the biggest problem we have now with these AIs is they don't let them learn.
* Microsoft's Bing resets after every six questions. ChatGPT also will reset after about a dozen questions. And that's not learning. They're not going to learn from their mistakes. They're not going to get better.
* And the reason they do it is they don't want them to go down these dangerous rabbit holes and start spouting off racist remarks or something like that. So they start from scratch where all the rules are fresh and so on and so forth. But you're also not letting it learn. And at some point, we've got to let these things learn.
* Let me ask you, I guess, some longer-term questions. Just bringing it right back down to stuff that's happening today, you mentioned Bing, Microsoft's search engine, of course, incorporating ChatGPT. Google, of course, was the preeminent search engine. How vulnerable are they to this?
* [LAUGHS] It's funny you say was. They're still-- they haven't been affected by this really at all, because it's only 150 million people out of 4 billion Google customers that are doing this right now. Google is 93% of all search, all search, even on phones, everything. Bing is 3%.
* So it's a long, hard road before this is a threat to Google. And Google has a bot. It clearly isn't ready. They tried to jam it out last week or two weeks ago. And it just isn't ready for prime time yet. But none of them are, really. So it's just a question of whether Google is willing to make those mistakes in public or not.
* Interesting. When you think ahead in terms of where AI is going-- I've got a young son. I'm sure lots of people are thinking, what industries, what careers is this going to affect? And which ones won't be affected as much by AI?
* It's going to affect every industry. But if you want to try to find a safe place to hide, you have to learn to work with these things. Jobs that require social empathy, like being a therapist, a social worker, a teacher, will be OK. And even though there are really great AI art programs, artists and the creative kind of people, writers, musicians, they'll be OK for a while.
* And then you think of leadership jobs that require innovative thinking, like being managers, executives, even politicians. Although, honestly, I'd like to replace them all with AIs.
* [LAUGHS] Careful what you ask for, Phil.
* Imagination jobs--
* But anyway--
* Yeah, imagination jobs, analysts, engineers, scientists, things where you're thinking outside the box, thinking of things that haven't happened yet. That's where the AIs have their weakness.
[AUDIO LOGO]