Episode #436
AI: Dangerous or Wonderful? Why Everyone Should Embrace It
AI is here to stay, and avoiding it is futile. By exploring AI tools like ChatGPT yourself, you'll discover ethical boundaries and practical applications, and you'll make sure you aren't left behind in an AI-driven future.
22 min

Audio in Dutch
Key takeaways
- AI is a reality that won't disappear, making it essential to learn rather than avoid it
- AI tools are neutral—neither good nor bad—and their value depends on how we use them
- Only by personally experimenting with AI can you discover your own ethical boundaries
- Learning AI now prevents being technologically left behind in 10-20 years
- AI should enhance human connection and service, not replace it entirely
Timestamps
00:00:00 Introduction: Why everyone should embrace AI
00:02:15 The broken computer story: Why understanding technology matters
00:05:30 AI resistance at the live event and the chocolate bliss balls revelation
00:08:45 AI is neither good nor bad: The ethical neutrality argument
00:12:20 AI-generated videos and deepfakes: Where do we draw the line?
00:16:40 The supermarket example: Using AI to enhance human service
00:20:15 Finding your own boundaries through experimentation
Show notes
In this walking podcast episode, Paul Vet explores why everyone should embrace artificial intelligence rather than resist it. Drawing from personal experiences and real-world examples, he argues that AI is neither inherently good nor bad—much like money, knives, or cars. The key is discovering your own ethical boundaries through experimentation. Paul shares stories ranging from helping elderly people with technology to his wife using ChatGPT for recipe creation, illustrating how AI can enhance rather than replace human connection. He challenges listeners to actively engage with AI tools to understand where they draw the line between helpful automation and concerning applications. Whether it's self-checkout kiosks or AI-generated content, Paul emphasizes that understanding these technologies now will prevent you from being the person who thinks their entire system is broken when it just needs a simple fix. The episode encourages personal exploration of AI's capabilities while maintaining awareness of ethical considerations and human values.
Topics
artificial intelligence, ChatGPT, AI ethics, technology adoption, future of work, AI boundaries, digital transformation, AI applications, technology resistance, personal development
Full transcript
This is the Paul Vet podcast, a podcast about hypnosis and mindset, ownership and entrepreneurship. Enjoy listening. In this episode, I want to talk to you about AI and why I think everyone, including you, should embrace it. And one of the reasons for that is it's simply here. You can also wish it wasn't here, but well, it is.
And two, well, I'll get to that in a moment. For the first time in a long while, I'm recording a walking podcast again. So you might hear ambient sounds, or hear me saying hello to people now and then. That comes with walking outside. I also have time for it again, for the first time in ages.
I became a father seven and a half weeks ago, so that means it's quite busy in my life. And it's already getting a bit busy here on the road. But I'll be walking into a park in a moment. So then I'll have all the time for you. AI, yes.
Very simple. It's simply here. So I think everyone should at least try it, but really should embrace it too. Because it's not going away. And I spoke to someone recently, an older person, and they asked me: 'hey, which laptop should I buy?
This one or this one? Because my computer isn't working anymore, it's completely frozen.' Well, I didn't have time to go to the store with them for a week, so I'd forwarded them a link to a laptop. Eventually they bought a different laptop through someone else, and then I came to their house to look at that computer, because it was completely frozen, they said. But guess what?
The receiver for the wireless keyboard and mouse was defective, so they thought: hey, the mouse pointer is frozen on the screen and the keyboard doesn't do anything either, so it's frozen and therefore broken. The only thing they needed to do was either buy a new receiver or, what I advised them and what they fortunately did, buy a wired mouse and a wired keyboard. Because then you can just continue, and you don't need to buy a new laptop at all. But this is the danger that awaits you and me in 10, 20, 30 years if we don't keep up with AI and delve into it.
And believe me, those developments are going so fast that it's very smart to start now and understand things about it now. I'm not an expert in it either, but what I do know is how the thinking behind AI works and how I can work with it. And then I'm really only talking about the simple language model ChatGPT. Because there are other applications, already in development, that I personally think are really cool. But it's also really becoming an ethical dilemma.
I had a live day a few days ago, and a few of the participants really dug in their heels when I mentioned AI. Then lunch came, and after lunch we had a nice bite, a nice snack. One of those snacks was chocolate bliss balls. And one of the participants had just finished a fast, so she could enjoy certain things again. She took a bite of a bliss ball and tears came to her eyes.
That's how much she enjoyed it. But guess what? My wife and I had made those bliss balls together, and my wife had found the recipe via ChatGPT. So suddenly, for that participant, there was an opening: AI is not only bad. And I think it's really important that you realize that too. AI is not only bad. In fact, where it's going we don't know exactly, but right now AI can't be seen as good or bad.
Just like money. Hello. The same applies to money: it's not good or bad. The same applies to a knife: it's not good or bad. The same applies to dish soap: it's not good or bad.
And the same applies to a car. Well, you and I can both think of things to do with a car, a knife, or dish soap that are good, or normal you could say, and things that are bad. Getting drunk behind the wheel: bad. Being angry at someone, I sometimes read it in the news, and then running that person over: bad. Using a knife to cut your food: good. Using a knife to, well, do something that's illegal: bad.
Dish soap, well, I accidentally drank it recently. Just a drop, though, because I'd already put a drop in a coffee cup to wash it, but then I thought, I'll have another cup of coffee. That's how I got it in. But imagine giving someone a few drops of dish soap, whether accidentally or not: bad. And the same applies to AI.
You can use it for good or for bad. And what exactly that is, I'm not going to tell you. If you were hoping for that, tough luck. Too bad. It's not yet possible to say what's good and what's bad, I think.
It's so new, and people are massively ignoring it and massively using it. Ignoring it seems stupid to me. Because one, you're not keeping up with the times, and two, and that's actually my other point, you're not discovering for yourself where the boundary lies. What is the boundary? Look, I personally find that boundary at this point, because you can also do this with AI: you can write a text.
You can have it written by AI. You can upload a photo of yourself and then have yourself speak the text you typed, as if you're saying it at that moment. Personally, I think that's bad. I honestly don't know how I'll think about that in 20 years. Because, and I'm always someone who plays devil's advocate, let me do that right away.
Suppose I have a certain philosophy and I speak it into my phone, just like I'm actually doing now. And from that philosophy I speak, I have ChatGPT make a summary. But in such a way that when you tell it to the audience, they understand the core message well. ChatGPT makes that summary. And I have an AI-generated video have me, as a person, speak this summary.
And this summary helps people organize their thoughts, or get more peace in their head, more clarity in their life. Is that bad or not? When I say it like this now, I think: yes, it could actually be a nice application to make this more easily and quickly available to people, and in such a way that it gets through. Now you could say, okay, does it come across better when I speak the video sincerely from my heart, even if it's still a bit muddled because I'm organizing my thoughts while I'm speaking?
Or should I first learn it all by heart, so that I already know I'm going to present it like this and tell exactly this, learn that text by heart and then present it? Or should ChatGPT generate that summary for me, which is much faster, and then it's presented to you from an AI program, in the video? Think about this, because what I'm saying is: it's not at all clear yet what makes AI good or bad. And that's why I want you to apply it. Because the moment you test it and try it out, you'll discover applications where you're certain this can't do any harm.
Simply put, my wife said: hey, I have these ingredients, two packages of something were still open from last week, so they had to be used up anyway. And I want to make something like chocolate balls. I like these flavors. What would be a good recipe?
And ChatGPT gives a good recipe. I think we can agree there's nothing wrong with that. But then suddenly the question, and that dilemma, comes up. Suppose she develops this recipe together with ChatGPT, and then she goes on to sell these chocolate balls. And she charges money for them.
Is it then good or bad? You can think about it, I'm not going to give an answer to that. But this is exactly the reason why I want you to use it. That you're going to apply it to things in your life. That you're going to discover okay, where can it help me?
That's just it. Where can it help me? And in that moment you also learn to use it, and you learn for the future: hey, how does this work? How does it fit together? So that you're not the one who, in 10 or 20 years, thinks: hey, this whole system has frozen, I'll buy a new one, and spends a few thousand euros on it. And then your grandchild, or your child, comes in.
I don't know how old you are, of course. And they say: but you just have to do this, and then it works again. Well, it's not about those few thousand euros, but you understand the principle. I once walked into IKEA, maybe 15 or 20 years ago. And there was a man standing there, waiting in front of a pillar.
Because he thought: well, I have to wait at this pillar. But on the screen it literally said: click here, go here. This man had never worked with computers. He thought: well, I sometimes see an employee walk to this pillar, so I'll just wait here. And that man just kept waiting.
At first I didn't even realize he wanted something. I thought, well, he's just waiting for his wife, for example. Until I thought: that gentleman actually seems to be looking for something. I asked: what do you need, sir? Yes, I want to go to the service desk, but nobody comes.
So I helped him with that. That's something I think we should also take into account in society. Look, you and I can still keep up in this phase of AI's rise. But there are people who are already too late. We should also offer them opportunities to just be able to continue living their lives.
I think that's important. I also think it's a really nice example. Well, maybe not the nicest example, but it's my own idea, and I think it's a beautiful one: the self-scan checkouts. Here around the corner from me, at the Albert Heijn, 2, 5, 8, 11 self-scan checkouts have appeared, and one regular checkout.
I think it's great, because I walk through the store, put everything straight into my bag or the crate, and I can pay immediately and go. But there you go, people have lost their jobs because of this. Of course, the maintenance and the system of the self-scan checkouts cost Albert Heijn money. But what I hope they do is this: okay, suppose you can let four staff members go because of the arrival of self-scan checkouts. Then let two go.
Or none at all, even better. And use those employees for the humanity in your store. Because what would it do for an Albert Heijn? Where the Albert Heijn sits opposite the Jumbo, and around the corner there are still loads of supermarkets. So there's competition too.
But what would it do for that specific store if they just placed four extra staff members, spread across all shifts, in the store with a special shirt saying 'Service', or 'Can I help you?'. And everyone who needs help immediately approaches that person. They're not stacking shelves. Because if I approach someone now and they're stacking shelves, then half the time that person has to call someone else, or they have to run all the way to the back.
It takes ages before you get an answer. Whereas if there are just people trained for that, for real service, to provide service. Not at first, of course, but eventually they know which questions are generally asked, and they can give a direct answer. So you walk up to such an employee, who is free, who isn't busy with other things, who is there especially for you.
And you get the most wonderful service when you ask for something. Where can I find this? Well, over there. Okay.
And do you have anything else on the list that you're not sure where it is? Yes, this. But I thought I'd look for it myself. No, no, I'll show you straight away. They might also give an explanation: okay, this kind of product, in our stores you can always find it here.
Just an example, you know. And yes, then I think that's making good use of a system. The self-scan checkout isn't filled with AI applications yet. Of course that will come. For the moment, if the system is simply supportive, so that more humanity and connection can arise, then I think it's a good thing, for example.
And suppose AI can bring that about. Is it then good or bad? So I wish for you, one, that you embrace it. So yes, try it, but also embrace it. Because it's here now.
And denying reality, no one has ever become happy from that. And two, by trying it, you'll discover for yourself: where is the ethics to be found? Where is it still good? Where is there still an ethical application?
And where not? But you only find that out by searching for that boundary yourself. I noticed that anyway during that live day too. Those were beginning coaches and therapists, and they often struggle with an ethical dilemma around asking for money.
Because yes, I want to help someone, so you shouldn't ask a lot of money for that, should you? Well, what is a lot of money? That's the question anyway. Because what's a lot for one person is nothing, or very little, for another.
And besides, the moment you drastically improve someone's life, that's also worth something. And if you pay, you pay attention. So those are all reasons why I believe in searching for your boundaries and stretching them by testing and trying, just like children do, to find out for yourself: okay, where is my boundary according to my norms and values? You can try to get that from an article, or now from a podcast, hoping that I'm going to tell you what your norms and values are. I can't do that, because it's different for everyone.
So you only find out one way: by testing it. How do you find out your physical boundaries? By testing them. It's that simple. But often we don't even dare to go towards that boundary. Often we even find it scary to come near those boundaries at all, or to take a small step in their direction.
And that's just really a shame. Because if you don't know your own boundaries, are you then going to live by the boundaries of someone else? Or are you going to live a life that's average? Because average isn't good: too many people are unhappy. Too many people are on medication like antidepressants, and other pills too, because they're simply not healthy in their body.
The average person struggles with being overweight. I just think that's a shame. So I wish for you that you don't necessarily live according to the average, but that you live to your own best ability. And best ability within what you also find most enjoyable.
I simply wish everyone a nice life. And for many people that life can be nicer when they move a bit more towards their boundaries. And when you do that, you also get to know them. Children can do that very well and freely, although we sometimes find that difficult too. I mean, if you see a group of boys and they're fighting with each other.
Yes, that's often a bit of competition and searching for the boundaries. And as long as that goes in a, yes, fairly reasonable way, let me put it that way. Yes, then a boy gets hit once. As long as afterwards they're just friends again, and there's nothing else behind it, like one of them always being bullied. But you see that it's just a group of friends, perhaps even.
Where it's actually being determined: okay, what is the hierarchy within the group? Yes, then sometimes a punch can be thrown, or there's pushing, pulling, shouting, crying and pain suffered. No one has ever died from a bruise. It might sound scary to say that, or to think about it. But that's just really nature.
Hey. And yes, I wish the same for you with AI. So that you're going to discover, hey, what is AI? What does it do for me? How can it help me?
And where is the boundary for me? And that can easily be somewhere else than for someone else. And then it's sometimes also valuable to listen to that other person and check: hey, why is that boundary there for you? As I said, if you can deploy AI so you can help more people, hello, why wouldn't you do it? That depends a bit on the way.
Because when I say 'why wouldn't you do that', that makes it very black and white. But on the other hand: suppose you work entirely with AI and have it work completely for you, so you can help 100,000 people to a better life. Then I would say: do it. But there again, what is a better life? What do you think?
Or do you think that? Does AI think that? Does science think that? Does Piet Paulus think that, then? Is he actually still alive?
I don't actually know. But you get what I mean. So I would really want to ask you: go explore for yourself. If you don't use ChatGPT yet, for example, or use it only a little, just do it daily.
Just ask questions to ChatGPT daily, or take things you'd search on Google and ChatGPT them, to throw in a new verb. That's what I wish for you. So I would say: get started with it. And if it's clear to you where the boundary lies, I find that interesting to hear, of course.
So definitely add me on Instagram or LinkedIn. Then we can always have a nice conversation about that there. I hope of course that you're doing well. I hope it's going well with your loved ones. And I wish you a beautiful day.
---
This transcript has been translated from Dutch.
Frequently asked questions
Why does Paul believe everyone should embrace AI?
Paul argues that AI is an unavoidable reality of modern life, and resisting it only ensures you'll be left behind. By actively engaging with AI tools now, you learn how they work, discover practical applications, and develop your own understanding of ethical boundaries. Ignoring AI is like the person who thought their entire computer was broken when only the wireless receiver needed replacing—you miss opportunities and make costly mistakes through ignorance.
Is AI inherently good or bad according to this episode?
Paul firmly believes AI is neither good nor bad—it's a neutral tool, like money, knives, or cars. The ethical implications depend entirely on how humans choose to use it. A knife can prepare meals or cause harm; similarly, AI can generate helpful recipes or create deceptive deepfakes. The responsibility lies with users to discover where their personal ethical boundaries lie through experimentation and thoughtful application.
What practical example does Paul give of positive AI use?
Paul shares how his wife used ChatGPT to create a chocolate bliss ball recipe based on ingredients she had available. This simple application demonstrates AI's practical value without ethical concerns. The recipe was so good it brought tears of joy to a workshop participant—ironically, someone who had been resistant to AI just hours earlier. This example shows how AI can enhance everyday life in harmless, beneficial ways.
How can AI enhance rather than replace human connection?
Paul uses the example of supermarket self-checkout systems. Instead of simply eliminating cashier jobs, stores could redeploy those employees as dedicated service staff wearing special shirts asking 'Can I help you?' These workers wouldn't stock shelves or multitask—they'd focus entirely on customer service, creating more human connection than traditional checkout lines ever did. AI should free humans for more meaningful interactions, not eliminate them entirely.
What does Paul recommend as a first step with AI?
Paul encourages listeners to start using ChatGPT daily for questions they'd normally search on Google. This hands-on experimentation helps you understand AI's capabilities, discover useful applications, and begin forming your own opinions about ethical boundaries. He emphasizes that personal experience is the only way to truly understand where you stand on AI usage—you can't learn your boundaries from articles or podcasts alone.
Get in touch
Want to learn more or collaborate? Feel free to reach out.

