
Content provided by Sarah Santacroce and Humane Marketer. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Sarah Santacroce and Humane Marketer or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://vi.player.fm/legal.

Using AI Like We're Human


Manage episode 413377078 series 1047241

Join us for another episode on the Humane Marketing podcast as we explore how to ethically partner with AI, with our guest, Naully Nicolas.

We talk about Naully's journey into the world of artificial intelligence, the crucial ethical and legal considerations surrounding AI implementation, and how AI empowers us to work smarter, not harder. Naully shares his PLATON framework, inspired by Plato and infused with philosophical principles, guiding us through the pillars of legality, accountability, transparency, objectivity, and neutrality.

Together, we envision the future of AI and work, inspiring us as Humane Marketers to embrace technology with empathy and mindfulness, shaping a future where humanity thrives alongside innovation.

What we addressed in this conversation:

  • How Naully got interested and started with AI
  • The ethical and legal considerations of AI
  • How AI enables us to work smarter not harder
  • Naully's PLATON framework, based on Plato and philosophical considerations (principles, legality, accountability, transparency, objectivity and neutrality, because in French Plato has an N at the end)
  • How Naully sees the future of AI and work
  • and much more...

---

Ep 187 whole episode

Sarah: [00:00:00] Hello, Humane Marketers. Welcome back to the Humane Marketing Podcast, the place to be for the generation of marketers that cares. This is a show where we talk about running your business in a way that feels good to you, is aligned with your values, and also resonates with today's conscious customers because it's humane, ethical, and non pushy.

I'm Sarah Santacroce, your hippie-turned-business-coach for quietly rebellious entrepreneurs and marketing impact pioneers, mama bear of the Humane Marketing Circle and renegade author of Marketing Like We're Human and Selling Like We're Human. If, after listening to the show for a while, you're ready to move on to the next level and start implementing, and would welcome a community of like-minded, quietly rebellious entrepreneurs who discuss with transparency what we're doing, what works and what doesn't work in business, then we'd love to welcome you in our Humane Marketing Circle. If you're picturing your [00:01:00] typical Facebook group, let me paint a new picture for you. This is a closed community of like-minded entrepreneurs from all over the world who come together once per month in a Zoom circle workshop to hold each other accountable and build their businesses in a sustainable way.

We share with transparency and vulnerability what works for us and what doesn't work, so that you can figure out what works for you, instead of keep throwing spaghetti at the wall and seeing what sticks. Find out more at humane.marketing forward slash circle. And if you prefer one-on-one support from me, my humane business coaching could be just what you need, whether it's for your marketing, sales, general business building, or help with your big idea, like writing a book.

I'd love to share my brain and my heart with you, together with my almost 15 years of business experience, and help you grow a sustainable business that is joyful and sustainable. If you love this [00:02:00] podcast, wait until I show you my mama bear qualities as my one-on-one client, and find out more at humane.marketing slash coaching.

And finally, if you are a marketing impact pioneer and would like to bring Humane Marketing to your organization, have a look at my offers and workshops on my website, humane.marketing.

Hello, friends. Welcome back to the Humane Marketing Podcast. Today's conversation fits under the P of Partnership, I'd say: we're partnering with AI. If you're a regular here, you know that I'm organizing the conversations around the seven P's of the Humane Marketing Mandala. And if you're new here and this is your first time listening, well, a big warm welcome.

You probably don't know what I'm talking about, these seven [00:03:00] P's in the mandala. Well, you can download your one-page marketing plan with the Humane Marketing version of the seven P's in the shape of a mandala at humane.marketing forward slash one page. Humane.marketing forward slash one page: that's the number one and the word page.

And this comes with seven email prompts to really help you reflect on these different P's for your business. For this conversation about partnering with AI in a humane way, I brought in my colleague, Naully Nicolas. Naully is a renowned digital transformation consultant with over 16 years of experience in IT engineering and 12 years in marketing, specializing in emerging technologies like Web3, the metaverse, and AI. Naully provides pragmatic advice to business leaders, particularly in [00:04:00] SMEs, navigating the complexities of the digital age. His stoic philosophy, combined with a profound understanding of the digital landscape, makes him an invaluable guide for companies seeking strategic opportunities in technology.

So what we addressed in this conversation with Naully is how he got started and interested in AI; the ethical and legal considerations of working with AI, especially as a humane marketer; how AI enables us to work smarter and not harder; and Naully's framework based on Plato and philosophical considerations: principles, legality, accountability, transparency, objectivity, and neutrality. Because in French, Plato is Platon and has an N at the end. [00:05:00] So that's where the neutrality comes from. And then also how Naully sees the future of AI and work, and so much more. So I'd say, without further ado, let's get into it.

Hi Naully, how are you? Come ti va?

Naully: I'm fine, and you?

Sarah: Yes, I'm great, thank you. You're in the middle of a move, so very stressful. We can't really use AI to help us move yet, or can we?

Naully: Yes, I actually used AI to do the planning for my move, so it was quite useful.

Sarah: Oh, wow. You'll have to tell us more about that.

But yeah, I'm glad to have this conversation in between trips and moving boxes and things like that. Because, yeah, we're super excited to have you come and teach an in-depth workshop on May 1st. [00:06:00] And this is just kind of like a teaser, and I'll ask you some questions that we then also have more time to go into on May 1st.

And so, if you're listening to this and feel like, oh, I want more of this content, then please join us on May 1st. It's a 90-minute workshop, humane.marketing forward slash workshop. But let's dive into it. And I'll just kind of start with: how did you get into AI, Naully, and what does it represent for you in this day and age?

Naully: How I discovered AI, I would say it's a normal step in my long career, because I've been working for almost 19 years in the IT universe. And also, since my childhood, I was very curious; I liked to take apart my own [00:07:00] PC and put it back together again. And also, in my personal view, I was there during the passage between the old internet, which was the era of Minitel for some, and, let's say, the first browsing on the internet. So it was in the nineties, I think, around that era.

And then I worked in IT for almost 20 years and I saw the progress. I also saw the constraints of, let's say, the digital world. And I discovered AI when I was reading a book. It was not only sci-fi books; I came across a book, I don't remember the name, and I was sure that the next step of our digital world would be AI. And I was also able to sense [00:08:00] the switch between, let's say, the old world, and I'm not that old, but the way that we used to interact with the computer and the new way that we are in this AI universe now.

Sarah: Do you feel like we're completely there in the AI universe or we're still like at the very beginning of it?

Naully: I think we are at the beginning, because most of the exposure that we have is only through ChatGPT and co, but I think it's only the tip of the iceberg. Because, and maybe your audience doesn't realize it, we're already using AI every day. So, for example, in Spotify, it's an algorithm, it's not AI per se, but we are using the data, right?

Yeah, like when you're browsing on Netflix or something. It's a kind of [00:09:00] AI which is serving you the best show after you finish one. Once you finish this show, there's also this one in which you might be interested. So,

Sarah: Yeah, so it's kind of this blurry line between algorithms that are kind of steering us towards where they want to go, and then also, yeah, AI for, like what you said, planning things like a move. And probably if you can plan a move with AI, you can also plan a vacation with AI. Like, you can do so many things, and we'll get into some more of that. But I think when I brought up the topic of AI, and, you know, ChatGPT is kind of the most known one right now, in the Humane Marketing Circle, our community, there's a lot of, I maybe wouldn't say it's fear, but I think it's fear or [00:10:00] hesitation. And then there's also all these ethical considerations, which obviously are very important for someone who's doing humane business and humane marketing.

So, yeah, what are some ethical considerations we should keep in mind when we're going down this road of using AI in our business?

Naully: I would say, if we talk in terms of fear, I can say we had the same when, I would say, the first social networks appeared. Some people were afraid: "Me, on Facebook? Never, never." Some people switched to Facebook anyway. But I think the thing that's different with AI is the fact that it can aggregate a lot of data, which is mostly personal data. And also the tricky thing is that it can be more [00:11:00] personalized than before. Because, I would say, before we looked up information in Google, but now we can create our own, I would say, ChatGPT in every sector. For example, I can create a personal coach GPT which contains all my, let's say, personal views or approaches that no other coach can have.

And there is the main, let's say, reflection about where those data are stored. Because now we can put in, let's say, more personal data, like the people that we have interactions with: name, date, address, and so on. And those can be stored in countries which are, I would say, less strict regarding data flows. For example, in the US they have the Patriot [00:12:00] Act, under which any federal agency can look into those data without asking you. So that's why in Europe they have the GDPR.

Sarah: Mm-hmm.

Naully: And now they want to enforce the EU AI Act, which is a kind of GDPR for AI. So it's to determine what is good usage of AI and what is, I would say, risky usage of AI.

Sarah: They're trying to kind of come up with laws. They're catching up, really. They have to catch up.

Naully: Yeah, because there is some issue, for example, with AI being used for credit scoring, because we have to determine who is responsible for these tools. Is it the developer? Is it the person who is using the tools? Or is it the user? [00:13:00]

Sarah: Right.

Naully: I would say it's the same as if you buy a car: there is a responsibility. Is it the one who drives the car, or is it the car manufacturer?

Sarah: Right.

Naully: So the main point of ethics in AI, I would say, is to determine the responsibility in the creation and the use of those tools.

Sarah: Right. Yeah. Because I think the one thing to keep in mind is that you can always go either way, right? You can use AI for good, or you can use AI for evil. And that's what we're all afraid of. When we talk about, oh, AI is going to take over, the robots are going to take over, well, we're afraid of things going in the wrong direction. And so is that what they're now trying to do: come up with legal responsibilities of who's [00:14:00] responsible for what?

Naully: Yeah, as I said before, it's not perfect or ideal, but it's better than nothing, because at least we have a framework to which some people can refer. So it's not the Wild West in terms of AI; there's some moral and legal framework in the use of AI.

Sarah: Right? Is this what happened after Elon Musk and a bunch of other people sent that open letter? Was that in response to that, or did it kind of happen anyway?

Naully: I think it's that. But there is also the fact that they don't want this kind of tool to be out of control, because things can go badly. And we can see countries like China that are not using AI [00:15:00] in the right way, mostly for surveillance of their citizens. And I think the countries in Europe, they don't want to go down that road. And I think there are also some moral issues there as well.

Sarah: Right, right, yeah. It's interesting, because everything happens so fast that governments and legal people have a hard time catching up with everything, because that's from the old paradigm. So it's just very slow and admin-heavy and all of that, right? So do you think there is ever going to be a point where they're on the same page and they've caught up?

Naully: Well, there is now, I think. More and more countries are, I would say, aligning on the same principles, because I think there are some universal [00:16:00] principles that you will find anywhere in the world, like the value of justice. I think everyone wants to be treated with the same standard of justice, and also to have the opportunity to question the AI. Because, let's say, in your everyday life you also have the right to question things: if you're arrested by a police officer, you have the right to have a lawyer and to appear before a tribunal. So it should be the same when we use AI. I think it's the same universal principle that you can find in any country, from Switzerland to France to Peru, anywhere. So,

Sarah: Yeah. Yeah. Okay. So we went a bit big picture in terms of, you know, what needs to change in society on the legal aspect, the justice aspect, for us to [00:17:00] work with AI. But now, if we take it down to our entrepreneurial level, how can we integrate AI in our businesses in a way that is ethical and makes us work smarter and not harder, but also stays away from, like, the one thing that I don't like about AI, which is this push towards even more productivity, even more working and, you know, more hustle? And I'm like, well, no, I think that's getting it wrong. It's like, we have this amazing tool that actually helps us work smarter, not harder, and then frees us up with more time to be more human. That's the way I look at it. So what are some practical ways that you have worked with entrepreneurs where they use [00:18:00] AI to work smarter and not harder?

Naully: The first thing that I tell my entrepreneurs is that AI is not there to replace you, but to help you. So you should consider AI as a tool, because AI is not perfect. By extension, AI was created by humans, and humans are not perfect per se. AI is also subject to what we call hallucination, because it's predicting, let's say, words; it's not contextualizing the words. So you also have to understand the limits of the AI: it can be considered a magic tool, but you have to understand that AI has its own limits. It won't surpass you in any way; it will simply help you, [00:19:00] maybe in terms of workload.

I would say that AI is a good tool if you want to, I don't know, manage your content. For example, if you are someone who loves to write content, it could be a good assistant. It won't replace you in creating your content, but it can help you to, I don't know, create a content schedule for the next two months. And then you can schedule those contents, and then you can manage your day-to-day life more easily, because you've already created your content for the next two months. So you can maybe take a day off, because before, you took, I don't know, one week to create your content, to write it and publish it. And that, I would say, saved time you [00:20:00] can spend elsewhere.

Sarah: Yeah. Yeah. You can actually invest it in human relations, right? Have coffee with a friend or something like that. Yeah. And I like how you said it can help you, it assists you. It can help you with brainstorming ideas and give you content ideas, you know, never-ending lists of content ideas. And it can then even help you, guide you through writing it. But I think we should not just rely on AI to now take over all the writing, because, knowing you and what you write, I would definitely be able to tell, I think, if all of a sudden it would just be AI writing it. Yes, you can probably train it to a certain extent to, you know, talk like you. And that's what I'm experimenting with as well. But then I [00:21:00] still go in; like, I still am the manager, right? And AI is the assistant, and then I have to change it and make sure it speaks like I do. So I think that's really important to understand, because what we see a lot out there is these bland-sounding things, right, where you can tell, oh, this is just, you know, AI-created content that has no humanness and no personality to it.

Naully: Yeah. Yeah.

Sarah: Yeah, exactly. And then Google actually just said that they're starting to punish the pages on Google that are only AI-created content. So that's a good move from them. Obviously they're a bit scared as well, I think, but yeah, I think that's a good move. So where would we, would you tell...

Actually, before that, I [00:22:00] know you have this framework based on Platon. And so I'm just wondering if you could explain a little bit your approach to philosophy and AI, because, from hearing that, it kind of sounds like an oxymoron: how do those two go together? But tell us more about it.

Naully: Yeah, for sure. So, I've always been, I would say, a huge history buff. I always have books in my house, and I love to read books about history and all kinds of subjects; it could be philosophy, psychology. And I found that philosophy was a good way to be grounded in what we are doing and what we are thinking, especially in this time when everything is going so fast and we can get lost [00:23:00] so rapidly.

And during one of my readings, I came across a biography of Plato, which is Platon in French. And I was thinking that maybe we could use some aspects of his thinking in how we approach technology, because sometimes we are using technology just because we are using it, but we don't really think about what we do with it. So that's how I came up with this idea of PLATON, which is Plato in French. And the first, let's say, P is for principles: what are the principles that I put in my content, or the principles behind why I'm using the AI? It asks me, [00:24:00] when I'm using a tool, which could be ChatGPT and so on, to ask myself what are the consequences of using this tool: whether the tool is, I would say, ethically based, or whether the people are treated, I would say, correctly or humanely. And then the L stands for legality. It's more about, when I'm using an AI tool, whether my content is not copyrighted by someone else. And actually, there is a huge debate about AI, because mostly they are using data scraped from the internet, and much of that data is copyrighted.

Naully: So you need to ask yourself: is the thing that I'm using completely legal or not? Then there's the A, [00:25:00] which is for accountability. I have to be aware that I'm using an AI tool. If, for example, I'm putting out wrong information because I used an AI tool, I have to be accountable for what I put out. And if I'm using an AI tool like Midjourney, maybe I should be aware that I might be using a copyrighted image from an illustrator, and, if needed, put an annotation when I'm using those kinds of images, also to be transparent. And the T stands for transparency. For example, it's to be transparent in the use of AI tools, especially if you're working as a journalist. You have to say, for example, that this part of your article has been [00:26:00] written with the help of an AI tool, or this image has been modified by an AI tool. For example, recently there was a journalist who made a documentary about the youth in Iran, and instead of using blurred images, he replaced the faces of the people being interviewed with AI-generated images. So they made a disclaimer saying that those people's faces had been generated by AI.

And the O stands for objectivity. You have to be conscious of why you're using the AI in your marketing. And of course, the N stands for neutrality, which says that, mostly when you're, let's say, using AI in marketing, you are [00:27:00] using the tool not in a harmful way. So you should be conscious that you are not using the tool to do harm or give false information.

Sarah: I love that. I love these words. So: principles, legality, accountability, transparency, objectivity, and neutrality. Yeah, they sound very humane, like, you know, they're very humane words, and it's a really good idea to, yeah, go into AI with these considerations, right? To think about that deeply. And we'll talk more about that in the workshop, and I think you've created a game, so I look forward to taking some questions from that game around that framework. Yeah. So, in terms of where we're going with this, because, like you said, we're just seeing a tiny [00:28:00] bit of the iceberg right now: where do you think we're heading in terms of entrepreneurs using AI? How is it going to take over more of our, yeah, workload? And, like, so many people last year, this year, I don't hear it so much anymore, but so many people were afraid of AI taking over their jobs. So, yeah, what do you see as future developments?

Naully: I think people fear what they don't understand, because it's really new. It's like when the first internet came up; we had the same fear, because people didn't know how to use it, or what it really is. And I think there is a lot of work to do in terms of education, in terms of educating people, because, I would say, it's difficult to stop technology. [00:29:00] So it's better to learn to live with it than to fear it. So I think it also asks us to maybe embrace the change, because a lot of people don't like change, and for some people change brings fear. Maybe, if they've worked in a job, like, I don't know, a service job for the last ten years, maybe they need to go to school again. And maybe they don't have the money, or don't have the energy, or maybe they're near retirement. So they ask themselves, why do I need to go to school when I only have five years left to work before I can retire?

And I think, I consider, we are on a good path. It's not the perfect one, but at [00:30:00] least we are not on the apocalyptic one, the one we can see in the movies, because I think we are able to see the fear. Also, there are some people who are pro, some people who are against. I think neither side has the monopoly on reason, and for now, I think it's in between; I think we should be on both sides. Maybe you have fear of this technology, but we can also embrace it, because maybe it can help us with our current, I would say, environmental issues, for example, or maybe with social issues as well. So I think there are a lot of challenges for this technology, and it's difficult to say what will happen in five years, ten years, because every few months, every [00:31:00] two weeks, there's a new AI app. So it's difficult to say what the future brings.

Sarah: Yeah, it will happen so fast, though, right? Like, that's the main thing with this AI technology. Like, I remember when ChatGPT came out, well, it had already been out, but nobody talked about it, and then within, let's say, three weeks, everybody was talking about it. And so that's probably going to happen again with the next thing, and the next thing, and the next thing. And what I like that you said is, like, yes, we're on the right path, because it would probably be really spooky if there was no fear at all. And I think that's kind of where Elon Musk and the gang got a bit freaked out, because they're like, whoa, this is going too fast. So they backed up a bit. And so I think that's a healthy [00:32:00] kind of relationship to something new that we need to learn to live with. And so I appreciate that.

Naully: I think it's I love to compare AI like the yin and yang.

Sarah: Mm.

Naully: It's like there should be an equilibrium between those two.

Sarah: Yeah. Mm-Hmm.

Naully: it can be good, it can be bad. I think it's a mix of, can be cannot, it can be not. Also fully and utopia. Or fully a dystopia.

Sarah: Right.

Naully: I think it should be both at the same time, so.

Sarah: A little mix. A little mix.

Naully: I think it's like us. Some days we are full of energy; some days we just want to lie in bed all day. And I think it's the circle of life also: we have spring, summer, autumn, [00:33:00] winter. I think it's a cycle. So,

Sarah: Yeah, and you're right. I mean, in the end it's created by humans, and so it's still humans that influence AI. And right now you can't say that humans are all good; like, we're in one of the biggest messes that we've ever been in. So how can we expect the AI to just be beautiful and loving and all of that? So I feel like if we're working on becoming better humans, then the AI will follow that trend. So that's, yeah, that's kind of my thought on that. But yeah, any closing thoughts that you have, like what you're going to talk about in the workshop? Maybe give us a little sneak preview of what we're going to do there.

Naully: I [00:34:00] think we are going to cover the ethics of AI, and also the foundations of AI: what it is and what it is not.

Sarah: Right? Yeah. And then also doing some breakout rooms, right? And also, yeah, working on different...

Naully: So we'll do some exercises on how the PLATON framework works.

Sarah: Yeah, I look forward to that framework and the questions from it. So, yeah, exciting. So, again, thank you so much for coming on, Naully. And if you're listening to this and you're interested in AI, but you're also just a little bit afraid of, you know, how it works in a business that is supposed to be humane, in marketing that is supposed to be humane, well, I invite you to join us for this workshop on [00:35:00] May 1st with Naully, because we're definitely going to approach it from the humane side of things. So,

Naully: I just want to say that with humane AI, the human is always in the loop, or not.

Sarah: Say that again? I didn't catch it.

Naully: I would say that with humane AI, the human is always in the loop. So, yeah.

Sarah: Yeah, that's nicely said. So yeah, do join us on May 1st. Go to humane.marketing forward slash workshop to reserve your seat, and Naully and I look forward to having you there. Thanks so much. Thanks for coming on to the podcast as well, Naully.

I hope you got some great value from listening to this episode. Please find out more about Naully and his work at nolinicola.ch, and [00:36:00] join us on Facebook for a 90-minute workshop on May 1st in the safety of our community, the Humane Marketing Circle. Members can attend these workshops for free, but you can join us with a pay-what-you-can amount between 15 and 27.

Find out more and reserve your spot at humane.marketing.com. And if you are looking for others who think like you, then why not join us in the Humane Marketing Circle? Find out more at humane.marketing forward slash circle. You'll find the show notes of this episode at humane.marketing forward slash hm187. And on this beautiful page, you'll also find a series of free offers, such as the Humane Business Manifesto and my two books, Marketing Like We're Human and Selling Like We're Human. Thank you so much for listening and being part of a generation of marketers who [00:37:00] cares, for yourself, your clients, and the planet. We are changemakers before we are marketers. So go be the change you want to see in the world. Speak soon.

Manage episode 413377078 series 1047241
Nội dung được cung cấp bởi Sarah Santacroce and Humane Marketer. Tất cả nội dung podcast bao gồm các tập, đồ họa và mô tả podcast đều được Sarah Santacroce and Humane Marketer hoặc đối tác nền tảng podcast của họ tải lên và cung cấp trực tiếp. Nếu bạn cho rằng ai đó đang sử dụng tác phẩm có bản quyền của bạn mà không có sự cho phép của bạn, bạn có thể làm theo quy trình được nêu ở đây https://vi.player.fm/legal.

Join us for another episode on the Humane Marketing podcast as we explore how to ethically partner with AI, with our guest, Naully Nicolas.

We talk about Naully's journey into the world of artificial intelligence, the crucial ethical and legal considerations surrounding AI implementation, and how AI empowers us to work smarter, not harder. Naully shares his PLATON framework, inspired by Plato and infused with philosophical principles, guiding us through the pillars of legality, accountability, transparency, objectivity, and neutrality.

Together, we envision the future of AI and work, inspiring us as Humane Marketers to embrace technology with empathy and mindfulness, shaping a future where humanity thrives alongside innovation.

What we addressed in this conversation:

  • How Naully got interested and started with AI
  • The ethical and legal considerations of AI
  • How AI enables us to work smarter not harder
  • Naully's PLATON framework, based on Plato and philosophical considerations: principles, legality, accountability, transparency, objectivity, and neutrality (in French, Plato is spelled Platon, hence the final N)
  • How Naully sees the future of AI and work
  • and much more...

---

Episode 187: full transcript

Sarah: [00:00:00] Hello, Humane Marketers. Welcome back to the Humane Marketing Podcast, the place to be for the generation of marketers that cares. This is a show where we talk about running your business in a way that feels good to you, is aligned with your values, and also resonates with today's conscious customers, because it's humane, ethical, and non-pushy.

I'm Sarah Santacroce, your hippie turned business coach for quietly rebellious entrepreneurs and marketing impact pioneers, mama bear of the Humane Marketing Circle and renegade author of Marketing Like We're Human and Selling Like We're Human. If, after listening to the show for a while, you're ready to move on to the next level, start implementing, and would welcome a community of like-minded, quietly rebellious entrepreneurs who discuss with transparency what works and what doesn't work in business, then we'd love to welcome you in our Humane Marketing Circle.

If you're picturing your [00:01:00] typical Facebook group, let me paint a new picture for you. This is a closed community of like-minded entrepreneurs from all over the world who come together once per month in a Zoom circle workshop to hold each other accountable and build their businesses in a sustainable way.

We share with transparency and vulnerability what works for us and what doesn't, so that you can figure out what works for you, instead of throwing spaghetti at the wall and seeing what sticks. Find out more at humane.marketing/circle. And if you prefer one-on-one support from me, my Humane Business Coaching could be just what you need, whether it's for your marketing, sales, general business building, or help with your big idea, like writing a book.

I'd love to share my brain and my heart with you, together with my almost 15 years of business experience, and help you grow a business that is joyful and sustainable. If you love this [00:02:00] podcast, wait until I show you my mama bear qualities as my one-on-one client. Find out more at humane.marketing/coaching.

And finally, if you are a marketing impact pioneer and would like to bring Humane Marketing to your organization, have a look at my offers and workshops on my website, humane.marketing.

Hello, friends. Welcome back to the Humane Marketing Podcast. Today's conversation fits under the P of Partnership, I'd say: we're partnering with AI. If you're a regular here, you know that I'm organizing the conversations around the seven P's of the Humane Marketing Mandala. And if you're new here and this is your first time listening, well, a big warm welcome.

You probably don't know what I'm talking about, these seven [00:03:00] P's in the mandala. Well, you can download your one-page marketing plan, with the Humane Marketing version of the seven P's in the shape of a mandala, at humane.marketing/1page. That's the number one and the word page.

And this comes with seven email prompts to really help you reflect on these different P's for your business. For this conversation about partnering with AI in a humane way, I brought in my colleague, Naully Nicolas. Naully is a renowned digital transformation consultant with over 16 years of experience in IT engineering and 12 years in marketing, specializing in emerging technologies like Web3, the metaverse, and AI. Naully provides pragmatic advice to business leaders, particularly in [00:04:00] SMEs, navigating the complexities of the digital age. His stoic philosophy, combined with a profound understanding of the digital landscape, makes him an invaluable guide for companies seeking strategic opportunities in technology.

So what we addressed in this conversation with Naully is: how he got started with and interested in AI; the ethical and legal considerations of working with AI, especially as a humane marketer; how AI enables us to work smarter and not harder; and Naully's framework based on Plato and philosophical considerations.

Principles, legality, accountability, transparency, objectivity, and neutrality. Because in French, Plato is Platon and has an N at the end. [00:05:00] So that's where the neutrality comes from. And then also how Naully sees the future of AI and work, and so much more. So I'd say, without further ado, let's get into it.

Sarah: Hi Naully, how are you? Come ti va?

Naully: I'm fine, and you?

Sarah: Yes, I'm great, thank you. You're in the middle of a move, so very stressful. We can't really use AI to help us move yet, or can we?

Naully: Yes, and I actually used AI to do the planning for my move, so it was quite useful.

Sarah: Oh, wow. You'll have to tell us more about that.

But yeah, I'm glad to have this conversation in between trips and moving boxes and things like that, because we're super excited to have you come and teach an in-depth workshop on May 1st. [00:06:00] And this is just kind of a teaser; I'll ask you some questions that we'll then have more time to go into on May 1st.

So if you're listening to this and feel like, oh, I want more of this content, please join us on May 1st. It's a 90-minute workshop: humane.marketing/workshop. But let's dive into it. I'll just kind of start with: how did you get into AI, Naully, and what does it represent for you in this day and age?

Naully: How I discovered AI, I would say it's a natural step in my long career, because I've been working for almost 19 years in the IT universe. And since my childhood I was very curious; I liked to take apart my own [00:07:00] PC and put it back together again. On a personal note, I was also there during the passage between the old internet, which was the era of Minitel for some, and, let's say, the first browsing on the internet. That was in the nineties, I think, around that era.

Then I worked in IT for almost 20 years, and I saw the progress, and also the constraints, let's say, of the digital world. And I discovered AI when I was reading a book. It wasn't only sci-fi books; I came across a book, I don't remember the name, and I was sure that the next step of our digital world would be AI.

And I was also able to sense [00:08:00] the switch between, let's say, the old world (and I'm not that old), the way that we used to interact with computers, and the new way, now that we are in this AI universe.

Sarah: Do you feel like we're completely there in the AI universe, or are we still at the very beginning of it?

Naully: I think we are at the beginning, because most of the exposure we have is only through ChatGPT and so on, but I think it's only the tip of the iceberg. Maybe your audience doesn't realize it, but we're already using AI every day. In Spotify, for example: it's an algorithm, it's not AI per se, but it is using the data, right? Like when you're browsing on Netflix or something. It's a kind of [00:09:00] AI which is serving you the next show after you finish one: once you finish this show, here's another one you might be interested in.

Sarah: Yeah, so there's this kind of blurry line between algorithms that are steering us toward where they want us to go, and then also AI for, like you said, planning things like a move. And probably, if you can plan a move with AI, you can also plan a vacation with AI; you can do so many things, and we'll get into more of that. But when I brought up the topic of AI (and ChatGPT is kind of the most noted one right now) in the Humane Marketing Circle, our community, there was a lot of... I wouldn't say it's fear, but, well, fear or [00:10:00] hesitation. And then there are also all these ethical considerations, which obviously are very important for someone who's doing humane business and humane marketing.

So, yeah, what are some ethical considerations we should keep in mind when we're going down this road of using AI in our business?

Naully: If we talk in terms of fear, I would say we had the same when the first social networks appeared. Some people said: me, on Facebook? Never, never. And some people switched to Facebook anyway. But I think the difference with AI is the fact that it can aggregate a lot of data, mostly personal data. And the tricky thing is that it can be more [00:11:00] personalized than before, because before we looked up information on Google, but now we can create our own, I would say, chat GPT for every sector. For example, I can create a personal-coach GPT which contains all my personal views or approaches that no other coach can have.

And that is the main, let's say, reflection: where are those data stored? Because now we can put in more personal data, like the names, dates, and addresses of the people we interact with. And those data may be located in countries that are less strict, I would say, regarding data flows. For example, in the US there is the Patriot [00:12:00] Act, under which any federal agency can look into those data without asking you. That's why in Europe they have the GDPR.

Sarah: Mm-hmm.

Naully: And now they want to enforce the EU AI Act, which is a kind of GDPR for AI. It's to determine what is good usage of AI and what is, I would say, risky usage of AI.

Sarah: They're trying to kind of come up with laws. They're catching up, really. They have to catch up.

Naully: Yeah, because there are some issues, for example with AI being used for credit scoring, because we have to determine who is responsible for these tools. Is it the developer? Is it the person deploying the tools? Or is it the user? [00:13:00]

Sarah: Right.

Naully: I would say it's the same as when you buy a car: there is a responsibility. Is it the one who drives the car, or is it the car manufacturer?

Sarah: Right.

Naully: So the main point of ethics in AI is to determine the responsibility in the creation and the use of those tools.

Sarah: Right. Yeah. Because I think the one thing to keep in mind is that you can always go either way, right? You can use AI for good, or you can use AI for evil. And that's what we're all afraid of. When we talk about, oh, AI is going to take over, the robots are going to take over, well, we're afraid of things going in the wrong direction.

And so is that what they're now trying to do: come up with legal responsibilities for who's [00:14:00] responsible for what?

Naully: Yeah, as I said before, it's not perfect or ideal, but it's better than nothing, because at least we have a framework that people can refer to. So it's not the Wild West in terms of AI; there is some moral and legal framework for the use of AI.

Sarah: Right. Is this what happened after Elon Musk and a bunch of other people sent that open letter? Was that in response to that, or did it kind of happen anyway?

Naully: I think it's that, but there is also the fact that they don't want this kind of tool to be out of control, because things can go badly. We can see countries like China using AI not [00:15:00] in the right way, mostly for the surveillance of their citizens, and I think the countries in Europe don't want to go down that road. And I think there are also some moral issues there.

Sarah: Right, right. Yeah, it's interesting, because everything happens so fast that governments and legal people have a hard time catching up with it all, because that comes from the old paradigm. It's just very slow and admin-heavy and all of that, right? So do you think there is ever going to be a point where they're on the same page and have caught up?

Naully: Well, there is progress now. I think more and more countries are aligning on the same principles, because I would say there are some universal [00:16:00] principles that you will find anywhere in the world, like the value of justice.

I think everyone wants to be treated with the same standard of justice, and also to have the opportunity to question the AI. In your everyday life, you have the right to question things: if you're arrested by a police officer, you have the right to a lawyer and to appear before a tribunal. It should be the same when we use AI. I think it's the same universal principle that you can find in any country, from Switzerland to France to Peru, anywhere.

Sarah: Yeah. Okay, so we went a bit big picture in terms of what needs to change in society on the legal aspect, the justice aspect, in order for us to [00:17:00] work with AI.

But now, if we take it down to our entrepreneurial level, how can we integrate AI into our businesses in a way that is ethical and makes us work smarter, not harder, but also stays away from the one thing I don't like about AI: this push toward even more productivity, even more working, more hustle?

And I'm like, well, no, I think that's getting it wrong. We have this amazing tool that actually helps us work smarter, not harder, and then frees us up with more time to be more human. That's the way I look at it. So what are some practical ways you have worked with entrepreneurs so that they use [00:18:00] AI to work smarter and not harder?

Naully: The first thing that I tell my entrepreneurs is that AI is not there to replace you, but to help you. So you should consider AI as a tool, because AI is not perfect; by extension, AI was created by humans, and humans are not perfect per se. AI is also subject to what we call hallucination, because it's predicting words, not contextualizing them. So you have to understand the limits of AI: it can be seen as a magic tool that can do anything, but you also have to understand that AI has its own limits. It won't surpass you in any way; it will simply help you, [00:19:00] maybe in terms of workload. I would say that AI is a good tool if you want to, I don't know, manage your content.

For example, if you are someone who loves to write content, it can be a good assistant, but it won't replace you in creating your content. It can help you, say, create a content schedule for the next month or two. Then you can schedule that content, and you can manage your day-to-day life more easily, because you've already created your content for the next two months. So you can maybe take a day off, because before, you would take, I don't know, a week to create your content, write it, and publish it. And that saved time [00:20:00] you can spend elsewhere.

Sarah: Yeah, you can actually invest it in human relations, right? Have coffee with a friend or something like that. And I like how you said it assists you: it can help you with brainstorming and give you content ideas, never-ending lists of content ideas.

And it can then even help you, guide you through writing it. But I think we should not just rely on AI to take over all the writing, because, knowing you and what you write, I would definitely be able to tell, I think, if all of a sudden it was just AI writing it. Yes, you can probably train it to a certain extent to talk like you.

And that's what I'm experimenting with as well. But then I [00:21:00] still go in; I'm still the manager, right? AI is the assistant, and then I have to change it and make sure it speaks like I do. I think that's really important to understand, because what we see a lot out there are these bland-sounding things, right, where you can tell, oh, this is just AI-created content that has no humanness and no personality to it.

Yeah, exactly. And Google actually just said that they're starting to penalize pages that are only AI-created content. So that's a good move from them. Obviously they're a bit scared as well, I think, but yeah, I think that's a good move. Actually, before we go on, I [00:22:00] know you have this framework based on Platon. And so I'm just wondering if you could explain a little bit about your approach to philosophy and AI, because from hearing that, it kind of sounds like an oxymoron: how do those two go together? But tell us more about it.

Naully: Yeah, for sure. I'm a huge history buff. I always have books in my house, and I love reading books about history and all kinds of subjects, philosophy, psychology. And I found that philosophy was a good way to be grounded in what we are doing and what we are thinking, especially in this time when everything is going so fast and we can get lost [00:23:00] very quickly.

During one of my readings, I came across a biography of Plato, which is Platon in French. And I was thinking that maybe we could use some aspects of his teachings in how we approach technology, because sometimes we are using technology without really thinking about what we are doing with it.

That's how I came up with this idea of PLATON, which is Plato in French. The first letter, P, is for principles: what are the principles behind my content, or the principles of why I'm using AI? It asks me, [00:24:00] when I'm using a tool like ChatGPT, what are the consequences of using this tool: is the tool ethically based, and are the people behind it treated correctly, humanely?

Then the L stands for legality. When I'm using an AI tool, is my content not copyrighted by someone else? There is actually a huge debate about AI, because these models mostly use data scraped from the internet, and much of that data is copyrighted. So you need to ask yourself: is the thing that I'm using completely legal or not?

Then there's the A, [00:25:00] which is for accountability. I have to own the fact that I'm using an AI tool. If, for example, I put out wrong information because I used an AI tool, I am accountable for it. And if I'm using an AI tool like Midjourney, I should be aware that I may be using copyrighted images from illustrators, and if needed, add an annotation when I'm using those kinds of images.

The T is for transparency: be transparent in your use of AI tools, especially if you're working as a journalist. You have to say, for example, that this part of your article has been [00:26:00] written with the help of an AI tool, or that this image has been modified by an AI tool. For example, recently a journalist made a documentary about young people in Iran, and instead of using blurred images, he replaced the faces of the people being interviewed with AI-generated faces, and they added a disclaimer saying that those faces had been generated by AI.

The O stands for objectivity: you have to be conscious of why you're using AI in your marketing. And of course the N stands for neutrality, which says that when you're using AI in marketing, you are [00:27:00] using the tool in a non-harmful way. You should be conscious that you are not using the tool to do harm or to give false information.

Sarah: I love that. I love these words: principles, legality, accountability, transparency, objectivity, and neutrality. They sound very humane, and it's a really good idea to go into AI with these considerations, right?

To think about that deeply. And we'll talk more about that in the workshop; I know you've created a game, so I look forward to taking some questions from that game around the framework. So, in terms of where we're going with this, because like you said, we're just seeing a tiny [00:28:00] bit of the iceberg right now.

Where do you think we're heading in terms of entrepreneurs using AI? How is it going to take over more of our workload? So many people last year and this year, though I don't hear it so much anymore, were afraid of AI taking over their jobs. So what do you see as the future development?

Naully: I think people fear what they don't understand. When the first internet came up, we had the same fear, because people didn't know how to use it or what it really was. And I think there is a lot of work to do in terms of education, in terms of educating people, because it's difficult to stop technology.

[00:29:00] So it's better to learn to live with it than to fear it. It also asks us to embrace change, because a lot of people don't like change, and for some people change brings fear. Maybe someone has worked in, I don't know, a service job for the last ten years.

Maybe they need to go back to school. Maybe they don't have the money, or don't have the energy, or they're near retirement. So they ask themselves: why do I need to go back to school when I only have five years left to work before I can retire? But I consider that we are on a good path.

It's not the perfect one, but at [00:30:00] least we are not on the apocalyptic one, the one we can see in the movies, because we are able to see the fear. There are some people who are pro and some people who are against, and I think neither side has the monopoly on reason. For now, I think it's in between; I think we should hold both sides.

Maybe you have some fear of this technology, but we can also embrace it, because maybe it can help us with our current environmental issues, for example, or with social issues too. So there are a lot of challenges for this technology, and it's difficult to say what will happen in five or ten years, because every [00:31:00] two weeks there's a new AI app.

So it's difficult to say what the future brings.

Sarah: Yeah, it will happen so fast though, right? That's the main thing with this AI technology. I remember when ChatGPT came out (well, it had already been out, but nobody talked about it), and then within, let's say, three weeks, everybody was talking about it.

And that's probably going to happen again with the next thing, and the next thing, and the next thing. And what I like about what you said is: yes, we're on the right path, because it would probably be really spooky if there was no fear at all. And I think that's kind of where Elon Musk and the gang got a bit freaked out. They were like, whoa, this is going too fast, so they backed up a bit. And I think that's a healthy [00:32:00] kind of relationship to something new that we need to learn to live with. And so I appreciate that.

Naully: I love to compare AI to yin and yang.

Sarah: Mm.

Naully: There should be an equilibrium between the two.

Sarah: Yeah. Mm-Hmm.

Naully: It can be good, it can be bad. I think it's a mix: it can't be fully a utopia or fully a dystopia.

Sarah: Right.

Naully: I think it's both at the same time.

Sarah: A little mix. A little mix.

Naully: I think it's like us. Some days we are full of energy.

Some days we just want to lie in bed all day. And I think it's the circle of life too: we have spring, summer, autumn, [00:33:00] winter. I think it's a cycle.

Sarah: Yeah, and you're right. In the end, it's created by humans, so it's still humans who influence AI.

And right now you can't say that humans are all good; we're in one of the biggest messes we've ever been in. So how can we expect AI to just be beautiful and loving and all of that? I feel like if we're working on becoming better humans, then AI will follow that trend. That's kind of my thought on that. But yeah, any closing thoughts? Like what you're going to talk about in the workshop; maybe give us a little sneak preview of what we're going to do there.

Naully: I [00:34:00] think we are going into the ethics of AI, and also the foundations of AI: what it is and what it is not.

Sarah: Right, yeah. And then also doing some breakout rooms, right? And also working on different...

Naully: Yes, we'll do some workshops, and see how the PLATON framework works.

Sarah: Yeah, I look forward to that framework and the questions from it. So, exciting. Again, thank you so much for coming on, Naully. And if you're listening to this and you're interested in AI, but you're also just a little bit afraid of how it works in a business that is supposed to be humane, in marketing that is supposed to be humane, well, I invite you to join us for this workshop on [00:35:00] May 1st with Naully, because we're definitely going to approach it from the humane side of things.

Naully: I just want to say that the human is always in the loop, AI or not.

Sarah: Say that again? I didn't catch it.

Naully: I would say that with AI, the human is always in the loop, or should be. So, yeah.

Sarah: yeah, that's, yeah, that's nicely said. So yeah, do join us on, on May 1st go to humane. marketing forward slash workshop to reserve your seat and Noli and I look forward to having you there. Thanks so much. You're there. Yes. Thank you. Thanks for coming on to the podcast as well, Noli.

I hope you got some great value from listening to this episode. Please find out more about Naully and his work at naullynicolas.ch, and [00:36:00] join us for a 90-minute workshop on May 1st in the safety of our community, the Humane Marketing Circle. Members can attend these workshops for free, but you can join us with a pay-what-you-can amount between 15 and 27.

Find out more and reserve your spot at humane.marketing/workshop. And if you are looking for others who think like you, then why not join us in the Humane Marketing Circle? Find out more at humane.marketing/circle. You'll find the show notes of this episode at humane.marketing/hm187.

And on this beautiful page, you'll also find a series of free offers, such as the Humane Business Manifesto and my two books, Marketing Like We're Human and Selling Like We're Human. Thank you so much for listening and being part of a generation of marketers who [00:37:00] care for themselves, their clients, and the planet.

We are change makers before we are marketers. So go be the change you want to see in the world. Speak soon.
