11 Warnings about Using AI in Content-Creation (including podcasting)

 
“Artificial intelligence” (“AI”) has made huge leaps in ability within a very short time. It was only a few years ago that I felt like I was on the cutting edge teaching how to use AI tools like Jasper (originally called “Conversion.ai” and then “Jarvis”), even before ChatGPT was released.

Now, AI has become so prominent that it's almost surprising if a software company of any size is not offering some kind of AI-based solution.

While inflation has skyrocketed the prices of almost everything, the cost of accessing AI has dropped significantly. When I first started using AI, a good plan with access to only one central AI system cost $99 per month. But now, a tool like Magai gives you access to a whole bunch of different language- and image-based AI tools starting at only $19 per month!

(As an affiliate, I earn from qualifying purchases through these links. But I recommend things I truly believe in, regardless of earnings.)

All this potential means we need to quote the line from Spider-Man, “With great power comes great responsibility.”

And that's why I want to share these warnings with you: to advocate for responsible use of generative AI, large language models (LLMs), machine learning, or whatever you want to call it.

These warnings apply to any kind of content-creation, not only podcasting!

(And in case you're wondering, I did not use AI to create any of this content, but I might be using some AI to transcribe or help me market this content.)

Aside: most warnings apply to generative AI, but not repurposing or enhancement AI

Before I get into my list of warnings about using AI, I want to clarify that these are focused on using AI to essentially create something from nothing. I still think AI can be a great assistant for your content: for example, processing audio or video, clipping excerpts, suggesting marketing approaches, improving how things communicate, repurposing, and more. All of those things start with your intelligence, and then the AI works from that.

But I see most of these warnings as applying solely to generative AI, or when you start with nothing but a prompt.

Now, on to the warnings!

1. Undisclosed use of generative AI can get you in trouble

YouTube, social networks, and lots of other websites and platforms are starting to require you to disclose whenever you're putting out content generated by AI. And I think this is a good thing, as it helps the potential audience know what kind of quality to expect.

Even for things like podcast transcripts, it's good to disclose whether AI was used to transcribe the audio. As I mentioned in my previous episode about using podcast transcripts, someone on your podcast might say, “I love two li'l puppies,” but the AI might transcribe it as, “I love to kill puppies.” Sometimes, even omitting a single word can drastically alter the meaning. For example, imagine accidentally omitting the “not” in a sentence like, “I'm not guilty.”
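
To make that risk concrete, here's a minimal sketch (in Python, using only the standard library) of one way to review an AI transcript: diff it word by word against a human-corrected version so that small but meaning-changing differences stand out. The two phrases are the examples above; everything else is illustrative.

```python
# A minimal sketch: surface word-level differences between an AI transcript
# and a human-corrected version, since a single changed or dropped word
# ("two li'l" vs. "to kill", a missing "not") can flip the meaning.
import difflib

ai_transcript = "I love to kill puppies and I'm guilty"
corrected = "I love two li'l puppies and I'm not guilty"

# ndiff marks words only in the AI version with "-" and words only in
# the corrected version with "+".
for token in difflib.ndiff(ai_transcript.split(), corrected.split()):
    if token.startswith(("-", "+")):
        print(token)
```

Even a check this simple would catch the dropped “not” before the transcript goes public.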

This doesn't necessarily mean you must disclose every time you use AI in any capacity (the way you must disclose whenever you're compensated for anything you talk about), but you should be aware of each platform's requirements and seek to always be above reproach.

And if you're concerned about how it might affect your reputation if you disclose every time you use AI, then here's a radical thought: maybe don't use AI! (More on this in #11.)

2. AI often “hallucinates” facts and citations

ChatGPT, Claude, Grok, Gemini, and all the text-based AIs we know are also called “large language models” (or “LLMs”). And I think that's a much better term, too, because they're not actually intelligent; they are simply good with language.

This is why you'll often see LLMs write something that grammatically makes sense, but is conceptually nonsense.

In other words, LLMs know how to write sentences.

For example, I sometimes like to ask AI, “Who is Daniel J. Lewis?” Not because of any kind of ego complex, but because I'm an interesting test subject for LLMs: I'm partially a public figure, but I also have a name very close to that of a celebrity, Daniel Day-Lewis. Thus, the responses LLMs give me often conflate the two of us (a mistake I wish my bank would make!). I've seen responses that both describe me as a podcasting-industry expert and highlight my roles in There Will Be Blood and The Last of the Mohicans. (And by writing those two things together just now, I'm not helping any LLMs that scrape my content!)

So for anything an AI or LLM writes for you, I urge you to fact-check it! I've even seen some responses completely make up citations that don't exist!

3. AI lacks humanity

From the moment of conception, you have always been a unique being of tremendous value and potential with unique DNA, unique experiences, unique thoughts, unique emotions, and more. Like a snowflake, there will never be someone—or something—exactly like you! Not even an AI trained on all of your content!

AI is not an actual intelligence and I believe it never will be. And AI will never be human.

But you are. You can feel, express, and empathize through emotion. You can question, explore, change your mind, and change others' minds. You can create things of great beauty and originality with no outside prompting.

And it's because of this that I think AI can never replace you. While it might have better skills than you in some areas, it will never beat the quality and personableness that you can offer.

4. AI-created images can be humiliating

AI image models have produced some hilarious or nightmarish results, and lots of things that are physically impossible! Just as AI can hallucinate facts and citations, it can also make images that look real until you actually pay attention to the details.

I think this teaser for Despicable Me 4 illustrates it accurately, as does The Babylon Bee's explanation of ChatGPT. (Both videos are embedded in the original post.)

Lest you think this is only outdated models producing bad content, here are some things I've actually seen from current-generation AI image models:

  • Backwards hands
  • Limbs that seamlessly merge into the surroundings
  • Misspelled text that you might not notice unless you try to actually read it
  • Device parts that disappear into nowhere
  • Placements that are physically impossible
  • Broken, slanted, or curvy lines that absolutely should be straight
  • Incorrect size ratios

Watch out for these things! For any image you generate (or that someone else gives you that they might have generated with AI), look at it very carefully to ensure everything about it makes sense and isn't simply a pretty—but embarrassing—combination of pixels.

For this reason, you might actually want your image AI to make artwork that is obviously not photorealistic.

5. AI is biased because it was fed biased content and programmed by biased people

The following is not to push a particular political or moral direction, but just to expose some facts! Most LLMs lean a particular political and moral direction because they were trained with content that leaned that direction. Thus, even if not intentional, the outputs will often have that same leaning.

Imagine it this way. If the majority of content on the Internet—especially the most popular sites—said that 2 + 2 = 5, then LLMs trained from Internet content would also propagate that fallacy.

Furthermore, many of the companies behind these AIs or LLMs also lean the same political and moral direction as the majority of the Internet, and so they will favor content from the same echo chamber and sometimes even intentionally train the AI to push that agenda.

Look at the shamefully bad images of people that Google Gemini was originally generating, even going so far as to render Nazis as Asians or blacks instead of whites, because of “diversity, inclusion, and equity!”

And that's why there's a market for LLMs that lean the opposite direction.

Even taking out the political and moral leanings, I see LLMs regularly put out “mythinformation”—even in the podcasting space, like saying that podcast ratings and reviews affect your rankings in Apple Podcasts. That's not true! But it's been said so many times on the Internet that LLMs think it's true!

6. Content from AI always needs editing

It's because of warnings #2–#5 that I come to this one: edit, edit, edit!

I'd love to hear your opinion on this, too. But I'm starting to think it reflects worse on someone when they put out bad AI-created content than if they put out authentic content with typos or small mistakes. Do you agree?

For example, you might accidentally write about “George Wishington,” but an AI might say that George Washington fought in World War II! In this case, your typo is a human error, and your meaning could probably still be understood from context. But if you put out something that an AI hallucinated, then people have to wonder whether you're actually that misinformed (AKA “stupid”).

7. AI-generated content raises copyright concerns

In the United States of America, and some other countries, anything you create is considered immediately and automatically protected by copyright, and thus you reserve all rights to it. (That's why it's really not necessary anymore to write “All rights reserved,” at least most of the time.)

But you also share or forfeit some of your rights when you consent to using some tools or publishing through some platforms. For example, most places have clauses in their terms of service that allow them to use the content you provide (in whatever form it is) in their own marketing materials. This could be as simple as your podcast cover art visible with 999 others on a grid image for an app's homepage. Or it could mean you granted the platform a license to clip your content in an advertisement for their platform.

While most of these terms of service have been safe (despite some fear-mongering), some places are starting to update their terms of service—requiring your consent—and giving themselves a license to use your content to train their AI tools. Even if your content has a registered copyright, you are still granting other places licenses to use your copyrighted content.

However, it's being uncovered that many LLMs were trained on copyrighted material without any license from the copyright holders.

And if you use an LLM to generate new content from nothing, you might be infringing on someone else's intellectual property rights, and you could be held liable for that. It's just like hiring a cheap “designer” to make your podcast cover art who steals images from a Google image search: you would be liable for that theft.

Some might argue that this isn't very different from going out, reading all the content yourself, and writing your own conglomeration of your newfound knowledge. But even then, you can be guilty of plagiarism by putting forth something as your idea, when it was actually someone else's.

And the more niche the subject, the less information there was to train the AI, and thus the higher chance of it outright copying other information, or making up something factually incorrect (see #2).

This is probably never a problem when you're using AI on your already-created content.

8. AI might already be “stealing” your intellectual property

I've had my own original content and images plagiarized or directly stolen before. But AI is only making it easier for that to happen and harder for me to catch it.

For example, I often talk about my Podcasting P.R.O.F.I.T. Paradigm: popularity, relationships, opportunities, fun, income, or tangibles (and why you should put podcasting PROFIT first). If someone used an AI to talk about podcasting profit and replaced only one of those words, it's still theft, but it wouldn't be as easy to spot.
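
As a purely hypothetical illustration of why that's hard to spot, here's a minimal sketch using Python's standard-library difflib: a near-copy with one swapped word still scores extremely close to the original.

```python
# A minimal sketch: a one-word substitution in a copied phrase still
# produces a very high similarity score, which is why this kind of
# near-copy is hard to catch by eye or by exact-match searching.
from difflib import SequenceMatcher

original = "popularity, relationships, opportunities, fun, income, or tangibles"
near_copy = "popularity, relationships, opportunities, enjoyment, income, or tangibles"

ratio = SequenceMatcher(None, original, near_copy).ratio()
print(f"Similarity: {ratio:.0%}")  # roughly 90% or more, despite the swap
```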

And because LLMs have been trained on a large percentage of the Internet, it's very possible your own content has already been scraped and used in the training. But you might never know.

Many places are proposing legislation that would require AI companies to disclose their sources, allow people to have their content removed or exempted, or only use properly licensed content for training the AI models. (This is why some AI companies have taken an interest in purchasing publishing companies that own the rights to large amounts of content.) And I think you should have this protection over your content even without having to do the technical processes of blocking all the AI user agents from scraping your website (or transcribing your audio or video content).
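
If you do want to block AI crawlers from your own site, the usual mechanism is robots.txt directives. Here's a minimal sketch that generates those rules for a few publicly documented AI user agents. The token list is an assumption to verify against each company's current documentation, and honoring robots.txt is voluntary on the crawler's part.

```python
# A minimal sketch: generate robots.txt rules asking AI crawlers not to
# scrape your site. These user-agent tokens have been publicly documented
# (GPTBot by OpenAI, CCBot by Common Crawl, Google-Extended by Google,
# anthropic-ai by Anthropic), but verify the current list yourself,
# since tokens change over time and compliance is voluntary.
AI_USER_AGENTS = ["GPTBot", "CCBot", "Google-Extended", "anthropic-ai"]

rules = "\n\n".join(
    f"User-agent: {agent}\nDisallow: /" for agent in AI_USER_AGENTS
)
print(rules)  # append this output to your site's robots.txt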

And all it takes for social-media sites to do the same is a simple, non-obvious change to their terms of service, which most of us accept by clicking “I have read and agree” without actually reading what we're agreeing to. For example, Reddit, Zoom, and X-Twitter have used (or continue to use) content on their platforms to train their own AI models—and we've probably given them the rights to do so.

Also watch out for terms of service that allow the AI to train itself on whatever you input. That's the case for ChatGPT, but supposedly not for any use of OpenAI's GPT models (which power ChatGPT) through an API (such as what Magai uses).

9. Claiming “fair use” might require a higher standard

I think anyone using AI might face a more difficult time trying to use “fair use” as a legal defense, especially if they haven't properly disclosed their use of AI, like I talked about in warning #1.

One of my favorite things to test on an image-generating AI is giving it the prompt, “Harrison Ford as a pirate.” That's simple innocent fun that I will probably never publish for the public.

But imagine if I used AI to make a realistic photo of Harrison Ford using or endorsing my products, or if I used a voice or video AI to make Harrison Ford say something he never said.

Indeed, I've seen some intentionally hilarious results with AI. And those kinds of things are often allowed when they don't cause harm and are obviously parodies (this is not legal advice; it's only an observation).

But AI lets things easily get far more complicated. Copying or making a derivative work has some clear limitations. But generating something that seems real and uses someone else's likeness or intellectual property might be in a whole different category.

Thus, while I cannot give you legal advice as to what you're allowed to do, I can urge you to not do anything that might get you in trouble! So maybe pretend there isn't even such a thing as “fair use” when it comes to how you use AI to create stuff for you.

10. Affiliate-marketing with AI might get you in trouble

AIs, LLMs, or whatever you want to call them are very good at creating a lot of content very quickly. And that is very alluring to people who want content only for the purpose of promoting their affiliate links. I saw that years ago when I was among the early users of what's now called Jasper: people would frequently ask what kinds of inputs could get an AI to write a full “review” of an affiliate product.

But remember that thing about how large language models are good at writing sentences? They're not actually good at testing products, sharing experiences, and offering opinions. Thus, using AI to write a “review” could lead to misleading information.

For this reason and others, some places will probably start to forbid using AI to create content that promotes their products through affiliate links.

For example—and I haven't heard anyone talking about this!—the Amazon Associates Operating Agreement was updated on March 1, 2024, with the following addition:

Revised the language in Section 5 of the Participation Requirements to clarify that Program Content and Special Links should not be used in connection with generative AI.

“Associates Operating Agreement – What’s Changed,” March 1, 2024, accessed April 14, 2024.

That initially seems like it's forbidding the use of generative AI to promote your Amazon affiliate links. However, the actual points in the operating agreement seem to restrict using AI on the Amazon site content, and especially for training the AI.

2.(e) You will not, without our express prior written approval, access or use PA API or Data Feeds for the purpose of aggregating, analyzing, extracting, or repurposing any Product Advertising Content or in connection with any software or other application intended for use by persons or entities that offer products on an Amazon Site, or in the direct training or fine-tuning of a machine learning model.

5. Distribution of Special Links Through Software and Devices

You will not use any Program Content or Special Link, or otherwise link to an Amazon Site, on or in connection with: (a) any client-side software application (e.g., a browser plug-in, helper object, toolbar, extension, component, or any other application executable or installable by an end user) on any device, including computers, mobile phones, tablets, or other handheld devices (other than Approved Mobile Applications); or (b) any television set-top box (e.g., digital video recorders, cable or satellite boxes, streaming video players, blu-ray players, or dvd players) or Internet-enabled television (e.g., GoogleTV, Sony Bravia, Panasonic Viera Cast, or Vizio Internet Apps). You will not, without or [sic?] express prior written approval, use, or allow any third party to use, any Special Links or Program Content to develop machine learning models or related technology.

“Associates Program Policies,” accessed April 14, 2024. Emphasis added.

That first part is clearly forbidding using the Amazon API with an AI model to programmatically create content for you. However, it seems to still allow you to use AI to create your content about the product itself, and even use your affiliate links in that content.

But I still think you shouldn't!

I, for one, would love to see a stop to all the worthless AI-generated “reviews” on YouTube and other places. For example, probably anything from “The Smart Kitchen” on YouTube. (A sample video is embedded in the original post.)

As an owner of a couple of affiliate programs myself, I know that I would not want anyone promoting my products with AI-generated content. In fact, I'm going to update my affiliate terms to explicitly forbid that! I want real people with real experiences promoting my products! (For a good example, Danny Brown did this very nicely when he authentically promoted my Podgagement service in his recent episode of One Minute Podcast Tips about two ways for podcasters to get feedback from their audiences.)

11. Relying on AI can cost you your authority and influence

Lastly, but certainly not least, I urge you to consider the intangible cost of relying on any kind of AI as you podcast or create any other content.

I've said for many years that what I love about podcasting is that it allows you to communicate with your own voice, so people can hear your authentic emotions and they can hear how well you communicate your thoughts, even if you do some editing.

Imagine if you used AI to create and communicate all “your” content, and then you were put on a stage in front of a live audience with no preparation. Aside from any stage fright, could you actually communicate your message authentically, understandably, and memorably?

Several years ago, I was invited to speak in person about podcasting to a Cincinnati business group. And for the first time ever in my life, I completely forgot about it! I remembered only because, about an hour before I was supposed to speak, the organizer sent me a kind message just to say how excited she was to have me and, I think, to give me a heads-up about parking.

The event was about 45 minutes away, so I had only enough time to throw some stuff in my car, and think about my presentation on the way up.

Now imagine if AI was my crutch and most of my content had been created, organized, or even optimized by AI.

Instead, I was able to speak for half an hour and confidently and thoroughly answer 15 minutes of questions, all with no notes except a 5-word outline in my head. And I think I nailed it!

I could do that because I know my stuff! And I don't share this to brag or to make you think I'm amazing, but to point out what a catastrophe it could have been if I were merely a fraud using ChatGPT.

So don't let AI cost you your authority and influence.

Certainly, artificial intelligence can be a really powerful tool to help you do many things or save lots of time, but don't trade your value for AI.

Engage your audience and grow your podcast!

Do you ever feel like your podcast is stuck? Like you're pouring your heart into your podcast but it seems like no one is listening?

Try Podgagement to help you engage your audience and grow your podcast!

Get speakable pages to simplify engaging with your audience, accept voicemail feedback (with automatic transcripts), track your ratings and reviews from nearly 200 places, and more!

Ask your questions or share your feedback

  • Comment on the show notes
  • Leave a voicemail at (903) 231-2221
  • Email feedback@TheAudacitytoPodcast.com (audio files welcome)

Follow The Audacity to Podcast

Disclosure

This post may contain links to products or services with which I have an affiliate relationship. I may receive compensation from your actions through such links. However, I don't let that corrupt my perspective and I don't recommend only affiliates.

The post 11 Warnings about Using AI in Content-Creation (including podcasting) first appeared on The Audacity to Podcast.
