Tonic Digital


Month: October 2024

 Balancing Free Speech and Responsibility

Thursday, 24 October 2024 by Tonic Digital

Section 230, a key part of U.S. Internet law, helped shape the modern Internet. However, the rise of advanced algorithms, AI, and the rapid growth of social media platforms has made it harder to interpret and apply this law.

The recent decision by the US Court of Appeals for the Third Circuit in Anderson v TikTok Inc., which held that Section 230 did not shield TikTok from liability for its algorithmic recommendations, could reshape the landscape for social media platforms because it departs from the broad immunity the section has traditionally provided.

Section 230 of the Communications Decency Act

Section 230 of the Communications Decency Act is a pivotal piece of U.S. Internet legislation, passed in 1996, that shields social media platforms and other websites hosting user-generated content from liability for what their users post. Section 230 operates in two ways.

First, it protects social media platforms from liability if a user posts something illegal or harmful. Second, it allows platforms to moderate content in good faith, meaning they can remove or restrict access to objectionable content.

The impact of modern technological advances on Section 230

Twenty-six words in Section 230 distil its legal force while simultaneously creating its key ambiguity: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Determining who is a publisher and who is merely a distributor was never simple, and advances in modern technology have compounded the ambiguity.

Some of the advances that have had a significant impact on Section 230 are:

1. Algorithmic Content Recommendations

Platforms like Facebook, YouTube, and TikTok no longer passively host user-generated content; they actively recommend content to users based on their preferences and behaviour. These algorithms can amplify harmful content such as misinformation, conspiracy theories, and dangerous challenges, as occurred in Anderson v. TikTok.

Court cases and legal discussions now consider whether platforms should be shielded from liability when their algorithms promote or recommend harmful content. While courts have generally continued to protect platforms, the Third Circuit ruled in Anderson v TikTok that TikTok could be liable. This marks a change in one court’s view of Section 230.

2. Artificial Intelligence and Content Moderation

The rapid growth of the internet and the sheer volume of material posted mean platforms use AI-powered tools to moderate content, identifying and removing harmful material like hate speech, extremist content, and misinformation. While AI can assist with content moderation at scale, it also introduces complexities regarding biases, errors, and inconsistencies in enforcement.

AI content moderation raises questions about platforms’ responsibility. Critics argue that platforms should be subject to more scrutiny regarding the effectiveness of their AI moderation systems. If AI fails to remove illegal or harmful content, platforms might face increasing calls to limit their Section 230 immunity.

3. Deepfakes and Misinformation

Advances in AI and deep learning have led to the rise of deepfakes—realistic but fabricated videos and audio clips that can spread false information. This technology complicates the regulation of harmful content because it can be challenging to detect and prevent.

While Section 230 has historically protected platforms from liability for user-generated content, deepfakes present new challenges. There are calls for reform to hold platforms accountable for failing to detect or remove malicious deepfakes, especially those that cause real-world harm (e.g., political disinformation or manipulated videos of public figures).

4. Massive Scale of Platforms

As mentioned, the growth and size of platforms like Facebook, YouTube, and TikTok—each with billions of users—make traditional content moderation difficult. Automation helps with moderation, but this often leads to over- or under-enforcement.

As platforms grow, critics argue that they should be treated differently under Section 230. Reform proposals have suggested reducing or removing Section 230 protections for the largest platforms, which are better equipped to invest in content moderation while keeping protections in place for smaller startups that lack these resources.

5. Targeted Advertising and Data Collection

Platforms increasingly use user data to target ads, and this business model has led to concerns that harmful content (such as misinformation or extremist content) is promoted because it generates engagement, which drives advertising revenue.

Some have argued that this business model means platforms should face liability for promoting harmful content if they profit from its spread through targeted advertising. This has sparked discussions around reforming Section 230 to account for platforms’ financial incentives when harmful content thrives.

6. Geopolitical Influence and Misinformation

Foreign actors can exploit platforms to spread disinformation or influence elections, as seen in cases like the 2016 U.S. election. Bots, AI-powered accounts, and the algorithmic amplification of divisive content have become significant concerns.

Consequently, there are calls for revising Section 230 to hold platforms accountable when they fail to prevent foreign interference or misinformation campaigns. Platforms’ global reach makes it more challenging to address these issues through existing legal frameworks.

Technological advances complicate the application of Section 230, making it more difficult to justify blanket immunity for platforms that not only host content but also amplify, moderate, and profit from it. 

As AI and algorithms increasingly shape online experiences, there is growing momentum for reforms to narrow the scope of Section 230’s protections or hold platforms accountable for algorithmic recommendations and the consequences of automated decision-making. 

The ruling in Anderson v TikTok departs from the protection normally afforded to platforms, but whether the decision is upheld and other courts follow the lead of the Court of Appeals for the Third Circuit remains to be seen. If they do, the impact on social media platforms will be significant.

What it would mean for social media platforms

If Section 230 of the Communications Decency Act (CDA) were overruled or watered down, it would have significant implications for social media platforms and the broader internet. 

Some of these implications include:

1. Increased Liability for Content

Without Section 230, platforms could be held responsible for what their users post. This might force platforms to implement stricter content moderation policies to avoid lawsuits. It could discourage platforms from allowing open discourse because any harmful, defamatory, or illegal user-generated content could lead to legal action.

2. Stricter Content Moderation

Platforms might become far more cautious about what content they allow. They would likely implement extensive moderation systems or use more automated systems (like artificial intelligence) to monitor and filter content. This could lead to more aggressive removal of posts, reducing the amount of user-generated content.

3. Reduction in User Participation

With stricter moderation or a fear of being sued for controversial content, platforms might limit the types of users they allow. There might be fewer opportunities for free and open conversation, making these platforms more tightly controlled. Users might migrate to smaller or decentralised platforms with less strict rules.

4. Impact on Smaller Platforms and Startups

Overturning Section 230 would likely benefit larger platforms with the resources to handle legal challenges and implement complex moderation systems. Smaller platforms or startups might struggle with the increased cost and legal risks, potentially stifling innovation and competition.

5. Content Polarization

Without Section 230, platforms could become more polarised, either tightly controlling content or allowing it almost unrestricted to avoid accusations of bias. This could exacerbate existing divides in social media ecosystems.

6. Less Tolerance for Controversial Content

The fear of being sued could lead platforms to take down or censor more content pre-emptively, including content that may not be illegal but is controversial. This could have chilling effects on free speech and expression on the internet.

7. Possible Legal Fragmentation

Different states or countries may adopt their own laws governing content moderation and liability, leading platforms to implement region-specific policies. This could fragment the user experience by location, with some regions subject to stricter rules than others.

8. Increased Legal Battles

Social media companies could face lawsuits from individuals, groups, or governments claiming harm from specific posts or content. These legal challenges could result in financial burdens and put some companies out of business.

In summary, overruling Section 230 could lead to a more regulated, less open internet, where platforms take a much more conservative approach to allowing user-generated content, potentially reshaping the entire landscape of online communication.

Efforts to reform Section 230 primarily focus on increasing accountability for harmful content while retaining the law’s role in protecting free speech and innovation. The challenge lies in balancing the benefits that Section 230 provides to online platforms with the growing concerns over the harmful effects of misinformation, hate speech, and illegal activity that can flourish under its protections.

  • Published in Data, marketing, SEO

Media Release | Arctic Basecamp Amplifies Its Global Voice with Tonic Digital

Friday, 11 October 2024 by Tonic Digital

For over a decade, scientists at Arctic Basecamp have been communicating and sharing crucial, actionable information about the poles and their critical relationship to climate change worldwide.

  • Published in Data, marketing, SEO

Misinformation and Moral Panic

Friday, 11 October 2024 by Tonic Digital

While it seems the amount of misinformation circulating within society is greater than ever, has it actually increased, or is that impression simply a consequence of the heightened fear caused by moral panic?

  • Published in Data, marketing, SEO

Google’s Balancing Act

Friday, 04 October 2024 by Tonic Digital

In a significant announcement on July 22, Google officially reversed its earlier decision and advised that it will now retain third-party cookies.

  • Published in Data, marketing, SEO
