InDirectica
How Does China’s Approach To AI Regulation Differ From The US And EU?

By admin | July 19, 2023

Artificial intelligence and geopolitics go hand in hand.

On Thursday, July 13th, 2023, the Cyberspace Administration of China (CAC) released its “Interim Measures for the Management of Generative Artificial Intelligence Services.” In it, the Chinese government lays out rules for those who provide generative AI capabilities to the public in China. While many provisions focus on familiar AI governance concerns such as IP protection, transparency, and discrimination, other sections are unique to China, such as adherence to socialist values and a prohibition on generating content that incites action against the State.

Unsurprisingly, AI stands apart from other technologies in how it has spurred major countries to closely examine their national and geopolitical standing with respect to its development.

Given these recent developments in China, let’s explore how these regulatory approaches differ from those in the United States and the European Union.

China

Key Theme: state control; economic dynamism

The Great (AI) Firewall

The Chinese government sees AI as a strategic technology that can help it achieve its economic and geopolitical goals, and it has been actively promoting AI development and adoption. However, China’s approach also raises concerns about privacy and civil liberties, as the government has used AI for surveillance, censorship, and social control. Generative AI poses risks to state control beyond those the internet already presents.

Under the new regulations, firms must obtain a license to provide generative AI services to the public and submit a security assessment if their models have public opinion attributes or social mobilization capabilities. Generative AI providers in China must uphold the integrity of state power, refrain from inciting secession, safeguard national unity, preserve economic and social order, and ensure that their products align with the country’s socialist values.

China has also been building a bureaucratic toolkit for proposing new AI governance laws quickly and iteratively, allowing it to adjust regulatory guidance as new use cases of the technology are adopted.

AI as an Economic Tool

Despite the Chinese government’s concerns about generative AI applications, the country is deeply committed to investing in AI across sectors. China accounted for nearly one-fifth of global private AI investment in 2021, attracting $17 billion for AI start-ups. In research, China produced about one-third of both AI journal papers and AI citations worldwide in 2021. Researchers estimate that AI could create upwards of $600 billion in economic value annually for the country. Expect China to continue investing in AI to support its transportation, manufacturing, and defense sectors. The manufacturing and distribution of semiconductors will also play a critical role in AI development.

China will ensure information generated by AI aligns with the interests of the Chinese Communist Party (CCP). However, recognizing AI’s economic potential, China will strategically utilize it to enhance global commercial and technological goals.

United States

Key Theme: self-regulation; pro-innovation

Federal Approach is TBD

The United States Congress has taken a relatively hands-off approach to regulating AI thus far, though Democratic Party leadership has expressed its intent to introduce a federal law regulating AI, and Republicans will likely present their own version. We consider the likelihood of such a law passing Congress to be low. The country’s regulatory framework is largely based on voluntary guidelines, such as the NIST AI Risk Management Framework, and on industry self-regulation.

However, US federal agencies are likely to step in to regulate within their jurisdictional authority. For example, the Federal Trade Commission (FTC) has been active in policing deceptive and unfair practices related to AI, particularly by enforcing statutes such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act. The agency has released publications outlining expectations for AI development and use: training AI on representative data sets, testing AI before and after deployment to avoid bias, ensuring explainable AI outcomes, and establishing accountability and governance mechanisms for fair and responsible use. In addition, certain sectors, such as healthcare and financial services, are subject to sector-specific regulations related to AI.

While the US generally favors a “light touch” approach to regulation in order to foster innovation and growth in the AI industry, the country is starting to align with the EU on international AI cooperation, though specifics remain unclear. Most initiatives revolve around trade, national security, and privacy.

State and Local Take the Lead

In a recent post, we outlined how to navigate the AI regulatory minefield at the US state and local levels. In 2018, California adopted the California Consumer Privacy Act (CCPA) in response to the European Union’s General Data Protection Regulation (GDPR). Given the lack of federal enforcement, we expect US states to enact their own AI legislation, creating a patchwork of state-level regulations for companies to comply with.

In New York City, Local Law 144 requires employers and employment agencies to provide a bias audit of automated employment decision tools. Colorado’s SB21-169 protects consumers from unfair discrimination in insurance practices that use AI, and California’s AB 331 requires impact assessments from developers and deployers of automated decision tools. Moreover, state legislatures in Texas, Vermont, and Washington are introducing legislation requiring state agencies to inventory all AI systems being developed, used, or procured, which would likely require government contractors to disclose more fully where AI is used in their public sector contracts.
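Bias audits of the kind Local Law 144 mandates center on computing selection rates and impact ratios per demographic category. A minimal sketch of that calculation, with hypothetical group names and counts (a real audit must follow the law’s published rules, not this simplification):

```python
def impact_ratios(selections):
    """Compute impact ratios per group for a bias audit.

    selections: dict mapping group name -> (selected_count, total_count).
    Each group's selection rate is divided by the highest group's rate;
    ratios well below 1.0 flag potential adverse impact.
    """
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: (candidates selected, candidates assessed)
data = {"group_a": (48, 100), "group_b": (30, 100)}
print(impact_ratios(data))
# group_b's ratio (~0.625) falls below the common four-fifths threshold
```

The four-fifths (0.8) threshold referenced in the comment is a long-standing convention from US employment selection guidelines, not something Local Law 144 itself invented.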

We expect US states and localities to continue introducing legislation to regulate AI in specific use cases.

European Union

Key Theme: consumer protection; fairness & safety

Global standard for AI regulation

Much like GDPR before it, the EU’s AI Act is likely to become the global standard for AI regulation, changing how many machine learning engineers do their work. The proposal includes a ban on certain uses of AI, such as facial recognition in public spaces, as well as requirements for transparency and accountability. Most importantly, organizations must assign a risk category to each AI use case and conduct a risk assessment and cost-benefit analysis before implementing a new AI system, especially one that poses a “heightened risk” to consumers. Controls to mitigate those risks should be determined and integrated into the business units where risk can arise. From an enforcement standpoint, Europe has learned lessons from GDPR that it will likely apply to AI, such as member-state enforcement agencies and better incident response.
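The Act’s tiered logic can be illustrated with a toy classifier. The four tier names (unacceptable, high, limited, minimal) follow the proposal, but the use-case keywords and simple lookup below are purely illustrative assumptions; real classification turns on the Act’s annexes and legal analysis, not keyword matching:

```python
# Illustrative only: maps example use cases to the EU AI Act's four
# proposed risk tiers. The example use cases are assumptions for the
# sketch, not quotations from the Act.
RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time public facial recognition"},
    "high": {"hiring", "credit scoring", "medical diagnosis"},
    "limited": {"chatbot", "deepfake generation"},  # transparency duties
}

def risk_tier(use_case: str) -> str:
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"  # default tier: no extra obligations

print(risk_tier("hiring"))          # high
print(risk_tier("spam filtering"))  # minimal
```

In practice the tier determines the obligations that follow: unacceptable uses are banned, high-risk systems require conformity assessments and documentation, and limited-risk systems mainly carry transparency duties.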

Risk assessments will likely become standard practice for AI implementation, helping organizations understand the cost-benefit tradeoffs of an AI system and enabling them to provide transparency and explainability to impacted stakeholders. Our partners at the Responsible AI Institute are among the leading institutions helping organizations conduct risk assessments.

Conflicting Perspectives on AI innovation

The proposed regulation has been criticized by some as overly burdensome, creating additional costs and administrative responsibilities for organizations already overwhelmed by regulatory complexity. The EU argues that it is necessary to protect individuals from the potential harms of AI.

Interestingly, according to a recent Accenture report, many organizations see regulatory compliance as an unexpected source of competitive advantage. In fact, 43% of respondents think it will improve their ability to industrialize and scale AI, and 36% believe it will create opportunities for competitive advantage and differentiation. Organizations in regulated sectors like healthcare and finance are wary of developing and deploying AI with few guardrails in place. Coherent AI regulations that clarify responsibilities and liabilities would let organizations adopt AI with confidence. The EU is betting on this.

Parting Thoughts

It is clear that each model (US, EU, and China) reflects its region’s societal values and national priorities. These diverging requirements are also potentially in conflict with one another: complying with China’s requirements for socialist values could directly conflict with US and EU standards. Ultimately, this creates a more complex regulatory environment for businesses to operate in.

Over the coming years, governments, businesses, and citizens will ask themselves fundamental questions about the definitions of fairness, human values, and economic trade-offs with AI. While each regulatory framework may be perceived as more or less innovative, fair, or safe, all models will require organizations leveraging AI to document certain information about their systems. Transparency and explainability (at the organizational, use case, and model/data level) are key to complying with emerging regulations and fostering trust in the technology.
