How Credible Are the White House’s AI Regulation Principles?

January 23, 2020 by cbn

What is the Trump administration’s real goal in suggesting new national regulations governing the use of artificial intelligence?

The presence of artificial intelligence in our lives will continue to grow. Considering the degree of alarm that AI has triggered in the general population, we can expect a deeper dose of laws and regulations governing how the technology is deployed, used, and managed.

It’s a bit amusing that the Trump administration should caution agencies against “overreach” as they consider whether, when, and how to regulate AI. The reach of any regulatory regime should be commensurate with the reach of the phenomenon being regulated. I doubt, however, that Trump has a crystal ball that tells him how extensively AI will disrupt our world in coming years.

Actually, it’s curious that this president would undertake a new regulatory initiative of any sort. Trump is implacably hostile to any and all environmental, health and safety, anti-trust, and other regulations that have benefited Americans immensely for many generations. Earlier this month, the White House Office of Science and Technology Policy (OSTP) released a set of principles to guide federal agencies when regulating the use of AI in the private sector. Release of the document kicked off a 90-day public commentary period. At the end of that, agencies will have 180 days to decide how to implement the principles.

On the unlikely chance that Trump-appointed agency heads will eventually implement these principles, let’s consider what the document’s current draft actually says. As summarized here, the principles state that agencies “must promote reliable, robust, and trustworthy AI applications.” They also advocate cross-agency consistency and public participation in the rulemaking process, require security, transparency, and fairness in how AI is used, and call for flexible regulatory updates to adapt to technological advances. Finally, they encourage industry self-regulation, where feasible, over heavy-handed government regulation of AI development, deployment, and utilization.

That’s all fine and good, and even a Democratic administration would probably put out something similar. But I almost lost it when the document stated that the issuance of new regulations on AI’s use requires “scientific evidence” to inform the necessary upfront risk assessments and cost-benefit analyses.

I’m sorry, but how dumb does this administration think we are? This principle has little credibility coming from the most irrationally anti-scientific president in US history. Among other atrocities, Trump has rolled back numerous regulations that were instituted to address climate change. Under Trump, private business is being given free regulatory rein — without interference from pesky scientific authorities — to heat the planet, pollute our environment, and endanger the safety of workers, consumers, and everybody else.

Image: Tashatuvango – stock.adobe.com

Even if we accept the OSTP document’s requirement of “scientific evidence” in the rulemaking process without a shred of cynicism, we need to ask who exactly would determine what constitutes such evidence for the purpose of framing specific agency regulations that govern AI. This administration has ruthlessly suppressed credible scientific studies that were produced by government employees and contractors. More than that, scientific professionals — including the data scientists most competent to advise on AI regulations — have been told in no uncertain terms that their skills are no longer needed under this administration and that it would be best for them to leave public service entirely.

If you’re hoping that US federal agencies’ engagement with other nations’ AI experts would make up for this scientific brain drain, you’re sadly mistaken. Trump shot down that hope when he rejected US participation with other G7 nations in the Global Partnership on AI, which seeks to establish shared regulatory principles governing the technology’s use around the globe.

If you’re a US taxpayer, you’d best believe that the people remaining at the federal level to adjudicate what constitutes credible scientific evidence will be some unholy alliance of pseudoscientific quacks and ideological hacks.

It’s no surprise that regulations over AI’s use in US society — such as for facial recognition — are starting to take root at the state and local levels. Though an unidentified Trump administration official recently characterized those efforts as “over-regulation,” you could very plausibly argue that they are nothing of the sort, but, rather, a justified grass-roots campaign to counter egregious under-regulation at the national level.

Besides, it’s not at all clear whether Trump and his administration truly care about such AI downsides as privacy encroachment, biased decisioning, and so on. Though some headlines claim otherwise, these new principles are not intended to make AI “safer,” which would imply that some sort of consumer-protection impulse motivates this effort.

Though US CTO Michael Kratsios expressed concern about “the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people,” his boss at 1600 Pennsylvania has no qualms about openly admiring practically every dictator who walks the Earth.

When you look at it, these principles are designed to hamstring efforts by federal agencies to ensure that private businesses manage AI responsibly for the benefit of all Americans. More to the point, Trump’s primary interest in AI is nationalistic: as a weapons-grade asset for maintaining US global dominance. As Kratsios stated here, the ulterior purpose of these principles is to “maintain and strengthen the US position of leadership” on AI.

One would hope that the purpose is, at least in part, to ensure that AI is managed responsibly to benefit all humanity, but apparently that’s too much liberal folderol for this administration to stomach. If you seek a set of AI governance principles that put people first, with ethics (not power politics) at their core, check out such initiatives as this.

Interestingly, Trump advocates a laissez-faire AI regulatory regime domestically while, consistent with his nationalistic philosophy, going the opposite direction internationally. The administration recently instituted an export ban that forbids US companies from selling software abroad that uses AI to analyze satellite imagery without a license. This ban is quite clearly intended to deny China, in particular, access to such technology, though China has obviously made huge domestic investments and probably can get by without US-developed AI software for this use case.

So let’s get real here. No matter how much merit these proposed AI regulation principles might possess in the abstract, they’re an obvious ploy for the Trump administration to retaliate against the left-leaning Silicon Valley companies that are driving the AI revolution. Demonizing AI is an effective smokescreen for Trump to lash out against the likes of Amazon, Microsoft, Google, Facebook, and other powerful tech companies that have bet their futures in part on their AI prowess.

Even if this administration were promulgating these principles in good faith, they come almost a year after Trump’s signing of the “American AI Initiative,” an executive order that puts forth a high-level strategy guiding AI development within the US but includes no new federal funding to give the initiative a chance of succeeding. If Trump were truly trying to strengthen the US’s AI competencies, he would already have proposed a substantial federal outlay in this regard.

Let’s hope that whatever administration follows Trump actually institutes responsible regulation of AI at the federal level, while funding the R&D needed to develop credible tooling and approaches to manage AI responsibly wherever it touches our lives.

For more on AI, check out these recent articles.

A Realistic Framework for AI in the Enterprise

How to Manage the Human-Machine Workforce

The Facial Recognition Debate

Restart Data and AI Momentum This Year

James Kobielus is Futurum Research’s research director and lead analyst for artificial intelligence, cloud computing, and DevOps.
