Trump’s AI Executive Order

Feb 15, 2019 | Blog

This past Monday, President Trump signed an executive order titled The American Artificial Intelligence Initiative (AAII). It outlines five initiatives to help the US promote growth and maintain leadership in AI. Below is each initiative, along with the quote I felt most concisely summarizes its section.

1. Investing in AI Research and Development (R&D)

“directing Federal agencies to prioritize AI investments in their R&D missions”

2. Unleashing AI Resources

“The initiative directs agencies to make Federal data, models, and computing resources more available”

3. Setting AI Governance Standards

“This initiative also calls for the National Institute of Standards and Technology (NIST) to lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems.”

4. Building the AI Workforce

“prioritize fellowship and training programs to help American workers gain AI-relevant skills”

5. International Engagement and Protecting our AI Advantage

“committed to promoting an international environment that supports AI R&D”

I’d first like to discuss which sections stood out to me and then dive into a more general discussion of why this executive order was created in the first place.

Areas that stood out to me

Initiative No. 2: Unleashing AI Resources. I like how this section is worded, because to build an AI solution you have to feed it resources. Those resources come in the form of data, compute time, and existing models, and each is called out in the section.

First, in terms of data and models, we’ve already seen some government agencies open their vaults to the public. I still remember two years ago when NASA announced that all of its funded research would be open to the public, giving anyone access to its data, analysis, and findings. Of course, I didn’t have to scroll far to find someone in the comments stating the headline should have read “All publicly funded research is open.” While that may still be a ways off, today you can go to GitHub and find Jupyter Notebooks (think of them as a whitepaper plus code) from NASA as well as national labs and other government agencies. Local governments have also become great sources of data. Cities like New York and Chicago make it incredibly easy to access data on anything from potholes to fires to rodent sightings. If the federal government promoted more of this kind of openness, we would see some interesting developments. You could imagine municipalities hosting Kaggle competitions where people compete to build the most optimized traffic light system for their city.
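As an aside, pulling city data like this really does take only a few lines of code. Here’s a minimal sketch using Python’s standard library, assuming New York’s Socrata-style open-data API; the 311 service-requests dataset ID (`erm2-nwe9`) is my assumption from memory, so verify it on data.cityofnewyork.us before relying on it:

```python
from urllib.parse import urlencode

# Socrata-style endpoint; the dataset ID below is illustrative, check the
# NYC Open Data portal for the current 311 service-requests dataset.
BASE = "https://data.cityofnewyork.us/resource/erm2-nwe9.json"

def build_query(complaint_type: str, limit: int = 100) -> str:
    """Build a SoQL query URL filtering 311 records by complaint type."""
    params = {
        "$select": "created_date,complaint_type,borough",
        "$where": f"complaint_type = '{complaint_type}'",
        "$limit": limit,
    }
    return f"{BASE}?{urlencode(params)}"

# Rodent sightings, five rows -- fetch this URL with any HTTP client
print(build_query("Rodent", limit=5))
```

The point isn’t the specific dataset; it’s that these portals expose plain JSON over HTTP, so anyone with a laptop can start exploring.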

Compute time is an interesting one. While we won’t see the federal government getting into the business of cloud computing (unless that rivalry between Trump and Bezos really heats up), the supercomputers it does own will now prioritize AI projects. It’s also possible we could see federal grants and discounts for purchasing compute time. Several companies (Google, Amazon, and Microsoft) already offer a free amount of compute time to students and first-time users. There’s also the argument that compute time isn’t much of a barrier for AI projects: most AI work can easily be done on a budget laptop, though there are cases where you do need more power.

Initiative No. 3: Setting AI Governance Standards

This initiative calls for the National Institute of Standards and Technology to set standards in a number of areas. As the name suggests, setting standards is what NIST does. What it does not do, however, is regulate or enforce those standards, which gives its standards the enforcement power of a suggestion. Still, I like the idea. It makes sense that public models should meet standards that are known to the public before those models are deployed in public. Bridges have to meet standards before cars are allowed to drive on them; isn’t it only fair that an automated toll booth should have standards it must pass before cars are allowed to drive through it?

It’s an interesting list of areas that NIST is being asked to develop standards for: reliable, robust, trustworthy, secure, portable, and interoperable. Having trustworthy in the list not only makes it sound like the Scout Law, but also turns it into something of a philosophical discussion. What is trustworthy? Is that the same as ethical? Does the definition of trustworthy change over time, or differ from place to place? This can quickly become a moral discussion on the ethical application of AI. That discussion needs to happen, but I’m not going to get into it here.

Why are we getting this?

A bigger question is what prompted this executive order, and the answer can be summed up in a simple equation: AI + China = Fear. In 1957, Sputnik created a storm of fear that the USSR would dominate the new arena of space. At the time, we couldn’t even guess what we would go on to use space technology for. Over half a century later, that technology enables us to do everything from getting driving directions to having the right time on our phones. Fast forward to today: AI is the arena, and China is the superpower hoping to capitalize on it. While AI is still in its infancy, we can’t predict how this new technology will be applied 50 years from now. We do understand that it is a race, and with that comes the fear of losing.

Do we need to worry about falling behind in AI?

My answer is no and yes. Let me explain.

Will we fall behind in developing the underlying AI algorithms that let us build powerful models?

No, because no one is behind in this area. AI has been incredibly open, meaning anyone with an internet connection can access the latest AI tools and start building their own AI solutions. In 2015, Google announced TensorFlow, one of the most powerful deep learning packages available. Not only did Google make it free to download, but the code was also open source and hosted on GitHub, meaning anyone could take a peek under the hood to see exactly how it runs. This openness has greatly benefited TensorFlow: in GitHub’s latest year in review, it was listed as one of the top code bases by number of contributors, many of whom are not Google employees. The growth and acceptance of open source communities has made the best tools available to anyone. Go back 10 years and this wasn’t the case. Companies like SAS and Stata had pricing that meant only large corporations and research institutes could afford the technology. Today those companies are losing significant market share to open source alternatives, and there’s no sign of things going back. So, I don’t think we’ll lag behind in terms of underlying AI technology.

Can we fall behind in developing creative solutions that leverage AI?

Yes. If we don’t encourage using this tech in creative ways, then we will miss out on being the ones to own that IP. For the government, that means missing out on the creation of new jobs and tax revenue. Fortunately, we’re doing pretty well right now. Anecdotally, you can’t throw a stone without hitting a company that says it’s using AI (how much of that is real versus hype is up for debate). Looking at the actual numbers, the US has the most AI talent and the most AI companies (about 14% of the world’s talent, compared to 9% for China). However, this will most likely change, because China is out-funding the US: last year it accounted for 48% of global AI funding, whereas the US accounted for only 38%.

This isn’t happening by chance. Over the last four years, China has made several concrete plans to push its economy forward. In 2015, China announced Made in China 2025, a plan for how, over the next decade, the country will become more technology focused. And in 2017, it doubled down on AI with the Next Generation Artificial Intelligence Plan, which sets the goal of being the world leader in AI by 2030. That plan is a dense, 29-page document laying out stages, milestones, and goals. Going back to what we got on Monday, we see some fairly vague language, and no set amount of funding is declared for AI research. It does, however, call for further planning to be presented within 180 days. Maybe this is par for the course for an executive order; I don’t know, my degree is in data science, not political science. Still, it feels like China has a business plan it’s already executing on, whereas the US has only put together a mission statement.

In 1961, four years after Sputnik, Kennedy announced before Congress that we would go to the moon. This May, it will be four years since China announced Made in China 2025. I know comparing a plan with an orbiting satellite isn’t the same thing, and it’s here that the nebulous nature of AI makes setting goals a little more difficult. With the space program, we had a clear objective that anyone could literally see in the night sky. AI is more amorphous. Part of the challenge for governments will be creating a plan and a goal that the public can understand as well as support spending on.

Author: Matt Yancey

Matt Yancey is a data scientist, data illustrator, and writer who enjoys finding undiscovered insights in data. He is the Principal Data Scientist and Machine Learning Engineer at ClearObject, where he helps set best practices for AI project development.