The development and use of Artificial Intelligence (AI) became official policy of the United States with the signing of an Executive Order in February. This order outlines and directs America's government-wide push to advance the use of AI through research and public-private partnerships. In the ensuing months, the Department of Energy has emerged as a leader in these efforts.
In September 2019, the DOE established the Artificial Intelligence and Technology Office (AITO) to help channel the department's vast resources across its national lab facilities. These efforts are paying off as DOE partners with Health and Human Services and Veterans Affairs in the COVID-19 Insights Partnership, which aims to increase data sharing and analysis in the fight against the spread of COVID-19. The DOE is also pressing ahead with private partnerships, announcing the First Five Consortium with Microsoft, the Pacific Northwest National Laboratory, and the Defense Department's Joint Artificial Intelligence Center (JAIC). Together they will develop AI-based solutions for first responders.
Another cross-government AI initiative involves the DOE partnering with the National Science Foundation (NSF) to establish new research institutes for AI development. Projects in these institutes will focus on machine learning, synthetic manufacturing, precision agriculture, and forecasting. Research will be done in coordination with state universities nationwide.
Finally, the DOE will be combining its AI focus and its leading role in High Performance Computing (HPC) research to better secure these powerful resources. Malicious cryptocurrency miners look to HPC machines as a way to gain an advantage in the cryptocurrency market. Attackers can hijack (and have hijacked) high-performance computers at universities and government facilities to take advantage of their processing power and save themselves from having to set up their own mining systems. DOE is working on AI technology that compares control flow graphs of programs actually running on a system to a catalog of graphs for programs that have permission to run on that computer. This helps spot unauthorized programs, even if they have been disguised to look like legitimate software.
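To make the idea concrete, here is a minimal sketch of graph-catalog matching of this kind. It is not DOE's actual system: the programs, edge sets, and similarity threshold are all invented for illustration, and the control flow graphs are simplified to sets of basic-block edges.

```python
# Illustrative sketch: compare a running program's control flow graph
# (modeled as a set of basic-block edges) against a catalog of graphs
# for approved programs. All names and the threshold are hypothetical.

def jaccard_similarity(edges_a, edges_b):
    """Similarity between two edge sets: |A & B| / |A | B|."""
    union = edges_a | edges_b
    if not union:
        return 1.0
    return len(edges_a & edges_b) / len(union)

def is_authorized(observed_cfg, approved_catalog, threshold=0.9):
    """Treat a program as authorized only if its observed CFG closely
    matches some graph in the catalog of permitted programs."""
    return any(jaccard_similarity(observed_cfg, approved) >= threshold
               for approved in approved_catalog.values())

# A disguised miner may reuse an approved program's name, but its
# control flow (a tight hashing loop) still looks nothing like the
# cataloged graph for that program.
approved = {
    "physics_sim": {("entry", "load"), ("load", "compute"), ("compute", "exit")},
}
miner_cfg = {("entry", "hash_loop"), ("hash_loop", "hash_loop"),
             ("hash_loop", "exit")}

print(is_authorized(miner_cfg, approved))             # False
print(is_authorized(approved["physics_sim"], approved))  # True
```

Matching on behavior (the graph) rather than on names or file hashes is what lets this approach catch a miner renamed to look like a legitimate workload.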
There are a number of events and resources that examine the many applications of AI and detail where we are today in the use and development of solutions.
7th Annual Defense Research and Development Summit (January 14, 2021; virtual) - Learn about research and development within the defense sector. The summit will address the latest priorities, advancements, and challenges in the development and delivery of innovative solutions.
Ai4 2021 Cybersecurity Summit (February 3-4, 2021; virtual) - This event brings together business leaders and data practitioners to facilitate the adoption of artificial intelligence and machine learning technology within the cybersecurity industry. With a use-case oriented approach to content, attendees will get actionable insights from those working on the frontlines of AI.
AI Week 2021 (May 10-14; online) - This week-long multi-organization event gathers the biggest names in tech and artificial intelligence to discuss ways to harness emerging technology's potential to revolutionize all aspects of life.
AI Readiness for Government (white paper) -- As various government agencies prepare to deploy artificial intelligence, a six-pronged framework can help assess AI readiness. To capture AI's potential to create value, government organizations will need a plan to retool the relevant existing processes, upskill or hire key staff, refine approaches toward partnership, and develop the necessary data and technical infrastructure to deploy AI.
We'd love to hear where you are learning about AI progress in government. Let us know what events you're attending in the comments.
Be sure to check out GovEvents for a complete listing of conferences, virtual events, webinars, and a library of on-demand resources.
State and local agencies are home to some of the most innovative ideas in government. Their use of artificial intelligence (AI) is no exception. Localities are embracing AI as a way to make sense of all the data they hold to better understand how citizens are using their services and where gaps may exist. A survey from the National Association of State Chief Information Officers (NASCIO) released in the fall of 2019 found that 32% of those surveyed "strongly agreed" that AI and related technologies can help them meet citizen demands and improve operations. Specifically, the survey found that nearly 50% of respondents planned to use AI as a way to shift workers away from rote tasks and toward high-value activities.
Taking a look around the country, we see some interesting applications of AI at the state and local level.
With the government fiscal year starting in October, our Federal government gets a head start on their New Year's resolutions. As we launch into a new year--a new decade, even--we wanted to take a quick look at government technology priorities for 2020 and beyond.
Cybersecurity - In the past decade, security has transitioned from a stand-alone technology that had to be added to planning and systems to a utility-type service that is baked into every piece of technology deployed within government. This fall, Federal CIO Suzette Kent shared her focus areas for the next year (and beyond), including cross-agency information sharing, improved identity management, and increased workforce cybersecurity literacy.
Reskilling - The introduction of automation into administrative functions is driving a need for employees to be re-skilled. While machines are not taking over the jobs of humans, they are improving efficiency in many roles, freeing up time for people to take on more complex (and frankly, more interesting and more important) roles within an organization.
Artificial Intelligence (AI) continues to dominate tech headlines. Now, rather than learning what the technology could mean for government, we're reading about where it's being implemented, and the results being achieved. A recent report found that AI is no longer considered optional, but rather a critical component to managing and using large amounts of data. IT leaders in government are looking to AI to automate routine, data-oriented tasks, ease access to diverse sets of data, prioritize tasks based on the benefit to the organization, and generally keep track of ever-growing streams of data.
The Intelligence Community (IC) has long been a top consumer and analyzer of data in government. Not surprisingly, they have embraced AI technology to supplement the work of analysts by reducing the amount of manual data sorting with machine-assisted, high-level cognitive analysis. AI is being used to help triage so the highly-trained analysts can spend their time making sense of the data collected by looking at the most valuable and seemingly connected pieces.
Health and Human Services (HHS) implemented an AI solution when it needed to quickly procure Hazmat suits in response to an Ebola outbreak. Procurement officials were able to use AI to make like-to-like comparisons among products. After the initial tactical analysis, the acquisition teams used the data gathered on department-wide pricing and terms and conditions to better define parameters for ten categories of purchases.
Despite successful implementations in many agencies, AI is still in the pilot and introductory phase. The Air Force is making it easier to begin experimenting with AI. Because the DoD has strict rules about what can be put on its networks, it is difficult to introduce new technologies into the production environment. The Air Force has created a workaround with the Air Force Cognitive Engine (ACE), a software ecosystem that connects the core infrastructure required for successful AI development: people, algorithms, data, and computational resources.
HHS is looking to use AI to analyze dated regulations as part of its AI for deregulation project. The pilot has found that 85 percent of HHS regulations from before 1990 have never been edited and are most likely obsolete. Using AI to flag regulations containing a term like "telegram," for example, helps prioritize which documents need to be reviewed by humans.
Artificial Intelligence (AI) is a hot buzzword in technical and business circles alike, touted as a way to increase the efficiency of organizations. More than just a buzzword or "next big thing," it is now official policy of the United States. This February, the President issued an executive order directing federal agencies to invest more money and resources into the development of artificial intelligence technologies to ensure the U.S. keeps pace with the world in using AI (and related technology) for business, innovation, and defense.
On the heels of the executive order, the DoD outlined its AI plans which include using AI technology to improve situational awareness and decision-making, increasing the safety of operating vehicles in rapidly changing situations, implementing predictive maintenance, and streamlining business processes.
But with all of this focus and excitement around AI, there are many groups raising concerns. Chief among them is the federal workforce, which sees AI technology as potentially taking over its work. A recent survey found that while 50 percent of workers were optimistic that AI would have a positive impact, 29 percent said they could see new technologies being implemented "without regard for how they will benefit employees' current responsibilities." Across government, technology leaders are working to ease fears, stating that technology will take on the rote, manual tasks that humans tend to dread, freeing up people to spend additional time on more strategic, meaningful work.
Another group wary of AI's broad impact is security experts, who say that with new, more advanced technologies come new, more advanced threats. In an effort to get in front of these threats, DARPA has launched the Guaranteeing AI Robustness against Deception (GARD) program. This program aims to develop theories, algorithms, and testbeds to aid in the creation of ML models that will defend against a wide range of attacks.