3 Ways Artificial Intelligence is Producing Real Solutions for Government

Artificial Intelligence (AI) allows tasks that typically require human intelligence to be completed at machine speed. For government agencies, this means they can make better use of the troves of data they hold for day-to-day decision making, strategic planning, and citizen service.

Protecting the Bat Population

Bats are a critical part of natural ecosystems, serving as pollinators and as natural insect exterminators. However, many bats are at risk due to habitat loss. When they lose their natural roosts, many take to bridges as a new home, causing potential damage and even posing health hazards.

Transportation departments are using AI to analyze photographs, sensor data, and computer vision imagery to detect the presence of bats on bridges. Manual inspections sometimes miss bats because stains that look like bat droppings can also be caused by water seepage or rust. Computers, however, can pick up on the slight differences and identify bridges that have become habitats.
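As a rough illustration of the approach, the sketch below runs a bridge-inspection photo through a fine-tuned image classifier. Everything in it (the PyTorch/torchvision stack, the ResNet-18 backbone, the two class labels, and the file paths) is an assumption made for illustration, not a description of any agency's actual system.

    # Hypothetical sketch: classify an inspection photo as likely bat guano vs. other staining.
    import torch
    import torch.nn as nn
    from PIL import Image
    from torchvision import models, transforms

    # Standard ImageNet preprocessing for a pretrained backbone.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Pretrained backbone with a new two-class head:
    # class 0 = water/rust staining, class 1 = likely bat guano.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model.eval()  # in practice the head would first be fine-tuned on labeled inspection photos

    def flag_bridge_photo(path: str, threshold: float = 0.8) -> bool:
        """Return True if the photo more likely shows guano than rust or seepage."""
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(image), dim=1)
        return probs[0, 1].item() >= threshold

    # Hypothetical usage:
    # if flag_bridge_photo("inspections/bridge_42/deck_joint.jpg"):
    #     print("Possible bat habitat -- schedule a follow-up survey.")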

Preventing Forest Fires

California's Napa County is installing an artificial intelligence-based system that uses optical and heat sensors to detect and analyze smoke plumes that may indicate fire. When smoke indicating a possible fire is spotted, the system alerts authorities and sends images so officials can rapidly confirm a fire and dispatch an appropriate response. This will help firefighters respond to fires before they get out of control and ensure faster public notification.
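At its core, that kind of alerting pipeline pairs a detection score with a simple decision rule. The sketch below is a minimal, hypothetical version; the sensor fields, thresholds, and notification step are stand-ins for illustration, not details of the Napa County deployment.

    # Hypothetical sketch: turn per-camera smoke/heat readings into alerts officials can confirm.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        camera_id: str
        smoke_confidence: float  # 0-1 score from an optical smoke-plume detector
        heat_delta_c: float      # temperature rise over baseline, in degrees Celsius
        image_path: str          # snapshot saved alongside the reading

    SMOKE_THRESHOLD = 0.75
    HEAT_THRESHOLD_C = 15.0

    def notify_officials(reading: SensorReading) -> None:
        # Placeholder: a real deployment would push to a dispatch or alerting service.
        print(f"ALERT {reading.camera_id}: possible fire "
              f"(smoke={reading.smoke_confidence:.2f}, heat +{reading.heat_delta_c:.1f}C), "
              f"see {reading.image_path}")

    def evaluate(reading: SensorReading) -> None:
        # Alert on either strong signal so officials can confirm quickly from the image.
        if (reading.smoke_confidence >= SMOKE_THRESHOLD
                or reading.heat_delta_c >= HEAT_THRESHOLD_C):
            notify_officials(reading)

    evaluate(SensorReading("ridge-cam-07", 0.82, 4.1, "frames/ridge-cam-07/1031.jpg"))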

Monitoring the Call Coming From Inside the (Big) House

Correctional agencies have long recorded and monitored inmates' phone conversations. Now they are applying AI to those recordings to flag, in near real time, calls that contain conversations indicating violence or criminal behavior. AI-powered software identifies discussions focused on weapons, contraband, threats to inmates, gangs, homicides, assaults, or suicide. Law enforcement is notified of flagged conversations and can intervene before talk escalates into action.
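Under the hood, systems like this transcribe calls and score the transcripts against categories of concern. The sketch below shows only the simplest possible flagging step, keyword matching on a transcript; the terms, categories, and call ID are invented for illustration, and production systems rely on far more sophisticated speech and language models.

    # Hypothetical sketch: flag a call transcript against illustrative categories of concern.
    import re

    FLAGGED_TERMS = {
        "weapons": ["shank", "blade"],
        "contraband": ["package", "smuggle"],
        "threats": ["payback", "jump him"],
    }

    def flag_transcript(call_id: str, transcript: str) -> list[tuple[str, str]]:
        """Return (category, term) pairs found in the transcript of one call."""
        lowered = transcript.lower()
        hits = []
        for category, terms in FLAGGED_TERMS.items():
            for term in terms:
                # Word-boundary match so partial words do not trigger a flag.
                if re.search(rf"\b{re.escape(term)}\b", lowered):
                    hits.append((category, term))
        return hits

    hits = flag_transcript("call-2021-1104-0017",
                           "He said the package gets dropped at the loading dock tonight.")
    if hits:
        print("Flag for review:", hits)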

Of course, all of these exciting applications raise questions about the appropriate use of data and the ethics of the models being applied. In the last 18 months, AI-powered contact tracing has correlated mobile phone numbers and location data with lab results in public health systems to help keep infected people quarantined and slow the spread. The ethical use of this technology requires protecting end users' privacy and having a clear plan for the data that is gathered.
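One common way to reconcile that kind of correlation with privacy is to pseudonymize identifiers before any records are joined and to attach an explicit retention period to the result. The sketch below assumes a keyed-hash (HMAC) approach and made-up field names; it illustrates the principle rather than describing any specific public health system.

    # Hypothetical sketch: join lab results to location data on a pseudonym, never the raw number.
    import hashlib
    import hmac
    from datetime import date, timedelta

    SECRET_KEY = b"rotate-me-and-keep-out-of-source-control"  # illustrative only

    def pseudonymize(phone_number: str) -> str:
        """Replace a phone number with a keyed hash so analysts never see it in the clear."""
        return hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256).hexdigest()

    lab_results = [{"phone": "+1-555-0100", "result": "positive", "test_date": date(2021, 10, 28)}]
    location_pings = [{"phone": "+1-555-0100", "cell": "site-12", "seen": date(2021, 10, 30)}]

    # Tag each joined record with a deletion date so the plan for the gathered data is explicit.
    positives = {pseudonymize(r["phone"]) for r in lab_results if r["result"] == "positive"}
    exposure_records = [
        {
            "pid": pseudonymize(p["phone"]),
            "cell": p["cell"],
            "seen": p["seen"],
            "delete_after": p["seen"] + timedelta(days=14),
        }
        for p in location_pings
        if pseudonymize(p["phone"]) in positives
    ]
    print(exposure_records)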

Finally, the algorithms that power these AI solutions also need protecting. The Air Force, in particular, is focusing on how to protect AI once it is in the field. Officials within the Air Force are organizing around an emerging subcategory called AI Safety, a practice that ensures deployed AI programs not only work as expected but are also safe from attack.

There are a number of resources that address the application of AI along with the policies that must be behind every application.

  • The Spectrum of AI Applications (November 4, 2021; webcast) - This presentation will focus on how AI, in particular neural networks, is being used in signal processing. It will discuss where neural networks are effective and have shown superior performance compared to traditional approaches, as well as where they have underperformed.
  • The Cost of AI (November 9, 2021; virtual) - When planning for an artificial intelligence (AI) solution, disruption should be an expected cost driver throughout its lifecycle. The increasing interconnectivity of the rapidly growing enterprise AI ecosystem further complicates estimates of the cost of AI solutions. Responding to disruption for one AI solution may require coevolution of connected solutions. This discussion organizes views about AI cost drivers and the nuances characterizing them.
  • Data for AI: Lessons Learned from AI Project Failures (November 16, 2021; webcast) - This session will provide an introduction to the Cognitive Project Management for AI (CPMAI) methodology, giving you the foundation needed for project success, especially as you incorporate advanced analytics and AI projects.
  • Reimagining the Future of Intelligence (November 18-19, 2021; virtual & in-person) - Safe House Global asks 16 experts to describe their vision of how Intelligence will change to more accurately predict future threats in the coming years.
  • Scotland's AI Strategy: Trustworthy, Ethical & Inclusive (December 16, 2021; virtual) - Scotland's AI Strategy sets out a vision for Scotland to become a leader in the development and use of trustworthy, ethical and inclusive AI.
  • Culture, Communication, and Capabilities: Preparing Federal Agencies to be AI Ready (ATARC; white paper) - Read about the shared successful strategies and other conversation highlights from a roundtable discussion between an esteemed group of experts in data, technology, and human resources from a series of federal agencies.
  • Key Considerations for the Responsible Development and Fielding of Artificial Intelligence (National Security Commission on Artificial Intelligence; white paper) - The paradigm and recommended practices described here stem from NSCAI's line of effort dedicated to Ethics and Responsible Artificial Intelligence. This includes developing processes and programs aimed at adopting the paradigm's recommended practices, monitoring their implementation, and continually refining them as best practices evolve.

Check out more AI-related events on GovEvents.com. You can also browse the GovWhitePapers library of informative resources.
