Balancing AI’s Power with Privacy

Artificial Intelligence (AI) has incredible potential to speed decision-making and surface connections across datasets that can inform government services and programs. Yet AI is being implemented across government and private industry with very little policy or regulation governing its development or use. In many ways, this lack of oversight is driving exciting innovation, but as that innovation leads to new uses, the risk of infringing on citizens' rights and privacy grows.

Peter Parker (Spider-Man) was warned that "with great power comes great responsibility." Similarly, AI developers need a voice providing gentle guidance as they figure out how best to use AI's power for good. In the fall of 2022, the White House released the Blueprint for an AI Bill of Rights, designed to address concerns that, without some oversight, AI could lead to discrimination against minority groups and deepen systemic inequality.

The blueprint outlines five key principles for regulating the technology:

  • Safe and effective systems - developing AI with input from experts and from the communities who will use or benefit from it.
  • Algorithmic discrimination protections - conducting equity assessments to gauge how an AI system may affect members of exploited and marginalized communities.
  • Data privacy - giving people more say in how their data is used.
  • Notice and explanation - telling people when an AI system is being used as part of a service and how it affects them.
  • Human alternatives, consideration, and fallback - giving people the option to opt out of AI and still access services through human interaction.

The National Institute of Standards and Technology (NIST) will issue an AI Risk Management Framework (AI RMF) in early 2023 that follows this high-level guidance with actionable steps agencies can take. This next level of tactical guidance matters because the seemingly sensible principles in the blueprint can be hard to implement. For example, how do you define a "fair" algorithm? And, logistically, what does removing data do to AI systems? Will companies need to rebuild their customer recommendation systems every time someone deletes their data?

One proposed solution is the use of privacy-preserving machine learning (PPML). This approach combines techniques such as synthetic data generation, differential privacy, federated learning, and edge processing, and would address some of the core concerns raised in the blueprint's principles.
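To make one of these techniques concrete, here is a minimal, illustrative sketch of differential privacy using the Laplace mechanism: a counting query is released with calibrated noise so that no single person's presence in the data can be confidently inferred. The function names and the example data are hypothetical, not drawn from any specific PPML library.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient to satisfy the privacy guarantee.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical example: report how many service users are over 65
# without revealing whether any single individual is in the dataset.
ages = [72, 34, 68, 51, 80, 29, 66]
noisy_count = private_count(ages, lambda a: a >= 65, epsilon=1.0)
```

A smaller epsilon means more noise and stronger privacy; repeated queries consume the privacy budget, which is one reason the data-deletion questions above are hard in practice.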

To stay ahead of technology and policy innovations as they relate to AI and privacy, check out these resources on GovEvents and GovWhitePapers.

  • Implement the New NIST RMF Standards and Meet the 2023-2024 FISMA Metrics (March 8-9, 2023; virtual) - This two-day seminar will identify the changes to federal cybersecurity requirements that will affect government and contractor information systems and enterprises, and will provide strategies for implementing solutions quickly and effectively, including guidance around AI, automation, and privacy.
  • ATARC Artificial Intelligence and Data Analytics Breakfast Summit (April 6, 2023; Washington, DC) - Federal leaders discuss the pros and cons of Artificial Intelligence in security and future plans for using AI for defense, intelligence, and citizen service.
  • Emerging Technology and Innovation Conference 2023 (May 7-9, 2023; Cambridge, MD) - Listen and engage with a range of keynotes, panels, and executive sessions from influential thought leaders and innovators. Gain insight into leading edge technology, network with important decision makers, and be inspired by visionaries who are using emerging technology and innovation to make positive advancements.
  • The Ethics of Healthcare AI (white paper) - In this report, ICIT Executive Director Joyce Hunter explores the ethics of healthcare AI and the stakeholder responsibilities and considerations when developing, adopting, and implementing AI in healthcare. AI systems are only as reliable and accurate as the data we provide, and only as ethical as the constraints incorporated into the developed code.
  • AI Bias Is Correctable. Human Bias? Not So Much (white paper) - This paper, No. 5 in the "Defending Digital" series, focuses on how individual decisions are shaped by our values, beliefs, experiences, inclinations, prejudices, and blind spots. These biases can easily leak into information system design, but the paper argues that, overall and over time, modern technology will prove a force for fairer and more objective societal action.
  • The Impact of Emerging Technology on AI Within the Federal Government (white paper) - In a recent ATARC roundtable discussion, government IT experts shared the various challenges and solutions they have encountered with the emergence of Artificial Intelligence (AI) technology and advanced data analytics (Data). The group also discussed where they see AI and Data heading in the Federal government, and what steps should be taken for these emerging technologies to be fully accepted and adopted.

Visit GovEvents and GovWhitePapers to find additional resources on AI and privacy trends.
