In January 2024, Virginia Governor Glenn Youngkin announced and signed Executive Order 30 on Artificial Intelligence (EO 30), establishing “important safety standards to ensure the responsible, ethical, and transparent use of AI by state government.”

EO 30 is expected to impact the adoption, use, monitoring, and management of AI technologies for state agencies, K-12 schools, colleges and universities, and law enforcement. The EO 30 AI governance policies and standards may also impact third parties that work with the Commonwealth, such as businesses, suppliers, and contractors.

Key Directives of EO 30

EO 30 contains five main directives, as follows: (1) enactment of AI Policy Standards published by the Virginia Information Technologies Agency (VITA), (2) enactment of AI Information Technology Standards, also published by VITA, (3) enactment of AI Education Guidelines applicable to K-12 schools, community colleges, and universities, (4) a directive to establish AI standards for executive branch law enforcement and model standards for local law enforcement by October 2024, and (5) establishment of an AI Task Force.

The first three directives regarding the VITA AI Policy and IT Standards and AI Education Guidelines are discussed below.

Takeaways on EO 30’s AI Policy Standards

The AI Policy Standards guidance document provides a set of comprehensive guidelines for the “responsible, ethical and transparent use of AI by” the Commonwealth. The key takeaways on EO 30’s AI Policy Standards are as follows:

  1. Business Use Cases. AI capabilities may only be used if they are the optimal choice to achieve positive outcomes for Virginia citizens, such as improving government services, reducing wait times, or limiting bureaucracy and delays.
  2. Approval of AI Technology. Before any AI capabilities are deployed, the application submitted to VITA must identify and describe the AI technologies at the model level, including model inputs, output data type and structure, model algorithms, and data sets.
  3. Mandatory Disclaimers. When AI capabilities are used to process or produce any decision or output regarding citizens or businesses, a disclaimer must be provided explaining the degree of AI involvement in that decision or output.
  4. Data Protection. The EO requires prioritization of privacy and protection of citizen data used in AI systems, necessitating purpose and storage limitations, data minimization, accuracy controls, and strong security to ensure confidentiality of personal data.

AI IT Standards Include Guidance on Enterprise Architecture

The AI IT Standards call for the development of specific requirements for how new and existing AI systems are integrated into enterprise architecture. VITA’s current guidance, titled Enterprise Solutions Architecture for Artificial Intelligence (ESA), details the technical standard for the management, development, purchase, and use of AI in the Commonwealth, aimed at promoting AI safety, privacy, transparency, accountability, and sustainability.

To achieve the EO’s goals, such as trustworthy and ethical use of AI for the Commonwealth of Virginia (COV), ESA draws on the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework. For example, ESA technology standard AI-601 instructs that COV AI systems should be “certified for trustworthiness according to the following characteristics defined by NIST AI 100-1”: valid and reliable, safe and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with harmful bias managed. Further drawing from NIST’s AI Framework, under AI-602, VITA endeavors to promote accuracy by subjecting these AI systems to test, evaluation, verification, and validation (TEVV) review throughout the AI system’s entire lifecycle.

The AI Education Guidelines Aim to Balance Opportunities and Risks Associated with AI Use in Schools

EO 30 recognizes “the dual nature—both the opportunities and risks—of [AI] developing technology in education” by promoting innovative educational opportunities while concurrently ensuring guardrails “to safeguard individual data privacy and mitigate discriminatory outcomes.”

The AI Education Guidelines cover public educational institutions in Virginia, including K-12 primary schools, colleges and universities, and the Virginia Community College System. These guidelines are structured around three key components: guiding principles, strategies for success, and stakeholder roles and responsibilities.

The guiding principles emphasize that AI can “never replace teachers who provide wisdom, context, feedback, empathy, nurturing, and humanity in ways that a machine cannot.” At the same time, the guiding principles highlight the power of AI to empower students and the opportunity to equip students with practices for using AI responsibly. The guiding principles also recognize inherent risks of AI integration in schools, such as the potential for discriminatory outcomes and algorithmic biases.

Included in the AI Education Guidelines is an initial list of stakeholders who will take on certain roles and responsibilities to successfully integrate AI into Virginia’s education system. Stakeholders include the Virginia Department of Education, the State Council of Higher Education for Virginia, the 131 K-12 school division boards of education and their leadership, the boards of every public college and university, and teachers and technology directors.

If you need assistance with AI-related issues, contact a member of the WRVB Cybersecurity & Data Privacy team.