Texas Enacts a Responsible Artificial Intelligence Governance Act
23 June 2025
Texas has enacted a new act regulating the use of artificial intelligence (AI) systems, with the signing yesterday (22 June 2025) of the Texas Responsible Artificial Intelligence Governance Act. The new act joins other state-level AI laws enacted in the US in the past year, such as those in Colorado and Utah, alongside bills in Virginia and California that were vetoed by their respective governors.
The primary goals of the act are to promote and encourage the responsible development and use of AI systems, safeguard individuals from known and reasonably foreseeable risks associated with these systems, ensure transparency about potential risks throughout their development, deployment and use, and to provide clear notice when state agencies use or intend to use AI systems. The act also emphasizes regulating the use of AI systems by government entities and prohibits businesses from intentionally developing or deploying AI systems for a specified set of illicit purposes.
Below is an overview of the key provisions under the new act:
Scope of Application
The act applies to developers, deployers and distributors of any AI system (as defined in the act) to consumers. A “consumer” is defined as an individual who is a resident of Texas acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context.
While the definition of an AI system is modeled after the Colorado AI Act (and other laws in the world adopting this common definition), it is potentially broader in scope, as the act does not limit its provisions to “high-risk AI systems” only.
The act applies to a person who: (i) promotes, advertises or conducts business in Texas; (ii) produces a product or service used by residents of Texas; or (iii) develops or deploys an AI system in Texas.
Duties and Prohibitions on Use of Artificial Intelligence
Disclosure to consumers
The Texas AI act requires a governmental agency that makes available an AI system intended to interact with consumers to disclose to each consumer, before or at the time of the interaction, that they are interacting with an AI system, regardless of whether it would be obvious to a reasonable consumer that the interaction involves an AI system. The disclosure must be clear and conspicuous, written in plain language, and must not employ a “dark pattern”. The disclosure may be provided via a hyperlink that directs the consumer to a separate internet web page.
Additionally, providers of health care services and treatments who utilize AI systems in their practice must provide the same disclosure to the recipient or patient, or to their personal representative, no later than the date the service or treatment is first provided. In the case of an emergency, the disclosure must be provided as soon as reasonably possible (the term “health care services” is defined at the beginning of the relevant section of the act).
This disclosure requirement does not apply to private businesses interacting with customers or employees.
Prohibitions
The act also prohibits the development and deployment of an AI system that, among other things:
- Manipulates human behavior, by intentionally aiming to incite or encourage a person to commit physical self-harm (including suicide), to harm another person or to engage in criminal activity.
- Intentionally infringes, restricts or impairs an individual’s constitutional rights.
- Intentionally discriminates against a protected class in violation of state or federal law (with certain exceptions). A disparate impact is not sufficient by itself to demonstrate an intent to discriminate.
- Intentionally produces, assists or aids in producing or distributing certain sexually explicit content, child pornography, deepfake videos or images, etc.
Moreover, under the new act, a governmental entity may not:
- Use or deploy an AI system to carry out social scoring that results, or may result, in detrimental or unfavorable treatment or in the infringement of any right under state or federal law.
- Develop or deploy an AI system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of media from public sources, without consent if it would violate constitutional, federal or state law.
Personal and Biometric Data
Biometric Identifier – under Texas law, the capture of an individual’s biometric identifier for commercial purposes is prohibited unless specific conditions, including informed consent, are met. The act clarifies that consent cannot be inferred solely from the existence of a publicly available image or other media containing biometric data, unless the image or media was made publicly available by the individual to whom the biometric identifier pertains. Certain exceptions apply; for example, the restrictions on the use of biometric identifiers do not extend to their use in the training, processing or storage of AI systems, unless the purpose is to uniquely identify an individual.
Processors – The act clarifies the duties of processors in handling personal data, emphasizing their responsibilities to assist controllers in meeting compliance obligations under the relevant chapter. Specifically, processors must assist controllers in complying with requirements related to the security of personal data processing, including, if applicable, personal data collected, stored and processed by an AI system. This amendment explicitly incorporates AI systems into the processor’s security-related assistance obligations, expanding the processor’s role to cover AI-specific personal data.
Enforcement and Rulemaking
The act will take effect on 1 January 2026. The Texas Attorney General (“AG”) has exclusive enforcement authority, as the act does not provide for a private right of action.
The Texas AG is required to create an online mechanism on its website through which consumers can submit complaints. The Texas AG will have the authority to issue civil investigative demands to determine whether a violation has occurred. If a violation is found, the Texas AG will provide written notification, followed by a 60-day cure period during which the violation may be remedied.
Texas state agencies may impose sanctions against individuals and companies within their jurisdictions. The act prescribes civil penalties ranging from $10,000 to $12,000 for curable violations, from $80,000 to $200,000 for incurable violations and from $2,000 to $40,000 per day for continuing violations.
Sandbox
The new act also provides a limited carveout for innovation and experimentation through its AI regulatory sandbox program, administered by the Texas Department of Information Resources. The program allows approved applicants to develop and test innovative AI systems in a controlled environment without full regulatory compliance for up to 36 months, even if they are not licensed or registered in Texas. The purpose of the program is to encourage the responsible deployment of AI systems while supporting technological progress.
Companies developing, providing, distributing or deploying AI systems should evaluate their exposure under this new legislation.
Feel free to contact us if you have any questions regarding this new law and its practical implications.