Arizona Supreme Court uses AI avatars to announce rulings — Meet the digital reporters
Virtual news anchors Daniel and Victoria present decisions written by Supreme Court justices, with the goal of increasing public confidence.

The Arizona Supreme Court has introduced AI avatars to announce court rulings, a new way for the judiciary to communicate with the public. The avatars, Daniel and Victoria, are being trialed as a means of improving public outreach and meeting audiences where they consume most of their media: online and on their own schedule.
This month, the avatars delivered the rulings in arson and DUI cases, a task previously handled by the court's spokespersons. Communications Director Alberto Rodriguez said that producing these videos, with human oversight throughout, will cut turnaround time from hours to minutes. The use of AI, however, does not remove the review steps needed to ensure legal accuracy.
AI avatars bring efficiency to judicial communications
Rodriguez, who had a role in designing the avatars' appearance and voices, remarked that they do not replace human staff but assist the court's communication efforts. "It still takes manpower," Rodriguez stated, emphasizing that staff regularly consult with judges to verify the accuracy of content before it is made publicly available.
Arizona Supreme Court Chief Justice Ann Timmer stated that the avatars read scripts the justices wrote themselves. Her hope is that this approach not only makes legal information easier to access but also helps restore public trust in the judicial system amid widespread distrust of institutions.
Legal AI expands — but not without limits
While Daniel and Victoria are the most visible example of artificial intelligence in Arizona's court system, Chief Justice Timmer noted that AI already assists lawyers with research, document review, and data analysis. Although its use is proliferating, she assured the public that the court draws a clear line between AI assistance and legal decision-making.
Elsewhere, AI has caused controversy in the legal profession. In New York, for example, one plaintiff tried to bring an AI attorney into court but was turned away. California's state bar also faced backlash after admitting that some bar exam questions were written with AI. Concerns remain about AI hallucinations and fictitious case citations, issues Arizona officials say their deliberately controlled use of the technology is designed to avoid.