News Alert
Federal Judiciary Advisors Hold Hearing on Proposed New Rule of Evidence 707 Governing AI-Generated Evidence
On January 29, 2026, the Committee on Rules of Practice and Procedure—the advisory body to the Judicial Conference of the United States—held a hearing on a proposed new provision addressing machine learning and artificial intelligence (“AI”): Federal Rule of Evidence 707. The proposal comes on the heels of three years of research and investigation by the Committee into the fast-developing field of AI and concerns about the reliability of evidence generated by machine learning.
In proposing Rule 707, the Advisory Committee on Evidence Rules—a subcommittee of the larger Committee—cited examples such as AI analysis of stock trading patterns to establish causation, analysis of digital data to determine whether two works in copyright litigation are substantially similar, and AI assessment of the complexity of software programs to determine whether code had been misappropriated.
The proposal was first put forward on May 2, 2025, by the Advisory Committee on Evidence Rules. The Advisory Committee explained that, while it initially considered amending Rule 702, which governs expert witness testimony and incorporates the Daubert reliability standard, it ultimately concluded that an entirely new rule of evidence dealing solely with machine-generated evidence would be more appropriate. After publication of the proposed rule on June 10, 2025, the Committee on Rules of Practice and Procedure released Rule 707 for public comment from August 15, 2025, until February 16, 2026.
Rule 707 specifically addresses situations in which a party offering evidence concedes that it is AI-generated but presents no expert witness to attest to its reliability. Under the rule, such evidence would be scrutinized against the same factors that Rule 702 applies to expert testimony: helpfulness to the trier of fact, basis in sufficient facts or data, reliability of methods, and reliable application of those methods. Without an attesting witness available for cross-examination, the Advisory Committee worries, unreliable evidence may go undetected, because inaccuracies in AI-generated output are often buried in the underlying code.
The Rule contains an exception that applies to the output of “simple scientific instruments” such as a mercury-based thermometer, electronic scale, or battery-operated digital thermometer to avoid litigation over instruments that can safely be presumed reliable.
At the January 29, 2026 hearing, commenters including Winston & Strawn, King & Spalding, the U.C. Berkeley School of Law, and the American Civil Liberties Union offered comments on the proposal. They voiced a range of concerns: what may qualify as a “simple scientific instrument,” the lack of procedural precedents for applying the rule, the burdens the rule’s requirements would place on parties, and the potential for abuse of the rule.
Critics caution that Rule 707 does not apply to evidence whose authenticity is in dispute and therefore does little to help courts guard against deepfakes or other falsified evidence. Others worry that litigants and their lawyers will face substantial additional pretrial proceedings and expense.
Nevertheless, this new rule demonstrates the federal judiciary’s recognition of and response to the unavoidable reality of AI-generated evidence in litigation.