Toxicologists today unveiled a digital chemical safety screening tool that could greatly reduce the need for six common animal tests. Those tests account for nearly 60% of the estimated 3 million to 4 million animals used annually in risk testing worldwide.
The computerized tool, built on a massive database of molecular structures and existing safety data, appears to match, and sometimes improve on, the results of animal tests for properties such as skin sensitization and eye irritation, the researchers report in today’s issue of Toxicological Sciences. But it also has limitations; for instance, the method can’t reliably evaluate a chemical’s risk of causing cancer. And it’s not clear how open regulatory agencies will be to adopting a nonanimal approach.
Still, “We’re really excited about the potential of this model,” says toxicologist Nicole Kleinstreuer, deputy director of a center that evaluates alternatives to animal testing at the National Institute of Environmental Health Sciences (NIEHS) in Durham, North Carolina. Kleinstreuer, who was not involved in the work, adds that using “big data … to build predictive models is an extremely promising avenue for reducing and replacing animal testing.”
Most developed nations require new chemicals that enter commerce to undergo at least some safety testing. But the long-standing practice of exposing rabbits, rats, and other animals to chemicals to evaluate risks is facing growing public objections and cost concerns, helping spur a hunt for alternatives. In the United States, the Environmental Protection Agency (EPA) has been backing research into new ways of evaluating chemicals through programs such as its Toxicity Forecaster (ToxCast) effort. And in 2016, Congress passed an updated chemical safety law, the Toxic Substances Control Act (TSCA), that orders federal regulators to take steps to reduce the number of animals that companies use to test compounds for safety.
One approach is to use what is already known about the safety of existing compounds to predict the risks posed by new chemicals with similar molecular structures. Two years ago, a team led by Thomas Hartung of the Johns Hopkins University Bloomberg School of Public Health in Baltimore, Maryland, took a step toward that goal by assembling test data on 9800 chemicals regulated by the European Chemicals Agency (ECHA) in Helsinki. They then showed that chemicals with similar structures can have similar health effects, such as being an irritant.
In today’s paper, Hartung’s team goes bigger. First, the researchers expanded their database to 10 million chemical structures by adding information from the public database PubChem and the U.S. National Toxicology Program. Next, they compared the structures and toxicological properties of every possible pair of compounds in their database, a total of 50 trillion comparisons, creating a vast similarity map that groups compounds by structure and effect. Finally, they tested the model: They asked it to predict a randomly chosen chemical’s toxicological profile by linking it to similar “neighbors” on the map and compared the results to six actual animal tests of the compound.
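The neighbor-based prediction step can be sketched in miniature. The toy code below is not the team’s actual model; it assumes a simple set-based structural fingerprint, Jaccard (Tanimoto) similarity, and a majority vote among the most similar known compounds, and all of the fingerprints and labels are invented for illustration.

```python
def tanimoto(a, b):
    """Jaccard/Tanimoto similarity between two sets of structural features."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def predict_toxicity(query, database, k=3):
    """Label a query chemical by majority vote of its k most similar neighbors."""
    neighbors = sorted(database,
                       key=lambda rec: tanimoto(query, rec[0]),
                       reverse=True)[:k]
    votes = sum(1 for _, is_toxic in neighbors if is_toxic)
    return votes > k // 2

# Hypothetical fingerprints: each set holds feature IDs present in a molecule.
database = [
    ({1, 2, 3, 4}, True),    # known irritant
    ({1, 2, 3, 5}, True),    # known irritant
    ({7, 8, 9}, False),      # known non-irritant
    ({7, 8, 10}, False),     # known non-irritant
]

query = {1, 2, 3, 6}         # structurally close to the first two entries
print(predict_toxicity(query, database))  # → True (predicted irritant)
```

A real system would derive fingerprints from molecular structure (e.g., with a cheminformatics toolkit) and weight neighbors by similarity, but the read-across logic is the same: chemicals that look alike are assumed to act alike.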
On average, the computational tool reproduced the animal test results 87% of the time. That’s better than animal tests themselves can do, Hartung says: In reviewing the literature, his group found that repeated animal tests replicated past results just 81% of the time, on average. “This is an important finding,” Hartung says, because regulators often expect alternative methods to animal testing to be reproducible at the 95% threshold-a standard even the animal tests aren’t meeting.
“Our data shows that we can replace six common tests, which account for 57% of the world’s animal toxicology testing, with computer-based prediction and get more reliable results,” Hartung says. And it could help eliminate duplication of effort, he adds. The team found, for instance, that 69 chemicals were each tested at least 45 separate times using the so-called Draize rabbit test, a method that involves placing a chemical in the rabbit’s eye and has drawn extensive public opposition.
The screening method has weaknesses. Although it can predict simple effects such as irritation, more complex endpoints such as cancer are out of its reach, says Mike Rasenberg, who heads ECHA’s Computational Assessment & Dissemination unit. “This won’t be the end of animal testing,” he predicts, “but it’s a useful concept for looking at simple toxicity.”
The question now is how regulators will view the method. Rasenberg thinks European regulators will accept it for simple endpoints because it meets validation criteria for so-called quantitative structure-activity relationship models.
In the United States, the NIEHS center is working on validating similar methods. And once that validation is complete, EPA “will be able to review the evaluation results to determine how and if they can be used to inform chemicals evaluated under TSCA,” officials said in a statement. “If evaluation is favorable, these types of models could be used in conjunction with other tools such as ToxCast to inform screening-level hazard determinations or rank/prioritize large numbers of substances.”
Hartung says he hopes the screening method will also be of interest to countries, such as Turkey and South Korea, that are gearing up to implement new chemical laws.
In the meantime, the researchers have teamed up with Underwriters Laboratories, headquartered in Northbrook, Illinois, to make the tool available to companies that might want to screen products before submitting them for regulatory review.
Clarification, 7/24/2018, 12:29 p.m.: NIEHS is not validating the specific method developed by Thomas Hartung’s team. It is validating similar methods. The story has been updated to make this clearer.