Ethical AI Aims to Make Transportation Planning Work for Every Community

Transportation agencies such as the Delaware Valley Regional Planning Commission, Pennsylvania Department of Transportation, and SEPTA depend on forecasting models to make decisions about where highways should be built, how often trains should arrive, and how to ease congestion or reduce emissions. Increasingly, these models are powered by artificial intelligence, which can sift through massive datasets and detect patterns more efficiently than traditional approaches. But while AI models often appear more accurate overall, they can still perform unevenly across communities. When errors fall more heavily on certain groups, such as low-income riders or neighborhoods of color, the result can be service decisions that unintentionally place greater burdens on those communities.

A new study led by Zhiwei Chen, PhD, assistant professor of civil, architectural and environmental engineering at Drexel University, introduces a way to build fairness directly into these predictive models. Working with collaborators at the Georgia Institute of Technology, Chen developed a deep learning approach that allows agencies to set measurable targets for how evenly model errors are distributed across groups, and then enforces those targets during prediction. The work appears in Transportation Research Part B: Methodological.

“Think of a model that predicts whether a person will drive or take transit,” Chen said. “If it is systematically less accurate for bus riders in lower-income areas, planners could underestimate demand there and overinvest elsewhere. Our method measures those gaps directly and constrains the model so that performance stays within a user-defined threshold.”
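To make that idea concrete, a minimal sketch of such a gap measurement is shown below in Python. It is not the authors' code; the group labels and the 0.05 threshold are assumptions for illustration only.

```python
# Minimal illustrative sketch (not from the paper): measure how unevenly a
# mode-choice model's accuracy falls across traveler groups.
import numpy as np

def group_accuracy_gap(y_true, y_pred, group):
    """Largest pairwise difference in prediction accuracy across groups.

    y_true, y_pred: observed and predicted travel modes for each traveler
    group: a group label for each traveler (e.g., an income bracket) -- hypothetical field
    """
    accuracies = [
        np.mean(y_true[group == g] == y_pred[group == g])
        for g in np.unique(group)
    ]
    return max(accuracies) - min(accuracies)

# A planner might then require, say, group_accuracy_gap(...) <= 0.05 before
# relying on the forecast; the threshold is the user's policy choice.
```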

The team’s framework integrates a statistical fairness test directly into the model as a constraint that limits disparities across designated groups. The authors analyze the resulting prediction problem, show how to solve it efficiently in practice, and demonstrate that the constraint can be expressed in flexible ways. This means agencies can define fairness according to their own policy priorities or regulatory standards without needing to redesign the entire model.
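One common way to encode such a requirement, offered here only as a rough sketch and not as the paper's formulation, is a soft penalty added to the training loss whenever group-level errors drift apart by more than an allowed threshold. The epsilon and weight values below are placeholders.

```python
# Illustrative soft-penalty variant of a group-disparity constraint (PyTorch).
# The study enforces fairness differently (as a statistical-test constraint on
# the model); this sketch only conveys the general idea.
import torch
import torch.nn.functional as F

def fairness_penalized_loss(logits, labels, group, epsilon=0.05, weight=10.0):
    """Cross-entropy plus a penalty when per-group losses differ by more than epsilon."""
    base = F.cross_entropy(logits, labels)
    per_group = torch.stack([
        F.cross_entropy(logits[group == g], labels[group == g])
        for g in torch.unique(group)
    ])
    disparity = per_group.max() - per_group.min()
    # Only the portion of the disparity above the allowed threshold is penalized.
    return base + weight * torch.clamp(disparity - epsilon, min=0.0)
```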

“Rather than treating fairness as an afterthought, we embed it into the decision process of the model itself,” Chen explained. “By doing so, we can offer guarantees about performance differences across groups, not just improvements on average.”

Using real-world travel behavior data collected from the National Household Travel Survey, the researchers showed that their method substantially reduces group-level disparities in common measures such as accuracy, precision, and recall, while preserving strong overall predictive performance. The team tested the framework with multiple definitions of fairness, in both urban and rural contexts, and for different outcomes such as predicting car use and transit use, demonstrating its robustness and adaptability.
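A group-level audit of that kind, reporting accuracy, precision, and recall separately for each group, might look like the sketch below. The column and group names are assumptions for illustration; the actual NHTS fields and group definitions used in the study may differ.

```python
# Illustrative per-group audit for a binary transit-use prediction (not the authors' code).
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

def audit_by_group(df, group_col="income_group"):
    """Report accuracy, precision, and recall for each group in df."""
    rows = []
    for g, sub in df.groupby(group_col):
        rows.append({
            group_col: g,
            "accuracy": accuracy_score(sub["observed"], sub["predicted"]),
            "precision": precision_score(sub["observed"], sub["predicted"], zero_division=0),
            "recall": recall_score(sub["observed"], sub["predicted"], zero_division=0),
        })
    return pd.DataFrame(rows)
```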

The study underscores why fairness in forecasting matters. Travel choice models influence budget allocations, service frequencies, and environmental assessments. If certain communities’ travel behaviors are predicted less accurately, they can end up with fewer resources, longer wait times, or higher exposure to traffic and pollution. By keeping group-level error rates in check, agencies can make more balanced and accountable decisions.

“This is about accountability in analytics,” Chen said. “Agencies should be able to use advanced learning tools and show, with evidence, that no group is consistently disadvantaged by the model. If widely adopted, this approach can help ensure that the benefits of data-driven mobility reach every community.”

Read the full paper: https://doi.org/10.1016/j.trb.2025.103318