Crowdsourced urban safety applications have successfully democratized real-time incident awareness, yet they predominantly operate as reactive alerting systems. As these platforms transition toward proactive, AI-driven interventions, such as predictive safety routing, they introduce complex Human-Computer Interaction (HCI) challenges regarding algorithmic transparency, cognitive load, and user trust. In this paper, we explore the Algorithmic Experience (AX) of navigating physical risk through a participatory design study. We conducted a 60-minute co-design workshop with six domain experts in Information Technology and HCI, who served as expert proxies, to conceptualize a novel "SafeWalk" feature within the Citizen mobile application. Through generative sketching of route-planning and active-navigation interfaces, participants negotiated the visualization of probabilistic safety data and the mechanics of dynamic AI rerouting. Our findings highlight the critical tension between providing Explainable AI (XAI) rationales to establish trust and minimizing cognitive overload during high-stress, nighttime navigation. We contribute design implications for mitigating algorithmic anxiety, avoiding perceived algorithmic redlining, and structuring user agency when wayfinding systems prioritize physical safety over temporal efficiency.
Authors: Jess Kropczynski; Odunayo Adepoju; Samuel Akinnusi; Lily Dzamesi; Robby Hoover; Kevin Jin; George Owusu Dameh