UCL School of Management

Shivaang Sharma | 16 January 2024

How AI tools foster trust in data-sensitive contexts, with UCL School of Management's Shivaang Sharma

In his latest blog, UCL School of Management's Shivaang Sharma, a PhD student in the Strategy and Entrepreneurship Research Group, expands on the conversation and delves into AI tools that foster trust in technologies.

The unprecedented scale of data generated by ongoing crises – such as those in Gaza, Sudan, and Ukraine – is prompting leading international agencies, such as the World Food Programme, to integrate AI tools into humanitarian work. Yet despite calls for such necessary, strategic shifts across sectors, agents and analysts can remain reluctant to trust these ‘novel’ tools. Prior research attributes this apprehension to concerns over data security, ethics, and sector-specific norms (e.g. fears of violating the ‘do no harm’ principle in humanitarian work). In my research, however, I uncover AI tools that are successfully overcoming the trust barrier and even cultivating a growing community of committed users. This short article, the first in the #trustworthytechnologies series, explores three core characteristics shared by AI tools that help navigate societal grand challenges.

Proactively clarifying inscrutabilities is essential

Widely used AI tools (e.g. ChatGPT) often remain mysterious about their back end and instead outsource the task of interpreting algorithmic ambiguities to their users. In contrast, organisations that foster trust in their tools make their back end visible to users. They leverage various ‘languages’ that consistently clarify their tool’s datasets, logic, and limitations to users of varying technical proficiencies. For example, consider DEEP – the leading collaborative sensemaking, NLP-based data analysis platform for humanitarian and development actors. Data Friendly Space (DFS), the implementing partner of DEEP, together with its partner Togglecorp, continues to create DIY educational modules for new, non-technical users on how to use their AI tools (e.g. on e-learning hubs like KayaConnect or Zendesk), and routinely updates its publicly accessible GitHub repositories for technologically proficient users.

User-community-driven interoperabilities

Organisations that manage to cultivate trust in their tools amongst users handling sensitive data also adopt a democratised approach to sourcing feedback from the user community on making their tools useful and user-friendly. For instance, DFS conducts monthly, on-demand demos of DEEP and runs targeted surveys and user interviews to fine-tune existing features or develop new ones that enhance the tool’s integration into team workflows and with sector-specific databases.

User-centric accessibilities

Typically, organisations granting public access to their AI tools tend to remain gatekeepers to important inputs of those tools (e.g. proprietary datasets, model architecture) and their outputs (e.g. defining rules for permissible prompts and who may access results). In contrast, organisations that develop democratic AI tools can grant the general public access to vast pools of training data and empower users to determine the degree of control they wish to retain or give to other users. For example, the Humanitarian OpenStreetMap Team (HOT)’s AI tool, FAIR – a computer vision model that helps improve the accuracy of mapping humanitarian efforts – allows project teams to determine levels of access to outputs for different user groups, and to continuously discuss related issues on dedicated Slack channels and GitHub repositories.

Unlocking the potential of AI in data-sensitive contexts, such as humanitarian crises, requires clarifying model inscrutabilities, pursuing community-driven interoperabilities, and empowering users to determine for themselves the extent of input-output accessibilities. Explore how organisations are demystifying technology, building trust, and navigating societal challenges in the #trustworthytechnologies series.

Humanitarian AI Today podcast on responsible AI

Shivaang will continue these conversations with leaders and innovators on the Humanitarian AI Today podcast series – the leading AI for Good podcast series focusing on humanitarian applications of artificial intelligence. The series is produced by Brent Phillips, founder of the global Humanitarian AI meetup.com community.

Shivaang recently hosted an episode with Suzy Madigan, founder of ‘The Machine Race’ blog series and senior humanitarian advisor at CARE International. Suzy and Shivaang discuss some of the human rights and safety implications of AI for society globally, particularly for communities in the Global South experiencing humanitarian crises, conflict, poverty, or marginalisation. They also look at how to ensure that the design, deployment, and governance of AI are inclusive and equitable, so that everybody can share in its potential benefits and be protected from potential harms.

Shivaang’s research examines collaborative intelligence systems that address discrimination experienced by vulnerable social groups. He studies the promises and perils of human-AI teams in humanitarian crisis contexts and how organisations make sense of, navigate, and challenge discriminatory institutional regimes. Shivaang is actively building a global knowledge-sharing community around demystifying humanitarian technologies and the workflows of tech-assisted collaborative organisations in crisis contexts, and conducts workshops and podcasts on these topics.

Listen to the full podcast episode today!

Last updated Wednesday, 14 February 2024